A Word on Waste
Whether I was learning lean manufacturing during my MBA or practicing lean software development, the lesson was the same: to succeed, focus on reducing waste. There are eight main kinds of waste to reduce in a process (manufacturing or software development), and there is a catchy acronym to remember them all, TIMWOODS, which stands for:
- Transport
- Inventory
- Motion
- Waiting
- Overproduction
- Overprocessing
- Defects
- Skills (underutilizing skills)
I’ve written about the ninth kind of waste I see in product management: ego. Recently, as I received long, wordy emails assisted by AI, I realized a danger of the AI revolution: overproduction.
You can find many articles about how AI makes people 200% faster at delivering. The question is: delivering what? Is it value-added? The biggest problem I see is not people’s ability to deliver code, perhaps because I work with all-star developers, but whether that code solves the problem we actually have and adds value.
You can have AI HELP you write code for a gibberish generator, and it will do so because you asked. That doesn’t mean the result has any value. I would argue, after reading too many “assisted” emails, that perhaps AI is a gibberish generator.
Why AI is a bigger problem than you think.
AI is probably not coming for your job, but everyone, even those who don’t know what they’re doing, will use it to produce things faster than ever. That means more emails, spreadsheets, presentations, websites, scripts, etc. Is that what we need? Spreadsheets whose own creators can’t tell you what trade-offs and assumptions went into them? AI-generated code that decides when to spin up resources in the cloud and, thus, when to spend your money? Can you trust AI to tell you why it did something? Hint: you cannot.
People look at AI as a way to speed up their jobs. That is true, but one of lean production’s core tenets is minimizing waste, and when producing code is easy, the chance of creating unneeded code is high.
Don’t get me wrong; AI has been great at helping me create software. I have used Copilot, and I love it. However, you have to know what good looks like. Consider an AI image generator. Say I want to use it to create an image of a car speeding down the road. In my prompt, I say: make me a vehicle that can comfortably seat four people and goes really fast. Google Gemini created this:
This satisfies the query. This isn’t exactly what I meant, but when I see that picture, I can adjust the query and refine the outcome because I know what good looks like to me.
Now consider AI helping you create a report on which you plan to make a go-to-market decision. With every prompt, you get the report to look more like you want it to. There are a couple of problems with this:
- As I discussed in my article on finding value, we are subject to 180+ cognitive biases, which is important to remember when looking at data. Unfortunately, our queries reflect those biases.
- AI is terrible at math. Hopefully, it isn’t helping you with that, or you will get the wrong values. For example, LLMs have been known to claim that 0.11 > 0.9 because 11 is bigger than 9.
- There are also plenty of ways to misuse statistical models so that they no longer reflect reality. For example, p-hacking means picking different variables after an experiment, or otherwise manipulating the data, until a statistically significant result appears (a short sketch below shows how easily that happens).
As a result of these queries, you will get a chart or report that looks good but could be visualizing the wrong data. What is worse, AI also reflects all the bad aspects of human knowledge: it may incorporate bias, bad math, or p-hacking without you knowing it.
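To make the p-hacking point concrete, here is a minimal sketch (my own illustration, not from any particular product analysis, assuming numpy and scipy are installed). It compares twenty unrelated metrics between two groups of pure noise; by chance alone, one of them will often come back “significant” at p < 0.05.

```python
# A minimal sketch of p-hacking: test enough unrelated metrics against pure
# noise and one will eventually look "statistically significant" by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two groups that are, by construction, identical: there is no real effect.
control = rng.normal(loc=0.0, scale=1.0, size=(20, 50))    # 20 metrics x 50 users
treatment = rng.normal(loc=0.0, scale=1.0, size=(20, 50))

for metric in range(20):
    _, p_value = stats.ttest_ind(control[metric], treatment[metric])
    if p_value < 0.05:
        # Roughly 1 in 20 metrics will land here even though nothing changed.
        print(f"Metric {metric}: 'significant' difference, p = {p_value:.3f}")
```

A report built around whichever metric happened to cross the threshold will look rigorous, but it is just noise dressed up as insight.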
How to avoid AI overproduction and bad results.
I’ve learned these tips for using LLMs, and I’m sure I will learn more over time.
- Don’t ask AI to generate things for you when you don’t know what good looks like.
- Ask questions like, “What are the risks of doing something like X?” or “What would a reader find unclear about this?” Have it spur you on to think about your problem in a new way.
- With AI, it is helpful to add a persona because that affects the response. Queries start with “As an XYZ, what would you be concerned about…” (see the sketch after this list). Examples I’ve used:
- “As a smart high-level manager of a technical company with 25+ years of experience, what would concern you about the following:”
- “As a DevOps engineer with 10+ years of experience using Kubernetes and Docker to host globally distributed clusters, what downtime risks do you see in the following configuration:”
- Warning: This is not a substitute for user research. You can’t build a product your users will like by interviewing a fake AI persona.
- Ask more than once. AI generates answers, and they can differ from one attempt to the next. For important questions, ask them again in a different way, and try a different persona in your prompt: what would a manager vs. an HR rep see as an issue in the following presentation?
- Take your results from one conversation and re-examine them with AI in a new session. For example, if it is code, you can highlight the code AI generated and ask, “What risks does this code have when it comes to security?” It may tell you it does things you didn’t ask it to do.
- Open-ended questions will lead to a convincing response but may miss something important. You will get a richer response by asking for a list of risks and then asking specific questions about each risk.
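As a rough illustration of the persona and ask-more-than-once tips above, here is a hypothetical sketch using the OpenAI Python client. The model name, personas, and document text are placeholders, and the same idea works with any chat-style LLM API.

```python
# Hypothetical sketch: ask the same question under two different personas and
# compare the answers, rather than trusting a single generated response.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

personas = [
    "a smart high-level manager of a technical company with 25+ years of experience",
    "an HR representative reviewing internal communications",
]

document = "Draft go-to-market presentation text goes here..."  # placeholder

for persona in personas:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": f"You are {persona}."},
            {"role": "user", "content": "List the risks you see in the following, "
                                        f"then explain each one:\n\n{document}"},
        ],
    )
    print(f"--- Concerns as {persona} ---")
    print(response.choices[0].message.content)
```

Where the two answers disagree is often exactly where it is worth digging in yourself.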
AI can be a great way to check your work and to help you frame problems in ways you would not have considered alone. Or it can generate a bunch of gibberish that you then have to present and defend without knowing whether it is any good.
Prompter beware