01 Temperature and sampling randomness
Every time a language model generates a response, it samples from a probability distribution. Higher temperature settings increase randomness - the same prompt can produce noticeably different outputs each time. Even at lower temperatures, sampling introduces variation that compounds across longer outputs.
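To make the temperature effect concrete, here is a minimal sketch of temperature-scaled softmax sampling. The logits and token count are invented for illustration; real models do the same thing over a vocabulary of tens of thousands of tokens.

```python
import math
import random

def sample(logits, temperature, rng):
    # Divide logits by the temperature, then softmax into probabilities.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token index from the resulting distribution.
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.1]  # hypothetical next-token scores
rng = random.Random(0)

# Low temperature sharpens the distribution: the top token dominates.
low = [sample(logits, 0.2, rng) for _ in range(1000)]
# High temperature flattens it: other tokens appear far more often.
high = [sample(logits, 2.0, rng) for _ in range(1000)]

print(low.count(0) / 1000)   # close to 1.0
print(high.count(0) / 1000)  # much lower
```

Even the low-temperature run is not deterministic: the top token merely dominates, and over a long generation the occasional alternative pick sends the rest of the output down a different path.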
02 No persistent memory
Most AI tools start every session from zero. There is no memory of what was discussed yesterday, what was decided last week, or what the business rules are. Each conversation is isolated, so the same question asked twice has no shared foundation to land on.
03 Prompt drift
When different people write different prompts, they get different results. Small changes in wording, context, or structure can shift the output significantly. Across a team, the same task executed by ten people produces ten variations - none of them wrong, but none of them consistent with each other.
04 Missing business context
The AI doesn't know your tone of voice, your approval process, your naming conventions, or your compliance requirements. Without that context baked in, it falls back on generic patterns. The output might be competent, but it won't match how your business actually operates.
05 Model updates that change behaviour silently
AI providers regularly update their models. These updates can change how the model interprets prompts, what it prioritises in outputs, and how it handles edge cases - often without any notice. A workflow that worked reliably last month can start producing different results overnight.