
This is the second part of Dispelling Myths: Lessons from Leading a Year of Generative AI Initiative (Part 1), where I share my observations and experiences about how Generative AI operates, along with myths and facts from leading a Generative AI initiative. If you haven’t read the first part, I recommend starting there.
Here are additional key lessons I learned about generative models, hopefully clarifying much of the hype.
5. AI Is Always Accurate
Despite the sensational headlines about AI surpassing human accuracy in certain tasks, it is misguided to assume that AI never makes mistakes. Accuracy largely depends on the quality and quantity of training data, the robustness of the algorithm, the prompt, how questions are phrased, and how effectively the system is tested. In my project, even after providing numerous data points to the AI bot, we discovered gaps where the AI struggled due to real-world variability that caused the model to deviate from a straightforward answer.
The upside is that AI can become very accurate in well-defined scenarios, for example, guiding a beginner through the core capabilities of a product or helping a developer learn a new library. But when it comes to advanced use cases and scenarios that fall outside the scope of the documentation and require human creativity, such AI systems can still fail. As with humans, AI’s accuracy is relative, and continuous improvement is necessary to keep error rates low.
Pros:
- Can achieve high accuracy with enough quality data and proper training.
- Learns quickly and can improve over time.
Cons:
- Accuracy Depends on Data, Prompts, and Testing – Even a well-trained AI can produce errors if the input phrasing is unclear, the algorithm isn’t robust, or if it encounters scenarios outside its training scope.
- A single flaw in data or design can lead to big mistakes.
6. AI Is Not Accurate
On the opposite end of the spectrum, some critics argue that AI’s inaccuracies render it unfit for real-world applications. Certainly, AI models can be flawed—often riddled with biases or inadequately trained. We experienced inaccuracies due to limited diversity in our training data, which resulted in the AI misclassifying specific inputs.
The key takeaway is that AI’s accuracy isn’t set in stone, especially in Generative AI. The way you phrase your questions and engage in conversation significantly impacts the quality of responses. Since GenAI can maintain context and understand intent over multiple exchanges, continuing the dialogue helps refine its answers. However, AI can sometimes drift off track, misunderstanding the direction or generating unintended responses.
One key advantage of interacting with AI versus humans is the ability to reset and start over at any time. Unlike human conversations, where misunderstandings can persist, with AI, you can pause, step back, and reframe your intent, often leading to better results. This flexibility makes AI an effective tool for iterative problem-solving, allowing users to refine their queries until they achieve the desired outcome.
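To make this reset-and-reframe pattern concrete, here is a minimal sketch of a conversation loop that keeps, refines, and discards context. The generate_reply function is a hypothetical stand-in for whichever chat model API you use; the point is only that the dialogue history is explicit state you can clear at any time and start over with a reframed intent.

```python
from typing import Dict, List, Optional

Message = Dict[str, str]  # e.g. {"role": "user", "content": "..."}

def generate_reply(history: List[Message]) -> str:
    """Hypothetical stand-in for a chat model call (cloud API or local LLM).
    A real implementation would send `history` to the model and return its answer."""
    return f"(model reply based on {len(history)} prior messages)"

class Conversation:
    """Keeps the running context so follow-up questions can refine earlier answers."""

    def __init__(self, system_prompt: str):
        self.system_prompt = system_prompt
        self.history: List[Message] = [{"role": "system", "content": system_prompt}]

    def ask(self, question: str) -> str:
        # Each exchange is appended, so the model sees the full dialogue and can
        # build on (or be misled by) everything said so far.
        self.history.append({"role": "user", "content": question})
        reply = generate_reply(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

    def reset(self, new_system_prompt: Optional[str] = None) -> None:
        # Unlike a human conversation, a misunderstanding does not have to persist:
        # drop the history, reframe the intent, and start clean.
        self.system_prompt = new_system_prompt or self.system_prompt
        self.history = [{"role": "system", "content": self.system_prompt}]

chat = Conversation("You are a product documentation assistant.")
chat.ask("How do I configure feature X?")
chat.ask("That didn't work, can you be more specific?")   # iterate with context
chat.reset("You are a product documentation assistant. Answer step by step.")  # reframe and restart
```

The names and prompts above are illustrative only; the design choice worth noting is that the conversation state lives on your side, which is what makes the pause, step back, and reframe workflow possible.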
Pros:
- AI Can Improve Over Time – Unlike static software, AI models can be retrained, refined, and updated to improve accuracy, especially when errors are identified and corrected.
- Conversational AI Allows for Iteration – Unlike human misunderstandings, which can persist, AI offers the ability to reset, refine, and restart a conversation, leading to better clarity and improved responses over time.
Cons:
- AI Can Drift Off-Topic or Misinterpret Intent – While GenAI maintains context, it sometimes loses direction, generating responses that deviate from the intended topic or introduce inaccuracies.
- Accuracy Depends on Data Quality & Training – AI’s effectiveness is only as good as the data it learns from. If the training data is biased, limited, or outdated, the AI will produce flawed or misleading results.
7. AI Is Our Potential Enemy Like in Sci-Fi Movies
Hollywood loves to depict AI as a near-sentient entity with evil goals. While these storylines make for thrilling cinema, they bear little resemblance to how most AI systems operate today. Even if it were technically possible to build something like the Terminator from advanced robotics, high-speed networks, and artificial intelligence, the question is why, and how it would be justifiable from a financial and security perspective, to build nondeterministic weapons that act on probabilities and are given that degree of freedom. Of course, as I mentioned before, AI can help significantly in many business cases, but it is unrealistic to rely on Hollywood movies to predict the real future and to be scared off by them.
Many movies like The Matrix, Her, Westworld, I, Robot, Terminator, and others depict various images and possibilities for the future based on writers’ imaginations, filmmakers’ revenue models, and audience interest rather than reality. Don’t get me wrong; we need this imagination to expand our thoughts and innovate, but having the right expectations helps us face reality rather than hype.
Pros:
- Can be used for positive changes, like better healthcare diagnoses or safer self-driving cars.
- No hidden “evil intent”—it does what it’s programmed to do.
Cons:
- Harmful outcomes can happen if AI is used irresponsibly (e.g., biased tools).
- Lack of proper regulations can lead to breaches of privacy and misuse of data.
8. Gen AI Is Probabilistic, Not Deterministic
This is a very important point and may be a bit technical, but it helps us understand the nature of Generative AI. Unlike the traditional software we are all used to, which produces the same output each time under the same conditions, AI often relies on probabilities to make predictions, generate content, or make decisions. This means that although we can limit the context and provide instructions, Generative AI’s responses are not fixed, and many parameters can change the outcome. Is this bad? Not at all. We should take a step back and consider the reason and intent behind its design. Generative AI is meant for creativity and for modelling human language, behaviour, and reasoning, so we should not expect to receive the same answer each time; humans do not behave that way. Instead, we should expect it to help us narrow in on the outcome we want.
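As a rough illustration of what “probabilistic” means here, the sketch below samples the next token from a probability distribution instead of always picking the single best option. The tokens and scores are made up for the example; real models work over huge vocabularies, but the principle is the same: the same input can legitimately yield different outputs, and a temperature near zero (greedy selection) is how APIs approximate deterministic behaviour.

```python
import math
import random

def softmax(scores, temperature=1.0):
    """Turn raw model scores into a probability distribution.
    Lower temperature sharpens the distribution; higher temperature flattens it."""
    scaled = [s / max(temperature, 1e-6) for s in scores]
    exp = [math.exp(s - max(scaled)) for s in scaled]
    total = sum(exp)
    return [e / total for e in exp]

def pick_next_token(tokens, scores, temperature=1.0):
    """Sample the next token (probabilistic), or take the single highest-scoring
    token when temperature is ~0 (near-deterministic)."""
    if temperature <= 1e-6:
        return tokens[scores.index(max(scores))]
    probs = softmax(scores, temperature)
    return random.choices(tokens, weights=probs, k=1)[0]

# Hypothetical scores for the next word after "The report is" -- illustration only.
tokens = ["ready", "late", "confidential", "missing"]
scores = [2.4, 1.9, 0.7, 0.2]

print([pick_next_token(tokens, scores, temperature=0.8) for _ in range(5)])  # varies run to run
print([pick_next_token(tokens, scores, temperature=0.0) for _ in range(5)])  # always "ready"
```

Most commercial model APIs expose this behaviour through parameters such as temperature or top-p, which is why the same prompt can produce different answers on different runs.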
Pros:
- Flexible in addressing complex problems where exact rules are difficult to define.
- Continuously adapts as it “learns” from more data, which can enhance predictions.
- Processes vast amounts of human-generated data and can model reasoning, innovation, empathy, and some degree of emotions.
Cons:
- Outputs can vary from run to run or update to update, which may be confusing for users.
- Cannot guarantee a 100% certain result—there’s always a chance of error, hallucination, confusion, and misunderstanding.
In the next article, I will share a few more points and some advice based on my experience, and conclude this series.