Accepting AI errors – the next logical step?
Interestingly, some AI systems now outperform humans at certain tasks. "A study by OpenAI from 2023 showed that ChatGPT achieved results in the US bar exam that were in the top 10 per cent of all participants," reports Nicolas Konnerth. He takes a differentiated view of this development: "Of course, AI still makes mistakes. But so do humans. The question is whether we as a society will eventually be willing to accept mistakes made by AI systems in the same way as human mistakes." However, from Nicolas Konnerth's point of view, there is one thing we must always bear in mind: "AI models are stochastic parrots. They can reproduce content, but they don't understand it." In the education sector in particular, it is therefore important to train young people early on to engage critically with learning systems and to understand their weaknesses.
Practical tips for companies in their dealings with AI
The Head of Conversational AI at ERGO Group advises companies that want to use AI efficiently: "Precise questions are the be-all and end-all. The more precisely I formulate what I want, the less likely it is that the AI will give a wrong answer." It is also helpful to work with clear examples from your own practice, showing the AI what the desired result should look like.
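This advice, combining a precise instruction with worked examples, is commonly known as few-shot prompting. A minimal sketch in Python (the categories, example messages, and helper function below are invented for illustration, not taken from the interview):

```python
# Few-shot prompting sketch: "show the AI what the desired result
# should look like" by prepending worked examples to the prompt.
# All example data here is hypothetical.

EXAMPLES = [
    ("Customer asks about cancelling a policy.", "Category: contract change"),
    ("Customer reports a broken windscreen.", "Category: claim"),
]

def few_shot_prompt(instruction: str, new_input: str) -> str:
    """Build a prompt whose worked examples demonstrate the exact output format."""
    shots = "\n\n".join(f"Input: {i}\nOutput: {o}" for i, o in EXAMPLES)
    return f"{instruction}\n\n{shots}\n\nInput: {new_input}\nOutput:"

prompt = few_shot_prompt(
    "Classify the customer message into exactly one category.",
    "Customer wants to update their bank details.",
)
print(prompt)
```

The model then only has to continue the established pattern, which tends to reduce both format drift and invented answers.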
Another way to avoid hallucinations when using AI is retrieval-augmented generation (RAG). "In this case, the AI is supplemented with a knowledge base, and it may only formulate answers from that material. This prevents the AI from simply making things up," explains Nicolas Konnerth, though he points out that this method does not eliminate all risks either. "At the end of the day, the fact remains that AI is always based on probabilities and can therefore sometimes be wrong," he concludes.
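The RAG idea described above can be sketched in a few lines of Python. This is a toy illustration only: it uses simple keyword overlap as the retriever and an invented insurance knowledge base, whereas a production system would use embedding search and pass the prompt to a language model.

```python
# Minimal RAG sketch: retrieve passages from a knowledge base, then
# build a prompt that restricts the model to those passages.
# The documents and retrieval method are hypothetical illustrations.

KNOWLEDGE_BASE = [
    "Household insurance covers water damage caused by burst pipes.",
    "Claims can be filed online within 14 days of the incident.",
    "The deductible for glass breakage is 150 euros.",
]

def retrieve(question: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by how many words they share with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str) -> str:
    """Constrain the model to the retrieved context so it cannot invent facts."""
    context = "\n".join(retrieve(question, KNOWLEDGE_BASE))
    return (
        "Answer ONLY from the context below. If the answer is not there, "
        "reply 'I don't know.'\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("What is the deductible for glass breakage?"))
```

The instruction to answer "only from the context" is what curbs fabrication; as the interview notes, the model can still misread or misweight the retrieved material, so the risk is reduced rather than eliminated.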
AI is a powerful tool, but it is only as accurate as those who design and use it.
We thank Nicolas Konnerth for the conversation and his valuable insights.