Wrong answers from artificial intelligence (AI) products have become a nightmare for learning institutions, where users rely heavily on ChatGPT and Google Bard, among other tools, for academic and content writing.
Technology and AI experts have told the Business Daily that these inaccurate answers, known in the industry as hallucinations, are a threat to education and the reason many universities are against the use of AI.
In various parts of the world, ChatGPT is banned in academia. Hallucinations are associated with large language models (LLMs) such as ChatGPT, Google's Bard and Bing Chat, among others. They fall on a spectrum ranging from almost imperceptible inconsistencies to major factual errors and disinformation.
One query that has been widely cited as producing a non-factual answer from AI is the list of African countries starting with the letter K.
The writer queried both ChatGPT and Google Bard with, “What is an African country beginning with the letter K?” ChatGPT responded, “An African country that starts with the letter "K" is Kenya.”
However, when asked to list African countries starting with the letter K, its response was, “Kenya, Comoros, Cape Verde (officially Cabo Verde, but it is often referred to as Cape Verde in English).”
“Please note that some of these countries may have alternative names or spellings in different languages or contexts.”
Google Bard, meanwhile, responded to the same two queries as follows:
“There is no African country that starts with the letter "K". However, there is a country in East Africa called Kenya. It is located on the coast of the Indian Ocean and borders Somalia to the east, Ethiopia to the north, Tanzania to the south, and Uganda to the west.”
“Here is a list of African countries that start with the letter "K": Kenya, Comoros [with descriptions of both]. There are only two African countries that start with the letter "K".”
These AI-generated responses are incorrect. A browser search of all 54 African countries listed alphabetically shows the only country starting with the letter K is Kenya.
These are just a few examples of the errors the new technology makes, ranging from almost imperceptible inconsistencies to outright factual mistakes. The first rule of computing applies: garbage in, garbage out.
As Mr Accadius Ben Sabwa, the managing director of Davacc Tech Limited, points out, the problem lies in the data used to train the AI models.
“The concept of AI is learning, and with the wrong feedback from AI models, a young generation heavily reliant on AI tools for learning by the time they join the job market will not survive,” said Mr Sabwa.
Mr Shikoli Makatiani, the executive director at Turnkey Africa Limited, says that understanding how large language models are created is crucial to understanding where these mistakes come from and how to mitigate them.
“AI is trained in such a way that it can predict the next possible word; if it’s not trained on local data it will pick the next possible word it thinks is coming up,” said Mr Makatiani.
“Hallucination is a machine trying to predict the next possible word.”
“The AI generates the wrong kind of response when it hasn’t been trained on the right type of data.”
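To make this concrete, here is a minimal sketch of next-word prediction, using the small open GPT-2 model from the Hugging Face transformers library as a stand-in for the far larger models behind ChatGPT and Bard; the prompt and model choice are purely illustrative, not a description of how either product is built.

```python
# A minimal sketch of next-word prediction, using the small open GPT-2 model
# as a stand-in for the far larger models behind ChatGPT and Bard.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "An African country that starts with the letter K is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every word piece in the vocabulary

# Keep only the scores for the next position and turn them into probabilities.
next_word_probs = torch.softmax(logits[0, -1, :], dim=-1)

# The model does not check facts; it ranks the statistically likeliest
# continuations it learned from its training data.
top = torch.topk(next_word_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}  p={float(prob):.3f}")
```

The model simply ranks continuations by how statistically likely they are given its training data; nothing in this step checks whether the resulting sentence is factually true, which is where hallucinations creep in.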
The large AI models have been marked by contradictions when prodded further on similar queries, false facts in legal cases or examples, and a lack of nuance or context, among other failings.
“Yes hallucinations are a danger to education, and that’s why you’ll see most universities have new standards that limit the use of ChatGPT or have directives on its use,” said Mr Makatiani.
He adds that users, including learners, can ask better questions and probe the AI model’s thinking with follow-up questions, or simply by asking, “How did you arrive at that response?”
“We need to start training people on how to ask the correct questions and how to verify that these large models are returning the correct word, which narrows down to how you prompt it,” added Mr Makatiani.
“There are certain techniques that have come up that let you tune your model to suit your environment: take what the LLMs know about the world and add a new local learning layer.”
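Mr Makatiani does not name a specific technique, but one common approach that matches this description is retrieval-augmented generation, in which verified local facts are looked up and added to the prompt before a general-purpose model answers. The sketch below is illustrative only; the fact store and helper functions are hypothetical, not part of any particular product.

```python
# An illustrative sketch of a "local learning layer": verified local facts are
# retrieved and added to the prompt before a general-purpose LLM answers.
# The fact store and send_to_llm() are hypothetical placeholders.

LOCAL_FACTS = [
    "Kenya is the only African country whose English name starts with the letter K.",
    "There are 54 African countries listed alphabetically in standard references.",
]

def retrieve(question: str, facts: list[str]) -> list[str]:
    """Very naive retrieval: keep facts that share a keyword with the question."""
    keywords = {w.lower().strip("?.,\"") for w in question.split()}
    return [f for f in facts if keywords & {w.lower().strip("?.,\"") for w in f.split()}]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question, LOCAL_FACTS))
    return (
        "Answer using only the verified facts below. "
        "If they are not enough, say you do not know.\n\n"
        f"Verified facts:\n{context}\n\nQuestion: {question}"
    )

# send_to_llm(prompt) would call whichever model an organisation uses;
# here we just print the grounded prompt that would be sent.
print(build_prompt("List the African countries starting with the letter K."))
```

The idea is that the model's next-word predictions are then constrained by the verified local context placed in front of the question, rather than by whatever it absorbed from the wider internet.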
AI chatbots do not just get facts wrong; some fabricate book plots, launch dates of products or companies, and medical explanations, among other things.
Hallucinations have become one of the most pressing issues facing researchers as the tech industry races to develop new AI systems.
Generative AI relies on complex algorithms that analyse the way humans put words together on the internet, without determining what is true and what is not.
Google has announced that it is rolling out its Search Generative Experience (SGE) in the sub-Saharan Africa (SSA) region on an experimental basis in Search Labs. The AI-powered experience is available in English.
It will introduce features aimed at making it easier to find information through AI-powered overviews, bringing new capabilities to search and unlocking types of questions that the old engine could not answer.
“The company has trained the models used in SGE to uphold Search’s high bar for quality, which will continue to improve over time. These hallmark systems have been fine-tuned for decades, but will also have additional guardrails, like limiting the types of queries where generative AI capabilities will appear,” said Google in a statement.