Technology-aided learning has disrupted conventional education and challenged how intelligence is gauged.
The recent launch of ChatGPT, a chatbot that can respond to prompts on just about anything, from writing essays and computer code to designing artwork and marketing campaigns, has generated excitement about advances in Artificial Intelligence (AI) and its implications for work and learning.
Genneviève Awino caught up with Ian Muthomi, a self-taught local innovator and the founder and CEO of Visiondrill, an online social learning platform that uses AI to personalise the learning experience, to hear his thoughts on the impact of AI on learning.
He launched Visiondrill in August 2021 to create a space that encourages and equips young people with creative skills and turns them into job creators.
How would you explain Visiondrill’s approach?
It is a social learning platform that connects people who are looking to improve their professional skills and build networks while equipping them with employment and entrepreneurial skills for digitally enabled jobs.
What's the inspiration behind the recent upgrade of your platform to a more AI-driven operation?
Through AI, Visiondrill can now give career recommendations and simplify explanations of complex topics. We developed the AI in response to customer needs, as it allows personalised learning for each individual.
With the launch of ChatGPT a month ago, how is your technology similar to or different from this?
The similarity is that we use general-purpose AI to answer questions and the difference is that our AI has access to current events. For us, this is not a new technology but we are following the new developments.
We have been working with it on data analytics, course recommendation, smart shapes on our whiteboard, and active noise cancellation amongst other things.
We also emphasise learning through social interaction and collaboration because it provides an opportunity to learn from like-minded peers, meet potential employers, and access new job opportunities.
How is your model impacting the future of work and learning?
It is capable of replacing tasks, such as data entry, that do not require physical manipulation or interaction with the real world, freeing up people’s time to work on more fulfilling tasks that involve a lot of creativity and collaboration.
It is also a great opportunity for upskilling and reskilling the current workforce, providing them with opportunities for career growth. Future employers will be looking at competencies beyond the degree.
As an educational platform that has incorporated AI, how are you able to gauge the competence of a student with a technology that has simplified almost everything?
The world is evolving, and gauging competence based on the memorisation of facts is not sustainable in the long term. We gauge competence based on a person's ability to be creative, collaborate, and solve real-world problems, thereby increasing productivity.
Is it currently capable of creating new knowledge or does it just analyse and aggregate what already exists?
It is capable of independently generating new ideas or concepts based on the knowledge it was trained on (training data) which is currently the entire web.
What are its limitations?
Like any machine learning model, it is only as good as the data it was trained on. If the training data contains biases, it may produce biased output.
It is not capable of creating new knowledge or ideas in the way that a human can, so its output may still require human verification.
What is the cost implication for adopting this for personal use?
It’s pretty affordable; we have a freemium model which starts at $0 with a limit of 25,000 tokens per month.
You can think of tokens as pieces of words, where 1,000 tokens are about 750 words. For 200K tokens, you pay $19 or Sh2,350, and for 1.5M tokens, you pay $99 or Sh12,230. We charge monthly.
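The tiers above can be sanity-checked with the quoted rule of thumb that 1,000 tokens correspond to roughly 750 words. A minimal sketch (the tier names here are illustrative, not Visiondrill's actual plan names):

```python
# Rough token-to-word arithmetic for the quoted tiers.
# Assumes the stated rule of thumb: 1,000 tokens ~ 750 words.
WORDS_PER_1K_TOKENS = 750

def tokens_to_words(tokens):
    """Approximate word count covered by a given token budget."""
    return tokens * WORDS_PER_1K_TOKENS // 1000

# (tokens per month, price in USD) -- figures from the interview
tiers = {
    "free": (25_000, 0),
    "mid": (200_000, 19),
    "top": (1_500_000, 99),
}

for name, (tokens, usd) in tiers.items():
    print(f"{name}: ~{tokens_to_words(tokens):,} words/month for ${usd}")
```

By this estimate, the free tier covers roughly 18,750 words a month, the $19 tier about 150,000, and the $99 tier over a million.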
If this tech evolves significantly, is there a possibility that universities could be obsolete?
People go to universities not only to learn but also to connect with like-minded people, so it's a bit too early to make predictions. The role of the university cannot be replaced by tech.
It is the way universities conduct teaching and package courses that needs to evolve by embracing technology.
How will the landscape look in the next 20 years in Africa, and can we avoid AI and still survive?
AI will integrate with other technologies and help in fast-tracking Research and Development, service delivery, business modeling, product development, market access, business intelligence, and go-to-market strategies.
Critical issues will include convergence with the IoT (Internet of Things), blockchain, and cybersecurity, as well as the broader socio-economic and governance dimensions of the tech revolution in Africa.
Trying to avoid AI would mean an end to civilisation as we know it, since it has been around for quite a long time. Every popular social media site and online shopping platform, as well as apps like Uber, is powered by AI.
And what are some of the aspects that should not be ignored?
Safety: As AI systems become more complex and capable, it is important to ensure that they are safe and reliable.
Privacy: AI systems often involve the processing and storage of large amounts of personal data, and it is important to ensure that this data is protected and used responsibly.
Ethics: AI systems should be designed and used in ways that are ethical and fair.
How can we safeguard against accidental misuse and abuse of AI in relation to our data protection and cyber security laws?
The biggest challenge is inadequate awareness. We should educate Kenyans on the potential dangers and review our data protection laws and policies to cover the disclosure of AI-generated versus human-generated content.
Similarly, we should train people on the proper use of AI, including how to identify and report potential misuse or abuse.
Are human beings in danger of being outsmarted if we manage to develop the infrastructure for Sentient AI (able to perceive or feel things) or is it just an illusion?
Human beings and machines will have to co-exist. Technology will not replace humans, but there has to be a symbiotic relationship.
It is naive to think that AI cannot be smarter than humans. We just need to build safety measures as AI advances. People working on AI also need to be held accountable.