Global tech researchers are exploring smarter artificial intelligence (AI) models equipped with selective memory loss, a development poised to safeguard against privacy breaches and ease environmental concerns.
A new study published by scholars at the Tokyo University of Science (TUS) demonstrates a method for erasing specific information from existing AI models, suggesting the technique could be used to build smarter, more efficient specialist AI rather than generalist systems.
If put into practice, the researchers note, the process could help enhance data privacy and reduce energy expenditure.
The capabilities of large-scale pre-trained AI models have recently improved drastically, as demonstrated by large-scale vision-language models like CLIP (Contrastive Language-Image Pre-training) or ChatGPT, the TUS researchers noted in the study paper.
“These typical generalist models can perform reasonably well in tasks covering a large variety of fields, which has paved the way for their widespread adoption by the public. However, such versatility no doubt comes at a cost.”
The researchers argue that operating large-scale models consumes vast amounts of energy and time, which runs against sustainability goals and limits the types of computers they can be deployed on.
Additionally, they say, users in many practical applications want AI models to fulfil specific roles rather than be jacks-of-all-trades.
“In such cases, a model's generalist capabilities might be useless and even counter-productive, reducing accuracy,” they note.
To this end, a team led by associate professor Go Irie has developed a methodology dubbed “black-box forgetting”, in which the text prompts fed to a black-box vision-language classifier are iteratively optimised so that the model selectively ‘forgets’ some of the classes it can recognise.
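To give a rough sense of the idea, the sketch below is a minimal illustration and not the authors’ code: the classwise_accuracy() query, the class lists and the vocabulary are all hypothetical stand-ins, and a simple random search substitutes for whatever derivative-free optimiser the study actually employs. The objective rewards high error on the classes to be forgotten while preserving accuracy on the retained ones, which is the trade-off the quoted passage describes.

```python
import random

random.seed(0)

FORGET = {"cat", "dog"}            # classes the model should 'forget'
RETAIN = {"car", "tree", "boat"}   # classes whose accuracy must be preserved
VOCAB = ["photo", "blurry", "sketch", "object", "scene", "grainy", "painting"]

def classwise_accuracy(prompt: str) -> dict:
    # Stand-in for the black-box model: in practice this would query the
    # deployed classifier with the prompt and a labelled validation set
    # and return per-class accuracy. Here it is a deterministic toy stub
    # so the sketch runs end to end.
    rng = random.Random(sum(ord(ch) for ch in prompt))
    return {c: rng.random() for c in FORGET | RETAIN}

def objective(prompt: str) -> float:
    acc = classwise_accuracy(prompt)
    forget_err = 1.0 - sum(acc[c] for c in FORGET) / len(FORGET)  # want high error
    retain_acc = sum(acc[c] for c in RETAIN) / len(RETAIN)        # want high accuracy
    return forget_err + retain_acc

def random_search(steps: int = 200, length: int = 4) -> str:
    # Gradient-free search over prompt tokens: only the model's outputs
    # are observed, never its weights, hence 'black-box'.
    best = random.choices(VOCAB, k=length)
    best_score = objective(" ".join(best))
    for _ in range(steps):
        cand = best[:]
        cand[random.randrange(length)] = random.choice(VOCAB)  # mutate one token
        score = objective(" ".join(cand))
        if score > best_score:
            best, best_score = cand, score
    return " ".join(best)

if __name__ == "__main__":
    print("optimised prompt context:", random_search())
```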
“Retaining the classes that do not need to be recognised may decrease overall classification accuracy, as well as cause operational disadvantages such as the waste of computational resources and the risk of information leakage,” writes Dr Irie.
According to the research group, this is the first study whose goal is to make a pre-trained vision-language model fail to recognise specific classes under black-box conditions. The team adds that, against reasonable performance baselines, the results were ‘very promising’.
The researchers tested their proposed method using CLIP, an AI model that classifies images by identifying what they contain, with the goal of getting it to ‘forget’ 40 percent of the classes of objects it could identify.
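For context, the snippet below shows what ‘classes CLIP can recognise’ means in practice: zero-shot classification scores an image against text prompts built from candidate class names, and that prompt context is precisely the part the forgetting method tunes. This is a standard Hugging Face transformers usage example rather than the study’s code, and the class list, image URL and checkpoint are illustrative choices.

```python
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# CLIP scores an image against text prompts built from class names;
# 'forgetting' a class means optimising the prompt context so the
# corresponding score is no longer reliable.
classes = ["cat", "dog", "car"]
prompts = [f"a photo of a {c}" for c in classes]

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)
for c, p in zip(classes, probs[0].tolist()):
    print(f"{c}: {p:.3f}")
```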
Kenyan IT specialist Gathirwa Irungu said the proposed method, if successful, could have significant implications, particularly in steering AI usage towards more focused and useful objectives.
“This groundbreaking approach holds significant potential for advancing artificial intelligence and machine learning. It may enhance the performance of large-scale models in specialised tasks, further broadening their impressive range of applications,” observes Mr Irungu.
“Additionally, it could be used to prevent image generation models from creating inappropriate content by enabling them to disregard specific visual contexts.”
The Japanese scholars also observed that the proposed method could help address privacy issues, a growing concern in the AI field.
“If a service provider is asked to remove certain information from a model, this can be accomplished by retraining the model from scratch by removing problematic samples from the training data. However, retraining a large-scale model consumes enormous amounts of energy,” said Dr Irie.
“Selective forgetting, or so-called machine unlearning, may provide an efficient solution to this problem. In other words, it could help develop solutions for protecting the so-called ‘right to be forgotten,’ which is a particularly sensitive topic in healthcare and finances.”
According to the team, the approach not only empowers large-scale AI models but also safeguards end users, paving the way for the seamless integration of AI into everyday human life.