What Kenya can learn from Europe on how to make well-balanced artificial intelligence laws


The European Union (EU) regulation on artificial intelligence (AI) was recently passed after a five-year process. Before its passage, there was a great deal of lobbying and advocacy both for and against the new law.

Human rights experts and researchers felt that allowing unfettered AI would be detrimental to human rights and welfare in the long run. They argued that a law was needed to curtail and address any negative impacts of AI.

Tech giants, on the other hand, felt that legislating AI would stifle innovation. The issue was already hotly debated well before the new law was passed.

There is a need to balance human welfare on the one hand and technological development on the other. Left unfettered, AI can be very dangerous, especially if it falls into the wrong hands. However, a very restrictive law is equally dangerous, as it would stifle innovation, and AI has many uses and advantages.

This is why the new EU law was carefully drafted to address both needs. A quick read of the law shows that while it is human-centric, it does not stifle innovation but rather encourages it, providing for regulation according to the level of risk. What I love about this law is that it does not ban AI altogether, but provides a safety net through which AI can be used without harming human welfare.

The new law categorises risk into four levels and provides different regulations depending on the level of risk identified. This is well thought out, as it means that simple AI solutions will not be subjected to stringent regulation. Innovation in, and uptake of, simple AI solutions can therefore continue without onerous compliance requirements.

The higher the level of risk identified, the higher the compliance requirements. The highest level of risk is classified as unacceptable. High risk, though acceptable, is subjected to very strict assessment and compliance, and includes any activities that may have an impact on human rights.

High-risk AI includes systems affecting access to justice, such as AI-enabled case search in the legal sector. Before an AI system classified as high risk can be allowed, it must be subjected to very strict compliance mechanisms. The advantage of differentiating levels of risk and regulation is that simple AI solutions are not subjected to tough compliance measures, while human rights are still safeguarded in more complex AI-driven solutions.

While the law applies only within the EU for now, it is one to watch for businesses with global ambitions. If you intend to do tech-related business in the EU, it is wise to acquaint yourself with this law.

If data protection laws are anything to go by, the AI legislation discussion is likely to find its way to Kenya. It is therefore important to anticipate that there may soon be a law regulating AI in Kenya.

Businesses already providing AI-related services in the EU will need to comply with the new law. It may also affect the startup funding sector: a lot of start-up funding comes from investors within the EU, who may require the tech companies they fund to comply with the new AI law. It may likewise affect technology transfer arrangements between Kenyan start-ups and EU businesses where the technology pertains to AI.

Ms Mputhia is the founder of C Mputhia Advocates | [email protected]
