The uptake of artificial intelligence (AI) within organisations has made AI policies a necessity.
AI is an emerging field that the law has yet to catch up with, leaving many gaps and grey areas. An organisational policy should cover those grey areas.
Among the grey areas that may necessitate an organisational AI policy are the moral and ethical issues that arise with the use of AI. How do you ensure that AI does not infringe on other people's rights or breach the law, for example on data privacy and intellectual property? Some AI solutions may indeed infringe on other people's intellectual property.
AI is largely machine-based, and machines make mistakes; mistakes made through the use of AI can be very costly. AI can generate inaccuracies and sometimes outright falsehoods. How, then, does your organisation rectify these mistakes?
Having in place a policy on human-machine collaboration will help minimise mistakes caused by the use of AI. Human effort should still be required alongside AI. For example, an AI tool can be used to generate basic research, but the staff member should then verify the accuracy of the AI-generated output and, where possible, improve on it. A human should still handle the approval and finalisation stage.
AI, being a machine, may lack the emotional intelligence to deal directly with customers, so the organisation needs a policy on how far AI should be used in customer care. Most websites now have a chatbot that can respond to simple customer care inquiries, but more detailed inquiries may exceed the chatbot's abilities and require human intervention.
Drafting an organisational AI policy therefore provides a framework for managing these grey areas.
When drafting the policy, first define its purpose. Why do you have a policy? Is it to set the governance structure, or to deal with the ethical issues that arise with AI?
Secondly, define the scope and usage of the policy. Who within the organisation will be affected by AI? A good AI policy helps staff clearly understand their rights, obligations and limitations when using AI; defining the scope and limitations is crucial because it tells staff how far they may go in applying it.
An example of a limitation is that staff may not use, in the delivery of their tasks, any AI tool that the organisation has not pre-authorised.
Thirdly, create the organisational and governance structure to guide the use of AI. The recommendation is to have in place a team or committee of experts who can provide guidance on the uptake of AI. The team will source AI solutions and advise on their adoption, taking into account the risks inherent in each solution.
Lastly, centralise the use of AI to avoid haphazard usage. This will ensure that organisational data is protected and that risk is managed.
Overall responsibility for the outcomes generated by AI should remain with the staff. Otherwise, you may have a situation where staff blame AI tools for lacklustre performance.
It is also good to define when human intervention and effort become necessary, to avoid a situation where staff leave all the work and its outcomes to AI.
Ms Mputhia is the founder of C Mputhia Advocates | [email protected]