Artificial intelligence (AI) has the potential to do much of what humans can do: AI-enabled tools and solutions can process data and make decisions in ways that resemble human thinking. The benefits of deploying AI are many; we can now diagnose diseases, drive cars and even identify criminals using AI solutions. But what are the ethical considerations in using AI? What are developers’ ethical obligations when building intelligent machines that will work alongside human beings? And how has Singapore’s Model AI Governance Framework developed since its initial launch in 2019?
What is ethics?
The word “ethics” refers to concepts of what is morally right and wrong, or what is morally good and bad, based on a certain set of principles or values. Ethics is usually considered within a system or code of practice and rules. Hence, you commonly find “ethics” being considered in religions, cultures, professions, or any other group that is characterized by its moral outlook.
“Ethics” defines what is good for the individual and for society and establishes the nature of duties that people owe themselves and one another.
When applied to the development and use of AI, ethics needs to be considered at two levels: the first is the moral behaviour of the developers who design, build, use and maintain artificially intelligent systems, and the second is the behaviour of the AI-enabled machines those developers design.
What is the relevance of ethics in AI, and why should we be concerned with it?
AI is a technology designed by humans to replicate, augment or replace human intelligence. It has the potential to reach human-like levels of autonomy and intelligence.
We humans have an innate “moral compass” that helps us distinguish right from wrong. When we see something that is not right, such as an injustice done to an innocent person, our brain tells us that something isn’t right.
Companies, too, can have a kind of “moral compass” in the form of governance and compliance policies that guide them as to what is right and acceptable within the organisation, and what they ought not to do so as to stay out of trouble with the law and with regulators.
Unfortunately, AI lacks such a moral compass. It can only discern right from wrong based on the data that has been fed into it. AI has no self-awareness; it simply applies whatever notions of right and wrong the developer has programmed into its system. Poorly designed projects built on faulty, inadequate or biased data can have unintended, potentially harmful, consequences.
That is why an AI ethics framework is important. Such a framework establishes guidelines for the responsible use of AI and highlights the risks and benefits of the AI tools that are being developed.
Consequences of not applying ethics in AI
Failing to apply AI ethics when operationalising data can threaten the safety of users and ultimately the bottom line of the company. It can also expose the company to reputational, regulatory, and legal damage.
Amazon spent years developing AI hiring software to review job applicants’ résumés, with the aim of mechanizing the search for top talent. A year after implementing the system, however, Amazon discovered that it was not rating candidates for software developer jobs and other technical posts in a gender-neutral way. Amazon tried to edit the programs to make them neutral, but there was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory. Amazon eventually scrapped the program¹.
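As an illustration, a simple check of the kind that could have surfaced such a problem early is to compare the model’s selection rates across groups. The Python sketch below, using invented decision data, applies the “four-fifths” rule of thumb sometimes used in employment contexts; the threshold and records are assumptions for illustration, not Amazon’s actual method.

```python
# Minimal fairness audit: compare a model's selection rates across groups.
# The candidate decisions and the 0.8 threshold are illustrative assumptions.

def selection_rate(decisions):
    """Fraction of candidates the model shortlisted (decision == 1)."""
    return sum(decisions) / len(decisions)

def disparate_impact(decisions_a, decisions_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(decisions_a), selection_rate(decisions_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical shortlisting decisions (1 = shortlisted) from a resume screener.
male_decisions = [1, 1, 0, 1, 1, 0, 1, 1]
female_decisions = [1, 0, 0, 1, 0, 0, 0, 1]

ratio = disparate_impact(male_decisions, female_decisions)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common rule-of-thumb threshold, not a legal standard
    print("Warning: selection rates differ materially between groups.")
```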
In the case of AI embedded in autonomous vehicles, the AI must make ethical decisions while navigating the road. In an emergency, for example, when pedestrians suddenly appear in the vehicle’s path, it must decide between swerving to avoid hitting the pedestrians and protecting the driver and passengers from a collision.
If developers of AI-enabled systems do not deliberately contemplate these ethical issues early in the design and implementation of the AI system, they risk serious consequences once the system goes live. Not only may they have to redesign the AI system after failing to apply AI ethics in its design and development, but they may also face wasted resources, inefficiencies in product development and deployment, and a negative impact on users.
How to apply ethics in AI
Ethics in AI addresses issues of fairness and societal norms so as to minimize discrimination and eliminate biases that may unwittingly be built into the AI solution. The following principles can be used as a guide in the design of the AI system.
Human centric
Being aware of the potential risks the AI solution may pose to the public.
Transparent, explainable, fair and unbiased
Being able to explain the development, use and results of the AI solution to customers and other users in a user-friendly way.
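What “explainable” looks like in practice depends on the model, but for simple models the output can be decomposed directly. Below is a minimal sketch, assuming a hypothetical linear scoring model with invented feature names and weights, that reports how much each input contributed to a decision.

```python
# Explaining a linear model's score as per-feature contributions.
# Feature names, weights and applicant values are hypothetical.

weights = {"years_experience": 0.4, "skills_match": 0.5, "test_score": 0.3}
applicant = {"years_experience": 5.0, "skills_match": 0.8, "test_score": 0.7}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

print(f"Overall score: {score:.2f}")
for feature, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: {value:+.2f}")  # how much each input moved the score
```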
Privacy and data protective
Using only the data necessary for development purposes, and encrypting or anonymising the data where possible. Respecting user privacy and data rights, and informing users how their personal data are collected, stored, used and protected through clear privacy statements.
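As a concrete illustration of data minimisation, the sketch below keeps only the fields a model actually needs and replaces a direct identifier with a salted hash before the record enters a development environment. The field names and salt are hypothetical, and salted hashing is pseudonymisation rather than full anonymisation.

```python
import hashlib

# Pseudonymise a user record before it enters the development environment.
# Field names are illustrative; a salted hash is pseudonymisation, not full
# anonymisation, so treat the output as still potentially re-identifiable.
SALT = b"replace-with-a-secret-salt"  # keep out of source control in practice

def pseudonymise(value: str) -> str:
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

record = {"email": "jane@example.com", "age": 34, "favourite_colour": "blue"}

# Keep only the fields the model needs; hash the direct identifier.
training_row = {
    "user_id": pseudonymise(record["email"]),
    "age": record["age"],
}
print(training_row)
```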
Accountable
Providing a point of contact for questions, directly or through partner networks. Evaluating user feedback, addressing any application issues promptly, and incorporating practical feedback into the AI solution.
Safe, secure and sustainable
Ensuring that the AI solution is secure and robust to prevent misuse and reduce the risk of being compromised by cyber-attacks.
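A small part of that robustness is simply refusing malformed or oversized inputs before they reach the model. The guard below is a minimal sketch with an invented length limit; real deployments would add authentication, rate limiting and monitoring on top.

```python
# Basic robustness guard: validate and bound inputs before they reach the
# model, so malformed or abusive requests fail fast. Limits are illustrative.

MAX_INPUT_CHARS = 2_000

def validate_request(text: str) -> str:
    if not isinstance(text, str) or not text.strip():
        raise ValueError("Input must be a non-empty string.")
    if len(text) > MAX_INPUT_CHARS:
        raise ValueError("Input exceeds the maximum permitted length.")
    return text.strip()

try:
    safe_text = validate_request("Classify this support ticket ...")
    # model.predict(safe_text) would run here
except ValueError as err:
    print(f"Rejected request: {err}")
```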
The Singapore Model AI Governance Framework
Singapore launched a Model AI Governance Framework (the “Model Framework”) in January 2019 at the World Economic Forum in Davos. A year later, the second edition of the Model Framework was published, incorporating experiences of organisations that have adopted AI and feedback received from leading international platforms. The second edition of the Model Framework provides clearer and more effective guidance for organisations to implement AI responsibly.
The Model Framework focuses primarily on four broad areas:
- internal governance structures and measures,
- human involvement in AI-augmented decision-making,
- operations management, and
- stakeholder interaction and communication.
Internal governance structures and measures
This aspect of the Model Framework is intended to guide organisations in developing internal governance structures that give them appropriate oversight of how AI technologies are brought into their operations and/or products and services.
The organisation can use its existing internal governance structures and measures to incorporate values, risks, and responsibilities relating to algorithmic decision-making. For example, risks associated with the use of AI can be managed within the enterprise risk management structure, while ethical considerations can be introduced as corporate values and managed through ethics review boards or similar structures.
Determining the level of human involvement in AI-augmented decision-making
This aspect of the Model Framework is intended to help organisations determine the appropriate extent of human oversight in AI-augmented decision-making.
The organisation should develop a methodology that would systematically guide it in setting its risk appetite for the use of AI, i.e. determining acceptable risks and identifying an appropriate level of human involvement in AI-augmented decision-making.
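One common way to operationalise such a methodology is to route low-confidence decisions to a human reviewer, with the confidence threshold reflecting the organisation’s risk appetite. The sketch below uses an invented threshold and case data; it illustrates the idea and is not a procedure prescribed by the Model Framework.

```python
# Routing AI-augmented decisions by confidence: below a threshold set by the
# organisation's risk appetite, the case goes to a human reviewer.
# The threshold and case data are invented for illustration.

REVIEW_THRESHOLD = 0.85

def route_decision(case_id: str, prediction: str, confidence: float) -> str:
    if confidence >= REVIEW_THRESHOLD:
        return f"{case_id}: auto-approved '{prediction}' ({confidence:.0%})"
    return f"{case_id}: sent to human review ({confidence:.0%} < {REVIEW_THRESHOLD:.0%})"

print(route_decision("loan-001", "approve", 0.97))
print(route_decision("loan-002", "approve", 0.62))
```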
Operations management
This aspect of the Model Framework is intended to help organisations adopt responsible measures in the operational aspects of their AI adoption process. It covers the issues to be considered when developing, selecting and maintaining AI models, including data management.
Stakeholder interaction and communication
This aspect of the Model Framework is intended to help organisations take appropriate steps to build trust through their stakeholder relationship strategies when deploying AI.
Organisations are encouraged to provide general information on whether AI is used in their products and/or services. Where appropriate, this could include information on what AI is, how AI is used in decision-making in relation to consumers, what its benefits are, why the organisation has decided to use AI, how it has taken steps to mitigate risks, and the role and extent that AI plays in the decision-making process. For example, an online portal may inform its users that they are interacting with an AI-powered chatbot and not a human customer service agent.
The Model Framework is meant to be flexible. Organisations can adapt the Model Framework to suit their specific needs and adopt those elements that are relevant.
Conclusion
While every AI implementation is different and the considerations vary widely, understanding the possible ramifications and identifying all the relevant stakeholders is critical to deploying AI responsibly.
It’s our responsibility to ensure that AI is created ethically and maintains ethical standards. Prioritizing an ethical approach in the design and implementation of AI is an important first step toward building the technology of the future.