It’s no secret that AI has the power to transform the way we live our everyday lives. The idea that we can build computers that learn and adapt as we do, sometimes even better and more efficiently than us, is both breathtakingly innovative and deeply frightening. AI could easily follow the same unfettered path as social media and data privacy, with all the attendant scandals and headlines: the numbing of our intellectual and emotional capabilities in exchange for wonderful connectivity. We not only can, but must, do it differently. We all love technology and the good it has facilitated, but that doesn’t mean we can leave it unchecked. The role social media played in the storming of the Capitol is proof of this. I want public service leaders, not technology company leaders, to make decisions about the role of AI in our society.
Although we can never fully predict the consequences of ever-evolving technological advancement, a precautionary approach needs to be taken, starting with building a universal framework governed by human values: algorithms that adapt to a particular set of rules. Various declarations on AI standards have been made, such as the Montreal Declaration for a Responsible Development of Artificial Intelligence and the EU’s Ethics Guidelines for Trustworthy Artificial Intelligence, but none of them constrains the dominion AI has the potential to assume globally.
A universal framework means creating a shared set of values and approaches as human beings before translating them legally into digital technology. There are plenty of precedents, so please don’t throw the freedom line at us: a classic example is the development of the passport. This framework would need to live within the UN or something similar, a globally recognised and respected institution with the capacity to enforce it. UNESCO has indeed released reports and recommendations over the years on the ethics of AI, particularly robotics, but no globally agreed declaration has yet been made. I am acutely aware that we face four great crises (Climate, COVID, Economy and Race), but to avoid the addition of a fifth, we need to commit a sliver of brainpower and resources to nip this challenge in the bud.
Without diving deeply into the philosophical question of what is truly ethical, here is a compilation of what an AI framework could look like:
1. Safety + Freedom from Weaponisation: Human protection, including protection from cybersecurity and terror threats. Institutions could also verify the accuracy and validity of AI actions. A clear mandate could be established that AI will not be used to oppress, subjugate or inflict any kind of harm on others.
2. Accountability: Designers and companies could be held responsible for the use, and any form of abuse, of an AI product. Every AI product should pass an evaluation of its objectives, benefits and risks.
3. Human Value Alignment: AI could be designed in accordance with human rights standards, aligned with the UN declaration, and driven to enhance human existence. Any business model that attempts to use the technology to replace human roles in our economy should be shut down.
4. Privacy + Liberty: Institutions must ensure data governance is aligned with global standards and that data input methods are of appropriate quality and purpose. People should also have the right to determine which of their data is used, and how.
5. Shared Benefit and Prosperity: AI designs should positively impact as many people as possible, whether through direct benefit or financial prosperity. Further, if AI powers the harvesting of human data, those very humans should unequivocally be able to share in the financial benefits, or opt to have their data removed. It’s our data, not theirs.
6. Human Oversight: Humans should be in control of where and how AI designs are implemented, and should have the ability to terminate any programme. Further, there should always be an approved group overseeing such activity, rather than one individual or one self-interested group. We need a democratised oversight approach.
7. Transparency: AI decision-making should be traceable, with open human communication around the technical processes of AI systems. Answers and explanations should be readily available in every case. One idea is a kind of crowdsourced catalogue of AI activity, so people can see how, where and why it is happening.
8. Fairness: Non-Discrimination + Wellbeing: Equal access through inclusive design must be realised. Prejudice, unfair bias and marginalisation must not be reflected in AI design processes; this can be reinforced through culturally diverse and representative hiring. AI systems should be user-friendly and relatable to all. There should also be awareness and avoidance of environmental repercussions.
Although the idea of a universal agreement has been thrown around rooms of discussion for several years, the biggest barrier remains: who is going to pay for it? Take the upkeep of the UN, for example: all 193 member states pay a set percentage of the UN budget, supplemented by independent donations. Using this model, we could establish and fund a similar code of conduct for ethical AI. It all depends on commitment. As it stands, there is a lot of procrastination around installing a legitimate framework; I don’t think people truly understand the risks of AI running wild, what it could become and how it could adversely impact our society. Data harvesting is a prime example of the violations AI enables, now more than ever given the retreat into our homes forced on us by lockdown after lockdown, exacerbating our dependence on technology.
Yet there is so much opportunity, whether it’s agricultural development in Africa or improving the way we develop medical treatments; there is no doubt that AI has played a critical role in helping us develop and roll out the COVID-19 vaccines. To secure these varied benefits, a universal framework needs to be reached so that the broader public has equal opportunity and access to all forms of development.
The research is already out there. The Alan Turing Institute, for example, exists in part to address the challenges arising from the deployment of Robotics and Autonomous Systems (RAS) in solving socially relevant problems in a safe and ethical manner. ATI has done great work on meta-learning algorithms and environmental technology that, if given attention on an international platform, could support our global climate change efforts. Other examples of existing research and models for an ethical AI framework include OpenAI, the Future of Life Institute and the AI Now Institute. The World Economic Forum has put together a similar body, placing multi-stakeholder collaboration at the centre of implementing an accountable contract through the testing of policy structures. It may be the best outlet we have at the moment, and it is most definitely worth getting behind.
This article is not meant to have the answers. It’s not even meant to be the beginning of the answers. The goal is to provoke reflection: to encourage us to be more thoughtful than we were when the data-hungry era emerged. We urge governments and agencies around the world to allocate resources, brainpower and capacity to examine these issues through the prism of human value rather than capital value. If AI is explored only through the hyper-capitalistic lens of big business and big tech, we will lose. No one wants to stifle innovation, but we must ensure our humanity keeps pole position in any development.
Stephen Bediako x Ravina Mehta