AI Governance #1: In AI we trust

05-03-2021

 

"Never trust a computer you can’t throw out a window "— Steve Wozniak


Human ingenuity is often linked to the capacity to design and successfully use tools. The earliest philosophy of technology, taught by Plato and Aristotle, was already preoccupied with the relationship between the creator and the tool. In the arts, the problematic nature of this dynamic was intensely debated, and many arguments insisted that the genius of creation had to be credited to the hand guiding the paintbrush, not the other way around.

Nonetheless, the information revolution brought about a fundamental shift. Tools no longer perform only routine or laborious tasks; they now undertake intellectual functions, a phenomenon that brings the issue of trust in technologies to the fore.

Trust is the keystone of our societies. Although Greek philosophers questioned trustworthiness, the topic was soon neglected. Since the beginning of the 21st century, scholars have once again been trying to decipher the ubiquity of trust in our social organizations. Such a resurgence may be the consequence of another shift: with the information revolution, humans rely less on their peers and instead must entrust tools with their daily lives. Once, one trusted the metro driver to swiftly guide them home; now, one relies on an algorithm to perform the very same task. We have never had to trust a tool in such a way before. If we encounter a problem with our car, we know the issue stems from external factors such as a poor manufacturer or the quality of the road… The car did not make a choice leading to malfunction. Self-learning Artificial Intelligence (AI) used for decision-making upends this tacit rationale. AI is described as a new sort of intelligence. Thus, for the first time in history, through a tool of its own creation, humanity has to trust a non-human and, to its great anxiety, a non-sentient, amoral thing. This turmoil occurs at a time when we are growing increasingly dependent on these technologies.

Why is it so hard to trust? Trust is risky. It requires vulnerability. Trust may be a dangerous bet: in trusting, one places oneself at the mercy of mistakes, failures, and betrayal.

The risk is bearable when we deal with humans: their decisions are predictable, and we know how their rationality operates. Humanity is obsessed with rational behavior because it is accountable and trackable. With AI, trust binds us to a truly novel object: we cannot predict how it operates, and it does not answer to human conventions. It is unfamiliar. We must trust something alien. It is a leap of faith, and a costly one. If we listen to the imaginative scenarios of storytellers, the machine will soon take over and lead us, through its algorithms, into a cold, technocratic society devoid of human morality.

Yet mistrusting AI has a price. Contemporary challenges such as climate change, space exploration, and epidemics require capacities beyond our reach, for instance the ability to collect and analyze massive amounts of data. If we do not trust AI, we will have more than missed opportunities to grieve; we could be harmed. The issue is crucial: AI cannot move forward without our trust.


“Lack of trust in artificial intelligence is often considered one of the strongest current barriers to the widespread adoption of AI technology and autonomous systems.” – Laila Goubran


Trust is built on reliability, consistency, transparency, and competence. Studying the causes of our distrust in AI shows that there is room for improvement in each of these four domains.

  • The Machine will take over

Terminator, I, Robot, 2001: A Space Odyssey, The Matrix, Ex Machina, WALL-E… The AI apocalypse is a profitable trope. Like every technological revolution before it, AI will fundamentally challenge societies, though certainly in less catastrophic and caricatural ways. Yet the intensity of the challenge is what sets AI apart.

AI used for decision-making purposes forces us to reconsider the very definition of humanity. Indeed, this technology might perform better than us in what is deemed our species’ heritage: the ability to make (good) decisions. As the World Economic Forum warned in a report, reliance on AI may lead to the deterioration of typically human strengths (intuition and emotion) and of autonomy. A world where the machine takes over is one where humanity is no longer self-determined.

  • Failure and biases 

"AI can be used for social good. But it can also be used for other types of social impacts in which one man's good is another man's evil. We must remain aware of that" - James Handler

Consistency is key; it is AI’s strongest selling point. We value its capacity to perform continuously at the same quality, undisturbed by external circumstances. We hold our tools to the same expectation: if a pen performs poorly, it is replaced. In this regard, we are harsher judges of tools than of our peers. We believe humanity has room for failure because it can improve. If AI fails, should we conclude that the technology is essentially discredited?

Fear of biased outcomes arises from this question. Luckily, the human stands at the core of this issue, as IBM Chief Watson Scientist Grady Booch explains: "I think today, the AI community at large has a self-selecting bias simply because the people who are building such systems are still largely white, young and male. I think there is a recognition that we need to get beyond it, but the reality is that we haven't necessarily done so yet." The human values of the developers and the commercial objectives of the sponsors are programmed into the technology. Data is pivotal, especially the data used to train AI: it needs to be diverse. Otherwise, the exponential use of biased AI will narrow the plurality of decisions considered and exclude social groups from its recommendations.

Ultimately, the current legal void feeds mistrust. When an individual makes a mistake, procedures guide society in resolving the dispute.
What about AI? If it makes an ill-advised decision that harms people, who should be held responsible? The creators? Those who collected the data? The firm using the technology? The team implementing the AI’s decision?

  • The black box 

As AI improves and becomes more capable, its algorithms grow more complex, and fewer people can understand how decision-making AI operates. This knowledge gap is problematic: mistrust in AI is an understandable reaction when one does not understand how the machine reaches a recommendation. The term "black box" names this situation, which is our current situation: AI lacks transparency. It is arduous for the general public to access and understand the path AI follows from the data to the decision it generates.

Human rationality is based on a methodology. For decision-making to be understood and accepted, it needs to be a visible, publicized process. We must build in the possibility of accessing and analyzing it, whether it performs successfully or not.

 

The way forward

How can we trust AI? We believe in Governance. So that societies can place their trust in AI technologies, Prisma European Networks recommends that decision-makers:

  • Educate: Governments should inform citizens about AI technologies. The knowledge gap must be bridged to reduce the black box effect.
  • Set up norms: As the world does not agree on universal values, legal norms can be AI Governance’s first steps.  

Among other key issues, legal frameworks must clarify the regulation of data collection and use; the use of AI for decision-making purposes by public and private actors; and the legal process when AI fails or is misused. Transparency is pivotal to promoting trust in AI technologies, and it concerns more than AI’s decision-making process: application developers must be transparent about which AI they use, what data they collect, when they use AI, and for what purpose. Accountability and privacy must guide the construction of the legal framework, as must consent: users should be able to turn off AI and data-collection functions whenever it pleases them.