AI - Articles - April 6, 2020

Ethical Artificial Intelligence, Part Two: Intelligibility

In Part One of this series, I outlined some of the elements that make an AI solution trustworthy, and I proposed authenticity and honesty as being important. A component of this, and one of the principles proposed by the House of Lords Select Committee on Artificial Intelligence in its paper published back in April 2018[1], is ‘intelligibility’. In my view, we don’t need to explain everything about every AI. We do need to explain enough to justify trust.

Making Machine-Nature Transparent

The first requirement for an AI to be understood is for the people interacting with it to know that it is an AI. Using technology to deceive, mislead or misinform is at best morally questionable, and in some cases downright criminal. Unfortunately, this type of behaviour is not unheard of. In 2016, Canadian dating site Ashley Madison admitted using fake female profiles operated by ‘chatbots’ to persuade male users to pay for the ability to respond to messages from non-existent women. Amazingly, it was alleged that 80% of initial purchases were made to respond to messages written by machines. The US Federal Trade Commission launched an investigation, leading to a $1.6 million settlement: not an inexpensive mistake, but evidently not enough of a disincentive to prevent similar behaviour by others.

In 2010, the Engineering and Physical Sciences Research Council (EPSRC) proposed that whilst it might sometimes be a good thing for an AI to seem like a human, any person who interacts with it should be able to determine both its machine-nature and its purpose. I agree, and I think it would be dangerous to permit Providers to create products that deceive humans for an undisclosed purpose.

I’d also agree with the view expressed by the European Group on Ethics in Science and New Technologies[2], which proposed legal limits on how people can be “led to believe that they are dealing with human beings while in fact they are dealing with algorithms”. Such limits were described as necessary to support the principle of inherent dignity that underpins human rights, something I’ll return to later in the series. The Ashley Madison case demonstrates that this has already caused issues, and AI’s ability to pass as human is only going to improve. Deception in this context does not necessarily require malicious intent to cause harm. It is easy to imagine how destructive it could be for a human, particularly a vulnerable user such as a child, to unknowingly attribute emotions to a machine and form a relationship with it.

In regulating this issue, we need to be careful not to be overly prescriptive whilst still addressing the mischief. There will be a question as to the optimum timing for disclosure of the AI’s machine-nature, and the answer will not necessarily be the same in every scenario. As a quick straw poll, I asked a group of around thirty senior in-house counsel whether they would want to be told at the start of an interaction that they were communicating with an AI. A substantial majority agreed that they would.
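To make the timing question concrete, the sketch below shows one way the ‘disclose at the outset’ approach favoured in that straw poll might be implemented. It is a minimal illustration in Python: the ChatSession wrapper, the DISCLOSURE text and the reply_fn parameter are all hypothetical names of my own, and no particular chatbot framework is assumed.

```python
# Minimal illustrative sketch: disclose machine-nature and purpose
# before the first AI-generated reply in a session. All names here
# (ChatSession, DISCLOSURE, reply_fn) are hypothetical.

DISCLOSURE = (
    "You are chatting with an automated assistant, not a human. "
    "Its purpose is to help answer your questions."
)


class ChatSession:
    """Wraps an AI reply function so that disclosure always comes first."""

    def __init__(self, reply_fn):
        self.reply_fn = reply_fn  # the underlying model's reply function
        self.disclosed = False

    def respond(self, user_message: str) -> str:
        reply = self.reply_fn(user_message)
        if not self.disclosed:
            # Disclose at the start of the interaction, before any
            # machine-generated content reaches the user.
            self.disclosed = True
            return DISCLOSURE + "\n\n" + reply
        return reply


if __name__ == "__main__":
    # A stand-in 'model' so the sketch runs on its own.
    session = ChatSession(reply_fn=lambda msg: f"Echo: {msg}")
    print(session.respond("Hi, can you help me?"))  # disclosure prepended
    print(session.respond("Thanks!"))               # no repeat disclosure
```

Even in this toy form, the design choice is visible: disclosure is enforced by the wrapper rather than left to the model, so it cannot be skipped however the conversation begins.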


Explanation of the AI…
