Before presenting my thoughts on XAI as a behavior-regulating feature, it is important to recap an excerpt from what I wrote last month in my “five observations” post:
Regulating AI behavior is necessary to mitigate harm. One approach for achieving this is imposing a legal requirement that, prior to deployment, the AI must be certified as having passed training on what constitutes acceptable behavior. (Another way to think of this certification is that once the AI passes this training, it is in effect licensed to operate.) The AI’s acceptable-behavior framework, the learning set, is constructed from a variety of universally accepted criteria, including, for example, applicable international standards, which helps yield uniform application and operational performance. The AI’s acceptable-behavior model is then algorithmically isolated in the application (be it cyber or cybernetic) and hard-coded, meaning it is made operationally independent of the AI’s capabilities and thus immune to iterative code changes. This acceptable-behavior approach dynamically disciplines the AI’s behavior, enabling real-time deterrence and making the regulation of AI behavior practicable.
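As an illustration only (the post does not prescribe an implementation), the isolation idea can be sketched as a policy layer whose rule set is fingerprinted at certification time and re-verified on every decision, so that later code changes elsewhere in the system cannot silently alter it. All names here (`RULES`, `CERTIFIED_HASH`, `is_permitted`) are hypothetical:

```python
import hashlib
import json

# Hypothetical sketch: the acceptable-behavior rules live in their own
# sealed module. The AI's other components may evolve, but the policy
# check first verifies that the rule set still matches the baseline
# recorded at certification.

RULES = {"max_speed_kmh": 50, "forbidden_actions": ["override_brakes"]}

def fingerprint(rules: dict) -> str:
    """Deterministic hash of the rule set, recorded at certification."""
    return hashlib.sha256(json.dumps(rules, sort_keys=True).encode()).hexdigest()

CERTIFIED_HASH = fingerprint(RULES)  # stored outside the AI's codebase

def is_permitted(action: str, speed_kmh: float) -> bool:
    # Refuse to decide if the rules no longer match the certified baseline.
    if fingerprint(RULES) != CERTIFIED_HASH:
        raise RuntimeError("policy tampered with; certification void")
    return action not in RULES["forbidden_actions"] and speed_kmh <= RULES["max_speed_kmh"]

print(is_permitted("steer_left", 42.0))       # True
print(is_permitted("override_brakes", 42.0))  # False
```

The point of the sketch is only the separation: the decision function consults a baseline that sits outside the AI’s own iterating codebase.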
XAI plays an important part in the makeup of the acceptable-behavior framework, so much so that its absence may reasonably be viewed not merely as curious, but as arguably negligent and, from a licensee’s perspective, contractually unacceptable. Of course, the XAI interface must efficiently overcome vulnerabilities in how information is presented, which means it must deliver to the human user what can be regarded as “perfect” information. “Perfect” information is information that is: (1) relevant, (2) easily understood, and (3) not prone to misrepresentation. The third element also encompasses data integrity, meaning the information delivered is unalterable, which suggests that in some AI applications, especially those in heavily regulated sectors, the use of a blockchain infrastructure becomes necessary.
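The unalterability requirement can be illustrated with a minimal hash chain, the core data structure behind blockchain-style integrity. This is a sketch under simplified assumptions, not an endorsement of any particular platform, and the record format is invented for illustration:

```python
import hashlib

# Minimal hash chain: each explanation record's hash incorporates the
# hash of its predecessor, so altering any earlier record invalidates
# every hash that follows it.

def link(prev_hash: str, record: str) -> str:
    return hashlib.sha256((prev_hash + record).encode()).hexdigest()

def build_chain(records):
    """Return the list of chained hashes for a sequence of records."""
    hashes, prev = [], "genesis"
    for r in records:
        prev = link(prev, r)
        hashes.append(prev)
    return hashes

def verify(records, hashes) -> bool:
    """True only if the stored hashes match a fresh recomputation."""
    return build_chain(records) == hashes

log = ["explanation: lane change due to obstacle",
       "explanation: braking for pedestrian"]
chain = build_chain(log)
print(verify(log, chain))                          # True
tampered = ["explanation: no obstacle", log[1]]
print(verify(tampered, chain))                     # False
```

A real deployment would distribute the chain across parties (which is what a blockchain adds over a local hash chain), but the tamper-evidence property shown here is the one the data-integrity requirement relies on.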