Artificial Intelligence vs. Genuine Stupidity


I am about to discuss topics in which I just don't have much experience, but am learning – Audit and Artificial Intelligence.

The Artificial Intelligence Part

My current shiny object has been Artificial Intelligence (AI), and it should now be near the top of everyone's inquiry list because it will impact us all. I just received a chatbot call about my vehicle's warranty expiring. Such a call could be of value, but not in this case: my vehicle is only 18 months old, has a 36-month warranty, and I am nowhere near the 36,000-mile limit. It was not the most effective chatbot I have encountered, for two reasons: (1) it did not really listen to me, and (2) it had really poor audio. I rate it at the bottom end; it may be an early attempt by the warranty insurance company to replace its real sales agents. Other chatbots have been quite good and reasonably interactive. One, when confronted with the question "Are you a real person?", asked why I thought it was not a real person. That was a good attempt to determine why I suspected a machine on the other end of the line, and perhaps to adjust its approach the next time the same source contacted me.

One topic I am only beginning to find in the mainstream media and basic technical publications is AI Safety. It is well represented in more focused publications, such as MIT Technology Review (https://www.technologyreview.com/) and Harvard Business Review (https://hbr.org/), although both require subscriptions. For more esoteric and focused open websites, please consider:

Now to the Audit portion.

Most of my search efforts seem to find results about how AI will be used to do audits, rather than how to audit AI. The Institute of Internal Auditors has a number of papers that start the discussion, but I have not been able to find any details on model audit work plans. Their AI Auditing Framework seems slim, but at least it's a start. Several of its concepts should be expanded:

  • AI Competencies – Audit staff need to know how AI works and understand the risks it presents. IIA Standard 1210 Proficiency and 1210.A3 set expectations as to what auditors need to know, but offer little specific guidance on where to start.
  • Governance – AI policies and procedures need to be established. It is also necessary to clearly define the relationships and responsibilities of each of the three lines of defense as they relate to AI.
  • Regulations – Other than current, non-AI, regulations, what additional regulations should be considered?
  • Data and Infrastructure – “Big Data” is expected to be the basis for much of the data used by AI algorithms. The quality (completeness, accuracy, and reliability) of that data will be critical. Are the current audit requirements for this activity adequate? If not, what needs to be improved?
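The data-quality question above lends itself to automation. The sketch below is a minimal, hypothetical example of the kind of completeness, accuracy, and reliability checks an audit team might script against an AI training dataset; the column names, valid ranges, and sample records are all invented for illustration.

```python
import pandas as pd

# Hypothetical training-data extract; columns and values are illustrative only.
df = pd.DataFrame({
    "customer_id": [101, 102, 103, 103, 105],
    "credit_score": [710, 640, None, 585, 1250],   # one missing, one impossible
    "loan_amount": [25000, 18000, 32000, 32000, 41000],
})

findings = {}

# Completeness: fraction of missing values in each column.
findings["missing_pct"] = df.isna().mean().to_dict()

# Accuracy: values outside a documented valid range (FICO scores run 300-850).
in_range = df["credit_score"].between(300, 850)
findings["out_of_range_scores"] = int((~in_range & df["credit_score"].notna()).sum())

# Reliability: duplicate records that could skew model training.
findings["duplicate_rows"] = int(df.duplicated(subset="customer_id").sum())

for check, result in findings.items():
    print(check, "->", result)
```

Checks like these do not replace an audit work plan, but they turn the vague question “is the data adequate?” into specific, repeatable tests with documented tolerances.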

Closing Thoughts

I see three challenges to auditing AI:
  1. Knowledge of AI
    • How will internal auditors gain the knowledge necessary to adequately audit an organization’s AI activities? Simply expanding current IT audit activities may not be sufficient. It may be necessary to engage specialists to assist in the activity, and traditional audit work plans may not address the dynamic nature of some AI environments. These variables point to additional costs and staff just to keep up with an organization’s activities.
    • How will organizations manage their AI activities? Just as an organization’s audit activity must expand its knowledge of AI, the organization’s management must also become more knowledgeable of AI in general and their AI activities. This also expands to the Board of Directors, who are ultimately responsible for the actions of the organization.
    • How will organizations communicate their use of AI to their customers and other stakeholders (employees, vendors, …)? Letting customers know how AI has been incorporated into a product will become critical as customers become more AI-aware. If bad outcomes occur, corporate communications about the problem will have to be as swift as the actions taken to correct it.
  2. Recognition that AI is different than IT
    • Although parts of AI can be considered just fancy IT work, its dynamic, potentially self-learning characteristics make it different.
    • It requires specific planning, policies, standards, procedures, and management to ensure that it is properly managed and operates as intended. Without specific policies and standards, how could anyone determine (audit) whether it is operating as intended?
    • Old models of operations will be broken and new ones developed. These changes may cause major disruption for an organization’s current employees and need to be addressed in the planning stages.
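To make “operating as intended” auditable, the policies and standards mentioned above would need to state measurable tolerances. A minimal sketch, assuming a hypothetical policy that limits how far a model’s average output score may drift from the baseline approved at deployment (the scores and tolerance are invented for illustration):

```python
import statistics

# Hypothetical baseline captured at model approval, and recent live scores.
baseline_scores = [0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50]
recent_scores = [0.55, 0.60, 0.62, 0.68, 0.70, 0.75, 0.80]

# Illustrative policy limit: flag if the mean score shifts by more than 0.10.
DRIFT_TOLERANCE = 0.10

drift = abs(statistics.mean(recent_scores) - statistics.mean(baseline_scores))
drift_flagged = drift > DRIFT_TOLERANCE

print(f"mean shift = {drift:.2f}, flagged = {drift_flagged}")
```

The point is not this particular statistic; it is that a self-learning system can only be audited against intent if that intent is written down as a testable threshold before the system starts changing.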
  3. Regulation is coming, but it will be a mess.
    • Competition in AI development will press many to ignore the need for appropriate regulations. Each state wants to be the place where autonomous vehicles are developed. Countries want to lead the development of AI for various reasons, some not so good. As the Law of Unintended Consequences is enforced, regulations will come. And just as organizations must become knowledgeable about AI, legislators and regulators will face the same lack of knowledge.

Cybersecurity regulations will only be part of the solution. While reading the new SEC Cybersecurity Interpretive Guidance, one could almost substitute “Artificial Intelligence” wherever “Cybersecurity” appears. I would expect that companies, to comply with this guidance, will have to list the risks associated with their AI activities as ones that “pose grave threats to investors, our capital markets, and our country.” (Securities and Exchange Commission, 17 CFR Parts 220 and 249, [Release Nos. 33-10459; 34-82746], I. Introduction, A. Cybersecurity)