The Black Box Problem
Explaining AI decisions
AI can improve how businesses make decisions. But how does a business explain the rationale behind an AI decision to its customers? In this episode, we explore this issue through the scenario of a bank that uses AI to evaluate loan applications and must be able to explain to customers why an application was rejected. We do so with the help of Andrew Burgess, founder of Greenhouse Intelligence (andrew@thegreenhouse.ai).
About Andrew: He has worked as an advisor to C-level executives in technology and sourcing for the past 25 years. He is considered a thought leader and practitioner in AI and Robotic Process Automation, and is regularly invited to speak at conferences on the subject. He is a strategic advisor to a number of ambitious companies in the field of disruptive technologies. Andrew has written two books: The Executive Guide to Artificial Intelligence (Palgrave Macmillan, 2018) and, with the London School of Economics, The Rise of Legal Services Outsourcing (Bloomsbury, 2014). He is Visiting Senior Fellow in AI and RPA at Loughborough University and Expert-in-Residence for AI at Imperial College’s Enterprise Lab. He is a prolific writer on the ‘future of work’, both in his popular weekly newsletter and in industry magazines and blogs.
Further reading:
- ICO and The Alan Turing Institute, ‘Explaining decisions made with AI’ (2020)
- ICO, ‘Guide to the General Data Protection Regulation (GDPR)’ (2021)
- The Data Protection & Privacy chapter in The Law of Artificial Intelligence (Sweet & Maxwell, 2020)
- An explanation of the SHAP and LIME tools mentioned by Andrew can be found at https://towardsdatascience.com/idea-behind-lime-and-shap-b603d35d34eb, with a deeper treatment for the more mathematically minded at https://www.kdnuggets.com/2019/12/interpretability-part-3-lime-shap.html (a short illustrative SHAP sketch follows below)
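For readers who want to see what a tool like SHAP actually produces, here is a minimal sketch of explaining a single loan decision. It assumes the shap and scikit-learn Python packages are installed; the model, feature names, and data are invented stand-ins for illustration, not anything discussed in the episode.

```python
# Minimal, hypothetical sketch: explaining one loan decision with SHAP.
# All feature names and data below are invented for illustration.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

# Toy applicant data: income, credit score, and requested loan amount.
rng = np.random.default_rng(seed=42)
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, 500),
    "credit_score": rng.integers(300, 850, 500),
    "loan_amount": rng.normal(20_000, 8_000, 500),
})
# Toy target: approve when the credit score clears a threshold.
y = (X["credit_score"] > 620).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer attributes a prediction to the input features, showing
# how much each one pushed this applicant towards approval or rejection.
explainer = shap.TreeExplainer(model)
applicant = X.iloc[[0]]
sv = explainer.shap_values(applicant)

# Depending on the shap version, sv for a classifier is either a list
# (one array per class) or a single 3-D array; take the "approved" class.
approved = sv[1] if isinstance(sv, list) else sv[..., 1]
for feature, contribution in zip(X.columns, np.ravel(approved)):
    print(f"{feature}: {contribution:+.3f}")
```

The printed per-feature contributions are the raw material for the kind of customer-facing explanation discussed in the episode, e.g. "your application was declined mainly because of your credit score", rather than an unexplained yes/no from a black box.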