AI INSIGHTS, August 2022
AI Governance frameworks and what is left (out)
Björn Preuß
CHIEF DATA SCIENTIST, 2021.AI
Since 2015, we have seen scholars discuss the ethics and risks associated with artificial intelligence (AI) and other intelligent systems. Following this discussion, we see certain gaps in the results. With this essay, we want to raise the concern that practical implications and implementation suggestions are often left out, even though, from a societal perspective, they are just as important as the regulation itself. We want an ethical usage of AI, but we do not want to lose the race against others only to end up using their systems. So we have to think practically.
Introduction
Over the last few years, I have witnessed the emergence of many new frameworks and suggestions for AI governance. Since 2015, the question of how to regulate AI and intelligent systems has become more and more prominent [1]. Scholars have grown concerned about the ethics and risks around AI systems. That is understandable given the many high-impact applications that intelligent systems might have in the coming years [2]. It is true that the industry still lacks regulation [3] and that we have to search for the best possible way to regulate without harming development or falling behind the US or China in the race towards AI. Even though there is a risk of slowing down technological development, the World Economic Forum, among others, points to the need for regulation and argues that now is the time to get started [15].
However, one thing I miss in the debate is that scholars barely discuss the actual application of such regulations. Having frameworks is a first step, but once in a while they appear to be more of a desk product. The frameworks often leave open how the required tests are to be carried out and whether we can actually verify the requirements with acceptable effort. The suggested frameworks differ in their applicability, or to put it differently, their action-ability. Some are very theoretical and high-level, and one has to think about which actual implications they will have. Others, however, offer interesting insights that relate to statistical or methodological approaches for measuring and testing intelligent systems in production. In the following text, we shed some light on the high-level ideas of the frameworks we consider most relevant. We will not conclusively answer the question of their applicability, but we will point in that direction.
The quest for effective and efficient implications
The first framework concerns the quest for explainable AI. Scholars focusing on this already bridge into the question of its applicability [4] [5]. Beyond the quest for model explainability, this has to go hand in hand with suggestions on what to act upon. Just knowing why a model makes a prediction will not help in identifying risk. One also has to deliver suggestions on what to look out for: which variables are high risk, and which should we monitor? One way to do this could be a guiding algorithm that points out the risk areas [6]. The good thing about this approach is that we can see the immediate action and the way forward. Such an application of AI guiding what to control for raises other questions, such as who controls that algorithm. But the idea is to some extent appealing, since the impact metrics generated through frameworks such as SHAP, LIME, ELI5, or others will over time be difficult for a person to oversee [17] [18], even more so with models like GPT-3 and its billions of parameters [16]. The importance of explainability stated in the research has fostered two different approaches to making models explainable [19]: i) building models that are inherently explainable, so-called self-explainable models [21], and ii) post-hoc interpretability methods, which use a second model or method to derive interpretability of the system [19] [22]. These two approaches have their pros and cons and lead to a performance vs. explainability trade-off: the models of group i) offer higher explainability but might fall behind in performance, while those of group ii) retain performance at the cost of lower explainability [19].
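To make this concrete, below is a minimal sketch of approach ii), post-hoc interpretability: it ranks the features of a hypothetical tabular model by mean absolute SHAP value, the kind of impact metric [17] a governance function might select for monitoring. The dataset, feature names, and model choice are illustrative assumptions, not part of any cited framework.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical tabular data; column names are illustrative only.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((500, 4)),
                 columns=["age", "income", "tenure", "usage"])
y = (X["income"] + X["usage"] > 1.0).astype(int)  # hypothetical target

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer yields per-prediction Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Mean absolute SHAP value per feature: a simple global "what to watch" ranking.
impact = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(impact.sort_values(ascending=False))  # top features are monitoring candidates
```

Even in this toy setting, the output is a ranked list rather than a decision; which rank warrants action is exactly the kind of question the frameworks leave open.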
One point that is not sufficiently covered in the frameworks, however, is the overall process. In the quest for transparency, it is good to look at the model, but one equally has to watch the making of the model. Missing the latter impedes the auditing and accountability of such a system or model [19] [20]. A transparent life-cycle of the model would allow additional governance around its actual construction [4]. Such process-level transparency would allow the implementation of decision rules along the process to steer it [7] [8] [9] (see the sketch after this paragraph). Looking across the suggestions, we can immediately see useful elements of the framework, but here too we miss concrete answers as to how a system should first track all the events around the life-cycle and secondly enforce rules upon it. We would suggest more extensive research into how such governance suggestions can actually be implemented in such a framework and what this would require. The question we want to raise as well is whether we can make such a system efficient so that it does not block firms from venturing into AI and building novel systems. We see the quest for regulation as a valuable point and follow scholars such as [1] on its relevance; however, regulation should not mean that European players fall behind other international vendors because of technological limits or the increased overhead costs the regulation imposes. The idea raised by [10] is interesting; however, it might be limited and slow in practical execution. Supranational governance bodies have not been as efficient as self-regulating functional units within organizations.
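As one illustration of what process-level transparency could require, the following sketch logs life-cycle events to an append-only audit trail and enforces simple decision rules at the moment events are recorded. All event names, rules, and thresholds are hypothetical assumptions; a production system would need far richer lineage and policy tooling.

```python
# Minimal sketch of process-level transparency: an append-only audit log of
# life-cycle events plus simple decision rules enforced as events are logged.
# Stage names, rule logic, and thresholds are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class LifecycleEvent:
    stage: str      # e.g. "data_validation", "training", "evaluation"
    detail: dict
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ModelAuditTrail:
    def __init__(self) -> None:
        self.events: list[LifecycleEvent] = []
        self.rules: list[Callable[[LifecycleEvent], bool]] = []

    def add_rule(self, rule: Callable[[LifecycleEvent], bool]) -> None:
        self.rules.append(rule)

    def record(self, stage: str, detail: dict) -> None:
        event = LifecycleEvent(stage, detail)
        self.events.append(event)
        # Enforce every governance rule at the moment the event is logged.
        for rule in self.rules:
            if not rule(event):
                raise RuntimeError(f"Governance rule violated at stage '{stage}'")

# Hypothetical rule: block promotion if evaluation accuracy falls below 0.8.
trail = ModelAuditTrail()
trail.add_rule(lambda e: e.stage != "evaluation"
               or e.detail.get("accuracy", 0) >= 0.8)

trail.record("training", {"dataset": "v1", "algorithm": "gbm"})
trail.record("evaluation", {"accuracy": 0.86})  # passes; 0.75 would raise
```

The open question raised above remains visible even here: someone has to decide which rules go into the trail, and the overhead of maintaining them must stay low enough not to deter firms from building at all.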
Another suggestion is an intelligent model for regulating learning algorithms [1]: some scholars propose using algorithms to test algorithms [1]. This might be questionable. They further say that intelligent systems should test algorithms, which can be interpreted in different ways [9]: we have to decide what we regard as an intelligent system and who is then responsible for governing it. However, this approach is also appealing from a couple of angles. One is that the complexity of automated systems and combinations of AI models might grow to an extent where it becomes difficult for humans to oversee. As mentioned earlier, the number of variables to watch might increase with complex models such as GPT-3 [16]. Here, it might be an option to use intelligent systems to watch the algorithms. Another possibility is to use models to select the metrics to watch. One simple example is outlier detection on the performance metrics, which could cover the production state of the model and control its behavior (a sketch follows below). Another suggestion goes in the direction of doing this during the construction of the model [11]: one could use such a system to flag specifications during the model development process and document them.
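A minimal sketch of such a metric watcher follows: a rolling z-score outlier detector over a production model's accuracy history. The window size, threshold, and simulated data are illustrative assumptions rather than a prescribed method.

```python
# Minimal sketch: flag outliers in a production model's performance metric
# with a rolling z-score. Window size and threshold are illustrative choices.
import numpy as np

def flag_metric_outliers(metric_history, window=30, z_threshold=3.0):
    """Return indices where the metric deviates sharply from its recent past."""
    values = np.asarray(metric_history, dtype=float)
    outliers = []
    for i in range(window, len(values)):
        past = values[i - window:i]
        mu, sigma = past.mean(), past.std()
        if sigma > 0 and abs(values[i] - mu) / sigma > z_threshold:
            outliers.append(i)
    return outliers

# Simulated daily accuracy: stable around 0.90, then a sudden drop.
rng = np.random.default_rng(1)
accuracy = list(0.90 + 0.01 * rng.standard_normal(60))
accuracy[45] = 0.70  # e.g. a data-drift incident
print(flag_metric_outliers(accuracy))  # -> [45]
```

Note that this watcher is itself an algorithm watching an algorithm, which brings back the governance question raised above: who audits the watcher and who sets its thresholds?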
Moving away from the in-depth control of the models, one might ask which implications this has for a company and why, besides ethics, one should engage in this. One reason is surely the risk that one carries when not engaging with such systems [1] [6]. Beyond that, common reasons for governance are the consequences one faces when not being compliant; here one could point to the EU Commission's suggestions for guidelines [13] and fines [14]. But one could also think about a model in which companies, in the event of a failure, would be judged differently based on what they had done to mitigate the risk of that failure [12].
Concluding remarks
In conclusion, we can say that the initiatives taken are filled with goodwill. However, some lack a practical perspective on applicability, and hence they might on the one hand slow down development by introducing manual processes and on the other hand never be implemented in practice as automated systems. Many frameworks also fall short in recognizing that AI systems are dynamic, probably more like living systems than static machines. Given this analogy, we have to watch them over time and cannot run one audit and then leave them alone.
Being ethical and building trust in this new technology will be key to its success. If people do not trust the systems, we will see no adoption. But we have to take the suggested frameworks and systematically assess their applicability and feasibility to cope with the nature of machine learning systems and the like. One should aim for an approach that is simple but still effective, not oversimplified. We have to do something, but we have to do it the right way: not stopping innovation and development while still being ethical and trustworthy.
References
- Almeida, P., Santos, C., & Farias, J. S. (2020). Artificial Intelligence Regulation: A Meta-Framework for Formulation and Governance. In Proceedings of the 53rd Hawaii International Conference on System Sciences.
- Holder, C., Khurana, V., Harrison, F., & Jacobs, L. (2016). Robotics and law: Key legal and regulatory implications of the robotics age (Part I of II). Computer law & security review, 32(3), 383-402.
- Reed, C. (2018). How should we regulate artificial intelligence?. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128), 20170360.
- Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751-752.
- Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138-52160.
- Firth-Butterfield, K. (2017). Artificial Intelligence and the Law: More Questions than Answers?. Scitech Lawyer, 14(1), 28-31.
- Butterworth, M. (2018). The ICO and artificial intelligence: The role of fairness in the GDPR framework. Computer Law & Security Review, 34(2), 257-268.
- Tutt, A. (2017). An FDA for algorithms. Admin. L. Rev., 69, 83.
- Buiten, M. C. (2019). Towards intelligent regulation of artificial intelligence. European Journal of Risk Regulation, 10(1), 41-59.
- Wallach, W., & Marchant, G. E. (2018). An agile ethical/legal model for the international and national governance of AI and robotics. Association for the Advancement of Artificial Intelligence.
- Arnold, T., & Scheutz, M. (2018). The “big red button” is too late: an alternative model for the ethical evaluation of AI systems. Ethics and Information Technology, 20(1), 59-69.
- Scherer, M. U. (2015). Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harv. JL & Tech., 29, 353.
- Smuha, N. A. (2019). The EU approach to ethics guidelines for trustworthy artificial intelligence. Computer Law Review International, 20(4), 97-106.
- Benkler, Y. (2019). Don’t let industry write the rules for AI. Nature, 569(7754), 161-162.
- WEF (2021). AI governance’s time has come. 6 ways to act now. Online source https://www.weforum.org/agenda/2021/04/deferring-ai-governance-makes-no-sense-here-s-what-we-can-do-instead/
- Floridi, L., & Chiriatti, M. (2020). GPT-3: Its nature, scope, limits, and consequences. Minds and Machines, 30(4), 681-694.
- Parsa, A. B., Movahedi, A., Taghipour, H., Derrible, S., & Mohammadian, A. K. (2020). Toward safer highways, application of XGBoost and SHAP for real-time accident detection and feature analysis. Accident Analysis & Prevention, 136, 105405.
- Garreau, D., & Luxburg, U. (2020, June). Explaining the explainer: A first theoretical analysis of LIME. In International Conference on Artificial Intelligence and Statistics (pp. 1287-1296). PMLR.
- Bauer, K., Hinz, O., van der Aalst, W., & Weinhardt, C. (2021). Expl(AI)n it to me – Explainable AI and information systems research.
- Obermeyer, Z., & Weinstein, J. N. (2019). Adoption of artificial intelligence and machine learning is increasing, but irrational exuberance remains. NEJM Catalyst Innovations in Care Delivery, 1(1).
- Du, M., Liu, N., & Hu, X. (2019). Techniques for interpretable machine learning. Communications of the ACM, 63(1), 68-77.
- Lundberg, S., & Lee, S. I. (2017). A unified approach to interpreting model predictions. arXiv preprint arXiv:1705.07874.