April 2022

With human intervention, the future of Artificial Intelligence can become auditable and transparent

Responsible AI
AI Governance
Shadow AI

To create a world where humans can trust machines to do the right things, we need to develop auditable and transparent systems.

Automation will continue to impact the workforce, but it’s not all bad news. There are plenty of examples where AI has eliminated repetitive tasks and increased efficiency, so it is only natural to expect the number of machines working alongside humans to multiply in the years to come. However, for this new augmented workforce to function, we need to create a world where humans can trust the machines to do the right things for the right reasons. For that, we need to develop auditable and transparent systems that hold the machines accountable. But how do we go about doing this?

In the shadows, machines are making choices

Whether you like it or not, machines are already playing a significant role in your life. Machine Learning (ML) models are set loose in the digital world to detect whether you are being creative with your tax returns, recommend what your credit limit should be, and help decide if you should get the job you apply for.

However, ML models are not magic entities that come into existence by chance. They depend on developers, data scientists, and product and project managers making critical decisions. Yet when a project is over, the models are often left on their own in the shadows to “do the right thing.”

Of course, ML models never make decisions with 100% certainty; it is always a matter of statistics and probabilities. So how can we be sure that our models are well maintained? How can we be sure that the data fed into the models comes from suitable datasets, and where did that data come from? How can we be sure that models are continuously monitored and improved? And how can we ensure that model outputs are fair, explainable, and transparent?
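What such ongoing checks look like will differ from team to team, but as a purely illustrative sketch (the metric, threshold, and function names below are hypothetical examples, not taken from any specific product or standard), an automated audit step could compare model outcomes across groups and flag cases for human review when the gap exceeds an agreed limit:

```python
# Illustrative sketch only: a minimal "fairness check" a monitoring job might run.
# The threshold, toy data, and group labels are made-up examples, not recommendations.

def positive_rate(predictions, group_mask):
    """Share of positive (e.g. 'approve') predictions within one group."""
    group = [p for p, in_group in zip(predictions, group_mask) if in_group]
    return sum(group) / len(group) if group else 0.0

def demographic_parity_gap(predictions, group_a_mask, group_b_mask):
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(positive_rate(predictions, group_a_mask)
               - positive_rate(predictions, group_b_mask))

if __name__ == "__main__":
    # Toy data: 1 = approved, 0 = declined; masks mark membership in two groups.
    preds   = [1, 0, 1, 1, 0, 1, 0, 0]
    group_a = [True, True, True, True, False, False, False, False]
    group_b = [not a for a in group_a]

    gap = demographic_parity_gap(preds, group_a, group_b)
    THRESHOLD = 0.2  # hypothetical limit agreed on by the stakeholders
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > THRESHOLD:
        print("Flag for human review: outcome gap exceeds the agreed threshold.")
```

The point is not the particular metric; it is that the check is explicit, repeatable, and visible to the humans who are accountable for the model.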

Models lack the human judgment to adapt to a changing world

We need to define what distinguishes good models from subpar ones. However, determining whether a model is “good” largely depends on an organization’s definitions, the model being built, and its impact on humans. On top of that, many organizations struggle to keep up with ensuring that all the paperwork, data processes, good development practices, impact assessments, and so on are actually completed.

For anybody to use such technology, they need to trust it. ML models must be developed responsibly: thoughtfully, carefully aligned with their intended use, and guided by principles of transparency, fairness, and so on. AI models lack the crucial element of human judgment; they can’t adapt to a changing world without a bit of help.

So, contrary to popular opinion, models are not always better than humans at prediction; where they fall short, it is better for humans to make the decisions. Models can still be useful for processes like categorizing data and filtering out unnecessary information. As Tim Leberecht rightly said, “As we move from Computers as our tools to Computers as our peers, we will embrace the fact that computers are more efficient at certain tasks than humans are, but never forget that human judgment trumps all.”

Creating fertile ground for human collaboration

Many organizations want to get this right and be confident that risks are mitigated across the organization; however, this task quickly becomes unmanageable or, worse, is left to only a few individuals to enforce. Furthermore, ML models can be hard to explain and work with, especially when ensuring compliance with ethical guidelines or regulations; they are hard to relate to and interpret. For example, what does “fair” mean? “Fair” to whom? What does risk entail for a data scientist who might not know what the final application of the model will be?

That is why teams across an organization must collaborate on what “good” or responsible ML models look like. However, that is not an easy task: project managers, data scientists, compliance managers, lawyers, developers, and designers lack a common language for collaboration. Without a shared understanding of how ML models are developed, deployed, and operated, things can get chaotic or be left to chance, resulting in “Shadow AI” or black-box models that only very few people know about and can alter.

To avoid this, organizations need services and tools that help multiple stakeholders work together towards a common goal: ensuring adherence to current policies and that risks are minimized and mitigated throughout the organization. That means tools that make it possible for the following (a rough sketch of what such a tool might track appears after the list):

  • data scientists and data engineers to limit data and model risks;
  • developers and IT to set up proper cybersecurity, stability, and scalability to mitigate risk;
  • policy managers to ensure that the right regulations are followed;
  • compliance managers to track and document compliance processes and decisions;
  • project managers to ensure that planning, security, and on/off-boarding processes are in place;
  • C-level executives to check that model KPIs are being reported on and tracked.
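
As a rough illustration (the record structure and field names below are hypothetical, not a description of any specific product), such tools could be built around a shared governance record that every stakeholder can read and update, with the remaining gaps surfaced for human follow-up:

```python
from dataclasses import dataclass, field
from typing import List

# Purely illustrative sketch: a shared "governance record" that different
# stakeholders could maintain together. All field names are hypothetical.

@dataclass
class ModelGovernanceRecord:
    model_name: str
    owner: str                                                     # accountable person or team
    data_sources: List[str] = field(default_factory=list)          # where the data came from
    identified_risks: List[str] = field(default_factory=list)      # data and model risks
    applicable_policies: List[str] = field(default_factory=list)   # regulations and internal policies
    compliance_signoffs: List[str] = field(default_factory=list)   # who approved what, and when
    monitored_kpis: List[str] = field(default_factory=list)        # metrics reported to executives

    def open_questions(self) -> List[str]:
        """List the sections that still need a human to weigh in."""
        gaps = []
        if not self.data_sources:
            gaps.append("data provenance not documented")
        if not self.identified_risks:
            gaps.append("no risk assessment recorded")
        if not self.compliance_signoffs:
            gaps.append("no compliance sign-off")
        if not self.monitored_kpis:
            gaps.append("no KPIs being tracked")
        return gaps

record = ModelGovernanceRecord(model_name="credit-limit-recommender", owner="risk-analytics-team")
print(record.open_questions())  # everything that still needs human attention
```
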

Responsible use of technology requires humans to join forces to guide the machines

To create a better world where humans can trust machines to do the right things for the right reasons, we need to develop auditable and transparent systems that align with regulatory guidelines and ethics statements. One essential part of building systems we can trust is making it easier for every human stakeholder to contribute their piece of the puzzle.

So, we are looking for additional partners, customers, communities, developers, data scientists, business people, regulators, and scholars to join forces with us to increase trust and responsibility throughout the entire AI ecosystem. We look forward to meeting you!

