Executive Insights, December 2022
Ten Questions with AI pioneer Danny Lange – industry insights from Silicon Valley
Danny Lange
SVP of AI, UNITY TECHNOLOGIES
Having worked at tech giants including Microsoft, Amazon Web Services, and Uber, Danny Lange is currently the Senior Vice President of Artificial Intelligence and Machine Learning at Unity Technologies. On a visit to Copenhagen from San Francisco, Lange sat down to chat with us about the pace of tech innovation, the evolving state of the industry, and the potential of generative AI models.
We live in exciting times for AI, so let’s get down to it: What do you believe are the biggest challenges for tech leaders today?
I think the biggest challenge is the rapid pace of innovation. This week’s hot topic is ChatGPT and the idea of generative AI, which few people expected. We’re seeing it not just in text but also in graphics: generative tools that can create fantastic visual art from simple prompts.
As tech leaders, we can get overwhelmed by these new developments. But what is real? What is hype? What will really impact me and my organization? That is the big question.
What surprises you the most about working with technology?
I’m always amazed by what we can accomplish with data-driven technology. Consider Amazon. Amazon has hundreds of millions of customers, but when you visit the website you get a personalized experience. That is what Machine Learning and AI enable, and it works.
Is that one of the characteristics of a tech leader in Silicon Valley compared to what you see elsewhere?
What we’ve seen in Silicon Valley is that expectations are very high. Investors want their money back. It’s extremely competitive, so tech leaders use these technologies to get an edge and to give a better customer experience than the competition.
That forces these leaders to stay right at the technological forefront, but not too far ahead, because the technology may not work for them yet. And they can’t be too slow, because then someone else will do better.
That’s the biggest challenge right now: to be at the forefront but not by too much.
From our side, it’s the same when we talk about AI Governance and compliance. Real-world challenges inform how we use data, AI, or Machine Learning across enterprises. Right?
Yeah. Uber is one example. We were trying to move fast, and we ran into well-known issues around privacy and data use. I think we have learned from that. At Unity Technologies, we’ve made it an important principle to be good stewards of our customers’ data.
Let’s elaborate on your own experiences. What else can you share in terms of advice?
The industry focus has often been on the data science component. Everyone wants to use the latest algorithms to build superior models, but even the best models need management. So we’ve realized that model management, what we call MLOps, is critical.
Now we are moving further down the stack. If the data is contaminated or has privacy issues, it can cause a lot of trouble. So there is an increasing focus on basic data cleansing, monitoring data, and detecting changes in data. We look at data governance and regulations before we train or execute models.
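Detecting changes in data can start very simply. As a minimal illustrative sketch (not Unity’s actual tooling), the check below flags a batch of feature values whose mean has drifted away from a baseline; real monitoring systems use richer statistical tests, but the idea is the same.

```python
import statistics

def mean_shift_drift(baseline, current, max_shift_in_stdevs=0.5):
    """Return True when the current batch's mean has moved away from
    the baseline mean by more than the given number of baseline
    standard deviations -- a crude but common drift signal."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    shift = abs(statistics.mean(current) - mu)
    return shift > max_shift_in_stdevs * sigma

# Hypothetical feature values from two batches of incoming data.
stable = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0]
shifted = [12.5, 12.7, 12.4, 12.6, 12.5, 12.8]

print(mean_shift_drift(stable, stable))   # False: no drift
print(mean_shift_drift(stable, shifted))  # True: drift detected
```

A check like this would run continuously on production data, triggering data cleansing or model retraining before a drifting input silently degrades predictions.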
We’ve always focused on the data science aspect, but now we realize it’s the operational part that matters.
A few years ago, the common thesis was that the model with the most data will win, right? That’s not where we are anymore. Now it’s about having good data, reliable data —
— and data that does not get us in trouble. In Unity’s case, you’re looking at managing billions of users and thousands of games, and you have to do that safely and securely. So now we’re back to handling data in a proper way.
How about the generalization of Machine Learning? It is difficult to generalize a Machine Learning model on an end-user product. We’re seeing cases in terms of bias and how bias might affect hiring processes, for instance.
Bias is a pervasive problem in Machine Learning. It goes back to the data. We think of data as representing truth, right? But the way you collect data can lead to bias, so we have to scrutinize the data before we train our models.
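Scrutinizing data before training can begin with something as basic as checking how groups are represented in the training set. The sketch below is a hypothetical first-pass check (the field name `gender` and the sample records are invented for illustration):

```python
from collections import Counter

def representation_report(records, attribute):
    """Share of each group for a given attribute in the training data --
    a first-pass signal of collection bias before any model is trained."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical hiring records: two-thirds of the data comes from one group,
# so a model trained on it may inherit that skew.
applicants = [{"gender": "f"}, {"gender": "m"}, {"gender": "m"},
              {"gender": "m"}, {"gender": "f"}, {"gender": "m"}]
print(representation_report(applicants, "gender"))
```

A skewed report like this does not prove a model will be biased, but it tells the team where to look before training rather than after deployment.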
You mentioned hiring. One efficient method of dealing with bias is to employ a diverse team of data engineers and Machine Learning engineers, who will look at the data and applications from many viewpoints. Sometimes they point out issues I would not have seen.
Regarding generalization — I don’t think of Machine Learning models as generalized. We aim to build models that serve specific purposes. We look at the application we’re trying to build, collect the necessary data, and build models for that specific usage. But in general, I would prefer to use a Machine Learning platform so I can efficiently build and manage specialized models, rather than try to build generalized models.
For organizations interested in the Machine Learning and AI track, what’s the best way to get started? Where should the innovation come from?
I recommend that companies look at Machine Learning and AI broadly. Avoid picking tiny, marginal applications for AI.
Instead, look at the company’s business model and see if there’s a data-driven aspect. Almost every business is trying to optimize something: the manufacturing floor, the machinery, the warehouse, the trucking or shipping operation. In those cases, look for the data, then consider how to optimize the systems that run your business.
If you apply that mindset — look for the data, gather the data, and use data to optimize the business — I guarantee you that almost every single business model can be improved.
Can generative models solve the lack of datasets, such as datasets related to people’s lives?
Yes. We call that synthetic data, and it’s a powerful way of giving access to data without violating privacy.
The way generative AI works is that you train a model on existing data and then ask it to generate new data based on what it learned from the training data. For instance, take a dataset of COVID health records containing individuals’ personal data: we can train a model on it and then discard the original data. We don’t want to share that data with anyone.
I can now ask that model to generate COVID health records, and those synthetic records cannot be traced back to any individual, yet the statistical properties of the synthetic dataset closely match those of the original. So synthetic datasets are ideal for research and can be shared without violating privacy.
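The train-then-discard idea can be sketched in a few lines. This toy example fits only independent per-field distributions (real systems use trained deep generative models and formal privacy guarantees such as differential privacy, which this sketch does not provide); the record fields `age` and `days_ill` are invented for illustration.

```python
import random
import statistics

def fit_and_discard(records):
    """Learn only aggregate statistics (mean, stdev) per numeric field;
    the caller can then discard the original records."""
    fields = records[0].keys()
    return {f: (statistics.mean([r[f] for r in records]),
                statistics.stdev([r[f] for r in records]))
            for f in fields}

def generate_synthetic(model, n, seed=0):
    """Sample fresh records from the learned distributions -- no
    synthetic record corresponds to any real individual."""
    rng = random.Random(seed)
    return [{f: rng.gauss(mu, sigma) for f, (mu, sigma) in model.items()}
            for _ in range(n)]

# Hypothetical sensitive records.
real = [{"age": 34, "days_ill": 9}, {"age": 51, "days_ill": 14},
        {"age": 42, "days_ill": 11}, {"age": 29, "days_ill": 7}]
model = fit_and_discard(real)
del real  # the original data is no longer needed or shared

synthetic = generate_synthetic(model, n=100)
```

The point of the pattern is that only the fitted model crosses the privacy boundary: the synthetic records it emits preserve aggregate statistics while containing no real individual’s data.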
Last question. Danny, what is your perspective on AI/ML SaaS solutions to solve common cross-industry problems?
The challenge with those services is that they are generic tools, so you must know what you’re doing and build around them. These tools make it easy to build and deploy models, but you also have to monitor and retrain models. And you need to govern your data.
Most of these services are too generic to be game-changing for the enterprise. That’s why companies like 2021.AI provide platforms that aggregate functionality by focusing on data governance, model management, Machine Learning operations, and so on.
I do recommend that companies take that step, because working with the backbone services requires much more work. And I don’t recommend that companies build their own foundational technology, because then you’ll spend 90% of your time on basic software engineering instead of tending to your business.
For the full interview, please access the video here.