Navigating the new frontier: Biden’s AI Executive Order and its impact on enterprises

By Yina Moe-Lange, Anik Bose

Welcome to our eighth episode of AI Watch!

In this episode, Yina Moe-Lange interviews guest speaker Anik Bose, Managing Partner at BGV and Founder of EAIGG. Together, Yina and Anik explore what the AI Executive Order comprises and how the new regulations will play out both in the US and abroad.

If you’re curious whether this AI Executive Order could impact you or your business, you’ll want to read more.

And if you’re not looking to commit to watching the full video — we’ve compiled a brief list of the episode’s main highlights:

  1. The new AI Executive Order was introduced by the Biden administration on October 30th.
  2. The focus of the Executive Order is on safe, secure, and trustworthy development and use of AI.
  3. The Order could impact businesses not only across the country but also around the world.
  4. While prioritizing safety and security, the Order is also promoting innovation and competition.
  5. The Executive Order will have long-term effects on AI development.

What does the Executive Order entail?

Yina: Hi, folks. Welcome to a new episode of AI Watch. My name is Yina Moe-Lange and I’m a Product Manager at 2021.AI. Today, in this episode, I will talk with our guest, Anik Bose, who is a Managing Partner at BGV, about the new AI Executive Order out of the Biden White House and its potential impact on enterprises.

The plan for me today is to give you a short, high-level overview of the Executive Order that just came out of the Biden White House. Then we’ll follow this with a discussion with Anik to see how this Executive Order might play out in the US and the rest of the world, and how it’ll affect enterprises that are using AI systems.

The Executive Order was introduced on October 30 by the Biden administration: the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The goal was to develop standards, tools, and tests that can help ensure AI systems are safe, secure, and trustworthy, which is something that we’ve talked a lot about at 2021.AI.

We are happy to see this expand to a higher level and become more of a mainstream topic within the AI community, especially in the US. The Executive Order focuses on eight high-level principles. Starting off, we have the new standards for AI safety and security. I will go a little further into this in a moment, since it is the principle with the most direct enterprise focus; principles two through eight will also have an impact on AI enterprises.

But a lot of this is government and federal agencies getting guidelines that they need to act on and start sorting out on their own. As you’ll notice, there’s a big theme running through all of the principles: privacy and security for end users of AI systems, ensuring that harm to those who could be impacted is minimized, and that we’re using AI securely both in the public space and in the private space. So developers of foundation models need to share their safety test results and other critical information with the US government, but with the caveat that this depends on the size of their models. None of the models that we’ve seen from OpenAI or the other LLM creators reach that size yet. But I think they will get close to these sizes and will have to start reporting to the government what they’re doing.

Now I want to switch over and welcome Anik to the conversation. I will let him introduce himself and give a little bit more detail on his background.

Anik: So glad to be here today discussing this important topic. Just to give you quick context, as Yina indicated, I’m a Managing Partner at BGV; we’re an early-stage VC firm. We’ve been investing in AI since 2017.

Long term effects of the Executive Order

Yina: I think we should start at a super high level. I want to hear from you what you think the long-term effects of the Executive Order will be on AI development, and especially on the use of AI systems across different sectors.

Anik: I think first and foremost, as you said earlier in your preamble, the Executive Order is really focused on the government. I believe there’s going to be increased adoption of AI in government, because the Order encourages different governmental agencies to embrace AI technologies for various purposes, all the way from improving public services to enhancing national security. There are some important second-order effects that I believe will also span into other sectors in the commercial space: the US government is the largest customer in the US economy, and the federal government’s own purchasing requirements often become industry procurement standards. The third aspect, I would say, is data sharing. The Order really promotes data sharing among government agencies to support AI research and development. As you rightly pointed out, there are a lot of issues around data privacy and how data gets shared. And fundamentally, to the extent that this adoption within the US government increases data access and interoperability, it will also surface best-practice innovation in this area that could be leveraged in other highly regulated sectors, whether that’s financial services or healthcare, as they develop and deploy AI systems.

How will the Executive Order play a role in use and development of AI?

Yina: So I want you to put on your ethical AI governance group hat. How do you see the Executive Order and the following discussions that we’ve seen in the community play a role in the continuous discussion around the ethical use and development of AI technologies?

Do you think people’s mindsets are changing, or is there still a challenge with really pushing the ethical use of AI, rather than an attitude of “we’re still innovating, we need to break things to make them work”?

Anik: Yeah, good question. At the highest level, this new Executive Order on the safe, secure, and trustworthy development and use of AI really demonstrates that the Biden administration is taking seriously its responsibility not only to foster a vibrant AI ecosystem, but also to harness and govern AI. I think that’s a very positive message to take away.

And they’re asking the government agencies to consider things like fairness, transparency, and accountability in their AI systems. And fundamentally, as you said earlier, this will influence how responsible AI gets deployed within the public sector. I believe there’ll be an acceleration of innovation in what I would call the ethical AI landscape. Today, the media is full of news of privacy breaches, algorithmic biases, and AI oversights. So public perception has shifted from a state of general obliviousness to a growing recognition that AI technologies, and the massive amounts of data that power them, pose some real risks, whether to privacy, accountability, fairness, or transparency. As I think about that, I look at system integrators who are selling to the public sector today, and I know that they’re increasingly partnering with startup innovators to bring ethical AI innovations into the sector. So I think it’s going to accelerate innovation in the ethical AI landscape as there’s more demand for it, whether it’s coming from the public sector or spillover from the enterprise sector.

How will the Executive Order work together with the EU AI Act?

Yina: How do you see the US Executive Order and the EU AI Act working together to set global standards for AI that raise the bar for privacy and data security?

Anik: So from a scope perspective, it’s important to understand that the EU AI Act is a comprehensive legislative framework for AI regulation, while Biden’s Order really focuses on government agencies and AI adoption within them. So it’s a smaller subset. The Executive Order brings scrutiny based on a threshold of compute resource intensity for AI models, while the EU AI Act is really focused on applications that have demonstrated impact or harm in society.

But I think these two things taken together are complementary, as opposed to saying the answer is A or the answer is B. Only time will tell how that evolves. On the other part of the equation, the regulatory approach: as you said earlier, the EU AI Act establishes very specific legal requirements for AI systems, including high-risk AI applications. In contrast, the Executive Order takes more of a voluntary, self-regulatory approach, emphasizing guidelines and best practices. So I think this is tied into the cultural differences between the US and the EU, and you have to find the right balance.

Will regulation stifle innovation?

Yina: There are always comments about how regulation will stifle innovation. How do you think people should think about the balance between regulation and still allowing for rapid innovation?

Anik: I’m a firm believer in what I call smart regulation. Smart regulation means it’s not a book of 8,000 pages, but it’s also not no regulation at all. So what do I mean by that? There are powerful examples where we can see that smart regulation can drive the right behavior, so I believe it has to be in place. I don’t think you can just toss it all out, let it be Adam Smith’s invisible hand, and hope that, bottom-up, the right things will happen. But by the same token, you can’t stifle innovation by saying that before any innovation can happen, you have to follow a 1,000-page rulebook, because then there’ll be no innovation happening at all.

So to me, it’s all about smart regulation and finding the right balance. And since AI today is largely self-governing, I think the role of smart regulation will be very important.

Yina: Cool, well, I think that wraps us up for today. I want to say many, many thanks, Anik, for joining us. Your insights were invaluable.

Anik: Great, thank you so much for having me. Thank you.

Yina: I hope you liked today’s episode. This was a snippet of a longer webinar on this topic, so make sure you check the link in the description if you want to see the entire webinar. Also, don’t forget to subscribe to our YouTube channel or our webpage if you don’t want to miss any of our episodes in the future. Thank you. See you in the next one.

Yina Moe-Lange


Yina is a Product Manager focused on AI Governance at 2021.AI and has extensive experience working with AI platforms and investing in early-stage startups. Yina is also the author of the newsletter, The Big Y, where she focuses on interesting and relevant AI topics.

Anik Bose


Anik is a Managing Partner at BGV, an early-stage VC firm investing in enterprise AI. Anik is the founder of EAIGG, a diverse community of AI practitioners focused on democratizing the growth of ethical AI governance through best-practice innovations around AI governance, data privacy, and AI security. He is responsible for leading BGV’s Customer Advisory Board and for implementing ESG within the firm.


AI Watch Video Newsletter

Get the latest know-how from those in the know. Sign up for our AI Watch Newsletter and receive the latest insights from AI experts.