🤖 My chat (+transcript) with tech policy analyst Adam Thierer on regulating AI

Released Thursday, 30th May 2024

While AI doomers proselytize their catastrophic message, many politicians are recognizing that the loss of America’s competitive edge poses a much more real threat than the supposed “existential risk” of AI. Today on Faster, Please!—The Podcast, I talk with Adam Thierer about the current state of the AI policy landscape and the accompanying fierce regulatory debate.

Thierer is a senior fellow at the R Street Institute, where he promotes greater freedom for innovation and entrepreneurship. Prior to R Street, he worked as a senior fellow at the Mercatus Center at George Mason University, president of the Progress and Freedom Foundation, and at the Adam Smith Institute, Heritage Foundation, and Cato Institute.

In This Episode

* A changing approach (1:09)

* The global AI race (7:26)

* The political economy of AI (10:24)

* Regulatory risk (16:10)

* AI policy under Trump (22:29)

Below is a lightly edited transcript of our conversation.

A changing approach (1:09)

Pethokoukis: Let's start out with just trying to figure out the state of play when it comes to AI regulation. I remember we had people calling for the AI Pause, and then we had a Biden executive order. Europe is passing an act on AI, and now, recently, a Senate working group on AI put out a list of guidelines or recommendations. Given where we started, which was “shut it down,” to where we're at now, has that path been what you might've expected, given that we started at full panic?

Thierer: No, I think we've moved into a better place. Let's look back just one year ago this week: In the Senate Judiciary Committee, there was a hearing where Sam Altman of OpenAI testified along with Gary Marcus, who's a well-known AI worrywart, and the lawmakers were falling all over themselves to praise Sam and Gary for basically calling for a variety of really extreme forms of AI regulation and controls, including not just national but international regulatory bodies, new general-purpose licensing systems for AI, a variety of different types of liability schemes, transparency mandates, disclosure via so-called “AI nutritional labels”; I could go on down the list of all the types of regulations that were being proposed that day. And of course this followed, as you said, Jim, a call for an AI Pause, without any details about exactly how that would work, but it got a lot of signatories, including people like Elon Musk, which is very strange considering he was at the same time deploying one of the biggest AI systems in history. But enough about Elon.

The bottom line is that those were dark days, and I think the tenor of the debate and the proposals on the table today, one year after that hearing, have improved significantly. That's the good news. The bad news is that there are still a lot of problematic regulatory proposals percolating throughout the United States. As of this morning, as we're taping the show, we are looking at 738 different AI bills pending in the United States according to multistate.ai, an AI tracking service. One hundred and—I think—eleven of those are federal bills. The vast majority are state bills. But that count does not include all of the municipal regulatory proposals that are pending for AI systems, including some that have already passed; New York City, for example, already has a very important AI regulation governing algorithmic hiring practices. So the bottom line, Jim, is it's the best of times, it's the worst of times. Things have both gotten better and worse.

Well, just because it's the most recent thing that happened: I know with this Senate working group, they were having all kinds of technologists and economists come in and testify. So that report, is it really calling for anything specific to happen? What's in there other than just kicking it back to all the committees? If you just read that report, what does it want to happen?

A crucial thing about this report, and let's be clear what this is: it was an important report because Senate Majority Leader Chuck Schumer was in charge of it, along with a bipartisan group of other major senators. This started with the idea of so-called “AI insight forums” last year, and it seemed to be pulling some authority away from committees and taking it to the highest levels of the Senate to say, “Hey, we're going to dictate AI policy and we're really scared.” And so that did not look good. I think in the process, just politically speaking—

That, in itself, is a good example. That really represents the level of concern that was going around, that we need to do something different and special to address this existential risk.

And this was the leader of the Senate doing it and taking away power, in theory, from his committee members—which did not go over well with said committee members, I should add. And so a whole bunch of hearings took place, but they were not really formal hearings, they were just these AI insight forum working groups where a lot of people sat around and said the same things they always say on a daily basis about the positives and negatives of AI. And the bottom line is, just last week, a report came out from this bipartisan Senate AI working group that was important because, again, it did not adopt the recommendations that were on the table a year ago when the process got started last June. It did not have overarching general-purpose licensing of artificial intelligence, no new call for a brand new Federal Computer Commission for America, no sweeping calls for liability schemes like some senators want, or other sorts of mandates.

Instead, it recommended a variety of more generic policy reforms and then kicked a lot of the authority back to those committee members to say, “You fill out the details, for better or worse.” And it also included a lot of spending. One thing that seemingly everybody agrees on in this debate is that the government should spend a lot more money, and so another $30 billion of sort of high-tech pork for AI-related stuff was on the table. But it really did signal a pretty important shift in approach, enough that it agitated the groups on the more pro-regulatory side of this debate, who said, “Oh, this isn't enough! We were expecting Schumer to go for broke and swing for the fences with really aggressive regulation, and he's really let us down!” To which I can only say, “Well, thank God he did,” because we're in a better place right now because we're taking a more wait-and-see approach on at least some of these issues.

A big, big part of the change in this narrative is an acknowledgement of what I like to call the realpolitik of AI policy and specifically the realpolitik of geopolitics.

The global AI race (7:26)

I'm going to ask you in a minute what stuff in those recommendations worries you, but before I do, what happened? How did we get from where we were a year ago to where we've landed today?

A big, big part of the change in this narrative is an acknowledgement of what I like to call the realpolitik of AI policy and specifically the realpolitik of geopolitics. We face major adversaries, specifically China, which has said in documents the CCP [Chinese Communist Party] has published that it wants to be the global leader in algorithmic and computational technologies by 2030, and it's spending a lot of money and putting a lot of state resources into it. Now, I don't necessarily believe that means they're going to automatically win, of course, but they're taking it seriously. But it's not just China. We have seen in the past year massive state investments and important innovations take place across the globe.

I'm always reminding people who talk a big game about America's foundational models and large-scale systems, including things like Meta’s Llama, which was the biggest open source system in the world a year ago: two months after Meta launched Llama, their open source platform, the government of the UAE came out with Falcon 180B, an open source AI model that was two-and-a-half times larger than Facebook's model. That meant America's AI supremacy in open source foundational models lasted for two months. And that's not China, that's the government of the UAE, which has piled massive resources into being a global leader in computation. Meanwhile, China's launched their biggest super—I'm sorry, Russia's launched their biggest supercomputer system ever; you've got Europe applying a lot of resources to it, and so on and so forth. A lot of folks in the Senate have come to realize that this problem is real: if we shoot ourselves in the foot as a nation and hobble our technology base, these countries could race ahead and gain competitive and geopolitical strategic advantages over the United States. I think that's the first fundamental thing that's changed.

I think the other thing that changed, Jim, is just a little bit of existential-risk exhaustion. The rhetoric in this debate, as you've written about eloquently in your columns, has just been crazy. I mean, I've never really seen anything like it in all the years we've been covering technology and economic policy. As you and I have both written, this is really an unprecedented level of hysteria. And I think, at some point, the Chicken-Littleism just got to be too much, and some saner minds prevailed and said, “Okay, well wait a minute. We don't really need to pause the entire history of computation to address these hypothetical worst-case scenarios. Maybe there's a better plan than that.” And so we're starting to pull back from the abyss, if you will, a little bit, and the adults are reentering the conversation—a little bit, at least. There were other things, but I think those are the two big ones that really changed.

The political economy of AI (10:24)

To what extent do you think we saw the retreat from the more apocalyptic thinking—how much of that was due to what businesses were saying, venture capitalists, maybe other tech voices . . . ? What do you think were the key voices Congress started listening to a little bit more?

That's a great question. The political economy of AI policy and tech policy is something that is terrifically interesting to me. There are so many players and voices involved in AI policy because AI is the most important general-purpose technology of our time, and as a widespread broad base—

(Let me cut you off.) Do you have any doubt about that?

I don't. I think it's unambiguous, and we live in a world of “combinatorial innovation,” as Hal Varian calls it, where technologies build on top of one another, but the thing is they all lead to greater computational capacity, and therefore, algorithmic and machine learning systems come out of those—if we allow it. And the state of data science in this country has gotten to the point where it's so sophisticated, because of our rich base of diverse types of digital and computational technologies, that finally we're going to break out of the endless cycle of AI booms and busts, and springs and winters, and we're going to have a summer. I think we're having it right now. And so that is going to come to affect every single segment and sector of our economy, including the government itself.

I think industry has been very, very scrambled and sort of atomistic in their approach to AI policy, and some of them have been downright opportunistic, trying to throw their competitors under the bus.

Now let me let you return to the political economy question I was asking you about: what were the voices? Sorry, but I wanted to get that in there.

Well, I think there are so many voices, I can't name them all today, obviously, but we can start with one that's a quiet voice behind the scenes, and a huge one: the national security community. Going back to our point about China and geopolitical security, I think a lot of people behind the scenes who care about these issues, including people in the Pentagon, had conversations with certain members of Congress and said, “You know what? China exists. And if we're shooting ourselves in the foot as we begin this race for geopolitical strategic supremacy in an important new general-purpose technology arena, we're really hurting our underlying security as a nation.” I think that thinking is there. So that's an important voice.

Secondly, I think industry has been very, very scrambled and sort of atomistic in their approach to AI policy, and some of them have been downright opportunistic, trying to throw their competitors under the bus, unfortunately, and that includes OpenAI trying to screw over other companies and technologies, which is dangerous. But the bottom line is: More and more of them are coming to realize, as they saw the actual details of regulation and thought through the compliance costs, that “Hell no, we won't go, we're not going to do that. We need a better approach.” It was always easier in the old days to respond to the existential-risk talk with, “Oh yeah, sure, regulation is fine, we'll go along with it!” But then when you see the devilish details, you think twice and you realize, “This will completely undermine our competitive advantage in this space as a company, or our investment, or whatever else.” All you need to do is look at Exhibit A, which is Europe: if worst-case-scenario thinking and Chicken-Littleism is the basis of your technology policy, guess what? People respond to incentives and they flee.

Hatred of big tech is like the one great bipartisan, unifying theme of this Congress, if anything. But at the end of the day, I think everyone is thankful that those companies are headquartered in the United States and not Beijing, Brussels, or anywhere else.

It’s interesting, the national security aspect. My little amateurish thought experiment would be: what would have been our reaction, and what would have been the reaction in Washington, if in November 2022, instead of an American company with a big investment from another American company rolling out ChatGPT, it had been Tencent, or Alibaba, or some other Chinese company that rolled out something that's obviously a leap forward? Even if they had said, “Oh, we're two or three years ahead of America,” it would've been bigger than Sputnik, I think.

People are probably tired of hearing about AI—hopefully not, I hope they'll also listen to this podcast—but that would be all we would be talking about. We wouldn't be talking about job loss, and we wouldn't be talking about ‘The Terminator,’ we'd be talking in purely geopolitical terms: the US has suffered a massive, massive defeat here, and who's to blame? What are we going to do? And anybody at that moment who would've said, “We need to launch cruise missile strikes on our own data centers” out of fear. . . I mean! And I think you're right, the national security component is extremely important here.

In fact, I stole your little line about the “Sputnik moment,” Jim, when I testified in front of the House Oversight Committee last month. I said, “Look, it would've been a true ‘Sputnik moment,’ and instead it's those other countries that are left having the Sputnik moment, right? They're wondering, ‘How is it that, once again, the United States has gotten out ahead on digital and computational-based technologies?’” But thank God we did! And as I pointed out in the committee room that day, there are a lot of people in Congress today who have problems with technology companies. Hatred of big tech is like the one great bipartisan, unifying theme of this Congress, if anything. But at the end of the day, I think everyone is thankful that those companies are headquartered in the United States and not Beijing, Brussels, or anywhere else. That's just a unifying theme. Everybody in the committee room that day nodded their head: “Yes, yes, absolutely. We still hate them, but we're thankful that they're here.” And that then extends to AI: Can the next generation of companies that they want to bring to Congress to bash, and to pull money from for their elections, once again exist in the United States?

Regulatory risk (16:10)

So whether it's that working group report, or what else you see in Congress, what are a couple, three areas where you're concerned, where there still seems to be some sort of regulatory momentum?

Let’s divide it into a couple of chunks here. First of all, at the federal level, Congress is so damn dysfunctional that I'm not too worried that, even if they have bad ideas, they're going to pursue them, because they're just such a mess. They can't get basic things done: baseline privacy legislation, driverless car legislation, or even, hell, the budget and the border! They can't get basics done!

I think it's a big positive that, while they're engaging in dysfunction, the technology is evolving. And I hope, if it's as important as you and I think, more money will be invested, we'll see more use cases, and the downsides of screwing up the regulation, I think, will be more obvious. I think that's a tailwind for this technology.

We're in violent agreement on that, Jim, and of course this goes by the name of “the pacing problem,” the idea that technology is outpacing law in many ways, and one man's pacing problem is another man's pacing benefit, in my opinion. There's a chance for technology to prove itself a little bit. That being said, we don't live in a legislative or regulatory vacuum. We already have in the United States 439 government agencies and sub-agencies, and 2.2 million employees just at the federal level. Many agencies are active right now trying to get their paws on artificial intelligence, and some of them already have. Look at the FDA [Food and Drug Administration], the FAA [Federal Aviation Administration], NHTSA [National Highway Traffic Safety Administration]; I could go all through the alphabet soup of regulatory agencies that are already regulating, or overregulating, AI right now.

Then you have the Biden administration, which has gone out and done a lot of cheerleading in favor of more aggressive unilateral regulation, regardless of what Congress says, and basically says, “To hell with all that stuff about Chevron Doctrine and major questions, we're just going to go do it! We're at least going to jawbone a lot and try to threaten regulation, and we're going to do it in the name of ‘algorithmic fairness,’” which is what their 100-plus-page executive order and their AI Bill of Rights say they're all about, as opposed to talking about AI opportunity and benefits—it's all misery. It's like, “Look at how AI is just a massive tool of discrimination and bias, and we have to do something about it preemptively through a precautionary-principle approach.” So if Congress isn't going to act, unfortunately, the Biden administration already is, and nobody's stopping them.

But that's not even the biggest problem. The biggest problem, going back to the point that there are 730-plus bills pending in the US right now, is that the vast majority of them are state and local. Just last Friday, Governor Jared Polis of Colorado signed into law the first major AI regulatory measure, there's a bigger and badder bill pending right now in California, and there are 80 different bills pending in New York alone, and half of them would be a disaster.

I could go on down the list of troubling state patchwork problems that are going to develop for AI and ML [machine learning] systems, but the bottom line is this: This would be a complete and utter reversal of the winning formula that Congress and the Clinton administration gave us in the 1990s, which was a national—a global framework for global electronic commerce. It very intentionally said, “We're going to break with the Analog Era disaster, we're going to have a national framework that's pro-freedom to innovate, and we're going to make sure that these meddlesome barriers do not develop to online speech and commerce.” And yet, here with AI, we are witnessing a reversal of that. States are in the lead, and again, like I said, localities too, and Congress is sitting there, the dysfunctional soup that it is, saying, “Oh, maybe we should do something to spend a little bit more money to promote AI.” Well, we can spend all the money we want, but we can end up like Europe, which spends tons of money on techno-industrial policies and gets nothing for it, because they can't get their innovation culture right, because they’re regulating the living hell out of digital technology.

So you want Congress to take this away from the states?

I do. I do, but it's really, really hard. I think what we need to do is follow the model that we had in the Telecommunications Act of 1996 and the Internet Tax Freedom Act of 1998. We've also had moratoriums, not only through the Internet Tax Freedom Act, but through the Commercial Space Amendments having to do with commercial space travel, and other bills. Congress has handled the question of preemption before and put moratoria in place to say, “Let's have a learning period before we go do stupid things on a new technology sector that is fast moving and hard to understand.” I think that would be a reasonable response, but again, I have to go back to what we just talked about, Jim, which is that there's probably no chance of us getting it. There's no appetite for it. Not one of the 111 bills pending in Congress right now says a damn thing about state and local regulation of technology!

Is the thrust of those federal bills the kind of stuff that you're generally worried about?

Mostly, but not entirely. Some of it is narrower. A lot of these bills are like, “Let's take a look at AI and. . . fill in the blank: elections, AI and jobs, AI and whatever.” And some of them, on the merits, are not terrible; others I have concerns about. But it's certainly better that we take a targeted, sectoral approach to AI policy and regulation than have the broad-based, general-purpose stuff. Now, there are broad-based, general-purpose measures, and here's what they do, Jim: They basically say, “Look, instead of having a whole-cloth new regulatory approach, let's build on the existing types of approaches being utilized in the Department of Commerce, namely through the NIST [National Institute of Standards and Technology] and NTIA [National Telecommunications and Information Administration] sub-agencies there.” NIST is the national standards body, and basically it develops best practices for artificial intelligence development through something called the AI Risk Management Framework—and they're good! It's multi-stakeholder, it's bottom up, it's driven by the same principles that motivated the Clinton administration to do multi-stakeholder processes for the internet. Good model. It is non-regulatory, however. It is a consensus-based, multi-stakeholder, voluntary approach to developing consensus-based standards for best practices regarding various types of algorithmic services. These bills in Congress—and there are at least five of them that I count, that I've written about recently—say, “Let's take that existing infrastructure and give it some enforcement teeth. Let's basically say, ‘This policy infrastructure will be converted into a quasi-regulatory system.’” And there begins the dangerous path toward backdoor regulation of artificial intelligence in this country, and I think that's the most likely model we'll get. Like I said, there are five legislative models in the Senate alone that would do that to varying degrees.

AI policy under Trump (22:29)

Do you have any feel for what a Trump administration would want to do on this?

I do, because a month before the Trump administration left office, it issued a report through the Office of Management and Budget (OMB) that basically laid out for agencies a set of principles for how they should evaluate artificial intelligence systems, both those used by the government and those they regulate in the private sector, and it was an excellent set of principles. It was a restatement of the importance of policy forbearance and humility. It was a restatement of a belief in cost-benefit analysis and in identifying not only existing regulatory capacity to address these problems, but also non-regulatory mechanisms, best practices, or standards that could address some of these things. It was a really good memo. I praised it in a piece that I wrote just before the Trump administration left. Now, of course, the Trump administration may change.

Yes, and also, the technology has changed. I mean, that was 2020 and a lot has happened, and I don't know where. . . . I'm not sure where all the Republicans are. I think some people get it. . .

I think the problem, Jim, is that the Republican Party, and Trumpian conservatives in particular, face a time of choosing. What I mean by this is that they have spent the last four to six years—and Trump egged this on—engaging in nonstop quote-unquote “big tech bashing” and making technology companies and the media out to be, as Trump calls them, “the enemy of the American people.” So many hearings now are just parading tech executives and others up there to be beaten with a stick in front of the public; this is the new thing. And then there's just a flood of bills that would regulate traditional digital technologies, repeal things like Section 230, which is liability protection for the tech sector, and so on, plus child safety regulations.

Meanwhile, that same Republican Party and Mr. Trump go around hating on Joe Biden and China. If there's one thing they can't stand more than big tech, it's Joe and China! And so, in a sense, they've got to choose, because their own policy proposals on technology could essentially kneecap America's technology base in a way that would open the door either to what they fear in the “woke DEI policies” of Biden or to the CCP’s preferred policy agenda for controlling computation in the world today. Choose two, you don't get all three. And I think this is going to be an interesting thing to watch if Mr. Trump comes back into office: do they pick up where that OMB memo left off, or do they go right back to “We’ve got to kill big tech by any means necessary in a seek-and-destroy mission, to hell with the consequences”? I don't know yet.

