Tech Panic, Then and Now: Judge Glock on AI, Regulation, and Real Harms

Is tech panic new—or just history on repeat? Judge Glock (Manhattan Institute) walks through what past tech scares (leaded gasoline, CFCs, TV) got right and wrong, why “externalities” matter more than vibes, and how to think about AI regulation today—transparency mandates, liability vs. preclearance, “AI pauses,” and realistic optimism. We end with his own journey from socialism to markets.
Want to explore more?
- Katherine Mangu-Ward on AI: Reality, Concerns, and Optimism, a Great Antidote podcast.
- The Past and Future of AI (with Dwarkesh Patel), an EconTalk podcast.
- Tyler Cowen on the Risks and Impact of Artificial Intelligence, an EconTalk podcast.
- Walter Donway, Neoliberalism on Trial: Artificial Intelligence and Existential Risk, at Econlib.
- Read more of Glock's work for the Manhattan Institute.
Read the transcript.
Juliette Sellgren
Science is the great antidote to the poison of enthusiasm and superstition. Hi, I'm Juliette Sellgren, and this is my podcast, the Great Antidote, named for Adam Smith and brought to you by Liberty Fund. To learn more, visit www.AdamSmithWorks.org. Welcome back. Today, on October 3rd, 2025, I'm excited to talk about something super relevant: AI. We're not just going to be talking about that, though. We're also going to be talking about the history of technology regulation, whether or not it goes well, and how optimistic we should be. So we're backing up modern speculation with the historical record, I think you could say. Today I'm excited to be joined by Judge Glock. He is the director of research at the Manhattan Institute, and he has written a lot about regulation, and specifically tech regulation. Welcome to the podcast.
Judge Glock
Oh, thank you so much for having me, Juliette.
Juliette Sellgren
So first question, what is the most important thing that people my age or in my generation should know that we don't? And also Gen Alpha, they're kind of relevant now.
Judge Glock
About life, about the world as it is or as it should be, and all of the mysteries and wonders the universe or about tech regulation?
Juliette Sellgren
Whichever is most important.
Judge Glock (1:35)
I mean, I guess it would be the former, but I know very little about that, so I'll focus on the latter. I think it's important to understand that there has been a long history of what we can comfortably call panics about new technologies. Some of those panics have been borne out. We can talk about specific cases such as the ozone layer: there was a real problem caused by a very particular type of chemical that we needed to phase out. But as you already mentioned, on the whole, most technology has been created by humans for the benefit of humans and has worked very well for humans. It's the reason someone is presumably listening to or watching this right now, as opposed to me shouting across an amphitheater or something equivalent. Understanding that long history of concerns and panics about new technology, and how we've regulated it, is important for understanding how we're going to look at the regulation of AI, and tech more broadly, over the next 10 or 20 years. And maybe to summarize those kind of vague statements: it's very hard to know what the impact of a technology is going to be when it starts out. That's something we've seen time and time again in the history of technology.
Juliette Sellgren (3:04)
So, referring to a chemical as technology: as someone who studies economics, I totally see where you're coming from, but I feel like for those not trained in economics, maybe an eyebrow or two would be raised. That's not technology! So when you say technology, inclusive of chemicals, what do you mean? What does that include? I feel like it will help situate the conversation if we start with: what even is tech?
Judge Glock (3:41)
Yeah, well, it could potentially encompass everything humans have ever done, from the shaped stone arrowhead found in the wastes of the Middle East all the way to AI and podcasts and all the rest of it. Now, chemicals are an interesting case, because on one level we're very concerned about entirely naturally occurring chemicals, say arsenic, which has been out in the world for quite a while. But the ability to manufacture it takes some technology; squeezing those almonds, I still think, is how you get a fair amount of the arsenic. People have been doing that for quite some time to poison each other, and we've also been concerned about the extent of arsenic in food. We would have to figure out technology to measure that. So something that's even naturally occurring, such as arsenic, requires technology to harvest and produce it, and requires technology to measure and regulate it.
In the 19th and early 20th centuries, the biggest concerns around new technology were around new food manufacturing processes and what they were doing to human health. Some of that, we now know, was entirely legitimate. They were using, say, lead solder to seal the tops of tin cans of food, and even that little bit of soldering holding the tin together could get into people's food and poison their blood. That is a very real concern. That lead soldering was also used, of course, in a lot of the water pipes going into people's homes that we're still dealing with. Lead, of course, is a naturally occurring product out there. But again, we used technology to understand what the potential effects could be and, importantly, to measure them: you had to have somebody who knew how to look for lead in tiny parts per million in food and in America's water supply.
Well, I'll just wrap up by mentioning that the issue with the ozone layer was a relatively simple, minimally produced chemical called a chlorofluorocarbon, which was not naturally occurring. It was produced synthetically by man, and some of it was escaping out into the atmosphere. It was not something anyone had their eye on, but it turned out these chlorofluorocarbons, CFCs, interacted with the ozone layer and broke down the ozone, and that allowed more ultraviolet rays to reach the earth, which could potentially blind people, cause increased skin cancer, all sorts of negative things. We got together internationally to ban this relatively simple product, and it worked very well. The ban did, at least, and we don't have an ozone layer concern the way we did in the 1980s. So there's a lot of technology that gets involved in every aspect of this, and sometimes things that seem simple on their face can be very high tech when you actually look into the background. But maybe we just want to talk about how, when people say tech today, when they use tech without the full multisyllabic noun, they're often talking about computers and bits. That actually has a lot of overlap with the history of other technology regulation, which I hope we'll talk about. But it is just one aspect of technology.
Juliette Sellgren
So then I guess, yeah, I guess it's things we use to make things…
Judge Glock (7:41)
Technology can be an algorithm, with an algorithm expansively defined to include something like: you take a harder rock and chip away at a piece of flint until it's the shape of a triangle so you can stab a mammoth. That is a sort of algorithm someone has in their brain. It's nothing manifestly physical out there, but the technology is: how do we use the material reality of the world around us to accomplish our ends? Maybe I'll stick with that definition. I like it, now that I say it.
Juliette Sellgren
Yeah, I like that too. I used a drain cleaner earlier today to unclog my bathtub and…
Judge Glock
A wonderful technology!
Juliette Sellgren (8:24)
Except let me tell you a horror story. Last year my roommates and I did that, and it didn't work. So we called our landlord, who called some guy who comes with a drill and drills into the pipe, and it gets unclogged. We're like, wonderful. The next day we hear this dripping on the first floor in my roommate's closet, and then all of a sudden the waterlogged ceiling in her closet falls down, along with gallons of water from the tub and everything else. So it can go really wrong. But also, the house was probably about 200 years old, and we used a boatload of drain cleaner and did not tell the landlord, which is probably part of the problem. And then this drill just went and dealt the final blow to those pipes, and every single piece of clothing in that closet got ruined. Rest in peace.
Judge Glock
Yeah, RIP clothing.
Juliette Sellgren (9:36)
But most of the time that doesn't happen. I was really worried that my neighbor was going to come upstairs and be like, you ruined my... but this is a tangent. Even stuff you don't really think about that often, like drain cleaner, is technology and can have a big effect, in the same way that if your computer breaks, all of a sudden you're aware of your decreased quality of life. So I like that more expansive definition, because I feel like it drives home the point that when technology goes wrong, we care a lot.
Judge Glock (10:16)
And it also drives home the point, I hope, that the modern world we exist in is composed of, frankly, an infinitude of technologies, of which all of us, even in our most expansive and knowledgeable state, can grasp only a tiny, tiny percent. It requires a lot of technologically adept individuals out there working in tiny, narrow fields to make sure all of us are comfortable and fed and cooled and heated and all the rest of it. And a lot of it is much newer than one would imagine. Just a hundred years ago, most of us were living in houses not dissimilar from the houses that would have been around 2,000 years ago. In the past hundred or so years, we've created most of our modern amenities through the application of technology. Obviously it's gone very well, but occasionally it's gone bad, and occasionally we've tried new technologies that haven't worked out so well. Sometimes they've been regulated away; sometimes the companies and others using them just realize it's not a good tool and we move on. There's a place for each, obviously. I tend a little more to the non-regulatory side, but we have to understand that historically there have been times and places where the government has needed to step in and provide some sort of regulation.
Juliette Sellgren (11:44)
Obviously, hindsight is 20/20, but are there things that have been identified as potential predictors, things that are highly correlated with a technology being successful or instead becoming detrimental? Have we gotten better at thinking about whether something will help or not before we use it?
Judge Glock (12:12)
I wouldn't say there's anything we can know for certain ahead of time. The real issue with technology, when the government or someone else needs to step in, is what economists call externalities: when the use of a technology by one person impacts a lot of other people. That first person, using the technology in a non-regulated state, in just a kind of state of nature, will use too much of that technology and will have impacts on other people that are important and detrimental, and someone needs to step in. We can get to this later on the AI issue, but AI at first blush doesn't seem to have a ton of externalities. The user, by using AI, isn't necessarily impacting a lot of other people. If you were using chlorofluorocarbons in your refrigerator, which is where this, again, relatively minor chemical was mainly used, you were having a massive effect that could, say, be blinding cows in Argentina, and you were blissfully unaware of that effect. That's when someone else needed to step in. A lot of times, when technology goes wrong, it's something like that: it has an effect on other people that we are not aware of and don't have to pay for, and therefore someone else needs to step in and call a halt or limit the use of it so the burden on others is not too excessive. And I mean, we can talk about lead in gasoline, which is probably one of the most disastrous social technological experiments in modern history. Should we talk about that for a second?
Juliette Sellgren
Yeah. What happened? Luckily, I don't know about any of this. I mean, it means we figured it out, right?
Judge Glock (14:04)
Yeah, it did. It's a bit of a funny story, because I believe it was the same guy, Thomas Midgley, an inventor, who came up both with the use of the chlorofluorocarbons that ate away at the ozone layer and were banned by international treaty in 1987, and with the smart idea of putting lead in gasoline as what was known as an anti-knock agent. A problem with very high-speed automobile engines is that they have a tendency to knock: the fuel explodes erratically against the pistons, against the chamber they're in. Lead had this wonderful effect of smoothing out the process of an internal combustion engine. Everyone was very excited about this: your car engine doesn't blow up as quickly, it runs a lot smoother, the explosions inside the engine don't happen as erratically. There was only just one little problem: every time people burned that gasoline, a little bit of the lead crept up into the atmosphere.
And it turned out that every Tom, Dick, and Harry in America and the other places using this was absorbing a lot of lead, which we now know is an incredibly dangerous neurotoxin. Over the long run it makes people a little dumber, it can lower IQ significantly, and it makes them less capable of exerting self-control. I was born way back in 1981, and I'm at the tail end of the generation that was still absorbing trace amounts of lead from the atmosphere, possibly even in my mother's uterus. And therefore, because of that, I'm doubtless a little dumber and a little less self-controlled than I would've been otherwise. That's just a fact. It is one of those weird things. Eventually the EPA and others stepped in and said, no, you can't put lead in gasoline; it has this incredible externality on lots of other individuals.
And they banned it. They found other ways to create anti-knock agents in gasoline, and that was, overall, very much for the good. That's something we did not want in our air anymore. I worry sometimes about my wife, who was over in Greece in the 1980s, when everybody was driving these atrocious leaded-gasoline and diesel engines, and she was sucking down tons of that air back in the day. It was really a mass experiment on human beings, carried out everywhere, and it took a while for us to figure out what was going on and to stop it. But that's a real clear case of where things went wrong.
Juliette Sellgren (16:57)
I mean, first, that example: this guy has a really bad track record, it sounds like. But what is so interesting to me about this is, how do you know what the effect is? It seems like being a little dumber and having a little bit less self-control are minor enough. Obviously intelligence is important, and we don't want people to be dumber than they have to be. Not to put it too bluntly, but even just instrumentally, for economic growth, you need brains. And so if you're diminishing people's brains over the long run, you're actually going to diminish the quality of life for a lot of people, because we're not going to be able to produce the awesome things that can, say, fix these problems. Okay, but how do you pin the effects down?
Judge Glock (17:48)
If I could just add: there's a whole debate, and I bet many of your listeners are familiar with this, about how much the prevalence of lead in the atmosphere led to the crime surge of the 1960s and seventies. It was that bad.
Well, there have been some very interesting studies, and there are questions, when you look internationally at both the introduction of leaded gasoline and the banning of leaded gasoline. There does seem to be a correlation, about 15 to 20 years down the road after each, of either increasing or decreasing violent crime, because people are a little dumber and a little less self-controlled. Yeah, these effects can be very bad. And the CFCs were also Thomas Midgley, I believe; I'd have to double-check that. His track record was absolutely atrocious. The scientist who originally discovered that CFCs were destroying the ozone layer famously came home to his wife one day, and when she asked, how's your work going, he said, really great, I made a big breakthrough, but it may be the end of human civilization. He had found this thing and realized, wait a second: if it keeps going, it might absolutely wreck this crucial part of our atmosphere that's essential for sustaining life on earth. He was worried that without any action that could happen. But we took action, and it became basically a non-problem. So yeah, there are ways these things can go wrong, and people can step in to limit them, but the effects can be pretty bad, as you said. And the lead one is one of the worst.
Juliette Sellgren (19:22)
So these are good examples of swift action actually being taken once we know. But pragmatically, how do we know? Obviously you would rather prevent these things before they happen, but I don't really believe we can do that. It's exactly what you were saying about externalities. So how long does it take to learn these sorts of lessons? How do we make sure people are paying attention? How do you make that sort of connection? I don't think I would've ever thought, wow, people are really dumb nowadays, I wonder if there's something we're inhaling that is causing that. So how did it get identified?
Judge Glock (20:11)
Yeah, there had been issues with lead going back generations. People knew even in Roman times, when lead pipes were more common. If you look at the periodic table, lead is Pb; that's from the Latin plumbum, which is also the root of the word plumber. If you were working with pipes, you were working with lead. And people knew plumbers had a lot of troubles: they had gastrointestinal issues, they had other problems. They didn't know that lead was a neurotoxin for people growing up, which was the big issue, although people did have concerns even then about young kids being around it too much. So there's often some awareness, going back a long period, that something can have detrimental effects. The other classic example of this, of course, is the Mad Hatter in Alice in Wonderland. Hatters were famous for losing their minds, for going mad, and they worked with mercury, which is also a neurotoxin. Even back in Alice in Wonderland days, people knew it had a negative effect. But the lead in the air, how do people learn that ahead of time? Often you need to test these sorts of things, and we've gotten better at that. So there are two ways. I guess we're now deep, deep in the chemical regulation world, but we might as well go through it. These are probably the best cases for regulation, for a podcast audience that I imagine, much like me, is pretty free market and pretty open to letting individuals and entrepreneurs take their course.
We're starting off with a lot of the bad cases, but nonetheless, we can go into this. So you have a lot of pre-market clearance sorts of things. You have the Toxic Substances Control Act, under which, before you introduce new chemicals, you often have to take them to the EPA or others to get tested, to make sure they're generally regarded as safe. The Federal Insecticide, Fungicide, and Rodenticide Act, I believe, has a similar sort of process, where if you're doing these things, you have to go to the EPA. Most famously, of course, the FDA gets to decide whether or not we can take new pills, and whether or not those new pills or biologics are actually safe. Before someone starts taking that pill, it has to go through an entire excruciating FDA review process. Now, the pill and medicine sort of issue seems to me further away from your best case for regulation, in that there aren't usually externalities there; it's mainly about how the product affects the individuals taking it. The other way, besides these kinds of pre-market clearance regulations, which we do have and which in some cases probably make sense, is a liability regime: if someone releases a drug or something else that hurts customers, they suffer lawsuits for the damages their drug or other product caused those individuals. Now again, assigning liability can sometimes be a little tricky. Something like lead would be almost impossible to handle through a liability regime.
You burn a little gasoline and everybody's IQ goes down two points. Well, who sues whom, and how do you get it together? Maybe you could do a big class action or something, but it's really tough to know who caused what. So for regulatory reasons...
Juliette Sellgren (23:59)
Especially because the people who are suing and the people who are getting sued all have the same exposure.
Judge Glock
We're all in the same boat.
Juliette Sellgren
So everyone is dumber. So who actually gets hurt? Everyone?
Judge Glock (24:12)
Yeah. And Thomas Midgley, if he had any kids, his kids were absorbing it too. So who's at fault here? Tough to know. But there is that regime, and people like me who are more free-market oriented think that in a lot of cases a liability regime is a pretty good substitute for regulation, in those cases where you can identify clear victims. You create something like Vioxx, which was a pill, I forget what it was supposed to do, some sort of medication, but in the early two thousands it turned out to have these negative effects, I believe heart palpitations, heart attacks, and others, and it was withdrawn from the market. The company that put it out, I believe Pfizer [Editor’s note: Vioxx was produced by Merck], had a lot of lawsuits. There you go. One could say, hey, this was a huge failure, the pill got out. And it clearly was. This is why, even in a non-FDA world, pharmaceutical companies would perform a lot of tests to make sure the product was good before it got out.
But in some cases, things are going to get out into the world, they're going to go poorly, and then people are going to be sued. That's the incentive for a company to make sure it doesn't release a bad drug that's going to hurt someone, besides the obvious fact that you're never going to make money as a company that constantly releases drugs that kill its customers. So the liability regime and the pre-clearance regime are the main ways to deal with those concerns about the effects of new technologies, new chemicals, or whatever else, when you're not positive what those effects could be. Both are different ways of making companies cautious, making them weigh the costs and benefits of introducing a new chemical, a pesticide, whatever, before it comes out. And I would say, if anything, today we're leaning far too much into the pre-clearance regime, where you're supposed to check everything with a government bureaucrat who often doesn't know the actual damages at issue and can't possibly fathom what could happen down the road. But that's the world we live in right now.
Juliette Sellgren (26:13)
Yeah, I mean, even just expanding this to social media and phones and that sort of modern technology…
Judge Glock (26:28)
This is my thing: when I get into the tech regulatory world, I immediately feel like I've been moved into a slower lane. Not because this is not important, not because there aren't large dollar amounts at stake here, but when you look at the history of all these things: we have 40,000 people die a year in car crashes, and the history of automobile regulation is one of the most extensive and conflicted in modern American life. You look at chemicals that are making people dumber and more violent, that are killing people. You have the famous case of thalidomide, the pill that in the early 1960s was given to women for morning sickness and was causing extreme birth defects, the famous flipper babies, where babies would have their fingers welded together. So you have this whole parade of horribles that people have talked about and continue to talk about, issues that are still at stake. Then you get down into these social media discussions, and you're like, well, what if someone sees something that makes them upset? It makes them upset.
That's often the worst-case scenario people are talking about regulating. I often just don't understand the sort of vigor that can go into the regulation of this. When people talk about the harms, I'm not saying there are no harms, but they just don't seem of the same magnitude as everything else currently being discussed in the regulatory, technological, chemical, environmental world. And yet it's attracted an immense amount of attention.
Juliette Sellgren (28:06)
Yeah, well, I think for people with so much abundance... Not that there aren't huge cases and lawsuits and problems with drugs, but because there's less innovation in stuff like drugs, I think I can say that with certainty, the place where you really see us learning about something becoming problematic or having adverse effects is social media and the like, because that's where the innovation is happening. I don't know, that might be a stretch.
Judge Glock (28:52)
Yeah. One, as you said, everyone's very familiar with it. And some of the closest analogies are in the history of television and radio regulation, which is truly an atrocious story of regulators failing time and time again and making things worse and more expensive. It's kind of the polar opposite of a successful regulation like the ban on lead in gasoline. So for one thing, that history of telecommunications regulation has been very, very poor, and it should make people suspicious. But the reason this attracts so much attention, as back then, is that people consume a lot of it, you're very familiar with it in your everyday life, and it's something that impacts everyone. So yes, people concerned about the direction of the modern world are going to look at social media and say, my Lord, what do we do about this and that and the other thing, even if the actual concrete harms they're talking about are very hard to grasp and often very speculative.
Juliette Sellgren (29:59)
Well, I feel like it's almost easy. It's easy to worry about that and not come to a conclusion. Whereas I think it is harder to see that, oh, we've been using this leaded gas and it is making people dumber. In a way, it feels less costly to the individual to let go of that: once the harm is identified, the harm is so great. So maybe because the harm of social media is less tangible, it's easier to spend a lot of time arguing about it when really we're not going to do anything. And it's not that bad compared to all this other stuff; we don't need swift justice, because it's not lowering people's IQs. I don't know.
Judge Glock (30:47)
So here's the issue that makes it hard when people are talking about regulating the negative effects of social media, or whatever it is, or television back in the day, which I think again is a good analogy. I very much remember the conversations in the 1980s and 1990s that television was a near existential threat to America, in the sense that it was making people dumber. We weren't reading anymore; we were watching five to six hours of TV a day. It was too violent. It was creating all of the crime waves we were talking about at the time. Television was often pointed to as a…
Juliette Sellgren
Was it lead?
Judge Glock (31:27)
Well, yeah, if we had to guess between the two, more likely lead than TV. But a lot of the concerns people have about social media were expressed back in the day about television. There was a famous regulator, Newton Minow, the head of the Federal Communications Commission, which was the big television regulator back in the day, who described television as a "vast wasteland" that needed the government to raise its standards and make sure Americans weren't being polluted by the atrocious programming going out there: too violent, ignorant, et cetera, et cetera. And on the whole, what we know is that the government, if anything, was encouraging worse programming. They were handing out station licenses based on who was politically favorable, and often on who just gave bribes to the FCC; we know from history that was very common. One of my favorite stories is that President Lyndon Johnson made his fortune because, as a congressman and as a senator, he was very friendly with a lot of FCC employees, and they got him a lot of special preferences for his radio station in Austin, Texas. Surprisingly, even though he worked his whole life in government, he became one of the richest people in the American Congress. So anyway, that background is important to understand.
We had this kind of vague, amorphous concern with the technology: it's just not nice, it's making people do not-nice things, it's not providing people the sort of information they should have. And the government regulation was, in practice, also very amorphous; it was hard to pinpoint what exactly it should or shouldn't be doing. What that led to in practice was a lot of obscene politicization and favoritism and bad policies, decided by bureaucrats instead of by the people who actually consume this stuff. And that's what I worry about with a lot of the social media and tech regulation. We talked before about what the indications are that a technology might be harmful. Another way to think about that is: what are the cases where regulatory policy can be effective? It can be effective if you can pull a nice clean lever. Government is very good at saying, don't do A, do B, if it's a nice clear A and B. Do not put chlorofluorocarbons in your refrigerator. Do not put lead in your gasoline. Pull the lever, the world moves on, very clear. The regulation of television back in the day was: do not be a vast wasteland. Do not put too much Mr. Ed and Green Acres on the air; these shows are terrible. Do not put too much violence on. All of that is very amorphous, and the regulatory policy on it tends not to be very helpful. It's a similar thing when you look at AI or social media today: when you get down to brass tacks, what is the lever people are asking the social media companies or AI companies to pull? It's really, really hard to figure out what it is.
It often comes down to something like transparency. And I shouldn't be too tough on transparency regulation, because I have advocated, and even helped work on, different types of transparency for government regulations and rules. But it's kind of the last refuge of the scoundrel, in that when we don't know what else to do, we tell people we need transparency, which is something nobody's ever against when you bring it up in Congress or in a regulatory meeting. But there's another name for transparency, and it's called paperwork. It just means filling out some forms that go on a website nobody ever reads, and that meets the transparency demand. That's a lot of what we're seeing today.
Juliette Sellgren (35:23)
Or I guess to put it more crudely, it's the covering your ass policy. It's making sure that as the government or as a company, you're covered because you filled out papers and now you can say you thought about it or you were asked to think about it or something.
Judge Glock (35:41)
Yeah, and there's a book on this. It's not the best book in the world, but I believe it's by two law professors, called More Than You Wanted to Know: The Failure of Mandated Disclosure. They look into this long history of the government being concerned about, say, data breaches, or concerned about the nature of a product that could be used inappropriately (don't swallow your drain cleaner, say), and so mandating more disclosure of X, Y, and Z. And what this leads to, as the authors argue, is that of course nobody reads the infinitude of disclosures we're all bombarded with every day. This is the giant 35-page Apple agreement before you sign on for iTunes or whatever it is. This is your thousands of pages of mortgage disclosures, housing disclosures, et cetera.
Yeah, it's a cover-your-ass thing just as much for the companies as for the regulators. The regulators can say, hey, we did the disclosure thing: we forced people to print up hundreds of pages of worthless documents that nobody looks at, and then somebody has to sign them at the bottom to show that the disclosure has happened. Not very effective. But again, a lot of what people are talking about in the AI and tech world more broadly is disclosure, especially around data privacy. We can get into this a little more, but for contemporary tech regulators, the privacy issue is often paramount. And even though the negative effects there can be amorphous, the solution is so often more disclosure, which just means more clicking the little cookie notice: yes, I accept the cookies, every time you go to a new webpage…
Juliette Sellgren
And you still don't know what a cookie is, what on earth is a cookie?
Judge Glock (37:36)
Nobody knows what this stuff is. It's supposed to just help the computer remember that you've been to a certain page and collect certain information about you, so your details fill in easily, so the site knows where you're coming from, et cetera. For some reason, somebody in the regulatory community thought it was an obscene burden for all of us to have these little cookies saved on our browsers, which you can clear whenever you want. And so now we all have to go through this: yes, yes, I accept the cookies, I know I'm having cookies put on my computer. 99.999% of Americans don't know what that means, but the disclosure has happened, and therefore the regulators are satisfied, and therefore they're doing their job, even though it's a complete waste of human life.
Juliette Sellgren (38:26)
Yeah. Well, so I think there's social media, but then I think of AI as a whole different thing. Maybe I shouldn't; that's kind of wrong. But in certain ways it seems more analogous to... I don't know. It doesn't seem as straightforward to me as a lead kind of situation, but it seems closer to lead than social media is, though maybe not very close at all on the scale. I don't know what the relative distances are. Well, I guess I don't know what the absolute distances are either. But I know that, relatively, everyone talks about how AI somehow turns into The Matrix, somehow the world ends, and that feels a little ridiculous, slash, we don't really know enough to do anything about that. But then it seems more powerful, so potentially more harmful, than social media. So how do you identify and situate it among all the different technologies we've had and seen and observed? And what does that tell us about how this is going to play out, or how it is playing out? Because it feels unique, but it's probably not, given the amount of stuff that has been created.
Judge Glock (40:01)
Well, some grammarians would go wild at this, because "unique" is supposed to be an absolute term and shouldn't be qualified. But it is, of course, somewhat unique and somewhat not unique. It shares properties with previous technologies in many ways. And one of the interesting things about the AI regulatory universe is that a lot of the concern about regulating this new technology is really just taking the technology regulations or laws we already have and seeing how they apply to the new one. So a lot of the left has been very concerned with algorithmic discrimination around AI. This has been one of the main concerns when you read California's laws, President Joe Biden's executive order on AI, and others: well, the AI could be used to discriminate. Now, one, you're not allowed to discriminate anyway. If you are programming an AI that says don't hire women, or you're using AI to ferret women out of your employee pool, then you can be sued, whether you did that through AI or just by telling your coworkers not to hire women.
In one sense, there's nothing new about that. It's just saying this technology can't be used to do things you couldn't do anyway. Now, some of it to my mind is kind of absurd, because what they mean by algorithmic discrimination is basically unequal results. Say the AI algorithm, or whatever it is, notes that women aren't as likely to have completed an engineering degree, which is just a fact; it's not discrimination, it's just a fact about the world. But if the AI has determined that through its algorithm and sorts people accordingly, some of the regulatory groups say that is discrimination we need to stop. Now, that to me seems absurd. But it's also in keeping with the long history of disparate impact regulation we've had for employers and others, which basically says: if you have unequal results for different groups, that is discrimination per se, and you need to stop it.
That's the rule in a lot of employment law, and in a lot of housing and other discrimination cases: even if there's a factual basis for your unequal results, that's inappropriate, and you need to find a different way to do things. So part of this is just applying those previous concerns to AI. Now, the bigger concern you pointed out is the singularity. To me, it's a very interesting discussion. I do not have the strongest priors, or even sentiments, on which way this will actually go. I am not totally dismissive of the possibility that AI could fundamentally reshape all of human life, could lead to mass unemployment, could even in some sense turn against its creators and have these negative effects. Just arguendo, let's pretend all of that is a real possibility. I think it could be. I don't think it's certain, but it's a possibility.
Let's all admit that possibility. Then the question is: okay, what do we do now, in 2025, about it? What can we actually do to prevent the singularity? When you look at people making that case, they'll often go the transparency route again: we need to see where the algorithm is coming from, how the matrix algebra is being formulated. And then they'll just say something like, we need to do an AI pause. Now, that also seems absurd to me, because a lot of people were calling for an AI pause of, say, six or twelve months, years ago…
Juliette Sellgren (44:06)
And what does that mean? They just stopped?
Judge Glock
No, they just shouldn't develop it. OpenAI, Anthropic, all of these… Just don't fire your employees, because obviously that would be bad too, but just freeze.
Just hold on. Let's hold on until we supposedly know what's happening here and can figure it out. But there's no "and then what do we do?" If there were some sort of argument, hey, let's figure out what's happening with AI, and then here's a series of potential regulatory paths we can take, that would be one thing. But if you think it's going to be a singularity that destroys all human civilization, well then the pause just has to be forever, and there's no other path we can take. And I think most importantly, it's now been years since people called for a six-month pause, and all we've gotten are slightly more effective large language models and an increased ability to create video and images. That all seems good to me. Civilization clearly has not ended.
Juliette Sellgren
Oh, you didn't notice? It ended yesterday.
Judge Glock (45:16)
It might have, and we could have missed it. And that's the problem with this kind of regulatory concern: even if you think it's real, what are you going to do about it? You can make the case that we have to blow up every AI system right now, which I don't think any sensible person is really talking about.
I think there's Eliezer Yudkowsky, or whoever it is, and some others who are claiming that needs to be the course, but that's not what anyone in the real world is going to do. And otherwise, there's nothing else to do: if you don't blow it up, the singularity is going to happen. And I don't see any clear regulatory path where we know the lever to pull, again talking about levers, that's going to make the singularity not come. We don't know that lever, and we're not going to know it until these things play out. There's just no way to understand how they're going to play out in practice. So those kinds of regulatory goals seem absurd to me.
Juliette Sellgren (46:16)
Yeah. Should we proceed with optimism? It seems absurd, as you said, to block all of this, especially when what we've seen so far is not bad, not dangerous, in no way really a threat. But given what you've seen and what you've studied with all these other technologies, how should we feel? How should we, as consumers, engage with this sort of technology and live in what might be an age of huge technological innovation?
Judge Glock (47:00)
Yeah, I mean, again, the history of technology, as we discussed at the beginning, is largely positive. There are these clear cases where things go wrong; largely, we're able to get them under control, and on the whole, life continues to get much better and much more comfortable, in ways that are even hard to fathom today. So there is that. But what's the old investment saying they provide you on every new mutual fund? Past returns do not guarantee future results. It doesn't mean the future is going to be like the past. Just because we've had 3% GDP growth forever and people have gotten wealthier and happier and so forth doesn't mean that's necessarily going to keep happening until the end of time. So you can't be certain. But again, to me, the scale of the types of harm people are imagining is really tough to clarify and really tough to even imagine going forward without someone taking action.
The last thing maybe I'd say here is that we do have one case where humans invented a technology that had the very clear capacity to end all of civilization: the atomic bomb, and then the fusion bomb. We invented something that everyone agreed was unbelievably dangerous, the sort of thing that, all things equal, we probably wish we hadn't invented if we could have avoided it. On one level, that technology is like AI today, in that we weren't going to stop working on fusion bombs if the Soviet Union was not going to stop working on fusion bombs. And I don't think we should stop working on AI as long as China is going to keep working on AI, because we don't want just one country to have it. So that's just the world we're going to live in, just like the world of the 1960s and seventies, where we had 40,000 atomic warheads pointed at each other and in 30 minutes all of human civilization could have ended.
That's not over, but atomic warheads are down about 90% from where they were back in that era. The world has gently gotten more peaceful. We have not had a mass use of those things, and that seemed like the worst case of a technology going bad. Even in that case, we've managed to do pretty well. We should be pretty happy with how the atomic age started and, to some extent, finished. That should give us more optimism about dealing with something that seems to me very manageable: a bunch of graphics processing chips that are able to perform matrix algebra and occasionally spit out Pokémon images. This seems more manageable to me than the nuclear bomb problem, and yet we managed that one.
Juliette Sellgren (49:41)
Yeah, there's so much more to talk about, and I wish I could bombard you with way more questions, especially about cases where it goes well. Who doesn't like Pokémon images? But for now, I have one more question, because we are about out of time, and that is: what is one thing that you believed at one time in your life that you later changed your position on, and why?
Judge Glock (50:04)
So this one might be a little easy for me, in that I was an old socialist. I was one of those guys who grew up, not as a red diaper baby, my parents weren't socialists, but from my earliest conscious memories about politics or policy, I was a socialist. And two things convinced me I was wrong about that. One was moving to Philadelphia and seeing that a very activist government with very strong unions and a lot of interventionist policies did not lead to a socialist utopia. And then, an even better case, I lived in China for a year, in a country that at the time was coming out of the darkest doldrums of socialism, and meeting people who were incredibly optimistic about the future and incredibly dismissive of the ideology of the past made me understand that that's probably a hopeful route to continue on for the future. And that has remained my firm belief to this day: that capitalism and independent people dealing with issues through their own free will and drive are able to solve these sorts of problems. I've maintained a steadfast belief in that up through the AI age.
Juliette Sellgren
Once again, I'd like to thank my guests for their time and insight. I'd also like to thank you for listening to the Great Antidote Podcast; it means a lot. The Great Antidote is sound engineered by Rich Goyette. If you have any questions, any guest or topic recommendations, please feel free to reach out to me at greatantidote@libertyfund.org. Thank you.