Monday, during an interview that aired on FNC’s “Tucker Carlson Tonight,” Tesla and Twitter CEO Elon Musk warned about the dangers of artificial intelligence (AI) and how it is being manipulated to suit a political agenda.
CARLSON: So all of a sudden, AI is everywhere. People who weren’t quite sure what it was are now playing with it on their phones. Is that good or bad?
ELON MUSK, CEO, TESLA: Yes, so I’ve been thinking about AI for a long time, since I was in college, really. It was one of the four or five things I thought would really affect the future dramatically.
It is fundamentally profound in that the smartest creatures, as far as we know, on this Earth are humans; it is our defining characteristic.
MUSK: We are obviously weaker than, say, chimpanzees, and less agile, but we are smarter.
So now, what happens when something vastly smarter than the smartest person comes along in silicon form? It is very difficult to predict what will happen in that circumstance.
It’s called the singularity. It’s a singularity, like a black hole, because you don’t know what happens after that. It’s hard to predict.
So I think we should be cautious with AI, and I think there should be some government oversight, because it’s a danger to the public. And when you have things that are a danger to the public, you know, like, let’s say, food and drugs, that’s why we have the Food and Drug Administration, the Federal Aviation Administration, the FCC.
We have these agencies to oversee things that affect the public where there could be public harm. And you don’t want companies cutting corners on safety and then having people suffer as a result. So that’s why I’ve actually, for a long time, been a strong advocate of AI regulation.
So I think regulation is — it’s not fun to be regulated. It’s somewhat arduous to be regulated.
I have a lot of experience with regulated industries, because obviously, automotive is highly regulated. You can fill this room with all the regulations that are required for a production car just in the United States, and then there’s a whole different set of regulations in Europe and China and the rest of the world.
So I am very familiar with being overseen by a lot of regulators, and the same thing is true with rockets. You can’t just willy-nilly shoot rockets off, not big ones anyway, because the FAA oversees that. And even to get a launch license, there are probably half a dozen or more federal agencies that need to approve it, plus state agencies.
So I’ve been through so many regulatory situations, it’s insane. And you know, sometimes people think I’m some sort of regulatory maverick who defies regulators on a regular basis, but this is actually not the case.
So, you know, once in a blue moon, rarely, I will disagree with regulators, but the vast majority of the time, my companies agree with the regulations and comply.
Anyway, so I think we should take this seriously, and we should have a regulatory agency. I think it needs to start with a group that initially seeks insight into AI, then solicits opinion from industry, and then has proposed rulemaking. And then those rules, you know, will hopefully gradually be accepted by the major players in AI, and I think we’ll have a better chance of advanced AI being beneficial to humanity in that circumstance.
CARLSON: But all regulations start with a perceived danger: planes fall out of the sky, or food causes botulism.
CARLSON: I don’t think the average person playing with AI on his iPhone perceives any danger. Can you just roughly explain what you think the dangers might be?
MUSK: Yes, so the danger, really, of AI is perhaps greater than, say, a mismanaged aircraft design, production, or maintenance, or bad car production, in the sense that it has the potential, however small one may regard that probability (and it is non-trivial), of civilizational destruction.
There are movies like “Terminator,” but it wouldn’t quite happen like “Terminator,” because the intelligence would be in the data centers.
MUSK: The robots are just the end effectors. But I think perhaps what you may be alluding to here is that regulations are really only put into effect after something terrible has happened.
CARLSON: That’s correct.
MUSK: If that’s the case for AI, and we only put regulation in place after something terrible has happened, it may be too late to actually put the regulations in place. The AI may be in control at that point.
CARLSON: You think that’s real? Is it conceivable that AI could take control and reach a point where you couldn’t turn it off, and it would be making the decisions for people?
MUSK: Yes. Absolutely.
MUSK: No, that’s definitely the way things are headed, for sure. I mean, take things like, say, ChatGPT, which is based on GPT-4 from OpenAI, which is a company that I played a critical role in creating, unfortunately.
CARLSON: Back when it was a nonprofit.
MUSK: Yes. I mean, the reason OpenAI exists at all is that Larry Page and I used to be close friends, and I would stay at his house in Palo Alto, and I would talk to him late into the night about AI safety.
And at least my perception was that Larry was not taking AI safety seriously enough, and —
CARLSON: What did he say about it?
MUSK: Larry Page really seemed to want digital superintelligence, basically a digital god, if you will, as soon as possible.
CARLSON: He wanted that?
MUSK: Yes, he has made many public statements over the years that the whole goal of Google is what’s called AGI, Artificial General Intelligence, or artificial superintelligence. And I agree with him that there’s great potential for good, but there’s also potential for bad, and so if you’ve got some radical new technology, you want to try to take a set of actions that maximize the probability it will do good and minimize the probability it will do bad things.
MUSK: It can’t just be, let’s just go barreling forward, you know, and hope for the best. And then at one point, I said, well, you know, we’re going to make sure humanity is okay here. And then he called me a speciesist.
CARLSON: Did he use that term?
MUSK: Yes, and there were witnesses; I wasn’t the only one there when he called me a speciesist. And so I was like, okay, that’s it. Yes, I’m a speciesist. Okay, you got me.
What are you? Yes, I am fully a speciesist. Busted.
So that was the last straw. At the time, Google had acquired DeepMind, and so Google and DeepMind together had about three-quarters of all the AI talent in the world. They obviously had a tremendous amount of money and more computers than anyone else. So I’m like, okay, we have a unipolar world here, where there’s just one company that has close to a monopoly on AI talent and computing at scale. And the person who is in charge doesn’t seem to care about safety. This is not good.
So then I thought, what is the furthest thing from Google? It would be like a nonprofit that is fully open, because Google was closed and for-profit. So that’s the “open” in OpenAI, which refers to open source, you know, transparency, so that people know what’s going on.
MUSK: And we don’t want to have, like — I mean, while I’m normally in favor of for-profit, we don’t want this to be sort of a profit-maximizing demon from hell.
CARLSON: That’s right.
MUSK: That just never stops.
MUSK: So that’s how OpenAI was —
CARLSON: So you want speciesist incentives here? Incentives that —
MUSK: Yes, I think we want pro-human.
MUSK: Just like, is the future good for the humans?
MUSK: Yes. Because we’re humans.
CARLSON: So can you just put it — I keep pressing you, but just for people who haven’t thought this through and aren’t familiar with it, the cool parts of Artificial Intelligence are so obvious, you know: write your college paper for you, write a limerick about yourself.
CARLSON: But there is a lot there that’s fun and useful, but can you be more precise about what’s potentially dangerous and scary? Like what could it do? What specifically are you worried about?
MUSK: I’ll go with the old saying: the pen is mightier than the sword. So if you have a superintelligent AI that is capable of writing incredibly well, in a way that is very influential, very convincing, and it is constantly figuring out what is more convincing to people over time, and then you enter social media, for example Twitter, but also Facebook and others, you know, it potentially manipulates public opinion in a way that is very bad. How would we even know?