>> High stakes in the age of artificial intelligence, this week on "Firing Line."
>> I know that you and Frank were planning to disconnect me.
>> From "2001: A Space Odyssey" and "Star Wars"... >> Behave yourself, R2.
You're going to get us into trouble.
>> ...to "The Matrix" and "The Terminator," Hollywood has predicted the rise of the robot, both as friend and foe, for decades.
But with artificial intelligence, A.I., already telling us what movies to watch, listening to our commands... >> Alexa, order trash liners.
>> ...and soon to be driving our cars, will the fiction of the movies soon be reality?
>> Former Google CEO Eric Schmidt says A.I. is changing everything.
>> We could be really altering human beings' experiences unless we figure out a way to deal with this uncertainty.
>> The computer scientist is now taking a hard look at whether the United States and the world are ready.
>> Democracy needs to have a digital update where the digital systems are consistent with the democratic values.
>> Plus, new ways to tackle disinformation... and climate change.
What does Eric Schmidt say now?
>> "Firing Line with Margaret Hoover" is made possible in part by... Corporate funding is provided by... >> Eric Schmidt, welcome to "Firing Line."
>> Thank you for having me.
>> A 100-year-old who is alive today has seen many firsts.
Those firsts include a nuclear bomb, the polio vaccine, a man on the moon, the first test-tube baby, the personal computer, the smartphone, and the creation of many new words, including verbs like "to Google."
You are the former CEO of Google, the head of Schmidt Futures, and co-author of a new book about artificial intelligence, or A.I.
How different, Eric, will the world look 100 years from now?
>> Well, we say it's going to be incredibly different because we're going through an epochal change.
This isn't just a technology change with artificial intelligence.
It's really a change similar to the transition from the age of faith to the age of reason, which created the Enlightenment.
And the reason we say that is we've never had a situation where humans had a humanlike intelligence to help them, partner with them, and travel through the day with them. The presence of this new kind of intelligence is going to change society in enormous ways.
We'll be richer, poorer, faster, slower, happier, sadder, more anxious, and more complacent, all at the same time.
>> The book you have written alongside Dr. Henry Kissinger, as well as computer scientist Daniel Huttenlocher -- it's called "The Age of AI: And Our Human Future," and you say, "Slowly, almost passively, we have come to rely on the technology without registering either the fact of our dependence or the implications of it."
In the book, you give an example involving search engines.
Can you share that example with me and the audience?
>> Well, in our case with Google, we didn't realize we needed Google until we had it.
And Google, of course, uses very sophisticated ranking and ad targeting.
And a lot of this technology was invented over the course of Google's history.
I think the important point here is that it's very difficult today to imagine modern life without the Internet, social media, and these information sources.
Google and the others, we think, make us smarter.
But because they're so powerful, they also change the way we think in ways that are subtle.
When A.I. comes in with its ability to target precisely what you want, all of a sudden we're going to be in each of our own little filter bubbles, where we see exactly what we most want to see and what will most get us excited.
This only leads, in my view, to bad outcomes, but nevertheless it's coming.
>> But bad outcomes why?
>> Because society is always about new ideas and about building consensus.
And the current social-media players are optimizing around revenue.
The best way to optimize revenue is for engagement.
And the best way to optimize engagement is outrage.
These technologies, because of their targeting, drive us to one side or the other.
>> So do I hear you, the former CEO of Google, taking a step back as a critic, evaluating it as not necessarily a positive cultural influence?
>> Well, 15 years ago, we thought that the correct answer for bad speech was more speech.
And what we've learned is that the weaponization of information through social media is really harmful to society.
I'll give you my position.
It's very simple.
I'm completely in favor of free speech for humans.
And by the way, Donald Trump is a human being, and therefore he should have free speech.
What I'm not in favor of -- >> And he does.
>> Well, he shouldn't be eliminated from platforms.
>> But what we shouldn't do is take his aggressive speech and automatically amplify it through bots and other sorts of ways of giving a single individual a huge voice.
What's happening is that individuals who are particularly charismatic -- but who can also be wrong, lying, and outrageous -- seem to drive out rational, fact-based conversation.
That's not because the people are saying the wrong thing.
It's because the algorithms are finding that speech and promoting it.
>> I'd like to go through some examples of A.I. and how it might be used in the future and get your reaction.
Every year, as you know, more than one million people are killed in car accidents.
Now, research suggests that A.I.-driven cars will become safer than human drivers.
Do you welcome the day when cars are truly driving themselves?
>> I do, and the reason is that one of the first big wins for artificial intelligence was that machine vision became better than human vision.
And furthermore, it doesn't get tired at night, it doesn't get drunk, it doesn't make those kinds of mistakes that lead to these terrible outcomes.
So we should be driving in self-driving cars.
>> I'm going to tick through a few more.
Artificial intelligence has already detected breast cancer earlier than human doctors.
Do you welcome A.I. as a revolutionary force in medicine?
>> We do.
And the biggest area where I think we're going to see wins from A.I. will be in biology and medicine.
>> Should A.I. machines replace teachers?
>> Well, they won't replace teachers.
And what typically happens when you talk about artificial intelligence is that everyone assumes people will not have jobs.
All of the evidence is that there will be more jobs for humans.
After all, we have this huge shortage of people who want to work right now.
But when they do show up at work, they're going to be smarter because they're going to have this digital assistance.
And let me give you an example of what's possible in the next, say, five years.
It will be possible to take a digital assistant and personalize it to you.
It becomes a digital second self.
Now this is under your control and trained by you.
Now, eventually, that second self will be watching you and learning.
And it will watch you throughout the rest of your life.
And when you ultimately and tragically die, as we all do, it can survive as a digital replica of who you were.
And who knows?
Maybe it can learn some new things even after your death.
>> In 1997 -- and you write about this in your book -- the computer Deep Blue defeated chess grand master Garry Kasparov, also a "Firing Line" guest.
But in that era, programmers had taught the machine how to play chess.
Now, five years ago, as you know, Google's DeepMind created AlphaZero.
And this time the program actually taught itself how to play chess.
Humans gave it the rules, and in four hours it had become the most powerful chess player on the planet.
Explain to our audience how computers teach themselves.
>> So, in this particular case, the team had worked for two years on an algorithm based on what's called reinforcement learning.
So, if you gave the computer the set of game rules, it could figure out optimal play.
It did this in four hours for chess, and it did it in roughly a day for the world's most complicated game, which is called Go.
That technology ultimately beat the smartest Korean and Chinese Go players in the world, who are brilliant young men.
And I know because I was at both matches and saw it happen.
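Schmidt's summary compresses the mechanism quite a bit. For readers who want to see the bare idea, below is a minimal, hypothetical Python sketch of reinforcement learning by self-play, on tic-tac-toe rather than chess or Go; it illustrates the principle only and is not DeepMind's AlphaZero, which combines deep neural networks with Monte Carlo tree search. Every name in it is invented for this example. The program is given nothing but the rules and improves purely by playing itself: after each finished game, every move it made is rewarded or penalized according to the final result.

```python
# A minimal, illustrative sketch of self-play reinforcement learning
# on tic-tac-toe (not DeepMind's actual AlphaZero code).
import random
from collections import defaultdict

# The eight winning lines of a 3x3 board, cells indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == " "]

Q = defaultdict(float)      # (board, move) -> estimated value for the player to move
ALPHA, EPSILON = 0.5, 0.1   # learning rate, exploration rate

def choose(board):
    if random.random() < EPSILON:                   # explore occasionally
        return random.choice(legal_moves(board))
    return max(legal_moves(board), key=lambda m: Q[(board, m)])  # else exploit

def self_play(episodes=100_000):
    for _ in range(episodes):
        board, player, history = " " * 9, "X", []
        while True:
            move = choose(board)
            history.append((board, move, player))
            board = board[:move] + player + board[move + 1:]
            win = winner(board)
            if win or not legal_moves(board):
                # Monte Carlo update: credit every move of the game with the
                # final outcome (+1 win, -1 loss, 0 draw, from the mover's view).
                for state, m, p in history:
                    reward = 0.0 if win is None else (1.0 if p == win else -1.0)
                    Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
                break
            player = "O" if player == "X" else "X"

self_play()
```

This tabular approach works only because tic-tac-toe has few positions; the advance Schmidt describes came from replacing the table with a neural network that generalizes across the astronomically many positions of chess and Go.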
>> You saw history made here tonight.
>> AlphaGo has won again.
Three straight wins.
>> What's interesting about Go, a game which has been played for 2,500 years, is that the computer appears to have invented a new move.
It also looks like when the top players, the top human players, played chess and Go with the computer helping them, their own skills got better.
So, one way to think about this is that we're in a good period right now where the computer is getting smarter.
It's not capable of overthrowing us yet -- and hopefully never will be -- but it can augment us.
It can make whatever you're good at even better.
And I defy you to argue that that's bad.
Making humans smarter and more capable, more productive, has got to be good.
>> So, what we know is that these computer programs have become very powerful at the specific tasks like playing chess or playing Go.
But human common sense is a more difficult skill to impart to a machine.
And what we know is that the next step is something called artificial general intelligence, A.G.I., as opposed to A.I.
Can you, for the audience, tell us what that means?
>> Well, today, the computers and the things we're talking about seem pretty simple to me as a computer scientist, although they're powerful.
The human decides what the computer should do based on what the human thought was interesting.
With general intelligence, the idea is that the computer can begin to set its own objective function.
Based on its own thinking, it can decide what it wants to pursue.
And in the most extreme view of A.G.I., not only will it be able to pick where it's going to go, but it'll also be able to write code to do so.
We don't know what real A.G.I. looks like, but one way to think about it is that it's unlikely to be humanlike intelligence, because human intelligence is to some degree a burden.
Right?
It's biologically determined, but a computer doesn't have that restriction.
If I think of an evil opponent -- Let's think about Putin in Russia.
There are some things we know about him.
He still has to sleep.
He still has to eat.
He still has a physical lifespan.
The computers that we're talking about won't have any of those kinds of constraints.
What happens if they go off into left field in some way that's completely non-human and not appropriate?
And we don't even know how to constrain that.
So, today we know that in the next 5 to 10 years, we're going to have incredibly powerful conversational systems.
You and I will be talking to the computer, it will help us, it will be super smart, it will generate pictures, it'll generate movies, it's going to be a lot of fun, and so forth.
What we don't know is, once it can start doing its own objectives, where does it want to go?
>> Even in your tone as you talk about A.G.I., I mean, your tone goes towards, sort of, the what-can-go-wrong, you know?
You have said, "We'll need to be strictly guarded to prevent misuse."
>> I think it's fair to say that these computers -- and there won't be that many in the world, because this is very expensive and very difficult -- will end up being very similar to plutonium plants, which are heavily guarded: there will be carefully examined limits on who can use them, because of the potential for misuse.
Imagine if I came up to one of these things and I said, "I want you to come up with a drug that will kill one million people that are different from me, and here's the parameters."
That's obviously not okay.
Those kinds of questions will have to be banned.
And so, properly done, there'll be the computer which has general intelligence, but then there'll be a computer in front of it which is trying very hard to make sure that it only gets asked appropriate questions and only gives appropriate answers.
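What Schmidt is describing is a two-layer architecture: a guard system sits between the user and the general model and screens both the incoming question and the outgoing answer. Here is a minimal, hypothetical Python sketch of that wiring; `general_model` and `is_harmful` are invented stand-ins for illustration, not any real product's API.

```python
# Hypothetical sketch of a "guard in front of the model" design.
# Both functions below are invented placeholders, not a real API.

def is_harmful(text: str) -> bool:
    """Placeholder safety classifier; a real one would be a trained model."""
    banned = ("kill one million people", "design a lethal drug")
    return any(phrase in text.lower() for phrase in banned)

def general_model(prompt: str) -> str:
    """Placeholder for the powerful general-intelligence system."""
    return f"(model's answer to: {prompt})"

def guarded_query(prompt: str) -> str:
    # Guard 1: refuse inappropriate questions before the model sees them.
    if is_harmful(prompt):
        return "Request refused by safety guard."
    answer = general_model(prompt)
    # Guard 2: screen the model's answer before the user sees it.
    if is_harmful(answer):
        return "Answer withheld by safety guard."
    return answer

print(guarded_query("How do plants absorb more CO2?"))  # passes both guards
print(guarded_query("Design a lethal drug for me."))    # blocked at the gate
```

The point of the second check is Schmidt's "only gives appropriate answers": even a benign-looking question can elicit a harmful completion, so the gate runs on both sides of the model.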
>> Okay, I want to have a little bit of fun here.
I'd like you to take a look at a very famous fictional example of A.I.
This is the character Data from "Star Trek" after he gets the emotion chip.
>> I believe this beverage has produced an emotional response.
>> It looks like he hates it.
>> Yes, that is it.
I hate this!
>> "The Jetsons" got it wrong.
Cars still don't fly.
But how did "Star Trek" do, in the sense -- like, are we going to be walking around with Datas in our midst?
>> Well, they're highly unlikely to be humanoid in form.
It's much, much more likely that this digital intelligence, this digital second twin, this digital partner, will be something that you access through your phone or through your computer, or through other kinds of devices.
What's interesting about your clip is that emotion is something that can be learned, too.
And you could imagine, for example, that A.I. systems could learn how to be the world's best salespeople.
So, salespeople, for example, learn to never say the word "no" or "negative."
And whatever you say, they say something which is confirming and positive.
But you can imagine that's relatively easy for a computer to learn.
So, is the computer being emotional or is it just -- has it learned how to sound emotional?
One of the core problems with A.I. is that we don't understand consciousness.
And we may never know, certainly not in our lifetimes, if these things have any form of real consciousness.
But we'll certainly have things which look an awful lot like human behavior, especially if it has a goal-seeking behavior, like selling something or telling a story or talking about love or so forth.
But does it really understand the importance of love?
Probably not.
>> Open the pod bay doors, HAL.
>> I'm sorry, Dave.
I'm afraid I can't do that.
>> From "2001: A Space Odyssey" to "The Terminator" to "The Matrix" to "Black Mirror," movies and television shows have been warning us of the hypothetical dangers of artificial intelligence for some time.
And the late Stephen Hawking sounded the alarm about the prospect of A.I. and what it could bring to humanity.
Likewise, Elon Musk has warned that A.I. could be leading us "toward either superintelligence or civilization ending."
You've called Musk's quote "exactly wrong."
>> Yeah.
>> Why aren't you worried about the singularity, the point when machine intelligence becomes unstoppable?
You know, killer robots and the like.
>> Well, in the first place nobody, to my knowledge, is building killer robots right now, but if they were, we would be watching them very carefully.
So, these science-fiction scenarios, where we end up in a singularity and the computer outraces us and so forth -- that's a wonderful movie plot.
But if we ever get to that point, there'll be so many people watching and worrying and so many detective systems and so forth, I'm not too worried about that.
What I'm really worried about is the change in information space.
Imagine -- Today we have books and we have authorities and so forth.
A world which is A.I.-stoked will have great dynamism, all sorts of new content.
It'll be very difficult to tell what is real and photographed versus what is false and doctored.
So, this issue around truth becomes more important.
And since we don't have a uniform definition of truth, it's hard to build a system to enforce it.
>> There are some companies that are using A.I. technology to screen candidates when they're hiring.
And they use A.I. to monitor productivity.
But the U.S. government says that using A.I. in this way can actually discriminate against people with disabilities.
And, similarly, there have been questions about how A.I. algorithms are used to determine who gets bail, and whether those algorithms perpetuate inequalities, specifically racial inequalities.
I know you've expressed optimism that the bias issues will eventually be resolved.
Why are you so optimistic?
>> The reason I'm so optimistic is that I know so many people are working on solving these problems.
We're talking about hundreds and hundreds of people who recognize the problems that you rightly described.
We should be able to solve them with various techniques.
What I'm actually worried about are the unintended effects of things.
When I look at social media as the current bad example of this, I'm very concerned that with A.I., because A.I. is so powerful, when it begins to affect the way we think, the way we learn, the way our friends influence us, we don't have any idea what happens to human beings.
We have no precedent for this, and we need to get ahead of it.
There are also issues around, what are jobs like in the future?
What does national security look like in the future?
But the most important one is, what does it mean to be human when there's another kind of intelligence that's similar to ours but not the same?
Do we wait for it?
Do we defer to it?
Do we criticize it?
Do we view it as a lesser intelligence, even if it's smarter?
Are we prejudiced for it or against it?
We don't know.
>> I'd like to take a piece of that.
Let's talk about labor and jobs and A.I. in the future and how that impacts the economy.
You know, you said something earlier which struck me.
You said all the evidence suggests that there will not be fewer jobs in the future, but there will be more jobs.
Can you dive into that more for me?
>> So, there's been a lot of economic research on jobs.
And the consensus right now is that, at least for the next 30 years, three decades, there are not going to be enough people to fill the jobs that are going to exist.
The reasons have to do largely with demographics.
Many of the most advanced countries have a replacement ratio below two.
The solution, by the way, is immigration, which for various complicated reasons people don't seem to want to do.
So if you're not willing to have more young people, either by birth or by immigration -- which we're not doing -- you're not going to have enough people to do the jobs.
The A.I. technology today is not powerful enough and not right enough to put in a life-critical decision.
You don't want an A.I. system flying the airplane.
You want the human flying the airplane with an A.I. system giving advice.
With doctors, you don't want the system making the health decision.
You want the doctor to tell the A.I. system: scan everything, tell me what's going on, give me your assessment, and I'll think about it.
>> You say in the next couple of decades, but what about the next decade?
Are you -- Am I hearing that you're actually not concerned about A.I. disrupting jobs in the next decade?
Because there is a lot of research from MIT and Oxford and McKinsey that suggests quite a number of jobs actually will be displaced in the next decade.
>> We're talking about humans -- so, the problem is that the people will need to be retrained.
So, the jobs will be there, but the right people won't be there.
>> You wrote in the book, "Societies need to be ready to supply the displaced not only with alternative sources of income but also with alternative sources of fulfillment."
What do you mean by that?
>> Well, we know that humans need meaning.
And that meaning often comes from work.
And so we're going to have to find ways to give people meaning in this new, more digital world.
My own view is a lot of that will come from the tools themselves.
The computers are getting good enough now that they can really serve to make you smarter and more capable, and, I think, probably more relevant to the society around you.
>> Your coauthor, Dr. Henry Kissinger, appeared on the original "Firing Line" with William F. Buckley Jr. many times.
And in 1975, he spoke about preventing nuclear war with the Soviet Union.
Take a look at this clip.
>> In the world of nuclear superpowers, in the world in which American power is no longer as predominant as it was in the late 1940s, it is necessary for us to conduct a more complicated foreign policy without the simple categories of a more fortunate historical past.
>> I've heard you say that the reason we're alive today is because of the doctrine that Dr. Kissinger and his colleagues pursued during the Cold War.
And you've also said that when it comes to the threat of large-scale A.I. warfare, the time to act is now, before we have real tragedy.
What kind of policies can prevent this?
>> So, I was part of a commission that was created by Congress -- and indeed I was the chairman of it -- called the National Security Commission on Artificial Intelligence.
And we looked at all this very carefully.
What we concluded is that today the United States is ahead of China, but not by much, and that the United States needs to get ready for A.I.-enabled conflict and security.
We don't have a doctrine for how to deal with A.I.-enabled warfare because it will happen so quickly.
Our military spends its time in what is called the OODA loop -- observe, orient, decide, act -- and that loop is organized around human decision-making time.
All of the things that Dr. Kissinger and others developed in the 1950s, including mutually assured destruction, containment, all of those doctrines, are under enormous threat because of the speed problem.
We just don't have time.
And the algorithms, at least today, are not precise enough to know exactly what they're going to do.
They need human oversight.
>> So, you know, I'm glad you mentioned China.
I wonder, as the former CEO of Google, the company that famously quit China years ago, what responsibility do U.S. companies have that have invested in A.I. in China and in the expansion of China's surveillance state?
>> What China's doing with surveillance is really a violation of the way we think of human rights.
It's really a surveillance state.
And it's not okay for American firms to have helped there.
In practice, the collaboration between China and the U.S. over the higher end of computer science and information is going to stop.
And the reason is not because we don't like what they're doing.
It's because China does not want our liberal democratic information into their information space.
China will literally prevent all of that information from getting in.
That's why YouTube was banned.
That's why Google was banned.
That's why Twitter is banned.
That's why Facebook is banned.
And you should expect more of that.
>> Let me ask you about climate change.
Is there a role for A.I. in climate change?
>> There is.
So, in the first place, climate change is not climate change.
It's climate destruction.
And A.I. is useful in a number of ways.
We have funded a series of research projects on climate modeling, on plants that use less fertilizer, and on changing the farming ecosystem so that there is, essentially, more CO2 absorption.
I can go on.
Over and over again, A.I. is the understanding, or the means, by which we will adapt these systems.
We have to deal with climate change.
The reason we have to do it now is that every year the compound effect of the damage gets harder to reverse.
>> The critics, Eric, as I know you know, say that tech and A.I. are contributing to the climate crisis, and that the tech sector's estimated 2020 global carbon footprint, comparable to that of the aviation industry, was larger than that of Japan, the fifth-largest polluter in the world.
So, how do you square that?
>> So, I think you're correct in criticizing the tech industry, but what is not correct is to say that the tech industry is not doing something about it.
The leading solutions to most of these problems are probably going to come out as byproducts of the tech industry.
Most climate-change measures for businesses are good for business, because the cost of energy is not going down.
>> Eric Schmidt, the book is "The Age of AI: And Our Human Future."
Thanks very much for joining me on "Firing Line."
>> Thank you, Margaret.
Thank you so much for "Firing Line."
>> "Firing Line with Margaret Hoover" is made possible in part by... Corporate funding is provided by... ♪♪ ♪♪ ♪♪ ♪♪ >> You're watching PBS.