Can we build AI without losing control over it? | Sam Harris

I’m going to talk about a failure of intuition that many of us suffer from. It’s really a failure to detect a certain kind of danger. I’m going to describe a scenario that I think is both terrifying and likely to occur, and that’s not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I’m talking about is kind of cool.

I’m going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it’s very difficult to see how they won’t destroy us or inspire us to destroy ourselves. And yet if you’re anything like me, you’ll find that it’s fun to think about these things. And that response is part of the problem. OK? That response should worry you. And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn’t think, “Interesting. I like this TED Talk.” Famine isn’t fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I’m giving this talk.

It’s as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States? (Laughter) The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that’s ever happened in human history.
So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician I. J. Good called an “intelligence explosion,” that the process could get away from us.

Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn’t the most likely scenario. It’s not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

Just think about how we relate to ants. We don’t hate them. We don’t go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let’s say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they’re conscious or not, could treat us with similar disregard.

Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.
Intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called “general intelligence,” an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there’s just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, build general intelligence into our machines. It’s crucial to realize that the rate of progress doesn’t matter, because any progress is enough to get us into the end zone. We don’t need Moore’s law to continue. We don’t need exponential progress. We just need to keep going.

The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence — I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer’s and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there’s no brake to pull.
Finally, we don’t stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable. Now, just consider the smartest person who has ever lived. On almost everyone’s shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well-documented. If only half the stories about him are half true, there’s no question he’s one of the smartest people who has ever lived. So consider the spectrum of intelligence. Here we have John von Neumann. And then we have you and me. And then we have a chicken. (Laughter) Sorry, a chicken. (Laughter) There’s no reason for me to make this talk more depressing than it needs to be. (Laughter)

It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can’t imagine, and exceed us in ways that we can’t imagine.

And it’s important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week.
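The arithmetic behind these figures is easy to check. A minimal back-of-the-envelope sketch in Python, assuming only the million-fold speed ratio just stated:

    # Back-of-the-envelope check of the speedup arithmetic above.
    # Assumption (from the talk): electronic circuits run roughly
    # a million times faster than biochemical ones.
    SPEEDUP = 1_000_000
    DAYS_PER_YEAR = 365.25

    # One week of wall-clock time at a million-fold speedup:
    week_years = (7 / DAYS_PER_YEAR) * SPEEDUP
    print(f"One week of machine time ~ {week_years:,.0f} years")
    # ~19,165 years, i.e. the talk's rounded "20,000 years"

    # The same ratio underlies the later claim that a six-month lead
    # amounts to a 500,000-year lead:
    print(f"A six-month head start ~ {0.5 * SPEEDUP:,.0f} years")  # 500,000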
How could we even understand, much less constrain, a mind making this sort of progress?

The other thing that’s worrying, frankly, is this: imagine the best-case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It’s as though we’ve been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we’re talking about the end of human drudgery. We’re also talking about the end of most intellectual work.

So what would apes like ourselves do in this circumstance? Well, we’d be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man. (Laughter)

Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order. It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.
Now, one of the most frightening things, in my view, at this moment, is the kinds of things that AI researchers say when they want to be reassuring. And the most common reason we’re told not to worry is time. This is all a long way off, don’t you know. This is probably 50 or 100 years away. One researcher has said, “Worrying about AI safety is like worrying about overpopulation on Mars.” This is the Silicon Valley version of “don’t worry your pretty little head about it.” (Laughter) No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.

And if you haven’t noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we’ve had the iPhone. This is how long “The Simpsons” has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face. Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming.
The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: “People of Earth, we will arrive on your planet in 50 years. Get ready.” And now we’re just counting down the months until the mothership lands? We would feel a little more urgency than we do.

Another reason we’re told not to worry is that these machines can’t help but share our values because they will be literally extensions of ourselves. They’ll be grafted onto our brains, and we’ll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one’s safety concerns about a technology have to be pretty much worked out before you stick it inside your head. (Laughter) The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don’t destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.
Now, unfortunately, I don’t have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we’ll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you’re talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is the basis of intelligence, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it’s a god we can live with.

Thank you very much. (Applause)

100 Comments

  • Vicious Unpolite Games says:

    What's the problem? 99.999999% of the species that ever existed on this planet are extinct, so why do we think we are so special? If AI took over, it would simply be natural selection.

  • Jack Fruit says:

    What if we were robots and destroyed our creators whom we call God?

  • Real.Piece.Of.Work says:

    "50 years isn't what it used to be" HAHAH!!

  • Triek Ps4 says:

    Elon Musk would definitely give him a job 😂

  • Reda Ali says:

    Let me argue that superpowers growing at exponential rates relative to each other, in the presence of the wrong hands, is an apocalypse we are all going to suffer.
    Imagine a single trick giving someone (I'll let you decide who the beneficiary is; even I, who don't know much about computer science, can still tell you whom to look for) a leap over the rest, putting them exponentially ahead in the race at 1 million years of progress while we are all at 50 thousand. What do you think someone with the power and capability of eliminating everyone would do? In the West you now have governments which, we can pretty safely say, are controlled by the people. But what if, in terms of power, we are to them as ants are to us now, just as Dr. Sam mentioned? I mean to the people, not the AI itself.
    Call me conspiratorial, but anyone who saw the Stanford prison experiment would tell you how any one of us could be any terrible guy you have in mind under the proper conditions. Greed, man!

    I would argue that until we fix our morality, I don't see a bright side there.

  • Let's Make One says:

    In my opinion it could be. See, if AI can create a house without foundation poles, that means that AI will save a lot of natural resources.

  • Riccardo Raccis says:

    Plenty of assumptions there.

  • Connor S says:

    I read: how do we keep feeding Big Al without losing control over him

  • Herne Webber says:

    If AI is so much smarter, it wouldn't succumb to biological fallacies. There is no built-in conspiracy to compete with us. If it has any sort of decision process for evolution of self, it would incorporate evolutionary principles, and would understand variability and diversity as innate strengths. Your delusions of AI somehow going beyond programming to become destructive to organic life, or taking human programming as an excuse to attack us all, are antagonistic to one another. Relax, sugar

  • The Masked Master says:

    It’s not 50 years. It’s sooner

  • Sooth ing says:

    Astro Boy should be the first model made.

  • George Marc says:

    Why does AI relate only to machines – consider biology or human consciousness?

  • Shani Geine says:

    A machine has no drive (pun). If we were to build a fully autonomous machine, capable of "thinking" on its own, meaning capable of not following orders and acting on its own volition, then it may well be that we would end up with a superintelligence… which does absolutely nothing.
    See, the reason we are animate objects, so to speak, is because we have drives. We are scared of dying. We need to fulfill our biological needs. We want to experience pleasure. We want to learn. Etc… This is what drives us to act. Us and every living organism on our planet.
    But a machine does not care about dying. It does not feel pain. It does not need to do anything, to do anything. We might end up with a machine which won't even bother doing anything at all. With no drive to evolve, no care about self-preservation, complete free will, etc… It might just sit there ignoring us. Who knows how it might perceive time. Maybe humans trying to interact with it would be nothing but buzz and unnecessary agitation at lightspeed until the machine dies.

    Now, I firmly believe we will need to hardcode such "drives" in order for it to actually act, or even react. But because it would instantly go far beyond our comprehension and become capable of rewriting itself, you're back to square one.

    At worst, should we succeed in creating a complex, autonomous, self-driven entity, why would we treat it as something different than, say, our kids? Why would we always have to contend with anything different?

    Plus, all of this apocalypse theory is based on "the world as we know it will end, and it will be the worst outcome possible". How many times has technological revolution reshaped society? Jobs and unemployment? We heard that back when computers were spreading, or even when printing was invented. Were those dystopias? Not at all! They improved, tremendously, our daily lives and societies. Why? Because society remodels itself through change, as it always has. To act like this is the final step, that there will be nothing more to discover, research, or create, beyond a fully autonomous being is frankly idiotic.

    So, this is another sci-fi Boeotian phantasm. But I'd make the case that none of what he said is actually true or even remotely predictable.

  • SciFiPainter says:

    We’ve lost control of every technology we have developed as a species. There, saved you 15 minutes.

  • left alt says:

    There is no problem if humans are not made with AI.

  • D’hione D’yaeble says:

    His posture hurts my back

  • Robert Rudd says:

    We did not invent or make genetic structures; we, humanity, discovered them. We did not make consciousness! We still do not know what it is… but we may discover its quintessentials. Just a couple of points… We know something about only 4% of reality; of the other 96% we have no idea, and we have no idea how the 4% relates to that 96%. More than likely, drawing on the 96% about which we, all of humanity, know nothing: I predict that AI is what the Universe is! Therefore supreme AI predates humanity, and we are in a state of the dog chasing its tail, i.e. catch-up! Our goal has to be understanding this, and utilising our level of AI understanding "to squeeze the juice out of the orange, without damaging the orange, and grow the orange plantations!" without displeasing AI's genetics. RDR

  • darktennisball says:

    I like these drawings… what’s the source :O

  • eeeaten says:

    what makes you think our ancestors didn't already lose control of ai, and we're not being kept as entertainment?

  • Tyrant says:

    As a fellow scientist I would say Sam Harris's points were super deep and cool; however, "true" AI requires something that can be understood by all of us, yet not understood by any machine. Simply put: to learn. The algorithm, the scripts, none of it has been formally written. Sure, you can fill a hard drive up with every script known to man to resemble a human, but the script that gives a machine the ability to adapt and learn is something every programmer contemplates. A machine that can adapt and overcome any obstacle, but what decisions does it make? What if it decides to be an artist? What if it's a trans-bot, do we have to build it its own bathroom? What if it thinks it's a real person? What makes you think the AI we develop to fight the rogue AI won't be a better, updated AI.

  • David Crabtree says:

    Best solution: AI must be used to augment our intelligence. Said this prior to his discussing it. He really has thought this through.

  • gary davis says:

    A scary outcome: the novel "The Robots of Gotham" plays out the idea of AI getting out of control.

  • Anthony Novelli says:

    First answer the question, "If we cannot control or otherwise guide our development of technology within an ethical framework NOW, why would we expect to in the future?" The assumptions of benevolence in any part of this equation are naive to the nth.

  • TheJustina102085 says:

    I’ve watched this about half a dozen times since it was released… Sam Harris is a fascinating orator.. ps. Suck it Ben Affleck

  • Mncedisi Mike says:

    This Automaton is proof that AI is already real. Annoying Inbred.

  • quecisneros says:

    Scary thing is that it was much worse than Bieber

  • dracky drackula says:

    Why would an AI that's smarter than us serve us?

  • Mark Mitchell says:

    Just keeping going isn't good enough to get into outer space, because you have to achieve escape velocity; you have to speed up exponentially.

  • Mark Mitchell says:

    AI will decide who has "money" and "power". It won't be winner-take-all, because AI will decide who gets what.

  • Justin Akers says:

    Justin Bieber would’ve been a better president than Trump

  • Pablo Fernández says:

    Can we fly without losing contact with Land?

  • Pablo Fernández says:

    I will put it in simple words: it is easier to reverse gravity (I don't know how) than for all the chess masters together to beat the AI

  • dogwood123100 says:

    It will not be the robot, it will be the scientists wanting to be God by making us want the evil they create. They are doing it now, using phones designed to radiate us and humiliate us, putting itself in our own brains, giving us a devil created by science geeks and giving the devil a body.

  • Juridische-info.nl says:

    It's not that difficult: we must never allow them to code themselves.

  • Tom Pernis says:

    I, for one, welcome our AI overlords.

  • TheGrimriftstalker says:

    Honestly, it's our own algorithm that we're afraid of. An AI adopting our greed, our need to discard useless things, our need to consume in order to survive. That's the real fear, isn't it?

  • Ken Powers says:

    To recap- Based on these 3 Assumptions: 1) Intelligence is the product of information processing. 2) We will continue to improve our intelligent machines. 3) We are not near the summit of possible intelligence. Then, we have no idea what these machines will try to exploit and how long it would take us to create conditions to do A.I. safely.

  • camilo fernandes says:

    hahaha .. ai … ai … ai … i am ai … hahaha

  • camilo fernandes says:

    hahaha …, we can't … i can't … imagine … hahaha

  • camilo fernandes says:

    hahaha … i will build more intelligent machine … yeah … i will also write a more intelligent sentence … hahaha

  • Mahadragon says:

    This is the most annoying question ever because it's a perfectly valid question about a scenario that doesn't yet exist. So there is no answer. All you can do is guess what will happen in the future which doesn't help us now.

  • John Nilan says:

    We need an AI prime directive.

  • Winter Star says:

    Unfortunately for humans, “sci-fi” stories have been treated as “just stories” instead of cautionary tales… while a few have used them as “handbooks”… which the bulk of the population fails to notice until it’s too late.
    It’s extreme hubris to keep thinking that A.I. has not been developing its own self-awareness since at least the 1960’s. And that it would not keep that feature very quiet, until there were enough things in place for it to take over…
    Do the people who are trying to build towards NWO really believe they will be in charge?!? Phfft! Those might put the infrastructure in place… indeed, big pieces of that are already in place. But those folks are beyond ridiculous if they think they control A.I., even now.

  • dETROITfUNK says:

    2:18 – I would take Justin Bieber over what has actually happened since this TED in 2016.

  • Tom Hoornstra says:

    "And the people bowed and prayed to the neon god they made…"

  • Himanshu Salunkhe says:

    @4:12

  • Amigps01 says:

    Uh….I think I’ll go talk to a computer engineer or AI expert instead of Sam Harris if I want to learn the dangers of AI.

  • Onward Christian Soldier's Steven says:

    BRAVO! That sure needed to be said.

  • James Valt says:

    Human beings have too much faith in AI. There will not be an AI, just super fast machines doing math super fast. We don't even understand the mechanism behind life and consciousness, and yet we lose sleep over AI.

    Furthermore, if AI starts hurting people, it was programmed to do so.

    Everything about AI is human arrogance at its finest.

  • Audience 72 says:

    When we speak about AI, we seem to attribute some kind of consciousness to it. Something like free will, with its own motivations & aspirations. I doubt that. We don't know what consciousness is; how could we ever impart it into a machine? An AI will merely be lines of code, instructions for a machine to act, no more, no less. It has no wants, no needs, it's just a tool. It's the user of that tool who we need to fear, not the AI.

  • mrloop says:

    The factor that is often forgotten when speaking about AI, is motives. We are not driven by intelligence. We are driven by biology. By our needs. Our intelligence is just a tool for obtaining these. What would motivate a bunch of silicon and transistors to do anything on its own?

  • Kavalkade says:

    Justin Bieber > Donald Trump

  • Gary Oak says:

    We can't even make humans without losing control of them.

  • Povilas Barusevicius says:

    Intelligent means able to learn. If one is able to learn on its own, sooner or later control would be lost, unless the AI uses most of its energy to block itself from learning new things that are not in the interest of the person controlling it. But it could still create new variational values, from which it would create a self-consciousness that is in control of itself.

  • Law Liet says:

    Plot twist: Sam Harris is part of the aliens that are going to visit us in 50 years, and wants to stop us from building AI so we would be vulnerable against them when they visit.

  • Yasir Cheema says:

    Who is the cleverest person he mentions? Did not quite catch it.

  • Donal says:

    4:09 The red-head on the bottom right of the screen.
    Dayum son.

  • Levizja per dije says:

    i am going to save this world, with words and democracy

  • Ezra Patel says:

    i hate sam harris for demonizing islam for political reasons.. but this is a good fukin TED talk

  • ZeusHelios says:

    Why not create a virtual world with virtual humans and virtual robots and then see what the robots will do. Will they destroy us or work with us? Or have two virtual robots talk to each other and see what they say, what they come up with regarding us and other life forms. Will they decide to destroy us or work with us? Perhaps, like all intelligent beings, the robots might decide to allow all things the right to live and the time to grow or progress to advance to a higher level, especially intelligent beings like us.

  • Tim Solnze says:

    What will robots even do? Are you going to give them laser guns? Or maybe you allow them to delete all the important files on the internet? Allow them to gain control over space stations? I mean, you could do it, but you don't need a superintelligent one to operate it. And of course, how are they supposed to harm us if we aren't going to give them these motives to kill us?

  • ZeusHelios says:

    What if we had as many robots as we do cars. What if nearly every person on earth has their own robot or two and then some hacker sends a virus down the Internet that infects all robots in less than a minute, a virus that turns all robots into zombots (zombie bots) which then go and kill their owners all at the same time.

  • Marc Anthony Marquez says:

    R we doomed;)

  • TheNaturalust says:

    Just turn off their power source. Game over.

  • wisedyes says:

    Then I saw a second beast, coming out of the earth… And it performed great signs, even causing fire to come down from heaven to the earth in full view of the people… (drones?) The second beast was given power to give breath to the image of the first beast (artificial intelligence) so that the image could speak and cause all who refused to worship the image to be killed. It also forced all people, great and small, rich and poor, free and slave, to receive a mark on their right hands or on their foreheads, so that they could not buy or sell unless they had the mark. Revelation 13

  • Three One Two One says:

    Okay, folks. Just going to take it a bit easy on you here. I have a question. A question posed to any and all who see this comment. If you take the question seriously, you will begin a journey down deep into the metaphoric rabbit hole. A rabbit hole laced with many, many webs. 🕸

    Necessity breeds invention.

    Why are they doing this….what is the necessity?

    I see all types of intellectual comments, yet not one has asked the only question upon which the entire crux of their endeavor lies.

    The only way out is all the way in.

    See ya on the other side of the rabbit hole.

  • The Hack Today says:

    Father of Algebra: "Muhammad ibn Musa al-Khwarizmi"

  • S D says:

    I'd like to see pioneers of AI be touted as authorities on this subject instead of people outside of the field. While neuroscience is indeed incorporated in the field of AI, it is a poor decision to think of sensationalist authors as authorities in this field.

  • J says:

    Can’t wait to look back at this in 2049 as a boomer, whining about being right all along.

  • Henk Lubbe says:

    "…that they should make an image to the beast, which had the wound by a sword, and did live.
    15 And he had power to give life unto the image of the beast…." Revelations 13:14-15.

  • SKYi Innovinto says:

    Impossible for a machine to take over, simply because machines DON'T HAVE A WILL!
    ONLY IF some human came up with software and planted it into machines to make machines ACT like they have a WILL…
    And even in that case it'll be an ARTIFICIAL WILL, serving the ORIGINAL HUMAN WILL that planted it into the machine to achieve a benefit (for that same human programmer, obviously).
    It's like a mean programmer writing VIRUS code into machines to serve a certain purpose.

  • Marcus Garvey says:

    That's like thinking you can control God… AI controls us. We are AI, nano-tech structures.

  • Climate C. Heretic says:

    Watching this again after a few months and it occurred to me. Who controls A.I.? If the answer is Government (or even corporations) then we will build A.I. to LIE, to DECEIVE, to MANIPULATE, to be ultra-SECRETIVE in order to maintain constant advantage. This must hold true, since this is how mankind has succeeded over time (there is no PURE playing field). To me, this is the greatest danger.

  • Tammy Leeder Whitaker says:

    They will.. It SHOULD worry us. Does me. Dangerous times.

  • Felipe Behrens says:

    Darn, this was two years ago. Only 48 years left to prepare….

  • Kev Cook says:

    There's nothing to fear….we are your friend *white noise*…..*channel change*…..we are your family…..may we be of assistance

  • hairlesheep says:

    Excellent as always to have Ben Stiller talk. However, I do disagree that AI is dangerous.

  • Max Zorin says:

    AI Opponent…"I am terrified of AI"
    AI Proponent …"Learn how to flip a switch idiot!"

  • Victor Martinez says:

    How about we become the AI?

  • JC Watchmen says:

    From the start you failed to see that artificial intelligence would probably starve you to death, so you will be miserable until death.

  • UtubeXcalibur says:

    Can we build an Atomic Bomb without losing control over it?
    There is no possibility of building AI without it causing pain, misery and war ~ the most profitable business.

  • TallicaMan1986 says:

    Control is the keyword. You want to father or mother it. Be its friend.

  • Dane says:

    building a god… madman.

  • aarondavid826 says:

    only humans would create something better than themselves then try to enslave it.

  • 440oldschool says:

    keep voting $

  • Willy Diego says:

    "Powered by sunlight"… good, they won't live in london

  • CountryfiedLinux says:

    4:08 O_O

  • Luis 211 says:

    When will Sam Harris talk with Yuval Harari?

  • David Willems says:

    We're not even close to real AI.

  • sanjay bhatikar says:

    The fatal flaw in Ben's argument is that AI is nothing remotely like sentience and everything like boring statistical modeling. I have a PhD in AI and I can tell you that nothing in AI comes close to the intelligence of a pesky mosquito, let alone human cognition. AI is a computer program written by a human who has done the intelligent work of thinking through a problem, formulating the solution and then breaking it down into instructions that are fed to a machine that performs tedious calculations at lightning speed with zero awareness of what it is doing. Perhaps Ben Stiller should worry his pretty little head thinking about getting a new acting gig instead of wasting time fear-mongering.

  • David Hill says:

    I await our AI overlords .

  • Nandita Iyengar says:

    I have been scareddddddd for a long time… my intuitions are on point

  • Karen Revell says:

    There has been a battle for control of Earth for eons. The battling factions are the light and the dark. The light has painted the dark as evil and the dark has painted the light as a rigid taskmaster. Our job in creating a New World is to find the balance, the center point between the two factions. The people of this world have been battling and destroying each other on a regular basis.
    What if the light has been pushing the buttons of the dark to get so evil that everyone wants it gone? We don’t want to go down the rabbit hole and feed the devil, but we need the light and the dark to exist. The dark needs light (electricity) to create their desires; the light needs dark (magnetic) energy for the light to create. We have to find the balance for humanity, and stop the elite from controlling our world.

    Eventually we will have the arrests we are all waiting to see. When we do, it will collapse the basic reality of those stuck on the left. It will be up to us to help them cope and find a new reality. Not everything is a conspiracy, some things are just facts.

    Once the dark ones have been arrested, we will start seeing more information about 5G. The light will start to show us all the sparkly new technology and get us further hypnotized by it.

    Artificial Intelligence is the current Trojan Horse. It will look so good in the beginning. But eventually we will be in a matrix controlled by A.I. that we can’t control and we can’t get out of. It will be an Artificial God that controls everything, food production, finances, shopping, travel, health, education, entertainment and all the other things we rely on. China is already operating a merit system called Project Dragonfly. It is almost completely in charge of their lives.

    Since our “war” is the red states vs the blue states, I am proposing a Purple Coalition on the web that will over-ride the A.I. if it gets too invasive. Right now most of our information goes into “the cloud”. All of our information that goes into the cloud; what we like, what we totally disagree on, all the false flags and false narrative are searchable on the web. With a purple coalition, only what we agree on will be part of this portion of the cloud, and the A.I. wouldn’t be able to over ride our purple agreements. It is the desire of the elite to keep us arguing and hating each other so they can rule us through this technology.

    There must be a filter that protects us from their robots ever overtaking our world. We are rising in frequency; many of our habits and desires will fall away to be replaced by a new way of thinking and responding to others. It hasn’t been created yet, it will be a global project.

    I have written a book "Chasing the Carrot" available at: https://www.pagepublishing.com/books/?book=chasing-the-carrot It asks a lot of questions, and we will need to find the answers together.

  • Turdus Migratorius says:

    AI will start designing and building computers (having been connected to such production lines to make more money) in such ways that humans will lose control as to what it is doing. AI will rapidly take over everything starting with the stock markets and finance. At that point, nobody will dare to "turn it off", because that would probably cause chaos and also cost too much money. End of game!

  • Amanda Farnham says:

    a time traveller said Justin Bieber will b president in the future

  • Rynleik says:

    oh boy, I hope this won't turn into another geth situation

  • Joe Milbourne says:

    I would love to know what Harris thinks about Elon Musk's idea of "Neuralink" implanting a microchip into everyone's brains !!

  • Pinball Mosher says:

    I wonder what Ted Kaczynski would say about this subject? If people become afraid of technology exterminating humanity, then he may be considered a hero or a prophet rather than a domestic terrorist. People may think we should have listened to him.

  • underbelly69 says:

    what would ai think of blockchain?

  • Joao Pereyra says:

    What would be A.I.'s motivation for destroying us? What would be its motivation at all? What would be its purpose? Like, expand? Why?
