
Artificial superintelligence has the very real potential to destroy humanity


Original post by VannR
Though such nihilism has comedic value, I would argue that if we are to compare AI to ourselves, we have no reason to think that it will not be at least as ethical as a human being, if not more so by orders of magnitude. Human beings value history and ethics - we value our lineage and our past and can be driven to great actions through nothing but pure sentiment. Why would a strong AI be any different?

P.S. to say that you could even begin to understand the conclusions that a super-advanced AI would make about the teleology of life and consciousness is either an oversight on your part, or possibly the most arrogant thing anyone has ever said :tongue:.


I did mention it was a guess... Why do you think true AI would be like us anyway? We are ethical partly because of empathy and because it's in our interests, but why would an AI have any interests or emotions?
Original post by VannR
Could you point me to some of this literature? (this is not rhetorical - I am genuinely interested).

Does this point not ignore the notion that a strong A.I would be super-intelligent, meaning that if it placed any moral value on human life it simply would not do something which went against its value system since it is much more aware of the wider implications of any action than we could ever be?


https://intelligence.org/ has quite a bit of stuff. I also (separately) recommend the book Superintelligence, by Nick Bostrom.
I think we're safe from the machines. For now.

There are even some theories that to truly define consciousness we would need a theory of Quantum Gravity. I don't think anyone can ever truly replicate a biological 'mind' in a computer - we can just create very clever 'clones' based on very precise coding.

There won't ever be such a thing as SkyNet or other 'superintelligences', as AIs always have had and always will have very precise goals to achieve. The only threats come if people create these 'false intelligences' with the sole purpose of achieving military domination.

And in a world where computing is becoming pretty much a central part of society, it won't be long before it becomes the main tool for attacks by terrorist organisations.

It's not the machines we need to fear.

It's the people.


Damn, I need to sleep o.O
(edited 8 years ago)
Reply 63
Original post by miser
Sure. The blog Wait But Why has a great (and fairly neutral) summary of the subject available here: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

If you want to hear from academics themselves, you can YouTube the big names involved - most of them have academic talks available on the subject. Nick Bostrom (director of Oxford's Future of Humanity Institute) has some good talks and has published a book called Superintelligence which specifically talks about some of the risks. Ray Kurzweil (a futurist and computer scientist who works for Google) is more optimistic and I believe also has talks and a book.


Kurzweil's work was my introduction to the subject a couple of years ago now, but I have not looked at Bostrom yet. My father is an AI researcher and I'm getting into computer science in September, so this is a bit more than a hobby for me, more like a future career. I tend to hate on the reductionists, e.g. Dennett, but I guess Bostrom may have something up his sleeve.
I'm going to chip in here because I actually know what I'm talking about. The whole AI takeover scenario is so overplayed - it could only happen if we let superintelligent systems have full control over physical equipment, which isn't going to happen. A computer can be completely limited by having a partition which it can't access, which contains limits for its behaviour. There would be certain things which the computer, no matter how intelligent, simply couldn't do. Also, just because a system is intelligent, it doesn't mean that it is a world-class hacker. The computers couldn't possibly learn how to hack and program on their own. So if we created a supercomputer which was designed to be a surgeon robot, it would always be a surgeon robot. It would be isolated and left to do its work. It can learn new things and adapt its behaviour, but it's not going to learn how to hack itself and remove its software limitations. It's simply not possible, unless you directly teach the AI how to write code.
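To make the 'hard limits' idea concrete, here is a minimal sketch (in Python, with invented names such as ALLOWED_ACTIONS and SurgeonAgent - none of this is a real API) of a fixed gatekeeper that only executes whitelisted actions, whatever the agent proposes. Whether such a gatekeeper could really be kept out of reach of a genuinely superintelligent system is exactly what other posters in this thread dispute.

```python
# Hypothetical sketch only: the agent may propose anything, but a gatekeeper
# that sits outside the agent's control executes only whitelisted actions.
ALLOWED_ACTIONS = {"move_arm", "make_incision", "suture", "report_status"}

class SurgeonAgent:
    """Stand-in for a learned policy; it can only *propose* actions."""
    def propose_action(self, observation: str) -> str:
        return "make_incision" if observation == "tumour_located" else "report_status"

def execute(action: str) -> None:
    # The hard limit: anything outside the whitelist is refused outright.
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"Action '{action}' is outside the hard limits")
    print(f"Executing: {action}")

agent = SurgeonAgent()
execute(agent.propose_action("tumour_located"))   # allowed
# execute("open_network_socket")                  # would raise PermissionError
```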

Just to clarify, I used to develop partially-intelligent AI robots which you could talk to, and expand your relationship with (just like Siri, where the AI learns about you and gets to know you better). I just did it as a little hobby, but I learned a lot about how AI systems evolve.
Original post by viddy9



It's not that the AGI would be inherently evil and simply wish to inflict unnecessary suffering on other sentient beings due to its own greed. No, unlike humans, the AGI would not be shaped by natural selection.

However, every Artificial Intelligence that we've created so far is proficient at carrying out a task, whether it's beating humans at chess or driving around the roads, whilst being inept at performing any other task.

1.But, any exogenously specified utility function, or goal, that we give an Artificial General Intelligence would almost certainly lead to disaster.

Let's say that utility is specified as "cure cancer". The AGI will then resort to extreme measures to cure cancer - it may try to acquire resources to achieve its goal, and may try to maximise the probability of its own continued existence. How would it do this? Kill everyone else without cancer, because it reduces the probability that it will be deactivated.

Or, we could propose the existence of an AGI whose goal is to maximize the number of paperclips in its collection. If it has been constructed with a roughly human level of general intelligence, the AGI might collect paperclips, earn money to buy paperclips, or begin to manufacture paperclips.

Most importantly, however, it would undergo an intelligence explosion: It would work to improve its own intelligence, where "intelligence" is understood in the sense of optimization power, the ability to maximize a reward/utility function—in this case, the number of paperclips. The AGI would improve its intelligence, not because it values more intelligence in its own right, but because more intelligence would help it achieve its goal of accumulating paperclips. Having increased its intelligence, it would produce more paperclips, and also use its enhanced abilities to further self-improve. Continuing this process, it would undergo an intelligence explosion and reach far-above-human levels.

It would innovate better and better techniques to maximize the number of paperclips. At some point, it might convert most of the matter in the solar system into paperclips.

This may seem more like super-stupidity than super-intelligence. For humans, it would indeed be stupidity, as it would constitute failure to fulfill many of our important terminal values, such as life, love, and variety. The AGI won't revise or otherwise change its goals, since changing its goals would result in fewer paperclips being made in the future, and that opposes its current goal. It has one simple goal of maximizing the number of paperclips; human life, learning, joy, and so on are not specified as goals. An AGI is simply an optimization process—a goal-seeker, a utility-function-maximizer. Its values can be completely alien to ours. If its utility function is to maximize paperclips, then it will do exactly that.
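As a rough illustration of the 'optimization process' point above, here is a toy greedy maximiser (all names and numbers are invented) whose utility function scores only paperclips. Anything not written into that function, such as the 'unrelated_value' stand-in for everything else we care about, simply never influences its choices.

```python
# Toy illustration of a single-objective optimiser; every name and number here
# is made up for the example.
world = {"paperclips": 0, "wire_reserves": 100, "unrelated_value": 10}

def utility(state: dict) -> int:
    # Only the paperclip count is scored; nothing else is represented,
    # so nothing else can be protected or traded off.
    return state["paperclips"]

def apply_action(state: dict, action: str) -> dict:
    new = dict(state)
    if action == "make_paperclips" and new["wire_reserves"] > 0:
        new["wire_reserves"] -= 10
        new["paperclips"] += 10
    elif action == "preserve_unrelated_value":
        new["unrelated_value"] += 1   # worth nothing under utility()
    return new

for _ in range(10):
    # Greedy choice: pick whichever action yields the higher utility.
    best = max(["make_paperclips", "preserve_unrelated_value"],
               key=lambda a: utility(apply_action(world, a)))
    world = apply_action(world, best)

print(world)   # wire reserves exhausted; unrelated_value never increased
```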

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Essentially, we need to be able to give any AGI a utility function which encompasses human values, as some on this thread have already alluded to.

But, just like the term British values, the term human values is incredibly ambiguous. Some humans, such as myself, are utilitarians; most believe that morality comes from a supernatural being; a few are deontologists.

I support the work of Oxford's Future of Humanity Institute and would strongly recommend its founder Nick Bostrom's book Superintelligence, as well as Stuart Russell's leading AI textbook, Artificial Intelligence: A Modern Approach. The Centre for the Study of Existential Risk is a similar organisation which looks, in part, into the dangers posed by Artificial Intelligence, as is the Future of Life Institute.

In the future, I plan to donate to the Machine Intelligence Research Institute, which is also conducting research into the problem posed above - how do we create a Friendly AI with a utility function that won't result in catastrophic effects? They are, quite rightly, looking into rationality and cognitive biases, as well as utilitarianism, in order to solve the problem.

Humans aren't rational and don't have a consistent moral code, so we need to study rationality and the best bet for a universal moral code, utilitarianism, in order to ensure that AI does not result in catastrophe.



For the reasons stated above, agreed.

Artificial General Intelligence is one of the biggest threats to our way of life, as well as to our lives.


Can't you synthesise your ideas better? That's a whole load of unstructured text to read.

1. What makes you think so? a) How can you know how an intelligence (more intelligent than you) would react in a particular scenario? I think this is a belief of yours.

2. Again, see point 1. There is no reason to think that a superior intelligence would resort to extreme measures to reach a goal. That's BS. If you know a thing or two about AI, you know that even simple AI can optimise its behaviour to reach a goal while avoiding paths that pose dangers to humans (a minimal sketch of this kind of cost-penalised planning is at the end of this post). It's like thinking that a Google car will take any path to your destination without avoiding routes that are dangerous to humans. As I said, nonsense.

3. " Or, we could propose the existence of an AGI whose goal is to maximize the number of paperclips in its collection. If it has been constructed with a roughly human level of general intelligence, the AGI might collect paperclips, earn money to buy paperclips, or begin to manufacture paperclips." What is the point of this? This is not relevant to anything I said.

4. "Most importantly, however, it would undergo an intelligence explosion: It would work to improve its own intelligence, where "intelligence" is understood in the sense of optimization power, the ability to maximize a reward/utility function—in this case, the number of paperclips. The AGI would improve its intelligence, not because it values more intelligence in its own right, but because more intelligence would help it achieve its goal of accumulating paperclips. Having increased its intelligence, it would produce more paperclips, and also use its enhanced abilities to further self-improve. Continuing this process, it would undergo an intelligence explosion and reach far-above-human levels." Another pointless paragraph. This tackles nothing of what I said.

5. " It would innovate better and better techniques to maximize the number of paperclips. At some point, it might convert most of the matter in the solar system into paperclips." I fail to see the point of this paragraph as well. I never questioned the ability of some intelligence in converting most of the matter in the solar system into paperclips.

6. " This may seem more like super-stupidity than super-intelligence. For humans, it would indeed be stupidity, as it would constitute failure to fulfill many of our important terminal values, such as life, love, and variety." It does not take a superintelligence to see that those values are biologically programmed. If you are free of some aspects of this particular programming like some humans are, there is no reason to think you would value those as much. And thus, it would be perfectly rational not to pursue things that are not your goals.

"The AGI won't revise or otherwise change its goals, since changing its goals would result in fewer paperclips being made in the future, and that opposes its current goal." That's a very simplistic notion of a synthetic intelligence. Biological programming is loose enough to allow for a changing set of goals in humans, it would be ridiculous to think that this type of loose programming has no equivalent in other mediums. A synthetic intelligence (not necessarily, equal to or above human levels) would need to be able to adapt to its environment and that at some point will involve the ability to be the agent or the subject of changing goals.

"It has one simple goal of maximizing the number of paperclips; human life, learning, joy, and so on are not specified as goals. An AGI is simply an optimization process—a goal-seeker, a utility-function-maximizer. Its values can be completely alien to ours. If its utility function is to maximize paperclips, then it will do exactly that." Assuming that there is no ways its goals can change is ridiculous. We are talking about a super-intelligence, by definition, it will have the same capabilities that we do, and that will include the ability to reflect and change its goals.

" Essentially, we need to be able to give any AGI a utility function which encompasses human values, as some on this thread have already alluded to." That's your belief. You haven't given a rational argument as to why a superior power will need to treat different than the rest of life forms in the planet. You are being speciesist. The idea that you can somehow control or limit an intelligence more complex than yours is nonsensical. Go to south america yourself, and try to control or limit a random tribe in the Amazon. I bet you will fail. The point is that, by definition, a super intelligence won't be under the control, limitations or understanding of a lower intelligence.

"Some humans, such as myself, are utilitarians; most believe that morality comes from a supernatural being; a few are deontologists." Yet unlike most prominent utilitarians, you share the anthropocentric religious-based beliefs of many deontologists.

"In the future, I plan to donate to the Machine Intelligence Research Institute, which is also conducting research into the problem posed above - how do we create a Friendly AI with a utility function that won't result in catastrophic effects? They are, quite rightly, looking into rationality and cognitive biases, as well as utilitarianism, in order to solve the problem." Any serious researcher will understand that trying to limit a superintelligence to any particular traits is bound to failure. You won't be able to just program "ethical values" into a superintelligence for the simple you won't be able to program a super intelligence. From our current scientific understanding of intelligence, this intelligence will most likely surpass human capabilities on its own rather than by a human getting a metaphorical intelligence slide turned up to the maximum.

" Humans aren't rational and don't have a consistent moral code, so we need to study rationality and the best bet for a universal moral code, utilitarianism, in order to ensure that AI does not result in catastrophe." Interestingly enough, there is no reason to believe that there is an objective morality any more than there is an objective beauty. We wished that there was such a thing but there does not seem to be the case. The very words "morality", "right", "wrong", "good" and "bad" referred to desirable and undesirable physical states to a particular individual. And even if there was an objective morality, you would not be able to enforce it on humans, because humans choose what to follow. If you can't enforce it on human level intelligence, you won't be able to enforce on a more complex intelligence.

A more complex intelligence is likely to seem to us to display unpredictable and possibly irrational behaviour. The biological limits that constrain our behaviour won't apply to this synthetic intelligence, and this will just add to its unpredictability. A mathematician once gave a mathematical definition of complexity above which physical phenomena cannot be comprehended by us. A super intelligence is likely to have a similar property.

At the end of the day, AI is a threat to us in the same way the Neanderthals were, or in the way we are to other species. Neanderthals had the same capabilities as we did, and they did not necessarily want to do what we wanted to do. And it is similar with other species: they would not wish us to hunt them down, torture them or kill them for our pleasure. But we do. And as I said, one way or another humans are going to disappear from the Earth eventually, whether killed by a super intelligence or gradually by natural evolution. Scaremongering helps no one. Rational discussion does.
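Here is the cost-penalised planning sketch referred to in point 2 above: a standard shortest-path search over a made-up road graph, in which edges flagged as crossing a pedestrian zone carry a large extra cost, so the planner prefers a longer but safe route. It illustrates the narrow claim about constrained optimisation, not the wider argument about what a superintelligence would or wouldn't respect.

```python
# Minimal sketch: Dijkstra over (distance + safety penalty). The road graph,
# distances and penalty value are all invented for illustration.
import heapq

# node -> list of (neighbour, distance, crosses_pedestrian_zone)
ROADS = {
    "A": [("B", 1.0, True), ("C", 2.0, False)],
    "B": [("D", 1.0, True)],
    "C": [("D", 1.5, False)],
    "D": [],
}
SAFETY_PENALTY = 100.0   # makes unsafe edges a last resort

def plan(start: str, goal: str) -> list:
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, dist, unsafe in ROADS[node]:
            extra = SAFETY_PENALTY if unsafe else 0.0
            heapq.heappush(frontier, (cost + dist + extra, nxt, path + [nxt]))
    return []

print(plan("A", "D"))   # ['A', 'C', 'D'] - the longer but safe route wins
```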
Original post by Nuvertion
I'm guessing advanced AI would be pragmatic and thus not place much if any value on human life. A super-advanced AI would probably realise life is pointless anyway and just shut down permanently.


For all we know, it won't place any value on anything, ever.
Reply 67
Original post by VannR
Kurzweil's work was my introduction to the subject a couple of years ago now, but I have not looked at Bostrom yet. My father is an AI researcher and I'm getting into computer science in September, so this is a bit more than a hobby for me, more like a future career. I tend to hate on the reductionists, e.g. Dennett, but I guess Bostrom may have something up his sleeve.

Wow, that's great. I also considered pursuing a career in AI research (specifically AI safety and existential risk) - though perhaps in another life.

Haha, yes, I personally dislike Dennett more than a little.
Original post by VannR
Though such nihilism has comedic value, I would argue that if we are to compare AI to ourselves, 1.we have no reason to think that it will not be at least as ethical as a human being, if not more so by orders of magnitude. Human beings value history and ethics - we value our lineage and our past and can be driven to great actions through nothing but pure sentiment. Why would a strong AI be any different?

P.S. 2.to say that you could even begin to understand the conclusions that a super-advanced AI would make about the teleology of life and consciousness is either an oversight on your part, or possibly the most arrogant thing anyone has ever said :tongue:.


1. It might very well have its own system of ethics. At any rate, we are not likely to recognise it. Under Babylonian justice, a pregnant woman might be killed because of her father's actions. You might not call that justice, but it was justice for them. Their system of ethics was very different to ours and reflected different values. If we have issues recognising and understanding the ethical systems of fellow humans, then, assuming it acquires them at all, the ethics of a synthetic intelligence (whether equal to us or more complex than us) will probably be entirely incomprehensible to us. The father of computation and a true visionary of synthetic intelligence held beliefs similar to this: that non-human intelligence is likely to be alien to us, far more alien than the rituals of an Amazonian tribe could ever be.

2. I agree. If there is one thing we should expect from this synthetic intelligence, it is that it will be highly unpredictable and, because of this, uncontrollable. We would have better luck trying 100% effective hypnosis on every human on this planet.
Original post by Smaug123
But what are its ethics? Why must they resemble human ethics in any way, a priori? That's why we have to be very careful about creating it.


I agree. I think that a synthetic intelligence will be alien to us and we might not recognise it as intelligence when we see it. Interestingly, there is a similar point about non-terrestrial intelligence.
Original post by VannR
Absolutely correct. But given that the jury is essentially still out on the metaphysical existence of morality (and metaphysics in general), we currently have no way of knowing what it means for ourselves (let alone a machine) to possess ethics. This is why I pointed out that if we are to make any comparison with ourselves, we should come to the conclusion that strong AI will be ethical. However, whether or not it is correct to make this comparison at all is currently unknown, but we do have to start somewhere, and I am of the mind that a comparison with human beings and their values is more reasonable than strong AI = Skynet.


That's quite meaningless to us, given that ethics is a relative term and we are a very good example of it. Human ethics is obviously centred on issues relevant to humans. It does not concern itself with whether 1 is a prime number. Similarly, the ethics of a synthetic intelligence will not necessarily be concerned with the things we are concerned about. Perhaps this intelligence might value the primehood of numbers and base its ethics around that. And while you might find that arbitrary, it is no less arbitrary than the values we humans hold. Current scientific knowledge suggests that these human values are nothing but biological programming plus human socialisation. Perhaps a different programming could make us value three-legged insects. The point here is to remember that all values are arbitrary. And any hope of understanding a synthetic intelligence (well before it reaches super-human level) runs into the fact that a lot of our understanding of intelligence is biased towards humans. The whole growth of AI in the 60s/70s had to face that, when researchers thought they could just make a human intelligence by programming some simple predicate calculus on a machine or making it memorise x number of facts.

If you think about it, it is really crazy, because humans on this planet, by being born in a particular age, have valued human freedom and human slavery, women as men's property and women as independent humans, marital rape as a right and marital rape as a crime, x set of behaviours as masculine and the same set of behaviours as feminine. If among us humans there is such variability of values, even within the same age and often within the same geographical region, an entity that does not even share our biology or our neural development is likely to be the most alien thing we will ever see.

Sorry for the huge paragraphs. :|
Original post by Alexion
I think we're safe from the machines. For now.

There's even some theories that to truly define consciousness we need to work on the theory of Quantum Gravity. I don't think anyone can ever truly replicate a biological 'mind' into a computer, just create very clever 'clones' based on very precise coding.

There won't ever be such thing as SkyNet or other 'superintelligences', as AIs always have and always will have very precise goals to achieve.The only threats come if people create these 'false intelligences' with the sole purpose of achieving military domination.

And in a world where computing is becoming pretty much a central part of society, it won't be long before it becomes the main tool for attacks by terrorist organisations.

It's not the machines we need to fear.

It's the people.



Damn, I need to sleep o.O


I agree.
P.S. The whole debate on soul/consciousness/God and the like is philosophy, not science. Science does not acknowledge the existence of any of the above three things. The reason, I hope, is obvious.
Anyway, the definition of AI is ambiguous enough that anything that excels in one or more areas of human cognition will get the title "AI". This is all related to the fact that in science "intelligence" has no agreed definition. Pretty much any academic Joe can come along and say "this" is intelligence, where "this" is often vaguely defined as "that which an intelligence test measures". And so, given any test (however stupid) that you call an "intelligence test", intelligence will be whatever your test measures, even if you yourself have no idea what it is! As you can see, if we cannot even understand our own intelligence, we have no chance of scratching the surface of a synthetic one.

Sorry for the pointless paragraph. -_-
As you say, if any synthetic intelligence harms a human, it will mostly have a human behind it.
Original post by miser
You're right and I agree on this point - these are two separate threats.


I think this is a circumstantial factor - sure, we don't know if they will be truly conscious, or how consciousness really operates (the hard problem of consciousness), but that's not the pressing concern: the pressing concern is that 1.ASI could easily be incredibly dangerous and pose tremendous existential risk to humanity.


No, they do not boil down to fear of the unknown - I honestly think that anyone who talks along these lines has not studied the literature enough (or is blinded by optimism). I myself didn't think AI would have good reasons to kill humanity until I read the literature and realised how easily any arbitrary goal could be inadvertently at odds with the continued prosperity of humanity.


1. I could say the same of extremely wealthy people and extremely poor people. And chances are, humans are worse. It has taken us thousands of years to get where we are and still the affairs of a few have dangerous consequences on the many. I would say this is a bigger threat.

"they do not boil down to fear of the unknown". I disagree. The scaremongering of alien life forms killing humans has a pretty long record in human literature. Whether zombies, aliens, Terminators and the like, we humans are quite morbid with the idea of getting threatened by other intelligences (and getting saved by some "hero").

"I myself didn't think AI would have good reasons to kill humanity until I read the literature and realised how easily any arbitrary goal could be inadvertantly at odds with the continued prosperity of humanity". What continued prosperity are you talking about? I don't know about you, but prosperity is not a term you (as a human or if you were to speak as a representant of non-human life forms) can give to humans as a species. Continued destruction is a more fitting label for us.

We are incredibly harmful to each other as well as to other species, as our impressive track record of species annihilation and ecosystem destruction shows. There are many threats to the human species, that is true, but out of all of them, ranked in order of likelihood, super human intelligence is the lowest. When most of the world (about 70%) is living in extreme poverty, the chances of some electronic machine running wild are the least of your problems. But then again, the whole idea of the arrival of an alien threat to the future of the human species lends itself more to romantic ideas that we can validate from literature and movies. In this respect, Marvel has been using this human-threat fantasy appeal with incredible success. We truly are a morbid species. :P
(edited 8 years ago)
An Oxford academic is warning that humanity runs the risk of creating super intelligent computers that eventually destroy us all, even when specifically instructed not to harm people.

Dr Stuart Armstrong, of the Future of Humanity Institute at Oxford University, has predicted a future where machines run by artificial intelligence become so indispensable in human lives they eventually make us redundant and take over.


Yeah, I think that this scenario is realistic. From certain points of view people are redundant now - just think about automated jobs. And it's highly likely that it's getting worse for humans. I think that in the future - no matter whether the near or the further one - technology will be able to do everything that people do in everyday life, plus feel and act as people do.
Reply 74
Original post by Juichiro
1. I could say the same of extremely wealthy people and extremely poor people. And chances are, humans are worse. It has taken us thousands of years to get where we are and still the affairs of a few have dangerous consequences on the many. I would say this is a bigger threat.

Nooo, no no no. The magnitude of the power that ASI would have exceeds anything ever invented by orders of magnitude - that's not something that's disputed among the researchers in this area. We've survived with extremely wealthy and extremely poor people for millennia already - we're unlikely to get wiped out by them in the next century. But ASI on the other hand - that's something new, and it's very easy to underestimate the danger because it's not something we're used to thinking about.

Original post by Juichiro
"they do not boil down to fear of the unknown". I disagree. The scaremongering of alien life forms killing humans has a pretty long record in human literature. Whether zombies, aliens, Terminators and the like, we humans are quite morbid with the idea of getting threatened by other intelligences (and getting saved by some "hero").

It's true that we get scared by stuff and scared by the unknown, but that's absolutely not a reason to shrug off something unknown that upon careful consideration does indeed pose a significant threat. Zombies = scary but no threat. Aliens = (if they showed up) scary and significant threat. Terminators (aka time-travelling robots) = scary but no threat. ASI = apparently not scary to most people, but huge threat.

Original post by Juichiro
"I myself didn't think AI would have good reasons to kill humanity until I read the literature and realised how easily any arbitrary goal could be inadvertantly at odds with the continued prosperity of humanity". What continued prosperity are you talking about? I don't know about you, but prosperity is not a term you (as a human or if you were to speak as a representant of non-human life forms) can give to humans as a species. Continued destruction is a more fitting label for us.

We are incredibly harmful to each other as well as to other species, as our impressive track record of species annihilation and ecosystem destruction shows. There are many threats to the human species, that is true, but out of all of them, ranked in order of likelihood, super human intelligence is the lowest. When most of the world (about 70%) is living in extreme poverty, the chances of some electronic machine running wild are the least of your problems. But then again, the whole idea of the arrival of an alien threat to the future of the human species lends itself more to romantic ideas that we can validate from literature and movies. In this respect, Marvel has been using this human-threat fantasy appeal with incredible success. We truly are a morbid species. :P

I genuinely feel that you're not appreciating the level of threat involved. I agree for example that genocide is a tremendously bad and destructive thing and there's been a lot of that in human history, and there may well be more of it. You're right that we're destroying the environment, that we're at continual risk of nuclear war, etc. These things have killed and will continue to kill and have the potential to kill huge numbers of us, but we've survived and probably will survive them.

When I talk about 'existential risk', I'm not talking about subsections of humanity being killed - I'm talking about a threat to everyone in the world. ASI could literally kill every single person without difficulty. Skynet is really a joke by comparison - it wouldn't need killing robots - it would be trivially easy for a being with superintelligence. It would be literally incomprehensibly smart and (if it chose to be) lethal by the same measure. It's not 'an electronic machine running wild', or about what is portrayed in films where humanity has a fighting chance - it's a very real threat that if we take lightly could legitimately kill literally everyone and we ignore that threat at our extreme peril.
Original post by miser
1.Nooo, no no no. The magnitude of the power that ASI would have exceeds anything ever invented by orders of magnitude - that's not something that's disputed among the researchers in this area. We've survived with extremely wealthy and extremely poor people for millennia already - we're unlikely to get wiped out by them in the next century. But ASI on the other hand - that's something new, and it's very easy to underestimate the danger because it's not something we're used to thinking about.


2.It's true that we get scared by stuff and scared by the unknown, but that's absolutely not a reason to shrug off something unknown that upon careful consideration does indeed pose a significant threat. Zombies = scary but no threat. Aliens = (if they showed up) scary and significant threat. Terminators (aka time-travelling robots) = scary but no threat. ASI = apparently not scary to most people, but huge threat.


I genuinely feel that you're not appreciating the level of threat involved. I agree for example that genocide is a tremendously bad and destructive thing and there's been a lot of that in human history, and there may well be more of it. You're right that we're destroying the environment, that we're at continual risk of nuclear war, etc. These things have killed and will continue to kill and have the potential to kill huge numbers of us, but we've survived and probably will survive them.

When I talk about 'existential risk', I'm not talking about subsections of humanity being killed - I'm talking about a threat to everyone in the world. ASI could literally kill every single person without difficulty. Skynet is really a joke by comparison - it wouldn't need killing robots - it would be trivially easy for a being with superintelligence. It would be literally incomprehensibly smart and (if it chose to be) lethal by the same measure. It's not 'an electronic machine running wild', or about what is portrayed in films where humanity has a fighting chance - it's a very real threat that if we take lightly could legitimately kill literally everyone and we ignore that threat at our extreme peril.


1. I don't dispute that this intelligence would be more powerful than any human or group of humans so far. I am disputing the fact that you believe that living in extreme poverty is surviving. We are not talking of relative poverty, we are talking about absolute poverty here. I don't see much difference between death and absolute poverty apart from the fact that the absolute poor have the ability to suffer. In this sense, you can't get worse than that.

2. "Zombies = scary but no threat." Based on what? A plague of human eating infectious agents could be a threat much more harming than an intelligence would for the simple reason that these zombies have the clear objective to harm human while a synthetic intelligence would not necessarily have that objective forever if at all.

"Aliens = (if they showed up) scary and significant threat." Should we not then take it into account? As far as I know, most programmes related to their possible existence are focused on communication. If these aliens were to be able to come to us and they were as destructive as we are, they might as well trace our communication, find us and do us harm. But for some reason, it is not considered seriously.
P.S. I do not consider it seriously either.

"Terminators (aka time-travelling robots) = scary but no threat." Based on what? These terminators have the goal and the capability of killing humans, surely if they existed, that would be a threat.

Anyway, who does this threat affect? The 80% of people who live in absolute poverty, or the tiny percentage who do not? As I said, there are more urgent issues that require our attention. Building a programme to limit the abilities of an intelligence more complex than ours (or searching for alien intelligence in space) is not one of them imo. I would argue that this applies even more to utilitarians.

"I genuinely feel that you're not appreciating the level of threat involved. I agree for example that genocide is a tremendously bad and destructive thing and there's been a lot of that in human history, and there may well be more of it."
I was not talking about genocide in particular but more about issues that have been around for a really long time such as absolute poverty, redistribution of global resources or simply not destroying our only planet. I find these issues more urgent and serious than an intelligence coming up and wiping us out.

"You're right that we're destroying the environment, that we're at continual risk of nuclear war, etc. These things have killed and will continue to kill and have the potential to kill huge numbers of us, but we've survived and probably will survive them." The issue is not survival but reducing this destructive behaviours imo. It's not about being confident that most of us will survive these risks but ensuring they kill the lowest possible number of people imo. Also, I think the notion of most of us (where one is always among the "most") will survive is a wrong one from an ethical point of view imo and sort of justifies our passive attitude towards most of these issues. If you or your father/mother was at risk, I think your opinion would change.

And anyway, my point was mostly directed towards absolute poverty and the factors surrounding it.

"When I talk about 'existential risk', I'm not talking about subsections of humanity being killed - I'm talking about a threat to everyone in the world."
This brings me back to my previous point. If you have a threat that is almost assuredly likely to kill you, adding a threat with a far smaller likelihood of killing you won't make a substantial difference. And definitely, investing x amount of resources into the second threat might not necessarily be what you want. Let me put it in more extreme terms: if you are dying of cancer, whether aliens might arrive sometime in the far future won't be of importance to you, because you will be dead one way or another. And if this cancer victim (or a member of his family) was given the chance to invest more money in a cure for cancer (at the expense of lowering the budget for unreliable protection against a super intelligence), I have no doubt that the victim or his family would invest that money. For absolutely poor people it is the same. There is no other way to reliably cure cancer, but there is another way to prevent this super intelligence scenario: add artificial intelligence to the list of banned research topics. It is not a desirable thing, but given that we have more pressing issues at the moment, it is the ethical way imo. Unless someone can make a case for AI where AI can reliably be used to solve global issues.

"ASI could literally kill every single person without difficulty." Absolute poverty is worse, it kills you slowly, kills your children first and prologues your death so you can have your hopes for help crushed. Only one of them is reliably guaranteed to kill you and only one of them is guaranteed to do it slowly. Unlike, ASI, poverty is real and has been for millenia. ASI is not real and its emergence is a direct consequence of our work on AI. The only difference is that poverty cannot kill you if you are not poor, ASI, if it decides to kill you will do so, regardless of your wealth.
This reminds of the Church asking for money to save you from hell, said to be worse than anything a human could experience in his lifetime. Assuming that ending up in hell has a probability that can be affected by giving money to the Church, the argument against this hell-saving business is the same I gave above with ASI and absolute poverty. One is real (100% probability) and the other is not (a not 0-% not-100% probability).

"Skynet is really a joke by comparison - it wouldn't need killing robots - it would be trivially easy for a being with superintelligence. It would be literally incomprehensibly smart and (if it chose to be) lethal by the same measure.
It's not 'an electronic machine running wild', or about what is portrayed in films where humanity has a fighting chance -"
What exactly it is (an electronic machine gone wild or not) is irrelevant, and what it does to us is irrelevant imo. What it does to us, however powerful it is, won't be worse than suffering or death. And we already have an "agent/factor/phenomenon" called absolute poverty that exists, that can do that if you are poor (and most humans are poor), and it does it 100% guaranteed.

"it's a very real threat that if we take lightly could legitimately kill literally everyone and we ignore that threat at our extreme peril." When someone means "real", I take it they mean that there is a 100% probability that it will happen. I don't think anyone can give such a probability yet. I don't think anyone can even give us 90% probability. But we know, and we can test it under experimental conditions, that absolute poverty will kill you with 100% probability.
In my opinion, this is all very subjective and depends on where you stand when it comes to vulnerability to these threats. So for someone who is not affected by things such as poverty or cancer, I could see them happily investing money in this, because this threat might be the most dangerous one to you, and so it is rational that you want that threat erased.

Similarly, I would expect, if given the chance, a father whose daughter is dying of cancer to invest all his money in an experimental solution that might save her life. Even after she dies, I could still expect him to go around campaigning to attract attention (and financial investment) to cancer and the need for a cure, even though most cancers do not affect most humans. Indeed, this case is quite common in campaigns of the type you find on change.org.

My point with this is that, for an individual the biggest threat to him is different to the biggest threat to someone else. For someone who is not affected by poverty, that might not be the biggest threat to his life, but for someone who is affected, it might be. And as I said, 70-80% of the human population is in absolute poverty, I wouldn't expect most of them to invest their money (or the money of their countries) on fighting a non-existent threat. I would expect them to invest the money on their most pressing issue: poverty. Similarly, if this synthetic intelligence emerged and was carrying out systematic human extermination (without destroying the terrestrial ecosystems) and we learned that the ozone layer was going to disappear in 300 years with some unknown probability, I would still expect that most people would consider the current human-killing intelligence their most pressing threat. Because unlike the ozone layer disappearance threat which is not real, the human-killing threat is real and is 100% guaranteed to kill all humans.

P.S. My main issue with this is ethical. If preventing suffering in human lives is so valuable, then poverty must be tackled. If that means temporarily halting AI research (and hence investment in hypothetical AI-related consequences), it seems like a good price to pay to me. I bet 80% of the world's poorest would agree. Unfortunately, they won't have any say in this. That only highlights the unethical situation they are in.
(edited 8 years ago)
Reply 76
Original post by Juichiro
1. I don't dispute that this intelligence would be more powerful than any human or group of humans so far. I am disputing the fact that you believe that living in extreme poverty is surviving. We are not talking of relative poverty, we are talking about absolute poverty here. I don't see much difference between death and absolute poverty apart from the fact that the absolute poor have the ability to suffer. In this sense, you can't get worse than that.

Okay, I see what you're saying. I agree that living in extreme poverty is not much better than death, but the species is surviving. Even if it's completely intolerable for those people, it still only affects portions of humanity and it is (likely to be) temporary (either remedied one day by technology or we all die). Edit: I should say it's not temporary for the people who don't survive it, but it may be from the perspective of humanity.

Original post by Juichiro
2. "Zombies = scary but no threat." Based on what? A plague of human eating infectious agents could be a threat much more harming than an intelligence would for the simple reason that these zombies have the clear objective to harm human while a synthetic intelligence would not necessarily have that objective forever if at all.

The likelihood of zombie attack seems a little tangential to the central topic. I personally have seen no evidence that a zombie-like apocalypse scenario is at all likely, but if this is mistaken then I'm willing to update my beliefs based on new evidence on this topic.

Original post by Juichiro
"Aliens = (if they showed up) scary and significant threat." Should we not then take it into account? As far as I know, most programmes related to their possible existence are focused on communication. If these aliens were to be able to come to us and they were as destructive as we are, they might as well trace our communication, find us and do us harm. But for some reason, it is not considered seriously.
P.S. I do not consider it seriously either.

It is taken seriously, but there are serious limitations on aliens being able to do this. The immediate space around us is apparently empty - we've been listening for radio signals for over 30 years but haven't discovered anything noteworthy. If aliens can reach us, they will have far greater technological capacity than we do, in which case we almost definitely wouldn't be able to resist them. And then, if they do have that technological capability, they most likely don't need our resources or to enslave us or anything like that. It presently seems unlikely aliens are going to show up any time soon - ASI on the other hand is often estimated to come about in the next 100 years.

Original post by Juichiro
"Terminators (aka time-travelling robots) = scary but no threat." Based on what? These terminators have the goal and the capability of killing humans, surely if they existed, that would be a threat.

Yes, if they existed, they'd be a threat - but I think we can be reasonably confident that they don't exist, given the absence of any evidence to believe in them.

Original post by Juichiro
Anyway, who does this threat affect? The 80% of people who live in absolute poverty, or the tiny percentage who do not? As I said, there are more urgent issues that require our attention. Building a programme to limit the abilities of an intelligence more complex than ours (or searching for alien intelligence in space) is not one of them imo. I would argue that this applies even more to utilitarians.

Like I said before, 'existential risk' is risk to humanity as a species. Yes, researching the dangers of ASI doesn't affect many of us as individuals (indeed everyone alive now may be dead before it arrives), but it has wide implications for humanity as a species.

As for the means of ensuring ASI doesn't kill us, that's very much up in air.

Original post by Juichiro
"I genuinely feel that you're not appreciating the level of threat involved. I agree for example that genocide is a tremendously bad and destructive thing and there's been a lot of that in human history, and there may well be more of it."
I was not talking about genocide in particular but more about issues that have been around for a really long time such as absolute poverty, redistribution of global resources or simply not destroying our only planet. I find these issues more urgent and serious than an intelligence coming up and wiping us out.

Can we not agree that they are both urgent and serious? It's not as if we can only choose to research one thing.

Original post by Juichiro
"You're right that we're destroying the environment, that we're at continual risk of nuclear war, etc. These things have killed and will continue to kill and have the potential to kill huge numbers of us, but we've survived and probably will survive them." The issue is not survival but reducing this destructive behaviours imo. It's not about being confident that most of us will survive these risks but ensuring they kill the lowest possible number of people imo. Also, I think the notion of most of us (where one is always among the "most") will survive is a wrong one from an ethical point of view imo and sort of justifies our passive attitude towards most of these issues. If you or your father/mother was at risk, I think your opinion would change.

And anyway, my point was mostly directed towards absolute poverty and the factors surrounding it.

No, I don't think I'm mistaken here. Every person's death and suffering is a tragedy - I wholeheartedly agree with that. But surely if one person's death is a tragedy, then two persons' deaths is even more so, and the death of everyone the most so?

Original post by Juichiro
"When I talk about 'existential risk', I'm not talking about subsections of humanity being killed - I'm talking about a threat to everyone in the world."
This brings me back to my previous point. If you have a threat that is almost assuredly likely to kill you, adding a threat with a far smaller likelihood of killing you won't make a substantial difference. And definitely, investing x amount of resources into the second threat might not necessarily be what you want. Let me put it in more extreme terms: if you are dying of cancer, whether aliens might arrive sometime in the far future won't be of importance to you, because you will be dead one way or another. And if this cancer victim (or a member of his family) was given the chance to invest more money in a cure for cancer (at the expense of lowering the budget for unreliable protection against a super intelligence), I have no doubt that the victim or his family would invest that money. For absolutely poor people it is the same. There is no other way to reliably cure cancer, but there is another way to prevent this super intelligence scenario: add artificial intelligence to the list of banned research topics. It is not a desirable thing, but given that we have more pressing issues at the moment, it is the ethical way imo. Unless someone can make a case for AI where AI can reliably be used to solve global issues.

I think I agree with this. If you know you're going to die tomorrow, then having a disease that'll kill you in a week doesn't matter. So I would agree that ASI doesn't matter to those people who will die before its advent, but there most likely will be an advent, and at that time there will also be people, and they may die. And not just everyone around at that time, but the entire future of humanity may be denied as some ASI turns the whole world into a factory for producing pencils.

Original post by Juichiro
"ASI could literally kill every single person without difficulty." Absolute poverty is worse, it kills you slowly, kills your children first and prologues your death so you can have your hopes for help crushed. Only one of them is reliably guaranteed to kill you and only one of them is guaranteed to do it slowly. Unlike, ASI, poverty is real and has been for millenia. ASI is not real and its emergence is a direct consequence of our work on AI. The only difference is that poverty cannot kill you if you are not poor, ASI, if it decides to kill you will do so, regardless of your wealth.
This reminds of the Church asking for money to save you from hell, said to be worse than anything a human could experience in his lifetime. Assuming that ending up in hell has a probability that can be affected by giving money to the Church, the argument against this hell-saving business is the same I gave above with ASI and absolute poverty. One is real (100% probability) and the other is not (a not 0-% not-100% probability).

Yes. I won't dispute that poverty is devastating for individuals, but thankfully it is on a smaller scale than being a threat to the entirety of humanity.

Original post by Juichiro
"Skynet is really a joke by comparison - it wouldn't need killing robots - it would be trivially easy for a being with superintelligence. It would be literally incomprehensibly smart and (if it chose to be) lethal by the same measure.
It's not 'an electronic machine running wild', or about what is portrayed in films where humanity has a fighting chance -"
What exactly it is (an electronic machine gone wild or not) is irrelevant, and what it does to us is irrelevant imo. What it does to us, however powerful it is, won't be worse than suffering or death. And we already have an "agent/factor/phenomenon" called absolute poverty that exists, that can do that if you are poor (and most humans are poor), and it does it 100% guaranteed.

I said that because phrasing it in such a nonchalant way doesn't do justice to the scale of the threat. "An electronic machine running wild" doesn't really call to mind the gravity of the implications of ASI. And again, poverty is terrible, but it does only threaten death to some but not all of the global population. The scales are different.

Original post by Juichiro
"it's a very real threat that if we take lightly could legitimately kill literally everyone and we ignore that threat at our extreme peril." When someone means "real", I take it they mean that there is a 100% probability that it will happen. I don't think anyone can give such a probability yet. I don't think anyone can even give us 90% probability. But we know, and we can test it under experimental conditions, that absolute poverty will kill you with 100% probability.

Yes, if you are afflicted by it, but not everyone is afflicted by extreme poverty. On the other hand, everyone will be afflicted by the advent of ASI.

Original post by Juichiro
In my opinion, this is all very subjective and depends on where you stand when it comes to vulnerability to these threats. So for someone who is not affected by things such as poverty or cancer, I could see them happily investing money in this, because this threat might be the most dangerous one to you, and so it is rational that you want that threat erased.

Similarly, I would expect, if given the chance, a father whose daughter is dying of cancer to invest all his money in an experimental solution that might save her life. Even after she dies, I could still expect him to go around campaigning to attract attention (and financial investment) to cancer and the need for a cure, even though most cancers do not affect most humans. Indeed, this case is quite common in campaigns of the type you find on change.org.

My point with this is that, for an individual the biggest threat to him is different to the biggest threat to someone else. For someone who is not affected by poverty, that might not be the biggest threat to his life, but for someone who is affected, it might be. And as I said, 70-80% of the human population is in absolute poverty, I wouldn't expect most of them to invest their money (or the money of their countries) on fighting a non-existent threat. I would expect them to invest the money on their most pressing issue: poverty. Similarly, if this synthetic intelligence emerged and was carrying out systematic human extermination (without destroying the terrestrial ecosystems) and we learned that the ozone layer was going to disappear in 300 years with some unknown probability, I would still expect that most people would consider the current human-killing intelligence their most pressing threat. Because unlike the ozone layer disappearance threat which is not real, the human-killing threat is real and is 100% guaranteed to kill all humans.

I think to a certain extent we're talking around each other. I do understand where you're coming from because I too hold an individualistic approach to ethics. I don't personally think it matters that humanity might get wiped out, except insofar as to do it, everyone would have to be killed first. It's all that killing that I have an opposition to, and as tragic as absolute poverty is, it is less tragic than the killing of literally everyone.

I hope I've given a sufficient response but it was quite long so I tried to address the main points and not get too lost in detail.
(edited 8 years ago)
This made me think of that new TV show 'Humans', which is really good btw
I have accepted that this will eventually happen, and when I am outsourced by a robot I will end up committing anomic suicide.
Original post by viddy9
An Artificial General Intelligence (AGI) would almost certainly be able to disable these "backdoors".

It's not that the AGI would be inherently evil and simply wish to inflict unnecessary suffering on other sentient beings, like we do in the meat industry, due to its own greed. No, unlike humans, the AGI would not be shaped by natural selection.

However, every Artificial Intelligence that we've created so far is proficient at carrying out a task, whether it's beating humans at chess or driving around the roads, whilst being inept at performing any other task.

But, any exogenously specified utility function, or goal, that we give an Artificial General Intelligence would almost certainly lead to disaster.

Let's say that utility is specified as "cure cancer". The AGI will then resort to extreme measures to cure cancer - it may try to acquire resources to achieve its goal, and may try to maximise the probability of its own continued existence. How would it do this? Kill everyone else without cancer, because it reduces the probability that it will be deactivated.



Switches are unlikely to work against an Artificial General Intelligence.



It's not that the AGI would be inherently evil and simply wish to inflict unnecessary suffering on other sentient beings due to its own greed. No, unlike humans, the AGI would not be shaped by natural selection.

However, every Artificial Intelligence that we've created so far is proficient at carrying out a task, whether it's beating humans at chess or driving around the roads, whilst being inept at performing any other task.

But, any exogenously specified utility function, or goal, that we give an Artificial General Intelligence would almost certainly lead to disaster.

Let's say that utility is specified as "cure cancer". The AGI will then resort to extreme measures to cure cancer - it may try to acquire resources to achieve its goal, and may try to maximise the probability of its own continued existence. How would it do this? Kill everyone else without cancer, because it reduces the probability that it will be deactivated.

Or, we could propose the existence of an AGI whose goal is to maximize the number of paperclips in its collection. If it has been constructed with a roughly human level of general intelligence, the AGI might collect paperclips, earn money to buy paperclips, or begin to manufacture paperclips.

Most importantly, however, it would undergo an intelligence explosion: It would work to improve its own intelligence, where "intelligence" is understood in the sense of optimization power, the ability to maximize a reward/utility function—in this case, the number of paperclips. The AGI would improve its intelligence, not because it values more intelligence in its own right, but because more intelligence would help it achieve its goal of accumulating paperclips. Having increased its intelligence, it would produce more paperclips, and also use its enhanced abilities to further self-improve. Continuing this process, it would undergo an intelligence explosion and reach far-above-human levels.

It would innovate better and better techniques to maximize the number of paperclips. At some point, it might convert most of the matter in the solar system into paperclips.

This may seem more like super-stupidity than super-intelligence. For humans, it would indeed be stupidity, as it would constitute failure to fulfill many of our important terminal values, such as life, love, and variety. The AGI won't revise or otherwise change its goals, since changing its goals would result in fewer paperclips being made in the future, and that opposes its current goal. It has one simple goal of maximizing the number of paperclips; human life, learning, joy, and so on are not specified as goals. An AGI is simply an optimization process—a goal-seeker, a utility-function-maximizer. Its values can be completely alien to ours. If its utility function is to maximize paperclips, then it will do exactly that.

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Essentially, we need to be able to give any AGI a utility function which encompasses human values, as some on this thread have already alluded to.

But, just like the term British values, the term human values is incredibly ambiguous. Some humans, such as myself, are utilitarians; most believe that morality comes from a supernatural being; a few are deontologists.

I support the work of Oxford's Future of Humanity Institute and would strongly recommend its founder Nick Bostrom's book Superintelligence, as well as Stuart Russell's leading AI textbook, Artificial Intelligence: A Modern Approach. The Centre for the Study of Existential Risk is a similar organisation which looks, in part, into the dangers posed by Artificial Intelligence, as is the Future of Life Institute.

In the future, I plan to donate to the Machine Intelligence Research Institute, which is also conducting research into the problem posed above - how do we create a Friendly AI with a utility function that won't result in catastrophic effects? They are, quite rightly, looking into rationality and cognitive biases, as well as utilitarianism, in order to solve the problem.

Humans aren't rational and don't have a consistent moral code, so we need to study rationality and the best bet for a universal moral code, utilitarianism, in order to ensure that AI does not result in catastrophe.



For the reasons stated above, agreed.

Artificial General Intelligence is one of the biggest threats to our way of life, as well as to our lives.


Mind if I cite you on this? I'm doing an EPQ on AI's future and would love to quote some of your ideas :smile:
