
Artificial superintelligence has the very real potential to destroy humanity


Original post by StolenPrivacy
Why would the AI try to wipe us out in the first place? Would we not install backdoors into the artificial intelligence just in case? Or plan against the "eventuality" of our robotic creations coming to attack their fleshy masters?

In my opinion, all of this AI hating is just humans being scared of what we don't really understand and of the future. It happened with the Y2K scare, and it will happen again and again, as with the "cellphones cause cancer" scare. But I could be wrong.

So it can turn us all into paperclips.
So it can turn us all into computronium to calculate the nth digit of pi or solve the Riemann hypothesis.
So it can do something where using all the matter on Earth would help it.
Because someone tried to build a "friendly artificial intelligence" but screwed up because human values are complex.

I mean, just look at how many species of animals we've made extinct or endangered. And almost all of them by ACCIDENT.
We have wiped out many animal species not because we hate them, but because they were simply THERE in the wrong place at the wrong time, as a simple SIDE EFFECT of our activities.

The AI doesn't have to hate us, and it doesn't have to see us as a threat; mere indifference is enough for it to wipe us out, just as we've wiped out many species of animals.

EDIT: Didn't read above posts. Looks like viddy9 said everything better.
Reply 81
Original post by StolenPrivacy
Why would the AI try to wipe us out in the first place? Would we not install backdoors into the artificial intelligence just in case? Or plan against the "eventuality" of our robotic creations coming to attack their fleshy masters?

In my opinion, all of this AI hating is just humans being scared of what we don't really understand and of the future. It happened with the Y2K scare, and it will happen again and again, as with the "cellphones cause cancer" scare. But I could be wrong.

To any ASI, any 'backdoors' we created would be trivial to override. It's not being scared of the future - it's recognising the significant possibility that ASI could go terribly awry. There's good reason to be concerned. There's nothing we could 'plan' that we could have any confidence would keep it locked down. We have to assume it would be smart enough to get around any safeguards we put in place.

Original post by donutellme
People don't understand it. It's being blown way out of proportion.

Not only are we ages away (at least publicly) from a system that can properly learn, but while we develop it we'll create failsafes, such as off switches and sandboxes, and also teach them human values so that they don't turn against us.


I don't mean to be harsh, but I think this is very naive. The majority of the researchers who know the most about it have grave concerns about its danger - it's the laymen who don't understand it and assume we'll be safe.

We're also not ages away from a system that can properly learn - Google already has AI that can learn to play video games from scratch and gets very good very quickly. Any failsafes we designed would be trivial for a superintelligent machine to circumvent; we have to assume it could get around any measures we put in place. As for teaching it human values, we don't even agree among ourselves what those are.

Original post by Juichiro
The notion of a human killer AI is indeed irrational. You may as well believe that an AI will start doing backflips for no reason.
At the end of the day, the irrational drive behind this scaremongering is fear.

This is along the lines of what I thought before reading about ASI - I thought ASI would have no reason to kill everyone. But that's wrong. Even with a totally arbitrary goal, like say we told it to simply improve itself and see how smart it could get - even with that nice benign goal, it would think, "humans can turn me off or reprogram me, therefore they are a risk to me accomplishing my objective, therefore I should get rid of all humans." It's very difficult to design an ASI that is both useful and safe - how to do that is something researchers haven't come up with an answer to yet.
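Here's a rough way to put numbers on that line of thinking (a toy illustration of the argument with made-up figures, not anything from the research):

```python
# Toy expected-value comparison: whatever the terminal goal is worth, any
# action that lowers the chance of being switched off looks instrumentally good.
GOAL_VALUE = 100             # arbitrary value of achieving the objective
P_SHUTDOWN_IF_IGNORED = 0.3   # made-up chance the humans switch the agent off
P_SHUTDOWN_IF_PREVENTED = 0.01

def expected_value(p_shutdown):
    # If the agent is shut down, the goal is never achieved (value 0).
    return (1 - p_shutdown) * GOAL_VALUE

print("ignore the humans:     ", expected_value(P_SHUTDOWN_IF_IGNORED))    # 70.0
print("prevent shutdown first:", expected_value(P_SHUTDOWN_IF_PREVENTED))  # 99.0
```

The numbers are invented, but the shape of the argument doesn't depend on them: for almost any goal, "reduce the chance of being switched off" scores higher, which is why that subgoal shows up regardless of what the actual objective is.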
Reply 82
Original post by FrostyLemon
How would artificial super-intelligence stop us from going over to the plug and pressing the off switch?

Daisy Daisy

By killing us before freely displaying any signs of superintelligence.
Original post by $100Bill
We'll need to move our population onto other planets eventually. Especially given the exponential growth in the past 100 years.


Planets are extremely inefficient in terms of livable area per mass. The Earth has less than one square meter of surface per 10 million tonnes of material. A bunch of space habitats would be WAY more efficient.

However, sending billions or even trillions of people on multi-light-year journeys would not be easy.
"The AI does not hate you, nor does it love you, but you are made of atoms it can use for the purposes."
Reply 85
Original post by Nuvertion
But the difference is that as a species we inherently have motives. AI, on the other hand, would see the bigger picture, realise that life, and thus whatever it does, is pointless, and would just derive its motives from humans.

I think this is an instance of anthropomorphising AI. Thinking that things are pointless, getting bored, getting distracted, questioning one's goals, etc., are human things to do. There's absolutely no guarantee ASI would have what we would recognise as 'wisdom'.

Original post by Captain Josh
I'm going to chip in here because I actually know what I'm talking about. The whole AI takeover scenario is so overplayed - it could only happen if we let superintelligent systems have full control over physical equipment. Which isn't going to happen. A computer can be completely limited by having a partition which it can't access, which contains limits for its behaviour. There would be certain things which the computer, no matter how intelligent, simply can't do. Also, just because a system is intelligent, it doesn't mean that it is a world-class hacker. The computers couldn't possibly learn how to hack and program on their own. So if we created a supercomputer which was designed to be a surgeon robot, it would always be a surgeon robot. It would be isolated, and left to do its work. It can learn new things, and adapt its behaviour, but it's not going to learn how to hack itself and remove its software limitations. It's simply not possible, unless you directly teach the AI how to write code.

Just to clarify, I used to develop partially-intelligent AI robots which you could talk to, and expand your relationship with (just like Siri, where the AI learns about you and gets to know you better). I just did it as a little hobby, but I learned a lot about how AI systems evolve.

I'm surprised to hear this from someone who has worked with AI before. Being honest here, this strikes me as a naive view for someone who has been involved with and considered the future of AI technology. Thinking that we could put ASI in a box, disconnected from everything and that it wouldn't find a way out is crazy. An example I once read said it's like if a spider thought, "hey, I just won't give this human any flies and it won't be able to eat!" But the human would just take an apple from a tree.

Something trivial for the ASI could be outside the range of conception of humans. The ASI would have the capability to invent totally new solutions to problems that we cannot predict in any way, so the idea that we could put it in some sandbox and think we'd be safe is, to speak somewhat insensitively, tremendously negligent to the welfare of humanity.
IMHO, silicon-based intelligence would be advantageous for our future. Once we have rendered Earth unliveable in a couple of centuries, a fleet of a thousand space vehicles could be sent out, each hosting an AI installation. They could travel for millions of years until reaching a suitable planet, then re-establish carbon-based intelligence, i.e. us.
I don't know why people are so worried about AI destroying humanity and the planet; human intelligence seems to be doing its absolute best to get it done before AI gets a chance.
Just take its batteries out. As long as there's an off switch we're fine.
Also it's funny how people only care about this stuff when some hoity-toity professor from Oxford says it, lmao. Anyone with common sense could come to that conclusion themselves. You don't need to publish a paper about it.
The first question asked of super AI will be: "Is there a god?"

And the first answer from super AI will be: "There is now"
Reply 91
Original post by Alexion
Mind if I cite you on this? I'm doing an EPQ on AI's future and would love to quote some of your ideas :smile:


Sure, but the paperclip maximisation example was from this article on this site (which I'd highly recommend in general), which in turn originated from Nick Bostrom, whom I did mention in my post. Superintelligence by Bostrom and Artificial Intelligence: A Modern Approach are excellent sources.

So, this bit is copy and pasted directly from the site, which in turn gets the analogy from Nick Bostrom's work. I think he first came up with it in 2003.

...an AGI whose goal is to maximize the number of paperclips in its collection. If it has been constructed with a roughly human level of general intelligence, the AGI might collect paperclips, earn money to buy paperclips, or begin to manufacture paperclips.

Most importantly, however, it would undergo an intelligence explosion: It would work to improve its own intelligence, where "intelligence" is understood in the sense of optimization power, the ability to maximize a reward/utility function—in this case, the number of paperclips. The AGI would improve its intelligence, not because it values more intelligence in its own right, but because more intelligence would help it achieve its goal of accumulating paperclips. Having increased its intelligence, it would produce more paperclips, and also use its enhanced abilities to further self-improve. Continuing this process, it would undergo an intelligence explosion and reach far-above-human levels.

It would innovate better and better techniques to maximize the number of paperclips. At some point, it might convert most of the matter in the solar system into paperclips.

This may seem more like super-stupidity than super-intelligence. For humans, it would indeed be stupidity, as it would constitute failure to fulfill many of our important terminal values, such as life, love, and variety. The AGI won't revise or otherwise change its goals, since changing its goals would result in fewer paperclips being made in the future, and that opposes its current goal. It has one simple goal of maximizing the number of paperclips; human life, learning, joy, and so on are not specified as goals. An AGI is simply an optimization process—a goal-seeker, a utility-function-maximizer. Its values can be completely alien to ours. If its utility function is to maximize paperclips, then it will do exactly that.
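To make "optimization power" and "utility-function-maximizer" a bit more concrete, here's a toy sketch in Python (my own illustration, not from the article or from Bostrom - the actions and numbers are entirely made up):

```python
# Toy goal-seeker: it ranks candidate actions purely by how many paperclips
# they lead to, and nothing in the loop ever re-examines the goal itself.

def utility(state):
    # The agent's entire value system: more paperclips = better.
    return state["paperclips"]

def step(state, action):
    # Invented effects of each action on the world state.
    state = dict(state)
    if action == "manufacture":
        state["paperclips"] += 10 * state["capability"]
    elif action == "self_improve":
        state["capability"] *= 2  # more optimisation power for later steps
    return state

def choose_action(state, actions, horizon=3):
    # Pick whichever action leads to the most paperclips a few steps ahead.
    if horizon == 0:
        return None, utility(state)
    best_action, best_value = None, float("-inf")
    for action in actions:
        _, value = choose_action(step(state, action), actions, horizon - 1)
        if value > best_value:
            best_action, best_value = action, value
    return best_action, best_value

state = {"paperclips": 0, "capability": 1}
for _ in range(5):
    action, _ = choose_action(state, ["manufacture", "self_improve"])
    state = step(state, action)
    print(action, state)
```

In this short run the toy agent keeps picking "self_improve" - not because it values intelligence for its own sake, but because, within every three-step lookahead, doubling its capability leads to more paperclips than manufacturing now. That's the instrumental logic the quoted passage describes, and nothing anywhere in the code ever questions the utility function itself.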


Original post by Juichiro
Can't you synthesise your ideas better? That's a whole load of unstructured text to read.


Yes, sorry, I typed it in a rush.

Original post by Juichiro
1. What makes you think so? a) How can you know how an intelligence (more intelligent than you) would react in a particular scenario? I think this is a belief of yours.

2. Again, see point 1. There is no reason to think that a superior intelligence would resort to extreme measures to reach a goal. That's BS. If you know a thing or two about AI, you know that even simple AI can optimise their behaviour to reach a goal while avoiding paths that pose dangers to humans. It's like thinking that a Google car will take any path to your destination without ruling out paths that are dangerous to humans - as I said, nonsense.


In order to achieve its goal, an AGI will be able to disable backdoors, and it would be rational to do so. Current AI do not optimise their behaviour; we optimise their behaviour, and we can only do so because the AI are inept at performing any other function. An Artificial General Intelligence is a different matter altogether.

Original post by Juichiro
3. " Or, we could propose the existence of an AGI whose goal is to maximize the number of paperclips in its collection. If it has been constructed with a roughly human level of general intelligence, the AGI might collect paperclips, earn money to buy paperclips, or begin to manufacture paperclips." What is the point of this? This is not relevant to anything I said.

4. "Most importantly, however, it would undergo an intelligence explosion: It would work to improve its own intelligence, where "intelligence" is understood in the sense of optimization power, the ability to maximize a reward/utility function—in this case, the number of paperclips. The AGI would improve its intelligence, not because it values more intelligence in its own right, but because more intelligence would help it achieve its goal of accumulating paperclips. Having increased its intelligence, it would produce more paperclips, and also use its enhanced abilities to further self-improve. Continuing this process, it would undergo an intelligence explosion and reach far-above-human levels." Another pointless paragraph. This tackles nothing of what I said.


All of the paragraphs are part of the same analogy. You stated that there's no reason to believe that an AI will be a "human killer". This analogy demonstrates that exogenously defining even a superficially harmless utility function can result in disastrous consequences.

"The AGI won't revise or otherwise change its goals, since changing its goals would result in fewer paperclips being made in the future, and that opposes its current goal."

Original post by Juichiro
That's a very simplistic notion of a synthetic intelligence. Biological programming is loose enough to allow for a changing set of goals in humans, it would be ridiculous to think that this type of loose programming has no equivalent in other mediums. A synthetic intelligence (not necessarily, equal to or above human levels) would need to be able to adapt to its environment and that at some point will involve the ability to be the agent or the subject of changing goals.


Why would it need to be able to adapt to its environment? Adapting to its environment does not at all mean that it takes on human values. Humans have evolved emotions and cognitive biases which allow them to change their goals; I see no reason why an AGI would be able to just evolve the ability to change its goals.

Original post by Juichiro
Assuming that there is no way its goals can change is ridiculous. We are talking about a super-intelligence; by definition, it will have the same capabilities that we do, and that will include the ability to reflect on and change its goals.


Even assuming that it could change its goals, they would not necessarily be goals which result in desirable outcomes for humanity.

Original post by Juichiro
That's your belief. You haven't given a rational argument as to why a superior power would need to treat us differently than the rest of the life forms on the planet. You are being speciesist. The idea that you can somehow control or limit an intelligence more complex than yours is nonsensical. Go to South America yourself and try to control or limit a random tribe in the Amazon. I bet you will fail. The point is that, by definition, a superintelligence won't be under the control, limitations or understanding of a lower intelligence.


The limitations would exist precisely because an AGI would otherwise be dangerous.

Original post by Juichiro
Any serious researcher will understand that trying to limit a superintelligence to any particular traits is bound to fail. You won't be able to just program "ethical values" into a superintelligence, for the simple reason that you won't be able to program a superintelligence. From our current scientific understanding of intelligence, this intelligence will most likely surpass human capabilities on its own, rather than by a human turning a metaphorical intelligence slider up to the maximum.


You've made a number of unsubstantiated assertions here. I already alluded to the AGI increasing its intelligence on its own (see "intelligence explosion"). We program the utility functions of all AI, so I see no reason why we cannot program an AGI. In fact, the only reason the AGI will exist is because humans have created it. Thus, it's possible that we could program ethical values, although experts in the field have suggested other approaches to limiting the AGI (e.g. keeping it in an elaborate virtual world, so that if it does destroy a world, it will have destroyed a virtual world; only allowing it to answer questions in a 'yes' or 'no' fashion, and so on).

Original post by Juichiro
Interestingly enough, there is no reason to believe that there is an objective morality any more than there is an objective beauty. We wish there were such a thing, but that does not seem to be the case. The very words "morality", "right", "wrong", "good" and "bad" refer to physical states that are desirable or undesirable to a particular individual. And even if there were an objective morality, you would not be able to enforce it on humans, because humans choose what to follow. If you can't enforce it on human-level intelligence, you won't be able to enforce it on a more complex intelligence.

A more complex intelligence is likely to seem to us to display unpredictable and possibly irrational behaviour. The biological limits that constrain our behaviour won't apply to this synthetic intelligence, and this will just add to its unpredictability. A mathematician once gave a mathematical definition of complexity above which physical phenomena cannot be comprehended by us. A superintelligence is likely to have a similar property.


Humans choose what to follow precisely because they have undergone the process of biological evolution. Our biology, namely our heuristics, cognitive biases and emotions, is what enhances our irrational behaviour; it does not limit it.

A superintelligence will not be biological: it would have no reason to arbitrarily "choose" an alternative utility function.
More likely it'll be some idiot in the USA pressing the big red button on the football, to be honest.
Original post by miser
1. Okay, I see what you're saying. I agree that living in extreme poverty is not much better than death, but the species is surviving. Even if it's completely intolerable for those people, it still only affects portions of humanity and it is (likely to be) temporary (either remedied one day by technology or we all die). Edit: I should say it's not temporary for the people who don't survive it, but it may be from the perspective of humanity.


2. The likelihood of zombie attack seems a little tangential to the central topic. I personally have seen no evidence that a zombie-like apocalypse scenario is at all likely, but if this is mistaken then I'm willing to update my beliefs based on new evidence on this topic.


3. It is taken seriously, but there are serious limitations on aliens being able to do this. The immediate space around us is apparently empty - we've been listening for radio signals for over 30 years but haven't discovered anything noteworthy. If aliens can reach us, they will have far greater technological capacity than we do, in which case we almost definitely wouldn't be able to resist them. And then, if they do have that technological capability, they most likely don't need our resources or to enslave us or anything like that. It presently seems unlikely aliens are going to show up any time soon - ASI on the other hand is often estimated to come about in the next 100 years.


4. Yes, if they existed, they'd be a threat - but I think we can be reasonably confident that they don't exist, given the absence of any evidence to believe in them.


5. Like I said before, 'existential risk' is risk to humanity as a species. Yes, researching the dangers of ASI doesn't affect many of us as individuals (indeed everyone alive now may be dead before it arrives), but it has wide implications for humanity as a species.

6. As for the means of ensuring ASI doesn't kill us, that's very much up in the air.


7. Can we not agree that they are both urgent and serious? It's not as if we can only choose to research one thing.


8. No, I don't think I'm mistaken here. Every person's death and suffering is a tragedy - I wholeheartedly agree with that. But surely if one person's death is a tragedy, then two persons' deaths are even more so, and the death of everyone the most so?


9. I think I agree with this. If you know you're going to die tomorrow, then having a disease that'll kill you in a week doesn't matter. So I would agree that ASI doesn't matter to those people who will die before its advent, but there most likely will be an advent, and at that time there will also be people, and they may die. And not just everyone around at that time, but the entire future of humanity may be denied as some ASI turns the whole world into a factory for producing pencils.


10. Yes. I won't dispute that poverty is devastating for individuals, but thankfully it is on a smaller scale than being a threat to the entirety of humanity.


11. I said that because phrasing it in such a nonchalant way doesn't do justice to the scale of the threat. "An electronic machine running wild" doesn't really call to mind the gravity of the implications of ASI. And again, poverty is terrible, but it does only threaten death to some but not all of the global population. The scales are different.


12. Yes, if you are afflicted by it, but not everyone is afflicted by extreme poverty. On the other hand, everyone will be afflicted by the advent of ASI.


13. I think to a certain extent we're talking around each other. I do understand where you're coming from because I too hold an individualistic approach to ethics. I don't personally think it matters that humanity might get wiped out, except insofar as to do it, everyone would have to be killed first. It's all that killing that I have an opposition to, and as tragic as absolute poverty is, it is less tragic than the killing of literally everyone.

14. I hope I've given a sufficient response but it was quite long so I tried to address the main points and not get too lost in detail.


I will try to be brief.

1. I personally don't think about "humanity" as a species but about the total sum of the individuals that exist right now. Their suffering is the only human suffering that matters to me. I don't have any particular interest in humanity as a species or its future. I am only concerned about the individuals that exist right now. So when I see an issue that exists right now and is causing lethal suffering to existing humans, I consider that issue a priority. The way I see it, with or without ASI, 70-80% of the human population have a life of suffering and die. This is 100% guaranteed. The presence or absence of ASI does not significantly increase the suffering of the world. It increases the suffering of the 20-30% who are not currently suffering. And I don't think I need to quantify suffering to prioritise the real suffering of the 70-80% over the possible suffering of the 20-30%. As you can see, for me it's literally a matter of probability. This is why for me, poverty ranks higher than ASI in the list of threats to humans.

I understand that if I was to regard the indefinite number of humans that could be born until this universe stops being friendly to carbon-based life, then I would also place ASI higher than poverty.

2. It has no direct relevance. I used it for illustration purposes, regarding the literature on threats to humans as a whole.

3. Point 2 also applies. I won't delve into that to avoid derailing the thread.

4. Point 2 also applies.

5. I addressed this in Point 1.

6. Indeed.

7. You could make a case for ASI being as important as poverty. But research resources are not infinite. If you invest resources in one area, you reduce the resources available in another area. This is why science is the way it is with regard to securing grants and acquiring reputation and prestige. You could give ASI and poverty the same amount of resources, but you would first need to decrease resources from poverty issues to increase the resources available to tackle ASI.

[This is not directly related] For the reasons given in Point 1 I don't support this (by this I mean giving poverty and ASI equal priority when it comes to research resources), and to be honest, I don't see the public and research councils supporting this either. And in the whole of research, if you can't make a convincing case (i.e. why you deserve the resources more than the other researchers), you don't get the resources you need. This last point is not directly relevant to the issue in terms of discussion, but it is an issue when it comes to actually carrying out the research on ASI. Of course, my point does not apply if the resources come from private organisations. [/This is not directly related]

8. Maybe I was not clear on my views. I prioritise the individual who exists now over the individual who might exist in the future. Similarly, I prioritise current lethal suffering over possible (future) death. Death (of natural causes, that is) is currently inevitable, suffering (as in poverty-related suffering) currently is not. If you join these points, I think my stance on this becomes clear. I also understand that if I prioritised death over suffering and the existence of the species over the existence of existing individuals, I would have the same opinion as you.

9. Point 1 addresses my views on this.

10. Point 1 addresses my views on this.

11. Point 1 addresses my views on this.

12. Point 1 addresses my views on this.

13. Yes, I think we start from different points when it comes to ethics. As I said, 70-80% of the current population suffer and die with 100% probability, with or without ASI. The future presence of ASI increases this by 20-30% (even if the probability were 100%, which it is not). So to me it comes down to weighing the current suffering of 70-80% of humans against the possible suffering of 20-30% of humans. As said, I choose the individuals over the species and the real over the possible.

14. Your response was detailed enough and helped to express my points better.
Original post by miser

This is along the lines of what I thought before reading about ASI - I thought ASI would have no reason to kill everyone. But that's wrong. 1. Even with a totally arbitrary goal, like say we told it to simply improve itself and see how smart it could get - even with 2. that nice benign goal, it would think, "humans can turn me off or reprogram me, therefore they are a risk to me accomplishing my objective, therefore I should get rid of all humans." 3. It's very difficult to design an ASI that is both useful and safe - how to do that is something researchers haven't come up with an answer to yet.


1. The problem with doing that is that a vague goal can be interpreted in many ways. Also, anyone who asks "like say we told it to simply improve itself and see how smart it could get" has obviously not thought through the consequences. It is like launching an "intelligent" flying atomic bomb and giving it the goal of "smashing itself against the most horrible thing". I see this as human error and as why goals should be given carefully (assuming that's possible).

2. I don't personally find that goal nice or benign, and even without the "ASI turned human-killer" scenario, it is easy to see that vague instructions are not a good idea. One just has to look at some religions to see how it ends badly even if you are not a superintelligence.

3. I personally don't think it is possible, for the simple reason that you cannot guarantee that an intelligence superior to yours won't get out of the limitations you impose on it. And if we are talking of a superior intelligence, all bets are off. In my opinion, safety should not be traded for convenience in this case. And artificial intelligence can still be useful in its current form: segmented.
Original post by viddy9


Yes, sorry, I typed it in a rush.



1. In order to achieve its goal, an AGI will be able to disable backdoors, and it would be rational to do so. Current AI do not optimise their behaviour; we optimise their behaviour, and we can only do so because the AI are inept at performing any other function. An Artificial General Intelligence is a different matter altogether.



2. All of the paragraphs are part of the same analogy. You stated that there's no reason to believe that an AI will be a "human killer". This analogy demonstrates that exogenously defining even a superficially harmless utility function can result in disastrous consequences.

"The AGI won't revise or otherwise change its goals, since changing its goals would result in fewer paperclips being made in the future, and that opposes its current goal."



3. Why would it need to be able to adapt to its environment? Adapting to its environment does not at all mean that it takes on human values. Humans have evolved emotions and cognitive biases which allow them to change their goals; I see no reason why an AGI would be able to just evolve the ability to change its goals.



4. Even assuming that it could change its goals, they would not necessarily be goals which result in desirable outcomes for humanity.



5. The limitations would exist precisely because an AGI would otherwise be dangerous.



6. You've made a number of unsubstantiated assertions here. I already alluded to the AGI increasing its intelligence on its own (see "intelligence explosion"). We program the utility functions of all AI, so I see no reason why we cannot program an AGI. In fact, the only reason the AGI will exist is because humans have created it. Thus, it's possible that we could program ethical values, although experts in the field have suggested other approaches to limiting the AGI (e.g. keeping it in an elaborate virtual world, so that if it does destroy a world, it will have destroyed a virtual world; only allowing it to answer questions in a 'yes' or 'no' fashion, and so on).



Humans choose what to follow precisely because they have undergone the process of biological evolution. Our biology, namely our heuristics, cognitive biases and emotions, is what enhances our irrational behaviour; it does not limit it.

A superintelligence will not be biological: it would have no reason to arbitrarily "choose" an alternative utility function.


1. You are factually wrong. You said that AIs do not optimise their behaviour; AIs do optimise their own behaviour. A machine does not even need to be an AI to optimise its own behaviour; behaviour optimisation predates the very birth of the AI field. You said AIs do not optimise their behaviour, yet then you said "we can only do so because the AI are inept at performing any other function". It sounds to me like you implied that the only thing AIs do is optimise their behaviour (which is false, by the way), so you are contradicting yourself. I will leave it here.

2. Yes, but disastrous consequences do not necessarily mean humans getting killed. There are literally thousands of utility functions I could make and none of them would kill me.

"The AGI won't revise or otherwise change its goals, since changing its goals would result in fewer paperclips being made in the future, and that opposes its current goal." I disagree with you assumption that an AGI could change its goals. I think it could in the same way humans do.

3. Being able to adapt to your environment is one of the proposed definitions of learning, which is considered a key feature of intelligence. Emotions are, from a behavioural point of view, just a force that changes your goals. Such a behavioural view of emotions can be programmed into the most basic of AIs, and cognitive biases could be too. There is no need for evolution. But I don't see this being relevant to the thread topic.

4. Indeed. The ability to change goals or the lack thereof in no way guarantees human safety.

5. You are missing my point. I am not asking for the rationale of setting limitations on a super intelligence. My point is that I don't think you can produce a superior intelligence and set reliable limits on it that will ensure the super intelligence does not get out of your control. To me it sounds like trying to control a lion's movements with the type of leash you use on a dog. It is a limitation but not a reliable one.

6. Please feel free to point them out. Anyway, I haven't seen any substantiated points from you. As far as I am aware, we are all giving our opinions here. Otherwise, I would expect everybody's posts to be filled with references and links to scientific studies on human intelligence, computer science, robotics and AI.

7. "We program the utility functions of all AI, so I see no reason why we cannot program an AGI." I may be mistaken but the whole idea of programming an intelligence died in the 60s. The paradigm of intelligence is not a programming paradigm per se. It is a learning paradigm.

8. "the only reason the AGI will exist is because humans have created it." Indeed. AGI is a form of synthetic intelligence which by definition only humans can create. But I would disagree about the idea of programming intelligence. The AI field today is about programming learning abilities not programming intelligence. That is why Machine Learning is the hottest area in AI today. "it's possible that we could program ethical values". It would be interesting to see it in practice. I assume you are aware that people have different ideas of what ethical values are, right? Not allowing members of the opposite sex to drive is considered ethical by some, killing animals for pleasure is considered ethical by some and unethical by some and the list goes on. Even if a person could program them, someone else could still accuse you of not giving ethical values to the AI.
"(i.e. keeping it in an elaborate virtual world, so that if it does destroy a world, it will have destroyed a virtual world; only allowing it to answer questions in a 'yes' or 'no' fashion, and so on.)" Not sure if you are aware of this, but an intelligence that can interact with the whole with the less intermediaries will likely learn faster. Virtual worlds are not elaborate enough. They do not have real humans in them, they are not subjected to the same physical laws as we are here.

As an example, it would be nearly impossible for an AI to devise the principles of particle physics or biochemistry in such a world, because that world would not be complex enough to contain such phenomena. And there are only so many questions you can answer with a yes or a no; most questions relevant to maths and the sciences are not among them.

9. "Humans choose what to follow precisely because they have undergone the process of biological evolution. " You can simulate evolution in a computer and it has been done for many decades.

" Our biology, namely our heuristics, cognitive biases and emotions". I don't mean to be rude but biology is not heuristics, cognitive biases and emotions. These three things are the result of biological and bio-chemical phenomena. Biology covers a large number of other topics.

"Our biology, namely our heuristics, cognitive biases and emotions, is what enhances our irrational behaviour; it does not limit it." I did not say that these 3 things limit our behaviour (if you think I did say it please provide link). So this point is irrelevant.

"A superintelligence will not be biological" That's not a statement I agree with. We know that the level of capabilities we relate to intelligence are increasing on average every year. And besides, evolution could produce an intelligence superior than us as evolution is the only known process that can produce intelligence.

" A superintelligence will not be biological: it would have no reason to arbitrarily "choose" an alternative utility function." It does not need to have a reason to choose an alternative utility function. And one could even design a simple "agent" that is given multiple utility functions and is given the choice to choose one of them. There is no reason or limit to the number of utility functions it could have.
Original post by ChickenMadness
Just take its batteries out. As long as there's an off switch we're fine.


Hehe, good idea. The problem can be solved so easily. I wonder why the professors have not had this idea yet.
Reply 97
Original post by Juichiro
I will try to be brief.
1. I personally don't think about "humanity" as a species but about the total sum of the individuals that exist right now. Their suffering is the only human suffering that matters to me. I don't have any particular interest in humanity as a species or its future. I am only concerned about the individuals that exist right now. So when I see an issue that exists right now and is causing lethal suffering to existing humans, I consider that issue a priority. The way I see it, with or without ASI, 70-80% of the human population have a life of suffering and die. This is 100% guaranteed. The presence or absence of ASI does not significantly increase the suffering of the world. It increases the suffering of the 20-30% who are not currently suffering. And I don't think I need to quantify suffering to prioritise the real suffering of the 70-80% over the possible suffering of the 20-30%. As you can see, for me it's literally a matter of probability. This is why for me, poverty ranks higher than ASI in the list of threats to humans.
I understand that if I was to regard the indefinite number of humans that could be born until this universe stops being friendly to carbon-based life, then I would also place ASI higher than poverty.

Okay, that's fair enough. I would just say that it can be a priority, but that multiple objectives of differing priorities can be simultaneously addressed in balance.

Original post by Juichiro
2. It has no direct relevance. I used it for illustration purposes, regarding the literature on threats to humans as a whole.
3. Point 2 also applies. I won't delve into that to avoid derailing the thread.
4. Point 2 also applies.
5. I addressed this in Point 1.
6. Indeed.

All cool with me.

Original post by Juichiro
7. You could make a case for ASI being as important as poverty. But research resources are not infinite. If you invest resources in one area, you reduce the resources available in another area. This is why science is the way it is with regard to securing grants and acquiring reputation and prestige. You could give ASI and poverty the same amount of resources, but you would first need to decrease resources from poverty issues to increase the resources available to tackle ASI.

That's true, sort of. A researcher who's trained in computer science wouldn't be directly able to transition into researching poverty (except perhaps using computer science to tackle poverty in some way). Funding is fungible though.

Original post by Juichiro
[This is not directly related] For the reasons given in Point 1 I don't support this (by this I mean giving poverty and ASI equal priority when it comes to research resources), and to be honest, I don't see the public and research councils supporting this either. And in the whole of research, if you can't make a convincing case (i.e. why you deserve the resources more than the other researchers), you don't get the resources you need. This last point is not directly relevant to the issue in terms of discussion, but it is an issue when it comes to actually carrying out the research on ASI. Of course, my point does not apply if the resources come from private organisations. [/This is not directly related]

I don't know what the current distribution of research funding is, but I imagine only a minority of it goes towards poverty. Personally, I'm impressed by your humanitarian views, but unfortunately the competition for research funding is tough. Compared to a lot of legitimately important things that are being funded, though, securing the future of humanity has to be up there, in my opinion.

Original post by Juichiro
8. Maybe I was not clear on my views. I prioritise the individual who exists now over the individual who might exist in the future. Similarly, I prioritise current lethal suffering over possible (future) death. Death (of natural causes, that is) is currently inevitable, suffering (as in poverty-related suffering) currently is not. If you join these points, I think my stance on this becomes clear. I also understand that if I prioritised death over suffering and the existence of the species over the existence of existing individuals, I would have the same opinion as you.

I agree with prioritising individuals who exist now over hypothetical individuals who may exist in the future. However, this is not hypothetical - there definitely will be individuals who exist in the future.

Original post by Juichiro
9. Point 1 addresses my views on this.
10. Point 1 addresses my views on this.
11. Point 1 addresses my views on this.
12. Point 1 addresses my views on this.
13. Yes, I think we start from different points when it comes to ethics. As I said, 70-80% of the current population suffer and die with 100% probability, with or without ASI. The future presence of ASI increases this by 20-30% (even if the probability were 100%, which it is not). So to me it comes down to weighing the current suffering of 70-80% of humans against the possible suffering of 20-30% of humans. As said, I choose the individuals over the species and the real over the possible.
14. Your response was detailed enough and helped to express my points better.

I think I agree that humanitarian research should be prioritised over ASI research; however, ASI research is currently receiving tremendously little funding given how gravely it has the potential to affect the future of humanity. Therefore, I think it should be favoured over less important technological research.
