
Stephen Hawking warns artificial intelligence could end mankind

http://www.bbc.co.uk/news/technology-30290540
I enjoyed this headline immensely... there have been numerous films that demonstrate the dangers of AI and it certainly feels like we are getting close to this kind of future.

Do you think the development of full artificial intelligence would be a danger to humans?
Reply 1
Original post by Foo.mp3
Yes, because Arnie :cool:


We'll just invent time travel and the problem is solved.
He should team up with Martin Rees...

http://en.wikipedia.org/wiki/Our_Final_Hour

a right pair of Cassandras
Reply 3
The proliferation of advanced technology could mean that it's impossible to prevent someone accessing things with the potential to destroy humanity. Machines that can build nanobots, easily-programmable AI, 3D printed weaponry, etc.

With regards to AI specifically, I think it's only a matter of time before AI superior to human intelligence is created, and after that point the future becomes totally unpredictable. Maybe we can instil these machines with values or systems that prevent them from harming us, but that's a long way away. The world's militaries pump incredible amounts of money into these areas, looking for ways to kill people without risking their own lives. AI has been deadly from the beginning.

It's uncertain what will happen. I think we will be okay for a couple of decades at least. After the singularity I think it could go either way.
I think it's good to be cautious, but there are technological constraints that make the concept of a singularity implausible. Also, we have to bear in mind that the type of AI we're using to solve technical problems doesn't necessarily need a will, and in fact I can easily imagine it could be programmed to focus on a specific area.
Original post by Unkempt_One
I think it's good to be cautious, but there are technological constraints that make the concept of a singularity implausible. Also, we have to bear in mind that the type of AI we're using to solve technical problems doesn't necessarily need a will, and in fact I can easily imagine it could be programmed to focus on a specific area.


Many engineers and scientists believe that the singularity isn't just plausible but will actually happen before the end of this century, with predictions as early as 2050.
AI means that it can think for itself. If you programmed it, then either A) it isn't actually AI, it's just a machine, like a computer, where we put in code and it does as we tell it, or B) because it can think for itself, it could eventually just disobey an order.
Someone's watched too much Terminator.
When isn't he saying something is going to wipe out mankind? It's not the first time this year he's said this, I think; he comes out with something every few months.

Posted from TSR Mobile
Original post by Jammy Duel
When isn't he saying something is going to wipe out mankind? It's not the first time this year he's said this, I think; he comes out with something every few months.

Posted from TSR Mobile

I think it's just because it's Stephen Hawking that people take it seriously.
Original post by jam277
I think it's just because it's Stephen Hawking that people take it seriously.


Indeed. But there's one thing he claimed this year that even a secondary school student should be able to tell you shouldn't be an issue for at least thousands of years, more likely tens or hundreds of thousands. Can't remember what it is though; something rather trivial.

Posted from TSR Mobile
I think it's very plausible; it's known as the technological singularity. Interesting stuff.

It makes sense because eventually, if we are able to create true AI, then there's no knowing what it is actually going to do and think.

Sounds mental, since you associate this type of thing with movies like Terminator, but when you think how advanced we have become in even the last 10 years, it makes you wonder what is going to be around in 50 years' time.
Not really, no. I simply can't understand these fears that otherwise intelligent people have.

Anyone who's done any neural networks research knows that we're laughably behind the capacities of the human brain. The brain is MASSIVELY parallel and self-aware in ways that we can't even model, let alone replicate. A future like the one predicted is way, way ahead of us.

We haven't even been able to replicate the behaviour of simple animals, and behaviour (simple if-then rules) is just one aspect of the human brain. Self-awareness is something entirely different and utterly perplexing. It's like a machine that can generate new actions on the fly, based on previous knowledge. We CAN program a machine to do just that (self-learning algorithms), but again only in ways that we can control/specify beforehand (i.e. we also program the actions).
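
To make that last point concrete, here's a toy sketch (purely my own illustration, nothing from the article) of a self-learning agent in Python: tabular Q-learning on a five-cell corridor. The agent genuinely learns from trial and error which action to prefer in each cell, but notice that it can only ever choose from the ACTIONS list we wrote down for it, and the environment rules in step() are hand-written too. "Learning" here means re-weighting programmed options, not inventing new ones.

import random

# Toy world: cells 0..4 in a corridor; reaching cell 4 earns a reward.
ACTIONS = ["left", "right"]   # the complete action set, fixed by the programmer
N_STATES = 5

def step(state, action):
    # Hand-written environment rules; the agent cannot change these.
    nxt = max(0, state - 1) if action == "left" else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

# Q-table: the agent's learned estimate of how good each action is in each state.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma = 0.5, 0.9   # learning rate and discount factor

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        action = random.choice(ACTIONS)   # explore at random (off-policy)
        nxt, reward = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# The learned policy: the best known action per cell, drawn from ACTIONS only.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})

Run it and it settles on "right" in every non-terminal cell, but the only behaviours it can ever exhibit are the ones enumerated in ACTIONS up front.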

In that sense, if we don't program robots to kill people when certain preconditions arise, they are simply not going to do that.
I suppose one could argue that for it to be a serious threat it would need to act as if it has a complex, organic brain, and as the above poster has said, we're well off that. But then also consider that we don't even fully understand the human brain yet (or brains more generally), so how can we truly emulate something that we don't understand?

Posted from TSR Mobile
I'll repeat what I said in a similar thread.

The vast majority of people who come out with this stuff are eccentrics in their field who are trying to come across as visionaries. They are often older people from the field making a final prediction that, if it comes true, will cement their legacy once they've passed on.

I studied Artificial Intelligence modules at university and I can assure you that, whilst these systems are capable of doing some rather extraordinary things, the chances of computers becoming 'fully aware' to the level required to be a threat to us are minimal, if not non-existent, as it currently stands.

The only way I can see robots ever becoming a threat is through them becoming more like us. They would need to be wired and programmed to think like us, and it is only at that point that they might deem us disposable and attack us. It would be our own downfall. However, that is unlikely to happen due to limitations, the lack of understanding being the main one. To create robots that are like us we need to understand ourselves, and of course our understanding of the mind is limited by the field of biology. We don't know nearly enough about ourselves to make robots think, reason or act in the way we do. It may not even be possible. I highly doubt it is.

Artificial Intelligence is one of those fields where the media get carried away due to their lack of understanding when it comes to technology. For those of us who know the reality, however, it does provide a good laugh when we get to read articles such as 'Will robots take over the world?'.
