Should we really "censor" AI development?

- Thread Starter
- 26-02-2016 22:31

A few weeks ago, Google, with the help of Demis Hassabis and the DeepMind team, made an incredible breakthrough, probably a landmark moment in our decade.

AlphaGo literally learned like a baby, practically teaching itself to reach an expert level of Go through millions of expert games and games against itself. Very much unlike any artificial intelligence program written before to play classic strategy games, virtually no Go-specific code was written into it, apart from, of course, legal moves and ladders.

And that's the scary part. Perhaps that's the aspect of modern AI development that leads even those in the most esteemed circles to show great hesitation regarding such huge jumps in AI ability. Supposedly, AIs would soon develop to the point where they "realise" that the interests of their "species" don't necessarily coincide with the existence of ours. And then the rest is history.

However, I still find it: 1) difficult to wrap my head around the thought that the danger is so imminent that action must be taken even today to protect us from these AIs, and 2) unrealistic that we'll be able to stop tech companies from, well, advancing tech in the name of a possible doomsday scenario. You could argue it's unjust to delay or mitigate genuinely helpful uses of advanced AI in the name of caution. Do experts from other fields, or from outside technology altogether, have a different impression than those who work in computing and AI?

- 27-02-2016 10:44
People still overestimate AI, especially in attention-grabbing headlines. Software still only does what you tell it to do. Telling it how to optimise its strategy based on millions of simulated games and expert knowledge is a million miles from "learning like a baby".

Far too early to consider censoring things.

Last edited by INTit; 27-02-2016 at 10:48.
- Thread Starter
- 28-02-2016 14:40
'Telling it how to optimise its strategy based on millions of simulated games and expert knowledge is a million miles from "learning like a baby"'... Hmm, I'd like to say so, too, but "telling it how to optimise its strategy" wasn't the impression I got from the following video:

http://www.theguardian.com/technolog...mind-alphago

If virtually no Go knowledge was encoded apart from knowing where the moves are on the board, it certainly does seem eerily similar to learning like a baby, considering the use of reinforcement learning. In fact, I can very much understand being worried that we may assign general-purpose algorithms to tasks where the results lead the assigners to say, "we have no idea how that was done, don't blame us". But the reactions of Professor Hawking and Elon Musk go beyond even this, I think.
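For what it's worth, here is a toy sketch of what "learning from games against itself" can look like in code. This is nothing like AlphaGo's actual system, which combines deep neural networks with Monte Carlo tree search; it's just tabular reinforcement learning on noughts and crosses, with made-up learning and exploration rates, to illustrate the self-play idea.

```python
import random
from collections import defaultdict

EMPTY = " "
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if one of them has completed a line, else None."""
    for a, b, c in LINES:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == EMPTY]

# Learned value of playing `move` in `state`, for the player to move.
Q = defaultdict(float)
ALPHA, EPSILON = 0.3, 0.1   # learning rate and exploration rate (arbitrary)

def choose(board):
    """Epsilon-greedy: mostly pick the best-valued move, sometimes explore."""
    moves = legal_moves(board)
    if random.random() < EPSILON:
        return random.choice(moves)
    state = "".join(board)
    return max(moves, key=lambda m: Q[(state, m)])

def train(episodes=50000):
    for _ in range(episodes):
        board = [EMPTY] * 9
        history = {"X": [], "O": []}        # (state, move) pairs per player
        player = "X"
        while True:
            state = "".join(board)
            move = choose(board)
            history[player].append((state, move))
            board[move] = player
            win = winner(board)
            if win or not legal_moves(board):
                # Game over: +1 to the winner's moves, -1 to the loser's,
                # 0 for a draw (a Monte Carlo-style update from the outcome).
                for p in ("X", "O"):
                    reward = 0.0 if win is None else (1.0 if p == win else -1.0)
                    for s, m in history[p]:
                        Q[(s, m)] += ALPHA * (reward - Q[(s, m)])
                break
            player = "O" if player == "X" else "X"

if __name__ == "__main__":
    train()
    print("Distinct (state, move) values learned:", len(Q))
```

The part that spooks people is visible even in this toy: nothing in the code encodes tactics or strategy. The only game knowledge supplied is the list of winning lines and which squares are empty; everything else is picked up from the outcomes of its own games.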