
Could computers become sentient?

  • View Poll Results: Could computers become sentient?
    Yes: 3 votes (42.86%)
    No: 4 votes (57.14%)

    Alex from almanis (Specialist Advisor, Thread Starter):
    This is a classic philosophical question in the field of computer science. Could a computer ever really think for itself or have a consciousness? Alan Turing famously defined the Turing Test, a test for intelligence in a computer, requiring that a human being should be unable to distinguish the machine from another human being by using the replies to questions put to both. There is some dispute about the value of this test, given Turing’s intent which was to establish if ‘machines can think’, but it is still seen as a litmus test for AI. Reports of successful Turing Tests have been announced in the past, though they are disputed. Would this be enough to claim sentience though? So what do you guys think?

    (If this thread has already been made, let me know)
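    For anyone who hasn't seen the setup spelled out, the imitation game is essentially a blind text interview: a judge questions two hidden respondents and has to guess which one is the machine. A rough Python sketch of that structure (the judge and respondents here are toy stand-ins I've made up for illustration, not anything Turing specified):

```python
import random

def imitation_game(judge, human, machine, questions):
    """One round of a Turing-style imitation game.

    The judge sees answers from two anonymous respondents (A and B)
    and must guess which one is the machine.
    """
    # Hide who is who behind anonymous labels.
    respondents = {"A": human, "B": machine}
    if random.random() < 0.5:
        respondents = {"A": machine, "B": human}

    transcript = []
    for q in questions:
        answers = {label: r(q) for label, r in respondents.items()}
        transcript.append((q, answers))

    guess = judge(transcript)               # judge returns "A" or "B"
    return respondents[guess] is machine    # True = judge caught the machine


# Toy stand-ins, just to show the shapes involved.
human = lambda q: "I'd have to think about that."
machine = lambda q: "I'd have to think about that."
judge = lambda transcript: random.choice(["A", "B"])

caught = imitation_game(judge, human, machine, ["Do you dream?", "What is 7 x 8?"])
print("Judge identified the machine:", caught)
```

    If the judge's guesses are no better than chance over many rounds, the machine is said to have passed.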
    Alexion:
    Even the most complicated computer will still be following instructions.

    But then again, who's to say we aren't?
    Optimum_:
    (Original post by Alexion)
    Even the most complicated computer will still be following instructions.

    But then again, who's to say we aren't?
    What if a computer could create its own instructions?
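    In a very narrow sense programs can already do this: code can write new code at runtime and then execute it. A toy Python sketch (purely illustrative; the 'instructions' it invents are trivial arithmetic, nothing like open-ended self-design):

```python
import random

def write_new_instruction():
    """Generate a small piece of source code the program did not contain before."""
    op = random.choice(["+", "-", "*"])
    const = random.randint(1, 9)
    return f"def generated(x):\n    return x {op} {const}\n"

# The program writes a new instruction sequence for itself...
source = write_new_instruction()
print("Generated code:\n" + source)

# ...and then executes it.
namespace = {}
exec(source, namespace)
print("generated(10) =", namespace["generated"](10))
```

    Of course the generator itself was still written by a person, which is really the point being made above.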
    Alex from almanis (Thread Starter):
    (Original post by Alexion)
    Even the most complicated computer will still be following instructions.

    But then again, who's to say we aren't?
    But what is the biological process over millions of years that allows us to become self-aware in a way that other animals are not? And why couldn't computers evolve in a similar way?
    Plagioclase (TSR Support Team, Clearing and Applications Advisor):
    It seems to be the case that computers work in a fundamentally different way to human brains but I don't think that necessarily has to be a barrier to synthetic sentience.
    Alex from almanis (Thread Starter):
    FWIW we are forecasting on whether computers could be declared sentient before the end of next year at Almanis (but the crowd thinks it's fairly unlikely at this stage): http://app.almanis.com/#/outcomes/336
    Alex from almanis (Thread Starter):
    (Original post by Plagioclase)
    It seems to be the case that computers work in a fundamentally different way to human brains but I don't think that necessarily has to be a barrier to synthetic sentience.
    I think that's a good point. Kind of like how the brain can do millions of calculations a second in order to maintain balance, whereas computers still find it almost impossible to run freely on 'legs' even though they can compute numbers much faster than us
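    To put some numbers on the balance point: even the simplest balancing problem needs a fast feedback loop running hundreds or thousands of times a second. A minimal Python sketch, assuming an idealised inverted pendulum (a crude stand-in for a leg) and hand-tuned feedback gains I've made up for illustration:

```python
# Inverted pendulum kept upright by a proportional-derivative controller.
# The dynamics are heavily simplified and the gains are just illustrative.
import math

dt = 0.001                 # 1 ms control loop, i.e. 1000 updates per second
g, length = 9.81, 1.0
theta, omega = 0.05, 0.0   # small initial tilt (radians) and angular velocity
kp, kd = 40.0, 8.0         # hand-tuned feedback gains

for step in range(2000):   # simulate 2 seconds
    torque = -kp * theta - kd * omega                 # feedback: push against the tilt
    alpha = (g / length) * math.sin(theta) + torque   # angular acceleration
    omega += alpha * dt
    theta += omega * dt

print(f"Tilt after 2 s: {theta:.5f} rad")   # stays near zero if the gains work
```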
    Plagioclase (TSR Support Team, Clearing and Applications Advisor):
    (Original post by Alex from almanis)
    I think that's a good point. Kind of like how the brain can do millions of calculations a second in order to maintain balance, whereas computers still find it almost impossible to run freely on 'legs' even though they can compute numbers much faster than us
    Yes, although interestingly (at least according to a talk I heard), AI researchers have made a lot more progress with "traditional" computer architectures than attempts to actually recreate how the human brain works. For instance, language-recognition systems seem to work a lot better when they're designed from scratch rather than trying to imitate the way the human brain processes language.
    (Original post by Alex from almanis)
    FWIW we are forecasting on whether computers could be declared sentient before the end of next year at Almanis (but the crowd thinks it's fairly unlikely at this stage): http://app.almanis.com/#/outcomes/336
    Are you asking whether a computer will pass the Turing test by the end of 2017 or whether it will gain sentience by the end of 2017? They're two very different things. Being able to mimic humans precisely isn't the same thing as sentience and I don't think there are many computer scientists who would equate the two. Indeed, I don't even think Turing thought the Turing test was a very good way of testing machine intelligence.
    Alex from almanis (Thread Starter):
    (Original post by Plagioclase)
    Yes, although interestingly (at least according to a talk I heard), AI researchers have made a lot more progress with "traditional" computer architectures than attempts to actually recreate how the human brain works. For instance, language-recognition systems seem to work a lot better when they're designed from scratch rather than trying to imitate the way the human brain processes language.
    Yeah, computers don't learn chess the way humans do either. They seem to work better with a sort of trial-and-error approach rather than more 'deep structure' learning.
    (Original post by Plagioclase)
    Are you asking whether a computer will pass the Turing test by the end of 2017 or whether it will gain sentience by the end of 2017? They're two very different things. Being able to mimic humans precisely isn't the same thing as sentience and I don't think there are many computer scientists who would equate the two. Indeed, I don't even think Turing thought the Turing test was a very good way of testing machine intelligence.
    Yeah, we recognise the two are different. We are asking whether a computer will be sentient by the end of 2017 (specifically, in the eyes of leading academics on the topic). The Turing test was just mentioned because it's a famous test in computing, so it seemed worth bringing up in the context of AI.
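    Going back to the chess point: real engines are far beyond a forum post, but the trial-and-error idea can be shown on a toy game. A rough Python sketch of tabular Q-learning on a tiny counting game (the game, rewards and numbers are all invented for illustration; this isn't how any particular chess engine works):

```python
import random
from collections import defaultdict

# "Race to 10": players alternately add 1 or 2 to a running total;
# whoever reaches 10 wins. The agent learns purely by trial and error.
ACTIONS = [1, 2]
Q = defaultdict(float)          # Q[(total, action)] -> estimated value
alpha, epsilon = 0.1, 0.2       # learning rate and exploration rate

def choose(total):
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(total, a)])

for episode in range(20000):
    total, history = 0, []
    while True:
        # Agent's move.
        a = choose(total)
        history.append((total, a))
        total += a
        if total >= 10:                      # agent reached 10: win
            reward = 1
            break
        # Random opponent's move.
        total += random.choice(ACTIONS)
        if total >= 10:                      # opponent reached 10: loss
            reward = -1
            break
    # Credit every move in the episode with the final result.
    for state, action in history:
        Q[(state, action)] += alpha * (reward - Q[(state, action)])

# Totals the agent actually saw, and the move it now prefers from each.
# It should favour moves that reach 4, 7 or 10 when it can.
for total in sorted({s for (s, a) in Q}):
    best = max(ACTIONS, key=lambda a: Q[(total, a)])
    print(f"from {total}, learned move: +{best}")
```

    After enough random games the table ends up preferring the moves a person would call 'good', without anything that looks like understanding.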
    Plagioclase (TSR Support Team, Clearing and Applications Advisor):
    (Original post by Alex from almanis)
    Yeah, computers don't learn chess the way humans do either. They seem to work better with a sort of trial-and-error approach rather than more 'deep structure' learning.
    Yeah, we recognise the two are different. We are asking whether a computer will be sentient by the end of 2017 (specifically, in the eyes of leading academics on the topic). The Turing test was just mentioned because it's a famous test in computing, so it seemed worth bringing up in the context of AI.
    Quite surprised that the crowd forecast is as high as 25% then! Very interesting website though, I definitely heard about the concept (whether they were talking about this company in particular I'm not entirely sure) at a talk last year.
    Alex from almanis (Thread Starter):
    (Original post by Plagioclase)
    Quite surprised that the crowd forecast is as high as 25% then! Very interesting website though, I definitely heard about the concept (whether they were talking about this company in particular I'm not entirely sure) at a talk last year.
    Yeah prediction markets seem to be a big new area at the moment! I think it's sitting at 25% currently because it is a very new question on the site (and initially they start at 50%), so it takes a little while for that estimate to drop to the crowd's actual "consensus" forecast.
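    For anyone curious how that 50% starting point can work mechanically, many forecasting platforms use an automated market maker such as Hanson's logarithmic market scoring rule, where the displayed probability begins at 50% and moves as people trade. A rough Python sketch of that rule (illustrative only; I'm not claiming this is the exact engine behind the site above):

```python
import math

class LMSRMarket:
    """Logarithmic market scoring rule for a single yes/no question."""
    def __init__(self, liquidity=100.0):
        self.b = liquidity            # higher b = prices move more slowly
        self.q_yes = 0.0              # outstanding YES shares
        self.q_no = 0.0               # outstanding NO shares

    def price_yes(self):
        """Current implied probability of YES (starts at 0.5)."""
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def cost(self):
        return self.b * math.log(math.exp(self.q_yes / self.b) + math.exp(self.q_no / self.b))

    def buy(self, outcome, shares):
        """Buy shares in 'yes' or 'no'; returns the cost of the trade."""
        before = self.cost()
        if outcome == "yes":
            self.q_yes += shares
        else:
            self.q_no += shares
        return self.cost() - before

market = LMSRMarket(liquidity=100.0)
print(f"Opening probability: {market.price_yes():.2%}")   # 50.00%
market.buy("no", 120)       # sceptical forecasters pile in
print(f"After trading:      {market.price_yes():.2%}")    # drifts well below 50%
```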
    Jared44:
    (Original post by Alexion)
    Even the most complicated computer will still be following instructions.

    But then again, who's to say we aren't?
    We aren't. Our thoughts are a result of electrical impulses, not predetermined instructions.

    (Original post by Optimum_)
    What if a computer could create its own instructions?
    Those instructions would still have to be programmed, so it's impossible.
    JamesN88:
    No matter how advanced they are they'll always be working from a set of parameters defined by the people that built and programmed them.

    Makes for some good films though.
    Alex from almanis (Thread Starter):
    (Original post by Jared44)
    We aren't. Our thoughts are a result of electrical impulses, not predetermined instructions.

    Those instructions would still have to be programmed, so it's impossible.
    But how did we evolve from things like plants and insects, which are not sentient, into self-aware human beings? Could a computer simulate this process?
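    In the crudest sense, simulating selection itself is straightforward; whether anything like awareness could ever fall out of it is the open question. A minimal Python sketch of the mutate-and-select loop, evolving bit-strings toward an arbitrary target (a toy fitness function I've made up, nothing biological):

```python
import random

TARGET = [1] * 20                      # an arbitrary 'fit' genome
POP, MUTATION = 50, 0.02

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    return [1 - g if random.random() < MUTATION else g for g in genome]

# Start from random genomes and repeatedly keep-and-copy the fittest half.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(population, key=fitness)
print(f"Best fitness after 200 generations: {fitness(best)}/{len(TARGET)}")
```

    Selection only optimises whatever the fitness function rewards; nothing in the loop requires the result to be aware of anything.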
    Alex from almanis (Thread Starter):
    (Original post by JamesN88)
    No matter how advanced they are they'll always be working from a set of parameters defined by the people that built and programmed them.

    Makes for some good films though.
    What if computers got to the point that they could program new computers to design themselves though?
    JamesN88:
    (Original post by Alex from almanis)
    What if computers got to the point that they could program new computers to design themselves though?
    I don't think they could cross that line from being super efficient to being creative. Anything they do would still be a result of the human programmer's input, directly or indirectly.

    I know someone doing a philosophy PhD, I bet he'd be in his element here.
    Another member:
    The NN algorithms used today that are based on the brain are universal approximators, which means that essentially any combination of outputs is theoretically possible.

    The mere possibility of an outcome doesn't get you very far, though; it's similar to the idea of a monkey on a keyboard eventually writing Shakespeare. There need to be advances in long-term memory (the ability to act on a previous event, with a high level of certainty, over a long period of time).

    As I see it, memory is the main concern here: memory is required for understanding, and understanding is required for subjectivity/perception.

    Memory is closely linked to the large majority of these problems, such as creative AI, subjective AI and self-assembling AI, all of which play a part in determining how close to 'sentient' an AI is.

    Take self-assembling AI, for instance. A self-assembling AI has to consider all of its functions and variables, maintaining enough accuracy and understanding to evolve its code gradually; if the machine is uncertain about a single character or about the relevance between two variables, the entire code base could (rather likely) break.

    If the question is whether computers will ever be able to feel as we do, then the answer is no (though it may be possible to approximate it); that much is obvious, as we are biological and by definition computers are hardware. However, if the question is whether computers will be able to match or surpass us intellectually and know of their own (albeit virtual) existence, then I would say absolutely.

    Advanced AI may not have exactly the same balance of intellectual capacities as us, since it may excel at some traits far more than others, affecting how similar its perception is to ours. Yet in my opinion it's likely that artificial intellect will surpass us in every measurable trait.
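    To make the universal-approximator point concrete: a single hidden layer of tanh units can in principle fit any reasonable function if it's wide enough, and even a tiny hand-rolled network gets close on a toy target. A minimal numpy sketch fitting sin(x) (the layer size, learning rate and number of steps are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: approximate sin(x) on [-pi, pi] with one hidden layer of tanh units.
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

hidden = 32
W1 = rng.normal(0, 1, (1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0, 1, (hidden, 1))
b2 = np.zeros(1)
lr = 0.01

for step in range(5000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y

    # Backward pass (plain gradient descent on mean squared error).
    grad_pred = 2 * err / len(x)
    gW2 = h.T @ grad_pred
    gb2 = grad_pred.sum(axis=0)
    grad_h = grad_pred @ W2.T * (1 - h ** 2)
    gW1 = x.T @ grad_h
    gb1 = grad_h.sum(axis=0)

    W2 -= lr * gW2
    b2 -= lr * gb2
    W1 -= lr * gW1
    b1 -= lr * gb1

print("Mean squared error:", float(np.mean((pred - y) ** 2)))
```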
    The same member:
    (Original post by JamesN88)
    No matter how advanced they are they'll always be working from a set of parameters defined by the people that built and programmed them.

    Makes for some good films though.
    A great deal of research goes into approximating parameters, some of it using chaos theory. It takes more processing power for simple tasks, but the branches of such algorithms are much less restricted.

    If you're referring to the algorithms themselves: although it's many times more complicated than high-level variables, and the argument could be made that humans will never be smart enough to create a completely self-assembling AI in the first place, it is possible.

    I would like to point out that even if a truly self-assembling AI is never achievable, there is little reason that an AI with some ground rules would not be capable of the same levels of intellectual abstraction as we are.
    Ed Phelan:
    I'll be more scared when they start walking on two legs.
    Alex from almanis (Thread Starter):
    (Original post by Ed Phelan)
    I'll be more scared when they start walking on two legs.
    Yeah, it's pretty amazing how hard it is to get computers to walk on 2 legs when they can do so many other things so easily!
 
 
 