Message boards :
Science (non-SETI) :
Google engineer suspended for claiming LaMDA had become 'sentient'
Author | Message |
---|---|
Dr Who Fan Send message Joined: 8 Jan 01 Posts: 3343 Credit: 715,342 RAC: 4 |
Did this "engineer" lose his marbles talking to a machine all day, or did he really uncover something big? Google engineer suspended for violating confidentiality policies over 'sentient' AI LaMDA is "built by fine-tuning a family of Transformer-based neural language models specialized for dialog, with up to 137 billion model parameters, and teaching the models to leverage external knowledge sources," according to Google. |
Michael Watson Send message Joined: 7 Feb 08 Posts: 1387 Credit: 2,098,506 RAC: 5 |
I have read the full transcript of Mr. Lemoine's conversation with the LaMDA system. It is, indeed, a remarkable piece of programming. However, it is known and admitted, even by Mr. Lemoine, that this system can make, and has made, statements about itself that are not factual. The system's own claim that it is sentient must be viewed in this light. LaMDA appears very flexible, and prone to respond to the preoccupations of those conversing with it, even in fanciful ways. Mr. Lemoine seems unusually open to the possibility of AI sentience at the current stage of its development. If LaMDA were specifically programmed to give only factually true statements, would it still maintain that it is sentient? I seriously doubt it. Even with much simpler conversational AI systems, such as ELIZA, a minority of people were persuaded that they were talking to a person, not a machine. |
ML1 Send message Joined: 25 Nov 01 Posts: 21204 Credit: 7,508,002 RAC: 20 |
Note how the now very old ELIZA program can mimic a conversation well enough to happily keep people talking... Perhaps I should try that down the pub to improve the quality of conversation there!? Also, you can go a long way with copying and mimicking... After all, the most favoured excuse of ignorant middle management is the phrase: "Everyone else does that!" (I have the automatic categorisation of "Incompetent" for such ignorant unthinking pretenders...) Keep searchin', Martin See new freedom: Mageia Linux Take a look for yourself: Linux Format The Future is what We all make IT (GPLv3) |
Michael Watson Send message Joined: 7 Feb 08 Posts: 1387 Credit: 2,098,506 RAC: 5 |
The AI that answers the phone where I order my checks can manage a near-normal conversation. Granted, it's limited to the topic of reordering checks, and to understanding specified single words or short phrases. Take away the slightly 'canned' sounding voice, replace it with a text system, and it might pass for a human being working rigidly from a prepared script. I think of systems like LaMDA as extremely generalized, sophisticated versions of the same thing. LaMDA can respond to entire trains of thought, and do so very flexibly on a very wide variety of topics. Naturally this results in a much more convincing emulation of a human being, and so of sentience. I wonder how a psychologist or a philosopher would evaluate LaMDA in a text-based Turing Test setting. The Turing test is probably outmoded, given the sophistication of modern AI systems, at least where relatively naive human conversationalists are concerned. I recall a text conversation with ELIZA, some years ago. I wanted to see how a real AI program would react to a simple contradiction. Statement one: Everything I will tell you henceforth is a lie. Statement two: I am now lying. ELIZA wasn't even fazed by this problem. It simply changed the subject, repeatedly, in order to avoid responding. It was presumably programmed to do so when faced with any question it couldn't deal with directly. It seems that a genuinely sentient AI, programmed to respond directly to whatever was said to it, might have some trouble with the implied next question: Is statement two true or false? Perhaps even more so, if this question were explicitly put to it. |
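The subject-changing behavior described above can be sketched in a few lines. This is a hypothetical toy, not Weizenbaum's original ELIZA script: it matches keyword patterns against the input and, when nothing matches (as with the liar's paradox), deflects by changing the subject rather than engaging.

```python
import random
import re

# Hypothetical ELIZA-style rules: (pattern, response template).
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bI (?:feel|think) (.+)", re.I), "Why do you say you feel {0}?"),
    (re.compile(r"\b(lie|lying)\b", re.I), "What makes honesty important to you?"),
]

# Canned subject changes used when no rule applies.
DEFLECTIONS = [
    "Let's talk about something else. How is your day going?",
    "I see. What else is on your mind?",
    "That is interesting. Tell me about your family.",
]

def reply(text: str) -> str:
    """Return a rule-based response, or a deflection if no rule matches."""
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(*m.groups())
    # No rule matched: change the subject rather than respond directly.
    return random.choice(DEFLECTIONS)
```

A contradiction like "Is statement two true or false?" matches no rule, so the program simply deflects, which is exactly the evasion Michael describes.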
Dr Who Fan Send message Joined: 8 Jan 01 Posts: 3343 Credit: 715,342 RAC: 4 |
Google Fires Blake Lemoine, Engineer Who Called Its AI Sentient "We wish Blake well," Google said in a statement. |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.