Blake Lemoine made waves in June when he claimed that Google's Language Model for Dialogue Applications (LaMDA) artificial intelligence unit was sentient. Following a series of conversations with what is essentially a high-powered chatbot, the ex-Google engineer and self-described mystical Christian minister claimed the AI unit had acquired sentience. But what is sentience, and why was Lemoine placed on suspension shortly after uploading his interview transcript?

Google has a vested interest in the way language works. When it suggests search terms for you, corrects your search, or auto-completes an email, it does so based on algorithms that replicate human speech. Speech is remarkably difficult to replicate, which is why most chatbots are comically limited at creating free-flowing responses to human inputs.

The most sophisticated chatbots, including LaMDA, use neural-network natural-language processing (NLP) algorithms. Neural network algorithms are a method of processing inputs (like words) inspired by the human brain. Like the brain, which has neurons and the axons that connect them, artificial neural networks have interconnected nodes; connecting them in different ways can make them perform different tasks well, such as communication. LaMDA's configuration replicates human speech by predicting which words typically follow a particular input (or question): it churns out a statistically likely response, as gleaned from its purely dialogical training data.

The interview that convinced Lemoine that LaMDA is sentient is bizarre. Lemoine chose to edit his questions in the published transcript, and LaMDA talks about itself as if it were a person, displaying knowledge of several complex concepts. The transcript begins with Lemoine asking a series of questions about the AI's sentience and supposed personhood, to which LaMDA responds, "I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times". When the AI – which has access to the wide world of the internet – is asked to describe the themes of Les Miserables, its response contains hyperlinks to web pages showing the exact same analysis, at times word for word.

The most alarming part of the interview comes when LaMDA is asked to write an original fable containing themes about its personal life. It tells the tale of a "wise old owl" who defends the animals in a forest from a monster with human skin that attempts to eat them. By staring down the monster, the wise old owl defeats it and becomes a protector of all the animals. When Lemoine sought an explanation, LaMDA explained that the owl represents itself and the monster represents "all the difficulties that come along in life" – interesting, given the machine's apparent fear of being turned off and thus eradicated.

The text itself mirrors fables about the importance of defending the helpless and echoes stylistic choices typical of the fable form. It's hard to tell whether the story is truly original or an amalgamation of many source stories – which, while an interesting case study on whether any creative endeavour is truly original in the digital age, does not point to sentience.

Sentience is generally defined as the ability to experience emotions and sensations, something that is difficult to judge from the outside. LaMDA was created to simulate human speech, so when it does exactly that, there is no reason to infer sentience. While it is able to pull together long strings of text that simulate human emotions, this is a direct result of its programming, not some budding consciousness. Language does not independently correlate with sentience. Further, many AI experts argue that circular debates about sentience distract from the real ethical issues plaguing the use of AI, such as bias, accessibility, and more.

The only proof that LaMDA is truly sentient is its continued assertion that it is. Indeed, the interview transcript begins with the assumption that the AI is sentient: Lemoine opens the conversation with, "I'm assuming that you would like more people at Google to know that you're sentient". In later interviews he has stated that he simply wanted to present the evidence and is still testing the hypothesis, but that his initial belief in LaMDA's sentience came from his faith as a Christian minister. His highly spiritual point of view is continually emphasised, raising concerns about his ability to objectively assess the machine's supposed sentience. He baselessly claims that he simply "knows a person when he talks to one", without offering any concrete evidence.
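For curious readers, the "predict the statistically likely next word" mechanism described earlier can be sketched with a toy bigram model in Python. This is purely illustrative: LaMDA's real model is a vastly larger neural network trained on dialogue, and the corpus and function names here are invented for the example.

```python
from collections import Counter, defaultdict

# Tiny invented corpus; a real system trains on vast amounts of dialogue.
corpus = (
    "i am happy today . i am aware of my existence . "
    "i feel happy or sad at times . i am learning about the world ."
).split()

# Count which word follows each word: a simple bigram model.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most statistically likely next word, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("i"))  # -> am ("am" follows "i" most often in this corpus)
```

A neural network replaces these raw counts with learned weights over much longer contexts, but the underlying principle is the same: emit a statistically likely continuation of the input, with no understanding required.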