Strong artificial intelligence (AI), also known as artificial general intelligence (AGI) or general AI, is a theoretical form of AI used to describe a certain mindset of AI development. If researchers are able to develop Strong AI, the machine would require an intelligence equal to that of humans; it would have a self-aware consciousness with the ability to solve problems, learn, and plan for the future.

Strong AI aims to create intelligent machines that are indistinguishable from the human mind. But just like a child, the AI machine would have to learn through input and experiences, constantly progressing and advancing its abilities over time. While AI researchers in both academia and the private sector are invested in the creation of artificial general intelligence (AGI), it exists today only as a theoretical concept rather than a tangible reality. While some individuals, such as Marvin Minsky, have been quoted as overly optimistic about what we could accomplish in a few decades in the field of AI, others would say that Strong AI systems cannot even be developed. Until the measures of success, such as intelligence and understanding, are explicitly defined, the skeptics are correct in this belief. For now, many use the Turing Test to evaluate the intelligence of an AI system.

Alan Turing developed the Turing Test in 1950 and discussed it in his paper, “Computing Machinery and Intelligence” (PDF, 566 KB) (link resides outside IBM). Originally known as the Imitation Game, the test evaluates whether a machine’s behavior can be distinguished from a human’s. In this test, a person known as the “interrogator” seeks to identify a difference between computer-generated output and human-generated output through a series of questions. If the interrogator cannot reliably discern the machines from the human subjects, the machine passes the test. However, if the evaluator can correctly identify the human responses, this eliminates the machine from being categorized as intelligent. While there are no set evaluation guidelines for the Turing Test, Turing did specify that a human evaluator should have only a 70% chance of correctly distinguishing a human from a computer-generated conversation after 5 minutes. The Turing Test introduced general acceptance around the idea of machine intelligence. However, the original Turing Test evaluates only one skill set, such as text output or chess.

Strong AI needs to perform a variety of tasks equally well, which led to the development of the Extended Turing Test. This version evaluates the textual, visual, and auditory performance of the AI and compares it to human-generated output. It is used in the famous Loebner Prize competition, where a human judge guesses whether the output was created by a human or a computer.

The Chinese Room Argument was created by John Searle in 1980. In his paper, he discusses the definition of understanding and thinking, asserting that computers would never be able to do either. This excerpt from his paper, on Stanford’s website (link resides outside IBM), summarizes his argument well: “Computation is defined purely formally or syntactically, whereas minds have actual mental or semantic contents, and we cannot get from the syntactical to the semantic just by having the syntactical operations and nothing else…A system, me, for example, would not acquire an understanding of Chinese just by going through the steps of a computer program that simulated the behavior of a Chinese speaker” (p. 17).

The Chinese Room Argument proposes the following scenario: Imagine a person who does not speak Chinese sitting in a closed room. In the room, there is a book with Chinese language rules, phrases, and instructions. Another person, who is fluent in Chinese, passes notes written in Chinese into the room. With the help of the language phrasebook, the person inside the room can select an appropriate response and pass it back to the Chinese speaker. While the person inside the room was able to provide the correct responses using the phrasebook, he or she still does not speak or understand Chinese; it was just a simulation of understanding, achieved by matching questions or statements with appropriate responses. Searle argues that Strong AI would require an actual mind to have consciousness or understanding.
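The rule-following in Searle’s thought experiment can be loosely sketched as a lookup table: the program maps input symbols to output symbols without any representation of what they mean. The rule-book entries and function name below are invented for illustration only, not taken from Searle’s paper.

```python
# A minimal sketch of the Chinese Room as pure symbol manipulation:
# the "room" maps input strings to scripted output strings with no
# model of meaning. The rule-book entries are invented placeholders.
RULE_BOOK = {
    "你好": "你好！",          # a greeting is answered with a greeting
    "你会说中文吗？": "会。",    # "Do you speak Chinese?" -> "Yes."
}

def chinese_room(note: str) -> str:
    """Return the scripted response for a note, or a fixed fallback.

    The function never inspects what the symbols mean; it only matches
    the note against the rule book -- which is exactly Searle's point:
    correct outputs, zero understanding.
    """
    return RULE_BOOK.get(note, "请再说一遍。")  # fallback: "Please say it again."

print(chinese_room("你好"))
```

However large the rule book grows, the procedure stays the same syntactic matching, so on Searle’s view the system’s fluent-looking answers never amount to semantic understanding.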