Any developer, marketing team, CEO, or scientist can claim they’ve created a machine that thinks and feels. There’s just one thing stopping them: the truth. And that barrier’s only as strong as the consequences for breaking it.

Currently, the companies dabbling at the edge of artificial general intelligence (AGI) have wisely stayed on the border of “it’s just a machine” without crossing into the land of “it can think.” They use terms such as “human-level” and “strong AI” to indicate they’re working toward something that imitates human intelligence, but they usually stop short of claiming these systems are capable of experiencing thoughts and feelings.

Ilya Sutskever, the chief scientist at OpenAI, seems to think AI is already sentient: “it may be that today’s large neural networks are slightly conscious.”

But Yann LeCun, Facebook/Meta’s AI guru, believes the opposite: “Not even true for small values of ‘slightly conscious’ and large values of ‘large neural nets’. I think you would need a particular kind of macro-architecture that none of the current networks possess.”

And Judea Pearl, a Turing Award-winning computer scientist, thinks even fake sentience should be considered consciousness, since, as he puts it, “faking it is having it”: “As far as I know we do not have an agreed Turing test for consciousness, except, of course, systems that act and communicate as though they have consciousness. Here, my faithful guideline is: faking it is having it, because it is practically impossible to fake w/o having.”

Here we have three of the world’s most famous computer scientists, each a progenitor of modern artificial intelligence in their own right, debating consciousness on Twitter with the temerity and gravitas of a Star Wars versus Star Trek argument. And this is not an isolated incident by any means; we’ve written about Twitter beefs and wacky arguments between AI experts for years.

It would appear that computer scientists are no more qualified to opine on machine sentience than philosophers are. If we can’t rely on OpenAI’s chief scientist to determine whether, for example, GPT-3 can think, then we’ll have to shift perspectives.

Perhaps a machine is only sentient if it can meet a simple set of rational qualifications for sentience, in which case we’d need to turn to the legal system to codify and verify any potential incidents of machine consciousness.

The problem is that there’s only one country with an existing legal framework under which the rights of a sentient machine can even be discussed, and that’s Saudi Arabia. A robot called Sophia, made by Hong Kong company Hanson Robotics, was granted citizenship there during an investment event where plans to build a supercity full of robotic technology were unveiled to a crowd of wealthy attendees.

Let’s be perfectly clear here: if Sophia the Robot is sentient, so are Amazon’s Alexa, Teddy Ruxpin, and The Rockafire Explosion. From an engineering point of view, the machine is quite impressive. It’s an animatronic puppet that uses natural language processing AI to generate phrases. But the AI powering it is no more sophisticated than the machine learning algorithms Netflix uses to figure out what TV show you’ll want to watch next.

In the US, the legal system consistently demonstrates an absolute failure to grasp even the most basic concepts related to artificial intelligence.