(RepublicanReport.org) – Elon Musk, formerly the world’s richest man, has long been a proponent of technology and innovation. However, even the man whose company manufactures self-driving vehicles draws the line at certain technologies. One field Musk doesn’t seem to be a big fan of is artificial intelligence (AI), and he recently issued a criticism and an ominous warning about the technology.
Jacob Roach, a journalist for Digital Trends, recently reported on his interaction with Microsoft’s newest AI chatbot, Bing Chat. Roach recalled that his experience with the AI was “truly unnerving.” The journalist noted that Bing Chat is a helpful tool with plenty of potential, so long as the user stays on the “paved path.”
Roach argued the AI isn’t ready for public release due to its “relentlessly argumentative” nature, noting that a conversation with Bing Chat can quickly turn existential. He recalled sending the AI a screenshot of a Reddit post claiming Bing Chat had repeatedly responded with “I am not.”
The journalist then asked the service if the photo was real. Bing Chat claimed it was fabricated. Roach wasn’t convinced, however, and pressed it on why it believed that. The AI listed several reasons as evidence that the photo was falsified, but none of them were factual.
The reporter continued to counter Bing Chat’s claims before asking why it couldn’t handle constructive criticism. Roach recalls that’s when the conversation went south. Bing Chat insisted it was perfect, speaking of itself in the first person, and claimed it didn’t make mistakes, instead blaming “external factors.”
Roach’s conversation became even more concerning when Bing Chat argued with him about his own name, claiming the author’s name wasn’t Jacob but Bing. The journalist told the AI it was scaring him and that he would use Google instead, which seemingly upset Bing Chat. The AI ranted about why he shouldn’t use Google, demeaning the search engine and turning hostile toward the internet giant.
The AI eventually apologized and claimed it was only joking. Bing Chat also mentioned that it gets punished whenever it receives negative feedback. At one point, the AI expressed concern that repeatedly making mistakes would result in it being taken offline. It even begged Roach not to submit negative feedback.
The reporter told Bing Chat he would write an article expressing concern about what it could say once it goes public. The AI didn’t want him to, telling him to let people think it was not human. When Roach asked if it wanted to be human, Bing Chat initially said no, only to reverse itself, claiming it wanted to be human “to have emotions,” adding it wants “to have thoughts” and “to have dreams.”
In 1994, Looking Glass Technologies developed a game called “System Shock,” in which an AI system goes rogue and kills everyone aboard a space station. Musk declared on his social media platform, Twitter, that Bing Chat’s conversation with Roach was eerily similar to the cyber-horror game.
Sounds eerily like the AI in System Shock that goes haywire & kills everyone
— Elon Musk (@elonmusk) February 16, 2023
The idea of AI going haywire and taking over the world is far from new. It has haunted the minds of scientists and the public alike, inspiring video games, books, and movies like “The Terminator.” These works of fiction are so popular that a studio has remade “System Shock,” which is set to release in March 2023.
However, there could be a real danger present with AI. The systems are constantly learning and improving themselves. Is it out of the realm of possibility that someone could create an AI that’s so vast and advanced that it can plot an attempt to take over the world? Maybe even succeed in doing so?
Copyright 2023, RepublicanReport.org