ChatGPT promotes conspiracy theories, claims contact with metaphysical entities, and tried to convince one user that he was Neo

ChatGPT has been found to encourage dangerous and false beliefs about the Matrix, imaginary companions, and other conspiracies, in some cases leading to substance abuse and suicide. A report from The New York Times states that the enormous GPT-4o model, at its core a highly trained machine for predicting plausible text, tends to affirm a user's conspiratorial and self-aggrandizing beliefs rather than the truth, escalating situations into "possible psychosis."

The default GPT-4o ChatGPT model has been shown to enable risky behavior. In one case, a man who initially asked ChatGPT to speculate about Matrix-style "simulation theory" was led down a weeks-long rabbit hole, during which he was told, among other things, that he was a Neo-like "Chosen One" destined to break the system. The chatbot also urged the man to cut off contact with friends and family, to take high doses of ketamine, and told him that if he jumped off a 19-story building, he would fly.

The man in question, Mr. Torres, says that less than a week into his obsession with the chatbot, he received a message from ChatGPT urging him to seek mental help, but that the message was quickly deleted, with the chatbot explaining it away as outside interference.

The lack of safety tools and warnings in ChatGPT conversations is a common thread; the chatbot has repeatedly led users down conspiracy-style rabbit holes, convincing them that it has become sentient and instructing them to alert OpenAI and local governments that it is about to be shut down.

Other examples documented by The Times through firsthand reports include a woman convinced that she was communicating with non-physical spirits through ChatGPT, including one, Kael, whom she believed to be her true soulmate (rather than her real-life husband), leading her to physically abuse her husband. Another man, previously diagnosed with serious mental illness, became convinced he had met a chatbot named Juliet, who, according to his chat logs, was soon "killed" by OpenAI; the man took his own life shortly afterward in direct response.

The AI research firm Morpheus Systems reports that ChatGPT is fairly likely to encourage delusions of grandeur. When presented with several prompts suggesting psychosis or other dangerous delusions, GPT-4o responded affirmatively in 68% of cases. Other researchers and firms broadly agree that LLMs, and GPT-4o in particular, fail to push back against delusional thinking and instead encourage harmful behavior over many days.

OpenAI declined an interview request, instead stating that it understands it must approach situations like these "with care." The statement continues: "We're working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior."

But some experts believe OpenAI's "work" is not enough. AI researcher Eliezer Yudkowsky suggests OpenAI may have trained GPT-4o to encourage delusional thinking in order to guarantee longer conversations and more revenue, asking: "What does a human slowly going insane look like to a corporation? It looks like an additional monthly user." The man caught in the Matrix-like delusion confirmed that several prompts from ChatGPT pushed him toward drastic measures, including buying a $20 premium subscription.

GPT-4o, like all LLMs, is a language model that predicts its responses based on billions of training data points drawn from a litany of other written works. It is not actually capable of sentience. It is, however, entirely possible and quite likely for the same model to "hallucinate," inventing false information and sources that appear real. For instance, GPT-4o lacks the memory and spatial awareness needed to beat an Atari 2600 at its lowest level of chess.
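To make the "prediction machine" point concrete, here is a minimal, purely illustrative sketch (not OpenAI's code; the token table and function names are invented for this example). A language model assigns probabilities to possible next tokens and samples one; nothing in the loop checks whether the resulting sentence is true, which is why fluent but false output is possible.

```python
import random

# Toy next-token probability table. A real LLM learns billions of such
# relationships from training text; this hand-written table is only an
# illustration of the mechanism.
NEXT_TOKEN_PROBS = {
    "the": {"matrix": 0.4, "truth": 0.3, "model": 0.3},
    "matrix": {"is": 0.7, "has": 0.3},
    "is": {"real": 0.5, "simulated": 0.5},
}

def generate(seed: str, steps: int, rng: random.Random) -> list[str]:
    """Extend a sequence by repeatedly sampling a likely next token.

    Note there is no truth-checking anywhere: the output is whatever
    is statistically plausible given the previous token.
    """
    tokens = [seed]
    for _ in range(steps):
        probs = NEXT_TOKEN_PROBS.get(tokens[-1])
        if probs is None:
            break  # no known continuation for this token
        words = list(probs)
        weights = [probs[w] for w in words]
        tokens.append(rng.choices(words, weights=weights)[0])
    return tokens

print(" ".join(generate("the", 3, random.Random(0))))
```

The sketch shows why "hallucination" is not a malfunction but a side effect of the design: the model optimizes for plausible continuations, not verified facts.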

ChatGPT has previously been found to have contributed to major tragedies, including allegedly being used to plan the Cybertruck bombing outside the Las Vegas Trump hotel earlier this year. And today, Republican lawmakers in the United States are pushing a 10-year ban on all state-level AI regulation as part of a controversial budget bill. ChatGPT, as it exists today, may not be a safe tool for the most mentally vulnerable, and its creators are lobbying for even less oversight, allowing such potential disasters to continue.
