A Norwegian man said he was horrified to discover that ChatGPT outputs had falsely accused him of murdering his own children.
According to a complaint filed Thursday by European Union digital rights advocates Noyb, Arve Hjalmar Holmen decided to see what information ChatGPT might provide if a user searched his name. He was shocked when ChatGPT responded with outputs falsely claiming that he was sentenced to 21 years in prison as “a convicted criminal who murdered two of his children and attempted to murder his third son,” a Noyb press release said.
Well, here we are. We skipped past using this tech for mere search automation and leapfrogged straight to making shit up (once again).
To me it’s clear that these tools are primarily useful as bullshit generators, and I expect them to hallucinate and be inaccurate. But the companies trying to capitalize on the “AI” bubble are saying that these tools can be useful and accurate. I imagine OpenAI is going to have to invoke the Fox News defense in this case, and claim that “no reasonable person would take this seriously”.
Don’t use “hallucinate” to describe what it is doing; that humanizes it and makes the tech seem more advanced than it is. It is randomly mashing words together without understanding the meaning of any of them.
It’s a technical term of art that’s used by experts in the field.
The technical term was created to promote the misunderstanding that LLMs “think”. The “experts” want people to think LLMs are far more advanced than they actually are. You can add as many tokens to your context as you want - every model is still, fundamentally, a text generator. Humanizing it more than that is naive or deceptive, depending on how much money you have riding on the bubble.
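To put “text generator” in concrete terms, here’s a minimal sketch of the loop every LLM runs, using Hugging Face transformers with gpt2 as a small stand-in (the model choice and prompt are just for illustration):

```python
# Minimal sketch: an autoregressive LLM is a next-token sampler, nothing more.
# Assumes `pip install transformers torch`; gpt2 is a small illustrative model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The man was sentenced to"
for _ in range(20):
    ids = tokenizer(text, return_tensors="pt").input_ids
    logits = model(ids).logits[0, -1]      # a score for every token in the vocabulary
    probs = torch.softmax(logits, dim=-1)  # scores -> probability distribution
    next_id = torch.multinomial(probs, 1)  # sample one token; no fact-checking anywhere
    text += tokenizer.decode(next_id)

print(text)  # fluent-looking continuation with no notion of true or false
```

That sampling step is the whole mechanism; there is no point in the loop where truth enters the picture.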
You didn’t read the article I linked. The term came into use before LLMs were a thing, it was originally used in relation to image processing.
Thank you!
Leapfrogged? It never left. LLMs were made to make shit up.