A Norwegian man said he was horrified to discover that ChatGPT outputs had falsely accused him of murdering his own children.
According to a complaint filed Thursday by European Union digital rights advocates Noyb, Arve Hjalmar Holmen decided to see what information ChatGPT might provide if a user searched his name. He was shocked when ChatGPT responded with outputs falsely claiming that he was sentenced to 21 years in prison as “a convicted criminal who murdered two of his children and attempted to murder his third son,” a Noyb press release said.
No? When they train AIs on data, they lose control of that data. If the data is sensitive, they aren't being responsible.
GPT models are, as you say, dumb statistical models; I agree. But encoded in their weights are ghost images of their training data. The model being dumb is not sufficient to make storing that data defensible, in my opinion.
Sure, but are you suggesting they somehow encoded, falsely, that he was a murderer?
Because it’s very unlikely.
It fabricated this from nowhere. So there's nothing to delete, because it's just a response to a prompt.
No, I'm not; that part is absolutely hallucinated. Where the problem comes in is that it then output correct personal information about him and his children. To me, that is a clear violation of GDPR.
That's not what they're asking for. They're asking for it to not generate that sentence again.