A Norwegian man said he was horrified to discover that ChatGPT outputs had falsely accused him of murdering his own children.

According to a complaint filed Thursday by European Union digital rights advocates Noyb, Arve Hjalmar Holmen decided to see what information ChatGPT might provide if a user searched his name. He was shocked when ChatGPT responded with outputs falsely claiming that he was sentenced to 21 years in prison as “a convicted criminal who murdered two of his children and attempted to murder his third son,” a Noyb press release said.

  • zipzoopaboop@lemmynsfw.com · 2 days ago

    It doesn’t matter how it works. Is the output acceptable?

    Sounds like no, and it’s the company’s problem to fix it

    • thatsnothowyoudoit@lemmy.ca · edited · 14 hours ago

      OK, hear me out: the output is all made up. In that context everything is acceptable, since it’s just a reflection of the whole of the inputs.

      Again, I think this stems from a misunderstanding of these systems. They’re not like a search engine (though, again, the companies would like you to believe that).

      We can find the output offensive, off-putting, gross, etc., but there is no real right and wrong with LLMs as they exist now. There is only the statistical probability that a) we’ll understand the output and b) it approximates some currently held truth.

      Put another way: LLMs convincingly imitate language, and therefore also convincingly imitate facts. But it’s all facsimile.