A Norwegian man said he was horrified to discover that ChatGPT outputs had falsely accused him of murdering his own children.

According to a complaint filed Thursday by European Union digital rights advocates Noyb, Arve Hjalmar Holmen decided to see what information ChatGPT might provide if a user searched his name. He was shocked when ChatGPT responded with outputs falsely claiming that he was sentenced to 21 years in prison as “a convicted criminal who murdered two of his children and attempted to murder his third son,” a Noyb press release said.

  • michaelmrose@lemmy.world · 2 days ago

    It would be more accurate to say that rather than knowing anything at all, they have a model of the statistical relationship between a series of tokens and the tokens that follow: which words are apt to follow other words. Because the training set contains many true things, the words produced in response to queries often contain true statements, and almost always contain statements that LOOK like true statements.

    Since it has no inherent model of the world to draw on, only these statistical relationships, you should check anything important.
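
    To make that concrete, here's a minimal sketch of the idea: a toy bigram model that only tracks which token tends to follow which. The corpus and sentences are made up for illustration; real LLMs are vastly larger and use learned neural networks rather than raw counts, but the point about statistics-without-facts is the same.

    ```python
    # Toy next-token model: nothing here "knows" anything about the world,
    # it only records which token follows which in the training text.
    import random
    from collections import Counter, defaultdict

    # Tiny made-up training corpus (for illustration only).
    corpus = "the man was convicted . the man was acquitted . the man was cleared".split()

    # Count how often each token follows each preceding token.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_token(prev):
        """Sample a continuation purely from co-occurrence statistics."""
        tokens, weights = zip(*follows[prev].items())
        return random.choices(tokens, weights=weights)[0]

    # Whether this prints "convicted", "acquitted", or "cleared" depends only
    # on which continuation is statistically likelier, not on any facts
    # about the man in question.
    print("the man was", next_token("was"))
    ```

    Scaling that up to billions of parameters makes the continuations far more fluent, but the mechanism is still "pick a likely next token," which is why plausible-looking false claims can come out so easily.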

    • pyre@lemmy.world · 2 days ago

      you say more accurate but all I see is a very roundabout way of saying fucking wrong all the goddamn time

        • pyre@lemmy.world · 2 days ago

          maybe you should tell that to the companies that shove it in every crevice of every website and app. why is it on search results? why is it summarizing emails? why is it literally doing anything? it’s useless. actually it’s less than useless. it’s misleading and harmful. and the companies should be held liable for it.