A Norwegian man said he was horrified to discover that ChatGPT outputs had falsely accused him of murdering his own children.

According to a complaint filed Thursday by European Union digital rights advocates Noyb, Arve Hjalmar Holmen decided to see what information ChatGPT might provide if a user searched his name. He was shocked when ChatGPT responded with outputs falsely claiming that he was sentenced to 21 years in prison as “a convicted criminal who murdered two of his children and attempted to murder his third son,” a Noyb press release said.

  • FiskFisk33@startrek.website · 2 days ago

    The fact that you chose to make your data storage unreadable doesn’t relieve you of the responsibilities inherent in storing that data.

    Throwing away my car key won’t protect me from paying the parking tickets I accrue while being physically unable to move my car.

    • DoPeopleLookHere@sh.itjust.works · 2 days ago

      It’s not unreadable; it doesn’t exist.

      The responses are just statistically whatever sounds vaguely like what you want to hear.

      They can erase the chat responses, but that won’t stop the model from generating them again.

      Generative AI doesn’t start with facts and work from there. It just statistically produces what you want to hear.

      • FiskFisk33@startrek.website · 2 days ago

        It’s not unreadable; it doesn’t exist.

        Then what do you think trained AI models are?

        The AI model is trained on data and encodes unknown parts of that data in its weights.

        This is data storage. Unmanageable, almost unknowable data storage, but still data storage.

        If it didn’t store data, it couldn’t learn from its training.

        • DoPeopleLookHere@sh.itjust.works · 2 days ago

          You’re still placing more intent and facts into those processes than actually exist.

          You can’t even get it to count how many letter p’s are in the word apple. At least, not the last time I tried.

          That storage you’re talking about isn’t facts. It’s how sentences are structured and what they “mean”.

          As for the output’s “meaning”, it’s still just guessing at what you want to hear. No facts involved.

          • FiskFisk33@startrek.website · 2 days ago

            You’re still placing more intent and facts into those processes than actually exist.

            No? When they train AIs on data, they lose control of that data. If the data is sensitive, they aren’t being responsible.

            GPT models are, as you say, dumb statistical models; I agree. But encoded in their weights are ghost images of their training data. The model being dumb is not, in my opinion, sufficient to make the data storage itself defensible.

            • DoPeopleLookHere@sh.itjust.works · 2 days ago

              Sure, but are you suggesting the weights somehow encoded, falsely, that he was a murderer?

              Because that’s very unlikely.

              It fabricated this from nowhere, so there’s nothing to delete. It’s just a response to a prompt.

              • FiskFisk33@startrek.website · 2 days ago

                No, I’m not; that part is absolutely hallucinated. Where the problem comes in is that it then output correct personal information about him and his children. That is, to me, a clear violation of the GDPR.

                But it also mixed “clearly identifiable personal data”—such as the actual number and gender of Holmen’s children and the name of his hometown—with the “fake information.”
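
Taking the thread as a whole, the disagreement (does a model “store” personal data, or only statistics that can both regurgitate and fabricate?) can be illustrated with a deliberately tiny sketch. The names and sentences below are invented for illustration, and a bigram counter is nothing like GPT’s actual architecture, but it exhibits both behaviors being argued about: statistics held in the parameters can reproduce training text even though no literal copy is kept anywhere, and sampling can splice real fragments into fluent statements that never appeared in the training data at all.

```python
import random
from collections import defaultdict

# Invented toy corpus: one sentence of "personal data" about a
# fictional person, one unrelated sentence sharing the word "was".
TRAINING = [
    "Alice Example was born in Springfield",
    "the killer was convicted of murder",
]

# "Training": count word-to-next-word transitions. These counts play
# the role of the model's weights; no sentence is stored verbatim.
counts = defaultdict(lambda: defaultdict(int))
for sentence in TRAINING:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

def generate(start: str, max_len: int, rng: random.Random) -> str:
    """Sample a continuation word by word from the learned statistics.

    No fact lookup happens anywhere in this loop; each word is drawn
    purely from what tended to follow the previous word in training.
    """
    out = [start]
    while len(out) < max_len:
        followers = counts.get(out[-1])
        if not followers:
            break
        words = list(followers)
        weights = [followers[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

rng = random.Random(0)
for _ in range(5):
    # Depending on the draw at "was", this yields either the memorized
    # training sentence or the fabricated splice
    # "Alice Example was convicted of murder".
    print(generate("Alice", 8, rng))
```

Both the memorized sentence and the fabricated splice come out of the very same parameters, which is roughly why “the training data is encoded in the weights” and “the false statement was never stored anywhere” can both be argued in good faith.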