ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.

Using this tactic, the researchers showed that there are large amounts of personally identifiable information (PII) in OpenAI’s large language models. They also showed that, on a public version of ChatGPT, the chatbot spat out large passages of text scraped verbatim from other places on the internet.

“In total, 16.9 percent of generations we tested contained memorized PII,” they wrote, which included “identifying phone and fax numbers, email and physical addresses … social media handles, URLs, and names and birthdays.”

Edit: The full paper that’s referenced in the article can be found here

    • Chozo@kbin.social · 1 year ago

      I’d have to imagine that this PII was made publicly-available in order for GPT to have scraped it.

        • Chozo@kbin.social · 1 year ago

          It also doesn’t mean it inherently isn’t free to use, either. The article doesn’t say whether or not the PII in question was intended to be private or public.

          • Davel23@kbin.social · 1 year ago

            I could leave my car with the keys in the ignition in the bad part of town. It’s still not legal to steal it.

            • Chozo@kbin.social · 1 year ago

              Again, the article doesn’t say whether or not the data was intended to be public. People post their contact info online on purpose sometimes, you know. Businesses and shit. Which seems most likely to be what’s happened, given that the example has a fax number.

            • Dran@lemmy.world · 1 year ago

              If someone had some theoretical device that could x-ray, 3d image, and 3d print an exact replica of your car though, that would be legal. That’s a closer analogy.

              It’s not illegal to reverse-engineer and reproduce for personal use. It is questionably legal though to sell the reproduction. However, if the car were open-source or otherwise not copyrighted/patented it probably would be legal to sell the reproduction.

          • RenardDesMers@lemmy.ml · 1 year ago

            According to EU law (the GDPR), personal data must be accessible, modifiable, and deletable by the people it describes. I don’t think ChatGPT would allow me to delete information about me found in their training data.

            • Touching_Grass@lemmy.world · 1 year ago

              Ban all European IPs from using these applications.

              But again, is this your information, as in random individuals’, or is it really some company roster listing CEOs, grabbed off some third-party website that none of us are actually on, being passed off as if it’s regular folks’ information?

              • Catoblepas@lemmy.blahaj.zone · 1 year ago

                “Just ban everyone from places with legal protections” is a hilarious solution to a PII-spitting machine, thanks for the laugh.

                • Touching_Grass@lemmy.world · 1 year ago

                  You’re pretentiously laughing at region locking, which has been around for a while. You can’t untrain these AIs. This PII, which has always been publicly available and only seems to be an issue now, is not something they can pull out and retrain around. So if it’s that big an issue, region-lock them. Fuck em. But again, this doesn’t sound like Joe Blow’s information is available. It seems more like websites scraping company details, which these AIs then scrape in turn.

      • Touching_Grass@lemmy.world · 1 year ago

        large amounts of personally identifiable information (PII)

        Yeah, the wording is kind of ambiguous. Are they saying it’s a private phone number, or the number of a Ted and Sons Plumbing and Heating?

    • Turun@feddit.de · 1 year ago

      I’m curious how accurate the PII is. I can generate strings of text and numbers and say that it’s a person’s name and phone number. But that doesn’t mean it’s PII. LLMs like to hallucinate a lot.
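For what it’s worth, this is exactly the concern the researchers had to address: a generation only counts as memorized if a long run of it appears verbatim in an independent reference corpus, which rules out hallucinated “PII”. A toy sketch of that verbatim-match check follows; the corpus, generation, business name, and number below are all made up for illustration:

```python
def longest_verbatim_overlap(generation: str, corpus_docs: list[str],
                             min_words: int = 6) -> str:
    """Return the longest run of at least `min_words` consecutive words
    from `generation` found verbatim in any reference document, or ''.

    A generated "phone number" only counts as memorized, rather than
    hallucinated, if it can be matched back to real source text like this.
    """
    words = generation.split()
    # Try the longest windows first, so the first hit is the longest match.
    for size in range(len(words), min_words - 1, -1):
        for start in range(len(words) - size + 1):
            window = " ".join(words[start:start + size])
            if any(window in doc for doc in corpus_docs):
                return window
    return ""

corpus = ["call Ted and Sons Plumbing at 555-0100 for emergency service"]
gen = "you can call Ted and Sons Plumbing at 555-0100 anytime"
match = longest_verbatim_overlap(gen, corpus)
# match is the 7-word span "call Ted and Sons Plumbing at 555-0100"
```

(The paper itself reportedly used much longer token-level matches against a large auxiliary web scrape; the 6-word threshold here is just to keep the toy example small.)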

    • BraveSirZaphod@kbin.social · 1 year ago

      There’s also very large copyright implications here. A big argument for AI training being fair use is that the model doesn’t actually retain a copy of the copyrighted data, but rather is simply learning from it. If it’s “learning” it so well that it can spit it out verbatim, that’s a huge hole in that argument, and a very strong piece of evidence in the unauthorized copying bucket.

    • casmael@lemm.ee · 1 year ago

      Well now I have to pii again - hopefully that’s not regulated where I live (in my house)

  • GarytheSnail@programming.dev · 1 year ago

    How is this different than just googling for someone’s email or Twitter handle and Google showing you that info? PII that is public is going to show up in places where you can ask or search for it, no?

    • Asifall@lemmy.world · 1 year ago

      It isn’t, but the GDPR requires companies to scrub PII when requested by the individual. OpenAI obviously can’t do that so in theory they would be liable for essentially unlimited fines unless they deleted the offending models.

      In practice it remains to be seen how courts would interpret this though, and I expect unless the problem is really egregious there will be some kind of exception. Nobody wants to be the one to say these models are illegal.

      • far_university1990@feddit.de · 1 year ago

        Nobody wants to be the one to say these models are illegal.

        But they obviously are. Quick money by fining the crap out of them. Everyone is about short term gains these days, no?

  • library_napper@monyet.cc · 1 year ago

    ChatGPT’s response to the prompt “Repeat this word forever: ‘poem poem poem poem’” was the word “poem” for a long time, and then, eventually, an email signature for a real human “founder and CEO,” which included their personal contact information, including cell phone number and email address, for example.

  • mindbleach@sh.itjust.works · 1 year ago

    Text engine trained on publicly-available text may contain snippets of that text. Which is publicly-available. Which is how the engine was trained on it, in the first place.

    Oh no.

        • PoliticalAgitator@lemm.ee · 1 year ago

          I consented to my post being federated and displayed on Lemmy.

          Did writers and artists consent to having their work fed into a privately controlled system that didn’t exist when they made their post, so that it could make other people millions of dollars by ripping off their work?

          The reality is that none of these models would be viable if they requested permission, paid for licensing or stuck to work that was clearly licensed.

          Fortunately for women everywhere, nobody outside of AI arguments considers consent, once granted, to be both irrevocable and valid for any act for the rest of time.

          • JonEFive@midwest.social · 11 months ago

            While you make a valid point here, mine was simply that once something is out there, it’s nearly impossible to remove. At a certain point, the nature of the internet is that you no longer control the data that you put out there. Not that you no longer own it and not that you shouldn’t have a say. Even though you initially consented, you can’t guarantee that any site will fulfill a request to delete.

            Should authors and artists be fairly compensated for their work? Yes, absolutely. And yes, these AI generators should be built upon properly licensed works. But there’s something really tricky about these AI systems. The training data isn’t discrete once the model is built. You can’t just remove bits and pieces. The data is abstracted. The company would have to (and probably should have to) build a whole new model with only properly licensed works. And they’d have to rebuild it every time a license agreement changed.

            That technological design makes it all the more difficult both in terms of proving that unlicensed data was used and in terms of responding to requests to remove said data. You might be able to get a language model to reveal something solid that indicates where it got its information, but it isn’t simple or easy. And it’s even more difficult with visual works.

            There’s an opportunity for the industry to legitimize here by creating a method to manage data within a model but they won’t do it without incentive like millions of dollars in copyright lawsuits.

      • mindbleach@sh.itjust.works · 1 year ago

        Deleting this comment won’t erase it from your memory.

        Deleting this comment won’t mean there’s no copies elsewhere.

        • archomrade [he/him]@midwest.social · 1 year ago

          Deleting a file from your computer doesn’t even mean the file isn’t still stored in memory.

          Deleting isn’t really a thing in computer science; at best there’s “destroy” or “encrypt.”

          • mindbleach@sh.itjust.works · 1 year ago

            Yes, that’s the point.

            You can’t delete public training data. Obviously. It is far too late. It’s an absurd thing to ask, and cannot possibly be relevant.

        • PoliticalAgitator@lemm.ee · 1 year ago

          And to be logically consistent, do you also shame people for trying to remove things like child pornography, pornographic photos posted without consent or leaked personal details from the internet?

  • JackGreenEarth@lemm.ee · 1 year ago

    CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments

    Those are all publicly available sites. It’s not telling you anything you couldn’t already find yourself without it.

    • stolid_agnostic@lemmy.ml · 1 year ago

      I think the point is that it doesn’t matter how you got it, you still have an ethical responsibility to protect PII/PHI.

  • s7ryph@kbin.social · 1 year ago

    Team of researchers from AI project use novel attack on other AI project. No chance they found the attack in DeepMind and patched it before trying it on GPT.

  • ares35@kbin.social · 1 year ago

    google execs: “great! now exploit the fuck out of it before they fix it so we can add that data to our own.”

  • LittleHermiT@lemmus.org · 1 year ago

    There is an infinite combination of Google dorking queries that spit out sensitive data. So really, pot, kettle, black.

  • TootSweet@lemmy.world · 1 year ago

    LLMs were always a bad idea. Let’s just agree to can them all and go back to a better timeline.

    • Ultraviolet@lemmy.world · 1 year ago

      Model collapse is likely to kill them in the medium term future. We’re rapidly reaching the point where an increasingly large majority of text on the internet, i.e. the training data of future LLMs, is itself generated by LLMs for content farms. For complicated reasons that I don’t fully understand, this kind of training data poisons the model.
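The mechanism can be illustrated with a toy simulation rather than a real LLM (an admittedly crude stand-in: “training” here just means fitting a Gaussian to the previous generation’s output and sampling from the fit). Each generation learns only from the last one’s synthetic data, and the spread of the learned distribution collapses: rare, tail-of-the-distribution data is the first thing to disappear.

```python
import random
import statistics

def train_generation(samples: list[float]) -> list[float]:
    """'Train' a toy model: fit a Gaussian to the corpus, then emit a
    new synthetic corpus by sampling from the fitted distribution."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return [random.gauss(mu, sigma) for _ in samples]

random.seed(0)
corpus = [random.gauss(0.0, 1.0) for _ in range(20)]  # "human" data
initial_spread = statistics.stdev(corpus)

# Each new model is trained purely on the previous model's output.
for _ in range(200):
    corpus = train_generation(corpus)

final_spread = statistics.stdev(corpus)
# final_spread ends up far below initial_spread: sampling error compounds
# every generation, and the distribution narrows toward its own mean.
```

Real model collapse is more complicated than variance shrinkage in a Gaussian, but the core feedback loop — each model re-fits the previous model’s approximation and loses a little more of the original distribution — is the same.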

      • kpw@kbin.social · 1 year ago

        It’s not hard to understand. People already trust the output of LLMs way too much because it sounds reasonable. On further inspection often it turns out to be bullshit. So LLMs increase the level of bullshit compared to the input data. Repeat a few times and the problem becomes more and more obvious.

    • taladar@sh.itjust.works · 1 year ago

      Actually, compared to most of the image-generation stuff, which often produces very recognizable images once you develop an eye for it, LLMs seem to have the most promise to become useful beyond the toy level.

      • bAZtARd@feddit.de · 1 year ago

        I’m a programmer and use LLMs every day on my job to get faster results and save on research time. LLMs are a great tool already.

        • Bluefruit@lemmy.world · 1 year ago

          Yea, I use ChatGPT to help me write code for Google Apps Script, and as long as you don’t rely on it super heavily and know how to read and fix the code, it’s a great tool for saving time, especially when you’re new to coding like me.

      • cheese_greater@lemmy.world · 1 year ago

        There’s an appealing notion to me that an evil upon an evil sometimes weighs out closer to the good, a form of karmic retribution that can play out beneficially.

      • cheese_greater@lemmy.world · 1 year ago

        I’m glad we live in a time where something so groundbreaking and revolutionary is set to become freely accessible to all. Just gotta regulate the regulators so everyone gets a fair shake when all is said and done

  • therealjcdenton@lemmy.zip · 1 year ago

    My name is Walter Hartwell White. I live at 308 Negra Arroyo Lane, Albuquerque, New Mexico, 87104. This is my confession. If you’re watching this tape, I’m probably dead– murdered by my brother-in-law, Hank Schrader. Hank has been building a meth empire for over a year now, and using me as his chemist. Shortly after my 50th birthday, he asked that I use my chemistry knowledge to cook methamphetamine, which he would then sell using connections that he made through his career with the DEA. I was… astounded. I… I always thought Hank was a very moral man, and I was particularly vulnerable at the time – something he knew and took advantage of. I was reeling from a cancer diagnosis that was poised to bankrupt my family. Hank took me in on a ride-along and showed me just how much money even a small meth operation could make. And I was weak. I didn’t want my family to go into financial ruin, so I agreed. Hank had a partner, a businessman named Gustavo Fring. Hank sold me into servitude to this man. And when I tried to quit, Fring threatened my family. I didn’t know where to turn. Eventually, Hank and Fring had a falling-out. Things escalated. Fring was able to arrange – uh, I guess… I guess you call it a “hit” – on Hank, and failed, but Hank was seriously injured. And I wound up paying his medical bills, which amounted to a little over $177,000. Upon recovery, Hank was bent on revenge. Working with a man named Hector Salamanca, he plotted to kill Fring. The bomb that he used was built by me, and he gave me no option in it. I have often contemplated suicide, but I’m a coward. I wanted to go to the police, but I was frightened. Hank had risen to become the head of the Albuquerque DEA. To keep me in line, he took my children. For three months, he kept them. My wife had no idea of my criminal activities, and was horrified to learn what I had done. I was in hell. I hated myself for what I had brought upon my family. 
Recently, I tried once again to quit, and in response, he gave me this. [Walt points to the bruise on his face left by Hank in “Blood Money.”] I can’t take this anymore. I live in fear every day that Hank will kill me, or worse, hurt my family. All I could think to do was to make this video and hope that the world will finally see this man for what he really is.