There are lots of articles about bad use cases of ChatGPT, even though Google has already enabled those same use cases for decades.

Want to get bad medical advice for the weird pain in your belly? Google can tell you it’s cancer, no problem.

Do you want to know how to make drugs without a lab? Google even gives you links to stores where you can buy the materials for it.

Want some racism/misogyny/other evil content? Google is your ever-helpful friend and garbage dump.

What’s the difference apart from ChatGPT’s inability to link to existing sources?

Edit: Just to clear things up: this post is specifically not about the new use cases that come from AI. Sure, Google cannot automatically generate semi-non-functional mini programs, and it will not write a whole fake paper for me. I am specifically talking about the “This will change the world” articles that cover things Google can do exactly as well as ChatGPT can.

  • fulano@lemmy.eco.br · 1 year ago

    There’s something that worries me about GPT-like technologies, and I see very few people talking about it: GPT-based social media bots.

    It gives people and groups the means to create much more advanced mass-manipulation strategies. Imagine a lot of GPT accounts on all sites posting comments advocating for or against something, every time it’s mentioned, in very natural language that can fool most people.

    It worries me a lot, and I’m sure it will be done at some point. If recent elections around the world were a mess due to social media manipulation and fake news campaigns, now imagine that powered by GPT.

    • itsgallus@beehaw.org · 1 year ago

      I was gonna reply to this in the style of ChatGPT, but I somehow feel like that’d be the same as joking about having a bomb at airport security. But yeah, this is my main concern as well. Not only social media, but even blogs and reputable-looking websites which can act as “sources”. And what about Wikipedia bots?

      I’m not worried about the loss of jobs or the sentience of computers, but rather about our inability to discern what’s real and what’s not. Could online human certificates be a thing? Multi-factor authentication (that is somehow still anonymous)?

      • that_one_guy@beehaw.org · 1 year ago

        I have a hard time imagining a system that can simultaneously identify someone as uniquely human while still maintaining anonymity. Any given website or person online might not know your name, but you would have to have some sort of public key that would identify you. That key would be a fingerprint that could tie all your online activity together for anyone interested.
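
        Here’s a toy sketch of that linkability problem (hypothetical, in Python; assumes the `cryptography` package): if every post has to carry a proof from the same per-person key, the key’s fingerprint becomes a cross-site tracking ID.

        ```python
        # Toy illustration: one "proof of personhood" key links all activity.
        # Hypothetical sketch; assumes the `cryptography` package is installed.
        import hashlib
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
        from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

        key = Ed25519PrivateKey.generate()  # issued once per verified human
        pub = key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
        fingerprint = hashlib.sha256(pub).hexdigest()[:16]

        for post in ["comment on site A", "comment on site B"]:
            sig = key.sign(post.encode())  # proves "a verified human wrote this"
            # ...but every post now carries the same key fingerprint, so anyone
            # can join this person's activity together across sites:
            print(fingerprint, post, sig.hex()[:16])
        ```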

    • Square Singer@feddit.de (OP) · 1 year ago

      I don’t know. Social media bots have been doing exactly that quite well for a long time. Turns out, you don’t actually have to write a comment; you just need to find another one that mentions the same keywords and copy it in.

      You still get great natural language (since it is natural language) and it fools most people as well.

      Political talking points aren’t that varied. There are a handful of different takes on each topic and people repeat them already, so just copying them doesn’t make much of a difference.
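
      As a toy sketch of what I mean (all names and data hypothetical): harvest comments, then repost whichever one shares the most keywords with the target post.

      ```python
      # Toy keyword-matching copy bot, as described above (all names hypothetical).
      def pick_comment(target_text, harvested_comments):
          """Return the harvested comment sharing the most keywords with the target."""
          target_words = {w.lower() for w in target_text.split() if len(w) > 4}
          best, best_overlap = None, 0
          for comment in harvested_comments:
              overlap = len(target_words & {w.lower() for w in comment.split()})
              if overlap > best_overlap:
                  best, best_overlap = comment, overlap
          return best

      harvested = [
          "This policy will destroy small businesses, mark my words.",
          "Great goal in yesterday's match!",
      ]
      print(pick_comment("New policy proposal for small businesses announced", harvested))
      ```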

      • fulano@lemmy.eco.br · 1 year ago

        It’s not the same. GPT-based bots add much more to the situation.

        Current bots are easily identifiable and can simply be banned when spotted, but GPT bots can interact in ways that make them much more difficult to spot. They can be programmed to present different personalities and tastes, comment in several places, and even chit-chat here and there. Then they deliver their propaganda in context, arguing and replying to counterarguments.

        It’s a much more complex structure, and much harder to identify. Today, GPT produces text that follows certain patterns, but that’s something that can be improved.
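
        For a sense of how little code that takes, here’s a minimal persona-bot sketch (hypothetical persona and model choice; assumes the OpenAI Python client):

        ```python
        # Minimal persona-bot sketch (hypothetical persona; assumes the openai package).
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        PERSONA = ("You are 'Dana', a casual commenter who loves gardening and, "
                   "whenever topic X comes up, argues against it in a friendly tone.")

        def reply_to(comment):
            response = client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[
                    {"role": "system", "content": PERSONA},
                    {"role": "user", "content": comment},
                ],
            )
            return response.choices[0].message.content

        print(reply_to("Topic X would really help our town, right?"))
        ```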

        • that_one_guy@beehaw.org · 1 year ago

          All we can really hope for is effective AI-driven detection of AI-generated content. Here’s hoping that AIs are good at spotting one another.

          • blindsight@beehaw.org · 1 year ago

            That’s not a workable solution. Since Meta’s LLaMA model weights leaked, there has been such rapid advancement on the open-source side of LLMs that the tech has diverged too far to ever be reliably detectable.

            You can now fine-tune a custom, targeted LLM in a few hours on low-power consumer hardware, and it beats the massive incumbents within the narrower scope of its training.

            Think: a Facebook comment bot, targeted specifically to sound like pro-[VIEW] comments, complete with typos and Internet slang. Or a high school essay bot, trained exclusively on 5-paragraph essays.
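
            Roughly, the recipe is just a LoRA fine-tune of a small open base model. Everything below is a hypothetical sketch (the data file, base model, and hyperparameters are made up; assumes the Hugging Face transformers, peft, and datasets packages):

            ```python
            # LoRA fine-tuning sketch (hypothetical data/model; assumes transformers + peft + datasets).
            from datasets import load_dataset
            from peft import LoraConfig, get_peft_model
            from transformers import (AutoModelForCausalLM, AutoTokenizer,
                                      DataCollatorForLanguageModeling, Trainer,
                                      TrainingArguments)

            base = "openlm-research/open_llama_3b"  # any small open base model
            tokenizer = AutoTokenizer.from_pretrained(base)
            tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
            model = AutoModelForCausalLM.from_pretrained(base)

            # Train only a tiny low-rank adapter on top of the frozen base model;
            # this is what makes it feasible on consumer hardware.
            model = get_peft_model(model, LoraConfig(
                r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                task_type="CAUSAL_LM",
            ))

            data = load_dataset("text", data_files="scraped_comments.txt")["train"]
            data = data.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                                 max_length=256))

            Trainer(
                model=model,
                args=TrainingArguments(output_dir="comment-bot", num_train_epochs=1,
                                       per_device_train_batch_size=4),
                train_dataset=data,
                data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
            ).train()
            ```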

            The detection tech right now has a pretty high false-positive rate, and there are also AI tools that rewrite text to avoid the existing detection tools.

    • Panteleimon@beehaw.org · 1 year ago

      This exactly. Only I am quite certain it’s already being used this way, on a much wider scale than we have any way to measure.