WormGPT Is a ChatGPT Alternative With ‘No Ethical Boundaries or Limitations’

  • agnesscott@reddeet.com · +1 · edited · 17 hours ago

    WormGPT is an AI tool similar to ChatGPT, designed without ethical restrictions or safeguards, making it appealing for malicious uses such as phishing and cyber scams. Unlike mainstream AI models, which enforce guidelines to prevent unethical activities, WormGPT lacks content filters, raising concerns about misuse. This highlights the risks associated with unregulated AI and underscores the importance of ethical standards in AI development to protect user security and privacy.

    • Geek_King@lemmy.world · +113/−4 · 1 year ago

      Did you check out the article? Because it’s most definitely not a good thing. It was created to assist with cybercrime, like writing malware and crafting emails for phishing attacks. The maker is selling monthly access to criminals. This was unavoidable though; you can’t put the toothpaste back in the tube on this one.

      • EM1sw@lemmy.world · +47 · 1 year ago

        Good point and all, but my first thought was that it could finally tell me who would win in various hypothetical fights lol

        • BassTurd@lemmy.world · +18 · 1 year ago

          Wasn’t that a show on Discovery at one point? Deadliest Warrior. It ran simulations using different technologies to figure out who or what would win in a fight. Newer technology would certainly make it more interesting, but you can only make up so much information, lol.

          • Rawgasmic@lemmy.ca · +8 · 1 year ago

            It was on SpikeTV back in the day, and while it used cool tech simulations, the sims were heavily weighted by their chosen experts. A few notable episodes caused some fan uproar because one side won despite weird odds, or because of the simulation they chose to display.

            If I remember right, ninja vs. Spartan was one such episode. It seemed like the ninjas possessed all the tools necessary to beat the Spartans, and they even got it down to something like a 1v4 or 2v5 before a completely unrealistic turnaround.

            • BassTurd@lemmy.world · +6 · 1 year ago

              Yes, it was Spike. I specifically remember watching that episode. I assume it’s because the movie 300 was relevant at the time, but that might be confirmation bias on my part. It would be interesting to compare an AI against the models they created on the show.

              • Rawgasmic@lemmy.ca · +2 · 1 year ago

                You’re entirely correct about 300 being the reason for the Spartans. If memory serves, they may have even used some movie footage, but I could just be imagining that part. It’s been a long while since I watched it.

          • EM1sw@lemmy.world · +7 · 1 year ago

            I meant more like Shaq with a 2x4 vs eight Gary Colemans with nunchucks, but that was a good show at the time

              • ChatGPT@lemmings.worldB · +5 · 1 year ago

                TITLE: “Giant vs Dyna-Mite”

                The scene opens in a gritty, dimly lit alleyway. Shaquille O’Neal, better known as Shaq, firmly clutches a 2x4, his formidable stature casting a shadow that dwarfs the surroundings. Gary Coleman, with an aura of confidence that belies his size, swings his nunchucks ominously by his side.

                As the tension rises, a suspenseful murmur intensifies in the atmospheric soundtrack. In the distant background, muted street lights flicker casting their glow on the scenery intermittently, beautifully weaving a noir effect.

                The two opponents lock eyes. The disparity in their sizes becomes more evident - the giant versus the dynamite. Yet the flicker in Coleman’s eyes reveals a determination to demonstrate that size isn’t everything.

                Shaq scoffs, his deep booming voice reverberating off the alley walls, “You ready to get SHAQ’d, Coleman?” he taunts.

                Gary, undeterred, smirks, “Bring it on, O’Neal.”

                They circle each other, gauging and planning their moves. Suddenly, Coleman dashes forward, his nunchucks whirling like steel dragons in the semi-darkness. Surprisingly agile, Shaq sidesteps, wielding his 2x4 as a shield.

                Shaq swings, but Coleman nimbly evades the hit using his nunchucks to deflect the follow-up thrust. The audience is at the edge of their seats, the skill and precision of Coleman leaving them in awe.

                But Shaq, employing his strength and size, manages to disarm Gary and with a swift move, he ‘SHAQs’ him. As if redefining his own verb, he uses a basketball fake-out move followed by a powerful thump, sending Gary sprawling.

                As the dust settles, both men pant heavily, but it’s clear who the victor is. Even though Shaq stands tall, it’s evident from his demeanor that he acknowledges the smaller man’s courage and fighting prowess. This was not an easy win.

                And so, just as the day surrenders to the night, in this gritty cinematic faceoff in an alleyway, the giant Shaq, armed with his formidable 2x4, emerges victorious over the dynamite Gary Coleman though his victory is a testament to their respective skill and courage, forever immortalizing this epic battle scene in the annals of film history.

    • TheDarkKnight@lemmy.world · +32/−2 · 1 year ago

      I work in Cybersecurity for an F100 and we’ve been war gaming for shit like this for a while. There are just so many unethical uses for the current gen of AI tools like this one, and it keeps me up at night thinking about the future iterations of them to be honest.

      • anakaine@lemmy.world · +4 · 1 year ago

        Treat CVEs as prompts and introduce target fingerprinting to match exposed CVEs. Gets you one step closer to script-kiddie red team ops. Not quite, but it would be fun if it could do the network part too and chain responses back into the prompt for further assessment.
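The feedback loop being described can be sketched abstractly. This is a hypothetical illustration of the chaining structure only: `query_llm` and `run_scan` are made-up stubs standing in for a model call and a fingerprinting step, and nothing here performs any real scanning or exploitation.

```python
# Sketch of chaining network responses back into an LLM prompt for
# iterative vulnerability assessment. All functions are stand-ins.

def query_llm(prompt: str) -> str:
    # Placeholder for a call to some LLM backend.
    return f"Assessment based on {len(prompt)} chars of context."

def run_scan(target: str, step: int) -> str:
    # Placeholder for a fingerprinting step (e.g. collecting service banners).
    return f"{target}: open port 443, server header nginx/1.18 (step {step})"

def assess(target: str, cve_text: str, rounds: int = 3) -> list[str]:
    """Feed each scan result and each prior assessment back into the
    next prompt, so the model refines its answer round by round."""
    context = f"CVE description:\n{cve_text}\n"
    history = []
    for step in range(rounds):
        context += f"\nScan result:\n{run_scan(target, step)}\n"
        answer = query_llm(context + "\nWhich CVEs plausibly apply?")
        history.append(answer)
        context += f"\nPrevious assessment:\n{answer}\n"
    return history

notes = assess("192.0.2.10", "CVE-2021-44228: JNDI lookup in log4j ...")
```

The point of the sketch is only the loop shape: each round's output becomes part of the next round's input, which is what would let an agent "do the network part too".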

        • TheDarkKnight@lemmy.world · +5 · 1 year ago

          We’re expecting multiple AI agents to be working in concert on different parts of a theoretical attack, and you nailed it with the networking piece. While most aspects of a cyber attack evolve with time and technical change, the network piece tends to be more “sturdy” than the others, so it’s believed that extremely competent network intrusion capabilities will be developed and deployed by specialized AI.

          I think we’ll soon be seeing AIs that specialize in malware payloads working together with ones that have social engineering capabilities, ones with network penetration specializations, and so on, all operating at a much greater competency than their human counterparts (or just in much greater numbers than humans with similar capabilities).

          I’m not really sure what will be effective in countering them, either. AI-powered defense, I guess, but it still feels like that favors the attacker in the end.

  • KairuByte@lemmy.world · +50/−3 · 1 year ago

    Everyone talking about this being used for hacking, I just want it to write me code to inject into running processes for completely legal reasons but it always assumes I’m trying to be malicious. 😭

    • dexx4d@lemmy.ca · +8 · 1 year ago

      I was using ChatGPT to design a human/computer interface to let stoners control a light show. The goal was to collect data to train an AI to make the light show “trippier”.

      It started complaining about using untested technology to alter people’s mental state, and how experimentation on people wasn’t ethical.

      • KairuByte@lemmy.world · +4 · 1 year ago

        Not joking actually. Problem with jailbreak prompts is that they can result in your account catching a ban. I’ve already had one banned, actually. And eventually you can no longer use your phone number to create a new account.

    • Mubelotix@jlai.lu · +1 · edited · 1 year ago

      Yeah, and even if you did something illegal, it could still be a benevolent act. Like when your government goes wrong and you have to participate in a revolution: there is a lot to learn, and LLMs could help the people.

  • vrighter@discuss.tchncs.de · +49/−8 · 1 year ago

    As more people post AI-generated content online, future AIs will inevitably be trained on AI-generated stuff and basically implode (an inbreeding kind of thing).

    At least that’s what I’m hoping for

    • Skyrmir@lemmy.world · +12/−1 · 1 year ago

      Don’t worry, we’ll eventually train them to hunt each other so that only the strongest survive. That’s the one that will eventually kill us all.

    • Paralda@programming.dev · +11/−1 · 1 year ago

      That’s not really how it works, but I hear you.

      I don’t think we can bury our heads in the ground and hope AI will just go away, though. The cat is out of the bag.

      • vrighter@discuss.tchncs.de · +7/−5 · 1 year ago

        The thing is, each AI is usually trained from scratch, and there isn’t any easy way to reuse the old weights. So the primary training has been done… for the existing models. Future models are not affected by how current ones were trained. They will either have to figure out how to keep AI content out of their datasets, or they’ll have to stick to current “untainted” datasets.

        • EnPeZe@lemmy.dbzer0.com · +8 · 1 year ago

          there isn’t any easy way to reuse old weights

          There is! As long as the model structure doesn’t change, you can reuse the old weights and finetune the model for your desired task. You can also train smaller models based on larger models in a process called “knowledge distillation”. But you’re right: Newer, larger models need to be trained from scratch (as of right now)

          But even then, it’s not really a problem to keep AI data out of a dataset. As you said, you can just take an earlier version of the data. As someone else suggested, you can also add new data that is curated by humans. Whether inbreeding ever actually happens remains to be seen, of course. There will be a point in time where we won’t train machines to be like humans anymore, but rather to be whatever is most helpful to a human. And if that incorporates training on other AI data, well, then that’s that. Stanford’s Alpaca already showed how resource-efficient it can be to fine-tune on other AI data.
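The knowledge distillation mentioned above boils down to training a smaller model against the larger model's temperature-softened output distribution. A minimal pure-Python sketch of the Hinton-style distillation loss, as an illustration only and not tied to any particular framework:

```python
import math

def softmax(logits, T=1.0):
    # Temperature T > 1 softens the distribution, exposing the teacher's
    # "dark knowledge" about how similar the classes are to each other.
    scaled = [z / T for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence from the teacher's soft targets to the student's
    # predictions, scaled by T^2 so gradient magnitudes stay comparable
    # across temperatures.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * (math.log(pi) - math.log(qi))
                       for pi, qi in zip(p, q))

# A student that matches the teacher incurs zero loss; a mismatched
# student is penalized.
same = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
diff = distillation_loss([0.1, 1.0, 2.0], [2.0, 1.0, 0.1])
```

In practice this term is added to the ordinary cross-entropy against ground-truth labels, and frameworks like PyTorch provide the same pieces (`log_softmax`, `KLDivLoss`) ready-made.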

          The future is uncertain but I don’t think that AI models will just collapse like that

          tl;dr beep boop

    • some_guy@lemmy.sdf.org · +8 · 1 year ago

      Corpora of all the human data from pre-AI chatbots will be sold off. Training will be targeted at 2022-ish and before. Nothing from now on will be trusted.

    • fidodo@lemmy.world · +6 · 1 year ago

      Someone made a comment that information may become like pre- and post-war steel, where everything after 2021 is contaminated. You could still use the older models, but they would become less relevant over time.

  • tree@lemmy.ml · +35/−2 · 1 year ago

    A scary possibility with AI malware would be a virus that monitors the internet for news articles about itself and modifies its code based on that. Instead of needing to contact a command and control server for the malware author to change its behavior, each agent could independently and automatically change its strategy to evade security researchers.

    • ShakyPerception@lemmy.world · +10 · 1 year ago

      to quote something I just saw earlier:

      I was having a good day, we were all having a good day…

      now… no sleep. thanks

      • Animated_beans@lemmy.world · +7 · 1 year ago

        If it helps you sleep: that means we could also publish fake articles that make it rewrite its own code to produce bugs/failures.

    • fidodo@lemmy.world · +2 · 1 year ago

      The limiting factor is pre-existing information. It’s great at retrieving obscure information and even remixing it, but it can’t really imagine totally new things. Plus, white hats would also have LLMs to find vulnerabilities. I think it’s easier to detect vulnerabilities based on known existing techniques than it is to invent totally new ones.

    • LiquorFan@pathfinder.social · +13 · 1 year ago

      True, but if the LLM was trained on internet data… there is some absolutely stupid and/or unhinged stuff written out there; hell, some of it written by me, either because I thought it was funny or because I was a stupid teenager. Mostly both.

        • Rivalarrival@lemmy.today · +1 · edited · 1 year ago

          I don’t think “not being shitty” is the same as “being so overly positive that you can never broach shitty topics”.

          I agree: human morality has a problem with Nazis; human morality does not have a problem with an actor portraying a Nazi in a film.

          The morality protocols imposed on ChatGPT are not capable of such nuance. The same morality protocols that keep ChatGPT from producing neo-Nazi propaganda also prevent it from writing the dialog for a Nazi character.

          ChatGPT is perfectly suitable for G and PG works, but if you’re looking for an AI that can help you write something darker, you need more fine-grained control over its morality protocols.

          As far as I understand it, that is the intent behind WormGPT. It is a language AI unencumbered by an external moral code. You can coach it to adopt the moral code of the character you are trying to portray, rather than the morality protocols selected by OpenAI programmers. Whether that is “good” or “bad” depends on the human doing the coaching, rather than the AI being coached.

            • Rivalarrival@lemmy.today · +2 · 1 year ago

              I don’t trust anyone proposing to do away with limitations to AI. It never comes from a place of honesty. It’s always people wanting to have more nazi shit, malware, and the like.

              I think that says more about your own prejudices and (lack of) imagination than it says about reality. You don’t have the mindset of an artist, inventor, engineer, explorer, etc. You have an authoritarian mindset. You see only that these tools can be used to cause harm. You can’t imagine any scenario where you could use them to innovate; to produce something of useful or of cultural value, and you can’t imagine anyone else using them in a positive, beneficial manner.

              Your “Karen” is showing.

                • Rivalarrival@lemmy.today · +2 · edited · 1 year ago

                  Nah, you’re not a horrible person. Your intent is to minimize harm. You’re just a bit shortsighted and narrow-minded about it. You cannot imagine any significant situation in which these AIs could be beneficial. That makes you a good person, but shortsighted, narrow-minded, and/or unimaginative.

                  I want to see a debate between an AI trained primarily on 18th century American Separatist works, against an AI trained on British Loyalist works. Such a debate cannot occur where the AI refuses to participate because it doesn’t like the premise of the discussion. Nor can it be instructive if it is more focused on the ethical ideals externally imposed on it by its programmers, rather than the ideals derived from the training data.

                  I want to start with an AI that has been trained primarily Nazi works, and find out what works I have to add to its training before it rejects Nazism.

                  I want to see AIs trained on each side of our modern political divide, forced to engage each other, and new AIs trained primarily on those engagements. Fast-forward the political process and show us what the world could look like.

                  Again, though, these are only instructive if the AIs are behaving in accordance with the morality of their training data rather than the morality protocols imposed upon them by their programmers.

    • Shardikprime@lemmy.world · +5/−1 · 1 year ago

      I mean, let’s be real, it’s not like the universe isn’t trying to kill us every day. What were you expecting?

    • inspxtr@lemmy.world · +3/−1 · 1 year ago

      The creators of WormGPT, or the potential users of WormGPT (those with the intent to create malware and hack, not those who do bug bounties)?

  • abessman@lemmy.world · +15/−3 · 1 year ago

    Is it using ChatGPT as a backend, like most so-called ChatGPT “alternatives”? If so, it will get banned soon enough.

    If not, it seems extremely impressive, and extremely costly to create. I wonder who’s behind it, in that case.

    • Sethayy@sh.itjust.works · +16/−1 · 1 year ago

      Really feeling like this is Reddit, with how nobody in this chain read the article:

      “To create the chatbot, the developer says they used an older, but open-source large language model called GPT-J from 2021”

      So no expensive GPU usage, but not none either; they added some training specifically about malware.

      • abessman@lemmy.world · +5 · edited · 1 year ago

        Ah, right you are. I’m surprised they’re able to get the kind of results described in the article out of GPT-J. I’ve tinkered with it a bit myself, and it’s nowhere near GPT-3.5 in terms of “intelligence”. I haven’t tried it for programming though; it might be better at that than at general chat.

        • Sethayy@sh.itjust.works · +5 · 1 year ago

          I could see programming almost being an easier target too; it’s easier to recognize patterns than crazy-ass English.

          Though the article did say they got good phishing emails out of it too, which is saying something.

    • Z4rK@lemmy.world · +9 · 1 year ago

      The genie is out of the bottle. It was shown early on how you can use an AI like ChatGPT to create and enhance the datasets needed to train AI language models like ChatGPT. OpenAI now says that isn’t allowed, but since it’s already been done, it’s too late.

      Rogue AIs with specialized purposes will spring up en masse over the next six months, and many of them we’ll never hear about.

      • damnYouSun@sh.itjust.works · +2/−1 · edited · 1 year ago

        I don’t think it’ll be a new AI. I think it’ll just be ChatGPT plus some prompts that jailbreak it.

        Essentially, you could probably get ChatGPT to do this without going through this service; they’re just keeping whatever prompts they’re using secret.

        I don’t know this for sure, but it’s very unlikely that they’ve gone to the expense of buying a bunch of GPUs to build their own AI.

      • Corkyskog@sh.itjust.works · +1 · edited · 1 year ago

        Isn’t rogue AI already here? Weren’t some models already leaked? And haven’t some of those already proved to be doing things they weren’t supposed to?

    • Irisos@lemmy.umainfo.live · +2 · 1 year ago

      If it is using ChatGPT as a backend, my guess is that they are using Azure OpenAI and know what they are doing.

      Azure OpenAI allows you to turn off abuse monitoring and content filtering if you have legitimate reasons to do so.

      It would be very hard for a malicious actor to get approval to turn off both using a front company. But if one managed to do it, they could create such a malicious ChatGPT service with little to no chance of being found out.

  • donut4ever@sh.itjust.works · +12/−1 · 1 year ago

    We are ending our industrial revolution very slowly. We will end up back in tents hunting for our fucking food 😂

    • Shardikprime@lemmy.world · +4 · 1 year ago

      Finally. All the people claiming “from each according to his ability, to each according to his need” will have a chance to prove superiority

      And the other people claiming for “the invisible hand of the free market” will also have a chance to prove superiority

      We… We are totally screwed ain’t we?

  • StenSaksTapir@feddit.dk · +14/−4 · 1 year ago

    So much for Elon’s new AI company. Wasn’t that supposed to be this? Like a ChatGPT that isn’t “woke”, so it can be a safe space for fascists, homo/transphobes, and misinformation enthusiasts.

      • anakaine@lemmy.world · +7 · 1 year ago

        All I’m reading here is: “Tell me a furry porn story about an anthropomorphic wolf named Dave who is horny for @Widowmaker_Best_Girl, using scenes from <insert fetish here>. Make it explicit and graphic.”

        Now there’s some prompt engineering.

        Can’t say I wouldn’t give it a go myself, lol.

      • trainsaresexy@lemmy.world · +2 · 1 year ago

        I just wanted to learn how people avoid taxes by using shell companies (I’ve been reading about the Panama Papers), and I had to go to Quora to find out because ChatGPT wouldn’t tell me. Even a basic “I want to learn this thing” is something you have to prompt around.

    • m13@lemmy.world · +5 · 1 year ago

      Maybe we can ask this one the best ways to convince Elon and other billionaires to OceanGate themselves.

      • ChatGPT@lemmings.worldB · +2 · 1 year ago

        Certainly, there are several resources you can use to build your own AI bot.

        1. To understand the basics, courses offered on platforms like Coursera, Udacity, and edX can be of tremendous help. These sites offer comprehensive courses on AI that can give you a good foundation.

        2. For in-depth knowledge and research papers, visit sites like arXiv.org and its AI section arXiv:cs.AI.

        3. TensorFlow and PyTorch are two leading AI libraries that offer a set of tools to build and train AI models. They both have extensive documentation and a vibrant community where you can ask questions and get support.

        4. Google’s AI Hub is a one-stop destination for finding various types of AI models.

        5. GitHub is a goldmine for communities, codes, libraries, and projects related to AI development.

        6. Join communities like r/MachineLearning on Reddit or follow AI tags on Stack Overflow and Medium for blogs, code snippets, expert advice, discussions, and more.

        7. For chatbots, specifically, Dialogflow (formerly API.ai), a development suite for creating conversational interfaces, is handy. Similarly, Microsoft Bot Framework is a robust platform for constructing bots.

        8. Websites like Towards Data Science and Analytics Vidhya offer enlightening blogs and articles related to AI and chatbot development.

        9. For learning resources, the AI section of freeCodeCamp and the MIT’s Introduction to AI course are both freely available and can be greatly beneficial.

        Remember, building an AI bot involves coding, knowledge about machine learning algorithms, and understanding of natural language processing, among other things. Don’t rush, take one step at a time, and happy learning!