• Khalic@kbin.social · 10 months ago

    I’m not an ML expert, but we’ve been using them for a while in neuroscience (I’m a software dev in bioinformatics). They’re impressive, but they have no semantics, no logic. It’s just a fancy mirror. That’s why, for example, World of Warcraft players were able to trick those bots into writing an article about a feature that doesn’t exist.

    Do you really want to waste your time reading a blob of data with no coherency?

    • whataboutshutup@discuss.online · 11 months ago

      Do you really want to waste your time reading a blob of data with no coherency?

      We are both on the internet, lol. And I mean it. LLMs are only slightly worse than the CEO-optimized, clickbaity word salad you get in most articles. Until you’ve figured out how/where to search for direct, correct answers, they’d be just the same, or maybe worse. ← I find this skill a bit fascinating: we learn to read patterns and red flags without even opening a page. I doubt it’s possible to build a reliable model with that bullshit detector.