• KeenFlame
    5 months ago

    I don’t understand why anyone writing, reading, or commenting on this thinks a bookshelf would not change the outcome. What do you people think these ML models are, human brains? Have we really not gotten past even the first layer of understanding?

    • SkyNTP@lemmy.ml
      5 months ago

      The problem is the hysteria behind it, which leads people to confuse good-sounding information with good information. At least when people produce information, they generally make an effort to get it right. Machine learning is just an uncaring bullshitting machine that is rewarded for its ability to fool people (turns out the Turing test was a crappy benchmark for practice-ready AI, beyond writing poems), and VC money hasn’t reached the “find out” phase of that looming lesson, when we all collectively get exhausted by how underwhelming the AI fad is.

      • KeenFlame
        5 months ago

        Yeah, the hysteria is definitely the problem. Can’t say I agree that the technology is underwhelming, though. It can generate practically anything, quickly and with guidance, and it’s just interesting that nobody really understands how. It’s a paradigm shift for creative work; producing music or art will keep changing a lot because of this. But using the technology to analyse personalities during job interviews is fundamentally idiotic, because a generative system is a brainstorming tool, neither analytical nor accurate. It’s so wrong that it feels like the work of someone malicious.