…or at least the kind of model that isn’t fueled by burning a small forest for every query? I want to play an old video game, but I’m still only just learning the language and could use an aid. I really, really want to avoid any of this incredibly wasteful AI stuff.

  • Awoo [she/her]@hexbear.net
    8 hours ago

…where you just run the game using the tool and it replaces all the text. There are similar things for emulators.

    These are typically locally running LLMs that do the translating as you go along and then cache the results on the device. That’s why there’s a delay before the text is replaced; the delay is shorter on higher-end machines that process it faster.
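    The translate-then-cache behavior described above can be sketched roughly like this; `translate` here is a hypothetical stand-in for whatever local model or dictionary a given tool actually uses:

    ```python
    # Minimal sketch of the translate-and-cache pattern, assuming some
    # local translation backend. Nothing here is a specific tool's API.
    cache: dict[str, str] = {}

    def translate(text: str) -> str:
        # Hypothetical placeholder: a real tool would run a local
        # translation model or lookup table here (the slow part).
        return text.upper()

    def get_translation(text: str) -> str:
        # First lookup pays the model cost (the visible delay);
        # repeated strings come back instantly from the cache.
        if text not in cache:
            cache[text] = translate(text)
        return cache[text]
    ```

    The cache is why the delay only happens the first time a given line of dialogue appears.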

    • lime!
      8 hours ago

      well, not typically llms. these tools have been around longer than the term has.

      besides, i don’t see why it matters? energy-wise, the problem isn’t the tech, it’s the immense scale it’s deployed on in order to be instantly available to millions of people. running a translator locally is unlikely to show up on your electric bill if you play any games on your computer.