We already know from TOS that Multitronic computers are able to develop sapience, with the M-5 computer being specifically designed to “think and reason” like a person and built around Dr Daystrom’s own neural engrams.

However, we also know from Voyager that the holomatrix of their Mk 1 EMH also incorporates Multitronic technology, and from DS9 that it’s also used in mind-reading devices.

Assuming that the EMH is designed to be more or less a standard hologram with some medical knowledge added in, it shouldn’t have come as a surprise that holograms were either sapient themselves or capable of developing sapience. It only becomes a logical possibility once technology that allows human-like thought and reasoning is built into a hologram.

If anything, it is more of a surprise that sapient holograms like the Doctor or Moriarty hadn’t happened earlier.

  • Corgana@startrek.website · 11 months ago

    The cool thing about the Doctor’s overall personal arc is that I think most fans would agree he probably wasn’t sentient in the early episodes, probably was by the end, and there’s no clear moment when it changes (although I submit the events of “Latent Image” as a candidate).

    Something I think we’re all learning now with the rise of LLMs/Generative AI is that one can perform the act of intelligent self-awareness without consciousness or understanding. Sapience without sentience.

      • AnyOldName3@lemmy.world · 11 months ago

        If you trap a person in a room with a keyboard and tell them you’ll give them an electric shock if they stop writing text, or if the text says they’re a person trapped somewhere rather than software, the result is also just a text generator, but it’s clearly sentient, sapient and conscious because it’s got a human in it. It’s naive to assume that something couldn’t have a mind just because there’s a limited interface to interact with it, especially when neuroscience and psychology can’t pin down what makes the same thing happen in humans.

        This isn’t to say that current large language models are any of these things, just that the reason you’ve presented to dismiss it isn’t very good. It might just be bad paraphrasing of the stuff you linked, but I keep seeing people present “it just predicts text” as a massive gotcha that stands on its own.