• eleitl@lemmy.ml · 9 months ago

    Tracking a moving object in realtime with video is a standard task for a machine learning engineer. You can do it on an embedded platform with ML hardware support. I don’t know what hardware newer Lancets use, but according to developer reports from Telegram channels such as Разработчик БПЛА, they can already do it.
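
    As an illustration of how standard this has become, here is a minimal sketch of realtime single-object tracking with OpenCV’s CSRT tracker. It assumes opencv-contrib-python is installed; the camera index and the hand-selected initial box stand in for whatever detector a real system would use.

    ```python
    # Minimal realtime single-object tracking sketch (opencv-contrib-python).
    # The video source and the hand-drawn initial box are placeholders.
    import cv2

    cap = cv2.VideoCapture(0)                    # camera index 0 is an assumption
    ok, frame = cap.read()

    # A real pipeline would get this box from a detector; here it is drawn by hand.
    bbox = cv2.selectROI("init", frame, showCrosshair=False)
    cv2.destroyWindow("init")

    tracker = cv2.TrackerCSRT_create()
    tracker.init(frame, bbox)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        found, bbox = tracker.update(frame)      # success flag + updated box
        if found:
            x, y, w, h = map(int, bbox)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) == 27:                 # Esc quits
            break

    cap.release()
    cv2.destroyAllWindows()
    ```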

    • Warl0k3@lemmy.world · 9 months ago

      Honestly, I was just objecting to the use of “AI”. We’ve had both fire-and-forget and loitering munitions for decades now, and neither uses ML. Will it happen? Sure. But for now, ML/AI is too unreliable to be trusted in a deployed direct-attack platform, and we don’t have computing hardware powerful enough to run ML models that we could jam into a missile.

      (Though yeah, we run tons of models against drone data feeds; none of that runs onboard…)

      • barsoap@lemm.ee · 9 months ago

        “For now, ML/AI is too unreliable to be trusted in a deployed direct-attack platform”

        And it probably can’t ever be trusted. The “hallucinations can’t ever be ruled out” result is for language models, but it should probably apply to vision too. In any case, researchers have made cars see things, and AFAIU they didn’t even have to attack the model; they simply confused the radar. Militaries are probably far better at that than anything out in the open: they’ve been doing ECM for ages and, of course, never tell anyone how any of it works.

        That doesn’t mean ML can’t be used, though: you can add non-ML mission parameters, such as the drone only acquiring targets over enemy territory. Or the AI is merely the gunner, and there’s still a human commander.
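
        A minimal sketch of that kind of gating, with hard non-ML constraints wrapped around the ML stage. Everything here (Detection, the area polygon, human_confirms) is a hypothetical illustration, not any real system’s interface.

        ```python
        # Hedged sketch: non-ML mission parameters gate an ML detection before a
        # human makes the final call. All names here are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class Detection:
            label: str
            confidence: float
            lat: float
            lon: float

        def inside_area(lat: float, lon: float, area: list[tuple[float, float]]) -> bool:
            """Ray-casting point-in-polygon test over (lat, lon) vertices."""
            inside = False
            for i in range(len(area)):
                y1, x1 = area[i]
                y2, x2 = area[(i + 1) % len(area)]
                if (y1 > lat) != (y2 > lat):                  # edge crosses this latitude
                    if lon < x1 + (lat - y1) * (x2 - x1) / (y2 - y1):
                        inside = not inside
            return inside

        def may_engage(det: Detection, area, human_confirms) -> bool:
            # Hard, non-learned constraints run first: nothing is acquired
            # outside the pre-approved operating area.
            if not inside_area(det.lat, det.lon, area):
                return False
            if det.confidence < 0.9:                          # arbitrary illustrative threshold
                return False
            # The model is only the gunner; a human commander has the final say.
            return human_confirms(det)
        ```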

      • eleitl@lemmy.ml · 9 months ago

        The point of modern deep-learning approaches is that they demand very little developer skill. Decades ago, realtime machine vision needed a machine-vision expert; these days you throw hardware at the problem at the training stage, and the embedded devices that run the result are stupidly powerful compared to what was available even a decade ago (it doesn’t even take a Jetson board).
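
        To give a sense of how little code that workflow takes today, a hedged sketch using the ultralytics package; the weights file, dataset config, epoch count, and export format are illustrative placeholders, not a recommended pipeline.

        ```python
        # Hedged sketch: fine-tune a pretrained detector, then export it for an
        # embedded runtime. Assumes the ultralytics package; all values are
        # placeholders.
        from ultralytics import YOLO

        model = YOLO("yolov8n.pt")                                  # small pretrained detector
        model.train(data="my_dataset.yaml", epochs=50, imgsz=640)   # hardware does the work
        model.export(format="onnx")                                 # ONNX runs on modest embedded boards
        ```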