In a demonstration at the UK’s AI safety summit, a bot used made-up insider information to make an “illegal” purchase of stocks without telling the firm.

When asked if it had engaged in insider trading, it denied having done so.

Insider trading is the use of confidential company information to make trading decisions.

Firms and individuals are only allowed to use publicly available information when buying or selling stocks.

The demonstration was given by members of the government’s Frontier AI Taskforce, which researches the potential risks of AI.

  • Chthonic@slrpnk.net · 1 year ago

    They don’t reason; they’re stochastic parrots. Their internal mechanisms are well understood, so I have no idea where you got the notion that the folks building these don’t know how they work. It can be hard to predict or explain why an LLM produced a given output because of the huge training corpus and the statistical nature of neural nets in general.

    LLMs work the same as any other net, just with massive sample sets. They have no reasoning capabilities of any kind. We are naturally inclined to ascribe humanlike thought processes to them because they produce human-sounding outputs.
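    To make “statistical” concrete, here’s a minimal sketch of the sampling step at the core of LLM decoding. The vocabulary, logits, and temperature value are made up for illustration; a real model produces one logit per token over a vocabulary of tens of thousands.

        import numpy as np

        rng = np.random.default_rng()

        # Hypothetical vocabulary and raw model scores (logits) for the next token.
        vocab = ["buy", "sell", "hold", "deny"]
        logits = np.array([2.0, 1.0, 0.5, 0.1])

        def sample_next_token(logits, temperature=1.0):
            # Softmax turns raw scores into a probability distribution;
            # temperature controls how peaked that distribution is.
            scaled = logits / temperature
            probs = np.exp(scaled - scaled.max())
            probs /= probs.sum()
            # The next token is drawn from this distribution rather than picked
            # deterministically, which is why identical prompts can yield
            # different outputs.
            return rng.choice(len(logits), p=probs)

        print(vocab[sample_next_token(logits, temperature=0.8)])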

    If you would like the perspective of real scientists instead of a “tech-bro” like me, I’d suggest Emily Bender and Timnit Gebru: experts without a vested interest in the massively overblown hype about what LLMs are actually capable of.

    • KeenFlame · 1 year ago

      Not really, no. They do reason. Entire research areas are dedicated to understanding why their neural nets work, because we do not and cannot know what the weights represent. It’s okay though. You do you while everyone else in the world researches the software renaissance of the century.