• Uriel-238@lemmy.fmhy.ml
    1 year ago

    The trolley problem and the prisoner’s dilemma are both thought experiments that predate any discussion of AI programming. Yes, some people who are unfamiliar with AI try to contemplate how these experiments might inform it: German officials have suggested that vehicle AI should be regulated to regard all lives as equal when choosing which to hit, but such code would apply to so few situations that it’s a waste of time and effort to focus on it.

    As for the prisoner’s dilemma, the question is not which choice is right or how to beat it. Consider the US investigation into Trump’s retention of national security material, the investigation into the January 6th raid on the US Capitol, and Trump’s related efforts to retain power despite losing the election: we’re watching plenty of incidents in which people choose to betray their fellow conspirators for personal leniency.

    But what is curious, and the reason the Prisoner’s Dilemma is regarded as a paradox, is that most humans will not betray their fellow heister, even when it benefits them more to do so than not. The going theory as to why involves an evolved sense of loyalty to one’s tribal mates. It served us when we were small predators fighting off larger ones: when a buddy was cornered, rather than flee to save our own skin, we have an instinct to harass the attacker and save our buddy, even at the risk of our own lives.
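    The payoff structure behind that "it benefits them more to betray" claim can be sketched in a few lines. This is a minimal illustration using textbook-style sentence lengths (the specific numbers are my assumption, not anything from the comment):

```python
# One-shot prisoner's dilemma, payoffs as years in prison (lower is better).
# The specific sentence lengths are assumed, textbook-style values.
# sentences[(my_move, their_move)] = (my_years, their_years)
sentences = {
    ("stay silent", "stay silent"): (1, 1),   # both get a light sentence
    ("stay silent", "betray"):      (10, 0),  # I'm betrayed: I serve 10, they walk
    ("betray", "stay silent"):      (0, 10),  # I betray: I walk, they serve 10
    ("betray", "betray"):           (5, 5),   # mutual betrayal: both serve 5
}

def my_years(my_move, their_move):
    """Years I serve, given my move and the other prisoner's move."""
    return sentences[(my_move, their_move)][0]

# Whatever the other prisoner does, betraying gets me a shorter sentence
# (betrayal is the dominant strategy)...
for their_move in ("stay silent", "betray"):
    assert my_years("betray", their_move) < my_years("stay silent", their_move)

# ...and yet mutual silence beats mutual betrayal for both of us.
assert my_years("stay silent", "stay silent") < my_years("betray", "betray")
```

    The "paradox" is that pure self-interest steers each player toward betrayal, yet the instinctive loyalty described above lands both players on the outcome that is better for each of them.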

    Infamously, Frank Gusenberg, contract killer and victim of the St. Valentine’s Day Massacre, had been shot fourteen times in the incident, and yet when police asked him who did it, he replied, “Nobody shot me.” He died of his wounds shortly thereafter. It’s a common problem when investigating inter-gang crimes that witnesses won’t even betray members of rival gangs, let alone members of their own crew.