• 0 Posts
  • 1 Comment
Joined 1 year ago
Cake day: June 10th, 2023

  • @marmo7ade

    There are at least two far more likely causes for this than politics: source bias and PR considerations.

    Getting better and more accurate responses about Europe or other English-speaking countries when asking in English should be expected. When training any LLM that's supposed to work with English, you train it on English sources, and English sources contain far more material about European countries than about African countries. Since there are more sources talking about Europe, the model generates better responses to prompts involving Europe (see the toy sketch below).
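
    A toy illustration of that skew (purely hypothetical corpus and counts, nothing to do with any real training set): a sampler that generates in proportion to what it has seen will naturally say more, and more reliably, about whatever dominates its sources.

    ```python
    from collections import Counter
    import random

    # Hypothetical toy "training corpus": English-language text mentions
    # European topics far more often than African ones (made-up counts).
    corpus = (
        ["paris", "berlin", "rome", "london", "madrid"] * 20
        + ["lagos", "nairobi", "accra"] * 2
    )

    counts = Counter(corpus)
    topics, weights = zip(*counts.items())

    # A frequency-weighted sampler stands in for generation: the skew in
    # the sources shows up directly in what gets produced.
    sampled = Counter(random.choices(topics, weights=weights, k=1000))
    print(sampled.most_common())
    ```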

    The most likely explanation, though, rather than politics, is that companies want to make money. If ChatGPT or any other AI says a bunch of racist stuff, it creates PR problems, and PR problems can cause investors to bail. Since LLMs don't really understand what they're saying, the developers can't take a very nuanced approach to moderation, so we're left with blunt bans (sketched below). If people hadn't tried so hard to get it to say outrageous things, the restrictions would likely be less stringent.
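
    For a sense of what "blunt" looks like, here is a hypothetical blocklist-style check (not any vendor's actual moderation code): it refuses anything containing a flagged term, with no way to tell a bad-faith prompt from a legitimate question about the same topic.

    ```python
    # Hypothetical blunt filter: a plain blocklist with no notion of intent.
    BLOCKED_TERMS = {"offensive_term_a", "offensive_term_b"}  # placeholder terms

    def should_refuse(prompt: str) -> bool:
        # Refuse if any blocked term appears at all, regardless of context.
        words = set(prompt.lower().split())
        return bool(words & BLOCKED_TERMS)

    # A historical or academic question gets refused just like a bad-faith one.
    print(should_refuse("what is the history of offensive_term_a"))  # True
    ```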

    @Razgriz @breadsmasher