Musk posted last night that the platform’s algorithm will soon “promote more informational/entertaining content” in order to “maximize unregretted user-seconds.” In response to the announced change to the X algorithm, people asked Grok what counts as “negative” and, as reported by user Leah McElrath, were told it includes:
• Criticism of the government
• Commentary about misinformation
• Suggestion the public is being manipulated
• Attacks against powerful people or institutions
Am I missing something? Why is whatever Grok says considered relevant? Is it aware of the details of the policy changes at X?
Not as far as I know, and I’m not even sure the person who posted that was serious about it being the response Grok provided.
Elon is deranged enough as it is without us having to make stuff up; let’s stay on track.
It’s possible that the Grok model was trained or fine-tuned somehow to help with moderation. In that case, things like these bullet points could be somewhere up its context chain, or in its training data in a way that it can recall relatively accurately.
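Just to illustrate what I mean by “up its context chain,” here’s a rough sketch, assuming a purely hypothetical chat setup. The function name and the moderation prompt below are made up for illustration; nothing here reflects anything X has confirmed about how Grok is wired up.

# Purely hypothetical sketch: moderation guidance injected as a system
# message ahead of the user's question. Not X's actual setup.
HYPOTHETICAL_MODERATION_PROMPT = (
    "Downrank content that is negative, e.g. criticism of the government, "
    "commentary about misinformation, suggestions the public is being "
    "manipulated, or attacks on powerful people or institutions."
)

def build_context(user_question: str) -> list[dict]:
    """Assemble the message list a chat model would see.

    If guidance like the above sits in the system message, the model can
    "recall" it when a user asks what counts as negative, even if nothing
    like it was ever in the model's training data.
    """
    return [
        {"role": "system", "content": HYPOTHETICAL_MODERATION_PROMPT},
        {"role": "user", "content": user_question},
    ]

if __name__ == "__main__":
    for msg in build_context("What does the algorithm treat as negative?"):
        print(msg["role"], ":", msg["content"])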
Clearly they can give the model instructions to hold specific opinions on certain topics, so it makes sense, to me at least, to ask what it has to say if you want to investigate.