Twitter Now Asks Your Race When It Dislikes Posts

Alex Hall | August 24, 2020
Twitter users have come forward claiming that the platform now asks them what race they are when it thinks they have made offensive statements.

Twitter essentially has an artificial intelligence scan your tweet for offensive ideas, asking if you’re reeeeally sure you wanna post some wrongthink that includes “harmful” language.

Since then, users have reported receiving a notification that not only asks them to rethink their wrongthink, but also asks what race they are. The screenshots show follow-up questions about the context of the “harmful” words, such as whether the user was criticizing hate speech or was “using this type of language to defend myself or others.”

One Twitter user came forward with screenshots in which the platform explains: “To keep Twitter safe and open, we’re asking people if they want to revise replies that were detected as potentially harmful or offensive.”

Perhaps most humorously, when Twitter asks users why they believe they received this message to revise their tweets, one of the answers you can click simply says “Twitter targets people like me,” which is a little on the nose, isn’t it?

The moment a Big Tech company asks your race when you say a word or phrase it considers “harmful,” we’re clearly in dangerous territory. Candace Owens, for example, was temporarily suspended for parodying then-recent New York Times hire Sarah Jeong’s anti-white rant by copying Jeong’s tweets but swapping out the words “white” and “men” for “black,” “Jewish,” and “women.”

Twitter appeared not to like that one bit, and suspended Owens for 12 hours. After public outcry over the double standard, Owens’ account was reinstated. Twitter staff, according to a tweet from Owens, claimed that they had “made an error.”

Ah yes, those “errors” which always seem to target conservatives…