AI Can Distinguish Abusive Language Online

So far, we’ve put artificial intelligence to use wherever we can, from gadgets as trivial as visual emotion masks to tools with far greater impact, and on the whole people remain optimistic about it. Now researchers at McGill University are developing an AI that can recognize hate speech on social media.

Instead of focusing on isolated words and phrases, they taught machine-learning software to spot hate speech by learning how members of hateful communities speak. They focused on three groups that are often targets of abuse: African Americans, overweight people, and women.
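The article doesn’t give implementation details, but the general idea of characterizing a community’s language rather than flagging individual keywords can be sketched in a few lines. The sketch below is purely illustrative (all function names and toy data are invented here, and it uses a simple smoothed log-odds score rather than whatever model the researchers actually trained): it finds the words that are disproportionately common in one community’s posts compared with a baseline, then scores new posts by how much of that signature vocabulary they use.

```python
from collections import Counter
import math

def characteristic_words(community_posts, baseline_posts, top_n=5):
    """Words far more frequent in a community's posts than in a baseline corpus."""
    comm = Counter(w for p in community_posts for w in p.lower().split())
    base = Counter(w for p in baseline_posts for w in p.lower().split())
    comm_total, base_total = sum(comm.values()), sum(base.values())
    scores = {}
    for word, count in comm.items():
        # Add-one smoothing so unseen baseline words don't divide by zero
        p_comm = (count + 1) / (comm_total + len(comm))
        p_base = (base.get(word, 0) + 1) / (base_total + len(base))
        scores[word] = math.log(p_comm / p_base)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

def score_post(post, vocab):
    """Fraction of a post's words drawn from the community's signature vocabulary."""
    words = post.lower().split()
    return sum(w in vocab for w in words) / max(len(words), 1)

# Toy data standing in for scraped community posts
vocab = set(characteristic_words(
    ["they ruin everything", "they always ruin it"],   # hateful community
    ["nice weather today", "great game tonight"],      # baseline
))
```

A post can then be scored with `score_post("they ruin things", vocab)`; a higher score means the post sounds more like the flagged community, which is the signal the real system presumably combines with much richer features.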

Previous software for detecting abusive language has proven unsuccessful, partly because online slang is misleading and partly because machines aren’t well-versed in sarcasm. The new system, however, was able to identify racist slurs while avoiding false positives. I believe this first step in compiling data about sites that condone, and even encourage, abusive language can lead to solutions in the future, and hopefully not just online. After all, our material reality reflects our online reality, and vice versa.

“Comparing hateful and non-hateful communities to find the language that distinguishes them is a clever solution… [But] ultimately, hate speech is a subjective phenomenon that requires human judgment to identify.”

While it won’t eliminate every online bully, it’s a commendable attempt at making the Internet a safer environment.
