Chatbot Will Combat Email Scammers For You

Since Instagram developed a bot to tackle hate speech, developers have been training bots to do other things. Apart from trolls and cyberbullies, netizens also often have to deal with email scammers. Shutting them down is now much less of a hassle: just hand the job over to Netsafe's genius chatbot.

Next time you get a dodgy email in your inbox, says Netsafe, forward it to me@rescam.org and a proxy email address will start replying to the scammer for you, doing its utmost to waste their time.

Re:scam isn’t anything fancy. In fact, it doesn’t even try to understand what the scammer writes. Its responses are random at best, but vague enough to be believable. It may not pull off a bust by itself, but it will buy you enough time (and evidence) to take action.
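To get a feel for how little machinery that actually takes, here is a minimal Python sketch of the same idea. It is not Netsafe's code, and the reply list and the draft_reply function are made up for illustration; the point is simply that the bot never reads the scammer's message, it just keeps sending plausible-sounding, vague replies.

```python
# A minimal sketch of the Re:scam idea, not Netsafe's actual implementation:
# the bot ignores what the scammer wrote and keeps the thread alive with
# randomly chosen, deliberately vague replies.
import random

# Hypothetical canned responses, vague enough to fit almost any scam email.
VAGUE_REPLIES = [
    "Thanks for getting back to me. Could you explain that part again?",
    "This sounds interesting, but I'm not sure I follow. What do I do next?",
    "Sorry for the delay, my internet has been acting up. Can you resend the details?",
    "I spoke to my cousin about this and she has a few questions. What exactly do you need from me?",
    "I tried what you said but something went wrong. Could you walk me through it once more?",
]


def draft_reply(scammer_message: str) -> str:
    """Return a reply to the scammer.

    The incoming message is never parsed -- its content is irrelevant,
    because the only goal is to keep the conversation (and the scammer's
    wasted time) going.
    """
    return random.choice(VAGUE_REPLIES)


if __name__ == "__main__":
    incoming = "Dear friend, I need your urgent assistance to transfer funds..."
    print(draft_reply(incoming))
```

The whole trick is that a scammer's time is their scarce resource; even a couple of extra back-and-forth replies per thread adds up.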

Netsafe’s new online gadget may not be the most sophisticated, but it is a glimpse of what artificial intelligence is capable of. And if you’re the type to hold a grudge, it’s the perfect tool for sweet revenge.


AI Can Distinguish Abusive Language Online

So far, we’ve used artificial intelligence to our advantage in whatever ways we can. We’ve built gadgets as trivial as visual emotion masks, and overall, people feel mostly optimistic about the technology. But AI can also take on weightier problems: developers at McGill University are building an AI that can recognize hate speech on social media.

Instead of focusing on isolated words and phrases, they taught machine learning software to spot hate speech by learning how members of hateful communities speak… They focused on three groups who are often the target of abuse: African Americans, overweight people and women.
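To illustrate the approach in the simplest possible terms, here is a rough Python sketch using scikit-learn. It is not the McGill team's pipeline, and the toy posts and labels below are invented; it only shows the core idea of learning from which community a post came, rather than from a hand-curated list of banned words.

```python
# A rough sketch of the community-level idea described above, not the McGill
# team's actual system: a classifier learns which language is typical of posts
# from hateful communities versus posts from neutral communities.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy, hypothetical training data; the real work would use large scrapes of
# posts from known hateful and non-hateful communities.
posts = [
    "those people are ruining everything, send them all back",   # hateful community
    "typical of them, they never learn and never will",          # hateful community
    "great turnout at the community fundraiser this weekend",    # neutral community
    "does anyone have tips for training for a first marathon?",  # neutral community
]
labels = [1, 1, 0, 0]  # 1 = drawn from a hateful community, 0 = not

# Word n-grams weighted by TF-IDF capture slang and recurring phrasing better
# than a fixed keyword list would.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(posts, labels)

print(model.predict(["they are ruining everything around here"]))
```

Trained on real community data at scale, the same shape of model can pick up the slang and coded phrasing that simple keyword filters miss.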

Previous software for detecting abusive language has proven unsuccessful due to the misleading nature of online slang, and because machines aren’t well-versed in sarcasm. This system, however, was able to identify racist slurs while avoiding false positives. And I believe this first step in compiling data about sites that condone and even encourage abusive language can lead to solutions in the future, and hopefully not just online. After all, our material reality reflects our online one, and vice versa.

“Comparing hateful and non-hateful communities to find the language that distinguishes them is a clever solution… [But] ultimately, hate speech is a subjective phenomenon that requires human judgment to identify,”

While it won’t eliminate every online bully, it’s a commendable attempt at making the Internet a safer environment.
