Twitch has recently introduced a new tool to tackle harassment within live stream chats.
The new feature, called Suspicious User Detection, allows creators and moderators to intercept users attempting to evade channel bans.
“When you ban someone from your channel, they should be banned from your community for good,” Twitch said in a recent post. “Unfortunately, bad actors often choose to create new accounts, jump back into chat, and continue their abusive behaviour.”
The tool’s purpose is to stop banned users from simply creating new accounts and returning. It uses machine learning to identify likely ban evaders based on a variety of account signals.
Suspicious User Detection flags suspicious accounts as either “likely” or “possible” ban evaders so that creators and moderators can take action if required.
Messages from “likely” ban evaders will not appear in chat at all, but they remain visible to creators and mods, who can leave the restriction in place, delete the messages, or ban the account from the channel entirely.
Messages from “possible” ban evaders, by contrast, appear in chat as usual, but the account is marked so that creators and mods can restrict it if necessary.
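The two-tier behaviour described above can be sketched as a simple message-routing policy. This is a hypothetical illustration, not Twitch’s actual implementation: the flag names mirror the article’s “likely”/“possible” labels, and the `route_message` function and its return fields are invented for clarity.

```python
from enum import Enum

class Flag(Enum):
    # Hypothetical flag levels mirroring the article's terminology
    NONE = "none"
    POSSIBLE = "possible"   # shown in chat, marked for mods
    LIKELY = "likely"       # hidden from chat, visible only to mods

def route_message(flag: Flag, text: str) -> dict:
    """Decide who sees a chat message based on the sender's flag level.

    'Likely' evaders' messages are withheld from chat but shown to
    creators/mods; 'possible' evaders' messages appear normally but
    carry a visible marker so mods can act if needed.
    """
    if flag is Flag.LIKELY:
        return {"visible_to_chat": False, "visible_to_mods": True,
                "marked": True, "text": text}
    if flag is Flag.POSSIBLE:
        return {"visible_to_chat": True, "visible_to_mods": True,
                "marked": True, "text": text}
    return {"visible_to_chat": True, "visible_to_mods": True,
            "marked": False, "text": text}
```

Note that neither branch bans the account automatically; in every case a human moderator makes the final call, which matches Twitch’s stated design.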
The feature is enabled by default and can be turned off or customized on the channel’s moderation settings page. Flagged users can also be monitored more closely through the new “Suspicious Users” widget.
However, Twitch notes that machine learning will never be 100% accurate, so both false positives and false negatives are possible. For that reason, the tool does not ban users automatically.
“This tool highlights our overall approach to safety technology: build powerful tools that work together to give you finer control over your community,” Twitch said. “Our work will never be finished, and we’re continuing to develop more tools to prevent hate, harassment, and ban evasion on Twitch.”