Twitch Introduces Machine Learning Feature to Detect Suspicious Users

Twitch has had a rough year when it comes to moderating hate raids, streamer abuse, and other toxic behavior on the platform. At the height of the problem, the company admitted that some protective tools would take time to implement. That time has now come: Twitch has announced a new initiative aimed at curbing hate raids and streamer harassment.

The company announced that it will begin using machine learning to catch banned users who evade their bans by creating new accounts. In a video posted to Twitch's Twitter account, the company showed how streamers and their moderators will be able to view potentially harmful chatters flagged by the new Suspicious User Detection system. The streamer or a moderator can then check the chatter's account age, withhold the comment, or let it through.

Suspicious User Detection will have two tiers of moderation. The "Likely" classification hides a comment from general chat and flags it so that only the streamer and their moderators can see it and take action against the account. The "Possible" classification flags the account the comment came from, but the message still appears in chat. Twitch says the new feature will be turned on by default for all streamers. The company also warns of some growing pains: the machine learning model may take time to become accurate, so expect some false positives and false negatives.
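Twitch has not published how the system works internally, but the two-tier behavior described above can be sketched as a simple routing rule. Everything here (the tier names as constants, the `route_message` function, and the returned fields) is a hypothetical illustration, not Twitch's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical tier labels mirroring the article's description.
LIKELY = "likely"      # likely ban evader
POSSIBLE = "possible"  # possible ban evader
CLEAR = "clear"        # no flag

@dataclass
class ChatMessage:
    account: str
    text: str

def route_message(tier: str, msg: ChatMessage) -> dict:
    """Decide message visibility per the two tiers the article describes."""
    if tier == LIKELY:
        # Hidden from general chat; only the streamer and mods see it
        # and can take action against the account.
        return {"visible_in_chat": False, "flag_for_mods": True}
    if tier == POSSIBLE:
        # The account is flagged, but the message still appears in chat.
        return {"visible_in_chat": True, "flag_for_mods": True}
    # Unflagged messages pass through normally.
    return {"visible_in_chat": True, "flag_for_mods": False}
```

Under this sketch, a "Likely" message never reaches general chat, while a "Possible" message is visible but leaves a flag for moderators to review.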

The wave of hate raids cast a shadow over Twitch during the summer of 2021, as streamers of color and LGBTQIA+ streamers feared using the platform while bot accounts subjected them to constant abuse, spamming vile comments in ways that skirted Twitch's automated filters. That wave of disgusting behavior, and Twitch's perceived slow response, prompted makers of third-party streaming tools to build options that helped marginalized streamers combat the problem. Some streamers used those tools while they waited for Twitch to roll out phone and email verification, the first step in Twitch's plan to cut down on bad actors who created new accounts to bypass bans.

Anything that helps protect streamers from harmful messages is a step in the right direction. These new measures, alongside what has already been implemented, should make it harder for repeat offenders to keep getting their kicks from making streamers react to uncomfortable situations. The new feature can protect not only BIPOC and LGBTQIA+ streamers but also women who have had to deal with stalker accounts on the platform. Hopefully, the Suspicious User Detection system gives marginalized streamers more peace of mind.

Original post: https://gamerant.com/twitch-machine-learning-detect-suspicious-users/
