
Reducing Toxicity in Online Gaming by 70% with AI Moderation


Improving player experience and safety should be top of mind for game developers. In a recent VB Spotlight, Yasmin Hussain, head of trust and safety at Rec Room, and Mark Frumkin, director of account management at Modulate, discussed protecting players from toxicity through the lens of Rec Room’s trust and safety team and their work with ToxMod, a proactive voice chat moderation solution powered by machine learning.

Combating Toxicity One Step at a Time

Rec Room reduced instances of toxic voice chat by roughly 70% over a year of experimentation and iteration. The first step was extending continuous voice moderation coverage across all public rooms, which kept the platform's expectations for behavior consistent. The team then tested different mute and ban lengths, as well as warnings, to find the most effective response when players misbehaved.
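To make that concrete, here is a minimal sketch of how such an experiment might be structured: flagged players are bucketed into response "arms" and repeat offenses are tallied per arm so the responses can be compared. The arm names, durations, and function names are illustrative assumptions, not Rec Room's actual policy or tooling.

```python
import random
from collections import defaultdict

# Hypothetical sketch: bucket flagged players into moderation "arms"
# (warning vs. mute lengths) and tally repeat offenses per arm.
# Durations are illustrative, not Rec Room's actual values.
ARMS = {
    "warning": 0,              # verbal warning, no mute
    "mute_1h": 60 * 60,        # one-hour mute, in seconds
    "mute_24h": 24 * 60 * 60,  # one-day mute
}

assignments = {}                    # player_id -> arm
repeat_offenses = defaultdict(int)  # arm -> count of re-offenses

def assign_arm(player_id: str) -> str:
    """Randomly assign a player to one arm, once, for the experiment."""
    if player_id not in assignments:
        assignments[player_id] = random.choice(list(ARMS))
    return assignments[player_id]

def record_violation(player_id: str, is_repeat: bool) -> str:
    """Look up the player's arm; count repeats so arms can be compared."""
    arm = assign_arm(player_id)
    if is_repeat:
        repeat_offenses[arm] += 1
    return arm
```

The arm with the lowest repeat-offense rate after a long enough run suggests the most effective response.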

Creating and Running Trust and Safety Experiments

There are specific metrics to track in order to iterate on player moderation strategies, including the profile and prevalence of toxicity, what people are saying, how often they say it, and who the rule-breakers are. A clear hypothesis is key, defining what behavior to change, what outcome to look for, and what success looks like. Iteration is also key, to learn, fine-tune, and tweak, ensuring experiments run long enough to impact player behaviors.
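A minimal sketch of those metrics follows: the prevalence of toxic utterances, the share of players responsible for them, and a simple check of the stated hypothesis. Field and function names are assumptions for illustration; real events would come from the platform's own moderation logs.

```python
from dataclasses import dataclass

@dataclass
class VoiceEvent:
    player_id: str
    flagged_toxic: bool  # did moderation flag this utterance?

def prevalence(events: list) -> float:
    """Fraction of voice-chat utterances flagged as toxic."""
    return sum(e.flagged_toxic for e in events) / len(events) if events else 0.0

def rule_breaker_share(events: list) -> float:
    """Fraction of active players with at least one flagged utterance."""
    players = {e.player_id for e in events}
    offenders = {e.player_id for e in events if e.flagged_toxic}
    return len(offenders) / len(players) if players else 0.0

def hypothesis_met(before: float, after: float, target_drop: float = 0.30) -> bool:
    """A clear hypothesis: prevalence should fall by at least target_drop."""
    return before > 0 and (before - after) / before >= target_drop
```

Tracking these per experiment window is what makes "run long enough to impact player behaviors" measurable rather than a gut call.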

The Future of AI-Powered Voice Moderation

ToxMod continuously analyzes data around policy violations, language, and player interactions. Moderation should evolve to discourage behavior that violates standards and codes of conduct, but also to encourage behavior that improves the experience for other players. That means identifying pro-social behavior, supporting players who are good partners, and de-escalating situations that rise in temperature.
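One way to picture that "rising temperature" idea is a rolling score per room that climbs with flagged utterances and cools with clean ones, so intervention can happen before a situation boils over. This is a hypothetical sketch, not ToxMod's implementation; the window size and threshold are invented parameters.

```python
from collections import deque

class RoomTemperature:
    """Hypothetical rolling 'temperature' for one voice-chat room."""

    def __init__(self, window: int = 50, threshold: float = 0.2):
        self.recent = deque(maxlen=window)  # True = flagged utterance
        self.threshold = threshold

    def record(self, flagged_toxic: bool) -> bool:
        """Log one utterance; return True if the room needs de-escalation."""
        self.recent.append(flagged_toxic)
        temperature = sum(self.recent) / len(self.recent)
        return temperature >= self.threshold
```

The same rolling window could count pro-social signals instead, flagging players whose presence lowers a room's temperature as candidates for positive reinforcement.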

Unlocking Player Experience and Safety

To learn more about the challenges of toxicity in games, strategies to change player behavior, and how machine learning has changed the game, watch the VB Spotlight on demand. Agenda topics include voice moderation, Rec Room’s success and learnings, essential insights from voice moderation data, and reducing toxicity to increase player retention and engagement.

My Thoughts


As an avid gamer, I've often encountered toxic players who can ruin the online gaming experience. With the help of AI-powered voice moderation, however, game developers can significantly reduce toxicity and create a safer, more enjoyable environment for players. The recent success of Rec Room, a social gaming platform, is a prime example of how effective AI moderation can be.

Rec Room’s Success Story

Rec Room has seen a significant reduction in toxic voice chat instances, thanks to the implementation of ToxMod, a proactive voice chat moderation solution powered by machine learning. The platform’s trust and safety team, led by Yasmin Hussain, worked closely with Modulate to develop a comprehensive moderation strategy that has led to a more positive and welcoming community.

Key Takeaways

Some key takeaways from Rec Room’s success story include the importance of continuous voice moderation coverage, the need for clear community standards, and the value of iterative testing and experimentation to fine-tune moderation strategies. Additionally, the use of AI-powered tools has enabled Rec Room to identify and address toxic behavior more effectively, creating a safer and more enjoyable experience for players.
