Ubisoft and Riot Games are launching Zero Harm in Comms, a research project focused on AI-based solutions to toxic player interactions during multiplayer games.
The program’s goal is to expand the capabilities of both companies’ AI tools for stopping hostile, bigoted, or otherwise negative interactions between players. It aims to build a cross-industry database and labeling system, which will then be used to train AI moderation tools to preemptively find and halt bad behavior. According to Ubisoft’s press release, player data will be anonymized as part of the program’s effort to protect privacy and ensure the research is conducted ethically.
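Neither company has detailed how the shared dataset will actually be assembled, but the anonymize-then-label step they describe could, in rough outline, look something like the sketch below. This is purely illustrative: the field names, the label set, and the salted-hash pseudonymization scheme are assumptions, not anything Ubisoft or Riot has published.

```python
# Illustrative sketch only; neither company has published its pipeline.
# Field names, labels, and the salting scheme are hypothetical.
import hashlib

LABELS = {"neutral", "hostile", "bigoted"}  # hypothetical shared label vocabulary


def anonymize(message: dict, salt: str) -> dict:
    """Replace the player ID with a salted one-way hash so labeled chat
    lines could be pooled across companies without exposing identities."""
    pseudonym = hashlib.sha256((salt + message["player_id"]).encode()).hexdigest()[:12]
    return {"player": pseudonym, "text": message["text"]}


def label(record: dict, category: str) -> dict:
    """Attach a label from the shared vocabulary for training a moderation model."""
    if category not in LABELS:
        raise ValueError(f"unknown label: {category}")
    return {**record, "label": category}


raw = {"player_id": "player#1234", "text": "gg everyone"}
example = label(anonymize(raw, salt="per-title-secret"), "neutral")
print(example)  # pseudonymized, labeled record ready for a training set
```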
The project was created by Yves Jacquier, executive director of Ubisoft La Forge, and Wesley Kerr, head of technology research at Riot. “We believe that, by coming together as an industry, we will be able to tackle this issue more effectively,” Jacquier said in a press release. Kerr noted the project’s potential beyond gaming, saying, “Disruptive behavior isn’t a problem that is unique to games — every company that has an online social platform is working to address this challenging space.”
Riot’s and Ubisoft’s broad catalogs of multiplayer titles will give the research a wide range of cases to study. The AI won’t stop every instance of bad behavior, but in theory the shared dataset will let the tools detect more incidents, and detect them more reliably, than either company could alone. This is the first phase of the ongoing research project, which began roughly six months ago. Whatever the outcome, Ubisoft and Riot plan to share the results of this first phase with the rest of the industry next year.
Both Ubisoft and Riot have faced their own accusations of toxic and mismanaged workplaces. Earlier this year, Riot agreed to a $100 million settlement in a gender discrimination case.