Valorant is going to start storing voice comms to moderate abusive behaviour

Riot has announced that it has updated its privacy policy across its online games to allow it to start storing voice communications data. Online multiplayer game Valorant will be the first to see the new system in action, as Riot says it’s moving to start moderating voice comms to verify reports of abusive and toxic behaviour.

A report by TechCrunch provides some details about the voice moderation system Riot plans to implement. Audio data will be stored regionally, then pulled when a report is submitted. Riot says the audio will be evaluated for code of conduct violations, and if one has occurred, the player in question will have a chance to hear it. Afterwards, the recording will be deleted. If no violation is found, the audio will also be deleted.

Riot told TechCrunch that the system for monitoring voice communications is still in development, and may take the form of a voice-to-text transcription system or possibly machine learning. Modulate’s ToxMod software already has the capability to ‘listen’ to human speech and recognise specific words, phrases, or abusive language in general, and Riot may use a similar AI-driven solution in its voice moderation.
