All the moderation issues in the metaverse

The metaverse has been a hot topic for the past few months, and we wanted to know more about the issues and challenges surrounding moderation. According to Meta itself, ineffective moderation in its virtual world could kill the project. Moderation is therefore a major challenge for the development of these new spaces.

We interviewed Hervé Rigault, CEO of Netino by Webhelp, to share his vision on this topic. He is specifically responsible for the new Netino offering, which provides brands with moderation and community management strategies on web3. A fascinating exchange at the crossroads of technological, political, and social debate.

Learn more about Netino

It’s hard to conceptualize what moderation means in the metaverse. Can you explain to us what it consists of?

In general, it is the fight against reprehensible behavior in the spaces created by the metaverse platforms. Unlike traditional social networks, the metaverse aims to create a hyper-immersive experience that engages as many of the senses as possible (even if it remains a digital experience).

More broadly, it means being able to anticipate and prevent all toxic and “deviant” behavior among the users of these spaces. I use the term “deviant” cautiously, because it implies defining a norm and deciding which behaviors deviate from it.

What are the internal difficulties of moderating these virtual spaces?

Interactions take place live, which is a major constraint to manage. To handle real time, you have to give users self-protection features, because it is clearly impossible for humans to monitor every person. Several such moderation features already exist for platform users: a mute function to silence a user, a safety bubble that prevents other users from entering their personal space, the ability to alert the community, and so on.
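To make these self-protection features concrete, here is a minimal sketch of what per-user safety settings and an approach check could look like; the names, defaults, and radius are illustrative assumptions, not any actual platform’s API.

```python
from dataclasses import dataclass, field

@dataclass
class SafetySettings:
    """Per-user self-protection settings (hypothetical names and defaults)."""
    muted_users: set[str] = field(default_factory=set)  # users this person has silenced
    bubble_enabled: bool = True
    bubble_radius_m: float = 1.2  # safety bubble size, in metres

def can_approach(settings: SafetySettings, other_user: str, distance_m: float) -> bool:
    """Return True if other_user's avatar may interact at this distance."""
    if other_user in settings.muted_users:
        return False
    if settings.bubble_enabled and distance_m < settings.bubble_radius_m:
        return False
    return True

# A muted user is blocked at any distance; anyone else is blocked inside the bubble.
settings = SafetySettings(muted_users={"troll42"})
assert not can_approach(settings, "troll42", distance_m=5.0)
assert not can_approach(settings, "stranger", distance_m=0.5)
assert can_approach(settings, "friend", distance_m=2.0)
```

The point of this design is that the user, not a moderator, owns the settings, so enforcement can happen instantly and locally rather than waiting for human review.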

Some platforms lean towards a self-regulation strategy, where the community itself sanctions users: blocking for a few days, expulsion, and so on. For our part, in The Sandbox for example, we have chosen to create ambassador communities that welcome newcomers and explain the rules and how the space works. We also apply this approach with the brands we work with that have spaces in the metaverse.

We therefore help users discover this new space and get the most out of their experience, but our ambassadors are also there to handle reprehensible behavior.

What forms of harassment are users exposed to on these platforms?

In these spaces, harassment takes many forms that go far beyond “written” harassment and can resemble “physical” harassment. Anyone who has received an aggressive bullying message knows how violent it feels, even though it is just characters on a page. But if a user wearing a virtual reality headset feels someone approaching them and entering their intimate or personal space, the aggression becomes almost physical. Virtual reality headsets can create trauma quite close to what we experience in “real life”.

The metaverse must not be a lawless space that serves as an outlet for people who feel less and less free in everyday life, a place where they can shed every limit and every moral constraint.

That would be deadly and dangerous. One of the most fundamental human needs is security. So when you create a world like the metaverse, you have to take care of its inhabitants and consider the avatar an extension of the “real” human being.

This topic is deeply social because it touches on consent. Users should not have to endure unwanted experiences. The metaverse is great when it is lived as an experience, but users should be in control of that experience, not forced into interactions they find toxic.


So moderation in the metaverse is a “technical” and human topic, but above all a political one?

It must be remembered that these spaces are created by private companies, so they are governed by those companies’ own rules and their own visions of freedom of expression and its limits. I think we should take a political approach to this topic and think of the metaverse as the organization of a city, of a shared space. That is a major challenge: when platforms create worlds, they have to invent the rules of those worlds.

This topic should also be taken up with states and with national and transnational institutions, because the actors of the metaverse are global actors. Leaving private companies, of which we are one, the responsibility of dictating the law in spaces that are less and less virtual constitutes a real political problem.

In my opinion, public authorities should act very quickly at the European level, or at least at the national level. It took legislators nearly 20 years to regulate web2, even partially, and to decide on its moderation. We must be careful that it doesn’t take another 10 or 15 years to address web3, as it did with web2.
If every platform, every world, has different laws, those who want to engage in deviant behavior will simply move to the most permissive platform. And some platforms will deliberately be less vigilant in order to attract a larger audience.

I have always considered that Netino, through its moderation activity, has a real political mission in the primary sense of the word. But we must be careful not to become mere subcontractors doing whatever a private transnational entity, unbound by local laws, asks of us. It is a broad topic, much deeper than simple “moderation”!

As on social networks, is this moderation increasingly handled by AI?

The Avia law against online hate content requires platforms to remove illegal or hateful content within 24 hours. The obligation to process millions of pieces of content this quickly has forced moderation to be automated. Today, at Netino, 90 to 95% of social network moderation is done automatically.
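As an illustration of what this kind of automation can look like, here is a minimal triage sketch in which a classifier auto-handles high-confidence cases and escalates the ambiguous remainder to human moderators; the classifier, threshold, and labels are illustrative assumptions, not a description of Netino’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str         # e.g. "hate" or "ok"
    confidence: float  # classifier confidence in [0, 1]

def classify(text: str) -> Verdict:
    """Stand-in for a real toxicity model; a toy word list for the sketch."""
    banned = {"slur1", "slur2"}
    hit = any(word in text.lower() for word in banned)
    return Verdict("hate" if hit else "ok", 0.99 if hit else 0.90)

def triage(text: str, auto_threshold: float = 0.95) -> str:
    """Auto-handle confident cases; escalate the rest to human moderators."""
    verdict = classify(text)
    if verdict.confidence < auto_threshold:
        return "human_review"  # the remaining, ambiguous share of content
    return "remove" if verdict.label == "hate" else "approve"

print(triage("you slur1"))       # -> remove (confident)
print(triage("hello everyone"))  # -> human_review (below the threshold)
```

In a setup like this, the confidence threshold is what determines the split between automated decisions and the 5 to 10% that still reach a human.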

There is a lot of work and development going on around auto-moderation in the metaverse. Platforms are working with film studios in particular, having actors reproduce aggressive behaviors so that artificial intelligence can be trained to recognize them. It is very much an ongoing effort.

Before, we conformed to the rules of the platform or the brand we were working with, but with the emergence of the metaverse, I think it is the end user who will have to decide what they accept and what they are willing to be exposed to. Users should have access to a list of behaviors they accept or refuse. I strongly believe in this approach, which seems to me the only effective one in the context of direct interactions.
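One way to picture such a user-controlled list is a simple opt-in preference table that interaction checks consult before letting a behavior through; the behavior categories and functions below are hypothetical, sketched only to illustrate the idea.

```python
from enum import Enum, auto

class Behavior(Enum):
    """Hypothetical behavior categories a user could opt in or out of."""
    PROFANITY = auto()
    CLOSE_PHYSICAL_PROXIMITY = auto()
    UNSOLICITED_VOICE_CHAT = auto()

# Each user declares up front which behaviors they accept being exposed to.
user_prefs: dict[str, set[Behavior]] = {
    "alice": {Behavior.PROFANITY},  # tolerates strong language, nothing else
    "bob": set(),                   # accepts none of the listed behaviors
}

def is_allowed(target_user: str, behavior: Behavior) -> bool:
    """Check an incoming behavior against the target user's declared preferences."""
    return behavior in user_prefs.get(target_user, set())

assert is_allowed("alice", Behavior.PROFANITY)
assert not is_allowed("bob", Behavior.CLOSE_PHYSICAL_PROXIMITY)
```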

You do a lot of work in The Sandbox. Is this expertise and way of working something that can be replicated on other platforms?

What we offer is much more comprehensive than moderation: we provide community management and engagement. In a way, it is a return to the beginnings of community management and its ambition to humanize these spaces. We have a hundred “real” collaborators who put on virtual reality headsets to explore the different spaces and engage with users.

We currently have a hundred people working in The Sandbox space, and this can definitely be replicated in other spaces. Beyond the platforms, it also answers a real need for brands: they want to create spaces, but don’t necessarily know how to bring their “classic” social media communities into web3. So we help them with this transition between the two worlds.

