EU Digital Services Act (DSA) Transparency Report

This DSA Transparency Report is responsive to the obligations under Articles 15(1)(a), 15(1)(c), 15(1)(d), and 15(1)(e) of the EU’s Digital Services Act (“DSA”). This Report provides information regarding Viber’s content moderation efforts, including the following topics: Government orders and requests; Viber’s proactive content moderation initiated without any governmental order or user report; Total number of relevant content moderation user reports; and Use of automation in content moderation. This report covers the reporting period from February 17, 2024, to February 17, 2025.

1. Government orders and requests

In accordance with Article 15(1)(a) of the DSA, this section covers orders from EU Member States’ judicial or administrative authorities. These include orders under Article 9 to act against illegal content and requests under Article 10 to provide information about user accounts.

Table 15(1)(a).1 – Number of Authority Orders to act against illegal content per Article 9 received by Viber between February 17, 2024 and February 17, 2025, categorized by the issuing Member State. Note that such orders were received only with regard to child nudity; no orders concerning other categories of illegal content were received.

Member State Child Nudity Orders
Austria 0
Belgium 0
Bulgaria 0
Croatia 0
Cyprus 0
Czechia 0
Denmark 0
Estonia 0
Finland 0
France 1
Germany 0
Greece 0
Hungary 0
Ireland 0
Italy 0
Latvia 0
Lithuania 0
Luxembourg 0
Malta 0
Netherlands 0
Poland 0
Portugal 0
Romania 0
Slovakia 0
Slovenia 0
Spain 1
Sweden 0
Total 2
  • Median time to inform the issuing authority of receipt: 5 hours
  • Median time to give effect to the order: 18 hours (note that out of the two total orders, only one required a takedown, while in the other case the authority did not respond, and no action was taken)

Table 15(1)(a).2 – Number of Authority requests to provide information about user accounts in accordance with Article 10 received by Viber between February 17, 2024 and February 17, 2025, categorized by the requesting Member State.

Member State Information Requests
Austria 2
Belgium 0
Bulgaria 9
Croatia 0
Cyprus 0
Czechia 5
Denmark 0
Estonia 0
Finland 1
France 6
Germany 24
Greece 44
Hungary 22
Ireland 1
Italy 1
Latvia 0
Lithuania 0
Luxembourg 226
Malta 0
Netherlands 0
Poland 0
Portugal 0
Romania 0
Slovakia 0
Slovenia 0
Spain 5
Sweden 0
Total 346
  • Median time to inform the requesting authority of receipt: 40 hours 18 minutes
  • Median time to give effect to the request: 55 hours 09 minutes

2. Viber’s proactive content moderation

The section below provides information pursuant to Article 15(1)(c) of the DSA and includes content moderation actions undertaken by Viber on its own initiative, not preceded by an order from a Member State authority or a report/notice submitted by a user.

Viber’s content moderation approach is rooted in a comprehensive, structured internal Content Moderation Policy and a clear, published Acceptable Use Policy, both aimed at providing a safe, respectful, and secure environment for all users. The company employs a hybrid strategy that combines advanced automated tools and user reports with a trained team of human moderators, who review reports and automated indications to ensure that appropriate action is taken. This hybrid model ensures that content is swiftly identified and addressed, offering both efficiency and the nuanced judgment that automated tools alone cannot provide.
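
As a rough illustration of how such a hybrid pipeline can be wired together, the Python sketch below routes each item either to automated enforcement or to a human review queue depending on classifier confidence. Every name, category, and threshold here is a hypothetical assumption for illustration, not a description of Viber’s actual system.

    # Minimal sketch of a hybrid moderation pipeline. Every name,
    # category, and threshold is a hypothetical assumption, not a
    # description of Viber's actual system.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ModerationItem:
        content_id: str
        category: Optional[str] = None  # e.g. "spam", "nudity"
        score: float = 0.0              # automated classifier confidence, 0..1

    AUTO_ACTION_THRESHOLD = 0.98        # only clear-cut cases skip human review
    human_review_queue: list[ModerationItem] = []

    def route(item: ModerationItem) -> str:
        """Auto-enforce high-confidence, clear-cut violations; send
        everything else to trained human moderators."""
        clear_cut = item.category in ("spam", "fraud", "nudity")
        if clear_cut and item.score >= AUTO_ACTION_THRESHOLD:
            return "automated_action"
        human_review_queue.append(item)
        return "human_review"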

The primary categories of violating content actively addressed by Viber’s Content Moderation Policy include spam, nudity or sexual content, violence, copyright violations, bullying and harassment, fraud and scams, child nudity, and other forms of content that violate the company’s community standards or applicable laws.

To ensure that content is addressed promptly and in proportion to the severity of the violation, Viber has established clear content removal timelines consistent with its legal obligations. These timeframes give Viber’s content moderators enough time to assess the content in question carefully, especially when the violation is complex or borderline, balancing swift action against the protection of user freedoms and ensuring that content is removed in a way that is both fair and timely.
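
A minimal sketch of how severity-based removal timelines might be encoded is shown below; the categories and deadlines are invented placeholders, not Viber’s actual timeframes.

    # Illustrative only: the categories and deadlines below are invented
    # placeholders, not Viber's actual timeframes.
    from datetime import datetime, timedelta, timezone

    REMOVAL_DEADLINES = {
        "child_nudity": timedelta(hours=1),  # most severe: fastest action
        "violence": timedelta(hours=24),
        "spam": timedelta(hours=48),
    }
    DEFAULT_DEADLINE = timedelta(hours=72)   # fallback for other categories

    def due_by(category: str, flagged_at: datetime) -> datetime:
        """Deadline by which a flagged item must be assessed and actioned."""
        return flagged_at + REMOVAL_DEADLINES.get(category, DEFAULT_DEADLINE)

    print(due_by("violence", datetime.now(timezone.utc)))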

Viber’s content moderators undergo periodic training through detailed presentations and Q&A sessions, which include real-life examples and practical scenarios to ensure they apply the company’s guidelines consistently. The training also equips them to address edge cases and make context-sensitive decisions while respecting user freedoms, consulting their colleagues and escalating borderline cases for closer scrutiny.

Viber takes three types of actions when addressing violating content: removing the content entirely, restricting its visibility, or terminating accounts responsible for severe or repeated breaches. In addition, in certain cases Viber’s automated moderation systems issue a warning to the user regarding their prohibited behaviour.
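
The sketch below illustrates how these enforcement tiers could be expressed in code; the severity scale and thresholds are hypothetical assumptions for illustration only.

    # Illustrative mapping from violation severity and history to an
    # enforcement action; the scale and thresholds are hypothetical.
    from enum import Enum, auto

    class Action(Enum):
        WARN = auto()       # automated warning about prohibited behaviour
        RESTRICT = auto()   # limit the content's visibility
        REMOVE = auto()     # delete the content entirely
        TERMINATE = auto()  # close the account (severe/repeated breaches)

    def choose_action(severity: int, prior_violations: int) -> Action:
        """Pick an enforcement tier from a 0-10 severity score."""
        if severity >= 9 or prior_violations >= 3:
            return Action.TERMINATE
        if severity >= 6:
            return Action.REMOVE
        if severity >= 3:
            return Action.RESTRICT
        return Action.WARN

    assert choose_action(severity=2, prior_violations=0) is Action.WARN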

Below are the content moderation actions taken by Viber on its own initiative during the reporting period, broken down by category of policy violation and enforcement type. This disclosure includes only actions initiated by Viber without any prior user report, in line with Article 15(1)(c) of the DSA. Specifically, it focuses on the two policy areas where Viber currently employs automated detection and filtering systems, namely (a) Spam, Fraud and Scam; and (b) Nudity/Sexually inappropriate content.

Other types of content violations (e.g., hate speech, harassment, violent threats) are typically addressed only following user reports, and are therefore excluded from this section.

Nudity/Sexually inappropriate content

  • Total warnings issued: 401
  • Total restrictions imposed: 2,171

Spam, Fraud or Scam

  • Users received automatic short or long blocks: 23,780,208
  • Users warned about displaying spammy behaviour: 102,704

3. Content moderation based on user reports

This section, in accordance with Article 15(1)(d) of the DSA, provides data on complaints submitted through Viber’s internal complaint-handling system under its content moderation policy. Alongside Viber’s proactive moderation layers of automated detection systems and a dedicated team of human moderators, a third layer of protection relies on user reports. If users encounter content that they believe violates Viber’s Acceptable Use Policy, or content that is otherwise illegal, harmful, or illegitimate, they can report it through the platform and select a specific report category. To assist users in navigating the reporting options, Viber maintains a public guide explaining how to report inappropriate or illegal content and spam, and the steps involved. The guides are available here and here.

As part of this third layer, Viber also has fast-track measures for trusted flaggers, that is, users or entities with proven expertise in reporting illegal content. Notifications from trusted flaggers are prioritized and addressed immediately. Viber maintains a dedicated webpage for trusted flaggers with information regarding their reporting options, available here.
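
One plausible way to fast-track trusted-flagger notices is a priority queue, sketched below; the two-level priority scheme is an assumption, not Viber’s actual mechanism.

    # Hypothetical fast-track for trusted-flagger notices; the two-level
    # priority scheme is an assumption, not Viber's actual mechanism.
    import heapq
    import itertools

    _arrival = itertools.count()  # tie-breaker preserving arrival order
    report_queue: list = []

    def submit_notice(content_id: str, trusted_flagger: bool) -> None:
        priority = 0 if trusted_flagger else 1  # trusted notices jump ahead
        heapq.heappush(report_queue, (priority, next(_arrival), content_id))

    def next_notice() -> str:
        _, _, content_id = heapq.heappop(report_queue)
        return content_id

    submit_notice("msg-101", trusted_flagger=False)
    submit_notice("msg-202", trusted_flagger=True)
    assert next_notice() == "msg-202"  # trusted flagger handled first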

Total number of complaints received from users through the company’s internal complaint-handling systems between February 17, 2024 and February 17, 2025:

30,018,972

4. Use of automation in content moderation

In accordance with Article 15(1)(e) of the DSA, this section provides information on Viber’s use of automated tools for content moderation.

  • Automated Means for Content Moderation

Viber employs a hybrid content moderation framework that combines automated technologies with human oversight to ensure compliance with applicable laws, community standards, and user safety. This system integrates both proactive and reactive measures, ensuring the rapid detection and removal of harmful content while upholding the principles of freedom of expression and fairness.

  • Description and Purpose of Automated Tools:

(1) Proactive Detection:

Automated filters are used to identify certain clear-cut violations (primarily spam, fraud/scams, and nudity/sexual content) that do not require extensive contextual analysis. This enables swift action on blatantly harmful material.
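
A toy example of such a clear-cut filter is sketched below; the marker list is invented and far simpler than any production system.

    # Toy clear-cut spam filter; the marker list is invented and far
    # simpler than any production system.
    SPAM_MARKERS = ("free prize", "click here", "wire transfer")

    def looks_like_spam(text: str) -> bool:
        """Flag messages matching obvious spam markers, with no deeper
        contextual analysis required."""
        lowered = text.lower()
        return any(marker in lowered for marker in SPAM_MARKERS)

    assert looks_like_spam("Click HERE to claim your free prize!")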

(2) Reactive Assessment:

Automated systems assist in filtering and flagging content reported by users or identified through monitoring, expediting review by human moderators for context-dependent cases.
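
The sketch below shows one way reported content might be enriched with automated signals before it reaches a moderator; the classifier is a hypothetical stand-in.

    # Sketch of enriching a user report with automated signals before it
    # reaches a moderator; the classifier is a hypothetical stand-in.
    def enrich_report(report: dict, classify) -> dict:
        """Attach category scores so the moderator sees the automated
        assessment alongside the user's complaint."""
        report["signals"] = classify(report["content"])
        return report

    def fake_classifier(text: str) -> dict:
        return {"spam": 0.12, "harassment": 0.81}

    enriched = enrich_report({"content": "example message",
                              "reason": "harassment"}, fake_classifier)
    print(enriched["signals"])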

(3) Notification Mechanism:

A user-friendly notification system allows individuals to report illegal content in a swift, straightforward way. Non-registered users can also report publicly available content through our Contact Us page, ensuring inclusivity in reporting mechanisms. Such reports are forwarded to Viber’s content moderation team, which handles them promptly and efficiently.
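
A possible shape for such a report, allowing an absent reporter identity for non-registered users, is sketched below; the field names are assumptions, not Viber’s actual API.

    # Hypothetical shape of a user report; field names are assumptions,
    # not Viber's actual API.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class UserReport:
        content_id: str
        category: str                      # report category chosen by the user
        reporter_id: Optional[str] = None  # None for non-registered reporters
        details: str = ""

    # A non-registered user reporting publicly available content:
    report = UserReport(content_id="post-77", category="illegal_content")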

  • Accuracy and Error Indicators:

Automated tools are calibrated to achieve high accuracy in detecting explicitly illegal content, with operational metrics monitored regularly to minimize error rates. Periodic evaluations measure the effectiveness of these tools, with error rates tracked and adjustments made as needed to improve performance.

Estimated Error Rate: Viber’s moderation systems target a false-positive/false-negative rate of under 5%, with the goal of further reducing error rates and boosting overall accuracy.
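
For illustration, the snippet below shows how a sub-5% false-positive/false-negative target could be checked against a labeled audit sample; the counts are made-up examples, not Viber’s real metrics.

    # Checking a sub-5% error target on a labeled audit sample; the
    # counts are made-up examples, not Viber's real metrics.
    def error_rates(tp: int, fp: int, tn: int, fn: int) -> tuple:
        fp_rate = fp / (fp + tn)  # benign items wrongly flagged
        fn_rate = fn / (fn + tp)  # violations the filter missed
        return fp_rate, fn_rate

    fp_rate, fn_rate = error_rates(tp=940, fp=45, tn=955, fn=60)
    print(f"FP {fp_rate:.1%}, FN {fn_rate:.1%}")  # FP 4.5%, FN 6.0%
    # Here the 6.0% false-negative rate would exceed a 5% target and
    # trigger recalibration of the filter.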

Safeguards Applied:

(1) Human Oversight:

Automated actions are reviewed by trained moderators to ensure proper contextual evaluation for complex or ambiguous cases.

(2) Appeals and Error Mitigation:

Viber maintains an error-review process and gives users the ability to appeal decisions. Each appeal is assessed carefully, and insights from appeals are continuously fed back into our automated filters for ongoing refinement.
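
The feedback loop from appeals into the filters might look like the sketch below, where overturned automated decisions are collected as tuning examples; all names here are hypothetical.

    # Sketch of feeding overturned automated decisions back into filter
    # refinement; all names here are hypothetical.
    overturned: list = []

    def review_appeal(appeal: dict, human_decision: str) -> None:
        """Record appeals that reverse an automated action so they can be
        used to retune the filters."""
        if human_decision == "overturn":
            overturned.append(appeal)  # later consumed by filter tuning

    review_appeal({"content_id": "img-9", "auto_action": "restrict"},
                  human_decision="overturn")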

  • Timeliness and Reporting Obligations:

Viber uses automated tools specifically for spam, fraud, and nudity/sexually inappropriate content. This automated screening reduces turnaround time by flagging potentially harmful content for prompt review, which helps keep illegal or prohibited material from spreading. Items that raise more serious or ambiguous concerns, beyond what the automated filters cover, are escalated to human moderators for immediate evaluation, ensuring that high-risk findings are addressed quickly.

  • Accountability and Record-Keeping:

Detailed records of automated decisions, including flagged content, removal actions, and associated notifications, are maintained for a minimum of six months, and in certain cases for extended periods of up to two years. These records support compliance, appeals, and continuous improvement of automated systems.
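
A simple retention check consistent with these periods is sketched below; the schema and exact day counts are assumptions for illustration.

    # Illustrative retention check: at least ~6 months by default, up to
    # ~2 years in extended cases; the schema is an assumption.
    from datetime import datetime, timedelta, timezone

    DEFAULT_RETENTION = timedelta(days=183)   # roughly six months
    EXTENDED_RETENTION = timedelta(days=730)  # roughly two years

    def is_expired(created_at: datetime, extended: bool = False) -> bool:
        """True once a moderation record may be purged."""
        limit = EXTENDED_RETENTION if extended else DEFAULT_RETENTION
        return datetime.now(timezone.utc) - created_at > limit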

By integrating advanced automation with human expertise, Viber ensures an effective, rights-respecting approach to content moderation while complying with the requirements of the DSA.

Published date: April 3, 2025