Brazil Introduces New Digital Protections for Minors

Digital platforms now sit at the centre of how children and adolescents communicate, learn, and spend time online. At the same time, exposure to manipulation, exploitation, and fraud has increased across these environments, with risks embedded not only in content but in how platforms operate. According to the State of Scams in Brazil 2025 Report, individuals in Brazil encounter scams on average every one and a half days, equating to approximately 252 scam attempts per year. This level of exposure reflects how deeply embedded deceptive practices have become across digital environments.
Protecting minors in this context cannot rely on awareness or moderation alone. It increasingly requires platform-level intervention. Brazil’s Digital Statute for Children and Adolescents came into force in March 2026, applying across platforms, applications, games, and other digital services used by minors, and introducing obligations aimed at strengthening safeguards within these systems.
Design and Enforcement
A central aspect of the framework is its focus on platform design. The statute introduces restrictions on manipulative design practices, including features that encourage compulsive use or increase exposure to harm, as well as restrictions on monetisation practices involving children and adolescents. This reflects a recognition that risks are shaped by platform architecture, user flows, and incentive structures, not only by individual pieces of content.
The framework is supported by an expanded role for ANPD, Brazil’s data protection authority, which is responsible for overseeing compliance and setting technical requirements. These obligations are backed by enforcement mechanisms, including significant financial penalties and potential service restrictions, positioning the statute as an operational requirement rather than guidance.
Brazil’s approach sits alongside a broader set of international efforts to strengthen protections for minors online. In the United Kingdom, the Online Safety Act introduces duties on platforms to assess and mitigate risks to children, alongside existing requirements around privacy and design. In Australia, recent legislation focuses on restricting access to certain social media services for users under 16. While these approaches differ in scope, they reflect a shared direction towards increased platform accountability where minors are present.
From Moderation to System Design
These developments point to a shift in how online safety is being addressed. The focus is moving away from reacting to harmful content towards shaping how platforms are designed and operate from the outset.
There is a false sense of safety in assuming that children are protected simply because they are physically close to us while online. In reality, risks are embedded in platform design and business models, from exploitation to addictive features. This law is a turning point because it addresses those structural issues, but its real impact will depend on effective implementation.
Measures such as robust age verification to replace self-declared age, parental controls, and design restrictions form part of a broader effort to address risk at the level of systems rather than individual incidents.
Implementation and Ongoing Considerations
Age assurance introduces technical and privacy considerations, while platform-level changes may be complex to apply consistently.
Similar measures in other jurisdictions have faced scrutiny around proportionality (whether a measure is appropriate to the level of risk it aims to address), feasibility at scale, and potential unintended consequences, particularly where age verification and data use intersect.
What This Demonstrates
Brazil’s statute illustrates a move towards more comprehensive forms of protection for minors online, combining access controls, design considerations, and enforcement within a single framework.
As digital environments continue to evolve, protecting minors is increasingly tied to how platforms are built, how they function, and the obligations placed on them to manage risk and protect users.