Note: This post contains the text of my opening keynote at Deloitte’s Digital Platform Trust Forum 2025 (US Edition), which was held in Mountain View, California on November 13, 2025.
tl;dr: Digital platforms are in their “OSHA moment.” Just as the passage of the OSH Act of 1970 moved workplace safety from voluntary codes to enforceable standards, recent frameworks like the EU’s Digital Services Act are compelling platforms to make their safety systems auditable while still holding them accountable for safety outcomes. Meeting this challenge requires integrating the Trust & Safety and Compliance disciplines into product design from the start, an investment that’s becoming both legally unavoidable and strategically essential. The platforms that will lead aren’t the ones resisting this shift; rather, the leaders will be the ones turning accountability into a competitive advantage by engineering trust as deliberately as they engineer features.
Introduction
It’s a bit surreal to be speaking after seeing a fireside chat with a version of myself that doesn’t technically exist.
But that deepfake demonstration makes the point better than any slide could: we’ve reached a moment where even seeing isn’t necessarily believing.
Likewise, we can’t assume trust. It has to be built, proven, and continually reinforced. That shift – from assuming trust to deliberately engineering it – is exactly what I want to talk about today.
Expectations for digital platforms have changed. It’s no longer enough to make promises about safety, integrity, or responsibility; regulators now expect proof that those promises are being kept.
Accountability has become a complement to reputation – a way to show that integrity isn’t just claimed, but confirmed.
While that may feel new in the digital world, we’ve seen it before in another industry, in another era, and that’s where I want to start.
Workplace Safety and OSHA
When the Occupational Safety and Health Act of 1970 created the Occupational Safety and Health Administration, better known as OSHA, it wasn’t because companies suddenly discovered empathy.
It was because in the late 1960s, workplace accidents in the United States were killing about 14,000 people a year and injuring millions more.
Before OSHA, workplace safety was a patchwork of state laws, voluntary codes, and after-the-fact investigations. Every factory, refinery, or mine was essentially on its own, and the result was grim. It was clear that voluntary self-regulation wasn’t working and, instead, a federal standard was required to ensure every worker, regardless of state or industry, was protected (as the law says) “so far as possible.”
OSHA didn’t set out to make companies more caring; it set out to make them more accountable.
It established enforceable national standards, created a framework for inspections, and backed it with penalties. While OSHA didn’t make safety self-sustaining overnight, it did move safety from being an afterthought to being an expectation. Today, you don’t need to convince any serious company that safety matters. The only question is how far beyond baseline compliance they choose to go. The rulebook still matters; it’s just no longer the whole story.
Of course, that progress hasn’t made OSHA obsolete. Violations still happen, enforcement still matters, and culture still drifts. But since OSHA began in 1971, workplace fatality rates in the United States have fallen by roughly eighty percent. The progress hasn’t been perfect, but the long-term trend is unmistakable: safety became a core expectation.
OSHA showed how clear standards, consistent accountability, and transparency can shift industry norms. In the digital realm, we now have the chance to apply the same principles proactively through the systems and safeguards we design ourselves.
From Safety to Digital Trust
For the past two decades, online safety has operated primarily through self-regulation. Trust & Safety teams built policies, published transparency reports, ran investigations, and created the world’s first frameworks for digital accountability largely because it was the right thing to do, not because anyone required it. Though let’s be honest: it was also strategic. It’s better to build accountability proactively than to have it imposed reactively.
Where regulation did exist, it reinforced fragmentation. Privacy over here, payments over there, child safety here, content moderation here, and here, and here . . . The Communications Decency Act (CDA) in 1996, the Children’s Online Privacy Protection Act (COPPA) in 1998, the Payment Card Industry Data Security Standard (PCI DSS) in 2004 – each added a layer of responsibility, but each was still narrow and focused on a single category of risk. None required platforms to examine how their systems created or managed risk. That’s what’s changing now.
Around the world, governments are moving toward comprehensive frameworks like the EU Digital Services Act (DSA), the EU Artificial Intelligence (AI) Act, the UK Online Safety Act (OSA), Australia’s Industry Codes of Practice for the Online Industry, and Singapore’s Online Safety (Relief and Accountability) Bill. A number of other countries are proposing similar legislation.
Under the DSA, for instance, very large platforms are now required to assess and publish the systemic risks their services create — from how content spreads to how their algorithms influence what people see and share.
We are moving from a world that judged platforms only by visible outcomes to a world that judges them by how safely their systems are designed to operate.
It’s the difference between “Don’t sell spoiled food” and “Show us your entire supply chain, explain how contamination gets detected, and prove your safety protocols actually prevent it.”
One regulates outcomes. The other regulates systems.
Let’s be honest: most platforms are still developing the processes and infrastructure needed to understand how their systems create risk, let alone to demonstrate how those risks are being managed.
Here’s what that looks like in practice:
- A content moderation team removes a video for violating harassment policies;
- That same day, the recommendation algorithm promotes fifty more videos with identical characteristics because the enforcement signal never reached the ranking system;
- Meanwhile, the policy team is analyzing user appeals in one database;
- The machine learning (ML) team is evaluating model performance in another database;
- The compliance team is trying to report on “effectiveness” without having access to either.
Each team is doing its job. But the platform has no unified view of whether its safeguards are actually working. That fragmentation isn’t just an operational inconvenience; it’s a structural vulnerability that makes harm both more likely and harder to detect. Digital platforms are fundamentally different from factories – they’re not static. An algorithm update can undo a safety feature overnight. Code ships constantly. The platform you audit today isn’t the platform running tomorrow.
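If it helps to picture that fragmentation, here’s a deliberately toy sketch, with entirely hypothetical names: three teams, three disconnected records of the same incident, and no signal or identifier tying them together.

```python
# Toy illustration only - hypothetical names, not any real platform's schema.
# Each team records its own view of the same incident; nothing connects them.

moderation_log = [
    {"video_id": "v-101", "action": "removed", "policy": "harassment"},
]
ranking_candidates = {"v-102": 0.97, "v-103": 0.95}  # similar videos, still being promoted
appeals_queue = [{"case_id": "a-55", "video_id": "v-101", "status": "pending"}]

def effectiveness_report() -> None:
    """The compliance team's question: are our safeguards actually working?"""
    removed = {row["video_id"] for row in moderation_log}
    still_promoted = [vid for vid in ranking_candidates if vid not in removed]
    # Each number is accurate on its own, but nothing links enforcement,
    # ranking, and appeals - so "effectiveness" can't really be answered.
    print(f"Removed: {len(removed)}; similar content still ranked: {len(still_promoted)}")

effectiveness_report()
```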
The opportunity in front of us isn’t speed of response; it’s quality of systems – making accountability consistent, reliable, and built-in. Regulators are asking for the same kind of rigor that platforms have been working to develop internally for years – frameworks that make risk management systemic instead of situational.
Trust & Safety’s frameworks are dynamic and operational, designed to detect emerging risks, evaluate harm, and respond in real time. Compliance’s are structural and evidentiary, designed to align those actions with laws, standards, and audit requirements.
The real opportunity now is to connect these frameworks. When those systems are integrated, accountability becomes continuous rather than episodic, and the organization gains a clear, defensible understanding of how risk is identified, managed, and improved over time. That’s the same evolution we saw in physical safety half a century ago: the shift from patchwork fixes to accountable systems.
This is our turning point.
The platforms that learn to do this won’t just meet the new expectations; they’ll define what “trustworthy” looks like for everyone else.
From Reactive to Preventive
Compliance isn’t automatic. Most organizations start reactively, interpreting new requirements, fixing issues as they arise, and only gradually learning how to anticipate and prevent them. It’s still rare for systems to be built where robust, effective safeguards are integrated from the beginning.
That was true in workplace safety, too. The hazards that led to OSHA’s standards weren’t mysteries; most were known long before regulation forced change. The problem wasn’t awareness; it was priority. It took loss, followed by accountability, for prevention to become the default expectation.
Digital platforms are in a similar place now. Trust & Safety and Compliance have built extraordinary capacity to respond – to manage incidents, investigate harms, engage with regulators, and explain what happened. We can evolve that same capability into something systemic and use what we already know about risk not just to validate the past, but also to shape the design of what comes next.
Integrating systems, anticipating risk, embedding accountability into design – these aren’t separate, unrelated tasks. They’re different dimensions of a single transition: from compliance as a function to trust as an operating principle.
That transition starts with how we handle what we learn. Every investigation, audit, and user report reveals something about how a system behaves under pressure. When that information is used to inform design and policy rather than just ticking a box, incidents lead to insights that shape how systems work.
Addressing Cost and Resistance
But here’s what I haven’t said yet, and what everyone in this room is thinking: this shift isn’t free and it isn’t easy. Building systemic accountability requires resources, creates friction, and slows some things down.
When OSHA was created, industry groups predicted it would cripple American manufacturing. They argued that compliance costs would make companies uncompetitive and that federal overreach would stifle innovation.
They were wrong about the ultimate outcome, but they weren’t wrong about the immediate cost. Safety did require investment. It did change how products got made. Some companies struggled. Some failed. But the industry that emerged was stronger, more sustainable, and more trustworthy. The ones that survived weren’t the ones that fought hardest against accountability; instead, they were the ones that learned to integrate it fastest.
Digital platforms face a similar moment. Building the infrastructure for systemic accountability isn’t cheap. Integrating Trust & Safety with product development adds complexity. Documenting risk assessments takes time. But the alternative – operating without that infrastructure – is becoming both legally untenable and strategically unsustainable.
The platforms that will lead five years from now aren’t the ones resisting this shift. They’re the ones figuring out how to make accountability a competitive advantage.
If We Were Starting from Scratch
If we were starting fresh today, knowing everything we know now, what would we build?
We’d build platforms where every algorithmic change – every new ranking signal, every content recommendation tweak – would have a documented impact assessment before deployment, not after a crisis. Where enforcement actions automatically generate signals that feed back into the systems that created the risk in the first place. Where product teams have real-time visibility into how their features are being exploited, and policy teams can see immediately whether their rules are having the intended effect.
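To make that feedback loop concrete, here’s a minimal sketch, assuming a hypothetical event bus and handler names; the only point is that an enforcement decision becomes a signal other systems consume, rather than a record that sits in one team’s database.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical illustration only: an enforcement action is published as an
# event, and any downstream system (ranking, compliance reporting) can
# subscribe to it instead of learning about it months later.

@dataclass
class EnforcementEvent:
    content_id: str
    policy: str      # e.g. "harassment"
    action: str      # e.g. "remove"
    features: dict   # characteristics shared by similar content

class SignalBus:
    """A toy event bus connecting enforcement to downstream systems."""
    def __init__(self) -> None:
        self._subscribers: List[Callable[[EnforcementEvent], None]] = []

    def subscribe(self, handler: Callable[[EnforcementEvent], None]) -> None:
        self._subscribers.append(handler)

    def publish(self, event: EnforcementEvent) -> None:
        for handler in self._subscribers:
            handler(event)

def demote_similar_content(event: EnforcementEvent) -> None:
    # In a real system this would update ranking features or retrain a model;
    # here it just shows the signal reaching the system that created the risk.
    print(f"Ranking: demoting content similar to {event.content_id} ({event.policy})")

def log_for_compliance(event: EnforcementEvent) -> None:
    print(f"Compliance: recorded {event.action} of {event.content_id} under {event.policy}")

bus = SignalBus()
bus.subscribe(demote_similar_content)
bus.subscribe(log_for_compliance)
bus.publish(EnforcementEvent("video-123", "harassment", "remove", {"audio_hash": "…"}))
```

A real platform would do this with streaming infrastructure and model updates, not print statements, but the architectural idea is the same: enforcement and ranking share a nervous system.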
We’d design organizations where Compliance isn’t just the people who write the audit report six months after launch, but also the people in the room when architecture decisions get made. Where “Can we demonstrate this is safe?” isn’t a question you ask at the end of the development cycle, but a gate you pass through before you build.
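And here’s a sketch of what that gate could look like in practice, again with hypothetical names and a deliberately simplified check: an algorithmic change can’t ship unless a documented, reviewed impact assessment exists.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical sketch of a pre-deployment safety gate - not any real pipeline.

@dataclass
class ImpactAssessment:
    change_id: str
    risks_identified: List[str]
    mitigations: List[str]
    reviewed_by: Optional[str] = None

def safety_gate(assessment: Optional[ImpactAssessment]) -> bool:
    """Answer 'Can we demonstrate this is safe?' before anything ships."""
    if assessment is None:
        print("Blocked: no impact assessment on file.")
        return False
    if not assessment.reviewed_by:
        print("Blocked: assessment exists but has not been reviewed.")
        return False
    if assessment.risks_identified and not assessment.mitigations:
        print("Blocked: risks identified with no documented mitigations.")
        return False
    print(f"Cleared to ship: {assessment.change_id}")
    return True

safety_gate(None)  # an undocumented change never reaches production
safety_gate(ImpactAssessment(
    change_id="ranking-v42",
    risks_identified=["may amplify borderline content"],
    mitigations=["cap distribution pending measurement"],
    reviewed_by="policy-review",
))
```

In the same way that a failing test blocks a merge, a missing assessment blocks a launch – the question gets asked because the system won’t proceed without an answer.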
We aren’t lacking insight into where things go wrong. Ask any Trust & Safety team, and they can tell you exactly where the gaps are. The gap isn’t visibility – it’s priority. It’s whether those insights reach the people making resource decisions, architecture choices, and product roadmaps.
If we can turn that awareness into shared responsibility, we can address risks before they become crises and show, in practice, what it means to design for trust.
That’s what trust by design looks like: not perfect, but provable. Not assumed, but engineered.
The Human Core
At its center, all of this – accountability, compliance, regulations, safety – has never really been about “frameworks” or “systems.” It’s been about people.
People say OSHA regulations are written in blood — that every regulation exists because someone, somewhere, got hurt. Many of our digital safeguards were born the same way. We have the opportunity to shift our approach – to build not just safeguards designed to recover from harm, but also safeguards that anticipate the need to protect. Making that shift requires applying the same rigor to anticipating and mitigating impact as we do to driving progress, so that innovation remains sustainable.
That is the next step in our evolution. Not waiting for the next crisis to force change, but building systems that prevent it from happening. Not explaining after the fact what went wrong, but demonstrating in advance what’s been made right.
Proving, through evidence, design, and practice, that safety and trust can be engineered as intentionally as anything else we build.
That’s not the future of platform accountability; it’s the present. The only question is how quickly we move.
Thank you so much.
