Researcher access under the Online Safety Act: getting transparency right
Amendments to the Online Safety Act 2023, made by the Data (Use and Access) Act 2025, allow the Government to require services to share data on the potentially very broad category of “online safety matters” with researchers. The idea is that this will enable independent scrutiny of the risks associated with different online services and of the efficacy of measures to mitigate those risks. It now falls to officials in the Department for Science, Innovation and Technology to turn those provisions into practical proposals, and they are expected to consult later this year.
There are similar provisions in the EU Digital Services Act (DSA), and the fraught process of implementing them should serve as a cautionary tale for UK policymakers. CCIA Europe explored some of the challenges the EU was facing in a blog post at the end of last year, as well as in feedback shared with the European Commission while it was preparing its Delegated Regulation on Data Access under the DSA.
In principle, these requirements could be valuable if they lead to proportionate requests to share data with appropriately vetted researchers, but that depends on getting the scope and the safeguards right. The mechanism for this sort of regime needs to be designed carefully.
The most important concern is the requirement for companies to hand over data to third parties. Individuals’ privacy and security depend on establishing appropriate limits on what data is shared, who it is shared with, and what steps are taken to ensure that individuals cannot be identified. Even if the companies sharing the data pseudonymise it properly, they may not know what other datasets researchers hold that could be used to reverse that process.
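To make that risk concrete, here is a minimal illustrative sketch, using entirely hypothetical data and the pandas library, of how a so-called “linkage attack” works: a dataset whose user identifiers have been replaced with pseudonyms can still be matched against an auxiliary dataset the researcher already holds, using shared attributes such as postcode and age band.

```python
# Illustrative sketch only: hypothetical data showing how a "pseudonymised"
# dataset can be re-identified by joining it with an auxiliary dataset on
# quasi-identifiers (postcode and age band here).
import pandas as pd

# Dataset shared with researchers: user IDs replaced with pseudonyms,
# but quasi-identifiers retained for analysis.
shared = pd.DataFrame({
    "pseudonym": ["a91f", "b27c", "c44d"],
    "postcode":  ["SW1A", "M1", "EH1"],
    "age_band":  ["18-24", "25-34", "18-24"],
    "flagged_content_views": [12, 3, 41],
})

# Auxiliary dataset the researcher already holds (e.g. from a public source).
auxiliary = pd.DataFrame({
    "name":     ["Alice", "Bob", "Carol"],
    "postcode": ["SW1A", "M1", "EH1"],
    "age_band": ["18-24", "25-34", "18-24"],
})

# A simple join on the quasi-identifiers links pseudonyms back to names,
# reversing the pseudonymisation without ever seeing the original user IDs.
reidentified = shared.merge(auxiliary, on=["postcode", "age_band"])
print(reidentified[["name", "pseudonym", "flagged_content_views"]])
```

The company sharing the data has no way of knowing whether a researcher holds an auxiliary dataset like this, which is why limits on what is shared matter as much as limits on who receives it.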
There is no need to rush the scope of this new provision. Why not wait until we can learn from the EU’s experience with the DSA, recreating its successes and avoiding its mistakes? The Government can limit much of the risk by starting with the sharing of defined, aggregated datasets rather than broad, open-ended requirements. Companies could then respond quickly without undue risk to user privacy.
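As a rough illustration of the difference, again with hypothetical data, sharing a defined, aggregated dataset means releasing counts or totals per category rather than individual-level records:

```python
# Minimal sketch, assuming a hypothetical row-level dataset: aggregating to
# per-category counts before sharing removes individual records while still
# answering a defined research question.
import pandas as pd

row_level = pd.DataFrame({
    "user_pseudonym": ["a91f", "b27c", "c44d", "d05e"],
    "age_band":       ["13-17", "13-17", "18-24", "18-24"],
    "reports_filed":  [2, 0, 1, 5],
})

# A defined, aggregated dataset: no individual rows leave the company.
aggregated = (
    row_level
    .groupby("age_band", as_index=False)
    .agg(users=("user_pseudonym", "count"),
         total_reports=("reports_filed", "sum"))
)
print(aggregated)
```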
The Government can also put in place suitable safeguards. These should include vetting of who is requesting the data. Without appropriate controls, it is easy to end up in absurd situations where companies are pushed to do what is manifestly not in the interests of their users’ privacy. An example from competition regulation: the Yandex browser, operated by a Russian company and subject to Russian data localisation laws, was listed as a potential default choice under the Commission’s initial guidance for the Digital Markets Act (DMA) for several months. There is no national boundary on where researchers can request access to sensitive online safety data from, so that kind of problem could easily be recreated here. It would be easy to end up in a situation where untrustworthy or outright malicious actors abroad (even hostile states) seek to get past the vetting process and weaponise access.
Companies should also have a reasonable opportunity to ask what the data will be used for and to reject requests on specific grounds, such as that they are likely to infringe trade secrets or are simply irrelevant to online safety.
The same goes for how the data is used. If companies are expected to ‘fire and forget’, it will first make it harder to catch problems, for example where researchers use data in a way that might infringe personal privacy. It will also simply lead to mistakes: researchers concluding they have found something because they have misunderstood internal data that was never intended for these purposes. It is in everyone’s interests that these mistakes are caught before they hurt the reputation of the researchers involved or undermine trust in important services and regulatory frameworks. Companies should have a formal mechanism for ensuring that their data is being used responsibly and not in ways that are, intentionally or not, misleading.
If researchers are able to submit bespoke requests, the regime could become a kind of blank cheque. Careless actors could submit requests in large volumes at little cost to themselves, imposing exorbitant costs on the companies that have to respond with extensive efforts to assemble and prepare the data. British users will end up paying the price if an impractical burden on services here makes it harder to bring new features to the UK market.
All of this will be worse if there is the kind of unrealistic timeframe proposed in the EU, where companies are expected to respond to what might be challenging requests within 15 days. Mistakes are much more likely if companies have to respond in a rush. There is no reason to repeat that here: the UK has the flexibility to set a more realistic timeline.
It is important to remember another difference from the EU DSA. The scope of the Ofcom regime is, by design, much greater, reflecting a view that smaller firms are often the source of significant risks (for example, the porn sites that have been the subject of enforcement action so far). That is not unreasonable in itself, but the risk is that large costs fall on smaller and lower-risk services that cannot justify the cost of complying with the regime. We have already seen that to some extent, and researcher access could worsen the problem. This requirement should be limited to the areas where the risks that justify researcher access are greatest.
All of this can be addressed in the consultation later this year, but the sooner the Government starts to set reasonable bounds on these requirements, the better. There is an opportunity to build a regime that genuinely improves our understanding of online safety; there is also a risk of creating an expensive mess. If the Government gives this regime the time to grow and puts in place the right controls, we have a much better shot at the right outcome.