With the EU AI Act incoming this summer, the bloc lays out its plan for AI governance

The European Union has taken the wraps off the structure of the new AI Office, the ecosystem-building and oversight body being established under the bloc’s AI Act. The risk-based regulatory framework for artificial intelligence is expected to enter into force before the end of July — following the regulation’s final approval by EU lawmakers last week. The AI Office’s structure takes effect on June 16.

The AI Office reflects the bloc’s bigger ambitions in AI. It will be key in shaping the European AI ecosystem over the coming years, with a dual remit: helping to regulate AI risks and fostering uptake and innovation. But the bloc also hopes the AI Office can exert wider influence on the global stage, as many countries and jurisdictions look to understand how to approach AI governance. In all, it will be made up of five units.

Here’s a breakdown of what each of the five units of the EU’s AI Office will focus on:

One unit will tackle “regulation and compliance”, including liaising with EU Member States to support harmonized application and enforcement of the AI Act. “The unit will contribute to investigations and possible infringements, administering sanctions,” per the Commission, which intends the Office to play a supporting role to the EU country-level governance bodies the law will also establish to enforce the broad sweep of the regime.

Another unit will deal with “AI Safety”. The Commission said this will focus on “the identification of systemic risks of very capable general-purpose models, possible mitigation measures as well as evaluation and testing approaches” — with general-purpose AI models (GPAIs) referring to the recent wave of generative AI technologies, such as the foundation models that underpin tools like ChatGPT. The EU said the unit will be most concerned with GPAIs posing so-called “systemic risk” — which the law presumes for models trained using compute above a set threshold (10^25 floating point operations).
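For a rough sense of where that line falls, a widely used back-of-the-envelope heuristic estimates training compute as roughly six floating point operations per parameter per training token. Here is a minimal sketch of the check — the heuristic and the function names are our own illustration; only the 10^25 FLOPs threshold comes from the Act:

```python
# Sketch: is a model presumed to carry "systemic risk" under the AI Act?
# The 1e25 FLOPs threshold is from the Act; the 6 * params * tokens estimate
# is a common rule of thumb for dense transformer training, not part of the law.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Back-of-the-envelope training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    return estimate_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 70B-parameter model trained on 15 trillion tokens
# lands at ~6.3e24 FLOPs -- just under the threshold.
print(presumed_systemic_risk(70e9, 15e12))  # False
```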

The AI Office will have responsibility for directly enforcing the AI Act’s rules for GPAIs — so relevant units are expected to conduct testing and evaluation of GPAIs, as well as using powers to request information from AI giants to enable the oversight.

The AI Office’s compliance unit’s work will also include producing templates GPAIs will be expected to use, such as for summarizing any copyrighted material used to train their models.

While having a dedicated AI Safety unit seems necessary to give full effect to the law’s rules for GPAIs, it also looks intended to respond to international developments in AI governance since the EU’s law was drafted — such as the UK and US announcing their own respective AI Safety Institutes last fall. The big difference, though, is the EU’s AI Safety unit is armed with legal powers.

A third unit of the AI Office will dedicate itself to what the Commission dubs “Excellence in AI and Robotics”, including supporting and funding AI R&D. The Commission said this unit will coordinate with its previously announced “GenAI4EU” initiative, which aims to stimulate the development and uptake of generative AI models — including by upgrading Europe’s network of supercomputers to support model training.

A fourth unit is focused on “AI for Societal Good”. The Commission said this will “design and implement” the Office’s international engagement on big projects where AI could have a positive societal impact — in areas such as weather modelling, cancer diagnoses and digital twins for artistic reconstruction.

Back in April, the EU announced that a planned AI collaboration with the US, on AI safety and risk research, would also include a focus on joint working on uses of AI for the public good. So this component of the AI Office was already sketched out.

Finally, a fifth unit will tackle “AI Innovation and Policy Coordination”. The Commission said its role will be to ensure the execution of the bloc’s AI strategy — including “monitoring trends and investment, stimulating the uptake of AI through a network of European Digital Innovation Hubs and the establishment of AI Factories, and fostering an innovative ecosystem by supporting regulatory sandboxes and real-world testing”.

Having three of the five units of the EU AI Office working — broadly speaking — on AI uptake, investment and ecosystem building, while just two are concerned with regulatory compliance and safety, looks intended to offer further reassurance to industry that the EU’s speed in producing a rulebook for AI is not anti-innovation, as some homegrown AI developers have complained. The bloc also argues trustworthiness will foster adoption of AI.

The Commission has already appointed the heads of several of the AI Office units — and the overall head of the Office itself — but the AI Safety unit’s chief has yet to be named. A lead scientific advisor role is also vacant. Confirmed appointments are: Lucilla Sioli, head of the AI Office; Kilian Gross, head of the Regulation & Compliance unit; Cecile Huet, head of the Excellence in AI and Robotics unit; Martin Bailey, head of the AI for Societal Good unit; and Malgorzata Nikowska, head of the AI Innovation and Policy Coordination unit.

The AI Office was established by a Commission decision back in January and started preparatory work — such as deciding the structure — in late February. It sits within the EU’s digital department, DG Connect, which is (currently) headed by internal market commissioner Thierry Breton.

The AI Office will eventually have a headcount of more than 140 people, including technical staff, lawyers, political scientists and economists. On Wednesday the EU said some 60 staff have been put in place so far. It plans to ramp up hiring over the next couple of years as the law is implemented and becomes fully operational. The AI Act takes a phased approach to its rules, with some provisions set to apply six months after the law comes into force, while others get a longer lead-in of a year or more.

One key upcoming role for the AI Office will be in drawing up Codes of Practice and best practices for AI developers — which the EU wants to play a stop-gap role while the legal rulebook is phased in.

A Commission official said the Code is expected to launch soon, once the AI Act enters into force later this summer.

Other work for the AI Office includes liaising with a range of other fora and expert bodies the AI Act will establish to knit together the EU’s governance and ecosystem-building approach, including the European Artificial Intelligence Board, a body which will be made up of representatives from Member States; a scientific panel of independent experts; and a broader advisory forum comprised of stakeholders including industry, startups and SMEs, academia, think tanks and civil society.

“The first meeting of the AI Board should take place by the end of June,” the Commission noted in a press release, adding: “The AI Office is preparing guidelines on the AI system definition and on the prohibitions, both due six months after the entry into force of the AI Act. The Office is also getting ready to coordinate the drawing up of codes of practice for the obligations for general-purpose AI models, due 9 months after entry into force.”

This report was updated with the names of confirmed appointments after the Commission provided the information.

Lawmakers revise Kids Online Safety Act to address LGBTQ advocates' concerns

The Kids Online Safety Act (KOSA) is getting closer to becoming law, which would make social platforms significantly more responsible for protecting children who use their products. With 62 senators backing the bill, KOSA seems poised to clear the Senate and progress to the House.

KOSA creates a duty of care for social media platforms to limit addictive or harmful features that have demonstrably affected the mental health of children. The bill also requires platforms to develop more robust parental controls.

But under a previous version of KOSA, LGBTQ advocates pushed back on a part of the bill that would give individual state attorneys general the ability to decide what content is inappropriate for children. This rings alarm bells in a time when LGBTQ rights are being attacked on the state level, and books with LGBTQ characters and themes are being censored in public schools. Senator Marsha Blackburn (R-TN), who introduced the bill with Senator Richard Blumenthal (D-CT), said that a top priority for conservatives should be to “protect minor children from the transgender [sic] in this culture,” including on social media.

Jamie Susskind, Senator Blackburn’s legislative director, said in a statement, “KOSA will not — nor was it designed to — target or censor any individual or community.”

After multiple amendments, the new draft of KOSA has assuaged some concerns from LGBTQ rights groups like GLAAD, the Human Rights Campaign and The Trevor Project; for one, the FTC will instead be responsible for nationwide enforcement of KOSA, rather than state-specific enforcement by attorneys general.

A letter to Senator Blumenthal from seven LGBTQ rights organizations said: “The considerable changes that you have proposed to KOSA in the draft released on February 15, 2024, significantly mitigate the risk of it being misused to suppress LGBTQ+ resources or stifle young people’s access to online communities. As such, if this draft of the bill moves forward, our organizations will not oppose its passage.”

Other privacy-minded activist groups like the Electronic Frontier Foundation (EFF) and Fight for the Future are still skeptical of the bill, even after the changes.

In a statement shared with TechCrunch, Fight for the Future said that these changes are promising, but don’t go far enough.

“As we have said for months, the fundamental problem with KOSA is that its duty of care covers content specific aspects of content recommendation systems, and the new changes fail to address that. In fact, personalized recommendation systems are explicitly listed under the definition of a design feature covered by the duty of care,” Fight for the Future said. “This means that a future Federal Trade Commission (FTC) could still use KOSA to pressure platforms into automated filtering of important but controversial topics like LGBTQ issues and abortion, by claiming that algorithmically recommending that content ’causes’ mental health outcomes that are covered by the duty of care like anxiety and depression.”

The Blumenthal and Blackburn offices said that the duty of care changes were made to regulate the business model and practices of social media companies, rather than the content that is posted on them.

KOSA was also amended last year to address earlier concerns about age-verification requirements for users of all ages that could endanger privacy and security. Jason Kelley, the EFF’s activism director, is concerned that these amendments aren’t enough to ward off dangerous interpretations of the bill.

“Despite these latest amendments, KOSA remains a dangerous and unconstitutional censorship bill which we continue to oppose,” Kelley said in a statement to TechCrunch. “It would still let federal and state officials decide what information can be shared online and how everyone can access lawful speech. It would still require an enormous number of websites, apps, and online platforms to filter and block legal, and important, speech. It would almost certainly still result in age verification requirements.”

The issue of children’s online safety has stayed at the forefront of lawmakers’ minds, especially after five Big Tech CEOs testified before the Senate a few weeks ago. With increasing support for KOSA, Blumenthal’s office told TechCrunch that it is intent on fast-tracking the bill.

Update, 2/16/24, 12:30 PM ET with statement from Jamie Susskind.

Europe's Digital Services Act applies in full from tomorrow — here's what you need to know

The European Union’s rebooted e-commerce rules start to apply in full from tomorrow — setting new legal obligations on the likely thousands of platforms and digital businesses that fall in scope.

The Digital Services Act (DSA) is a massive endeavour by the EU to set an online governance framework for platforms and use transparency obligations as a tool to squeeze illegal content and products off the regional internet.

The basic idea is that if something is illegal to say or sell in a particular Member State, it should not be possible to work around the law by taking to the internet. So online marketplaces operating in Europe should not let users buy and sell guns, for example, if the purchase of weapons is banned in the relevant EU market; nor should social media sites allow hate speech to stay up if a country has laws in place that prohibit it.

Protection of minors is another key focus — with the regulation stipulating in-scope platforms and services must ensure “a high level of privacy, safety, and security” for kids, and banning use of their data for targeted ads.

The bloc can’t put an exact number on how many companies are in the frame, not least as new digital platforms are being spawned all the time, but says it expects at least a thousand to be subject to the rules.

Platforms, marketplaces and other in-scope digital services providers that fail to comply with the DSA are risking tough penalties — of up to 6% of global annual turnover for confirmed breaches.
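For a rough sense of what that ceiling means, the penalty scales with company size rather than being a fixed sum. A minimal sketch of the arithmetic (the turnover figure below is hypothetical):

```python
def max_dsa_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound on a DSA penalty: 6% of global annual turnover."""
    return 0.06 * global_annual_turnover_eur

# Hypothetical example: a company turning over EUR 100B a year
# faces a maximum fine of EUR 6B for a confirmed breach.
print(f"EUR {max_dsa_fine(100e9):,.0f}")  # EUR 6,000,000,000
```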

As well as applying content moderation rules to platforms and know-your-customer requirements to marketplaces, the regulation applies some obligations to hosting services and other online intermediaries (such as ISPs, domain name registrars and network infrastructure providers).

Smaller platforms, such as early stage startups yet to grab much scale — defined as “micro” or “small” enterprises employing fewer than 50 staff and with an annual turnover below €10 million — are exempt from the bulk of provisions. But they will still have to make sure they set clear and concise T&Cs and provide a contact point for authorities. (Fast-scaling startups that outstrip the micro/small criteria won’t immediately face having all the general rules apply; they will get a “targeted exemption” from some DSA provisions over a transitional 12-month period, per the Commission.)

In-scope companies have had well over a year to get their compliance plans in order — the text of the law was published back in October 2022. But plenty of detail remains to be filled in as DSA oversight bodies spin up and start to produce guidance, which means many businesses are still likely to be trying to figure out exactly how the rules apply to them.

More rules for Big Tech too

Major tech platforms and marketplaces face the strictest level of DSA regulation. They have already passed one compliance deadline: A subset of DSA rules, focused on algorithmic transparency and systemic risk mitigation, has applied to very large online platforms and search engines (aka VLOPs and VLOSEs) since late August. Last December, the Commission also opened its first formal investigation of a VLOP, targeting Elon Musk-owned X (formerly Twitter), over a string of suspected breaches.

But even for larger platforms there are more rules incoming tomorrow: From Saturday, the almost two dozen tech giants that, like X, have been designated as subject to the rules for VLOPs and VLOSEs are expected to comply with the DSA’s general obligations, too. So if Musk was already doing DSA compliance badly, he’s got a bunch more demands to worry about come the weekend.

This includes areas like providing content reporting tools for users and giving people the ability to challenge content moderation decisions; cooperating with so-called “trusted flaggers” (third parties authorized to make reports to platforms); producing transparency reports; and applying business traceability requirements (aka know-your-customer rules), to name a few.

On moderation, for instance, platforms must provide a “statement of reasons” to users every time they make a content moderation decision that affects them (such as removing or demoting content).

The EU is collecting these statements in a database — so far only for larger platforms already subject to VLOP rules — and says it has amassed more than 4 billion statements to date. As smaller platforms’ statements go into the database the Commission expects to get a complete overview of content moderation practices, building on the “very interesting overview” of larger platforms’ decision-making it says the DSA has already delivered.
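For illustration, a statement of reasons is essentially a structured record of a single moderation decision. A minimal sketch of the kind of information such a record captures — the field names here are our own invention, not the Transparency Database’s actual schema:

```python
# Illustrative only: a simplified shape for a DSA "statement of reasons".
# Field names are hypothetical; the real Transparency Database defines its own schema.
statement_of_reasons = {
    "platform": "example-platform",
    "decision_date": "2024-02-17",
    "action_taken": "content_demoted",       # e.g. removal, demotion, suspension
    "content_type": "text",
    "ground": "terms_of_service",            # decision based on law or platform rules
    "explanation": "Post demoted under the platform's spam policy.",
    "automated_detection": True,             # was the content flagged automatically?
    "automated_decision": False,             # was the decision itself automated?
    "redress": ["internal_appeal", "out_of_court_dispute_settlement"],
}
```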

Other requirements of the general rules for platforms include having to provide information about ads they run and any algorithmic recommender systems they operate.

As noted above, the DSA specifically bans children’s data from being used for advertising — so there’s a requirement to ensure minors’ information is not sucked into existing ad targeting systems. Although exactly how platforms will be able to determine whether a user is a minor without also running into privacy pitfalls — such as if they were to force age verification tech on all their users — is, the Commission admits, a complex area.

So while, from tomorrow, all platforms will have an obligation to provide “effective protection measures for minors”, as a Commission official put it in a background briefing with journalists today, they noted there are ongoing discussions between DSA enforcers aimed at determining which technologies might be “acceptable solutions” in this context — leaving platforms in limbo over how exactly to comply in the meantime.

“The problem is difficult to solve,” the official admitted. “We are fully aware of the impact that [age verification] can have on privacy and we would not accept any measure for age verification… So my short answer is it’s complicated. But the long answer is that we are discussing together with Member States and with the Digital Services Coordinators, in the context of a taskforce that we have put in place already, to find which ones would be the acceptable solutions.”

Digital Services Coordinators

Zooming out again, monitoring tech giants’ compliance with the general DSA rules falls not to the Commission — which is the sole enforcer of obligations specific to VLOPs/VLOSEs (and plenty busy enough as a result) — but to EU Member State-level enforcers: so-called Digital Services Coordinators (DSCs). Thus, with the DSA coming into full application, a whole new layer of digital oversight is being slotted into place to regulate online activity around the region.

Here the bloc’s lawmakers maintained a “country of origin” principle, which also applied in the EU’s earlier e-commerce regime, so this tranche of DSA oversight on tech giants will come from authorities located in countries where the platforms are established.

For example, in the case of X, Ireland’s media regulator, Coimisiún na Meán, is likely to be the competent authority overseeing its compliance with the general DSA rules. Ditto for Apple, Meta and TikTok, which also locate their European HQs in Ireland. Whereas Amazon’s compliance with the general DSA rules will probably be monitored by Luxembourg’s competition authority, the Autorité de la concurrence, on account of its pick of regional base.

Platforms without a regional establishment that haven’t appointed a local legal representative face enforcement by the competent bodies of any Member State — which could request information from them and/or take enforcement action over compliance issues under the general rules.

Such platforms are therefore (potentially) exposing themselves to greater regulatory risk. (Albeit, this assumes Europe-based authorities can actually enforce the law on foreign entities that refuse to play by the rules — and here the difficulties EU data protection authorities have had trying to make Clearview AI abide by the GDPR look instructive.)

Smaller EU-located platforms and startups, meanwhile, are likely to face general DSA oversight by the DSC appointed in their home market. So — for example — France’s BeReal, a popular photo sharing platform, will likely have its DSA compliance overseen by ARCOM, the comms and audiovisual regulator the country looks set to name as its DSC.

Confirmed DSCs so far are a mixture of existing regulatory agencies, including telecoms, media, consumer and competition regulators. Member States are also allowed to name more than one body to ensure adequate expertise underpins their oversight.

The EU has provided a webpage for finding the DSC that each Member State has appointed — although, at the time of writing, not all appointments have been made, so there are still some gaps.

As their name (“coordinators”) suggests, DSCs will be doing plenty of joint working to ensure they are tapping relevant expertise to carry out effective oversight of the broad range of in-scope platforms and businesses. They are also envisaged as playing a supporting role in the Commission’s enforcement against larger platforms’ systemic risk, although enforcement decisions on VLOPs/VLOSEs remain with the Commission.

Additionally, the regulation establishes a new body — the “European Board for Digital Services” — where DSCs will meet regularly to share information and coordinate. The Board will, for instance, be responsible for producing advice and guidance for applying the law.

A handful of Board meetings have already taken place, per the Commission, which says some early workstreams aimed at setting best practices cover areas including provisions around data access for researchers; how to award trusted flagger status and select out of court dispute settlement bodies; and coordinating the handling of user complaints.

Again, ahead of best practice consensus being reached, and compliance guidance produced (and, in some cases, a confirmed appointment of a DSC), regulated platforms and services will have to figure out a way forward on their own.

DSCs are also intended to be contact points for citizens wanting to make DSA-related complaints. (And if a complaint from a citizen is about a platform a particular authority doesn’t oversee they will be responsible for sending it to the relevant competent body that does.)

EU consumers won’t only have to rely on regulatory action on their complaints, though. They will also be able to turn to collective redress litigation if a company fails to respect their rights under the Act. So non-compliant platforms face the risk of being sued too. 

Those DSCs already appointed in time for Saturday’s deadline could choose to start an investigation or request information from platforms they oversee starting from tomorrow, a Commission official confirmed. But it remains to be seen how fast out of the blocks these new digital enforcers will be.

Judging by how other EU digital rules have been implemented in recent years, it seems likely platforms will be given some grace to get up to speed, and time allowed for the regime to bed in, including as enforcers get their own feet fully under the table. Although, given this is decentralized enforcement, some Member State authorities may be more eager to get going than others and we could see DSA interventions happening at different speeds around the region.

DSCs are empowered to issue fines of up to 6% of global annual turnover for breaches of the regulation, which is the same level of penalty the Commission wields on VLOPs/VLOSEs if they violate the extra obligations applied to larger platforms and search engines. So — on paper — there’s a lot of new regulatory risk in Europe arriving from Saturday.

The full application of the regime also means VLOPs like X could face separate fines from the Commission and a DSC — i.e. if their compliance fails both sets of obligations. (But whether another layer of regulatory risk in the EU will finally concentrate Musk’s mind on compliance remains to be seen.)

One thing is clear: The DSA steps up the complexity for platforms operating in the region, applying a whole bundle of new obligations and unfurling another network of enforcers — on top of the growing sprawl of existing laws that may also apply to digital businesses, such as the General Data Protection Regulation, ePrivacy Directive, Data Act and the incoming AI Act (to name a few).

Selling advice on how all these rules apply and intersect (or even collide) will certainly keep regional lawyers and consultants busy for years.

Changes and challenges

In one early sign of potentially interesting times ahead, Ireland’s Coimisiún na Meán has recently been consulting on rules for video sharing platforms that could force them to switch off profiling-based content feeds by default in that local market.

In that case the policy proposal was made under EU audiovisual rules, not the DSA, but given how many major platforms are located in Ireland, the Coimisiún na Meán, as a DSC, could spin up some interesting regulatory experiments if it takes a similar approach when applying the DSA to the likes of Meta, TikTok, X and other tech giants.

Another interesting question is how the DSA might be applied to fast-scaling generative AI tools.

The viral rise of AI chatbots like OpenAI’s ChatGPT occurred after EU lawmakers had drafted and agreed the DSA. But the regulation was intended to be future-proof — able to apply to new types of platforms and services as they arise.

Asked about this, a Commission official said they have identified two different situations vis-à-vis generative AI tools: One where a VLOP is embedding this type of AI into an in-scope platform (such as baking it into a search engine or recommender system) — where they said the DSA does already apply. “We are discussing with them to check compliance with the DSA,” the official noted on that.

The second scenario relates to “standalone” AI tools that are not embedded into platforms already identified as in-scope of the regulation. In this instance the official told TechCrunch the legal question for DSA enforcers will be whether the AI tech is a platform or a search engine, as the regulation defines it.

“A lawyer will go into the definition and check whether it is used as a search engine, or it is, technically speaking, hosting content and putting it at the request of the recipient of the service and disseminating to the public. If the definition is met, you tick the box and the DSA applies,” they said. “It is as simple as that.”

Although it’s less clear how quickly that process of determination might happen — and it would presumably depend on the DSC in question.

Per the Commission, standalone AI tools that meet the DSA definition of a platform or search engine and also pass the threshold of 45 million monthly users could — in the future — also be designated as VLOPs/VLOSEs. In that scenario the regulation’s extra algorithmic transparency and systemic risk rules would apply, and the Commission would be responsible for oversight and enforcement. Although the official noted the final wording of the incoming AI Act will also be relevant in establishing the respective bounds here — that is, whether the AI Act and DSA would (or wouldn’t) apply in parallel to such tools.
