Hands of diverse group of people putting together. Concept of teamwork, cooperation, unity, togetherness, partnership, agreement, social community or movement. Flat style. Vector illustration.

To benefit all, diverse voices must take part in leading the growth and regulation of AI

Image Credits: Intpro / Getty Images

Jorge Calderon

Contributor

Jorge Calderon is managing director at San Francisco–based Inicio Ventures, an initiative of Hispanics in Philanthropy.

Over the last 25 years, I’ve been a tech investor, founder, organizer, strategist and academic. I’m proud to be part of a growing group of diverse leaders shaping an innovation system that represents and benefits us all. But in recent months, I’ve become increasingly troubled by the absence of Latinx/e founders and leaders in today’s critically important conversations about AI’s growth and regulation.

As AI’s presence in our lives increases, so does the number of diverse founders leveraging it to develop positive, socially impactful services and products. Because their unique life experiences inform these founders’ ingenuity, their startups often address critical social needs. When diverse founders succeed, society benefits.

Yet their voices and perspectives remain largely absent from policy discussions and decisions that will shape the future of AI and its influence on our society.

Unfortunately, such exclusion is part of a broader pattern within the startup and venture ecosystem. People of Latinx/e heritage account for more than 20% of the U.S. population; they’ve founded half of all new businesses over the last decade (19% of them tech-related) and contribute $3.2 trillion annually to the nation’s economy. As a group, they represent the fifth-largest economy in the world.

Yet, despite their entrepreneurial talent and determination, Latinx/e founders remain overlooked and undervalued, receiving less than 2% of startup investment funding. Even when they receive it, it’s typically just a fraction of what’s awarded to their non-Hispanic counterparts.

While historically underestimated, Latinx/e Americans are persevering and preparing to be a significant force in the U.S.’ future. Latinx/e college enrollment has more than doubled since 2000, and enrollment in science and engineering programs has grown by 65% over the last 10 years.

Guillermo Diaz Jr., former CIO of Cisco, called today’s intersection of AI and tech with surging Latinx/e education, economic power, and employment “a light-speed moment,” noting that an increase in Latinx/e technology leadership means a far more prosperous U.S.A.

When it comes to AI regulation, I understand and share some commonly voiced concerns and appreciate the recent clamor for quick regulation. But I don’t understand Latinx/e and diverse groups’ exclusion from the regulatory conversation.

Last year, the Biden administration discussed AI regulations with leaders from companies like OpenAI, Google, Amazon, Meta, and Microsoft, along with a handful of academics and advocates. But this group was too narrow. Underrepresented communities and our allies generally have a nuanced outlook on AI.

On one hand, we are rightly concerned that AI technologies could perpetuate bias and discrimination. On the other, we are eager to ensure that diverse communities, founders, consumers and all Americans can benefit from AI’s many positive potential implementations. Regulations made without broad, nuanced perspectives could diminish AI’s benefits to diverse communities, leading to worse social and economic outcomes for everyone.

Discussions about AI’s growth and regulation are fundamentally discussions about the future of society, and diverse groups will play a key role in that future. Before regulators finalize any significant policy changes, they should engage diverse, visionary startup founders and leaders in two parallel tasks: developing an appropriate regulatory framework for AI and creating the conditions for diverse founders to have a say and play a meaningful role in the technology’s evolution.

In addition to creating thoughtful guardrails, policymakers should also explore incentives such as tax credits, STEM education grants, and training and recruitment programs that create pathways for diverse groups’ increased representation, contributions, and success within the growing AI sector.

Like any transformative technology, advanced AI has risks and incredible positive potential for all. That means lawmakers need all of us to provide input to AI-related policies. It is imperative that they include diverse startup founders and leaders as they consider the AI incentives and regulations that will shape our collective future.

dripping Facebook Meta logo

Adtech giants like Meta must give EU users real privacy choice, says EDPB

Image Credits: Bryce Durbin / TechCrunch

The European Data Protection Board (EDPB) has published new guidance that has major implications for adtech giants like Meta and other large platforms.

Since November 2023, the owner of Facebook and Instagram has forced users in the European Union to agree to being tracked and profiled for its ad targeting business, or else pay it a monthly subscription to access ad-free versions of the services. However, a market leader imposing that kind of binary “consent or pay” choice does not look viable according to the EDPB, an expert body made up of representatives of data protection authorities from around the EU.

The guidance, which we reported earlier was confirmed as incoming on Wednesday, will steer how privacy regulators interpret the bloc’s General Data Protection Regulation (GDPR) in a critical area. The EDPB’s full opinion on “consent or pay” runs to 42 pages.

Other large ad-funded platforms should also take note of the granular guidance. But Meta looks first in line to feel any resultant regulatory chill falling on its surveillance-based business model.

“The EDPB notes that negative consequences are likely to occur when large online platforms use a ‘consent or pay’ model to obtain consent for the processing,” the Board opines, underscoring the risk of “an imbalance of power” between the individual and the data controller, such as in cases where “an individual relies on the service and the main audience of the service.”

In a press release accompanying publication of the opinion, the Board’s chair, Anu Talus, also emphasized the need for platforms to provide users with a “real choice” over their privacy.

“Online platforms should give users a real choice when employing ‘consent or pay’ models,” Talus wrote. “The models we have today usually require individuals to either give away all their data or to pay. As a result most users consent to the processing in order to use a service, and they do not understand the full implications of their choices.”

“Controllers should take care at all times to avoid transforming the fundamental right to data protection into a feature that individuals have to pay to enjoy. Individuals should be made fully aware of the value and the consequences of their choices,” she added.

In a summary of its opinion, the EDPB writes in the press release that “in most cases” it will “not be possible” for “large online platforms” that implement consent or pay models to comply with the GDPR’s requirement for “valid consent” — if they “confront users only with a choice between consenting to processing of personal data for behavioural advertising purposes and paying a fee” (i.e., as Meta currently does).

The opinion defines large platforms, non-exhaustively, as entities designated as very large online platforms under the EU’s Digital Services Act or gatekeepers under the Digital Markets Act (DMA) — again, as Meta is (Facebook and Instagram are regulated under both laws).

“The EDPB considers that offering only a paid alternative to services which involve the processing of personal data for behavioural advertising purposes should not be the default way forward for controllers,” the Board goes on. “When developing alternatives, large online platforms should consider providing individuals with an ‘equivalent alternative’ that does not entail the payment of a fee.

“If controllers do opt to charge a fee for access to the ‘equivalent alternative,’ they should give significant consideration to offering an additional alternative. This free alternative should be without behavioural advertising, e.g. with a form of advertising involving the processing of less or no personal data. This is a particularly important factor in the assessment of valid consent under the GDPR.”

The EDPB takes care to stress that other core principles of the GDPR — such as purpose limitation, data minimization and fairness — continue to apply around consent mechanisms, adding: “In addition, large online platforms should also consider compliance with the principles of necessity and proportionality, and they are responsible for demonstrating that their processing is generally in line with the GDPR.”

Given the detail of the EDPB’s opinion on this contentious and knotty topic — and the suggestion that lots of case-by-case analysis will be needed to make compliance assessments — Meta may feel confident nothing will change in the short term. Clearly it will take time for EU regulators to analyze, ingest and act on the Board’s advice.

Contacted for comment, Meta spokesman Matthew Pollard emailed a brief statement playing down the guidance: “Last year, the Court of Justice of the European Union [CJEU] ruled that the subscriptions model is a legally valid way for companies to seek people’s consent for personalised advertising. Today’s EDPB Opinion does not alter that judgment and Subscription for no ads complies with EU laws.”

Ireland’s Data Protection Commission, which oversees Meta’s GDPR compliance and has been reviewing its consent model since last year, declined to comment on whether it will be taking any action in light of the EDPB guidance as it said the case is ongoing.

Ever since Meta launched the “subscription for no ads” offer last year, it has continued to claim it complies with all relevant EU regulations — seizing on a line in the July 2023 ruling by the EU’s top court in which judges did not explicitly rule out the possibility of charging for a non-tracking alternative but instead stipulated that any such payment must be “necessary” and “appropriate.”

Commenting on this aspect of the CJEU’s decision in its opinion, the Board notes — in stark contrast to Meta’s repeated assertions that the CJEU essentially sanctioned its subscription model in advance — that this consideration was “not central to the Court’s determination.”

At the same time, the Board’s opinion does not entirely deny large platforms the possibility of charging for a non-tracking alternative — so Meta and its tracking-ad-funded ilk may feel confident they’ll be able to find some succor in 42 pages of granular discussion of the intersecting demands of data protection law. (Or, at least, that this intervention will keep regulators busy trying to wrap their heads around case-by-case complexities.)

However, the guidance does — notably — encourage platforms to offer free alternatives to tracking ads, including privacy-safe(r) ad-supported offerings.

The EDPB gives examples of contextual, “general advertising” or “advertising based on topics the data subject selected from a list of topics of interests.” (And it’s worth noting the European Commission has also suggested Meta could be using contextual ads instead of forcing users to consent to tracking ads as part of its oversight of the tech giant’s compliance with the DMA.)

“While there is no obligation for large online platforms to always offer services free of charge, making this further alternative available to the data subjects enhances their freedom of choice,” the Board goes on, adding: “This makes it easier for controllers to demonstrate that consent is freely given.”

While the Board has served up rather more discursive nuance than instant clarity on a pivotal topic, the intervention is important and does look set to make it harder, over the long run, for mainstream adtech giants like Meta to frame and operate under false binary, privacy-hostile choices.

Armed with this guidance, EU data protection regulators should be asking why such platforms aren’t offering far less privacy-hostile alternatives — and asking that question, if not literally today, then very, very soon.

For a tech giant as dominant and well-resourced as Meta, it’s hard to see how it can dodge that question for long — though it will surely stick to its usual GDPR playbook of spinning things out for as long as it possibly can and appealing every final decision it can.

Privacy rights nonprofit noyb, which has been at the forefront of fighting the creep of consent-or-pay tactics in the region in recent years, argues the EDPB opinion makes it clear Meta cannot rely on the “pay or okay” trick anymore. However, its founder and chairman, Max Schrems, told TechCrunch he’s concerned the Board hasn’t gone far enough in skewering this divisive forced consent mechanism.

“The EDPB recalls all the relevant elements, but does not unequivocally state the obvious consequence, which is that ‘pay or okay’ is not legal,” he told us. “It names all the elements why it’s illegal for Meta, but there [are] thousands of other pages where there is no answer yet.”

As if 42 pages of guidance on this knotty topic wasn’t enough already, the Board has more in the works, too: EDPB Chair Anu Talus says it intends to develop guidelines on consent-or-pay models “with a broader scope,” adding that it will “engage with stakeholders on these upcoming guidelines.”

European news publishers were the earliest adopters of the controversial consent tactic, so the forthcoming “broader” EDPB opinion is likely to be keenly watched by players in the media industry.