Forget the debate, the Supreme Court just declared open season on regulators

United States Supreme Court at Twilight

Image Credits: Rudy Sulgan / Getty Images

As the country reels from a presidential debate that left no one looking good, the Supreme Court has swooped in with what could be one of the most consequential decisions it has ever made for the tech industry. By reversing a 40-year-old decision, the court has opened up regulators to endless interference by industry and the whims of judges as compromised and out of touch as they are.

The Supreme Court announced Friday morning that it had ruled 6-3 (you know who voted how) to overturn Chevron v. Natural Resources Defense Council, a 1984 case that established a pillar of federal regulation known as Chevron deference.

Federal law is by necessity broad, applying as it does across so many jurisdictions. Furthermore, some laws stay on the books for decades without modification. And so each law’s wording — just like the Constitution — requires interpretation, a task spread among all parties in the legal system, from lawyers to justices to amici curiae.

The 1984 Chevron decision established that federal agencies like the EPA, SEC and FCC also have a say in this. In fact, the decision found, in cases where the law is ambiguous, the courts must defer to these agencies in their capacity as experts in their fields.

As an example, think about something like the Clean Water Act providing certain legal protections for wetlands. Who defines whether a plot of land counts as wetlands? It can’t be interested parties like heavy industry or nature advocacy groups, since their interpretations will likely be mutually exclusive. And what are the chances that whatever judge gets handed the case has any expertise in the matter? Instead, in such cases, the EPA, staffed with notionally disinterested experts on wetlands, is empowered to settle ambiguities.

All right, so what do wetlands and the EPA have to do with technology? Well, who do you think defines “encryption” in law, or “communications,” “search and seizure,” or “reasonable expectation of privacy”?

The entire concept of net neutrality is perched atop the FCC’s interpretation of whether broadband data is an “information service” or a “communications service,” the terms written in the act empowering that agency.

If the FCC is not empowered to settle this ambiguity in a very old law that was written well before today’s broadband and mobile networks, who is? Whichever court takes the case brought by the telecommunications industry, which hates net neutrality and would prefer an interpretation where the FCC doesn’t regulate them at all. And if the industry doesn’t like that court’s interpretation, it gets a few more shots as the case rises toward — oh, the Supreme Court.

Interesting, remarked Justice Elena Kagan (as quoted by court reporter Amy Howe), that in “one fell swoop” the court had granted itself “exclusive power over every open issue — no matter how expertise-driven or policy-laden — involving the meaning of regulatory law.” In other words, the Supreme Court assigned itself the powers currently exercised by every regulatory agency in the country.

Tech’s play for time pays off

Why is this so consequential for tech? Because the tech industry has been facing down a wave of regulatory activity led by these agencies, operating in the vacuum of congressional action. Due to a lack of effective federal laws in tech, agencies have had to step up and offer updated interpretations of the laws on the books.

Tech leaders have loudly and repeatedly asked for federal laws — not agency regulations — defining and limiting their industries. “Please,” they cry, “give us a federal privacy law! Pass a law on location data! Pass a nice big law about how artificial intelligence should be used!”

They know very well that Congress is almost incapable of passing any such laws, partly because tech industry lobbyists quietly fight them in the background whenever one with teeth is proposed. You will be shocked to find out that despite a decade or more of tech asking for these laws, few or none have actually appeared! And when California passes one, they all lament: not like that! The pleas are made with fingers crossed, purely for optics.

Let us be optimistic for once and imagine that Congress passes a big law on AI, protecting certain information, requiring certain disclosures, and so on. Such a law would inevitably contain ambiguities, or purposeful vagueness meant to let it apply to as-yet-unknown situations and applications. Thanks to the Supreme Court, those ambiguities will no longer be resolved by experts.

(As an example of how this will play out: in Ohio v. EPA, an environmental ruling issued the same week, Justice Gorsuch repeatedly referred to nitrogen oxide, the pollutant at issue, as nitrous oxide, which is laughing gas. This is the level of expertise we may expect.)

Every law has ambiguities. And at the frontiers of technology, ambiguity is even more common, since there is no precedent and lawmakers do not understand technical matters.

And so, looking forward, who defines “artificial intelligence,” or “scrape,” or “personal information,” or “invasive”? Yesterday, it might have been the FCC or FTC, which, with their experts in technology, industry, markets and so on, would have made an informed decision and perhaps even solicited public opinion, as they often do in rulemaking processes. Today, it will be a judge in whichever state an industry decides has the friendliest or most gullible bench.

As Kagan argued, summarized again by Howe:

Kagan cited as one example a hypothetical bill to regulate artificial intelligence. Congress, she said, “knows there are going to be gaps because Congress can hardly see a week in the future.” So it would want people “who actually know about AI and are accountable to the political process to make decisions” about artificial intelligence. Courts, she emphasized, “don’t even know what the questions are about AI,” much less the answers.

This decision is arguably the largest single deregulatory action that could be taken, and as we have all observed, without regulation, tech — like any other big industry — will consolidate and exploit. The next few years, even under a pro-regulatory Democratic administration, will be a free-for-all. There is no barrier, and probably no downside, to industry lawyers challenging every single regulatory decision in court and arguing for a more favorable interpretation of the law.

We are entering a favorable climate for large companies that were likely to face regulatory scrutiny — now far less likely to be hammered for bad behavior since they can have “bad” redefined by a jurisdiction of their choosing.

But chaos favors the nimble, and large tech companies have proven themselves slow to react when faced with an industry-overturning technology (or so they believe) like AI. There is an opportunity here, frankly speaking, for those with money and ambition but blissfully unburdened by certain moral principles, to explore new methods and business models that might have attracted regulatory attention before.

If you thought you were being exploited before — you ain’t seen nothing yet.

Elon Musk's X taken to court in Ireland for grabbing EU user data to train Grok without consent

CANNES, FRANCE - JUNE 19: Elon Musk attends 'Exploring the New Frontiers of Innovation: Mark Read in Conversation with Elon Musk' session during the Cannes Lions International Festival Of Creativity 2024 - Day Three on June 19, 2024 in Cannes, France. (Photo by Marc Piasecki/Getty Images)

Image Credits: Marc Piasecki / Getty Images

Elon Musk’s X is being taken to court in Ireland for using Europeans’ data to train AI models, RTE reported late on Tuesday. The development relates to the social media platform’s decision last month to process user data to train its Grok AI model without notifying or asking people if they’re okay with that.

Last month, the Irish Data Protection Commission (DPC) told TechCrunch it was “surprised” by X’s move. It also said it had “followed up” seeking more information.

The pan-EU General Data Protection Regulation (GDPR) requires any processing of people’s data to have a valid legal basis. Breaches of the regime can result in penalties of up to 4% of global annual turnover, so any confirmed noncompliance could end up being costly for X. Notably, the DPC is suing X under Ireland’s 2018 Data Protection Act, per RTE.
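For a sense of scale, here is a minimal sketch of that penalty ceiling in Python; the turnover figure is purely hypothetical, and actual GDPR fines are set case by case:

```python
def max_gdpr_fine(global_annual_turnover: float, rate: float = 0.04) -> float:
    """Ceiling on a GDPR penalty: 4% of global annual turnover."""
    return global_annual_turnover * rate

# Hypothetical company with $3 billion in global annual turnover:
print(f"${max_gdpr_fine(3e9):,.0f}")  # $120,000,000
```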

According to RTE, the DPC is seeking an injunction against Twitter International (the company’s Irish division is still named that) over concerns about the processing of user data for AI model training. The watchdog told RTE it’s taking action as it believes the matter poses an urgent risk to the rights and freedoms of users.

The Irish broadcaster’s report suggests the DPC intends to refer the matter to the European Data Protection Board (EDPB), an independent supervisory body established under the GDPR that has powers to issue guidance on how that pan-EU law applies.

The DPC declined further comment on the legal action when contacted with questions on Wednesday.

“In terms of the matter which was before the courts yesterday, August 6, the DPC has not released any comment at this point and it would be inappropriate to do so until this matter has been dealt with by the court,” assistant principal communications officer, Risteard Byrne, told TechCrunch.

The GDPR requires any processing of personal data to have a proper legal basis. That means the legal basis must be appropriate for the use case — in this case, privacy experts believe X needs to obtain users’ consent to repurpose their public posts to train its AI models. Instead, X quietly started helping itself to users’ info last month, only letting users opt out via an option buried in its web settings. X also did not notify users that it’s using their data to train Grok.

Meta, which owns Facebook and Instagram, paused a similar move to repurpose user data for AI training back in June following GDPR complaints and regulatory pressure, including from the DPC. However, Musk’s company appears to have been less cooperative with privacy regulators — hence the DPC seeking an injunction in the High Court.

RTE reported the DPC is seeking orders including one for “suspending, restricting, or prohibiting the respondent from processing the personal data of X users for the purposes of developing, training or refining any machine learning, large language or other AI systems used by Twitter.”

Per RTE, the DPC is also concerned about X’s plan to launch the next version of Grok this month, which is believed to have been trained using the personal data of users in the EU and European Economic Area.

The broadcaster reported that Twitter International had refused requests from the DPC to stop processing European users’ data or delay the launch of the updated version of Grok.

The injunction proceedings will return before the High Court next week, the report added.

X did not immediately respond to requests for comment.

Since Musk took over Twitter, there have been concerns he would not take a good-faith approach to compliance with EU privacy laws. But despite some early expressions of concern from the DPC following the un-notified departure in November 2022 of Twitter’s then-data protection officer, the regulator has had little to say about the chaotic change of direction Musk has taken, or about the related GDPR complaints.

Notably, X has also been able to maintain its main establishment status in Ireland, which allows it to streamline GDPR oversight by having the DPC lead on investigating complaints. Yet it’s not clear if Twitter International has any meaningful say in decisions taken by Musk that affect local users.

Musk’s quiet run on GDPR oversight may finally be coming to an end, though, if the court grants the injunction.

There’s been more regional bad news for X in recent weeks, too. The platform found itself losing a case in the Netherlands on a GDPR compliance issue, as we reported last month, after an individual sued Musk over shadowbanning and other legal issues.

The European Commission also last month said it suspects X of breaching the bloc’s Digital Services Act (DSA).

The DSA contains even higher penalties — up to 6% of global annual turnover — for noncompliance. The EU said it suspects X of breaching these rules in relation to the misleading design of its blue check system and failing to meet transparency requirements related to data access for researchers and the effectiveness of an ad archive it is required to provide.

The EU is also investigating a second DSA case against X, open since December 2023, concerning wide-ranging content moderation and risk mitigation issues.

Here’s how to disable X (Twitter) from using your data to train its Grok AI

US appeals court rules geofence warrants are unconstitutional

A location geofence over New York City, representing a geofence warrant.

Image Credits: TechCrunch

A federal appeals court has ruled that geofence warrants are unconstitutional, a decision that will limit the use of the controversial search warrants across several U.S. states.

The Friday ruling from the U.S. Court of Appeals for the Fifth Circuit, which covers Louisiana, Mississippi and Texas, found that geofence warrants are “categorically prohibited by the Fourth Amendment,” which protects against unreasonable searches and seizures.

Civil liberties and privacy advocates applauded the ruling, which effectively makes the use of geofence warrants unlawful across the three U.S. states for now.

Geofence warrants, also known as “reverse” search warrants, allow police to draw a shape on a map, such as over a crime scene, and demand that Google (or any other company that collects user locations) search its entire bank of location data for any phone or device that was in that area at a specific point in time.
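Mechanically, such a warrant amounts to a spatial and temporal filter run over a provider’s entire location-history store. Here is a minimal sketch of the idea in Python; the record layout, function names and circular search area are illustrative assumptions, not a description of how Google’s actual systems work:

```python
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

@dataclass
class LocationRecord:
    device_id: str
    lat: float
    lon: float
    timestamp: int  # Unix epoch seconds

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two coordinates, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def geofence_hits(records, center_lat, center_lon, radius_m, t_start, t_end):
    """Return device IDs seen inside the circle during the time window.

    Note the overbreadth critics object to: every record in the store
    must be scanned, including those of entirely uninvolved people.
    """
    return {
        r.device_id
        for r in records
        if t_start <= r.timestamp <= t_end
        and haversine_m(r.lat, r.lon, center_lat, center_lon) <= radius_m
    }
```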

But critics have long argued that geofence warrants are unconstitutional because they can be overbroad and include information on entirely innocent people. 

The court case centers on an armed robbery of a U.S. Postal Service worker in Mississippi in February 2018, in which police used a geofence warrant to identify the individuals suspected of the robbery. 

The Fifth Circuit’s opinion comes to a different conclusion than a similar case heard last month in the Fourth Circuit, which covers North Carolina, Virginia and West Virginia. That ruling found that accessing Google’s stores of location data does not count as a search and upheld the legality of geofence warrants across those states. 

In its case, the Fifth Circuit disagreed and found that police seeking data from Google’s vast stores of location data for a criminal suspect does in fact constitute a search. But because the bank of data is so big, and because the entire database has to be scanned, the court ruled that there is no legal authority capable of authorizing a search, per a blog post by law professor Orin Kerr analyzing the ruling.

The court said in its ruling (emphasis in the original): “This search is occurring while law enforcement officials have no idea who they are looking for, or whether the search will even turn up a result. Indeed, the quintessential problem with these warrants is that they never include a specific user to be identified, only a temporal and geographic location where any given user may turn up post-search. That is constitutionally insufficient.”

While the Fifth Circuit ruled that geofence warrants are unconstitutional, the court concluded that the police department had acted in good faith when seeking the warrant for the location data held by Google, and upheld the defendant’s conviction. The court said that, in part because the use of geofence warrants was novel at the time and the department asked other agencies for legal guidance before submitting the warrant, the evidence should not be suppressed in this case.

Kerr, in his analysis, said the ruling “raises questions of whether any digital warrants for online contents are constitutional.” 

Because tech companies like Google, Uber, Snap and others collect and store huge amounts of their users’ location data and histories on their servers, this data can be obtained by law enforcement; if the data didn’t exist, the problem would be moot. The use of geofence warrants has rocketed in recent years, at one point amounting to about one-quarter of all U.S. legal demands that Google received.

Google said late last year that it would begin storing users’ location data on their devices, making geofence warrants less useful for law enforcement.

India's top court clears way for Byju's insolvency proceedings

Image Credits: Christopher Pike / Bloomberg / Getty Images

India’s top court has put on hold a tribunal ruling that halted Byju’s insolvency proceedings — a win for U.S. creditors that are seeking to recover $1 billion from the once-celebrated edtech startup.

The Indian Supreme Court on Wednesday ordered a stay on the National Company Law Appellate Tribunal’s recent approval of a settlement between Byju’s and the Indian cricket board BCCI, an approval that had halted the insolvency proceedings. The Supreme Court’s order means that the proceedings will now resume.

The Wednesday ruling is the latest in a slew of crises for cash-strapped Byju’s, which was once India’s most valuable startup with a $22 billion valuation.

The startup’s troubles began a couple of years ago, but they escalated last month when an Indian tribunal initiated insolvency proceedings after the firm failed to pay over $19 million it owed to the BCCI, which holds significant sway in India as the formal body overseeing cricket, the country’s most popular sport.

Byju’s averted the proceedings when the CEO’s brother, Riju Raveendran, agreed to pay the BCCI. An appeals tribunal then dismissed the insolvency case.

U.S.-based Glas Trust, which represents some lenders to a Byju’s group company, had opposed the tribunal’s decision, arguing that Riju Raveendran had used the lenders’ capital to pay the BCCI.

Between 2020 and 2021, Byju’s raised more than $2.5 billion, including a $1.2 billion term loan B from a group of U.S. creditors. The startup sought to go public in early 2022 at a valuation over $40 billion, but had to abruptly shelve those plans after Russia’s invasion of Ukraine tanked the global market.

Byju’s did not immediately respond to requests for comment.

The startup has been fighting fires on nearly every front for the past two years. Its troubles grew significantly when it missed financial reporting deadlines and fell short of revenue projections by over 50% in 2022.

Top investors, including Prosus and Peak XV, have alleged governance issues at the edtech firm, and even sought legal action to remove founder Byju Raveendran and gain control over the firm, which has raised over $5 billion in equity and debt.

Last year, board members and the startup’s auditor abruptly resigned in protest.

The conflict intensified when Byju’s slashed its valuation to $25 million, seeking to raise funding via a rights issue, prompting backlash from investors including Prosus, Peak XV, Sofina and the Chan Zuckerberg Initiative. The startup was ordered not to use the capital it raised in the rights issue, and was blocked from attempting a second rights issue.

Prosus and BlackRock have written down the value of their Byju’s stakes to zero.

The Supreme Court could decide the future of content moderation

Supreme Court of the United States, Washington DC, USA

Image Credits: Richard Sharrocks / Getty Images

The Supreme Court is considering the fate of two state laws that limit how social media companies can moderate the content on their platforms.

In oral arguments on Monday, the justices grappled with a thorny set of questions that could reshape the internet, from social networks like Facebook and TikTok to apps like Yelp and Etsy.

In October, the Supreme Court decided to hear the two parallel cases, one in Florida (Moody v. NetChoice, LLC) and one in Texas (NetChoice, LLC v. Paxton). In both instances, a new state law, signed by a Republican governor, instructed social media companies to stop removing certain kinds of content.

Florida’s Senate Bill 7072 prevents social media companies from banning political candidates or putting restrictions on their content. In Texas, House Bill 20 told social media companies that they could no longer remove or demonetize content based on the “viewpoint represented in the user’s expression.” In Florida, a federal appeals court mostly ruled in favor of the tech companies, but in Texas the appeals court sided with the state.

The two laws were both crafted by Republican lawmakers to punish social media companies for their perceived anti-conservative bias. Those accusations have not been borne out by research, but conservative social media users are disproportionately exposed to political misinformation, which could explain perceptions of an ideological discrepancy in tech’s content moderation decisions.

The Florida and Texas laws are now tangled up in a complex web of dusty legal precedents, largely drawing on rulings created long before words like “tweet” and “livestream” were part of everyday speech. Because most laws governing the modern internet are so outdated, tech companies and their critics alike are eager for clarity — though as the Supreme Court demonstrated last year with a different pair of social media cases, they may not get it.

Supreme Court rules in favor of Twitter and Google, avoiding the issue of Section 230 for now

On Monday, justices on both sides of the political spectrum sounded skeptical about the pair of state laws. In oral arguments, Justice Sonia Sotomayor called the cases “odd,” warning that their broad nature could have unforeseen impacts.

“It seems like your law is covering just about every social media platform on the Internet, and we have amici who are not traditional social media platforms, like smartphones and others who have submitted amici briefs, telling them that readings of this law could cover them,” Sotomayor said, referencing the Florida law.

“This is so, so broad, it’s covering almost everything. But the one thing I know about the Internet is that its variety is infinite.” Sotomayor pointed to the online marketplace Etsy as a less obvious example of a website that could be negatively impacted by state laws designed to dictate what social media companies can do.

Addressing Florida solicitor general Henry Whitaker, Justice Brett Kavanaugh brought up the First Amendment — but not in a way sympathetic to the state’s argument.

“You said the design of the First Amendment is to prevent ‘suppression of speech,’” Kavanaugh said. “And you left out what I understand to be three key words in the First Amendment or to describe the First Amendment, ‘by the government.’”

Even Justice Neil Gorsuch, who seemed more sympathetic to critical arguments against the social networks, pointed to Section 230, a longstanding law that protects internet companies’ content moderation decisions, noting that it likely “preempts” the state limits on social media moderation.

Not all of the justices seemed to side with the tech industry. Justices Clarence Thomas and Samuel Alito appeared to find the states’ arguments more compelling than their peers, with Alito at one point asking if the idea of content moderation was “anything more than a euphemism for censorship.”

Monday’s hearing provided some clarity on where the majority of justices seem to stand now, but anything can happen — including nothing. A handful of justices, including Justices Sotomayor, Gorsuch, Barrett and Thomas, expressed uncertainty about the way the cases were brought to begin with.

“It’s called a facial challenge, because on the face of the law a challenger alleges what the legislature has done is unconstitutional,” Paul Barrett, NYU adjunct law professor and deputy director of the NYU Stern Center for Business and Human Rights, told TechCrunch. “It’s a case where a party, in this case industry trade groups, go to court, even before the law goes into operation. And they say to the trial judge, ‘This law is unconstitutional, no matter how it gets applied.’

“They asked the judge at that point for an injunction that says the law is not to go into effect. By doing that, there isn’t the usual supply of facts and figures and experience and so forth, there isn’t testimony that allows an appellate court to see how the law works in practice.”

The Supreme Court could issue a decisive ruling any time between now and when the court’s term ends in June. Or it could decline to rule on the issues at hand and opt to kick the cases back down to lower courts for a full trial, a process that could take years. “Supreme Court cases can fizzle in this way, much to the frustration in most cases to other parties,” Barrett said.

Either way, the highest court in the land will have to face the internet age head-on eventually. Many of the relevant legal precedents deal with cable TV, newspapers or utility companies — not internet businesses with many millions or even billions of users.

“It’s clear that the Supreme Court needs to update its First Amendment jurisprudence to take into account this vast technological change,” Barrett said. “The Supreme Court often lags behind society in dealing with these kinds of things, and now it’s time to deal with it.”
