The EU's 10 biggest antitrust actions on tech

Wrestling arm hold

Image Credits: Paul / Flickr under a CC BY 2.0 license.

The U.S. innovates and the EU regulates, or so certain transatlantic pundits love to claim. We’re not going to wade into that debate here, but two things are clear: The bloc’s Single Market has its own particular set of rules, and quite a lot of U.S. tech giants have run afoul of European Union competition regulations over the past several decades. Make of that what you will.

Earlier this month, as she reveled in nailing a couple of major antitrust case appeals against Apple and Google, the EU’s outgoing competition chief, Margrethe Vestager, jokingly referred to Big Tech as some of her best customers. Ouch.

We’ve compiled a list of 10 of the biggest EU antitrust actions targeting tech to give a snapshot of the most high-profile — if not always consequential — competition skirmishes between Brussels and industry heavy hitters over the past several decades of digital development. The list is ordered based on the size of the fine or liability involved.

While it’s fair to say the EU’s antitrust tech enforcement outcomes have varied, one lasting legacy is that some of these major cases served as inspiration for the bloc’s Digital Markets Act: a flagship market contestability reform that could see major tech players hit harder and faster in the coming years. It’s finally Big Tech’s time to buckle up.

Ireland’s tax breaks for Apple

No one enjoys paying their taxes, and a demand for unpaid back taxes stings even more. Yet by September 2018, Apple had finished handing over an eye-watering €13.1 billion (then worth $15.3 billion) after the bloc successfully pursued one of its member states, Ireland, over illegal tax breaks granted to Apple between 1991 and 2014.

The State Aid case, which falls broadly under the bloc’s competition rules, went back and forth through EU appeal courts. But in September 2024, the Court of Justice affirmed the original August 2016 Commission finding of unlawful State Aid.

With the top court weighing in with a final ruling (not a referral back to a lower court), Apple’s legal options to continue challenging the decision are all but exhausted, and the billions in underpaid taxes sitting in an EU escrow account look set to finally pour into Ireland’s coffers.

Google’s Android restrictions on OEMs

Micromanaging the software that mobile device makers could bundle with its operating system, Android — to get its own wares in front of Android users regardless of the hardware they picked — got Google into costly hot water in the EU in recent years. Around $5 billion worth of antitrust heat, in fact. The 2018 Commission decision sanctioning it for abusing a dominant position was, and still is, a record-breaking penalty for this category of competition abuse.

The original EU €4.34 billion fine on Google was revised down slightly, to €4.125 billion, in a September 2022 appeal decision by the General Court. However, the judges largely upheld the original Commission decision, rejecting Google’s bid to overturn the enforcement.

Google’s self-preferencing with Shopping

Back in June 2017, Google was hit with another (at the time) record-breaking €2.42 billion penalty for abusing a dominant position — this one in relation to how it operated its product comparison service, Google Shopping (previously branded Google Product Search and, before that, the pun-tastic Froogle).

The bloc found that Google had not only unfairly favored its own (eponymous) shopping comparison service in organic search results, a market the tech giant has almost entirely sewn up in Europe, but had also actively demoted rival comparison services. The multi-billion-euro fine ensued — worth around $2.73 billion at the time it was announced — and was subsequently affirmed in a September 2024 decision by the EU’s top court.

Apple’s anti-steering on iOS music streaming

This long-running enforcement over Apple’s conduct in the music streaming market on iOS saw the EU branch into a competition theory of harm based on consumer exploitation, rather than exclusionary conduct.

The bloc’s competition division changed tack a few times as it investigated iOS developer complaints against the App Store operator. But in March 2024 it ended up hitting Apple with a €1.84 billion fine (around $2 billion) for banning developers from telling iPhone users about cheaper deals available outside Apple’s store. The vast majority of the financial sanction — a full €1.8 billion — was applied on top of the EU’s standard penalty calculation, which the bloc said it hoped would act as a deterrent. (Without it, the fine would have been a mere €40 million — or a “parking ticket” level penalty for Big Tech.)

Google’s AdSense restrictions

Yet another billion-euro-plus antitrust penalty hit Google for abuse of dominance in March 2019, when the bloc sanctioned the company over its search ad brokering business. The Commission found it had used restrictive clauses in contracts with customers between 2006 and 2016 in a bid to squeeze out rival ad brokers. A penalty of €1.49 billion (around $1.7 billion) was duly imposed.

However, in September 2024, despite upholding the majority of the Commission’s findings, the EU’s General Court annulled the AdSense decision in its entirety over errors in how the Commission assessed the duration of Google’s contracts. It remains to be seen whether the EU will appeal.

The Commission still has another (open) case probing Google’s adtech stack more broadly, which could also make the AdSense case look like small beer. Margrethe Vestager warned last year that if the suspected violations are confirmed, a structural separation (i.e., breaking up Google) may be the only viable remedy.

PC monitor and TV parts price-fixing cartel

In 2012, the EU handed down a total of €1.47 billion in fines in a cartel case related to components used in the manufacture of computer monitors and TVs. The Commission found that hardware makers had colluded between 1996 and 2006 to fix the prices of cathode ray tubes (CRTs), the display components used in the pre-flatscreen era. Fines were handed down to seven electronics giants involved in one or both of two CRT cartels, including LG, Panasonic, Philips, Samsung, and Toshiba.

Chipmaker Intel’s exclusionary practices

Going further back in time, we arrive in May 2009, at what was then a record €1.06 billion antitrust penalty for chipmaker Intel after the EU found that the U.S. giant had abused a dominant position to exclude rival AMD. Intel had been paying computer manufacturers and retailers to postpone, cancel, or otherwise avoid using or selling AMD’s products, and the EU found these exclusionary practices breached competition rules.

The chipmaker appealed the EU’s enforcement with some success over the following decade of legal arguments. In 2017, the Court of Justice set aside an earlier ruling by a lower court and referred the case back to the General Court, which went on to annul part of the Commission’s decision, while allowing that some of Intel’s practices had been unlawful.

The Court quashed the original fine in its entirety, owing to uncertainty over the penalty calculation, but last year the EU reimposed a fine of €376.36 million on Intel — for the “naked restrictions” that the Court had upheld. Appeals still rumble on, so where this enforcement finally ends up remains to be seen.

Qualcomm’s deal with Apple for mobile chips

In early 2018, it was mobile chipmaker Qualcomm’s turn to be hit with a beefy EU antitrust penalty: €997 million (or around $1.23 billion at the time). The sanction was for abuse of a dominant position between 2011 and 2016. The enforcement focused on Qualcomm’s relationship with Apple, and the EU decided it had shut rival chipmakers out of the market for supplying LTE baseband chipsets by paying Apple to exclusively use its chips for iPhones and iPads.  

However, Qualcomm appealed the decision, and in June 2022 the EU General Court ruled in its favor, rejecting the Commission’s analysis and also finding some procedural faults with its case. The EU later confirmed it would not appeal the judgment, so this is one sizable antitrust penalty that didn’t make it beyond the headlines.

The bloc has had better luck in a separate (longer-running) antitrust procedure against the chipmaker: In September 2024, the General Court largely upheld a Commission penalty on Qualcomm of just under $270 million in a case related to predatory pricing.

Microsoft’s anti-competitive licensing practices

We have to wind back the clock all the way to March 2004 to arrive at the EU giving Microsoft a spanking for abusing a dominant position with its Windows operating system. The then-record €497 million penalty (around $794 million) would be worth closer to €762 million (or ~$1.3 billion) today, factoring in Eurozone inflation.
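
That inflation adjustment is easy to sanity-check. Here’s a minimal sketch (illustrative only; the exact figure depends on which Eurozone price index and end date you use):

```python
# Rough sanity check of the inflation adjustment quoted above.
# Figures are illustrative; the exact result depends on the HICP
# index series and end date used.
fine_2004 = 497.0   # EUR million, March 2004 penalty
fine_today = 762.0  # EUR million, inflation-adjusted estimate
years = 20          # roughly 2004 -> 2024

cumulative_factor = fine_today / fine_2004              # ~1.53x price growth
avg_annual_rate = cumulative_factor ** (1 / years) - 1  # implied yearly inflation

print(f"cumulative factor: {cumulative_factor:.2f}x")
print(f"implied average annual inflation: {avg_annual_rate:.1%}")
```

A cumulative factor of roughly 1.53x over two decades works out to a bit over 2% average annual inflation, consistent with the ECB’s long-run target.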

The original complaint that sparked the investigation into Microsoft’s licensing and royalties practices dated all the way back to 1993, and the EU’s enforcement was upheld on appeal. As well as the fine, the bloc ordered various remedies, including interoperability requirements, and imposed a second, larger penalty of €899 million on Microsoft in February 2008 for noncompliance. A 2012 decision by the EU’s General Court upheld that noncompliance decision but trimmed the fine slightly, to €860 million.

Luxembourg’s tax deal with Amazon

In an October 2017 State Aid case, the EU argued that Luxembourg, the member state where e-commerce giant Amazon has its regional base, had granted the company “undue tax benefits” between May 2006 and June 2014. The Commission found that Amazon’s corporate structure in the country had allowed it to pay four times less tax than other companies based there — a tax break the EU calculated was worth around €250 million. (The EU does not issue fines for State Aid cases but requires any unlawfully uncollected taxes to be recouped.)

But while the Commission took issue with Luxembourg’s method of calculating Amazon’s taxable profits in the country, unlike in the aforementioned Ireland-Apple State Aid case, its arguments did not prevail in court: In a final ruling late last year, the EU’s top court struck down the Commission decision, finding that the EU had not established that the Luxembourg tax ruling was illegal State Aid. The upshot? Amazon was off the hook.

India flag with AI

Here are India's biggest AI startups based on how much money they've raised

Image Credits: Jagmeet Singh / TechCrunch

India is very far from the “uncanny valley” of San Francisco, but it has a massive trove of engineering talent, and some of those engineers are jumping aboard the AI train as founders and builders of startups.

The story of the AI startup ecosystem in India today is reminiscent of the country’s early SaaS days: Funding is constrained, especially compared to the billions that AI startups in the U.S. and Europe are raising. But in areas like generative AI, we’re spotting signs of where VC money is being channeled: to homegrown founders solving problems particular to their part of the world and bringing new approaches to the same challenges their developed-market counterparts are tackling.

Some Indian startups are looking to integrate local language support into their AI models to address growing demand from Indian consumers. And a few Indian startups, such as Pepper Content and Pocket FM, are also leveraging AI to create use cases for markets beyond India and enter the U.S. market.

That’s not to say it’s been easy. In India, funding for AI startups — including those working on infrastructure and services — dropped nearly 80% in 2023, to $113.4 million from $554.7 million in 2022, according to Tracxn data shared with TechCrunch. In contrast, AI startup funding in the U.S. grew about 211% to $16.2 billion last year from $5.2 billion in 2022. So far this year, AI startup investments have hit a whopping $13 billion in the U.S.; in that same period, just $92 million has been invested in Indian AI startups.

Dev Khare, a partner at Lightspeed Venture Partners India, told TechCrunch that India has some good opportunities for AI in consumer applications, whether that is creating content in Indic languages, offering virtual influencers or creating short videos and games using AI.

“A decent majority of the market in SaaS in the last 10 years has been going off to established markets and trying to replicate those at lower cost and with better support. That very valid market has led to some large outcomes in India. But you can’t do that in a newly emerging market, like AI or native AI. You have to take a risk and say, ‘This is where the world will be a few years from now. That market doesn’t exist today, but I’m going to bet it exists. I’m going to build for that.’ That’s a bit of a newer DNA for India. We’ve seen that happen,” he said.

In the last 18 months, Lightspeed’s India and Southeast Asia business has invested over $150 million in AI, including new investments and follow-ons in existing AI-enabled startups. Globally, the firm has invested more than $1 billion across over 70 AI companies in the same period.

Global and local investors are actively scouting AI startups in India: the country helps them diversify their portfolios and sits in a relatively stable position amid ongoing geopolitical conflicts in other significant markets. Growing data sovereignty concerns across nations also give investors a reason to look for local startups building promising solutions for the world’s most populous country.

Indian AI startups that have raised the most money

Krutrim

Founder: Bhavish Aggarwal
Total funding raised: $50 million
Key investors: Matrix Partners India

Led by Ola founder Bhavish Aggarwal, Krutrim (a Hindi word of Sanskrit origin meaning “artificial”) is India’s first AI unicorn, valued at $1 billion on just $50 million raised. Launched in December 2023 in Bengaluru, Krutrim is building a large language model (LLM) based on Indian languages and English. Earlier this year, it introduced an AI chatbot, which (not unlike its Western counterparts) saw a backlash upon its public beta launch over inaccurate results. The startup claims its AI model improves through regular updates.

Sarvam AI

Founders: Vivek Raghavan and Pratyush Kumar
Total funding raised: $41 million
Key investors: Lightspeed Venture Partners, Peak XV Partners and Khosla Ventures

Sarvam AI (Telugu for “everything”) is India’s other high-profile startup working on LLMs based on Indian languages. The startup was co-founded by Vivek Raghavan and Pratyush Kumar, who both worked previously with tech veteran Nandan Nilekani on IIT Madras’ project AI4Bharat. The Bengaluru-based startup emerged from stealth in December and aims to offer full-stack generative AI offerings, including a platform to let enterprises develop GenAI apps based on Sarvam’s LLM and contribute to open source models and datasets. In February, Sarvam AI partnered with Microsoft to launch voice-based AI tools and bring its Indic voice LLM to Azure.

Mad Street Den

Founders: Ashwini Asokan and Anand Chandrasekaran
Total funding raised: $67 million
Key investors: Avatar Growth Capital, Peak XV Partners and Alpha Wave Global

Computer vision startup Mad Street Den builds AI solutions for enterprise customers. The Chennai-based startup, co-founded by the neuroscientist-designer couple Ashwini Asokan and Anand Chandrasekaran in 2016, initially introduced its vision tech for the retail segment, though it has since expanded to other verticals, including finance, insurance, healthcare and logistics. Its bigger vision goes beyond its home market, per its mission: “to make people all over the globe A.I natives.”

Wysa

Founders: Jo Aggarwal and Ramakant Vempati
Total funding raised: $25 million
Key investors: HealthQuad, W Health, British International Investment and Google Assistant Fund

Wysa is a mental health tech startup that uses AI to offer an “emotionally intelligent” therapist chatbot that helps users talk through their feelings. Managed by Wysa’s mental health professionals, the chatbot is used by over 6.5 million people across more than 95 countries and diverse age groups. The Bengaluru-based startup, which also has operations in Boston and London, raised $20 million in July 2022. It was co-founded by Jo Aggarwal and her husband, Ramakant Vempati, in 2016 after Aggarwal fell into a deep depression.

Neysa Networks

Founders: Sharad Sanghi and Anindya Das
Total funding raised: $20 million
Key investors: Matrix Partners India, Nexus Venture Partners and NTTVC

Mumbai-based Neysa Networks is led by seasoned tech entrepreneur Sharad Sanghi, who previously founded cloud and data company Netmagic Solutions. It offers a variety of generative AI platforms and services that let businesses deploy AI and machine learning. The startup’s Nebula platform is used to scale AI projects using on-demand GPU infrastructure and to train and run inference on AI models in the cloud. The company’s Palvera platform provides multi-vendor, multi-input observability and lets users preemptively identify issues using a unified data lake and preexisting telemetry datasets. The Aegis platform focuses on AI/ML security.

Here are some emerging Indian AI startups to watch

Upliance AI

Founders: Mahek Mody and Mohit Sharma
Total funding raised: $5.5 million
Key investors: Khosla Ventures and Draper Associates

Upliance AI brings AI to home appliances to let people cook over 500 new dishes at home. The Bengaluru-based startup plans to raise $10 million to $15 million early next year to bolster its market presence.

Scribble Data

Founders: Venkata Pingali and Indrayudh Ghoshal
Total funding raised: $2.3 million
Key investor: Blume Ventures

Scribble Data offers domain-specific AI assistants to large North American and European insurers to help them scale their back-end business capacity. It is headquartered in Bengaluru and has a sales team in Toronto.

Expertia AI

Founders: Kanishk Shukla and Akshay Gugnani
Total funding raised: $1.3 million
Key investors: Chiratae Ventures, Endiya Partners and Entrepreneur First

Expertia AI, based in Bengaluru, helps businesses automate recruitment using AI, reducing hiring time to as little as 24 hours. It automates sourcing, screening, outreach, engagement, assessment, interviewing and scheduling using proprietary deep-learning algorithms. The startup is currently raising $3 million from a lead investor, with participation from existing investors.

OnFinance

Founders: Anuj Srivastava and Priyesh Srivastava
Total funding raised: $1.1 million
Key investors: Silverneedle Ventures, Indian Angel Network and LetsVenture

Bengaluru-based OnFinance helps banks and wealth management companies with its AI co-pilots that work in areas ranging from equity research to compliance to wealth advisory.

Helium

Founders: Shray Arora and Sidharth Sahni
Total funding raised: $550,000
Key investor: Merak Ventures

Helium, based in Delhi, helps e-commerce brands build direct-to-consumer web stores using AI and reactive headless storefronts.

Soket Labs

Founder: Abhishek Upperwal
Total funding raised: $140,000

Soket Labs, based in Bengaluru and Gurugram, is an AI research firm that developed the open source multilingual LLM Pragna-1B through its in-house GenAI Studio. It plans to raise $7 million in a seed round in the next two to three months.

KissanAI

Founder: Pratik Desai

Based in Surat with an extended office in the Bay Area, KissanAI serves agriculture and adjacent domains using its GenAI platform AgriCopilot and a family of domain-specific Agri LLMs, Dhenu. The startup is currently bootstrapped and is backed by the founder’s family and friends, though it plans to raise $3 million to $4 million in a round between seed and Series A.

Shorthills AI

Founders: Pawan Prabhat and Paramdeep Singh

Shorthills AI, based in Gurugram, was founded in June 2018 by Pawan Prabhat and Paramdeep Singh. The pair previously founded the accounting training platform EduPristine. The bootstrapped startup builds custom AI tools for enterprises and has customers in the U.S. and India.

Meta releases its biggest 'open' AI model yet

people walking past Meta signage

Image Credits: TOBIAS SCHWARZ/AFP / Getty Images

Meta’s latest open source AI model is its biggest yet.

Today, Meta said it is releasing Llama 3.1 405B, a model containing 405 billion parameters. Parameters roughly correspond to a model’s problem-solving skills, and models with more parameters generally perform better than those with fewer parameters.

At 405 billion parameters, Llama 3.1 405B isn’t the absolute largest open source model out there, but it’s the biggest in recent years. Trained using 16,000 Nvidia H100 GPUs, it also benefits from newer training and development techniques that Meta claims make it competitive with leading proprietary models like OpenAI’s GPT-4o and Anthropic’s Claude 3.5 Sonnet (with a few caveats).

As with Meta’s previous models, Llama 3.1 405B is available to download or use on cloud platforms like AWS, Azure and Google Cloud. It’s also being used on WhatsApp and Meta.ai, where it’s powering a chatbot experience for U.S.-based users.

New and improved

Like other open and closed source generative AI models, Llama 3.1 405B can perform a range of different tasks, from coding and answering basic math questions to summarizing documents in eight languages (English, German, French, Italian, Portuguese, Hindi, Spanish and Thai). It’s text-only, meaning that it can’t, for example, answer questions about an image, but most text-based workloads — think analyzing files like PDFs and spreadsheets — are within its purview.

Meta wants to make it known that it is experimenting with multimodality. In a paper published today, researchers at the company write that they’re actively developing Llama models that can recognize images and videos, and understand (and generate) speech. Still, these models aren’t yet ready for public release.

To train Llama 3.1 405B, Meta used a dataset of 15 trillion tokens dating up to 2024 (tokens are parts of words that models can more easily internalize than whole words, and 15 trillion tokens translates to a mind-boggling 750 billion words). It’s not a new training set per se, since Meta used the base set to train earlier Llama models, but the company claims it refined its curation pipelines for data and adopted “more rigorous” quality assurance and data filtering approaches in developing this model.
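
To make the tokens-versus-words distinction concrete, here’s a toy sketch. The suffix-peeling rule is entirely made up for illustration; real LLM tokenizers (the byte-pair-encoding family and friends) learn their subword splits from data:

```python
# Toy illustration of word vs. subword token counts. The suffix list
# below is invented for demonstration; real tokenizers learn splits.
def toy_tokenize(text: str) -> list[str]:
    tokens = []
    for word in text.split():
        # Naive rule: peel off a common English suffix if present.
        for suffix in ("ization", "ing", "ed", "ly", "s"):
            if word.endswith(suffix) and len(word) > len(suffix) + 2:
                tokens.extend([word[: -len(suffix)], "##" + suffix])
                break
        else:
            tokens.append(word)
    return tokens

text = "models internalize subword pieces more easily than whole words"
words = text.split()
tokens = toy_tokenize(text)
print(len(words), "words ->", len(tokens), "tokens")
```

Real tokenizers typically emit somewhere around 1.3 to 1.5 tokens per English word, though the ratio varies by language and tokenizer.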

The company also used synthetic data (data generated by other AI models) to fine-tune Llama 3.1 405B. Most major AI vendors, including OpenAI and Anthropic, are exploring applications of synthetic data to scale up their AI training, but some experts believe that synthetic data should be a last resort due to its potential to exacerbate model bias.

For its part, Meta insists that it “carefully balance[d]” Llama 3.1 405B’s training data, but declined to reveal exactly where the data came from (outside of webpages and public web files). Many generative AI vendors see training data as a competitive advantage and so keep it and any information pertaining to it close to the chest. But training data details are also a potential source of IP-related lawsuits, another disincentive for companies to reveal much. 

Meta Llama 3.1
Image Credits: Meta

In the aforementioned paper, Meta researchers wrote that compared to earlier Llama models, Llama 3.1 405B was trained on an increased mix of non-English data (to improve its performance on non-English languages), more “mathematical data” and code (to improve the model’s mathematical reasoning skills), and recent web data (to bolster its knowledge of current events).

Recent reporting by Reuters revealed that Meta at one point used copyrighted e-books for AI training despite its own lawyers’ warnings. The company controversially trains its AI on Instagram and Facebook posts, photos and captions, and makes it difficult for users to opt out. What’s more, Meta, along with OpenAI, is the subject of an ongoing lawsuit brought by authors, including comedian Sarah Silverman, over the companies’ alleged unauthorized use of copyrighted data for model training.

“The training data, in many ways, is sort of like the secret recipe and the sauce that goes into building these models,” Ragavan Srinivasan, VP of AI program management at Meta, told TechCrunch in an interview. “And so from our perspective, we’ve invested a lot in this. And it is going to be one of these things where we will continue to refine it.”

Bigger context and tools

Llama 3.1 405B has a larger context window than previous Llama models: 128,000 tokens, or roughly the length of a 50-page book. A model’s context, or context window, refers to the input data (e.g. text) that the model considers before generating output (e.g. additional text).

One of the advantages of models with larger contexts is that they can summarize longer text snippets and files. When powering chatbots, such models are also less likely to forget topics that were recently discussed.
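
As a rough sketch of the mechanics, here is how a chatbot harness might trim conversation history to fit a context window. Everything here is illustrative: the token counting is faked with a whitespace split, and real systems use the model’s own tokenizer and smarter eviction strategies:

```python
# Minimal sketch of context-window management for a chatbot: drop the
# oldest turns once the token budget is exceeded. Whitespace splitting
# stands in for a real tokenizer.
def fit_to_context(turns: list[str], max_tokens: int) -> list[str]:
    kept, total = [], 0
    for turn in reversed(turns):      # walk from newest to oldest
        cost = len(turn.split())      # stand-in for a real token count
        if total + cost > max_tokens:
            break                     # budget exhausted: oldest turns fall off
        kept.append(turn)
        total += cost
    return list(reversed(kept))       # restore chronological order

history = ["user: hi", "bot: hello there", "user: summarize this long report please"]
# With a tiny 8-"token" budget, only the newest turn survives.
print(fit_to_context(history, 8))
```

A 128,000-token budget makes this trimming kick in far later, which is why a bigger window means the bot “forgets” less of the conversation.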

Two other new, smaller models Meta unveiled today, Llama 3.1 8B and Llama 3.1 70B — updated versions of the company’s Llama 3 8B and Llama 3 70B models released in April — also have 128,000-token context windows. The previous models’ contexts topped out at 8,000 tokens, which makes this upgrade fairly substantial — assuming the new Llama models can effectively reason across all that context.

Meta Llama 3.1
Image Credits: Meta

All of the Llama 3.1 models can use third-party tools, apps and APIs to complete tasks, like rival models from Anthropic and OpenAI. Out of the box, they’re trained to tap Brave Search to answer questions about recent events, the Wolfram Alpha API for math- and science-related queries, and a Python interpreter for validating code. In addition, Meta claims the Llama 3.1 models can use certain tools they haven’t seen before — to an extent.
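
A minimal sketch of what such a tool-calling loop can look like, with stub functions standing in for the real Brave Search, Wolfram Alpha and Python interpreter integrations (the routing logic below is our illustration, not Meta’s implementation):

```python
# Stub tools: in a real harness these would call the Brave Search API,
# the Wolfram Alpha API, and a sandboxed Python interpreter.
def brave_search(query: str) -> str:
    return f"[search results for: {query}]"      # stub

def wolfram_alpha(expression: str) -> str:
    return f"[computed: {expression}]"           # stub

def python_interpreter(code: str) -> str:
    return f"[executed: {code}]"                 # stub

TOOLS = {
    "brave_search": brave_search,
    "wolfram_alpha": wolfram_alpha,
    "python_interpreter": python_interpreter,
}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to the matching tool function."""
    fn = TOOLS.get(tool_call["name"])
    if fn is None:
        return "error: unknown tool"
    return fn(tool_call["arguments"])

# The model would emit something like this when asked about recent events:
print(dispatch({"name": "brave_search", "arguments": "EU DMA enforcement news"}))
```

The tool’s output is then fed back into the model’s context so it can compose a final answer.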

Building an ecosystem

If benchmarks are to be believed (not that benchmarks are the be-all and end-all in generative AI), Llama 3.1 405B is a very capable model indeed. That’d be a good thing, considering some of the painfully obvious limitations of previous-generation Llama models.

Llama 3 405B performs on par with OpenAI’s GPT-4, and achieves “mixed results” compared to GPT-4o and Claude 3.5 Sonnet, per human evaluators that Meta hired, the paper notes. While Llama 3 405B is better at executing code and generating plots than GPT-4o, its multilingual capabilities are overall weaker, and Llama 3 405B trails Claude 3.5 Sonnet in programming and general reasoning.

And because of its size, it needs beefy hardware to run. Meta recommends at least a server node.

That’s perhaps why Meta’s pushing its smaller new models, Llama 3.1 8B and Llama 3.1 70B, for general-purpose applications like powering chatbots and generating code. Llama 3.1 405B, the company says, is better reserved for model distillation — the process of transferring knowledge from a large model to a smaller, more efficient model — and generating synthetic data to train (or fine-tune) alternative models.
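
As a hedged sketch of the distillation idea (the logit values and temperature below are made up; production pipelines train on many batches with a mixed loss): a teacher’s logits are softened with a temperature, and the student is trained to minimize its divergence from those soft targets.

```python
import math

# Illustrative distillation sketch: a teacher's softened probabilities
# become targets the student mimics. All numbers here are invented.
def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

teacher_logits = [2.0, 1.0, 0.1]   # teacher's scores for 3 candidate tokens

# Higher temperature softens the distribution, exposing the teacher's
# relative preferences ("dark knowledge") to the student.
soft_targets = softmax(teacher_logits, temperature=2.0)

student_probs = softmax([1.5, 1.2, 0.5])

# KL divergence: the quantity the student minimizes to mimic the teacher.
kl = sum(t * math.log(t / s) for t, s in zip(soft_targets, student_probs))
print(f"soft targets: {[round(p, 3) for p in soft_targets]}")
print(f"KL(teacher || student) = {kl:.4f}")
```

The other use case Meta names, synthetic data generation, simply means sampling the big model’s outputs and using them as training text for a smaller one.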

To encourage the synthetic data use case, Meta said it has updated Llama’s license to let developers use outputs from the Llama 3.1 model family to develop third-party generative AI models (whether that’s a wise idea is up for debate). Importantly, the license still constrains how developers can deploy Llama models: App developers with more than 700 million monthly users must request a special license from Meta that the company will grant at its discretion.

Meta Llama 3.1
Image Credits: Meta

That change in licensing around outputs, which allays a major criticism of Meta’s models within the AI community, is a part of the company’s aggressive push for mindshare in generative AI.

Alongside the Llama 3.1 family, Meta is releasing what it’s calling a “reference system” and new safety tools — several of these block prompts that might cause Llama models to behave in unpredictable or undesirable ways — to encourage developers to use Llama in more places. The company is also previewing and seeking comment on the Llama Stack, a forthcoming API for tools that can be used to fine-tune Llama models, generate synthetic data with Llama and build “agentic” applications — apps powered by Llama that can take action on a user’s behalf.

“[What] We have heard repeatedly from developers is an interest in learning how to actually deploy [Llama models] in production,” Srinivasan said. “So we’re trying to start giving them a bunch of different tools and options.”

Play for market share

In an open letter published this morning, Meta CEO Mark Zuckerberg lays out a vision for the future in which AI tools and models reach the hands of more developers around the world, ensuring people have access to the “benefits and opportunities” of AI.

It’s couched very philanthropically, but implicit in the letter is Zuckerberg’s desire that these tools and models be of Meta’s making.

Meta’s racing to catch up to companies like OpenAI and Anthropic, and it is employing a tried-and-true strategy: give tools away for free to foster an ecosystem and then slowly add products and services, some paid, on top. Spending billions of dollars on models that it can then commoditize also has the effect of driving down Meta competitors’ prices and spreading the company’s version of AI broadly. It also lets the company incorporate improvements from the open source community into its future models.

Llama certainly has developers’ attention. Meta claims Llama models have been downloaded over 300 million times, and more than 20,000 Llama-derived models have been created so far.

Make no mistake, Meta’s playing for keeps. It is spending millions on lobbying regulators to come around to its preferred flavor of “open” generative AI. None of the Llama 3.1 models solve the intractable problems with today’s generative AI tech, like its tendency to make things up and regurgitate problematic training data. But they do advance one of Meta’s key goals: becoming synonymous with generative AI.

There are costs to this. In the research paper, the co-authors — echoing Zuckerberg’s recent comments — discuss energy-related reliability issues with training Meta’s ever-growing generative AI models.

“During training, tens of thousands of GPUs may increase or decrease power consumption at the same time, for example, due to all GPUs waiting for checkpointing or collective communications to finish, or the startup or shutdown of the entire training job,” they write. “When this happens, it can result in instant fluctuations of power consumption across the data center on the order of tens of megawatts, stretching the limits of the power grid. This is an ongoing challenge for us as we scale training for future, even larger Llama models.”

One hopes that training those larger models won’t force more utilities to keep old coal-burning power plants around.

A signage of Bharti Airtel Ltd

Bharti will become BT's biggest shareholder after buying a 25%, $4B stake from Altice

Image Credits: Pradeep Gaur / SOPA Images / LightRocket / Getty Images

BT, the U.K.’s former incumbent telecoms carrier, is picking up a major new investor today as telecoms companies look for stronger footing in the rapidly shifting technology and communications market. Bharti, the Indian tech and telecoms giant that owns Airtel, said it would purchase a 24.5% stake currently owned by Altice.

Based on BT’s market cap of around £13 billion ($16 billion) at the time of the deal, the stake is worth around $4 billion.

Bharti said in a statement that it would buy 9.99% immediately, and would acquire the remainder after regulatory clearance.

Altice has found itself on unstable footing over its debt-led acquisitions and corporate scandals, as detailed in this story at the end of 2023. Altice, which owns stakes in other technology and communications companies, had bought its stake in BT in several tranches, initially in 2021 and later in May 2023.

BT’s share price has dropped since then, partly due to the broader decline of technology and communications stocks. And Altice now appears to be paring down its operational assets: This deal comes on the heels of its sale of media platform Teads to web recommendation platform Outbrain less than two weeks ago for $725 million in cash and deferred payments, plus stock, in a transaction valued at $1 billion.

5G and AI are two of the biggest existential milestones for telcos at the moment. They might turn out to be threats or opportunities, depending on how carriers play their cards. Bharti cited both in its rationale for this deal, likely looking for better economies of scale in purchasing, development and strategy as competition heats up from technology giants whose new approaches to communication bypass telco infrastructure and threaten to further cannibalize carriers.

“Bharti hopes that this investment will further help create new synergies in the telecom sector between both countries in the areas of AI and 5G R&D and core engineering amongst others, offering great potential to collaborate on industry best practices and emerging technologies,” the company said in a statement. Airtel, Bharti’s mobile carrier, is in hot competition with Reliance’s Jio in India in what many consider a duopoly, so investing abroad gives Bharti more diversification.

Interestingly, BT — riding high on its incumbency status in the U.K. — was once the one doing the investing: It held a 21% stake in Bharti between 1997 and 2001.

“Bharti and British Telecom (BT) have an enduring relationship going back more than two decades wherein BT owned 21% stake along with 2 board seats in Bharti Airtel Limited from 1997-2001,” noted Bharti’s founder and chairman, Sunil Bharti Mittal, in a statement. “Today marks a significant milestone in Bharti Group’s history as we invest in BT — an iconic British Company.”

BT was considerably less verbose on the news of the deal.

“We welcome investors who recognise the long-term value of our business, and this scale of investment from Bharti Global is a great vote of confidence in the future of BT Group and our strategy,” Allison Kirkby, BT’s CEO, said in a statement, the company’s only comment on the deal. “BT has enjoyed a long association with Bharti Enterprises, and I’m pleased that they share our ambition and vision for the future of our business. They have a strong track record of success in the sector, and I look forward to ongoing and positive engagement with them in the months and years to come.”

Additional reporting by Manish Singh


Here are India's biggest AI startups based on how much money they've raised

India flag with AI

Image Credits: Jagmeet Singh / TechCrunch

India is very far from the “uncanny valley” of San Francisco, but it has a massive trove of engineering talent, and some of those people are hopping on the train and turning into founders and builders of AI startups.

The story of the AI startup ecosystem in India today is reminiscent of the early days of SaaS in the country: Funding is constrained — especially compared to the billions that AI startups in the U.S. and Europe are raising. But in areas like generative AI, we’re spotting signs of where VC money is being channeled. It’s going to home-grown talent, solving problems particular to their part of the world and bringing new approaches to the same challenges their developed-country counterparts are tackling.

Some Indian startups are looking to integrate local language support into their AI models to address growing demand from Indian consumers. And a few Indian startups, such as Pepper Content and Pocket FM, are also leveraging AI to create use cases for markets beyond India and enter the U.S. market.

That’s not to say it’s been easy. In India, funding for AI startups — including those working on infrastructure and services — dropped nearly 80% in 2023, to $113.4 million from $554.7 million in 2022, according to Tracxn data shared with TechCrunch. In contrast, AI startup funding in the U.S. grew about 211% to $16.2 billion last year from $5.2 billion in 2022. So far this year, AI startup investments have hit a whopping $13 billion in the U.S.; in the same period, just $92 million has been invested in Indian AI startups.

Dev Khare, a partner at Lightspeed Venture Partners India, told TechCrunch that India has some good opportunities for AI in consumer applications, whether that is creating content in Indic languages, offering virtual influencers or creating short videos and games using AI.

“A decent majority of the market in SaaS in the last 10 years has been going off to established markets and trying to replicate those at lower cost and with better support. That very valid market has led to some large outcomes in India. But you can’t do that in a newly emerging market, like AI or native AI. You have to take a risk and say, ‘This is where the world will be a few years from now. That market doesn’t exist today, but I’m going to bet it exists. I’m going to build for that.’ That’s a bit of a newer DNA for India. We’ve seen that happen,” he said.

In the last 18 months, Lightspeed’s India and Southeast Asia arm has invested over $150 million in AI, including new investments and follow-ons in existing AI-enabled startups. Globally, the firm has invested more than $1 billion across over 70 AI companies in the same period.

Global and local investors are actively scouting for AI startups in India: The country helps them diversify their portfolios and is better positioned amid ongoing geopolitical conflicts affecting other significant markets. Growing data sovereignty concerns across nations also give them a reason to look for local startups building promising solutions for the world’s most populous country.

Indian AI startups that have raised the most money

Krutrim

Founder: Bhavish Aggarwal
Total funding raised: $50 million
Key investors: Matrix Partners India

Led by Ola founder Bhavish Aggarwal, Krutrim (a Sanskrit-derived Hindi word meaning “artificial”) is India’s first AI unicorn, valued at $1 billion on just $50 million raised. Launched in December 2023 in Bengaluru, Krutrim is building a large language model (LLM) based on Indian languages and English. Earlier this year, it introduced an AI chatbot, which (not unlike its Western counterparts) saw a backlash upon its public beta launch over inaccurate results. The startup claims its AI model improves through regular updates.

Sarvam AI

Founders: Vivek Raghavan and Pratyush Kumar
Total funding raised: $41 million
Key investors: Lightspeed Venture Partners, Peak XV Partners and Khosla Ventures

Sarvam AI (Telugu for “everything”) is India’s other high-profile startup working on LLMs based on Indian languages. The startup was co-founded by Vivek Raghavan and Pratyush Kumar, who both worked previously with tech veteran Nandan Nilekani on IIT Madras’ project AI4Bharat. The Bengaluru-based startup emerged from stealth in December and aims to offer full-stack generative AI offerings, including a platform to let enterprises develop GenAI apps based on Sarvam’s LLM and contribute to open source models and datasets. In February, Sarvam AI partnered with Microsoft to launch voice-based AI tools and bring its Indic voice LLM to Azure.

Mad Street Den

Founders: Ashwini Asokan and Anand Chandrasekaran
Total funding raised: $67 million
Key investors: Avatar Growth Capital, Peak XV Partners and Alpha Wave Global

Computer vision startup Mad Street Den is building AI solutions for enterprise customers. The Chennai-based startup, co-founded by the neuroscientist-designer couple Ashwini Asokan and Anand Chandrasekaran in 2016, initially introduced its vision tech for the retail segment but has since expanded to other verticals, including finance, insurance, healthcare and logistics. Its bigger vision goes beyond its home market, per its mission: “to make people all over the globe A.I natives.”

Wysa

Founders: Jo Aggarwal and Ramakant Vempati
Total funding raised: $25 million
Key investors: HealthQuad, W Health, British International Investment and Google Assistant Fund

Wysa is a mental health tech startup that uses AI to offer an “emotionally intelligent” therapist chatbot that helps users talk through their feelings. Managed by Wysa’s mental health professionals, the chatbot is used by over 6.5 million people across more than 95 countries and diverse age groups. The Bengaluru-based startup, which also has operations in Boston and London, raised $20 million in July 2022. It was co-founded by Jo Aggarwal and her husband, Ramakant Vempati, in 2016 after Aggarwal fell into a deep depression.

Neysa Networks

Founders: Sharad Sanghi and Anindya Das
Total funding raised: $20 million
Key investors: Matrix Partners India, Nexus Venture Partners and NTTVC

Mumbai-based Neysa Networks is led by seasoned tech entrepreneur Sharad Sanghi, who previously founded cloud and data company Netmagic Solutions. It offers a variety of generative AI platforms and services that let businesses deploy AI and machine learning. The startup’s Nebula platform is used to scale AI projects using on-demand GPU infrastructure and to train and run AI models in the cloud. The company’s Palvera platform provides multi-vendor, multi-input observability and lets users preemptively identify issues using a unified data lake and preexisting telemetry datasets. The Aegis platform focuses on AI/ML security.

Here are some emerging Indian AI startups to watch

Upliance AI

Founders: Mahek Mody and Mohit Sharma
Total funding raised: $5.5 million
Key investors: Khosla Ventures and Draper Associates

Upliance AI brings AI to home appliances to let people cook over 500 new dishes at home. The Bengaluru-based startup plans to raise $10 million to $15 million early next year to bolster its market presence.

Scribble Data

Founders: Venkata Pingali and Indrayudh Ghoshal
Total funding raised: $2.3 million
Key investor: Blume Ventures

Scribble Data offers domain-specific AI assistants to large North American and European insurers to help them scale their back-end business capacity. It is headquartered in Bengaluru and has a sales team in Toronto.

Expertia AI

Founders: Kanishk Shukla and Akshay Gugnani
Total funding raised: $1.3 million
Key investors: Chiratae Ventures, Endiya Partners and Entrepreneur First

Expertia AI, based in Bengaluru, helps businesses automate recruitment using AI, reducing hiring time to as little as 24 hours. It automates sourcing, screening, outreach, engagement, assessment, interviewing and scheduling using proprietary deep-learning algorithms. The startup is currently raising $3 million from a lead investor, with participation from existing investors.

OnFinance

Founders: Anuj Srivastava and Priyesh Srivastava
Total funding raised: $1.1 million
Key investors: Silverneedle Ventures, Indian Angel Network and LetsVenture

Bengaluru-based OnFinance helps banks and wealth management companies with its AI co-pilots that work in areas ranging from equity research to compliance to wealth advisory.

Helium

Founders: Shray Arora and Sidharth Sahni
Total funding raised: $550,000
Key investor: Merak Ventures

Helium, based in Delhi, helps e-commerce brands build direct-to-consumer web stores using AI and reactive headless storefronts.

Soket Labs

Founder: Abhishek Upperwal
Total funding raised: $140,000

Soket Labs, based in Bengaluru and Gurugram, is an AI research firm that developed the open source multilingual LLM Pragna-1B through its in-house GenAI Studio. It plans to raise $7 million in a seed round in the next two to three months.

KissanAI

Founder: Pratik Desai

Based in Surat with an extended office in the Bay Area, KissanAI serves agriculture and adjacent domains using its GenAI platform AgriCopilot and a family of domain-specific Agri LLMs, Dhenu. The startup is currently bootstrapped and is backed by the founder’s family and friends, though it plans to raise $3 million to $4 million in a round between seed and Series A.

Shorthills AI

Founders: Pawan Prabhat and Paramdeep Singh

Shorthills AI, based in Gurugram, was founded in June 2018 by Pawan Prabhat and Paramdeep Singh. The pair previously founded the accounting training platform EduPristine. The bootstrapped startup builds custom AI tools for enterprises and has customers in the U.S. and India.

Meta releases its biggest 'open' AI model yet

people walking past Meta signage

Image Credits: TOBIAS SCHWARZ/AFP / Getty Images

Meta’s latest open source AI model is its biggest yet.

Today, Meta said it is releasing Llama 3.1 405B, a model containing 405 billion parameters. Parameters roughly correspond to a model’s problem-solving skills, and models with more parameters generally perform better than those with fewer parameters.
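For a rough sense of what "parameters" means here: a model's parameter count is simply the total number of learned weights across its layers. The toy sketch below uses invented, tiny layer sizes (nothing like Llama's actual architecture) just to show how the count is tallied:

```python
# Toy illustration: a model's parameter count is the total number of
# learned weights across its layers. The layer sizes here are made up
# and tiny; Llama 3.1 405B's real architecture is vastly larger.
def dense_layer_params(n_in: int, n_out: int) -> int:
    # weight matrix (n_in x n_out) plus one bias per output unit
    return n_in * n_out + n_out

layers = [(512, 2048), (2048, 2048), (2048, 512)]  # hypothetical sizes
total = sum(dense_layer_params(i, o) for i, o in layers)
print(f"{total:,} parameters")  # a few million, vs. Llama 3.1's 405 billion
```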

At 405 billion parameters, Llama 3.1 405B isn’t the absolute largest open source model out there, but it’s the biggest in recent years. Trained using 16,000 Nvidia H100 GPUs, it also benefits from newer training and development techniques that Meta claims make it competitive with leading proprietary models like OpenAI’s GPT-4o and Anthropic’s Claude 3.5 Sonnet (with a few caveats).

As with Meta’s previous models, Llama 3.1 405B is available to download or use on cloud platforms like AWS, Azure and Google Cloud. It’s also being used on WhatsApp and Meta.ai, where it’s powering a chatbot experience for U.S.-based users.

New and improved

Like other open and closed source generative AI models, Llama 3.1 405B can perform a range of different tasks, from coding and answering basic math questions to summarizing documents in eight languages (English, German, French, Italian, Portuguese, Hindi, Spanish and Thai). It’s text-only, meaning that it can’t, for example, answer questions about an image, but most text-based workloads — think analyzing files like PDFs and spreadsheets — are within its purview.

Meta wants to make it known that it is experimenting with multimodality. In a paper published today, researchers at the company write that they’re actively developing Llama models that can recognize images and videos, and understand (and generate) speech. Still, these models aren’t yet ready for public release.

To train Llama 3.1 405B, Meta used a dataset of 15 trillion tokens dating up to 2024 (tokens are parts of words that models can more easily internalize than whole words; at the usual rule of thumb of about 0.75 words per token, 15 trillion tokens translates to a mind-boggling 11 trillion or so words). It’s not a new training set per se, since Meta used the base set to train earlier Llama models, but the company claims it refined its data curation pipelines and adopted “more rigorous” quality assurance and data filtering approaches in developing this model.
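To illustrate what a token is, here is a toy greedy subword tokenizer. It is a deliberate simplification of real algorithms like BPE or WordPiece, with a made-up vocabulary, but it shows how a word gets split into the smaller pieces a model actually consumes:

```python
def greedy_tokenize(word: str, vocab: set[str]) -> list[str]:
    """Greedy longest-prefix-match subword split (a simplification of
    real tokenizers like BPE/WordPiece, which learn merges from data)."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest prefix first
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])  # fall back to a single character
            i += 1
    return pieces

vocab = {"fan", "tas", "tic", "fantas"}  # made-up vocabulary
print(greedy_tokenize("fantastic", vocab))  # ['fantas', 'tic']
```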

The company also used synthetic data (data generated by other AI models) to fine-tune Llama 3.1 405B. Most major AI vendors, including OpenAI and Anthropic, are exploring applications of synthetic data to scale up their AI training, but some experts believe that synthetic data should be a last resort due to its potential to exacerbate model bias.

For its part, Meta insists that it “carefully balance[d]” Llama 3.1 405B’s training data, but declined to reveal exactly where the data came from (outside of webpages and public web files). Many generative AI vendors see training data as a competitive advantage and so keep it and any information pertaining to it close to the chest. But training data details are also a potential source of IP-related lawsuits, another disincentive for companies to reveal much. 

Meta Llama 3.1
Image Credits: Meta

In the aforementioned paper, Meta researchers wrote that compared to earlier Llama models, Llama 3.1 405B was trained on an increased mix of non-English data (to improve its performance on non-English languages), more “mathematical data” and code (to improve the model’s mathematical reasoning skills), and recent web data (to bolster its knowledge of current events).

Recent reporting by Reuters revealed that Meta at one point used copyrighted e-books for AI training despite its own lawyers’ warnings. The company controversially trains its AI on Instagram and Facebook posts, photos and captions, and makes it difficult for users to opt out. What’s more, Meta, along with OpenAI, is the subject of an ongoing lawsuit brought by authors, including comedian Sarah Silverman, over the companies’ alleged unauthorized use of copyrighted data for model training.

“The training data, in many ways, is sort of like the secret recipe and the sauce that goes into building these models,” Ragavan Srinivasan, VP of AI program management at Meta, told TechCrunch in an interview. “And so from our perspective, we’ve invested a lot in this. And it is going to be one of these things where we will continue to refine it.”

Bigger context and tools

Llama 3.1 405B has a larger context window than previous Llama models: 128,000 tokens, roughly 100,000 words, or about the length of a novel. A model’s context, or context window, refers to the input data (e.g. text) that the model considers before generating output (e.g. additional text).

One of the advantages of models with larger contexts is that they can summarize longer text snippets and files. When powering chatbots, such models are also less likely to forget topics that were recently discussed.
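The chatbot point can be sketched concretely: a model only "sees" the most recent tokens that fit in its window, so older conversation turns silently drop out. The token counts below are invented for illustration:

```python
# Sketch: why a larger context window matters for chat. A model only
# "sees" the most recent tokens that fit in its window; older turns
# fall out. (Token counts here are invented for illustration.)
def visible_history(turns: list[tuple[str, int]], window: int) -> list[str]:
    """Keep the most recent turns whose token counts fit in `window`."""
    kept, used = [], 0
    for text, n_tokens in reversed(turns):
        if used + n_tokens > window:
            break
        kept.append(text)
        used += n_tokens
    return list(reversed(kept))

turns = [("intro", 6000), ("question 1", 3000), ("question 2", 2000)]
print(visible_history(turns, window=8000))     # an 8K window drops "intro"
print(visible_history(turns, window=128_000))  # a 128K window keeps everything
```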

Two other new, smaller models Meta unveiled today, Llama 3.1 8B and Llama 3.1 70B — updated versions of the company’s Llama 3 8B and Llama 3 70B models released in April — also have 128,000-token context windows. The previous models’ contexts topped out at 8,000 tokens, which makes this upgrade fairly substantial — assuming the new Llama models can effectively reason across all that context.

Meta Llama 3.1
Image Credits: Meta

All of the Llama 3.1 models can use third-party tools, apps and APIs to complete tasks, as rival models from Anthropic and OpenAI can. Out of the box, they’re trained to tap Brave Search to answer questions about recent events, the Wolfram Alpha API for math- and science-related queries, and a Python interpreter for validating code. In addition, Meta claims the Llama 3.1 models can use certain tools they haven’t seen before — to an extent.
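The general tool-use pattern, independent of any one vendor, looks roughly like this: the model emits a structured request, and the surrounding application runs the named tool and feeds the result back. The registry and output format below are hypothetical, not Meta's actual API:

```python
# Minimal sketch of the tool-use loop (hypothetical tool registry and
# model output format; not Meta's actual API).
def calculator(expression: str) -> str:
    # stand-in for a real tool like a sandboxed Python interpreter
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def dispatch(model_output: dict) -> str:
    """If the model asks for a registered tool, run it; otherwise
    return the model's plain-text answer."""
    if model_output.get("tool") in TOOLS:
        return TOOLS[model_output["tool"]](model_output["input"])
    return model_output.get("text", "")

print(dispatch({"tool": "calculator", "input": "12 * 7"}))  # 84
```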

Building an ecosystem

If benchmarks are to be believed (not that benchmarks are the be-all and end-all in generative AI), Llama 3.1 405B is a very capable model indeed. That’d be a good thing, considering some of the painfully obvious limitations of previous-generation Llama models.

Llama 3.1 405B performs on par with OpenAI’s GPT-4, and achieves “mixed results” compared to GPT-4o and Claude 3.5 Sonnet, per human evaluators that Meta hired, the paper notes. While Llama 3.1 405B is better at executing code and generating plots than GPT-4o, its multilingual capabilities are overall weaker, and it trails Claude 3.5 Sonnet in programming and general reasoning.

And because of its size, it needs beefy hardware to run. Meta recommends at least a server node.

That’s perhaps why Meta’s pushing its smaller new models, Llama 3.1 8B and Llama 3.1 70B, for general-purpose applications like powering chatbots and generating code. Llama 3.1 405B, the company says, is better reserved for model distillation — the process of transferring knowledge from a large model to a smaller, more efficient model — and generating synthetic data to train (or fine-tune) alternative models.
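A minimal sketch of the soft-label idea behind distillation: the student model is trained to match the teacher's full output distribution, not just its top answer. The probabilities below are invented, and real pipelines use temperature-scaled logits inside a training framework:

```python
import math

def cross_entropy(p_teacher: list[float], p_student: list[float]) -> float:
    """Soft-label cross-entropy: lower means the student's output
    distribution more closely mimics the teacher's."""
    return -sum(t * math.log(s) for t, s in zip(p_teacher, p_student))

teacher = [0.7, 0.2, 0.1]        # large model's output distribution (invented)
good_student = [0.65, 0.25, 0.1]
bad_student = [0.1, 0.2, 0.7]

# The closer student incurs the lower distillation loss.
print(cross_entropy(teacher, good_student) < cross_entropy(teacher, bad_student))  # True
```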

To encourage the synthetic data use case, Meta said it has updated Llama’s license to let developers use outputs from the Llama 3.1 model family to develop third-party generative AI models (whether that’s a wise idea is up for debate). Importantly, the license still constrains how developers can deploy Llama models: App developers with more than 700 million monthly users must request a special license from Meta, which the company will grant at its discretion.

Meta Llama 3.1
Image Credits: Meta

That change in licensing around outputs, which allays a major criticism of Meta’s models within the AI community, is a part of the company’s aggressive push for mindshare in generative AI.

Alongside the Llama 3.1 family, Meta is releasing what it’s calling a “reference system” and new safety tools — several of these block prompts that might cause Llama models to behave in unpredictable or undesirable ways — to encourage developers to use Llama in more places. The company is also previewing and seeking comment on the Llama Stack, a forthcoming API for tools that can be used to fine-tune Llama models, generate synthetic data with Llama and build “agentic” applications — apps powered by Llama that can take action on a user’s behalf.

“[What] We have heard repeatedly from developers is an interest in learning how to actually deploy [Llama models] in production,” Srinivasan said. “So we’re trying to start giving them a bunch of different tools and options.”

Play for market share

In an open letter published this morning, Meta CEO Mark Zuckerberg lays out a vision for the future in which AI tools and models reach the hands of more developers around the world, ensuring people have access to the “benefits and opportunities” of AI.

It’s couched very philanthropically, but implicit in the letter is Zuckerberg’s desire that these tools and models be of Meta’s making.

Meta’s racing to catch up to companies like OpenAI and Anthropic, and it is employing a tried-and-true strategy: give tools away for free to foster an ecosystem and then slowly add products and services, some paid, on top. Spending billions of dollars on models that it can then commoditize also has the effect of driving down Meta competitors’ prices and spreading the company’s version of AI broadly. It also lets the company incorporate improvements from the open source community into its future models.

Llama certainly has developers’ attention. Meta claims Llama models have been downloaded over 300 million times, and more than 20,000 Llama-derived models have been created so far.

Make no mistake, Meta’s playing for keeps. It is spending millions on lobbying regulators to come around to its preferred flavor of “open” generative AI. None of the Llama 3.1 models solve the intractable problems with today’s generative AI tech, like its tendency to make things up and regurgitate problematic training data. But they do advance one of Meta’s key goals: becoming synonymous with generative AI.

There are costs to this. In the research paper, the co-authors — echoing Zuckerberg’s recent comments — discuss energy-related reliability issues with training Meta’s ever-growing generative AI models.

“During training, tens of thousands of GPUs may increase or decrease power consumption at the same time, for example, due to all GPUs waiting for checkpointing or collective communications to finish, or the startup or shutdown of the entire training job,” they write. “When this happens, it can result in instant fluctuations of power consumption across the data center on the order of tens of megawatts, stretching the limits of the power grid. This is an ongoing challenge for us as we scale training for future, even larger Llama models.”

One hopes that training those larger models won’t force more utilities to keep old coal-burning power plants around.

Apple's iOS 18 may be 'the biggest' software update in iPhone history, report says

Apple logo at entrance to an Apple store

Image Credits: Nicholas Kamm / AFP / Getty Images

Apple’s upcoming iOS 18 software update may be “the biggest” in the company’s history, according to Bloomberg’s Mark Gurman. iOS 18 is expected to be announced at Apple’s annual WWDC event in June.

“I’m told that the new operating system is seen within the company as one of the biggest iOS updates–if not the biggest–in the company’s history,” Gurman wrote in his latest Power On newsletter. “With that knowledge, Apple’s developers conference in June should be pretty exciting.”

The news comes a few months after Gurman reported that Apple was hoping for iOS 18 to be its most “ambitious and compelling” update in years.

Although the latest report doesn’t detail any specifics, Gurman has previously reported that Apple is planning to launch a newer version of Siri that leverages a new AI system. Apple is also expected to launch new features that improve how both Siri and the Messages app can auto-complete sentences and field questions. Plus, Apple Music is expected to get auto-generated playlists, which is something that Spotify introduced last year.

Apple is said to be looking at integrating generative AI into development tools like Xcode to allow developers to write new applications faster. In addition, Apple’s productivity apps, like Pages and Keynote, should also get generative AI updates.

iOS 18 could also bring RCS support, as Apple revealed back in November that it plans to add support for the RCS standard on iOS in 2024. At the time, Apple said it believes that “RCS Universal Profile will offer a better interoperability experience when compared to SMS or MMS.” The major reversal follows public pressure on Apple to add support for RCS to iPhones. It’s worth noting that although Apple plans to adopt RCS, it has confirmed that messages sent from an Android user to an iPhone will still be displayed in green bubbles.

We’re still four months away from the big iOS 18 reveal, so we may learn more about the software update as we get closer to the official announcement over the next few months.


Cloud infrastructure saw its biggest revenue growth ever in Q4

Futuristic data center with blueish tint and colored lights on the different server banks.

Image Credits: IR_Stone / Getty Images

For the last several quarters we’ve seen a lull in the expansion of the cloud infrastructure market, with lower growth numbers than we’ve been accustomed to seeing in the past. That changed this quarter thanks in large part to interest in generative AI. The new revenue wave began just last year, driven by the ChatGPT hype cycle, but has already pushed cloud infra revenue in the fourth quarter of 2023 to $74 billion, up $12 billion over last year at this time and $5.6 billion over Q3, the largest quarter-over-quarter increase the cloud market has experienced, per Synergy Research.

The cloud infrastructure market for the entire year grew to an eye-popping $270 billion, up from $212 billion in 2022. Synergy’s John Dinsdale predicts that the growth we saw in the last year is here to stay, even as the market continues to mature and the law of large numbers takes increasing effect. “Cloud is now a massive market and it takes a lot to move the needle, but AI has done just that. Looking ahead, the law of large numbers means that the cloud market will never return to the growth rates seen prior to 2022, but Synergy does forecast that growth rates will now stabilize, resulting in huge ongoing annual increases in cloud spending,” he said in a statement.

Jamin Ball, a partner at Altimeter Capital, writing in his excellent Clouded Judgement newsletter, sees a similarly bright future for these vendors:

The hyperscalers are really starting to see the tailwind of new workload growth overtake the headwind of optimizations. Sometimes new workloads are AI related. Sometimes they’re classic cloud migrations. The hyperscalers benefit from massive scale, distribution, trust and depth of customer relationships in ways no other software companies do. They also are seeing AI revenue (largely compute) show up sooner than anyone else.

Ball’s data supports Dinsdale’s claims around diminishing growth rates, but in a market so large, growth for growth’s sake becomes a far less important metric:

Charts showing various growth metrics for AWS, Azure and Google Cloud.
Image Credits: Jamin Ball, Clouded Judgement, Altimeter Capital

For now, it appears that Microsoft’s lucrative investment and partnership with OpenAI is giving it an edge in the market: The company’s share grew two full percentage points to 25% in the fourth quarter, a remarkable one-quarter jump. Amazon is still king of the mountain with 31% share, albeit down two points from last quarter. It would be easy to say Amazon’s loss was Microsoft’s gain, though it’s likely not that simple, with more nuanced shifts playing out across the market. Meanwhile, Google held steady at around 11% share.

Synergy reports that the Big 3 account for 67% of overall market share, or approximately $50 billion in cloud revenue from the three largest companies in a single quarter.

From a dollars perspective, the numbers are, per usual, a bit mind-boggling, with Amazon coming in at $23 billion, Microsoft at $18.5 billion and Google with around $8 billion. If these numbers don’t match the reported numbers exactly, that’s because these companies often combine different types of cloud revenue to arrive at the reported figures. Synergy looks at IaaS, PaaS and hosted private cloud services, and the companies’ reported cloud numbers may include SaaS and other revenue that Synergy doesn’t count.
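Those dollar figures and the market shares quoted above hang together: each vendor's revenue divided by its share implies a total quarterly market in the low-to-mid $70 billion range, and the three together land at the "approximately $50 billion" Synergy cites:

```python
# Q4 cloud infrastructure revenue (billions of USD) and market shares,
# both taken from the figures quoted in the article.
revenue = {"AWS": 23.0, "Azure": 18.5, "Google Cloud": 8.0}
share = {"AWS": 0.31, "Azure": 0.25, "Google Cloud": 0.11}

big3_total = sum(revenue.values())  # ~$49.5B, the "approximately $50 billion"
implied_market = {name: revenue[name] / share[name] for name in revenue}

print(f"Big 3 combined: ${big3_total:.1f}B")
for name, total in implied_market.items():
    print(f"{name}: implies a total quarterly market of ~${total:.0f}B")
```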

Cloud infrastructure market share graph from Synergy Research
Image Credits: Synergy Research

In terms of quarterly percentage growth, keeping in mind those caveats about how the companies measure revenue, AWS was up 13%, Azure was up 30% and Google Cloud was up around 25% (although Google doesn’t separate out SaaS revenue in that number).

One thing was clear last year: Microsoft put the heat on Amazon and left the company on its heels, perhaps for the first time, with its aggressive dealmaking with OpenAI.

Scott Raney, a partner at Redpoint, told TechCrunch at re:Invent in December that Amazon was clearly playing catch-up when it came to AI, and it was an unusual place for the company to find itself. “This might be the first time where people looked and said that Amazon isn’t in the pole position to capitalize on this massive opportunity. What Microsoft’s done around Copilot and the fact Q comes out [this week] means that in reality, they’re absolutely 100% playing catch-up,” Raney said at the time.

While generative AI represents a massive opportunity for all the cloud vendors, it’s still very much early days. We always like to say that first to market is a huge advantage, and it certainly has been for Amazon all these years. Whether Microsoft’s aggressive approach to AI represents a similar advantage isn’t clear yet, but it’s hard to ignore a two percentage point market share increase in a single quarter. For now it feels like Microsoft has taken the lead when it comes to AI in the enterprise, but Google and Amazon still have plenty of time left on the clock to figure it out.

'World's biggest casino' app exposed customers' personal data

Customers use slot machines inside a casino representing WinStar

Image Credits: Patrick T. Fallon / AFP / Getty Images

The startup that develops the phone app for casino resort giant WinStar has secured an exposed database that was spilling customers’ private information to the open web.

Oklahoma-based WinStar bills itself as the “world’s biggest casino” by square footage. The casino and hotel resort also offers an app, My WinStar, in which guests can access self-service options during their hotel stay, their rewards points and loyalty benefits, and casino winnings.

The app is developed by a Nevada software startup called Dexiga.

The startup left one of its logging databases on the internet without a password, allowing anyone with knowledge of its public IP address to access the WinStar customer data stored within using only their web browser.
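The article doesn't name the database technology, but exposed logging databases commonly answer plain, unauthenticated HTTP requests at a public IP address, which is what makes them readable from an ordinary browser. A hypothetical sketch of the kind of probe a researcher might perform (the port and IP below are placeholder assumptions, not details from the story):

```python
import json
import urllib.request

DEFAULT_PORT = 9200  # a common default for Elasticsearch-style log stores; an assumption


def root_url(host: str, port: int = DEFAULT_PORT) -> str:
    """Build the root URL an unauthenticated probe would fetch."""
    return f"http://{host}:{port}/"


def probe(host: str, port: int = DEFAULT_PORT) -> dict:
    """Fetch whatever the endpoint returns at its root. No credentials are
    needed when the server has been left on the internet without a password."""
    with urllib.request.urlopen(root_url(host, port), timeout=5) as resp:
        return json.loads(resp.read())


# probe("203.0.113.7")  # placeholder documentation-range IP, not the real server
```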

Dexiga took the database offline after TechCrunch alerted the company to the security lapse.

Three screenshots of the My WinStar app
Screenshots of the My WinStar app. Image Credits: Google Play (screenshot)

Anurag Sen, a good-faith security researcher who has a knack for discovering inadvertently exposed sensitive data on the internet, found the database containing personal information, but it was initially unclear who the database belonged to.

Sen said the personal data included full names, phone numbers, email addresses and home addresses. Sen shared details of the exposed database with TechCrunch to help identify its owner and disclose the security lapse.

TechCrunch examined some of the exposed data and verified Sen’s findings. The database also contained an individual’s gender and the IP address of the user’s device, TechCrunch found.

None of the data was encrypted, though some sensitive data — such as a person’s date of birth — was redacted and replaced with asterisks.
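The asterisk masking described above is a common light-touch redaction. A minimal sketch of what it looks like in practice (the field name `dob` is an assumption, not taken from the exposed records):

```python
import re


def redact_dob(record: dict) -> dict:
    """Replace date-of-birth digits with asterisks, mirroring the masking
    seen in the exposed logs. The field name here is hypothetical."""
    masked = dict(record)
    if "dob" in masked:
        masked["dob"] = re.sub(r"\d", "*", masked["dob"])
    return masked


print(redact_dob({"name": "Jane Doe", "dob": "1990-04-12"}))
# {'name': 'Jane Doe', 'dob': '****-**-**'}
```

Note that this protects only the masked field; the rest of the record, like names and addresses in this case, remains readable.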

A review of the exposed data by TechCrunch found an internal user account and password associated with Dexiga founder Rajini Jayaseelan.

Dexiga’s website says its tech platform powers the My WinStar app.

To confirm the source of the suspected spill, TechCrunch downloaded and installed the My WinStar app on an Android device and signed up using a phone number controlled by TechCrunch. That phone number instantly appeared in the exposed database, confirming that the database was linked to the My WinStar app.

TechCrunch contacted Jayaseelan and shared the IP address of the exposed database. The database became inaccessible a short time later.

In an email, Jayaseelan said Dexiga secured the database but claimed the database contained “publicly available information” and that no sensitive data was exposed.

Dexiga said the incident resulted from a log migration in January. Dexiga did not provide a specific date when the database became exposed. The exposed database contained rolling daily logs dating back to January 26 at the time it was secured.

Jayaseelan would not say if Dexiga has the technical means, such as access logs, to determine if anyone else accessed the database while it was exposed to the internet. Jayaseelan also would not say if Dexiga has notified WinStar of the security lapse, or if Dexiga would inform affected customers that their information was exposed. It is not immediately known how many individuals had personal data exposed by the data spill.

“We are further investigating the incident, continue to monitor our IT systems, and will take necessary future actions accordingly,” Dexiga said in response.

WinStar’s general manager Jack Parkinson did not respond to TechCrunch’s emails requesting comment.

Read more on TechCrunch:

Researchers say attackers are mass-exploiting new Ivanti VPN flaw

Security flaw in a popular smart helmet allowed silent location tracking

Government hackers targeted iPhone owners with zero-days, Google says

HopSkipDrive says personal data of 155,000 drivers stolen in data breach