Whataburger app becomes unlikely power outage map after Houston hurricane

Image Credits: Whataburger

Fast-food chain Whataburger’s app has gone viral in the wake of Hurricane Beryl, which left around 1.8 million utility customers in Houston, Texas without power. Hundreds of thousands of those people may remain without power for days as Houston anticipates a heat wave, with temperatures climbing into the mid-90s.

Amid frustrations with the local utility company CenterPoint Energy, which doesn’t offer an app, some Houstonians got creative with their attempts to track the power outages. They turned to the Whataburger app instead.

Whataburger is a San Antonio-based fast-food chain with 127 stores in the Houston area, according to Newsweek. On the Whataburger app, users can see a map of Whataburger locations, with an orange logo indicating a store is open, and a grey logo meaning it’s closed.

Since CenterPoint Energy doesn’t have an online map of outages, an X user with the screen name BBQBryan found that the map of which Whataburger stores are open could be a useful tool for seeing where there’s power.
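The improvised technique, treating each closed store as a possible outage signal, can be sketched in a few lines. The store records and fields below are hypothetical, since the Whataburger app exposes this information only through its map UI, not a public API:

```python
# Sketch: infer likely power-outage areas from store open/closed status.
# The store data below is invented for illustration; a closed store is
# only a weak outage signal (it may be closed for storm damage or
# staffing), so a ZIP is flagged only when no store in it is open.

def likely_outage_zips(stores):
    """Return ZIP codes where every listed store is closed."""
    by_zip = {}
    for store in stores:
        by_zip.setdefault(store["zip"], []).append(store["open"])
    return sorted(z for z, statuses in by_zip.items() if not any(statuses))

stores = [
    {"zip": "77002", "open": False},
    {"zip": "77002", "open": False},
    {"zip": "77005", "open": True},
    {"zip": "77005", "open": False},
]
print(likely_outage_zips(stores))  # ['77002']
```

As the caveat in the code notes, this is a rough proxy at best, which is exactly the limitation the article raises below.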

“The Whataburger app works as a power outage tracker, handy since the electric company doesn’t show a map,” BBQBryan wrote in a post that now has over 22,000 likes and 6.9 million impressions.

“Well there’s a use for our app we didn’t think of!” the Whataburger X account replied. “We hope you and everyone else are okay!”

This viral moment seems to have boosted Whataburger’s download numbers. In the iOS App Store, Whataburger is currently ranked 16th in the food and drink category. Just three weeks ago, it was ranked 40th.

Though the Whataburger revelation is funny, the app’s value is indicative of ongoing issues in Texas, where residents feel unsupported in the face of extreme weather. In 2021, when Texas was hit with a fierce winter storm, millions of residents were left without power in dangerous, freezing weather. To make matters worse, Senator Ted Cruz (R-TX) was spotted flying to a resort in Mexico while his state was in the midst of a historically fatal power grid failure. So, when Cruz made a post on Sunday urging Texans to stay safe amid the hurricane’s landfall, some constituents were frustrated by the hypocrisy and responded in kind on X.

It’s an infrastructural failure that a burger app has become a crucial resource for Houston residents looking for more information on power outages. The Whataburger app may be able to offer a vague idea of where there’s power, but there could be other reasons why a store is closed — after all, Houston is in the immediate aftermath of a natural disaster.

India's Rapido becomes a unicorn with fresh $120M funding

Image Credits: Dhiraj Singh / Bloomberg / Getty Images

Bike-taxi startup Rapido is the latest Indian startup to become a unicorn, meaning it reached $1 billion in valuation. The 8-year-old firm has raised $120 million in a new funding round led by WestBridge Capital, according to a regulatory filing.

The new capital, a Series E infusion, underscores Rapido’s growing prominence in India’s mobility sector, where it has emerged as a formidable challenger to the long-standing duopoly of Uber and Ola. Rapido is simultaneously bolstering Swiggy’s competitive stance against rival Zomato in the fiercely contested food-delivery market.

Swiggy led Rapido’s last round in April 2022, which valued the mobility startup at $800 million. Rapido has raised about $430 million to date.

Rapido’s focus on two-wheeler transportation, instead of cabs, has allowed it to navigate the challenges that have hindered the growth of traditional cab-hailing services in India, capitalizing on the widespread use of motorcycles and scooters in the country’s congested urban centers.

Rapido didn’t respond to a request for comment.

In Rapido, Swiggy has found a delivery partner that is helping it serve growing food delivery order volumes in the country. Rapido, in turn, is able to offer its drivers more work through the tie-up with Swiggy, according to an investor in Rapido who requested anonymity to discuss strategy. Swiggy eventually plans to increase its stake in Rapido, according to a person familiar with the situation, but not before its IPO.

Rapido also engaged with Khazanah, Malaysia’s sovereign wealth fund, for funding in the current round, TechCrunch previously reported.

Swiggy has filed for an initial public offering through which it seeks to raise $1.25 billion.

Rapido is the third Indian startup to become a unicorn this year after fintech Perfios and AI upstart Krutrim.

As AI becomes standard, watch for these 4 DevSecOps trends

Image of a magnifying glass above balls to represent identifying bias in AI.

Image Credits: Hiroshi Watanabe / Getty Images

David DeSanto

Contributor

David DeSanto is the chief product officer at GitLab Inc., where he leads GitLab’s product division to define and execute GitLab’s product vision and roadmap. David is responsible for ensuring the company builds, ships, and supports the platform that reinforces GitLab’s leadership in the DevSecOps platform market.

AI’s role in software development is reaching a pivotal moment — one that will compel organizations and their DevSecOps leaders to be more proactive in advocating for effective and responsible AI utilization.

Simultaneously, developers and the wider DevSecOps community must prepare to address four global trends in AI: the increased use of AI in code testing, ongoing threats to IP ownership and privacy, a rise in AI bias, and — despite all of these challenges — an increased reliance on AI technologies. Successfully aligning with these trends will position organizations and DevSecOps teams for success. Ignoring them could stifle innovation or, worse, derail your business strategy.

From luxury to standard: Organizations will embrace AI across the board

Integrating AI will become standard, not a luxury, across products and services in every industry, with organizations leveraging DevSecOps to build AI functionality alongside the software that will use it. Harnessing AI to drive innovation and deliver enhanced customer value will be critical to staying competitive in the AI-driven marketplace.

Based on my conversations with GitLab customers and my monitoring of industry trends, I expect more than two-thirds of businesses to embed AI capabilities within their offerings by the end of 2024 as organizations push the boundaries of efficiency through AI adoption. Organizations are evolving from experimenting with AI to becoming AI-centric.

To prepare, organizations must invest in revising software development governance and emphasizing continuous learning and adaptation in AI technologies. This will require a cultural and strategic shift. It demands rethinking business processes, product development, and customer engagement strategies. And it requires training — which DevSecOps teams say they want and need. In our latest Global DevSecOps Report, 81% of respondents said they would like more training on how to use AI effectively.

As AI becomes more sophisticated and integral to business operations, companies will need to navigate the ethical implications and societal impacts of their AI-driven solutions, ensuring that they contribute positively to their customers and communities.

AI will dominate code-testing workflows

The evolution of AI in DevSecOps is already transforming code testing, and the trend is expected to accelerate. GitLab’s research found that only 41% of DevSecOps teams currently use AI for automated test generation as part of software development, but that number is expected to reach 80% by the end of 2024 and approach 100% within two years.

As organizations integrate AI tools into their workflows, they are grappling with the challenge of aligning their current processes with the efficiency and scalability gains that AI can provide. This shift promises a radical increase in productivity and accuracy — but it also demands significant adjustments to traditional testing roles and practices. Adapting to AI-powered workflows requires training DevSecOps teams in AI oversight and in fine-tuning AI systems, so that their integration into code testing enhances software products’ overall quality and reliability.

Additionally, this trend will redefine the role of quality assurance professionals, requiring them to evolve their skills to oversee and enhance AI-based testing systems. It’s impossible to overstate the importance of human oversight, as AI systems will require continuous monitoring and guidance to be highly effective.

AI’s threat to IP and privacy in software security will accelerate

The growing adoption of AI-powered code creation increases the risk of AI-introduced vulnerabilities and the chance of widespread IP leakage and data privacy breaches affecting software security, corporate confidentiality, and customer data protection.

To mitigate those risks, businesses must prioritize robust IP and privacy protections in their AI adoption strategies and ensure that AI is implemented with full transparency about how it’s being used. Implementing stringent data governance policies and employing advanced detection systems will be crucial to identifying and addressing AI-related risks. Fostering heightened awareness of these issues through employee training and encouraging a proactive risk management culture is vital to safeguarding IP and data privacy.

The security challenges of AI also underscore the ongoing need to implement DevSecOps practices throughout the software development life cycle, where security and privacy are not afterthoughts but are integral parts of the development process from the outset. In short, businesses must keep security at the forefront when adopting AI — similar to the shift left concept within DevSecOps — to ensure that innovations leveraging AI do not come at the cost of security and privacy.

Brace for a rise in AI bias before we see better days

While 2023 was AI’s breakout year, its rise put a spotlight on bias in algorithms. AI tools that rely on internet data for training inherit the full range of biases expressed across online content. This development poses a dual challenge: exacerbating existing biases and creating new ones that impact the fairness and impartiality of AI in DevSecOps.

To counteract pervasive bias, developers must focus on diversifying their training datasets, incorporating fairness metrics, and deploying bias-detection tools in AI models, as well as explore AI models designed for specific use cases. One promising avenue to explore is using AI feedback to evaluate AI models based on a clear set of principles, or a “constitution,” that establishes firm guidelines about what AI will and won’t do. Establishing ethical guidelines and training interventions are crucial to ensure unbiased AI outputs.
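As one concrete illustration of the fairness metrics mentioned above, the sketch below computes demographic parity difference, the gap in positive-outcome rates between two groups, on invented predictions. A real evaluation would use a dedicated library such as Fairlearn and would combine several metrics rather than rely on this one:

```python
# Sketch: demographic parity difference, one of the simplest fairness
# metrics. The predictions and group labels below are made up for
# illustration only.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rate between groups.

    0.0 means every group receives positive predictions at the same
    rate; larger values indicate more disparity.
    """
    rates = {}
    for pred, group in zip(predictions, groups):
        rates.setdefault(group, []).append(pred)
    per_group = [sum(v) / len(v) for v in rates.values()]
    return max(per_group) - min(per_group)

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

Here group “a” receives positive predictions 75% of the time versus 25% for group “b”, a gap large enough that a bias-detection pipeline would flag the model for review.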

Organizations must establish robust data governance frameworks to ensure the quality and reliability of the data in their AI systems. AI systems are only as good as the data they process, and bad data can lead to inaccurate outputs and poor decisions.

Developers and the broader tech community should demand and facilitate the development of unbiased AI through constitutional AI or reinforcement learning with human feedback aimed at reducing bias. This requires a concerted effort across AI providers and users to ensure responsible AI development that prioritizes fairness and transparency.

Preparing for the AI revolution in DevSecOps

As organizations ramp up their shift toward AI-centric business models, it’s not just about staying competitive — it’s also about survival. Business leaders and DevSecOps teams will need to confront the anticipated challenges amplified by using AI — whether they be threats to privacy, trust in what AI produces, or issues of cultural resistance.

Collectively, these developments represent a new era in software development and security. Navigating these changes requires a comprehensive approach encompassing ethical AI development and use, vigilant security and governance measures, and a commitment to preserving privacy. The actions organizations and DevSecOps teams take now will set the course for the long-term future of AI in DevSecOps, ensuring its ethical, secure, and beneficial deployment.

Google Chrome becomes a 'picture-in-picture' app

Image Credits: S3studio / Getty Images

As competition in the browser market heats up, thanks to innovations from startups like Arc and others, Google is preparing to make a notable change to how its Chrome browser operates. The company announced Wednesday that it’s introducing a new feature called “Minimized Custom Tabs” that will allow users to move between a native app and their web content with a tap. When doing so, the Custom Tab becomes a small, picture-in-picture window that floats above the native app content.

The new addition focuses on the use of Custom Tabs, a feature in Android browsers that gives app developers a way to add a customized browser experience directly in their app. Instead of opening the user’s browser or a WebView — which doesn’t support all the features of the web platform — Custom Tabs let users remain in their app while browsing. For developers, the use of Custom Tabs can increase app engagement and reduce the risk of users leaving the app and not returning.

Image Credits: Google

By turning the Custom Tab into a picture-in-picture window, shifting to the web experience may feel more natural — and more like you’re still inside the native app. The change could also be useful to developers who are pointing their customers to a website to sign up for accounts or subscriptions, as it makes it easier for the user to move back and forth between the website and the native app.

While minimized to the picture-in-picture window, the Custom Tab can be docked off to the side of the screen. When the page is maximized, the user can tap on a down arrow to shrink it to the picture-in-picture window again.

The new web experience comes at a time when Google is making accessing the web a more baked-in experience on Android. With features like Circle to Search and other AI-powered integrations, people can find their way to the web via gestures like circling or highlighting items.

The change is rolling out in the latest version of Chrome (M124) and will be automatically applied anywhere developers are already using Chrome’s Custom Tabs. Google notes that while the change is affecting Chrome browsers, it hopes other browser makers will adopt similar functionality.

Ola founder's Krutrim becomes India's first AI unicorn

Bhavish Aggarwal

Image Credits: MANJUNATH KIRAN / AFP / Getty Images

Krutrim, an AI startup founded by Ola founder Bhavish Aggarwal, said it has raised a funding round that values it at $1 billion. The startup, founded last year, claimed in a press statement that it is the fastest Indian startup to reach unicorn status and the first Indian AI startup to do so.

Matrix Partners India — which has also backed Aggarwal’s other two startups, ride-hailing platform Ola and EV startup Ola Electric — led the $50 million “first round” in Krutrim. TechCrunch reported last year that Aggarwal was in talks to raise $50 million for his new AI venture.

Krutrim, which means “artificial” in Sanskrit, is building a large language model that has been trained on local Indian languages in addition to English. The startup plans to launch a voice-enabled conversational AI assistant that understands and speaks multiple Indian languages, the startup said.

It plans to make a beta version of its eponymous chatbot available to consumers next month, followed by the rollout of APIs to developers and enterprises. On its website, Krutrim says it also plans to develop in-house capability to manufacture chips optimized for AI compute. TechCrunch reported earlier that Aggarwal’s new venture will explore developing and designing chips.

Krutrim is Aggarwal’s third venture. Ola, his first, leads the ride-hailing market in India and is eyeing profitability. Ola Electric leads the two-wheeler EV market in India and recently filed the paperwork for a $662 million initial public offering.

“India has to build its own AI, and at कृत्रिम, we are fully committed towards building the country’s first complete AI computing stack,” Aggarwal said in a statement. “We are thrilled to announce the successful closure of our first funding round, which not only validates the potential of कृत्रिम ’s innovative AI solutions but also underscores the confidence investors have in our ability to drive meaningful change out of India for the world.”

The investment in Krutrim comes at a time when investors globally are rushing to identify and back AI breakthroughs, banking on the thesis that advances in AI will make countless industries more efficient and that startups at the forefront will deliver generational returns.

Despite being home to one of the world’s largest startup ecosystems, India has yet to make a material impact in the AI race. Indian contenders have yet to emerge and challenge the dominance of large language model titans such as OpenAI’s ChatGPT, Amazon-backed Anthropic, or Google’s Bard.

Indian powerhouse Reliance partnered with Nvidia in September, revealing plans to build a large language model that is trained on India’s diverse languages. But the firm — run by Mukesh Ambani, Asia’s richest person — has yet to launch its AI offering.

Peak XV and Lightspeed India recently backed Sarvam, an AI startup that is also building a large language model.

Training a large language model has proven to be a very expensive endeavor. OpenAI has raised over $11 billion to date, while Anthropic has raised billions from investors including Google and Amazon. xAI, Elon Musk’s AI startup, is in talks to raise up to $6 billion at a valuation of about $20 billion, the Financial Times reported Friday.

Gemini on Android becomes more capable and works with Gmail, Messages, YouTube and more

Image Credits: TechCrunch

Google’s Gemini on Android, its AI replacement for Google Assistant, will soon be taking advantage of its ability to deeply integrate with Android’s mobile operating system and Google’s apps. At the Google I/O 2024 developer conference on Tuesday, the company announced that users will be able to pull up the Gemini overlay on top of the app they’re using in more ways. It’s also updating Android’s built-in AI model, Gemini Nano. 

Soon, Android users will be able to drag and drop AI-generated images directly into their Gmail, Google Messages and other apps. Meanwhile, YouTube users will be able to tap “Ask this video” to find specific information from within that YouTube video, Google says. 

Image Credits: TechCrunch

Those who pay for the upgraded Gemini Advanced will also have the ability to use an “Ask this PDF” option that lets you get answers from the document without having to read through all the pages. Gemini Advanced subscribers pay $19.99 per month for access to AI and receive 2TB of storage along with other Google One benefits.

Already, Gemini on Android could do things like generate captions for photos, answer questions about articles you’re reading, and perform other generative AI tasks, similar to other AI chatbots. However, OpenAI upstaged Google’s event by announcing a GenAI model, GPT-4o (with the “o” standing for “omni”), that works with text, speech, and video, including what the phone’s camera is seeing. So despite Gemini’s built-in advantages, it will have some competition on mobile devices.

Google says the latest Gemini on Android features will roll out to hundreds of millions of supported devices over the next few months. Over time, Gemini will evolve to offer other suggestions related to what’s on your screen as well. 

Meanwhile, the on-device foundation model on Android, Gemini Nano, will be upgraded to include multimodality. That means it will be able to process text input as well as other means of processing information, including sights, sounds, and spoken language. 

Image Credits: TechCrunch
