Brian Williams might host a live election night special for Amazon

Image Credits: Alex Wong / Getty Images

Amazon Prime Video could be getting into the live news business, if only for one night.

Variety reports that the company is in talks with longtime NBC and MSNBC news anchor Brian Williams to host a live Election Night special, competing with more traditional TV news broadcasts to offer non-partisan coverage of the U.S. presidential election as results come in.

While Amazon, like other services, built its streaming business around on-demand shows and movies, it has been moving into live programming, most notably with its Thursday night NFL broadcasts. (Even Netflix, whose executives had consistently said they weren't interested in bidding for live sports rights, announced a deal with the WWE in January.)

Amazon did not immediately respond to TechCrunch’s request for comment. Variety says the company is not planning to create regular news programming and is instead treating this as a one-off.

Exclusive: Cold shipping might be the next industry that batteries disrupt

A person putting a vial in a battery-powered cold shipping container.

Image Credits: Artyc

Hannah Sieber knows just how transformative batteries can be. At her previous startup, EcoFlow, she used them to replace generators, whether for powering homes after a bad storm or RVs at a campsite. The experience made her wonder what else batteries could do, especially smaller ones. 

“What are the other industries that could dramatically change?” she recalled thinking at the time.

After leaving EcoFlow, and while studying at Stanford, it hit her. She had been researching how power shutoffs in California, which are intended to limit wildfire risk, had disproportionate effects on people of different means. 

She noticed that utilities were spending more on generators and microgrids in wealthier communities, leaving smaller, poorer communities in the lurch. “I saw the impact of what happens during a 56-hour shutoff if you’re a small business and your refrigerator loses power and all of a sudden you have to buy more inventory,” she told TechCrunch. “That was kind of this ‘aha’ moment.” 

Sieber started digging deeper into refrigeration, probing for places where battery-powered cooling might make a difference. She quickly zeroed in on shipping after reading up on its climate impacts. 

“Could we electrify the cold chain?” she said she asked herself. “And what would it look like to do battery-powered shipping?”

Sieber’s latest startup, Artyc, is her answer to that question. The company has quietly raised $14 million to date, according to PitchBook, and it has a product on the market, Medstow Micro, that helps hospitals, clinical trials and medical laboratories ship temperature-sensitive specimens. 

The device is a white plastic cube, small enough that it can be held with one hand. Pop the lid, and inside up to four vials can be stored. On the outside, there’s a USB-C port to charge a lithium-ion battery that powers a solid-state heat pump, which provides cooling or heating depending on outside conditions. The cube can keep samples at 3 degrees C (37.4 degrees F) for at least 56 hours. Thermometers, accelerometers and GPS keep track of the package, and a cellular connection lets customers keep tabs on its precious cargo.

Artyc leases the boxes to its customers, and because one of its boxes can replace both tracking hardware and disposable ice packs or dry ice, Sieber said they tend to break even after about four shipments. Plus, because the boxes are reusable, their carbon footprint is better than competing methods after just two shipments, she added.
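To make the break-even claim concrete, here is a hypothetical back-of-the-envelope sketch. The dollar figures below are invented for illustration and are not Artyc's actual pricing; the point is only the shape of the comparison between a one-time leased box and per-shipment disposables.

```python
# Hypothetical break-even sketch: a reusable leased box vs. disposable
# dry ice / ice packs plus separate tracking hardware on every shipment.
# All dollar figures are made up for illustration, not real pricing.

LEASE_COST = 200.0               # hypothetical one-time lease cost of a box
DISPOSABLE_PER_SHIPMENT = 55.0   # hypothetical: ice packs/dry ice + tracker

def break_even_shipments(lease_cost: float, disposable_cost: float) -> int:
    """Smallest number of shipments at which the leased box is cheaper."""
    shipments = 1
    while lease_cost > disposable_cost * shipments:
        shipments += 1
    return shipments

if __name__ == "__main__":
    print(break_even_shipments(LEASE_COST, DISPOSABLE_PER_SHIPMENT))  # 4
```

With these invented numbers, the box pays for itself on the fourth shipment, which matches the rough magnitude Sieber describes.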

One of Sieber’s goals with the Medstow Micro is to expand patient access to clinical trials. Currently, most are run out of large hospitals in major metropolitan areas. As a result, many people who might be eligible tend to be excluded, harming not just patients, who miss out on potentially life-altering treatments, but the field of medicine itself, since trials that enroll more diverse patients tend to produce therapies that benefit more people.

Artyc’s next product will hold five liters, and it’ll likely be targeted at pricey, temperature-sensitive foods like herbs, chocolate and wine. Then in 2025, the startup is planning to ship its 25-liter size. “For a lot of our customers, it’s actually about what they can’t ship today that they would like to be able to ship,” Sieber said.

Other uses have been popping up, she said. Hospitals and clinical labs have said that they are considering using Artyc’s boxes as additional, blackout-proof storage or as mobile refrigerators to simplify rounds. “Imagine a world where you have that on site, and a mobile phlebotomist grabs it, does rounds for the day, and brings it back,” she said.

Sieber is looking beyond healthcare in developed countries like the U.S., too. “We’ve had great conversations with some global health foundations,” she said. For now, the team is trying to figure out how to guarantee the temperature of the contents in extenuating circumstances. 

“If you’re trying to get to a rural community and road quality isn’t what you expect and delays happen, how do you build a buffer?” she said. Still, she’s optimistic. “We think it’s easier to source an outlet than dry ice.”

'Visual' AI models might not see anything at all

Image Credits: Bryce Durbin / TechCrunch

The latest round of language models, like GPT-4o and Gemini 1.5 Pro, are touted as “multimodal,” able to understand images and audio as well as text. But a new study makes clear that they don’t really see the way you might expect. In fact, they may not see at all.

To be clear at the outset, no one has made claims like “This AI can see like people do!” (Well, perhaps some have.) But the marketing and benchmarks used to promote these models use phrases like “vision capabilities,” “visual understanding,” and so on. They talk about how the model sees and analyzes images and video, so it can do anything from homework problems to watching the game for you.

So although these companies’ claims are artfully couched, it’s clear that they want to express that the model sees in some sense of the word. And it does — but kind of the same way it does math or writes stories: matching patterns in the input data to patterns in its training data. This leads to the models failing in the same way they do on certain other tasks that seem trivial, like picking a random number.

A study — informal in some ways, but systematic — of current AI models’ visual understanding was undertaken by researchers at Auburn University and the University of Alberta. They tested the biggest multimodal models on a series of very simple visual tasks, like asking whether two shapes overlap, or how many pentagons are in a picture, or which letter in a word is circled. (A summary micropage can be perused here.)
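To illustrate just how simple the ground truth for one of these tasks is, here is a sketch of the overlapping-circles test. This is not the researchers' actual code; it relies only on the basic geometric fact that two circles overlap or touch exactly when the distance between their centers is at most the sum of their radii.

```python
import math
import random

def circles_overlap(c1, c2, r1, r2):
    """True if two circles overlap or touch, given centers and radii."""
    return math.dist(c1, c2) <= r1 + r2

def make_test_case(rng: random.Random):
    """Generate a random two-circle test case with its ground-truth label."""
    c1 = (rng.uniform(0, 100), rng.uniform(0, 100))
    c2 = (rng.uniform(0, 100), rng.uniform(0, 100))
    r1, r2 = rng.uniform(5, 20), rng.uniform(5, 20)
    return c1, c2, r1, r2, circles_overlap(c1, c2, r1, r2)

if __name__ == "__main__":
    print(circles_overlap((0, 0), (10, 0), 6, 5))   # True: distance 10 <= 11
    print(circles_overlap((0, 0), (30, 0), 6, 5))   # False: distance 30 > 11
```

Computing the correct answer takes one line of geometry, which is what makes the models' inconsistent performance on it so striking.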

They’re the kind of thing that even a first-grader would get right, yet they gave the AI models great difficulty.

“Our seven tasks are extremely simple, where humans would perform at 100% accuracy. We expect AIs to do the same, but they are currently NOT,” wrote co-author Anh Nguyen in an email to TechCrunch. “Our message is, ‘Look, these best models are STILL failing.’”

Image Credits: Rahmanzadehgervi et al

The overlapping shapes test is one of the simplest conceivable visual reasoning tasks. Presented with two circles either slightly overlapping, just touching or with some distance between them, the models couldn’t consistently get it right. Sure, GPT-4o got it right more than 95% of the time when they were far apart, but at zero or small distances, it got it right only 18% of the time. Gemini Pro 1.5 does the best, but still only gets 7/10 at close distances.

(The illustrations do not show the exact performance of the models but are meant to show the inconsistency of the models across the conditions. The statistics for each model are in the paper.)

Or how about counting the number of interlocking circles in an image? I bet an above-average horse could do this.

Image Credits: Rahmanzadehgervi et al

They all get it right 100% of the time when there are five rings, but then adding one ring completely devastates the results. Gemini is lost, unable to get it right a single time. Sonnet-3.5 answers six … a third of the time, and GPT-4o a little under half the time. Adding another ring makes it even harder, but adding another makes it easier for some.

The point of this experiment is simply to show that, whatever these models are doing, it doesn’t really correspond with what we think of as seeing. After all, even if they saw poorly, we wouldn’t expect six-, seven-, eight- and nine-ring images to vary so widely in success.

The other tasks tested showed similar patterns; it wasn’t that they were seeing or reasoning well or poorly, but there seemed to be some other reason why they were capable of counting in one case but not in another.

One potential answer, of course, is staring us right in the face: Why should they be so good at getting a five-circle image correct, but fail so miserably on the rest, or when it’s five pentagons? (To be fair, Sonnet-3.5 did pretty well on that.) Because they all have a five-circle image prominently featured in their training data: the Olympic Rings.

Image Credits: IOC

This logo is not just repeated over and over in the training data but likely described in detail in alt text, usage guidelines and articles about it. But where in their training data would you find six interlocking rings? Or seven? If their responses are any indication: nowhere! They have no idea what they’re “looking” at, and no actual visual understanding of what rings, overlaps or any of these concepts are.

I asked what the researchers think of this “blindness” they accuse the models of having. Like other terms we use, it has an anthropomorphic quality that is not quite accurate but hard to do without.

“I agree, ‘blind’ has many definitions even for humans and there is not yet a word for this type of blindness/insensitivity of AIs to the images we are showing,” wrote Nguyen. “Currently, there is no technology to visualize exactly what a model is seeing. And their behavior is a complex function of the input text prompt, input image and many billions of weights.”

He speculated that the models aren’t exactly blind but that the visual information they extract from an image is approximate and abstract, something like “there’s a circle on the left side.” But the models have no means of making visual judgments, making their responses like those of someone who is informed about an image but can’t actually see it.

As a last example, Nguyen sent this, which supports the above hypothesis:

Image Credits: Anh Nguyen

When a blue circle and a green circle overlap (as the question prompts the model to take as fact), there is often a resulting cyan-shaded area, as in a Venn diagram. If someone asked you this question, you or any smart person might well give the same answer, because it’s totally plausible … if your eyes are closed! But no one with their eyes open would respond that way.

Does this all mean that these “visual” AI models are useless? Far from it. Not being able to do elementary reasoning about certain images speaks to their fundamental capabilities, but not their specific ones. Each of these models is likely going to be highly accurate on things like human actions and expressions, photos of everyday objects and situations, and the like. And indeed that is what they are intended to interpret.

If we relied on the AI companies’ marketing to tell us everything these models can do, we’d think they had 20/20 vision. Research like this is needed to show that, no matter how accurate the model may be in saying whether a person is sitting or walking or running, they do it without “seeing” in the sense (if you will) we tend to mean.

TTT models might be the next frontier in generative AI

People interacting with a chat interface.

Image Credits: Natee127 / Getty Images

After years of dominance by the form of AI known as the transformer, the hunt is on for new architectures.

Transformers underpin OpenAI’s video-generating model Sora, and they’re at the heart of text-generating models like Anthropic’s Claude, Google’s Gemini and GPT-4o. But they’re beginning to run up against technical roadblocks — in particular, computation-related roadblocks.

Transformers aren’t especially efficient at processing and analyzing vast amounts of data, at least running on off-the-shelf hardware. And that’s leading to steep and perhaps unsustainable increases in power demand as companies build and expand infrastructure to accommodate transformers’ requirements.

A promising architecture proposed this month is test-time training (TTT), which was developed over the course of a year and a half by researchers at Stanford, UC San Diego, UC Berkeley and Meta. The research team claims that TTT models can not only process far more data than transformers, but that they can do so without consuming nearly as much compute power.

The hidden state in transformers

A fundamental component of transformers is the “hidden state,” which is essentially a long list of data. As a transformer processes something, it adds entries to the hidden state to “remember” what it just processed. For instance, if the model is working its way through a book, the hidden state values will be things like representations of words (or parts of words).

“If you think of a transformer as an intelligent entity, then the lookup table — its hidden state — is the transformer’s brain,” Yu Sun, a post-doc at Stanford and a co-contributor on the TTT research, told TechCrunch. “This specialized brain enables the well-known capabilities of transformers such as in-context learning.”

The hidden state is part of what makes transformers so powerful. But it also hobbles them. To “say” even a single word about a book a transformer just read, the model would have to scan through its entire lookup table — a task as computationally demanding as rereading the whole book.

So Sun and team had the idea of replacing the hidden state with a machine learning model — like nested dolls of AI, if you will, a model within a model.

It’s a bit technical, but the gist is that the TTT model’s internal machine learning model, unlike a transformer’s lookup table, doesn’t grow and grow as it processes additional data. Instead, it encodes the data it processes into representative variables called weights, which is what makes TTT models highly performant. No matter how much data a TTT model processes, the size of its internal model won’t change.
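The contrast described above can be caricatured in a few lines of code. This is a toy illustration of growing versus fixed-size state, not the actual transformer or TTT internals; the update rule for the fixed state is invented purely as a stand-in for the inner model's training step.

```python
class GrowingState:
    """Caricature of a transformer's hidden state: a list that grows with
    every token, so 'recalling' anything means scanning all stored entries."""
    def __init__(self):
        self.entries = []

    def process(self, token):
        self.entries.append(token)  # memory grows with input length

    def size(self):
        return len(self.entries)

class FixedState:
    """Caricature of the TTT idea: a fixed number of weights updated in
    place, so state size stays constant no matter how long the input is."""
    def __init__(self, n_weights=4):
        self.weights = [0.0] * n_weights

    def process(self, token):
        # Fold each token into the fixed weights. This toy update is a
        # stand-in; the real inner model learns via gradient-style updates.
        slot = hash(token) % len(self.weights)
        self.weights[slot] += 1.0

    def size(self):
        return len(self.weights)

if __name__ == "__main__":
    g, f = GrowingState(), FixedState()
    for tok in ["token"] * 1000:
        g.process(tok)
        f.process(tok)
    print(g.size(), f.size())  # 1000 4
```

After a thousand tokens the growing state holds a thousand entries, while the fixed state is still four numbers, which is the property the TTT researchers are after.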

Sun believes that future TTT models could efficiently process billions of pieces of data, from words to images to audio recordings to videos. That’s far beyond the capabilities of today’s models.

“Our system can say X words about a book without the computational complexity of rereading the book X times,” Sun said. “Large video models based on transformers, such as Sora, can only process 10 seconds of video, because they only have a lookup table ‘brain.’ Our eventual goal is to develop a system that can process a long video resembling the visual experience of a human life.”

Skepticism around the TTT models

So will TTT models eventually supersede transformers? They could. But it’s too early to say for certain.

TTT models aren’t a drop-in replacement for transformers. And the researchers only developed two small models for study, making TTT as a method difficult to compare right now to some of the larger transformer implementations out there.

“I think it’s a perfectly interesting innovation, and if the data backs up the claims that it provides efficiency gains then that’s great news, but I couldn’t tell you if it’s better than existing architectures or not,” said Mike Cook, a senior lecturer in King’s College London’s department of informatics who wasn’t involved with the TTT research. “An old professor of mine used to tell a joke when I was an undergrad: How do you solve any problem in computer science? Add another layer of abstraction. Adding a neural network inside a neural network definitely reminds me of that.”

Regardless, the accelerating pace of research into transformer alternatives points to growing recognition of the need for a breakthrough.

This week, AI startup Mistral released a model, Codestral Mamba, that’s based on another alternative to the transformer called state space models (SSMs). SSMs, like TTT models, appear to be more computationally efficient than transformers and can scale up to larger amounts of data.

AI21 Labs is also exploring SSMs. So is Cartesia, which pioneered some of the first SSMs and Codestral Mamba’s namesakes, Mamba and Mamba-2.

Should these efforts succeed, it could make generative AI even more accessible and widespread than it is now — for better or worse.

Climate tech might be the hot job market in 2024

Crew installs solar panels on an apartment building.

Image Credits: Marty Caivano/Digital First Media/Boulder Daily Camera / Getty Images

One of the major stories that defined the tech sector in 2023 was layoffs. Companies large and small shed over 240,000 jobs in the last year, and while the trend has cooled of late, it hasn’t stopped, with nearly 7,000 jobs cut in November alone.

But there have been bright spots. Climate tech is one sector that has been hiring, and 2024 looks like it will be continuing the trend.

Clean energy jobs have grown 10% in the past two years, outpacing the economy as a whole, according to a report by industry group E2. Through 2032, when the Inflation Reduction Act is set to expire, the fastest-growing job fields include wind turbine technician (45% growth) and solar photovoltaic installer (22% growth), according to the Bureau of Labor Statistics.

For startups, 2023 was more muddled. As investors closed their pocketbooks, founders had to make hard choices about how to extend their runways. Some had to resort to layoffs, but not everyone. Many founders I’ve spoken with continue to emphasize that they’re hiring for a variety of roles.

Deal Dive: Training the workforce for the clean energy transition

For those laid off from the general tech sector, climate tech would appear to be an appealing pivot, and for many, that’s proving to be true. Nearly every company needs software developers, project managers and designers. Is there a need for 240,000 of them? Probably not yet. And some that look like a close fit might require a bit of climate or energy knowledge on the part of the applicant.

In other words, there’s a skills gap.

That shouldn’t be surprising. The economy of the last century-plus has been oriented around burning fossil fuels; the economy of the 21st century won’t be. Even if you have skills that port from the old to the new — like oil rig operators switching to geothermal — there’s some amount of relearning that has to take place.

That, coupled with the fact that expats from the general tech world have been increasingly interested in climate tech jobs, has fueled the growth of a very particular type of business: climate tech career sites. Several have popped up in recent years, including Terra.do, Climatebase, MCJ Collective (which is also a venture capital firm), and Climate Draft.

To varying degrees, these sites offer not just job postings, but also networking events, educational resources, even training cohorts that help job seekers recognize how their skills fit into the climate tech world while teaching them new ones.

As if to underline the importance of these companies to the climate tech sector, venture capital firm Lowercarbon bought Climate Draft in September.

In the past few years, climate tech jobs have skewed toward those needed by early-stage startups. In the coming years, as they move toward commercialization, they’ll be hiring for a broader range of roles — and more of them. It won’t just be scientists and engineers, but sales, service and support staff, too.

Tech jobs are just a fraction of what’ll be needed, of course. There will be a need for wind turbine technicians and solar installers, as the BLS helpfully points out, but also geothermal borehole drillers, hydrogen plant operators, EV charger technicians, battery recyclers and heat pump installers.

The move toward climate-friendly jobs has been in the works for years, as E2’s data shows. Simple economics have been pushing things in that direction, driven mostly by the falling costs of solar, wind and batteries, and the labor market has been responding.

But as new government policies fall into place, including the Inflation Reduction Act and the Bipartisan Infrastructure Law in the U.S. and the Green Deal in the EU, the job market in climate tech is certain to get even hotter. Between those three alone, the U.S. and the EU are investing over $800 billion in climate and environment initiatives, a figure that is likely to be conservative when the final tally is complete.

It’s a gusher of money that has taken a while to uncork, but as it starts to flow in earnest, the demand for labor will only increase. Where the money goes, so do the jobs.

Video game startups might be a bright spot for VC in 2024

video games, startups, venture

Image Credits: ChooStudio / Getty Images

The global video game industry makes more money each year than movies and music combined. But that doesn’t mean the sector was immune to the macroeconomic impacts of the last few years. Gaming companies have held sizable layoffs, and venture funding to the category hit a five-year low in 2023. But VCs are optimistic that things will turn around this year.

Gaming startups raised $2 billion last year, according to a report from video game-focused VC Konvoy Ventures. That total was down significantly from $9.9 billion in 2021 and $6.7 billion in 2022.

Many VCs think that 2024 could be a bloodbath for startups, generally, as exits aren’t likely to return to any kind of normalcy until 2025; many companies will run out of money and have to shut down. But video games might be an outlier, according to some VCs.

For one, there were still a lot of positive milestones for the sector in 2023. There were multiple titles released last year that garnered huge audiences, including Baldur’s Gate 3 and Hogwarts Legacy, which each sold more than 22 million copies. Despite a flat year for growth in the overall gaming industry, video games are still projected to grow into a $229 billion industry by the end of the decade.

The category is also changing, which opens the door for startups to launch alongside new trends. As the drama around Apple’s App Store fees persists, the industry is moving away from mobile games — which traditionally raised the most venture money — and toward cross-platform games, which are more expensive to make but more lucrative, too. Unlike in some categories, AI is still in its early innings in video games and will likely start to stake its claim this year.

Josh Chapman, co-founder and managing partner at Konvoy, said the industry should return to normal growth in 2024. The surge in activity from tourist investors chasing pandemic-fueled gaming spikes, and from crypto investors backing web3 gaming, has receded. The industry can return to organic growth this year, he said.

“A lot of the web3 and crypto stuff in gaming sort of evaporated last year,” Chapman said. “The lack of web3 gaming companies pitching in the market led to an overall drop in deal flow. That’s one subsector of gaming, everything else stayed pretty strong.”

Ilya Eremeev, managing and general partner at The Games Fund, told TechCrunch that despite the industry coming off a more challenging year for fundraising, there is a lot to be excited about. Chief among the reasons is the amount of developer talent available after the industry shed thousands of jobs through layoffs last year. Plus, compensation for these positions has come down, which means startups might be able to land top talent in this market.

While some of the tourist investors have exited the space, corporates have remained active and have started to participate more at the early stages. That runs counter to trends in the broader venture market, where corporate VCs participated in their lowest percentage of U.S. deals in nine years in 2023, according to PitchBook data.

“Strategics in Asia trying to run overseas operations in Europe and in the U.S., especially in Europe, they realized there is a growth opportunity in this region,” Eremeev said. “Sometimes they accumulated a lot of capital, they need to invest and are more open for high-risk deals and they invest in early stage.”

But the biggest trend to watch in video games this year is AI. While the AI frenzy of 2022 prompted many existing companies to tout their AI prowess and many new ones to start building fast, the jolt to the video game sector wasn’t as immediate, Eremeev said. But companies are starting to launch, and they could have big implications — especially for the costs associated with game creation.

Mobile ruled the gaming space for a long time, not just because the games were popular, but because they weren’t as expensive to produce as, say, an immersive data-heavy PC game. This made them more venture-backable. Sofia Dolfe, a partner at Index Ventures focused on gaming, said that watching AI unfold in the video game sector is one of the things she’s tracking the most this year.

“We are at the early innings of AI, it will lower the ability to create something, it will also lower the barrier for some areas of gaming that have been less VC investable,” Dolfe said. “Triple AAA quality games on PC that had really long-form creation cycles, it didn’t lend itself as much with the venture model as mobile games, bringing down those costs we will see a lot of studios being built that leverage that technology that I’m excited about.”

Generative AI embedded in games is another development to watch. Games could become more of a choose-your-own-adventure experience if AI allows players to fully control every aspect of the game, including NPCs (non-playable characters). This will, of course, require guardrails and guidelines, Eremeev said.

Interestingly, no investor mentioned AR or VR as an area of growth they are excited about this year. But with the current list of big video game releases set for 2024, and with Disney taking a 15% stake in Epic Games just last week, VC investors may have good reason to be optimistic about this year and video game startups in the long term.

“It’s going to be a very tricky and challenging year for the gaming industry but some amazing opportunities will emerge,” Chapman said. “If you look at Halo, Halo was built in 2001. League of Legends was built in 2009. Tough times produce incredible companies.”

As Microsoft unbundles Teams, it might not have the impact on Slack you think

Puzzle piece pulled out to reveal cloud behind it.

Image Credits: David Wall / Getty Images

One of the primary reasons that Slack joined forces with Salesforce in 2021 in a $28 billion deal was to give the communications company the clout to compete with Microsoft. For years, company co-founder Stewart Butterfield railed against Microsoft bundling Teams with Office 365, calling it anticompetitive and saying at one point that Microsoft was “unhealthily obsessed with killing Slack.”

The company went so far as to file a complaint against Microsoft with the European Union in 2020.

This morning, Microsoft announced it was finally unbundling Teams from Office 365 going forward, although current customers can continue to use the bundled license.

Butterfield stepped down from Slack at the end of 2022, but he seemed less concerned about Microsoft after becoming part of the CRM giant. He told TechCrunch’s Connie Loizos in 2021 that Teams seemed more focused on competing with meeting software like Zoom than with Slack, and that he wasn’t aware of the status of the complaint his company had filed before joining Salesforce.

Salesforce, for its part, didn’t have any comment on the unbundling announcement, but Microsoft’s bundling strategy seems to have worked quite well: the company reports more than 320 million Teams users worldwide. Compare that with Slack’s 32 million users, roughly 10% of Microsoft’s total. It’s hard to know exactly what that means given the differences in how the two companies count their users, but it’s clear that Microsoft has opened up a significant lead.

Maybe Butterfield was right, but it’s probably too late to matter. “While Microsoft is unbundling Teams simply to avoid an antitrust mess, it’s good for Salesforce/Slack for sure, but in many ways it may be a Pyrrhic victory,” Alan Pelz-Sharpe, founder and principal analyst at Deep Analysis, told TechCrunch. The market has matured to the point that many larger firms have made their choice, and since swapping out solutions isn’t a trivial matter, unbundling Teams is unlikely to have an appreciable impact on market share.

Microsoft’s announcement seemingly allows the company to have its cake and eat it too: it keeps existing customers under the existing Office 365 bundling agreement, charges future customers separately for the product, and presumably gives Microsoft an argument with regulators that it has unbundled Teams and isn’t violating any anticompetition rules.

In fact, Holger Mueller, an analyst at Constellation Research, says this could be the first instance of antitrust regulation actually helping the vendor’s business. “Microsoft has simply sold Teams to enough companies with its existing Office accounts and now no longer needs the energy and power of the enterprise license agreement,” Mueller said.

What’s more, rather than aiding Slack, he sees this as helping Microsoft to get Teams into more accounts where companies weren’t buying Office 365 licenses. Redmond can now sell standalone Teams licenses into non-Microsoft shops much more easily, all while building goodwill with regulators, and still sticking it to Slack in the stand-alone market battle.

That is probably not the outcome Butterfield envisioned when he started complaining about Microsoft all those years ago. But regulation doesn’t always play out the way you expect, especially when the market shifts so dramatically in the intervening years — or when the bundling strategy under scrutiny simply worked.