Moxie, which helps nurses launch medspas, raises a preemptive Series B from Lachy Groom

Image Credits: Peter Dazeley / Getty Images

Moxie helps nurses open medspas by providing them with most of the tools they need to run their businesses, from billing software and marketing services to discounted supplies. The startup has raised a $10 million Series B led by existing investor Lachy Groom, a solo VC, with participation from SignalFire.

The round, which values Moxie at nearly triple its previous valuation, comes only a year after the startup announced its $15.7 million Series A.

“Our business grew more than 4x over the last year,” said Moxie’s founder and CEO Dan Friedman. “We still had more than 75% of the capital from Series A, so there was no need for cash. But we also have a really big agenda and a big vision, and the [new funding] helps us double down.”

Medspas — treatment facilities that offer minimally invasive aesthetic procedures such as Botox, specialized facials, and laser treatments — have been increasing in popularity. Because most states require that registered nurses administer these procedures, the growing industry has attracted healthcare workers, many of whom were burned out at their hospital jobs, to launch their own medspa businesses.

Friedman, who previously co-founded Thinkful, an online coding school, was looking for his next entrepreneurial act after selling the education company to Chegg for $100 million. He decided to start Moxie after learning from a family friend about the complexity and high cost of launching a medspa. His answer was a “business-in-a-box” offering that gets nurses’ clinics ready to operate in a fraction of the time, and at a fraction of the cost, of doing it themselves.

“We make it easier, faster, and cheaper to launch a medspa,” Friedman said. “Then we support the growth of the practice with business software, including payments, integrated buy now, pay later, marketing tooling, and a suite of compliance tools.”

The startup also helps medspa owners save on their biggest expense, supplies, by partnering with major suppliers to negotiate bulk discounts. That lets Moxie’s clients offer lower prices and better compete with large medspa chains, such as LaserAway and those operated by private equity-backed management companies.

Moxie also pairs spa owners with success coaches who guide business growth. The company makes money by charging its clients a percentage of total sales, which resembles a franchise model in many ways, but there is one key differentiator: Moxie isn’t licensing its brand. “Our clients are not 250 medspas with ‘Moxie’ on the door,” Friedman said. “These are 250 medspas with their entrepreneurs’ names on the door.”

Other VC-backed companies that offer medspa services include Addition and Greycroft-backed Ever/Body and Botox provider Peachy. But since Moxie doesn’t run its own clinics, Friedman said he doesn’t view these startups as direct competitors.

“It’s a big category with more than $15 billion a year spent on medspas,” Friedman said. “They can succeed, and we can succeed.”

Beeble AI raises $4.75M to launch a virtual production platform for indie filmmakers

Image Credits: Beeble AI

Visual effects (VFX) have become essential to filmmaking, transforming storytelling and creativity across the industry through a diverse set of digital techniques. But the high cost of VFX tools often leaves independent filmmakers and content creators on modest budgets struggling to compete with larger productions. A new company, Beeble AI, is turning to AI to address this problem.

The South Korea-based VFX startup has developed virtual lighting tools for filmmakers and visual effects artists that aim to cut the cost of producing top-shelf, Hollywood-level visual effects, leveling the playing field for indie filmmakers and content creators.

The startup has now secured $4.75 million in seed funding led by Basis Set Ventures with participation from Fika Ventures at a valuation of $25 million, Beeble AI CEO and co-founder Hoon Kim told TechCrunch.

Beeble AI was founded in 2022 by five co-founders who previously worked on the AI research and machine learning team at the South Korean game publisher Krafton. While working on AI-driven content creation, they realized that no AI startup was focused on lighting, an element they saw as crucial to filmmaking and photography, and Beeble AI was born.

Beeble’s main product is SwitchLight Studio, a desktop app that offers relighting and composition within virtual environments. (SwitchLight Studio will be rebranded as Virtual Studio in the third quarter of this year, Kim said.)

“While our initial focus was on virtual lighting, we are now shifting towards developing comprehensive virtual [production] studios,” Kim said in an interview with TechCrunch. “We foresee a future where small teams of fewer than 10 artists can create content that rivals that of major Hollywood studios.”

Virtual production combines virtual and physical settings in the making of a film. You may have seen a green screen, the backdrop that lets editors add VFX in post-production. In high-end virtual production, Kim explained, a large LED screen has replaced the green screen, but LED walls remain too expensive for indie filmmakers.

“Powered by Unreal Engine and giant LED walls, virtual production creates the illusion of on-location shooting without the actual travel. However, this technology has been accessible only to filmmakers with million-dollar budgets due to its high cost and complexity,” Kim continued.

Unlike traditional virtual production companies, which use LED walls to bring virtual environments into the real world, Beeble’s virtual production platform will virtualize real actors into the virtual world, the company CEO told TechCrunch. With just a phone camera, users can access infinite locations, lighting, and camera options, all within a virtual environment, he added.

Disney+’s “The Mandalorian” is one prominent example of a production shot with virtual production and real-time effects, Kim noted. Virtual production has rapidly become one of the fastest-growing areas of visual effects and filmmaking.

Potential users of the virtual production platform include not just B2C users (content creators and filmmakers) but also B2B customers like ReelShort, a short-form video streaming app specializing in serialized dramas, Kim said.

Generative AI tools like OpenAI’s Sora and Runway’s models can generate video from text, which could upend the animation and movie industries. But Kim said such models often produce unpredictable results and alter the original image or video even for simple tasks; Beeble, by contrast, is designed for predictability and fine-grained control. “To tell a compelling story, you need to have full control over every little detail of the project, including environment, characters, camera, and lighting,” Kim continued.

A text-based prompt interface doesn’t offer that control, and it doesn’t allow the detailed iteration needed to perfect a shot. Beeble aims to enable fully controllable video creation with AI.

The key to those cost reductions, the startup says, is the foundational AI model at its core, which lets users adjust lighting, environments, and camera movements in post-production.

Per a paper the co-founders published at CVPR 2024, the foundational AI model “automatically digitizes 2D footage of an actor’s performance into a physically accurate 3D representation.” The reconstructed actor has precise geometry and textures, giving artists complete freedom to alter lighting, environments, and camera angles. Beeble AI claims this capability significantly reduces budget constraints, letting creators focus primarily on storytelling; users can capture cinematic shots in their living room with just an iPhone, the company says.

Beeble AI says it started generating revenue last October. Around 3 million users have downloaded its SwitchLight mobile app, an AI photo editor. The startup also said Caption AI is integrating its SDK to offer advanced relighting features within its own app.

Beeble plans to use the new capital to expand into a full virtual production studio platform, advance its foundational AI model, further product development, and grow its team, which currently numbers seven employees.

Previous investors include Mashup Ventures and Kakao Ventures.

Stoke Space's initial launch plans at Cape Canaveral take shape

Image Credits: Stoke Space

Stoke Space is nothing if not ambitious. The five-year-old launch startup has generated a lot of hype due to its bold plans to develop the first fully reusable rocket, with both the booster and second stage vertically returning to Earth. 

Those plans got a major boost a year ago, when the U.S. Space Force awarded Stoke and three other startups valuable launch pad real estate at Florida’s Cape Canaveral Space Force Station. Stoke plans to redevelop the historic Launch Complex 14, home to John Glenn’s orbital mission and other NASA programs, in time for its first launch in 2025.

At the center of Stoke’s plans is Nova, a two-stage rocket designed so that both the booster and the second stage return to Earth and land vertically. The only other rocket in development aiming for full reuse is SpaceX’s Starship. According to Stoke, its reusable upper stage will unlock remarkable possibilities: returning cargo from orbit, landing anywhere on Earth, and driving launch prices down by an order of magnitude.

Before any of this can take place, the Space Force must complete its “environmental assessment” of the company’s plans at LC-14, in order to evaluate how repeat launches will affect local flora and fauna. These assessments are mandatory under federal law, and they can often take months — but the upside is that they provide a closer look at a company’s operational plans. 

Stoke’s goals are audacious, but the draft environmental assessment for Stoke’s launch pad shows that it would be an error to expect a test of returning even the booster on the first flight. Indeed, the environmental assessment does not consider reusable operations at all, but only missions with the 132-foot-tall Nova flying in a fully expendable configuration. The document, released last month, calls this Stoke’s “phased program approach.” Phase 1 involves operating a totally expendable vehicle at a relatively low launch cadence. Phase 2, which would require a supplemental environmental analysis and is not considered in this draft document, would involve the fully reusable rocket. 

Image Credits: Stoke Space

To start, Stoke is seeking authorization to conduct around two launches next year, its first year of operation, and told regulators it anticipates a maximum cadence of 10 launches per year. Stoke also told regulators that Nova will be capable of carrying up to 7,000 kilograms to low Earth orbit, the rocket’s maximum payload capacity when flown expendably.
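The two figures Stoke gave regulators imply a simple upper bound on what the pad could send to orbit per year. A back-of-the-envelope sketch, assuming every launch flies expendable with a full payload (an idealization; real manifests rarely max out every flight):

```python
# Rough upper bound on Nova's annual lift at the cadence Stoke cited.
# Assumes every launch is expendable and carries the full 7,000 kg,
# which real launch manifests rarely achieve.
MAX_PAYLOAD_KG = 7_000
MAX_LAUNCHES_PER_YEAR = 10

max_annual_mass_kg = MAX_PAYLOAD_KG * MAX_LAUNCHES_PER_YEAR
print(f"Max mass to LEO per year: {max_annual_mass_kg:,} kg "
      f"({max_annual_mass_kg / 1000:.0f} t)")  # 70,000 kg (70 t)
```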

A person familiar with Stoke’s plans said that the company has no intention of pursuing the reusable aspects of Nova until it has successfully demonstrated the ability to regularly deploy payloads to planned orbits, and that this phased approach was always part of the internal roadmap. 

A phased approach isn’t uncommon: SpaceX, the global kingpin of launch, first flew its Falcon 9 rocket in 2010 but didn’t land a booster back on Earth until 2015. Stoke is clearly taking a similar path, though the draft document does not propose any dates by which the company might start testing its reusable tech.

While it’s too soon to say when reusable flights might start at the Cape, Stoke has been busy conducting its own “hop” campaigns of its second stage at its facilities in Washington State. Stoke CEO Andy Lapsa said in a recent podcast appearance that the company started developing Nova’s second stage first because there was no playbook on second-stage reuse; but because rocket stage design is so tightly coupled, they had to understand the second-stage parameters in order to begin to design the booster. 

“The whole vehicle, from a technical side, has to be designed with the end state in mind,” he said. “It has to be architected for that. Everything we’ve done from founding to today is take that end state and build for that end state architecture.” 

Once the reusable technology is fully developed, the Space Force will need to conduct a supplemental environmental analysis. At that point, the supplemental EA will consider the environmental impacts of landing at a landing zone near the launch pad, landing on a barge offshore, or at some other location. Depending on the complexity of the changes to the original analysis, this process could take six months or more. 

But Stoke will be ready to shift into that second phase, Lapsa said on the podcast: “The millisecond we reach orbit, our focus shifts entirely on, okay, now let’s show that we can get back down. Once we show that we can get back down […] then the millisecond after that, we start focusing on reuse.” 

Ex-Googler joins filmmaker to launch DreamFlare, a studio and streaming platform for AI-generated video

Image Credits: DreamFlare

A startup called DreamFlare AI is emerging from stealth on Tuesday with the goal of helping content creators make and monetize short-form AI-generated content.

The company, co-founded by former Google employee Josh Liss and documentary filmmaker Rob Bralver, does not make or sell its own AI technology to create video. Rather, it’s envisioned as a sort of studio where creators work with professional storytellers to create video using third-party AI tools like Runway, Midjourney, ElevenLabs, and others. The videos will then be distributed through a subscription-based online service. Creators will earn money from revenue-sharing on subscriptions and advertising, as well as a few other options.

DreamFlare will offer two types of animated content on the platform: Flips, which are comic book-style stories with AI-generated short clips and images that users can scroll through, and Spins, which are interactive choose-your-own-adventure short films where viewers can change certain outcomes of the story. 

The launch of DreamFlare comes at a time when artists in Hollywood see AI technology as a threat. A 2024 study commissioned by the Animation Guild, a union for animation artists, found that 75% of film production companies using AI have decreased or eliminated jobs.

Despite these concerns, DreamFlare insists it’s creating a new space for creators to earn revenue from a new form of entertainment; it isn’t replacing anyone’s job. 

“It’s an opportunity for creators to democratize storytelling,” Liss told TechCrunch. “We are excited to give human beings the opportunity to leverage this tool to tell exciting new stories,” he added. 

Image Credits: DreamFlare

Among those optimistic about AI entertainment and video platforms like DreamFlare is FoundersX Ventures, which has invested. The company also claims it has creative partnerships with various entertainment industry executives, including those from Disney, Netflix, and Universal. Additionally, DreamFlare says it’s partnered with “Oscar- and Emmy-winning filmmakers and showrunners,” according to Liss, who said that they are “currently staying anonymous because of the controversy around [AI-generated content.]”

The company says it has raised $1.6 million in funding to date.

How DreamFlare works

Creators on DreamFlare are permitted to use any existing AI tool that offers paid plans, but many of these tools have ethical and legal questions surrounding them. For example, OpenAI, the company behind the Sora model, does not disclose how it procures training videos. 

DreamFlare claims to have a rigorous review process to ensure submissions aren’t based on copyrighted material, and it doesn’t accept R-rated content. If published content slips through, the platform provides a DMCA takedown process for anyone who believes their copyright has been infringed.

“We’re always trying to control quality, safety, and legality before anything is published on the platform,” Bralver explained. 

When creators successfully pass DreamFlare’s application process, they work alongside the creative team on story development. (According to the company, DreamFlare team members are former Disney and Universal executives who have chosen to remain anonymous.) 

While creating content inspired by copyrighted intellectual property like “Star Wars” isn’t allowed, public domain characters are fair game, which is why the platform hosts titles related to Little Red Riding Hood, Alice in Wonderland, Peter Pan, Frankenstein, and Thor, among others.

From what we saw during a demo of the platform, the quality of the AI-generated video was decent enough, albeit with occasional jerkiness and a sometimes strange-looking animation style. (It’s definitely nowhere close to Pixar-level quality.) Some of the content on DreamFlare is original and creative, such as a title about a cat detective who had a little too much catnip.

Creators can earn money on DreamFlare in four ways: platform revenue sharing, cuts on ad revenue, tips from fans, and a soon-to-be-launched marketplace for creators to sell merchandise. 

There’s also a fan fund that allows followers to support content creators and participate in the process. For example, if a user pays for the Supporter package, they will be featured in the credits of a future video. If a follower wants to pay more, they have the opportunity to connect with the creator in a private Discord channel. The highest contributing followers are promoted to producer status and get exclusive insights into how a creator makes their content.

Image Credits: DreamFlare

At launch, around 100 content creators are on the platform, providing a diverse range of content, from sci-fi and comedy to fantasy, mystery, and more. 

DreamFlare’s premium membership costs $2.99 per month or $24 per year, and a limited-time offer includes a one-year subscription for $9.99. There is also free weekly content to try to get people hooked on the idea.
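The three price points reported for DreamFlare invite a quick comparison. A small sketch of the arithmetic, using only the figures stated above:

```python
# Back-of-the-envelope comparison of DreamFlare's stated tiers:
# $2.99/month, $24/year, and a $9.99 first-year promo.
MONTHLY = 2.99
ANNUAL = 24.00
PROMO = 9.99

monthly_for_year = MONTHLY * 12              # 35.88
annual_savings = monthly_for_year - ANNUAL   # 11.88 vs. paying monthly
promo_savings = ANNUAL - PROMO               # a further 14.01 in year one

print(f"12 months at the monthly rate: ${monthly_for_year:.2f}")
print(f"Annual plan saves ${annual_savings:.2f} vs. paying monthly")
print(f"The promo saves a further ${promo_savings:.2f} in year one")
```

In other words, the annual plan is roughly a one-third discount on the monthly rate, and the promo cuts the first year's price by more than half again.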

Ex-Googler joins filmmaker to launch DreamFlare, a studio for AI-generated video

Image Credits: DreamFlare

A startup called DreamFlare AI is emerging from stealth on Tuesday with the goal of helping content creators make and monetize short-form AI-generated content.

The company, co-founded by former Google employee Josh Liss and documentary filmmaker Rob Bralver, does not make or sell its own AI technology to create video. Rather, it’s envisioned as a sort of studio where creators work with professional storytellers to create video using third-party AI tools like Runway, Midjourney, ElevenLabs, and others. The videos will then be distributed through a subscription-based online service. Creators will earn money from revenue-sharing on subscriptions and advertising, as well as a few other options.

DreamFlare will offer two types of animated content on the platform: Flips, which are comic book-style stories with AI-generated short clips and images that users can scroll through, and Spins, which are interactive choose-your-own-adventure short films where viewers can change certain outcomes of the story. 

The launch of DreamFlare comes at a time when artists in Hollywood see AI technology as a threat. A 2024 study commissioned by the Animation Guild, a union for animation artists, found that 75% of film production companies using AI have decreased or eliminated jobs.

Despite these concerns, DreamFlare insists it’s creating a new space for creators to earn revenue from a new form of entertainment; it isn’t replacing anyone’s job. 

“It’s an opportunity for creators to democratize storytelling,” Liss told TechCrunch. “We are excited to give human beings the opportunity to leverage this tool to tell exciting new stories,” he added. 

Among those optimistic about AI entertainment and video platforms like DreamFlare are FoundersX Ventures, which has invested. The company also claims it has creative partnerships with various entertainment industry executives, including those from Disney, Netflix, and Universal. Additionally, DreamFlare says it’s partnered with “Oscar and Emmy winning filmmakers and showrunners,” according to Liss, who said that they are “currently staying anonymous because of the controversy around [AI-generated content.]”

The company says it has raised $1.6 million in funding to date.

How DreamFlare works

Creators on DreamFlare are permitted to use any existing AI tool that offers paid plans, but many of these tools have ethical and legal questions surrounding them. For example, OpenAI, the company behind the Sora model, does not disclose how it procures training videos. 

DreamFlare claims to have a rigorous review process to ensure submissions are not based on copyrighted material, and does not accept R-rated content. When published content does not meet these standards, the platform has a DMCA takedown notice for anyone who thinks their copyright has been infringed.

“We’re always trying to control quality, safety, and legality before anything is published on the platform,” Bralver explained. 

When creators successfully pass DreamFlare’s application process, they work alongside the creative team on story development. (According to the company, DreamFlare team members are former Disney and Universal executives who have chosen to remain anonymous.) 

While creating content inspired by copyrighted intellectual property like “Star Wars” isn’t allowed, public domain characters are fair game, which is why there are titles on the platform related to Little Red Riding Hood, Alice in Wonderland, Peter Pan, Frankenstein, and Thor, among others. 

From what we saw during a demo of the platform, the quality of AI-generated video output was decent enough, albeit with occasional jerkiness and a sometimes strange-looking animation style. (It’s definitely nowhere close to being Pixar-level quality.) Some of the content on DreamFlare is original and creative, such as a story about a cat detective who has had a little too much catnip. 

Creators can earn money on DreamFlare in four ways: platform revenue sharing, cuts on ad revenue, tips from fans, and a soon-to-be-launched marketplace for creators to sell merchandise. 

There’s also a fan fund that allows followers to support content creators and participate in the process. For example, if a user pays for the Supporter package, they will be featured in the credits of a future video. If a follower wants to pay more, they have the opportunity to connect with the creator in a private Discord channel. The highest contributing followers are promoted to producer status and get exclusive insights into how a creator makes their content.

At launch, around 100 content creators are on the platform, providing a diverse range of content, from sci-fi and comedy to fantasy, mystery, and more. 

DreamFlare’s premium membership costs $2.99 per month or $24 per year. The company currently has a limited-time offer of a one-year subscription for $9.99. There is also free weekly content to try to get people hooked on the idea.

Will Apple's Vision Pro launch be a Groundhog Day for immersive computing?

Collage of Apple Vision Pro headset for illustrative purposes

Image Credits: Darrell Etherington / TechCrunch

Apple’s Vision Pro headset is set to finally launch in the U.S. on February 2, at a retail price of $3,499. At that price, there’s no doubt it’ll have limited appeal, which seems just fine with Apple given reports about its modest initial sales expectations. Apple originally announced Vision Pro last June at its annual developer event, and it has been offering hands-on time to select media, influencers, and developers ever since, in an extended hype and ecosystem-preparation campaign.

The big question remains: Will Apple Vision Pro meaningfully move the needle on immersive computing — or will it be yet another splashy launch for a VR/AR/MR product that fails to change the status quo?

Based on the handful of firsthand accounts available, one thing seems clear about Apple Vision Pro: No one’s doubting its quality or capabilities. Many were impressed by the experience of playing back volumetric video they themselves had captured with their iPhones thanks to a recent software update, and people seemed to universally enjoy watching blockbuster movies in 3D on the headset during their demo. Reactions to other aspects of the experience were more mixed, but again generally very positive.

Curiously, much of what Apple pitched with the Vision Pro launch focused on things you already do all the time on your other devices, including your iPhone, Mac and iPad. The strategy makes a lot of sense given how prior mixed reality devices have missed the mark with overblown claims about revolutionary new computing paradigms, only to end up as niche successes at best — or expensive closet adornments at worst.

The other major player to have had any success in this market so far is Meta, which introduced the third generation of its Quest headset last year. On price alone, Meta is playing in a very different pond: the Quest 3 retails for $499, one-seventh the price of Apple’s debut hardware. Meta started with a more expensive, higher-end option back in the Oculus days, then pivoted to a mass-market approach, tackling price first and adding features back as component costs fell, in search of a happy medium where budget trade-offs, feature set, and quality could combine to drive mass-market appeal.

Based on VR client usage tracking numbers, the Meta Quest 3 appears to be doing decently well and may have picked up steam during the most recent holiday quarter, but it’s also been reported that demand for the category is down generally and Meta’s still funneling way more money into the category than it’s recouping from potentially dwindling demand. And that’s with an extremely solid product on the market: The Quest 3 is easily the best VR hardware I’ve used so far in terms of balancing great features and performance with a decent price tag and a fairly impressive software library.

It’s unclear what kind of software library Apple Vision Pro will have at launch; the company has been hosting developer preview events and working with developers to prepare apps for consumer availability, so it seems likely there will be some standout offerings when it’s time for the first Vision Pro customers to boot up their devices and strap them to their faces.

Apple’s approach to this inaugural launch of its XR ambitions is unique, and it has the added advantage of being a company with a long history of coming relatively late to a category and then owning it, with the iPhone, the iPad and the Apple Watch all being stellar examples.

But it’s facing something here it hasn’t necessarily faced in the past: a device category that has been hyped and heralded as the “next big thing” in computing for around a decade now. Portable media players and smartphones, in particular, never received this kind of paradigm-shift prophesying only to then fall mostly flat, the way VR and mixed reality have to date.

Mark Zuckerberg has experienced firsthand how easy it is to get stuck in a seeming time loop: unveiling the next generation of spatial computing, only to find himself onstage the very next year announcing essentially the same thing in a slightly different way, with that future never quite coming to pass. Apple seems poised to fall into the same trap, with Vision Pro a splashy instantiation of a mixed reality future we’ve all seen promised before but have shown little collective interest in actually buying into.

Apple Music Classical to launch in China, Japan, Taiwan and more on Jan. 24

The new Apple Music Classical app, shown on 3 smartphone screens, offers Apple Music subscribers access to over 5 million classical music tracks.

Image Credits: Apple

Starting on January 24, Apple Music Classical will be available in China, Japan, South Korea, Hong Kong, Taiwan and Macau. Apple made the announcement in an X post today.

Apple’s classical music app launched in most countries in March 2023, with the exception of select markets, including the six above. Russia and Turkey were also excluded from the initial launch, though the company previously said availability would come later.

Additionally, Apple Music Classical was originally an iOS-only app but arrived on Android devices this past summer.

Apple Music Classical gives Apple Music subscribers access, at no extra cost, to over five million tracks, more than 700 curated playlists, exclusive albums, high-quality audio and more. A standard Apple Music subscription costs $10.99 per month in the U.S., whereas a student plan costs $5.99 per month and the Family tier is priced at $16.99 per month.

Apple Music Classical is now available for download to everyone