The VC buying up prized real estate in SF says not to 'listen to agitators'

Image Credits: TechCrunch

VC Neil Mehta, the Greenoaks Capital co-founder tied to a growing number of building purchases across several blocks of San Francisco’s once-glittering Fillmore Street, defended himself on Monday via an op-ed in The San Francisco Standard, saying the moves are solely about revitalizing a “city that has given me more than I could ever give back to it.” 

The piece aims to push back against local politicians, including SF Supervisor Aaron Peskin, who recently held a rally on the shop-lined street, telling onlookers and reporters that Mehta’s buying spree will displace longtime small businesses. (Peskin is also currently running for mayor.)

Mehta – who definitely underestimated the blowback from the purchases – further argues that he’s not looking to make a fast buck on the real estate holdings. They’re being purchased via a real estate fund that he backs through a nonprofit to which he (alone) has donated $100 million. As such, writes Mehta, he has “zero financial interest in these properties,” “will receive nothing in return,” and any proceeds will be “reinvested in the community.”

Meta updates Ray-Ban smart glasses with real-time AI video, reminders, and QR code scanning

Image Credits: Meta

Meta CEO Mark Zuckerberg announced updates to the company’s Ray-Ban Meta smart glasses at Meta Connect 2024 on Wednesday. Meta continued to make the case that smart glasses can be the next big consumer device, announcing some new AI capabilities and familiar features from smartphones coming to Ray-Ban Meta later this year.

Some of Meta’s new features include real-time AI video processing and live language translation. Other announcements — like QR code scanning, reminders, and integrations with iHeartRadio and Audible — seem to give Ray-Ban Meta users the features from their smartphones that they already know and love.

Meta says its smart glasses will soon have real-time AI video capabilities, meaning you can ask the Ray-Ban Meta glasses questions about what you’re seeing in front of you, and Meta AI will verbally answer you in real time. Currently, the Ray-Ban Meta glasses can only take a picture and describe it to you or answer questions about it, but the video upgrade should make the experience more natural, in theory at least. These multimodal features are slated to arrive later this year.

In a demo, users could ask Ray-Ban Meta questions about a meal they were cooking or about city scenes taking place in front of them. The real-time video capabilities mean that Meta’s AI should be able to process live action and respond audibly.

This is easier said than done, however, and we’ll have to see how fast and seamless the feature is in practice. We’ve seen demonstrations of these real-time AI video capabilities from Google and OpenAI, but Meta would be the first to launch such features in a consumer product.

Zuckerberg also announced live language translation for Ray-Ban Meta. English-speaking users can talk to someone speaking French, Italian, or Spanish, and their Ray-Ban Meta glasses should be able to translate what the other person is saying into their language of choice. Meta says this feature is coming later this year and will add more languages later on.

The Ray-Ban Meta glasses are getting reminders, which will allow people to ask Meta AI to remind them about things they look at through the smart glasses. In a demo, a user asked their Ray-Ban Meta glasses to remember a jacket they were looking at so they could share the image with a friend later on.

Meta announced that integrations with Amazon Music, Audible, and iHeart are coming to its smart glasses. This should make it easier for people to listen to music on their streaming service of choice using the glasses’ built-in speakers.

The Ray-Ban Meta glasses will also gain the ability to scan QR codes and phone numbers. Users can ask the glasses to scan a code, and it will immediately open on their phone with no further action required.

The smart glasses will also be available in a range of new Transitions lenses, which respond to ultraviolet light to adjust to the brightness of your surroundings.

Snap previews its real-time image model that can generate AR experiences

Image Credits: Denis Charlet/AFP / Getty Images

At the Augmented World Expo on Tuesday, Snap teased an early version of its real-time, on-device image diffusion model that can generate vivid AR experiences. The company also unveiled generative AI tools for AR creators.

Snap co-founder and CTO Bobby Murphy said onstage that the model is small enough to run on a smartphone and fast enough to re-render frames in real time, guided by a text prompt.

Murphy said that while the emergence of generative AI image diffusion models has been exciting, these models need to be significantly faster to be impactful for augmented reality, which is why Snap’s teams have been working to accelerate its machine learning models.

Snapchat users will start to see Lenses with this generative model in the coming months, and Snap plans to bring it to creators by the end of the year. 

Image Credits: Snap

“This and future real time on device generative ML models speak to an exciting new direction for augmented reality, and is giving us space to reconsider how we imagine rendering and creating AR experiences altogether,” Murphy said.

Murphy also announced that Lens Studio 5.0 is launching today for developers, with access to new generative AI tools that will help them create AR effects much faster than is currently possible, saving them weeks or even months.

AR creators can create selfie Lenses by generating highly realistic ML face effects. Plus, they can generate custom stylization effects that apply a realistic transformation over the user’s face, body and surroundings in real time. Creators can also generate a 3D asset in minutes and include it in their Lenses. 

In addition, AR creators can generate characters like aliens or wizards with a text or image prompt using the company’s Face Mesh technology. They can also generate face masks, textures and materials within minutes.

The latest version of Lens Studio also includes an AI assistant that can answer questions that AR creators may have.

Google pauses its experiment to expand real-money games on the Play Store

Image Credits: Google

Google said today that it has globally paused its experiment to allow new kinds of real-money games on the Play Store, citing the challenges that come with the lack of a central authority to approve such apps in some regions.

In January, the company said it would start allowing real-money apps widely in India, Brazil and Mexico in June. Notably, India has had a pilot program for fantasy sports and Rummy apps since 2022, and Mexico has had one since November 2023.

The company said it would still allow the apps that were part of the pilot program to continue operating on the Play Store in India.

“Expanding our support of real-money gaming apps in markets without a central licensing framework has proven more difficult than expected and we need additional time to get it right for our developer partners and the safety of our users. Google Play remains deeply committed to helping all developers responsibly build new businesses and reach wider audiences across a variety of content types and genres,” a Google spokesperson said in a statement.

The company added that it wants to support real-money games on the Play Store, but is trying to figure out a suitable framework for that. Google also said that it is still working on a new service fee structure for these kinds of games and will need more time to finalize the details.

Google specifically addressed the Indian market, which lacks a central licensing framework identifying what kinds of games are allowed in the country. The company likely doesn’t want to tread the treacherous waters of regulation on its own. Plus, last year, India’s IT ministry paused the formation of a self-regulating body for the gaming industry that might have defined rules about real-money games.

Meta is tagging real photos as 'Made with AI,' say photographers

Image Credits: Jonathan Raa/NurPhoto / Getty Images

In February, Meta said that it would start labeling photos created with AI tools on its social networks. Since May, Meta has regularly tagged some photos with a “Made with AI” label on its Facebook, Instagram and Threads apps.

But the company’s labeling approach has drawn ire from users and photographers after the “Made with AI” label was attached to photos that were not created using AI tools.

There are plenty of examples of Meta automatically attaching the label to photos that were not created with AI, such as this photo of the Kolkata Knight Riders winning the Indian Premier League cricket tournament. Notably, the label is only visible on the mobile apps and not on the web.

An Instagram photo of the Kolkata Knight Riders, labeled as “Made with AI.” Image Credit: Instagram (screenshot)

Plenty of other photographers have raised concerns over their images being wrongly tagged with the “Made with AI” label. Their point is that simply editing a photo with a tool should not subject it to the label.

Former White House photographer Pete Souza said in an Instagram post that one of his photos was tagged with the new label. Souza told TechCrunch in an email that Adobe changed how its cropping tool works, requiring users to “flatten the image” before saving it as a JPEG. He suspects that this step triggered Meta’s algorithm to attach the label.

“What’s annoying is that the post forced me to include the ‘Made with AI’ even though I unchecked it,” Souza told TechCrunch.

A photo taken by Pete Souza, but which Instagram has labeled as “Made with AI.” Image Credit: Instagram (screenshot)

Meta would not answer TechCrunch’s questions on the record about Souza’s experience or about other photographers who said their posts were incorrectly tagged. However, after this story was published, Meta said it is evaluating its approach so that its labels reflect the amount of AI used in an image.

“Our intent has always been to help people know when they see content that has been made with AI. We are taking into account recent feedback and continue to evaluate our approach so that our labels reflect the amount of AI used in an image,” a Meta spokesperson told TechCrunch.

In a February blog post, Meta said it relies on image metadata to decide when to apply the label.

“We’re building industry-leading tools that can identify invisible markers at scale — specifically, the “AI generated” information in the C2PA and IPTC technical standards — so we can label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools,” the company said at that time.
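
For context, the IPTC standard Meta cites marks AI imagery by embedding a “digital source type” of trainedAlgorithmicMedia in a file’s XMP metadata. The sketch below is a hypothetical illustration of what checking for that marker could look like, not Meta’s implementation; a production detector would parse C2PA manifests and XMP properly rather than byte-searching.

```python
# Hypothetical sketch: look for the IPTC "digital source type" value that
# marks AI-generated imagery. Illustrative only; not Meta's actual code.
import sys

# IPTC NewsCodes term for content created by generative AI.
AI_MARKER = b"trainedAlgorithmicMedia"

def has_ai_marker(path: str) -> bool:
    """Rough check: XMP packets are plain XML embedded in the image file,
    so a simple byte search can spot the AI-generated source type."""
    with open(path, "rb") as f:
        return AI_MARKER in f.read()

if __name__ == "__main__":
    for path in sys.argv[1:]:
        verdict = "AI marker found" if has_ai_marker(path) else "no AI marker"
        print(f"{path}: {verdict}")
```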

As PetaPixel reported last week, Meta seems to be applying the “Made with AI” label when photographers use tools such as Adobe’s Generative AI Fill to remove objects.

While Meta hasn’t clarified when it automatically applies the label, some photographers have sided with Meta’s approach, arguing that any use of AI tools should be disclosed. The company told TechCrunch that it is actively working with companies that have AI-powered tools for creation to refine its approach.

“We rely on industry-standard indicators that other companies include in content from their tools, so we’re actively working with these companies to improve the process so our labeling approach matches our intent,” the spokesperson said.

For now, Meta provides no separate labels to indicate whether a photographer used a tool to clean up a photo or used AI to create it entirely. For users, it can be hard to understand how much AI was involved in a photo. Meta’s label specifies that “Generative AI may have been used to create or edit content in this post” — but only if you tap on the label.

Despite this approach, there are plenty of photos on Meta’s platforms that are clearly AI-generated, and Meta’s algorithm hasn’t labeled them. With U.S. elections to be held in a few months, social media companies are under more pressure than ever to correctly handle AI-generated content. 

The story has been updated with Meta’s comments.

Meta’s new AI deepfake playbook: More labels, fewer takedowns

After winning a landmark case against real estate agents, this startup aims to replace them with a flat fee

Landian emerges from stealth

Image Credits: Landian

One of the people who successfully sued the National Association of Realtors (NAR) to change real estate commissions has co-founded a new real estate startup.

It all began in 2017, when Josh Sitzer and his wife listed their home for sale in Kansas City. The couple was frustrated that they had to pay a 3% commission to a buyer’s agent.

“Due to the anti-competitive structure of the industry before the lawsuit, I, as the seller, was effectively coerced into paying 3% of my home’s selling price to a buyer’s agent in order to achieve a successful sale,” he told TechCrunch. “While hiring agents is a choice for many, I don’t believe anyone should be bullied into paying for undesired services due to unfair industry practices,” he added.

Sitzer shared his frustration with his neighbor, who happened to be a lawyer familiar with the subject matter. By 2019, he and other homeowners had filed a class-action lawsuit (Sitzer/Burnett v. the National Association of Realtors) against the NAR. A jury verdict in their favor last year led to a settlement earlier this year that will radically change how home real estate is sold.

The National Association of Realtors agreed to pay $418 million in damages to settle the lawsuits. The association also agreed to abolish the “Participation Rule,” which required sell-side agents to make an offer of compensation to buyer brokers. Between that and the other rule changes agreed to, the real estate market is expected to be considerably transformed.

“I wouldn’t say I had expectations in the beginning, as it was a multi-year battle of ups and downs, but I had enough confidence in my position to commit to taking action,” Sitzer said.

To take advantage of the new landscape, Sitzer has teamed up with Bryce Galen and Neal Batra to found a startup called Landian, which aims to help homebuyers benefit from the rule change that resulted from the lawsuit by offering flat-fee real estate agents on demand. The name Landian blends the words “Land” and “Guardian.”

The startup is emerging from stealth Thursday with an offering in beta, TechCrunch is first to report. The site, according to its founders, lets users import listings from any real estate site and then book a home tour or prepare an offer with a licensed local agent, without owing a commission.

Advances in technology over the years have made it easier for homebuyers to find properties they are interested in touring or buying, so many consider the model of buyer’s agents collecting a 3% commission antiquated. Some buyers have argued that it’s unfair to pay such a large commission to an agent when they did most of the legwork themselves.

Buyers can pay à la carte for Landian’s offering: $49 for each home tour and $199 for an offer prep session. If they want more hand-holding, they can cough up a flat fee of $1,799, which includes up to five home tours and two offer prep sessions, with additional services available à la carte. That flat fee is only due upon closing, so buyers who commit to the agreement but don’t end up buying a house through Landian owe nothing, Galen said.
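
To put those numbers side by side, here is a quick back-of-the-envelope sketch; the prices come from the article, while the comparison and variable names are ours.

```python
# Back-of-the-envelope math on Landian's published prices (from the article).
TOUR_FEE = 49          # per home tour, a la carte
OFFER_PREP_FEE = 199   # per offer prep session, a la carte
FLAT_FEE = 1_799       # up to 5 tours and 2 prep sessions, due only at closing

def a_la_carte(tours: int, preps: int) -> int:
    """Total cost of paying per service."""
    return tours * TOUR_FEE + preps * OFFER_PREP_FEE

# The flat package's full allowance would cost $643 a la carte:
print(a_la_carte(5, 2))  # 5 * 49 + 2 * 199 = 643
# The flat fee's appeal is deferral: it is owed only if the purchase closes.
```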

“With Landian, homebuyers are protected from the new reality of paying exorbitant commissions out of pocket that eat into their closing costs,” said Galen, who previously founded the fintech company Zero, which was acquired by Avant in 2021. “People don’t need to use a buyer’s agent in the same way.”

A lot of industry incumbents such as Redfin and Zillow are not incentivized to change the pricing model, in Galen’s view.

“Because the Zillows and Redfins and this sort of old guard real estate tech companies have thrived and grown in a world where a buyer agent gets 3%, they’re not leading the change here,” Galen told TechCrunch. “It’s a new wave of startups like Landian that we expect will lead change.”

Batra agrees.

“My bet is that, following the NAR settlement, most agents will convert from relying solely on the traditional model based on speculation and higher fees to incorporating the Landian flat-fee model,” he said.

The New York-based startup has not yet raised external capital, operating so far with friends-and-family money. It is in the process of raising a seed round.

