FTC report on predatory social media data hoarding hints at future regulations

Federal Trade Commission illustration

Image Credits: Bryce Durbin / TechCrunch

A new FTC report on how social media and streaming sites collect and monetize their hoards of user data doesn’t really feature a lot of surprises for anyone who’s followed the industry. It’s more helpful to consider it part of a paper trail the agency is laying down in order to justify new regulations in the space.

The report has its roots way back in late 2020, when the FTC ordered nine of the tech companies with the biggest data collection apparatuses to disclose numerous aspects of how their surveillance capitalism business models worked. (The companies: Amazon, Facebook, YouTube, Twitter, Snap, ByteDance, Discord, Reddit, and WhatsApp.)

What data do you collect, on whom, and how long is it kept? If asked to delete it, do you actually do so? What do you use it for, who do you sell it to, and what do they use it for? The questions are quite comprehensive, the better to foreclose prevarication or obfuscation through the withholding of important information.

The responses of the companies were, predictably, evasive, as the FTC’s Bureau of Consumer Protection Director Samuel Levine notes in the preface:

Echoing the way that firms conceal and hide their collection practices, many of the Companies provided the Commission with limited, incomplete, or unhelpful responses that appeared to have been carefully crafted to be self-serving and avoid revealing key pieces of information.

The resulting report details all manner of shenanigans, representing both malice and incompetence. Few of the practices disclosed will surprise anyone at this point, but the executive summary starting on page 9 is a great refresher on all the skulduggery we have come to expect from the likes of these.

Of course, it has been nearly four years since then, and many of the companies have made changes to their practices, or have been fined or otherwise chastised. But despite the elevation of Lina Khan to Chair of the FTC subsequent to this inquiry, there has been no large revision or expansion of rules that lay down bright lines like “thou shalt not sell data on a user’s health challenges to advertisers.”

One exception you might hope for, compliance with the Children’s Online Privacy Protection Act (COPPA), also seems to be an afterthought. As the FTC writes:

…In an apparent attempt to avoid liability under the COPPA Rule, most [social media and video streaming services] asserted that there are no child users on their platforms because children cannot create accounts. Yet we know that children are using SMVSSs. The SMVSSs should not ignore this reality…Almost all of the Companies allowed teens on their SMVSSs and placed no restrictions on their accounts, and collected personal information from teens just like they do from adults.

Meta allegedly ignored obvious violations for years; Amazon settled for $25 million after “flouting” the law; TikTok owner ByteDance is the target of a similar lawsuit filed just last month.

So what’s the point of the report, if all this is known?

Well, the FTC has to do its due diligence too when considering rules that could restrict a bunch of multi-billion-dollar global tech companies. If the FTC in 2020 had simply said, “These companies are out of control, we propose a new rule!” the industries affected could quite justifiably have challenged it by arguing there was no evidence of the kind of practices the rule would prohibit. This kind of thing happened with net neutrality as well: the broadband companies challenged it on (among other things) the basis that the harms were overstated, and won.

Though Chair Khan’s statement accompanying the report suggests it will help inform state and federal lawmakers’ efforts (which is likely true), it is almost certain that this will provide a foundational factual basis on which to build a new rulemaking. The very fact that the companies both admit to doing these things, and that they have been caught red-handed doing others in the meantime, would strengthen any argument for new regulations.

Khan also fends off dissent from within, from Commissioners who (despite voting unanimously to issue the report) accuse it of attempting to regulate speech or dictate business models. She dispatches these arguments with the confidence of someone already drafting a proposal.

That proposal (should it exist) would likely be aimed at trimming the wings of those companies that have come to embody entire industries within themselves. As Khan puts it:

…It is the relative dominance of several of these platforms that gives their decisions and data practices an outsized impact on Americans. When a single firm controls a market and is unchecked by competition, its policies can effectively function as private regulation. A consolidated market is also more susceptible to coordination with–or cooptation by–the government. Unchecked private surveillance by these platforms creates heightened risk of improper surveillance by the state. How these markets are structured can result in greater risks to—or greater protections of—people’s core liberties.

In other words, let’s not leave it to them, and the FTC likely doesn’t intend to.

Communia bets social media can be good for you

Amrapali Gan and Olivia DeRamus of Communia

Image Credits: Communia

Olivia DeRamus is flipping the script: What if scrolling through social media didn’t make us miserable? What if, especially for women, social media could actually make us feel more supported?

“It’s certainly not what mainstream social platforms have been built for,” DeRamus told TechCrunch. But with her social platform Communia, DeRamus is daring to try something that seems counterintuitive.

Communia is both a social platform and a mental health tool; you can post updates in a community feed, or you can privately journal and track your emotions over time. But for users to get vulnerable, they need to feel safe. So the platform is taking an approach that could put off some users but reassure others: People must verify their identity before they can fully use the app.

“It’s a safety feature, but it also kind of sets the tone that this is an intimate space and that you’re safe here,” she said. “So people feel more empowered to talk about their PMDD, or to talk about the difficult life experience they’re going through.”

DeRamus used to work in the nonprofit sector, focused on women’s anti-violence efforts. But she pivoted to tech because she knew first-hand how valuable a supportive online platform could be. When DeRamus was in college, she was sexually assaulted. At a time when opening up to her friends was a challenge, she sought out online communities to help her process her experience.

“I turned to social media because I was trying to figure out what even was happening to me,” she said. “It was pre-MeToo … Where are women even having conversations around sexual assault?”

With Communia, DeRamus is creating the app that she needed at that time. “We don’t have all the answers at Communia, but even at our tiny, tiny amount of funding, with our tiny fraction of resources, the solutions to many of the internet’s issues are oftentimes more simple than big tech platforms would like us to believe,” she said.

And since she isn’t a typical tech founder, DeRamus isn’t using the typical tech playbook. “I’m aware that in tech, most people go very far without knowing how they’re going to make money, and I know BeReal was acquired recently, and they never figured it out,” she said. “That’s not the pathway for Communia, so we really wanted to take our time not only in building a better digital world for women, but also in building a more sustainable type of tech product.”

So Communia’s plan to monetize is to onboard creators. The idea is that fans could pay to access more intimate communities with creators they know, where the emphasis is on the group’s connections as a whole rather than on any individual fan’s relationship with the creator.

To figure out its creator strategy, Communia brought on ex-OnlyFans CEO Amrapali Gan as a strategic partner and growth adviser.

“We’ve been getting to know each other for about a year now. I actually slid into her DMs, cold, no intro, and she responded to me. We met up, and over time, she was advising us on a number of different things,” DeRamus said. “We realized that she felt really passionate about our mission, and that she was helping us in so many aspects of the business that it made sense to bring her on in a formal role that was much more than a traditional adviser.”

Gan worked at OnlyFans as CMO, and then CEO, during a period of explosive growth for the platform. Like DeRamus, Gan emphasized safety in her time at OnlyFans. As a platform most associated with adult content, OnlyFans uses a digital identity platform called Yoti to prevent minors from signing up. Gan left OnlyFans last year to found Hoxton, a creative agency working with startups.

“Her expertise is really allowing us to make even better decisions and grow faster, but also handle a lot of difficult questions that come up for any social media platform with her expertise,” DeRamus said.

Communia remains a relatively young platform. The app participated in TechCrunch’s Startup Battlefield 200 in 2023 but has taken a slow approach to building out its product. DeRamus hasn’t marketed the platform much, though perhaps Gan’s involvement will change that. Slow growth makes sense for a platform like Communia, whose entire premise could collapse if it grew at scale without the content moderation capabilities to keep users safe. Communia is now embarking on its first institutional fundraise, with the hope that the extra funds will help it navigate safe growth.

Founder behind social media app IRL charged with fraud

Abraham Shafi, IRL

Image Credits: IRL

While venture capitalists and the rest of the technorati are off on holiday or attending the Paris Olympics, the U.S. Securities and Exchange Commission and its staff attorneys are keeping busy this summer. 

For the second time this week — and at least the fourth time in the past several months — the SEC has charged a venture-backed founder with fraud.

The SEC said Wednesday it has charged Abraham Shafi, the founder and former CEO of the social media startup known as IRL, with allegedly defrauding investors. The agency says Shafi made false and misleading statements about the company’s growth and concealed that he and his fiancée, Barbara Woortmann, extensively used company credit cards to pay for personal expenses.

IRL was positioned as a viral social media app that took off during the pandemic, but there was one small problem: Its millions of users were fake. IRL, which started as a social calendar app and was building out a messaging-based social network to become the “WeChat of the West,” shut down in June 2023 after an internal investigation by the company’s board found that 95% of the app’s users were “automated or from bots.”

Before IRL’s demise, Shafi had managed to raise $200 million in venture capital. The startup’s last round — a Series C raise of $170 million led by SoftBank’s Vision Fund 2 — pushed IRL into unicorn status with a $1.17 billion valuation. Problems and concerns emerged not long after.

The SEC said in its complaint Wednesday that Shafi portrayed IRL as a viral social media platform that had organically attracted its purported 12 million users. Instead, IRL spent millions of dollars on ads that offered incentives to download the IRL app, according to the SEC.

The SEC alleges Shafi then hid those expenses. The complaint also alleges that Shafi didn’t disclose to investors that he and Woortmann charged hundreds of thousands of dollars to the company’s credit cards for clothing, home furnishings and travel.

“As we alleged, Shafi took advantage of investors’ appetite for investments in the pre-IPO technology space and fraudulently raised approximately $170 million by lying about IRL’s business practices,” said Monique C. Winkler, Director of the SEC’s San Francisco Regional Office. “Investors in this space should continue to be vigilant.”

Earlier this week, the SEC charged BitClout founder Nader Al-Naji with fraud and unregistered offering of securities, claiming he used his pseudonymous online identity “DiamondHands” to avoid regulatory scrutiny while he raised over $257 million in cryptocurrency. BitClout, a buzzy crypto startup, was backed by high-profile VCs such as a16z, Sequoia, Chamath Palihapitiya’s Social Capital, Coinbase Ventures and Winklevoss Capital. 

In June, the SEC charged Ilit Raz, CEO and founder of the now-shuttered AI recruitment startup Joonko, with defrauding investors of at least $21 million. The agency alleged Raz made false and misleading statements about the quantity and quality of Joonko’s customers, the number of candidates on its platform and the startup’s revenue.

The agency has also gone after venture firms in recent months. In May, the SEC charged Robert Scott Murray and his firm Trillium Capital LLC with a fraudulent scheme to manipulate the stock price of Getty Images Holdings Inc. by announcing a phony offer by Trillium to purchase Getty Images. 

UK's internet regulator warns social media platforms over risks of inciting violence

Riot police hold back protesters after disorder broke out on July 30, 2024 in Southport, England.

Image Credits: Christopher Furlong / Getty Images

The U.K.’s internet regulator, Ofcom, has published an open letter to social media platforms raising concerns about the use of their tools to incite violence. The development follows days of violent civil unrest and rioting in towns and cities around the United Kingdom after the slaying of three young girls in a knife attack in Southport on July 30.

Ofcom has powers to sanction video platforms for failing to protect their users from content that’s likely to incite violence or hatred. Under the U.K.’s newer Online Safety Act (OSA), Ofcom’s powers to enforce content moderation standards online have been further expanded to cover all sorts of platforms, including social media services.

Penalties under the OSA can reach up to 10% of global annual turnover — so, on paper, the regulator’s toolbox contains hefty new powers to clamp down on serious content moderation failures.

However, Ofcom is still in the process of implementing the regime. Enforcement on social media platforms is not expected to kick in before 2025, as the regulator continues to consult on guidance for how firms should comply.

Parliament will also need to approve these rules before enforcement starts. Currently, there is no clear legal route for Ofcom to compel social media firms to tackle hateful conduct that may be whipping up violent social unrest.

Nonetheless, in recent days there have been calls for Ofcom’s enforcement timeline to be sped up in light of the civic unrest, and for the regulator to be more proactive in dealing with social media giants.

Speaking to BBC Radio 4’s World at One program on Tuesday, former minister Damian Collins urged Ofcom to “put the tech companies on notice.”

“Communications on social media platforms that incite violence, create genuine fear people have of being the victim of violent acts, that incite racial hatred, these are already regulatory offences under the Act,” Collins told the BBC. “What Ofcom needs to be doing now is putting the tech companies on notice to say they will be audited using the powers Ofcom has to look at what they did to try and dampen down the spread of extremist content and disinformation related to that extremist content on their platforms.

“[The tech companies] have the power to do that… and my concern is, it’s not just they’re not doing that, they are actively amplifying this content and making the problem worse.”

Concern over the role of social media platforms, including Elon Musk’s X (formerly Twitter), was sparked almost immediately by the swift spread of disinformation about the identity of the minor responsible for killing the three girls.

U.K. media outlets were initially restricted from reporting the identity of the suspect who police had arrested because he is under the age of 18. A judge later lifted the restriction, naming the teen as a British-born citizen called Axel Rudakubana, but not before the information vacuum had been exploited by far-right activists using platforms like X to spread false claims that the killer was a Muslim asylum seeker.

Activists also used social media sites and messaging apps such as Telegram to organize fresh unrest. The first violent disturbance took place in Southport the day after the killings. Since then unrest has spread to a number of towns and cities in England and Northern Ireland, with incidents including looting, arson and racist attacks. Several police officers were injured in the clashes.

Musk personally waded into the fray, engaging with content posted on X by far-right influencers intent on using the tragedy to further a divisive political agenda. That includes X user Tommy Robinson (also known as Stephen Yaxley-Lennon), whose account X reinstated last year, lifting a 2018 Twitter ban that had been imposed for breaching the platform’s hateful conduct policies in posts targeting Muslims.

In one of his own posts remarking on the unrest in the U.K., Musk suggested “civil war is inevitable.” In another, he attacked U.K. Prime Minister Keir Starmer, insinuating that his government is responsible for so-called two-tier policing, a right-wing conspiracy theory claiming that police come down harder on right-wing offenders than on others.

Ministers have rubbished Musk’s claim and disputed the framing of the violent public disturbances as protests, instead branding the individuals involved “thugs” who are engaged in “criminal acts.”

The government has also vowed to bring the full force of the law to bear on anyone involved. But that still leaves the tricky question of how to handle major tech platforms that are being used to spread content intended to whip up violence and to organize fresh unrest. That specifically includes X, where the platform’s owner is himself amplifying the divisive dog whistling.

Ofcom’s public letter, which is attributed to Gill Whitehead, its group director for online safety, represents the weakest level of regulatory intervention possible, lacking a forceful call for platforms to act. There is only a suggestion to platforms that “you can act now.”

But it may be all Ofcom feels able to do at this point.

“When we publish our final codes of practice and guidance, later this year, regulated services will have three months to assess the risk of illegal content on their platforms, and will then be required to take appropriate steps to stop it appearing, and act quickly to remove it when they become aware of it,” writes Whitehead, underscoring the enforcement gap the regulator is saddled with until the OSA fully applies, barring new action by the government to speed up the implementation timeline.

“Some of the most widely-used online sites and apps will in due course need to go even further — by consistently applying their terms of service, which often include banning things like hate speech, inciting violence, and harmful disinformation,” the Ofcom letter continues, pointing to some of the incoming duties social media firms will be expected to comply with once the OSA is fully up and running.

The regulator goes on to say it expects “continued engagement” with companies during the OSA implementation period.

“[W]e welcome the proactive approaches that have been deployed by some services in relation to these acts of violence across the UK,” Ofcom adds, ending with a suggestion that platforms shouldn’t wait for the “new safety duties” to kick in but can instead “act now” to ensure their services are “safer for users.”

But without a fully implemented regime to force platforms to clean up their act, Ofcom’s letter may be all too easy for certain chaos-peddlers to ignore.

Linktree acquires social media scheduler tool Plann

Plann's auto posting feature for LinkedIn users

Image Credits: Plann

Link-in-bio platform Linktree announced Thursday that it has acquired social media scheduling tool Plann for an undisclosed amount.

While Sydney, Australia-headquartered Plann will continue to operate as usual for now, Linktree is set to integrate its social scheduling tool into its platform in the coming months. This means that users will soon have access to features like social media planning and auto-posting for platforms like TikTok, Facebook, LinkedIn and Instagram. It’s unclear whether Linktree plans to incorporate all of Plann’s features, such as its AI-powered caption generator. 

Linktree didn’t specify how much the new scheduling tool will cost users. Plann users will transition to Linktree once the integration officially launches.

The company said that the entire Plann team, employees and contractors alike, is joining Linktree, with founder Christy Laurence coming on board as a permanent team member.

According to the company, social scheduling is one of Linktree’s most requested features, as it helps creators streamline their content distribution process and save time.

“The 50 million creators and businesses using Linktree are busy producing content, and many are juggling multiple ventures – we strive to simplify how they manage, grow, and monetize their audience. Adding social scheduling to Linktree will make it even simpler to distribute inspiring content, so our Linkers can get back to doing what they love – creating,” Linktree co-founder and CEO Alex Zaccaria said in a statement. 

This marks Linktree’s fourth acquisition, following its purchases in recent years of link-in-bio competitors Koji and Bento, as well as Odesli, an automated music link aggregation platform.

Separately, Linktree launched its beta social commerce offering earlier this year, giving creators the ability to add storefronts to their link-in-bio pages, with a 12% to 15% commission taken on sales.

Linktree touts over 50 million users.

a16z offers social media tips after its founder’s ‘attack’ tweet goes viral

Image Credits: Bloomberg / Getty Images

On Friday, the venture firm Andreessen Horowitz tweeted out a link to its guide on how to “build your social media presence,” which features advice for founders on things like “how to find your people” and “how often to post.”

The timing of the tweet was ironic given the social media frenzy on Friday after founder Ben Horowitz posted on X. He was upset over an article in The San Francisco Standard about his family’s political donations and accused his rival, VC Michael Moritz — who owns The Standard — of orchestrating the story. (The Standard’s executive editor, Jon Steinberg, said he assigned the article and that Moritz had nothing to do with it.) 

Andreessen Horowitz’s Marc Andreessen has also had a storied, and sometimes combative, relationship with social media. A Meta board member, he quit Twitter in 2016 because it had grown too toxic for him but returned and quickly rebuilt a huge following. Earlier this year, he and VC Vinod Khosla fought about AI on the X platform (formerly Twitter), in which a16z is an investor.
