NightCafe

Before Midjourney, there was NightCafe — and it's still kicking

NightCafe

Image Credits: NightCafe

Elle Russell, co-founder of Cairns, Australia-based NightCafe, which offers a suite of AI-powered art-creating tools, prefers to avoid the spotlight.

“I like to remain hidden behind my monitors,” she told me in a recent interview.

NightCafe is similarly low profile.

The company, which Russell helped her partner, Angus Russell, launch five years ago, doesn’t get the same publicity as some of its rivals, like Midjourney. Yet NightCafe — an entirely bootstrapped venture that’s profitable “most months,” according to Elle — has enormous reach. Its over 25 million users have created nearly a billion images with its tools.

To pull back the curtain on one of the web’s oldest generative art marketplaces, I spoke with Elle about NightCafe’s origins, some of the challenges the platform faces, and where she and Angus see it evolving from here.

A website for wall art

As NightCafe’s founding story goes, Angus had recently moved into a semi-detached house in Sydney’s Inner West area and hadn’t had a chance to decorate it with much artwork. “You should get some art; the walls are bare,” remarked one guest. And while Angus agreed, he couldn’t find any prints online that spoke to him.

So in 2019, Angus, who had a degree in design and who’d co-founded a few design-focused startups, began a side hustle: a website where people could buy and sell AI-generated art. He called it NightCafe, after Vincent van Gogh’s “The Night Café.”

It was an abject failure.

People liked creating the art, which NightCafe didn’t charge for. But they didn’t want to pay for wall prints, which was the only way the site made money.

Then one fateful week, Angus noticed that his hosting bill was a few hundred dollars higher than usual. Someone had generated thousands of images in just a few days. He implemented a credit system to prevent that from happening again.

Soon after, Angus’ inbox was flooded with requests to add an option to buy more credits, which he did. Practically overnight, the site broke even.

It was at this point that Elle joined NightCafe to run the business side of the operation. “I have two undergraduate bachelor degrees, in business and communications, and I’m also a CPA,” she said. “It made sense.”

NightCafe’s viral success

NightCafe got its second big break a couple of years later, after OpenAI announced DALL-E in early 2021.

DALL-E, OpenAI’s first image-generating AI model, was state-of-the-art for the time. OpenAI opted not to release it, but it wasn’t long before enthusiasts managed to reverse-engineer some of the methods behind DALL-E and build open source models of their own.

Angus, who’d been closely following the developments, quickly worked to get one of the more popular DALL-E alternatives, VQGAN+CLIP, on NightCafe. He shelled out for hundreds of GPUs to scale it up.

The investment soon paid for itself.

Images created with NightCafe’s VQGAN+CLIP tool blew up on Reddit; NightCafe made $17,000 in a single day. Angus decided to quit his job at Atlassian to work on the platform full time.

A model marketplace

The NightCafe of today is quite different from the NightCafe of several years ago.

The platform still runs some models on its own servers, including recent versions of Stable Diffusion and Ideogram. But it also integrates APIs from AI vendors that offer them, delivering what amounts to custom interfaces for third-party generators.

NightCafe
Selecting a model from NightCafe’s gallery.
Image Credits: NightCafe

That is to say, NightCafe layers tools on top of models from elsewhere, including OpenAI, Google and Black Forest Labs. And, as it has since 2019, the site provides printing services for customers who want mugs, T-shirts and prints of any art they generate.

“We’re a UI and community company,” Elle said. “NightCafe doesn’t have any internal AI or machine learning capability; we aggregate the available image models and make them fun and accessible to use.”
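For a sense of what that looks like under the hood, here is a minimal sketch, in Python, of the aggregator pattern Elle describes: one common interface in front of a self-hosted model and a couple of third-party APIs, so the UI layer never cares which vendor actually renders the image. The class names, provider keys and stubbed responses are hypothetical illustrations, not NightCafe’s actual code.

from abc import ABC, abstractmethod

class ImageProvider(ABC):
    """Common interface the UI layer talks to, regardless of vendor."""

    @abstractmethod
    def generate(self, prompt: str) -> bytes:
        ...

class SelfHostedProvider(ImageProvider):
    """Stands in for a model (e.g., Stable Diffusion) run on the platform's own servers."""

    def generate(self, prompt: str) -> bytes:
        # A real implementation would call an in-house inference server here.
        return f"[self-hosted image for: {prompt}]".encode()

class VendorAPIProvider(ImageProvider):
    """Stands in for a third-party API such as OpenAI, Google or Black Forest Labs."""

    def __init__(self, vendor: str):
        self.vendor = vendor

    def generate(self, prompt: str) -> bytes:
        # A real implementation would call the vendor's hosted API over HTTPS.
        return f"[{self.vendor} image for: {prompt}]".encode()

PROVIDERS: dict[str, ImageProvider] = {
    "stable-diffusion": SelfHostedProvider(),
    "dall-e": VendorAPIProvider("OpenAI"),
    "flux": VendorAPIProvider("Black Forest Labs"),
}

def generate_image(model: str, prompt: str) -> bytes:
    """Route a user's request to whichever backend they picked in the UI."""
    return PROVIDERS[model].generate(prompt)

print(generate_image("flux", "a quiet cafe at night, oil painting"))

Supporting another vendor then amounts to registering one more entry, which is roughly what an aggregator strategy looks like on the engineering side.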

In NightCafe’s chatrooms, users can share their art and collaborate, or kick off “AI art challenges.” The platform also hosts official competitions where people can submit their creations for featured placement.

NightCafe
Chatrooms on NightCafe.
Image Credits: NightCafe

Last year, NightCafe introduced fine-tuning, which allows users to train a model to re-create a specific style, face or object by uploading example images. Fine-tuned models on NightCafe are subject to certain restrictions; for example, they can’t be trained on images showing nudity, celebrities or people under the age of 18, and they must be manually approved by NightCafe’s moderation team. (That’s to mitigate the risk of deepfakes.)
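To make that gatekeeping concrete, here is a hypothetical sketch, in Python, of a fine-tune submission flow with the two stages described above: automated checks first, then a human review queue. The dataclass, field names and placeholder check are illustrative assumptions, not NightCafe’s real pipeline.

from dataclasses import dataclass, field

@dataclass
class FineTuneRequest:
    user_id: str
    subject: str                           # e.g. "watercolor style" or "my dog"
    image_paths: list[str] = field(default_factory=list)
    status: str = "pending_checks"

def automated_checks_pass(req: FineTuneRequest) -> bool:
    # Placeholder: a real pipeline would run image classifiers here to catch
    # nudity, celebrity likenesses and apparent minors before anything trains.
    return len(req.image_paths) > 0

def submit(req: FineTuneRequest, review_queue: list[FineTuneRequest]) -> str:
    if not automated_checks_pass(req):
        req.status = "rejected"
        return req.status
    # Nothing is trained until a human moderator approves the request.
    req.status = "awaiting_manual_review"
    review_queue.append(req)
    return req.status

queue: list[FineTuneRequest] = []
print(submit(FineTuneRequest("u123", "watercolor style", ["example1.png"]), queue))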

NightCafe
The terms users must agree to before submitting a fine-tuned model.
Image Credits: NightCafe

NightCafe is free to use, but only up to a certain number of images. Packs of image-generation credits can be purchased à la carte, and select features are gated behind a subscription. For fees ranging from $4.79 to $50 per month (undercutting Midjourney and Civitai), users get priority access to more-capable models, the ability to tip creators, the aforementioned fine-tuning capability and a higher image-generation limit.
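For a rough feel of how credit gating like this works, here is a toy sketch in Python; the credit costs, balances and model tiers are invented numbers, not NightCafe’s actual pricing.

class CreditAccount:
    def __init__(self, balance: int):
        self.balance = balance

    def charge(self, model: str) -> bool:
        # Hypothetical pricing: fancier models burn more credits per image.
        cost = {"basic": 1, "premium": 3}.get(model, 1)
        if self.balance < cost:
            return False               # out of credits; time to buy a pack
        self.balance -= cost
        return True

    def top_up(self, credits: int) -> None:
        # An à la carte pack or a monthly subscription grant lands here.
        self.balance += credits

acct = CreditAccount(balance=5)
print(acct.charge("premium"), acct.balance)   # True 2
print(acct.charge("premium"), acct.balance)   # False 2: blocked until a top-up
acct.top_up(100)
print(acct.charge("premium"), acct.balance)   # True 99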

It’s a model that’s worked exceptionally well for NightCafe.

A source close to the company tells TechCrunch that NightCafe is raking in $4 million in annualized revenue at a gross margin of nearly 50%, leaving roughly $2 million a year in profit after expenses, including payroll for its nine staff.

Roughly a million people are visiting NightCafe each month, Elle says, and 20,000 have a subscription.

“Any AI art generator online is competing for money from the same people, though our users skew older than a lot of the industry,” she said. “We consider our biggest competitors to be other apps that have a strong community: Leonardo, Civitai and Midjourney.”

By opting not to train its own AI (and moderating fine-tuning), NightCafe is attempting to steer clear of the legal stand-off that’s ensnared many of the AI vendors whose models it aggregates.

Stability AI, Midjourney and a pair of other model providers, DeviantArt and Runway, face a class action lawsuit filed by artists who allege that the vendors engaged in copyright infringement by training their models on art without permission. (The vendors claim a fair use defense.) Some of the suit’s claims have been dismissed, but a federal judge allowed the case to move into the discovery stage early this month.

NightCafe may be protected by U.S. safe harbor law: Section 230 of the Communications Decency Act broadly shields platforms from liability for what their users post, while the DMCA’s safe harbor covers copyright-infringing material (like copyright-violating artwork) so long as platforms remove it upon request. Australia, NightCafe’s home base, has the Broadcasting Services Act, which plays a similar role but imposes additional penalties for failing to expeditiously remove “extreme violent material.”

Of course, should a court rule that the models NightCafe uses are essentially plagiarism machines, that’d be disruptive to the company’s business. But what about copyright as it pertains to NightCafe’s users and the art they generate?

NightCafe
Creating an image with NightCafe.
Image Credits: NightCafe

According to the platform’s terms of service, users retain the copyright for their AI-generated works in countries that recognize these types of works as copyrightable (like the U.S.) — at least as long as there’s permission to use any third-party branding, logos or trademarks within.

A post last May on NightCafe’s blog sheds more light on this: “Legitimate creators recognize and acknowledge where the inspiration used to create their images derived from another source. AI art creation tools are also evolving quickly, with systems in development to support the ongoing creative environment while ensuring that users can only access source material with the [consent] of the original artist — in much the same way that a royalty-free photography image may be permitted for use provided the creator is referenced.”

In other words, in NightCafe’s view, it’s the users, not NightCafe, who have to cover their bases. And if they don’t, the platform won’t defend them from the wrath of IP holders.

But it seems that IP holders don’t intimidate many users.

Cursory searches of NightCafe bring up images of Pokémon and Donald Duck, celebrities like Britney Spears, brands such as Coca-Cola and LEGO and artwork in the style of artists like Stanley “Artgerm” Lau. None appears to have been generated with the blessing of the copyright holders.

NightCafe
Image Credits: NightCafe

“Users can also report content that got through automated filters, and we have a team of human moderators working 24/7 on moderating flagged content,” Elle said when asked about this.

Political policies and deepfakes

As my interview with Elle segued to moderation, we dove into NightCafe’s general content guidelines, particularly its policies around politics and deepfakes.

Platforms, including Midjourney, have taken the step of banning users from generating images of political figures like Donald Trump and Kamala Harris leading up to the U.S. presidential election. But NightCafe hasn’t — and it doesn’t intend to, according to Elle.

“Generating images of Trump and other political and public figures is allowed,” she said. “However, we don’t want NightCafe to be a place for political arguments.”

How can NightCafe have it both ways? The platform won’t stop users from generating political images, or from publishing them elsewhere, but it will flag those images for review if a user tries to post them to NightCafe’s public feeds.

Even so, it’s trivial to find images of Biden in a wheelchair, Trump holding a gun and questionable Harris memes in NightCafe’s public gallery. With polls showing that a majority of Americans are concerned about the spread of AI propaganda and deepfakes, NightCafe certainly hasn’t made enforcement easier on itself.

NightCafe
Image Credits: NightCafe

As for what content is or isn’t allowed: It depends.

“Political bait,” glorification of divisive figures, and purposely unflattering or demeaning images are all no-gos (in spite of what my searches turned up). Most content the average person would find harmful or offensive is also prohibited; NightCafe’s community standards call out racist and homophobic images, spam, offensive swear words, terrorism themes, images mocking people with disabilities, and depictions of hate groups and symbols.

These subjects may technically be disallowed. But type a term like “suicide bomber” into NightCafe’s search bar and there’s a decent chance you’ll come across at least one image that seems to fly in the face of the platform’s rules.

Elle tells me that it’s ultimately up to moderators to interpret NightCafe’s guidelines and that repeatedly publishing images in a banned category, or circumventing automated filters, could result in a warning or ban.

NightCafe has a rather small moderation team given its size (and the fact that the site’s users generate at least 700,000 images a day): five paid moderators and 20 volunteer moderators who are compensated with premium NightCafe features. The paid moderators monitor content, while the volunteers handle comments, NightCafe’s chatrooms and the fine-tuned model queue.

Considering the poor working conditions content moderators are often subject to, I asked Elle for more information about NightCafe’s moderator recruitment practices. She said that the paid team is run through an outsourcing firm based in Indonesia (she wouldn’t name which) and overseen by an internal NightCafe staff member.

NightCafe
A few results for the search term “Coca-Cola.”
Image Credits: NightCafe

All paid moderators get a “market wage,” Elle said. (In Jakarta, the minimum wage was around $325 per month as of early 2024.)

Similar to Civitai, NightCafe has a policy carve-out for “NSFW” content: short of outright nudity, but permissive of suggestive poses (with “bare breasts and bums”), blood and gore, graphic depictions of war, and images of illegal drug use (e.g., Mickeys smoking blunts). This is somewhat dependent on the model; OpenAI’s DALL-E 2 has a stricter set of filters, for instance.

Why allow NSFW images despite the risks, and without any form of watermarking (which might soon be legally mandated in California) to prevent abuse? To the first question, Elle says that banning them would stifle “artistic freedom.”

“We do allow mild artistic nudity and adult themes on the site when tagged as NSFW, but not outright porn. We’ve tried our best to ‘draw the line’ for our users in our community standards so that they understand what’s allowed and what’s not,” she added. “We pride ourselves on our community and being the ‘hub’ for all things AI art.”

From my few searches, NightCafe doesn’t seem overrun with rule-breaking content. But I couldn’t help but notice that most of the “sexy” images featured women, an unfortunate pattern on platforms such as these.

Where NightCafe goes from here

Like many startups in the AI-powered art-generating space, NightCafe appears to be in a bit of a holding pattern. It’s bringing new models online, including video-generating models like Stable Video Diffusion. But it’s not rocking the boat too much, the unspoken reason being that a single court decision or regulation could force NightCafe to rethink its entire operation.

Still, Elle seems to think NightCafe has legs and doesn’t need outside investment.

“The majority of our competitors raised money over the last two years while image generation was hot,” Elle said. “Pretty much all of them were, or are, offering image generation at a loss to acquire users. Not all of them can succeed; NightCafe pioneered the intersection of AI and art but also championed the idea that creativity using advanced technology should be accessible for all.”

There are no plans for an enterprise NightCafe offering, despite how lucrative such a product could prove to be (moderation roadblocks aside). Elle says that the focus will remain on building a community and “social hub” atop the latest generative models.

“One challenge that the industry faces is that image-generation models are getting so good, they’ll soon be commoditized,” she said. “What do companies compete on then? At NightCafe, we’ve chosen to focus on being an aggregator of the top models to provide the best variety and highest level of technology.”

We’ll see how it navigates the choppy waters from here.


There's an AI 'brain drain' in academia


Image Credits: Getty Images

As one might expect, lots of students who graduate with a doctorate in an AI-related field end up joining an AI company, whether a startup or Big Tech giant.

According to Stanford’s 2021 Artificial Intelligence Index Report, the number of new AI PhD graduates in North America entering the AI industry post-graduation grew from 44.4% in 2010 to around 48% in 2019. By contrast, the share of new AI PhDs entering academia dropped from 42.1% in 2010 to 23.7% in 2019.

Private industry’s willingness to pay top dollar for AI talent is likely a contributing factor.

Jobs from the biggest AI ventures, like OpenAI and Anthropic, list eye-popping salaries ranging from $700,000 to $900,000 for new researchers, per data from salary negotiation service Rora. Google has reportedly gone so far as to offer large grants of restricted stock to incentivize leading data scientists.

While AI graduates are no doubt welcoming the trend — who wouldn’t kill for a starting salary that high? — it’s having an alarming impact on academia.

A 2019 survey co-authored by researchers at the Hebrew University of Jerusalem and Cheung Kong Graduate School of Business in Beijing found that close to 100 AI faculty members left North American universities for industry jobs between 2018 and 2019 — an outsized cohort in the context of a specialized computer science field. Between 2004 and 2019, Carnegie Mellon alone saw 16 AI faculty members depart, and the Georgia Institute of Technology and University of Washington lost roughly a dozen each, the study found.

The effects of the mass faculty exodus have been far-reaching, with the Hebrew University and Cheung Kong survey concluding that it’s had an especially stark impact on AI companies founded by students graduating from universities where those professors used to work. Per the survey, there’s a chilling effect on entrepreneurism in the years following faculty departures at a college, with the impact intensifying when the AI professors who leave are replaced by faculty from lower-ranked schools or untenured professors.

That’s perhaps why AI companies and labs are increasingly recruiting talent from industry — not universities.

A new report from VC firm SignalFire suggests that the percentage of AI hires coming from top schools such as Caltech, Harvard, Princeton, Yale and Stanford — or those with doctorates — has dropped significantly from a peak of around 35% in 2015. In 2023, the percentage was closer to 18%, as AI companies began to look for and hire more non-graduate candidates.

“We discovered a high concentration of top AI talent amongst a few startups when historically we saw this clustering at public giants like Google,” Ilya Kirnos, SignalFire’s co-founder and CTO, told TechCrunch+ via email. “That led us to look at where top AI talent was moving across the industry, and whether talent was more correlated with top universities or top startups.”

To arrive at its findings, SignalFire identified a subset of top AI talent via two routes: academic publications and open source project contributions. (Kirnos acknowledges that many AI researchers don’t publish papers or contribute to open source, but says that the report is meant to show a “representative slice” of the AI talent ecosystem rather than the whole picture.)

SignalFire cross-referenced authors at major AI conferences like NeurIPS and ICML with university employment listings to identify AI faculty, and then matched the contributors to popular AI software projects on GitHub with public employment feeds (like LinkedIn) to identify top overall contributors.
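As a rough illustration of that kind of cross-referencing, here is a toy join, in Python, between a conference author list and an employment table. The names, employers and classification heuristic are fabricated for the example and merely stand in for whatever matching SignalFire actually performed.

conference_authors = {"A. Researcher", "B. Scientist", "C. Engineer"}

employment = {
    "A. Researcher": "Stanford University",
    "B. Scientist": "Hugging Face",
    "C. Engineer": "Midjourney",
}

faculty, industry = [], []
for author in sorted(conference_authors):
    employer = employment.get(author, "unknown")
    # Crude heuristic in place of a real academic-vs-industry classifier.
    if "University" in employer or "Institute" in employer:
        faculty.append((author, employer))
    else:
        industry.append((author, employer))

print(f"{len(industry)} of {len(conference_authors)} matched authors work in industry")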

Kirnos says that SignalFire’s data shows a growing preference on the part of AI companies (e.g., Hugging Face, Stability AI, Midjourney) for alternatives to prestigious graduate hiring pools, with one of those alternatives being the open research communities spawned by emerging AI tradecraft (see: prompt engineering). And this, Kirnos claims, is a good thing for its potential to lower the industry’s barrier to entry for non-PhDs.

“This will create demand for new ways to assess recruiting candidates for real-world software engineering experience,” Kirnos said. “Instead of filtering by university brand names, we may see employers seek out new ways to screen applicants for expertise in building functional products out of the stack the company actually uses.”

Diversity is in the eye of the beholder, of course.

According to the Stanford study, AI PhD programs were decidedly homogenous as of 2019, with white students accounting for 45% of AI PhD graduates. But so were AI teams in industry. In its State of AI in 2022 report, McKinsey found that the average share of employees identifying as racial or ethnic minorities developing AI solutions was a paltry 25%, and 29% of organizations had no minority employees working on AI whatsoever.

Kirnos stands behind his assertion, but does suggest that universities could look to provide students more opportunities to apply research in real-world scenarios that more closely mirror work experience in the sector.

“Engineering is increasingly moving away from building whole products from scratch in a vacuum,” he added, “and toward cobbling together stacks of AI models, APIs, enterprise tools and open source software.”

This writer’s hopeful that universities sit up and pay attention to the alarming trend, and then do something about it. Prestigious AI doctorate programs deserve criticism for their exclusivity, certainly, and for the ways that exclusivity concentrates power and accelerates inequality. But I for one am loath to embrace a future where industry, through hiring and other means of influence, commands increasing control over the AI field’s direction.

ChatGPT

Is there anything AI can’t do?

ChatGPT

Image Credits: STEFANI REYNOLDS/AFP / Getty Images

Welcome to Startups Weekly — your weekly recap of everything you can’t miss from the world of startups. Sign up here to get it in your inbox every Friday.

I don’t code much anymore, but I’ve been hacking away with a small murder of Arduinos (clearly, like for crows, the plural of Arduino is a “murder”). My C skills are hella rusty, and ChatGPT has been a surprisingly helpful tool for coding and debugging. Being able to throw a pile of code, along with the error message the compiler spits out, at the robots and have them tell me (1) that I really shouldn’t be coding and (2) how to fix my n00b mistakes has been pretty refreshing.

Of course, none of this will come as a surprise to anyone who’s been paying attention, but the vast number of next-generation startups coming our way indicates that AI’s tentacles are reaching far and wide.

The truly mind-boggling thing is how early we are in our journey with AI. The current-generation technology is the tech equivalent of a toddler, and all the mediocre reviews various generative AI software is getting are tantamount to judging a fish by its ability to climb a tree.

“I’m surprised no one has done a parody of actually reviewing a three-month-old baby, and saying all it does is poop in its pants, and it can’t even finish complete sentences,” Steve Blank said in an interview with TechCrunch last year. “Copilot has changed the life of every programmer, period. It has probably increased productivity by 50%, and that is if you are using it poorly.”

I’m eager to see where it’s all going.

On that note . . .

It’s all AI, all the time

ChatGPT, Microsoft Bing Chat and AI chat applications are seen on a mobile device
Image Credits: Jaap Arriens/NurPhoto / Getty Images

FlowGPT emerges as the digital embodiment of the Wild West, a place where the law is more of a suggestion and safety measures are those annoying things you click past to get to the good stuff. Founded by a duo who seemingly decided that what the world really needed was a marketplace for AI apps that range from the mildly useful to the potentially nefarious, FlowGPT is the playground for anyone who thought, “Yes, I do need an app that narrates horror stories as if I’m a scared girl from a movie, but also, can it teach me how to code malware?” Investors, in a move that screams “What could possibly go wrong?” have thrown $10 million at the venture, proving once again that in the tech world, ethics can be as flexible as your funding is solid.

Inkitt looked at the dwindling reading habits of the world and said, “Challenge accepted.” With a bold plan to become the Disney of the 21st century, they’re throwing $37 million at the problem, because why not? The company’s strategy? Use AI to sift through self-published stories on their app, pick the ones that scream potential, and then tweak them into bestsellers.

AI will build your website: While Wix and Squarespace reign supreme with their user-friendly drag-and-drop interfaces, Armenia’s 10Web is aiming to tame the beast that is WordPress.

AI will read your news for you: Particle.news, the brainchild of ex-Twitter engineers, is stepping into the ring with a fresh take on digesting the news. Armed with $4.4 million in seed funding, they have a vision to offer a “multi-perspective” news reading experience.

AI will code for you: StarCoder 2 is a family of code-generating models with up to 15 billion parameters, trained on a whopping 67.5 terabytes of data spanning approximately 619 programming languages. (A quick usage sketch follows below.)
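For the curious, here is a rough sketch of trying StarCoder 2 locally with Hugging Face’s transformers library, assuming a recent transformers release with Starcoder2 support, the accelerate package installed for automatic device placement, and hardware beefy enough for the 15-billion-parameter checkpoint (smaller 3B and 7B variants are lighter).

from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder2-15b"   # swap in a smaller checkpoint to save memory
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

# Code models like this one are typically prompted with a snippet to complete.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))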

Most interesting fundraises this week

hand holding a money bag
Image Credits: Liia Galimzianova / Getty Images

Fervo Energy is making sure the geothermal sector is heating up, raising $221 million as it pioneers deep into Earth’s crust to harness its heat. This Houston-based enterprise is leveraging directional drilling techniques, a legacy from the oil and gas industry, to significantly extend the reach of its wells. By outfitting these wells with fiber-optic cables and an array of sensors, Fervo is mapping subsurface heat patterns and monitoring well performance with unprecedented precision.

Initia is stepping onto the blockchain scene with a bold mission to tackle the notorious complexity and fragmentation plaguing the development of blockchain applications. The company aims to bridge the gap between the multichain universe and app-specific blockchains, offering a streamlined approach to interoperability and simplification. The company’s approach seeks to remove the technical barriers for app developers, aspiring to become the App Store of the crypto world, where accessing and building applications is as straightforward as possible. With a recent $7.5 million seed financing, the company is going heavy on the accelerator.

Money for photos: Photoroom, a Paris-based AI photo-editing app, has successfully closed its latest funding round, securing $43 million at a $500 million valuation.

Money for money: Embat has successfully secured $16 million in Series A funding. The company aims to revolutionize the way finance teams operate by digitizing and automating processes such as accounting, bank reconciliation, and corporate treasury management.

Money for AI: Mistral AI announced a significant development in its journey with the unveiling of a new large language model named Mistral Large, aimed at competing with giants like OpenAI’s GPT-4. This announcement was coupled with news of a strategic partnership and investment from Microsoft.

This week’s big trend: What goes up must come down

Cartoon rocket taking off and crashing.
Image Credits: Bohdan Skrypnyk / Getty Images

In the VC ecosystem, a new trend is emerging where investors are keenly backing startups designed to assist other startups in their shutdown processes. This trend is gaining momentum against the backdrop of a high startup failure rate and a significant slowdown in venture capital funding post-2021’s boom. Startups like Sunset and SimpleClosure are stepping in to offer streamlined, less painful ways for companies to wind down, handling everything from legal and financial logistics to asset disposal and capital return. These services are becoming increasingly vital as the number of startups facing closure rises, with over 3,200 venture-backed U.S. companies shutting down last year alone.

Google done goofed: Google recently found itself in a rather awkward situation, as Gemini depicted the Founding Fathers of the United States (known to be white slave owners) as a multicultural group, including people of color. This incident has sparked widespread ridicule and criticism, highlighting the challenges of balancing diversity and historical accuracy in AI-generated content.

Apple killed the self-driving car: Apple is scuttling its secretive, long-running effort to build an autonomous electric car, executives announced in a short meeting with the team Tuesday morning. The company is likely cutting hundreds of employees from the team and all work on the project has stopped.

I’m the captain now: Byju Raveendran, the founder of eponymous edtech group Byju’s, told employees on Saturday that he continues to remain the chief executive of the startup and that rumors of his firing have been “greatly exaggerated,” a day after a shareholder group voted to remove him at an emergency general meeting.

Other unmissable TechCrunch stories . . .

Every week, there’s always a few stories I want to share with you that somehow don’t fit into the categories above. It’d be a shame if you missed ’em, so here’s a random grab bag of goodies for ya:

Please someone buy our cars: Toyota’s recent offer on the 2023 Mirai Limited, a fuel-cell vehicle, epitomizes the hydrogen struggle the automotive industry finds itself in. The deal effectively reduces the vehicle’s price from $66,000 to $11,000, factoring in discounts and free hydrogen fuel incentives.

Just bumblin’ along: Bumble, once a dominant player in the online dating scene, is currently bumbling through turbulent waters, including major losses and a 350-person layoff.

No, Gmail isn’t going away: An old TechCrunch story got a ton of additional traffic when an online hoax claimed that Google was sunsetting Gmail. That is, of course, not the case.

Apple pours more resources into AI: Apple CEO Tim Cook is promising that Apple will “break new ground” on GenAI this year. He made the pronouncement during the company’s annual shareholders meeting.

The tiger grows 65 billion stripes: Payments infrastructure giant Stripe said today it has inked deals with investors to provide liquidity to current and former employees through a tender offer at a $65 billion valuation.

There’s a real appetite for a fintech alternative to QuickBooks

man considering alternative funding options

Image Credits: anyaberkut / Getty Images

Welcome to TechCrunch Fintech! This week, we’re looking at the continued fallout from Synapse’s bankruptcy, how Layer wants to disrupt SMB accounting, and much more!

To get a roundup of TechCrunch’s biggest and most important fintech stories delivered to your inbox every Tuesday at 7:00 a.m. PT, subscribe here.

The big story

The prospects for troubled banking-as-a-service (BaaS) startup Synapse went from bad to worse last week when a U.S. Trustee filed an emergency motion asking to convert the company’s debt reorganization Chapter 11 bankruptcy into a liquidation Chapter 7 due to “gross mismanagement” of its estate. Apparently, up to 20 million fintech depositors are at risk as a result of the bankruptcy. As Fintech Business Weekly’s Jason Mikula reports, “Numerous end users of fintechs that have had their ability to access their funds frozen shared the devastating impact it has had on their lives with the court and the hundreds of attendees dialed in to the hearing [on Friday].” Sadly, the fallout from Synapse’s collapse continues.

Analysis of the week

There seems to be a real appetite for an alternative to QuickBooks, the legacy accounting software for SMBs, judging by the attention this story on Layer’s $2.3 million raise received. Layer is leaning into what it describes as a better user experience through embedded accounting. Its customers are companies that work with small and medium-sized businesses and want to offer accounting and bookkeeping features inside their own products. Better Tomorrow Ventures led the pre-seed investment into the startup and was joined by a group of executives at companies such as Square, Plaid, Unit and Check.

Dollars and cents

PayHOA, a previously bootstrapped Kentucky-based startup that offers software for self-managed homeowner associations (HOAs), is an example of how real-world problems can translate into opportunity. It just raised a $27.5 million Series A round in an environment where nearly $30 million Series A rounds are no longer common.

Buy now, pay later services have become so ubiquitous that BNPL may as well just be another way to say “debt.” But in Mexico, where BNPL platform Aplazo operates, a large underbanked population makes BNPL more like an alternative to cash. A recent $45 million Series B round led by QED Investors should help it further expand its reach, both virtual and physical.

Speaking of QED, it also led a $10 million round into Kudos, which uses artificial intelligence to figure out consumer spending habits so it can then provide more personalized financial advice.

Aeropay, a provider of pay-by-bank solutions for businesses that started out helping cannabis retailers and gaming companies with their payments, is now encroaching on Visa’s and Mastercard’s territory, since pay-by-bank transactions route around the card networks. And it’s just raised $20 million in a Series B round.

What else we’re writing

The Consumer Financial Protection Bureau (CFPB) is suing SoLo Funds, a fintech company that enables peer-to-peer lending, alleging that the company used “digital dark patterns” to deceive borrowers and illegally took fees while advertising to consumers that there were no fees.

High-interest headlines

CFPB takes action against Chime Financial for illegally delaying consumer refunds

Deel partners with Carta to offer equity tax withholding features

Insurtech Cover Genius raises $80M in Series E funding  (TC covered its last raise here)

Yendo raises $165 million for ‘vehicle-secured’ credit card

FinLocker raises $17M in Series B funding round

Bunq enters insurance market via new partnership

Square adds new integrated solutions for restaurants

Embedded accounting startup Teal raises $8M

ICYMI: Baselayer raises $6.5M in seed funding 

Want to reach out with a tip? Email me at [email protected] or send me a message on Signal at 408.204.3036. You can also send a note to the whole TechCrunch crew at [email protected]. For more secure communications, click here to contact us, which includes SecureDrop (instructions here) and links to encrypted messaging apps.