PayPal Ventures leads $20M round into Gynger, which offers companies 'buy now, pay later' for technology purchases


Image Credits: Andrej Vodolazhskyi / Shutterstock

Gynger, a platform that lends capital to companies for technology purchases, has raised $20 million in a Series A round led by PayPal Ventures, it told TechCrunch exclusively.

The financing brings the New York-based startup’s total venture capital raised to $31.7 million and included participation from Gradient Ventures (Google’s AI-focused venture fund), Velvet Sea Ventures, BAG Ventures and Deciens Capital.

In addition to the equity raise, Gynger has closed on a $25 million debt facility from Community Investment Management (CIM), with an agreement that allows it to borrow up to a total of $100 million.

Gynger was incubated in June 2021 out of m]x[v Capital, a New York City-based early-stage venture fund founded by Mark Ghermezian. Ghermezian also previously founded Braze, a cloud-based customer engagement platform for multichannel marketing. There, he told TechCrunch at the time of the company’s last raise, he saw how difficult it was to sell software and — on the flip side — how difficult it was for buyers to purchase it.

Gynger works with both buyers and sellers of technology. It claims to help companies “finance, pay and manage” all of the expenses associated with buying technology, including software, hardware, cloud and infrastructure. It does this by providing businesses with access to unsecured lines of credit, which Ghermezian says gives them the ability to extend their runway and preserve cash.

Gynger says it uses advanced artificial intelligence and data analytics to underwrite and approve credit for customers. It automatically detects technology spend to recommend financing opportunities to best fit the needs of both buyers and sellers, according to Ghermezian.

The company claims that its application process takes less than 10 minutes, that companies get credit decisions the next day, “and immediate access to funds once approved,” with a range of payment term options. Gynger pays its customers’ vendors on their behalf, and the customers pay it back later. Think of it as a buy now, pay later service for companies purchasing technology.

On the flip side, Gynger offers vendors selling technology a way to offer embedded financing through an accounts receivable platform that provides “flexible” payment terms, Ghermezian said.

“This equips vendors with an extremely effective tool for accelerating sales, pulling revenue forward, and shortening key financial metrics,” Ghermezian added. The vendors get paid annually upfront by Gynger while their customer pays Gynger back “however they’d like.”

The market is large, Ghermezian said, pointing to a recent Forrester research report estimating that global tech spend will reach $4.7 trillion in 2024.

All that spend is translating into growth for Gynger. Revenue is up over 700% year-over-year, according to Ghermezian. However, it only started selling in the second quarter of 2023, so that growth is from a small base. The company has also increased its customer base by 5x year-over-year, Ghermezian said. He declined to reveal hard revenue figures, saying only the company was on “a clear, near-term path to profitability.” To date, Gynger has facilitated thousands of payments for its customers across hundreds of vendors, including AWS, Google Cloud, Okta, Cisco, Salesforce, HubSpot, Oracle, GitHub, Snowflake and Amplitude.

Like all BNPL business-model companies, the company charges interest on its loans and also makes money from buyers on loan origination fees, as well as through interchange fees from its card program. It also generates revenue from vendors via service fees and, later this year, it plans to generate revenue from SaaS/platform fees, according to Ghermezian.

Image Credits: Gynger

At the time of the company’s last raise, Ghermezian told TechCrunch that it saw Gynger competing closely with fintechs like Pipe and Capchase, both of which started out by providing businesses funding outside of equity and venture debt. For its part, Capchase in May of 2023 expanded into the buy now, pay later space after launching Capchase Pay. But today, Ghermezian said he doesn’t view the companies as competitors anymore. There are companies that do parts of what Gynger is doing. Some have gone down the SaaS procurement path, like Tropic, Zip and Vendr, Ghermezian also noted. Then there are companies such as Brex and Ramp that offer corporate expense cards to use for purchases, including technology. But he views Bill.com as Gynger’s main competitor.

Presently, the company has 25 employees, up from 13 a year ago.

Gynger will use its new capital to scale its operations and fund the loans.

“As we mature, we are seeing that our customer base is growing from early-stage startups to more mature companies, spanning from Series A to pre-IPO,” Ghermezian said. “We are also tapping into other verticals outside of technology, such as real estate, retail, healthcare and AI.”

PayPal Ventures Managing Partner James Loftus believes that Gynger’s model gives it a “unique advantage.”

“We’re betting that embedding payments and financing in both the buying and selling experience for SaaS will allow Gynger to drive massive network effects and create deep relationships that will ultimately allow the company to realize their goal of becoming the next big AR (accounts receivable)/AP (accounts payable) platform,” he said. “Access to embedded financing solutions that ‘work’ for both buyers and sellers simply have not existed at scale until Gynger.”


The Way app offers a chance to meditate alongside a Zen master


Image Credits: The Way

A new app called The Way is aiming to help people explore the deeper side of meditation through a single, structured path guided by an authorized Zen master. Founded by uncle-and-nephew duo Henry Shukman and Jack Shukman, The Way wants to help people move beyond modern mindfulness practices offered by popular meditation apps like Headspace and Calm, and guide them deeper into the teaching of millennia-old meditation traditions.

Henry, who is one of five authorized Zen masters in the Sanbo Zen lineage in the world, had to pivot to online meditation teaching during the COVID-19 pandemic. He found that he was able to create a digital foundation for meditation practice, and that people were responding well to it. 

One of his students happened to be Kevin Rose, a tech founder and partner at True Ventures. Rose floated the idea of creating a meditation app that featured Henry’s teachings. Since Henry didn’t have a candidate for CEO in mind at the time, the idea was put on the back burner.

Around the same time, Henry started connecting with his nephew Jack around meditation, after Jack had left his job following 10 years in investment banking and consulting. Jack, who originally believed that meditation was a waste of time, started to embrace the practice and saw it had a positive impact on his life.

“Henry went from being just my boring Uncle Henry to someone I could turn to for guidance and advice,” Jack told TechCrunch. “I have an uncle who is not only a meditation teacher, but actually a Zen master. It was a privilege, but at the time, I was also searching for the right tool and was trying different meditation apps and I couldn’t find anything that really would consistently develop my practice over the long term.”

Jack was turning to Henry more and more for guidance and wanted to share this clarity and reassurance he was getting from his uncle. 

Image Credits: The Way

The duo decided to create the app in 2022, and now The Way has closed a $1.4 million seed funding round led by Rose and True Ventures, with participation from a few angel investors.

The core idea behind The Way is to allow you to have the experience of studying with a Zen master every day for a year as if you were sitting side-by-side. Once you open up the app, you’re greeted by Henry sitting in his Zen school, welcoming you with an overview of how the teachings will work and progress. From there, you get access to a series of daily guided meditations interspersed with a few talks.

Unlike popular meditation apps like Calm and Headspace, The Way takes a linear, step-by-step approach to meditation.

“The big gamble that we took in our UX design was that all of that content would appear in a linear step by step path, so the user never has to make a choice,” Jack said. “That was inspired by all the user research we did at the start of the project where we spoke to meditators who use different meditation apps. Every single one reported feeling some degree of decision paralysis with the apps that they already use, because every meditation app on the market uses what we call the Netflix model, where you open it and there’s just a thousand options presented to you, different teachers, different courses, different styles, and it can just be really overwhelming.”

Image Credits: The Way

The Way strips all of that away and guides you sequentially through a curriculum led by Henry that is designed to lead you into deeper aspects of meditation.

And while the duo believes that apps like Calm and Headspace are great for introducing people to a basic level of meditation, The Way can help people graduate from that introductory level and take the next step in deepening their practice. 

“A lot of modern mindfulness, which is of course a fantastic thing, focuses on stress reduction and finding states of calm and balance, but there is further to go than that,” Henry told TechCrunch. “What that looks like is discovering an incredible kind of interconnectedness that we’re all part of. It’s finding flow states in meditation where time goes quiet. We don’t feel effort so much it becomes effortless, very easy and very kind of fulfilling in and of itself, just to be. So those deeper kinds of discoveries from practice, we really wanted them to be in the app.”

The Way offers the first 30 meditation sessions for free. Users who want to unlock the entire curriculum can do so through a $9.99 monthly or $74.99 yearly subscription. People who are interested in the app but can’t afford the subscription can apply for The Way’s scholarship program. 

The app is available on iOS and Android.

Elon Musk unexpectedly offers support for California's AI bill

Image Credits: BRITTA PEDERSEN/POOL/AFP / Getty Images

Elon Musk has come out in support of California’s SB 1047, a bill that requires makers of very large AI models to create and document safeguards against those models causing serious harm.

“This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill,” he wrote on X on Monday afternoon. “For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk.”

Musk — whose own large AI model company, xAI, would be subject to SB 1047’s requirements despite his pledge to leave California — has warned of the dangers of runaway AI in the past.

Meanwhile, rival outfit OpenAI recently announced it opposes the bill, supporting an alternative bill instead.


EU's ChatGPT taskforce offers first look at detangling the AI chatbot's privacy compliance


Image Credits: Didem Mente/Anadolu Agency / Getty Images

A data protection taskforce that’s spent over a year considering how the European Union’s data protection rulebook applies to OpenAI’s viral chatbot, ChatGPT, reported preliminary conclusions Friday. The top-line takeaway is that the working group of privacy enforcers remains undecided on crux legal issues, such as the lawfulness and fairness of OpenAI’s processing.

The issue is important as penalties for confirmed violations of the bloc’s privacy regime can reach up to 4% of global annual turnover. Watchdogs can also order non-compliant processing to stop. So — in theory — OpenAI is facing considerable regulatory risk in the region at a time when dedicated laws for AI are thin on the ground (and, even in the EU’s case, years away from being fully operational).

But without clarity from EU data protection enforcers on how current data protection laws apply to ChatGPT, it’s a safe bet that OpenAI will feel empowered to continue business as usual — despite the existence of a growing number of complaints its technology violates various aspects of the bloc’s General Data Protection Regulation (GDPR).

For example, this investigation from Poland’s data protection authority (DPA) was opened following a complaint about the chatbot making up information about an individual and refusing to correct the errors. A similar complaint was recently lodged in Austria.

Lots of GDPR complaints, a lot less enforcement

On paper, the GDPR applies whenever personal data is collected and processed — something large language models (LLMs) like OpenAI’s GPT, the AI model behind ChatGPT, are demonstrably doing at vast scale when they scrape data off the public internet to train their models, including by syphoning people’s posts off social media platforms.

The EU regulation also empowers DPAs to order any non-compliant processing to stop. This could be a very powerful lever for shaping how the AI giant behind ChatGPT can operate in the region if GDPR enforcers choose to pull it.

Indeed, we saw a glimpse of this last year when Italy’s privacy watchdog hit OpenAI with a temporary ban on processing the data of local users of ChatGPT. The action, taken using emergency powers contained in the GDPR, led to the AI giant briefly shutting down the service in the country.

ChatGPT only resumed in Italy after OpenAI made changes to the information and controls it provides to users in response to a list of demands by the DPA. But the Italian investigation into the chatbot, including crux issues like the legal basis OpenAI claims for processing people’s data to train its AI models in the first place, continues. So the tool remains under a legal cloud in the EU.

Under the GDPR, any entity that wants to process data about people must have a legal basis for the operation. The regulation sets out six possible bases — though most are not available in OpenAI’s context. And the Italian DPA already instructed the AI giant it cannot rely on claiming a contractual necessity to process people’s data to train its AIs — leaving it with just two possible legal bases: either consent (i.e. asking users for permission to use their data); or a wide-ranging basis called legitimate interests (LI), which demands a balancing test and requires the controller to allow users to object to the processing.

Since Italy’s intervention, OpenAI appears to have switched to claiming it has a LI for processing personal data used for model training. However, in January, the DPA’s draft decision on its investigation found OpenAI had violated the GDPR. No details of the draft findings have been published, though, so we have yet to see the authority’s full assessment on the legal basis point. A final decision on the complaint remains pending.

A precision ‘fix’ for ChatGPT’s lawfulness?

The taskforce’s report discusses this knotty lawfulness issue, pointing out ChatGPT needs a valid legal basis for all stages of personal data processing — including collection of training data; pre-processing of the data (such as filtering); training itself; prompts and ChatGPT outputs; and any training on ChatGPT prompts.

The first three of the listed stages carry what the taskforce couches as “peculiar risks” for people’s fundamental rights — with the report highlighting how the scale and automation of web scraping can lead to large volumes of personal data being ingested, covering many aspects of people’s lives. It also notes scraped data may include the most sensitive types of personal data (which the GDPR refers to as “special category data”), such as health info, sexuality, political views etc, which requires an even higher legal bar for processing than general personal data.

On special category data, the taskforce also asserts that just because it’s public does not mean it can be considered to have been made “manifestly” public — which would trigger an exemption from the GDPR requirement for explicit consent to process this type of data. (“In order to rely on the exception laid down in Article 9(2)(e) GDPR, it is important to ascertain whether the data subject had intended, explicitly and by a clear affirmative action, to make the personal data in question accessible to the general public,” it writes on this.)

To rely on LI as its legal basis in general, OpenAI needs to demonstrate it needs to process the data; the processing should also be limited to what is necessary for this need; and it must undertake a balancing test, weighing its legitimate interests in the processing against the rights and freedoms of the data subjects (i.e. people the data is about).

Here, the taskforce has another suggestion, writing that “adequate safeguards” — such as “technical measures”, defining “precise collection criteria” and/or blocking out certain data categories or sources (like social media profiles), to allow for less data to be collected in the first place to reduce impacts on individuals — could “change the balancing test in favor of the controller”, as it puts it.

This approach could force AI companies to take more care about how and what data they collect to limit privacy risks.

“Furthermore, measures should be in place to delete or anonymise personal data that has been collected via web scraping before the training stage,” the taskforce also suggests.

OpenAI is also seeking to rely on LI for processing ChatGPT users’ prompt data for model training. On this, the report emphasizes the need for users to be “clearly and demonstrably informed” such content may be used for training purposes — noting this is one of the factors that would be considered in the balancing test for LI.

It will be up to the individual DPAs assessing complaints to decide if the AI giant has fulfilled the requirements to actually be able to rely on LI. If it can’t, ChatGPT’s maker would be left with only one legal option in the EU: asking citizens for consent. And given how many people’s data is likely contained in training data-sets, it’s unclear how workable that would be. (Deals the AI giant is fast cutting with news publishers to license their journalism, meanwhile, wouldn’t translate into a template for licensing Europeans’ personal data, as the law doesn’t allow people to sell their consent; consent must be freely given.)

Fairness & transparency aren’t optional

Elsewhere, on the GDPR’s fairness principle, the taskforce’s report stresses that privacy risk cannot be transferred to the user, such as by embedding a clause in T&Cs that “data subjects are responsible for their chat inputs”.

“OpenAI remains responsible for complying with the GDPR and should not argue that the input of certain personal data was prohibited in first place,” it adds.

On transparency obligations, the taskforce appears to accept OpenAI could make use of an exemption (GDPR Article 14(5)(b)) from the obligation to notify individuals about data collected about them, given the scale of the web scraping involved in acquiring data-sets to train LLMs. But its report reiterates the “particular importance” of informing users their inputs may be used for training purposes.

The report also touches on the issue of ChatGPT ‘hallucinating’ (making information up), warning that the GDPR “principle of data accuracy must be complied with” — and emphasizing the need for OpenAI to therefore provide “proper information” on the “probabilistic output” of the chatbot and its “limited level of reliability”.

The taskforce also suggests OpenAI provides users with an “explicit reference” that generated text “may be biased or made up”.

On data subject rights, such as the right to rectification of personal data — which has been the focus of a number of GDPR complaints about ChatGPT — the report describes it as “imperative” people are able to easily exercise their rights. It also observes limitations in OpenAI’s current approach, including the fact it does not let users have incorrect personal information generated about them corrected, but only offers to block the generation.

However, the taskforce does not offer clear guidance on how OpenAI can improve the “modalities” it offers users to exercise their data rights. It just makes a generic recommendation that the company apply “appropriate measures designed to implement data protection principles in an effective manner” and “necessary safeguards” to meet the requirements of the GDPR and protect the rights of data subjects. Which sounds a lot like ‘we don’t know how to fix this either’.

ChatGPT GDPR enforcement on ice?

The ChatGPT taskforce was set up, back in April 2023, on the heels of Italy’s headline-grabbing intervention on OpenAI, with the aim of streamlining enforcement of the bloc’s privacy rules on the nascent technology. The taskforce operates within a regulatory body called the European Data Protection Board (EDPB), which steers application of EU law in this area. It is important to note, though, that DPAs remain independent and are competent to enforce the law on their own patch, since GDPR enforcement is decentralized.

Despite the entrenched independence of DPAs to enforce locally, there is clearly some nervousness and risk aversion among watchdogs about how to respond to a nascent technology like ChatGPT.

Earlier this year, when the Italian DPA announced its draft decision, it made a point of noting its proceeding would “take into account” the work of the EDPB taskforce. And there are other signs watchdogs may be more inclined to wait for the working group to weigh in with a final report, maybe in another year’s time, before wading in with their own enforcement actions. So the taskforce’s mere existence may already be influencing GDPR enforcement against OpenAI’s chatbot, by delaying decisions and putting investigations of complaints into the slow lane.

For example, in a recent interview in local media, Poland’s data protection authority suggested its investigation into OpenAI would need to wait for the taskforce to complete its work.

The watchdog did not respond when we asked whether it’s delaying enforcement because of the ChatGPT taskforce’s parallel workstream. A spokesperson for the EDPB, meanwhile, told us the taskforce’s work “does not prejudge the analysis that will be made by each DPA in their respective, ongoing investigations.” But they added: “While DPAs are competent to enforce, the EDPB has an important role to play in promoting cooperation between DPAs on enforcement.”

As it stands, there looks to be a considerable spectrum of views among DPAs on how urgently they should act on concerns about ChatGPT. So, while Italy’s watchdog made headlines for its swift interventions last year, Ireland’s (now former) data protection commissioner, Helen Dixon, told a Bloomberg conference in 2023 that DPAs shouldn’t rush to ban ChatGPT — arguing they needed to take time to figure out “how to regulate it properly”.

It is likely no accident that OpenAI moved to set up an EU operation in Ireland last fall. The move was quietly followed, in December, by a change to its T&Cs — naming its new Irish entity, OpenAI Ireland Limited, as the regional provider of services such as ChatGPT — setting up a structure whereby the AI giant was able to apply for Ireland’s Data Protection Commission (DPC) to become its lead supervisor for GDPR oversight.

This regulatory-risk-focused legal restructuring appears to have paid off for OpenAI as the EDPB ChatGPT taskforce’s report suggests the company was granted main establishment status as of February 15 this year — allowing it to take advantage of a mechanism in the GDPR called the One-Stop Shop (OSS), which means any cross border complaints arising since then will get funnelled via a lead DPA in the country of main establishment (i.e., in OpenAI’s case, Ireland).

While all this may sound pretty wonky, it basically means the AI company can now dodge the risk of further decentralized GDPR enforcement, like we’ve seen in Italy and Poland, as it will be Ireland’s DPC that gets to decide which complaints get investigated, and how and when, going forward.

The Irish watchdog has gained a reputation for taking a business-friendly approach to enforcing the GDPR on Big Tech. In other words, ‘Big AI’ may be next in line to benefit from Dublin’s largess in interpreting the bloc’s data protection rulebook.

OpenAI was contacted for a response to the EDPB taskforce’s preliminary report but at press time it had not responded.

Alibaba staffer offers a glimpse into building LLMs in China

Close up of hands typing code on a keyboard with code appearing on monitor in front of the keyboard.

Image Credits: gorodenkoff / Getty Images

Chinese tech companies are gathering all sorts of resources and talent to narrow their gap with OpenAI, and experiences for researchers on both sides of the Pacific Ocean can be surprisingly similar. A recent X post from an Alibaba researcher offers a rare glimpse into the life of developing large language models at the e-commerce firm, which is among a raft of Chinese internet giants striving to match the capabilities of ChatGPT.

Binyuan Hui, a natural language processing researcher at Alibaba’s large language model team Qwen, shared his daily schedule on X, mirroring a post by OpenAI researcher Jason Wei that went viral recently.

The parallel glimpse into their typical day reveals striking similarities, with wake-up times at 9 a.m. and bedtime around 1 a.m. Both start the day with meetings, followed by a period of coding, model training and brainstorming with colleagues. Even after getting home, they continue to run experiments at night and ponder ways to enhance their models right up until bedtime.

The notable differences are in how they choose to characterize leisure time. Hui, the Alibaba employee, mentioned reading research papers and browsing X to catch up on “what is happening in the world.” And as a commentator pointed out, Hui doesn’t have a glass of wine after he arrives home like Wei does.

This intense work regime is not unusual in China’s current LLM space, where tech talent with top university degrees are joining tech companies in droves to build competitive AI models.

To a certain extent, Hui’s demanding schedule seems to reflect a personal drive (or at least the appearance of one on social media) to match, if not outpace, Silicon Valley companies in the AI space. It seems different from the involuntary “996” work hours associated with more “traditional” types of Chinese internet businesses that involve heavy operations, such as video games and e-commerce.

Indeed, even renowned AI investor and computer scientist Kai-Fu Lee puts in an incredible amount of effort. When I interviewed Lee about his newly minted LLM unicorn 01.AI in November, he admitted that late hours were the norm, but employees were willingly working hard. That day, one of his staff messaged him at 2:15 a.m. to express his excitement about being part of 01.AI’s mission.

Outward displays of intense work ethic speak to the urgency of the remits laid out by tech firms in the country, and subsequently the speed with which those firms are now rolling out LLMs.

Qwen, for example, has open sourced a series of foundation models trained on both English and Chinese data. The largest of these has 72 billion parameters, a figure that roughly indicates how much knowledge a model can absorb from its training data and, by extension, its ability to generate contextually relevant responses. (For some context, OpenAI’s GPT-3 is believed to have 175 billion parameters, while GPT-4, its latest LLM, is rumored to have around 1.7 trillion. That said, what a particular LLM is designed to do is arguably a more important key to its value than a high parameter count.)

The team also has been quick to introduce commercial applications. Last April, Alibaba began integrating Qwen into its enterprise communication platform DingTalk and online retailer Tmall.

No definite leader has emerged in China’s LLM space so far, and venture capital firms and corporate investors are spreading their bets across multiple contenders. Besides building its own LLM in-house, Alibaba has been aggressively investing in startups such as Moonshot AI, Zhipu AI, Baichuan and 01.AI.

Facing competition, Alibaba has been trying to carve out a niche, and its multilingual move could become a selling point. In December, the company released an LLM for several Southeast Asian languages. Called SeaLLM, the model is capable of processing information in Vietnamese, Indonesian, Thai, Malay, Khmer, Lao, Tagalog and Burmese. Through its cloud computing business and acquisition of e-commerce platform Lazada, Alibaba has established a sizable footprint in the region and can potentially introduce SeaLLM to these services down the road.

How China is building a parallel generative AI universe

SambaNova now offers a bundle of generative AI models

The world of big data is seen in this complex and vibrantly colored visual representation of data.

Image Credits: John Lund / Getty Images

SambaNova, an AI chip startup that’s raised over $1.1 billion in VC money to date, is gunning for OpenAI — and rivals — with a new generative AI product geared toward enterprise customers.

SambaNova today announced Samba-1, an AI-powered system designed for tasks like text rewriting, coding, language translation and more. The company’s calling the architecture a “composition of experts” — a jargony name for a bundle of 56 open-source generative AI models.

Rodrigo Liang, SambaNova’s co-founder and CEO, says that Samba-1 allows companies to fine-tune models for multiple AI use cases while avoiding the challenges of implementing AI systems ad hoc.

“Samba-1 is fully modular, enabling companies to asynchronously add new models … without eliminating their previous investment,” Liang told TechCrunch in an interview. “Similarly, they’re iterative, extensible and easy to update, giving our customers room to adjust as new models are integrated.”

Liang’s a good salesperson, and what he says sounds promising. But is Samba-1 really superior to the many, many other AI systems for business tasks out there, not least OpenAI’s models?

It depends on the use case.

The ostensible main advantage of Samba-1 is that, because it’s a collection of models trained independently rather than a single large model, customers have control over how prompts and requests to it are routed. A request made to a large model like GPT-4 travels one direction — through GPT-4. But a request made to Samba-1 travels one of 56 directions (to one of the 56 models making up Samba-1), depending on the rules and policies a customer specifies.

This multi-model strategy also reduces the cost of fine-tuning on a customer’s data, Liang claims, because customers only have to worry about fine-tuning individual or small groups of models rather than a massive model. And — in theory — it could result in more reliable (e.g. less hallucination-driven) responses to prompts, he says, because answers from one model can be compared with the answers from the others — albeit at the cost of added compute.
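Conceptually, that kind of rule-based routing layer amounts to a policy lookup in front of the model pool. The sketch below is a minimal illustration of the idea; the model names, task categories and keyword rules are all hypothetical, not SambaNova’s actual configuration.

```python
# A minimal sketch of rule-based prompt routing, in the spirit of a
# "composition of experts." All model names and rules are hypothetical.

def classify(prompt: str) -> str:
    """Crude keyword-based task classifier (illustrative only)."""
    lowered = prompt.lower()
    if any(k in lowered for k in ("function", "compile", "bug", "stack trace")):
        return "coding"
    if any(k in lowered for k in ("translate", "french", "spanish")):
        return "translation"
    return "general"

# Customer-specified policy: which expert model handles which task.
ROUTING_POLICY = {
    "coding": "code-expert-13b",
    "translation": "translate-expert-7b",
    "general": "general-chat-70b",
}

def route(prompt: str) -> str:
    """Return the name of the expert model this prompt should go to."""
    return ROUTING_POLICY[classify(prompt)]
```

Under this toy policy, `route("Translate this sentence to French")` would resolve to `"translate-expert-7b"`, while an unmatched prompt falls through to the general-purpose model. A production router would presumably use a learned classifier rather than keyword matching, but the customer-controlled policy table is the part that distinguishes this design from sending everything through one large model.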

“With this … architecture, you don’t have to break bigger tasks into smaller ones and so you can train many smaller models,” Liang said, adding that Samba-1 can be deployed on-premises or in a hosted environment depending on a customer’s needs. “With one big model, your compute per [request] is higher so the cost of training is higher. [Samba-1’s] architecture collapses the cost of training.”

I’d counter that plenty of vendors, including OpenAI, offer attractive pricing for fine-tuning large generative models, and that several startups, including Martian and Credal, provide tools to route prompts among third-party models based on manually programmed or automated rules.

But what SambaNova’s selling isn’t novelty per se. Rather, it’s a set-it-and-forget-it package: a full-stack solution with everything included, down to the AI chips, to build AI applications. And to some enterprises, that might be more appealing than what else is on the table.

“Samba-1 gives every enterprise their own custom GPT model, ‘privatized’ on their data and customized for their organization’s needs,” Liang said. “The models are trained on our customers’ private data, hosted on a single [server] rack, with one-tenth the cost of alternative solutions.”

'A Brief History of the Future' offers a hopeful antidote to cynical tech takes

Image Credits: PBS

Cynicism is a quality taken almost for granted in tech journalism, and certainly we are as guilty as the next publication. But both the risk and the promise of technology are real, and a new documentary series tries to emphasize the latter while not discounting the former. “A Brief History of the Future,” hosted by Ari Wallach, also has the compelling quality of, as a PBS production, being completely free.

The thesis of the show is simply that, while the dangers and disappointments of technology (often due to its subversion by business interests) are worth considering and documenting, the other side of the coin also should be highlighted not out of naiveté but because it is genuinely important and compelling.

I talked with Wallach, who embraces the “futurist” moniker unapologetically from the start, suggesting we run the risk of blinding ourselves to the transformative potential of tech, startups and innovation. (Full disclosure: I met Ari many, many years ago when he was going to Berkeley with my brother, though this is quite coincidental.)

“The theory of the case is that when you ask 10 Americans ‘what do you think about the future?’ nine out of 10 are gonna say, I’m afraid of it, or they’re going to say it’s all about technology. Those are two things that this show in some ways is an intervention for,” explained Wallach.

The future, he said, isn’t just what a Silicon Valley publicist tells you, or what “Big Dystopia” warns you of, or even what a TechCrunch writer predicts.

In the six-episode series, he talks with dozens of individuals, companies and communities about how they’re working to improve and secure a future they may never see. From mushroom leather to ocean cleanup to death doulas, Wallach finds people who see the same scary future we do but are choosing to do something about it, even if that thing seems hopelessly small or naïve.

“We wanted to bring the future into people’s living rooms that don’t normally think about it in a critical, open-minded way, in terms of the futures that you create,” he said. “People just don’t get exposed to it. Because at the current moment, there are a whole host of reasons that, culturally, to be critical and cynical is to come across as smart and aware. But now we’re at a point that if we continually do that, we’re going to lose the thread. We’re going to lose the narrative of the entire larger human project.”

The point, in other words, isn’t to pretend the problems don’t exist, but rather that there are enough people talking about the problems already. Shouldn’t someone focus on what people are actually doing to solve them?

Of course the expected themes of AI, automation and climate are there, but also food, art and architecture, and more philosophical concerns like governance and value.

The most common objection my cynical mind raised while watching was the classic “how does this scale?” And Wallach was quick to admit that much of it doesn’t.

“How does it scale, and how do you monetize it — this is kind of the Silicon Valley-ization, the Sand Hill Road of looking at the future. And there’s a time and a place for that! It may go forward, it may not. That’s not the point. We tried to inform and educate around how to think differently about tomorrow, and here are examples of people doing it. It’s a model behavior and action to give people a sense of agency. Like, are we all going to live in 3D-printed homes? Maybe not. But if we think about the 2-3 billion unhoused people on the planet and how we’re going to house them, this is potentially going to be a part of it,” he continued.

“It’s about solution centricity that isn’t purely VC solution centricity. It’s about, how do we solve the problems that we have today through an opportunity lens, as opposed to a ‘we’re all gonna die’ lens, which is usually what the headlines are, right?”

Wallach’s thesis earned his crew a golden ticket to travel the world and talk with numerous interesting people and companies. Vertical farms, mushroom leather, coral propagation. Pete Buttigieg, Emmanuel Macron, Reid Hoffman, Grimes, footballer Kylian Mbappé. And everyone seems to be relieved to be able to talk about the promise of the future rather than the threat of it.

When I asked Wallach where or with whom he’d have liked to have spent a bit more time, he gave three answers. One, a professor in northern Japan who has a theatrical, but apparently quite effective, way of asking seniors to consider the future, by having them pretend they are visiting from it. Two, Lawrence Livermore National Lab, where the level of innovation and ambition was, he said, too high to express. And three, the “death doula” who helps people move past the anxiety of their own existence ending. (Although technology is often discussed, it’s far from the only topic.)

Image Credits: PBS

In case you’re wondering what moneyed special interest is trying to placate you with this beneficent presentation of a kindlier, wiser future… don’t worry, I asked. And the shadowy corporation behind this remarkably well-produced documentary is none other than the nefarious Public Broadcasting Service. Which means, as noted above, that it is not only free to stream on PBS.org, and on YouTube (I’ll add the first episode below as soon as it’s live), but it will also appear on normal, linear TV every Wednesday at 9 p.m. — “right after Nova.”

The general audience at which a show like this is aimed, Wallach reminded me, isn’t engaging on TikTok or often even streaming services. Millions, especially older folks who are not yet embittered to the promise of the future, turn on the TV after dinner to watch the local news, a network show and maybe a documentary like this one.

Wallach and his crew have also put together a classroom-specific version of the show that includes educational materials for following up with students about the topics covered.

“This will be the first nationwide futuring curriculum put into being, available to over 1.5 million teachers on the PBS education platform. That’s like 20 million kids. It’s cool. And it’s free.”

As a parting thought, Wallach noted the shows he grew up with, and how it’s “peak job” to be able to make something in emulation — though he was careful not to compare his to them — of classic shows like Cosmos, The Power of Myth and Connections.

“Cosmos changed how I think about the universe; The Power of Myth, how I think about faith, meaning, psychology; hopefully, A Brief History of the Future changes how folks think about futures and tomorrow. That’s the company that we wanted to be in.”