Biocel kombucha thread

Turns out you can use kombucha to make eco-friendly thread


Image Credits: Natasha Lomas/TechCrunch

It’s not every day you come across kombucha playing a starring role in potential industrial disruption. But here at 4YFN at MWC we got chatting to Laura Freixas about her PhD research project that’s using a base of the fermented hipster tea to “upcycle” organic waste into filaments.

Once processed, these biodegradable threads can be knitted into fabrics. They can also be treated to have different properties — such as elasticity or water resistance. Freixas had a selection of samples of knitted bio-filament on show, offering a glimpse of an eco-friendly alternative to materials like cotton or plastic essentially being brewed into existence.

Freixas is undertaking the project at the Barcelona School of Design and Engineering as part of the Elisava Research team. They’re aiming to commercialize the bio-filament — which they’re calling Biocel. “The aim is to bio-fabricate filaments from organic waste because we have seen several problems in the textile industry,” she said, highlighting the sector’s many challenges.

While multiple startups have been putting effort into developing eco-friendly leathers in recent years, including fungi-based biomaterials from the likes of Bolt Threads, Mycel and MycoWorks, Freixas says less attention has been paid to devising more environmentally friendly filaments for use in fabric production — despite the textile industry’s heavy use of chemicals, energy and water; major problems with pollution and waste; and an ongoing record of human rights violations linked to poor working conditions.

Unlike conventional fabric production, the methods involved in producing Biocel are not labor intensive and do not require lots of energy or harsh chemicals, per Freixas. “This filament is produced with low levels of thermal energy/electricity and no hazardous chemicals,” she told TechCrunch. “Then we obtain a biodegradable filament that we can functionalize, or give properties, to be more elastic, rigid or hydrophobic and make a textile application.”

As with making kombucha, the feedstock for producing the bio-filament needs to have some sugars for the bacteria to work their fermenting magic. Which means some agricultural waste will be better suited — such as grape waste (from wine production), or the cereals left over from brewing beer — owing to relatively high sugar content.

“Between 15% and 50% [of agricultural products] become waste when they are processed. Here we see an opportunity,” she said, pointing to rising regulatory requirements in the European Union aimed at cutting carbon emissions and promoting circularity that are shifting incentives. It could even lead to a situation in which industrial producers pay upcyclers to take their waste off their hands, she suggested.

“Regarding technology, we are building our machine to automatize and monitor the production,” she said. “So we are building a digital platform to control the production. And then we also have a patent pending method for the tension of the filaments.”

Future applications for the bio-filament could include weaving it into accessories such as shoes and bags for the fashion industry; making biodegradable netting for product packaging; or textiles for furniture, according to Freixas.

Currently, she said, the bio-filament is not ideal for use cases where the knitted material would be in direct contact with people’s skin, owing to its relatively rough texture, but she suggested more research could help finesse the finish as the team continues to experiment with applying different treatments.

“At this point what we are looking for is for a company that has a need — a real need — so we can develop an application together and put it in the market so we can validate and then scale it,” she added.



Europe eyes LinkedIn's use of data for ads in another DSA ask

View of main building with logo and signage at the headquarters of professional social networking company LinkedIn

Image Credits: Smith Collection/Gado / Getty Images

Microsoft-owned professional social network, LinkedIn, is the latest to get a formal request for information (RFI) from the EU. The Commission, which oversees larger platforms’ compliance with a subset of risk management, transparency and algorithm accountability rules in its ecommerce rulebook, the Digital Services Act (DSA), is asking questions about LinkedIn’s use of user data for ad targeting.

Of specific concern is whether LinkedIn is breaching the DSA’s prohibition on larger platforms’ use of sensitive data for ad targeting.

Sensitive data under EU law refers to categories of personal data such as health information, political, religious or philosophical views, racial or ethnic origin, sexual orientation and trade union membership. Profiling based on such data to target ads is banned under the law.

The regulation also requires larger platforms (aka VLOPs) to provide users with basic information about the nature and origins of an ad. They must also make an ads archive publicly available and searchable — in a further measure aimed at driving accountability around paid messaging on popular platforms.

In a press release announcing the RFI Thursday, the Commission wrote that it’s asking for “more details on how their service complies with the prohibition of presenting advertisements based on profiling using special categories of personal data”. It also flagged LinkedIn’s requirement to provide users with ad targeting info.

LinkedIn has been given until April 5 to respond to the RFI.

Reached for a response to the Commission’s action, a LinkedIn spokesperson responded by email — stating: “LinkedIn complies with the DSA, including its provisions regarding ad targeting. We look forward to cooperating with the Commission on this matter.”

The RFI represents an early stage in a potential DSA enforcement procedure — suggesting the EU has found issues which are prompting it to ask questions about how LinkedIn adheres to the ban on sensitive data for ads but hasn’t yet established preliminary concerns which would lead it to open a formal investigation. Such a step may follow, though, if it’s not satisfied with the answers it gets.

Compliance is serious business as confirmed violations of the DSA can attract fines of up to 6% of global annual turnover. The DSA also empowers the EU to impose fines for incorrect, incomplete, or misleading information in response to an RFI.

The Commission said its RFI to LinkedIn follows a complaint by civil society organizations, EDRi, Global Witness, Gesellschaft für Freiheitsrechte and Bits of Freedom, back in February — which called for “effective enforcement of the DSA”. 

LinkedIn isn’t the only platform to be in the EU’s spotlight when it comes to use of data for ads. Earlier this month, Meta, the owner of Facebook and Instagram, received an RFI from the Commission asking for more details about how it complies with the DSA’s requirement that use of people’s data for ads needs explicit consent.

A number of other RFIs have also been fired at VLOPs by the EU since the regulation began to apply to them in August last year.

The Commission has said its enforcement is prioritizing action on illegal content/hate speech, child protection, election security and marketplace safety.

Earlier today it announced its first formal investigation of a marketplace, Alibaba’s AliExpress, citing a long list of suspected violations. It also has two open probes of the social media sites X and TikTok, covering concerns such as illegal content and risk management, as well as content moderation practices and transparency.

Add to that, today the Commission dialed up scrutiny of how tech giants are responding to risks related to generative AI, such as political deepfakes — sending a bundle of RFIs, including with an eye on the upcoming European Parliament elections in June.


Anthropic now lets kids use its AI tech — within limits


Image Credits: Anthropic

AI startup Anthropic is changing its policies to allow minors to use its generative AI systems — in certain circumstances, at least. 

In a post on its official blog Friday, Anthropic said it will begin letting teens and preteens use third-party apps (but not its own apps, necessarily) powered by its AI models, so long as the developers of those apps implement specific safety features and disclose to users which Anthropic technologies they’re leveraging.

In a support article, Anthropic lists several safety measures devs creating AI-powered apps for minors should include, like age verification systems, content moderation and filtering, and educational resources on “safe and responsible” AI use for minors. The company also says that it may make available “technical measures” intended to tailor AI product experiences for minors, like a “child-safety system prompt” that developers targeting minors would be required to implement. 
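To make the “child-safety system prompt” idea concrete, here is a minimal sketch of how a third-party developer might compose a request to Anthropic’s Messages API that layers in such a prompt behind an age gate. The `CHILD_SAFETY_PROMPT` text and the `build_minor_request` helper are hypothetical illustrations, not Anthropic’s actual requirements or wording.

```python
# Hypothetical sketch: composing an API request for an under-18 user that
# prepends a child-safety system prompt. Names here are illustrative only.

CHILD_SAFETY_PROMPT = (
    "You are talking to a minor. Keep answers age-appropriate, "
    "decline unsafe topics, and suggest involving a trusted adult when needed."
)

def build_minor_request(user_message: str, verified_age: int) -> dict:
    """Return keyword arguments for a Messages API call (client.messages.create(**kwargs))."""
    if verified_age >= 18:
        raise ValueError("use the standard request path for adults")
    return {
        "model": "claude-3-haiku-20240307",
        "max_tokens": 512,
        "system": CHILD_SAFETY_PROMPT,  # safety instructions ride along on every call
        "messages": [{"role": "user", "content": user_message}],
    }

req = build_minor_request("Help me prep for my algebra test", verified_age=13)
print(req["system"].startswith("You are talking to a minor"))  # True
```

The point of routing every minor-facing call through one builder is that the safety prompt cannot be accidentally omitted by an individual feature.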

Devs using Anthropic’s AI models will also have to comply with “applicable” child safety and data privacy regulations such as the Children’s Online Privacy Protection Act (COPPA), the U.S. federal law that protects the online privacy of children under 13. Anthropic says it plans to “periodically” audit apps for compliance, suspend or terminate the accounts of those who repeatedly violate the compliance requirement, and mandate that developers “clearly state” on public-facing sites or documentation that they’re in compliance. 

“There are certain use cases where AI tools can offer significant benefits to younger users, such as test preparation or tutoring support,” Anthropic writes in the post. “With this in mind, our updated policy allows organizations to incorporate our API into their products for minors.”

Anthropic’s change in policy comes as kids and teens are increasingly turning to generative AI tools for help not only with schoolwork but personal issues, and as rival generative AI vendors — including Google and OpenAI — are exploring more use cases aimed at children. This year, OpenAI formed a new team to study child safety and announced a partnership with Common Sense Media to collaborate on kid-friendly AI guidelines. And Google made its chatbot Bard, since rebranded to Gemini, available to teens in English in selected regions.

According to a poll from the Center for Democracy and Technology, 29% of kids report having used generative AI like OpenAI’s ChatGPT to deal with anxiety or mental health issues, 22% for issues with friends and 16% for family conflicts.

Last summer, schools and colleges rushed to ban generative AI apps — in particular ChatGPT — over fears of plagiarism and misinformation. Since then, some have reversed their bans. But not all are convinced of generative AI’s potential for good, pointing to surveys like the U.K. Safer Internet Centre’s, which found that over half of kids (53%) report having seen people their age use generative AI in a negative way — for example creating believable false information or images used to upset someone (including pornographic deepfakes).

Calls for guidelines on kid usage of generative AI are growing.

The UN Educational, Scientific and Cultural Organization (UNESCO) late last year pushed for governments to regulate the use of generative AI in education, including implementing age limits for users and guardrails on data protection and user privacy. “Generative AI can be a tremendous opportunity for human development, but it can also cause harm and prejudice,” Audrey Azoulay, UNESCO’s director-general, said in a press release. “It cannot be integrated into education without public engagement and the necessary safeguards and regulations from governments.”

Google TalkBack will use Gemini to describe images for blind people

Image Credits: Google

Google announced that Gemini Nano capabilities are coming to its accessibility feature, TalkBack. This is a great example of a company using generative AI to open its software to more users.

Gemini Nano is the smallest version of Google’s large-language-model-based platform, designed to be run entirely on-device. That means it doesn’t require a network connection to run. Here the program will be used to create aural descriptions of objects for low-vision and blind users.

In Google’s demo pop-up, TalkBack refers to the article of clothing as, “A close-up of a black and white gingham dress. The dress is short, with a collar and long sleeves. It is tied at the waist with a big bow.”

According to the company, TalkBack users encounter around 90 unlabeled images per day. Using LLMs, the system will be able to offer insight into content, potentially forgoing the need for someone to input that information manually.
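The flow Google describes amounts to a fallback: use a developer-supplied label when one exists, and only generate a caption when the image is unlabeled. A toy sketch of that logic, where `generate_caption` is a stub standing in for an on-device Gemini Nano call (the function names are ours, not Google’s):

```python
# Toy illustration of a TalkBack-style fallback: prefer human-provided alt
# text, fall back to a model-generated caption for unlabeled images.

from typing import Optional

def generate_caption(image_bytes: bytes) -> str:
    # Stub: a real implementation would run an on-device vision model.
    return "A close-up of a black and white gingham dress."

def describe_image(alt_text: Optional[str], image_bytes: bytes) -> str:
    if alt_text and alt_text.strip():
        return alt_text  # developer-supplied label wins
    return generate_caption(image_bytes)  # model fills the gap

print(describe_image("Profile photo", b"..."))  # Profile photo
print(describe_image(None, b"..."))             # model-generated description
```

The design keeps model inference off the hot path whenever metadata already exists, which matters for battery and latency on-device.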

“This update will help fill in missing information,” Android ecosystem president, Sameer Samat, noted, “whether it’s more details about what’s in a photo that family or friends sent or the style and cut of clothes when shopping online.”

The feature will arrive on Android later this year. Assuming it works as well as it does in the demo, this could be a game changer for blind people and those with low vision.

We’re launching an AI newsletter! Sign up here to start receiving it in your inboxes on June 5.

Google will use Gemini to detect scams during calls

Image Credits: Google

For a few years now, carriers have been using lists to alert users to potential spam and scam calls as they come in. These systems are hardly foolproof. So what happens once a user picks up? At the Google I/O 2024 developer conference on Tuesday, Google previewed a feature it believes will alert users to potential scams during the call. 

The feature, which will be built into a future version of Android, uses Gemini Nano, the smallest version of Google’s generative AI offering, which can be run entirely on-device. The system effectively listens for “conversation patterns commonly associated with scams” in real time. 

Google gives the example of someone pretending to be a “bank representative.” Common scammer tactics like password requests and gift card demands will also trigger the system. These are well-understood ways of extracting money from victims, yet plenty of people remain vulnerable to such scams. Once triggered, the feature pops up a notification warning the user that they may be falling prey to unsavory characters. 
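The shape of the system, detect a known scam pattern in the live transcript and then surface a warning, can be illustrated with a deliberately simple keyword matcher. To be clear, Google’s feature runs an on-device model (Gemini Nano) over conversation patterns, not a regex list; this sketch only shows the detect-then-warn structure, and the pattern list is invented.

```python
# Deliberately simplified illustration of "listen for scam patterns, then
# warn the user". A real system would use an on-device language model.

import re

SCAM_PATTERNS = [
    r"\b(give|tell|confirm)\b.*\bpassword\b",  # password requests
    r"\bgift cards?\b",                        # gift card demands
    r"\bwire\b.*\bimmediately\b",              # urgent transfer pressure
]

def looks_like_scam(utterance: str) -> bool:
    text = utterance.lower()
    return any(re.search(pattern, text) for pattern in SCAM_PATTERNS)

print(looks_like_scam("Please confirm your password to secure the account"))  # True
print(looks_like_scam("Dinner at seven works for me"))                        # False
```

A keyword list like this is brittle (scammers rephrase), which is exactly why a model that generalizes over “conversation patterns” is the more plausible on-device approach.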

No specific release date has been set for the feature. As with many of its announcements, Google is previewing what Gemini Nano will be able to do at some point down the road. We do know, however, that the feature will be opt-in. 

That’s a good thing. While the use of Gemini Nano means the system won’t be automatically uploading to the cloud, the system is still effectively listening to your conversations. It’s the kind of thing that makes the hairs stand up on the back of privacy advocates’ necks. 

However, being opt-in may also mean that some of the people who can benefit the most from such a feature might never tick that box.



Berlin-based trawa raises €10M to use AI to make buying renewable energy easier for SMEs


Image Credits: Trawa founders (photograph by Marzena Skubatz)

The brutal invasion of Ukraine by Russia in February 2022 took businesses that depended on oil and gas by surprise. Suddenly, renewable energy became crucial to survival. But how best to buy it?

That was the germination of the idea behind trawa, a Berlin-based renewable energy supplier that recently raised €10 million in a seed round led by Balderton Capital. The funding round brings the startup’s total capital raised to more than €12 million.

Trawa’s pitch is that it simplifies energy purchasing and management for small and medium-sized enterprises (SMEs) by leveraging two things: an AI-powered platform that lets businesses buy from renewable energy sources, and downstream data from the customers themselves about when they need energy most. 

Europe’s ongoing energy crisis has seen electricity prices spike to two to three times higher than in the U.S. Higher prices have also impacted manufacturing in the Eurozone, which has been in decline for more than a year, and German industry is expected to shrink by 1.5% this year due to higher energy prices and interest rates. 

Renewable energy can help businesses alleviate some of those pains, but even though many companies want to switch to green energy sources, the complexity of defining green energy and securing a constant supply remains problematic.

Trawa’s co-founder and CEO David Budde hit on the idea of using AI to streamline green energy supply while he was at Bain & Company. He realized economic problems and sustainability regulation were hitting businesses at the same time. 

“Prices skyrocketed, volatility increased and their core business was being hit. All of a sudden, their products were no longer profitable because the energy costs were rising so fast,” he told TechCrunch.

“At the same time, the European Commission and the German government were pushing further and stronger regulation. Now, businesses had to deal with both. In the past few years, if you wanted green electricity, it meant having to pay a premium. That’s exactly where we come in.”

Budde said trawa gives SMEs, which generally do not have procurement expertise in energy, the tools to structure their energy purchasing. trawa’s AI then creates an optimal combination of power from different products to match the buyer’s consumption patterns. The idea is that trawa can buy electricity in installments at staggered times, yielding significant cost savings. 
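The cost-saving logic of buying in installments is simple volume-weighted averaging: splitting a purchase across several dates means the buyer pays the blended price rather than whatever the market happens to charge on a single (possibly peak) day. A back-of-the-envelope sketch with invented figures (this is our arithmetic illustration, not trawa’s model):

```python
# Illustrative arithmetic: staggered tranche purchases vs. one lump purchase.
# All volumes and prices below are made up for the example.

def blended_price(tranches: list[tuple[float, float]]) -> float:
    """tranches: (megawatt_hours, eur_per_mwh) pairs; returns volume-weighted EUR/MWh."""
    volume = sum(mwh for mwh, _ in tranches)
    cost = sum(mwh * price for mwh, price in tranches)
    return cost / volume

# Buying 300 MWh in three 100 MWh tranches at different market prices,
# versus locking in the whole volume at a single spike price of 120 EUR/MWh:
staggered = blended_price([(100, 90.0), (100, 120.0), (100, 105.0)])
print(f"{staggered:.1f} vs 120.0 EUR/MWh")  # 105.0 vs 120.0 EUR/MWh
```

Staggering does not guarantee savings (prices could keep falling after an early tranche); it reduces exposure to buying the entire volume at the worst moment.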

trawa’s management software also allows companies to factor in their own rooftop solar systems or batteries. The startup claims the combination of AI-powered purchasing and management software can let companies save up to 30% of their energy costs a year. 

The startup already has a few industrial customers in the DACH region, including textile manufacturer SETEX-Textil, Amano Hotel Group, solar energy company Sunmaxx, logistics company Loxxess and automotive supplier Coroplast Group.

“In the face of the climate crisis and volatile energy pricing market, renewable energy is a way for companies to take control of their energy security. trawa offers companies a bespoke solution for energy procurement, shielding SMEs from price explosions, helping them make the most of investments in assets like smart batteries and solar power and providing granular data for ESG reporting,” James Wise, general partner at Balderton Capital, said in a statement.

German climate tech investor AENU also participated in the round, alongside previous investors Speedinvest, Magnetic and Tiny VC.

Praktika raises $35.5M to use AI avatars to make learning languages feel more natural


Image Credits: Praktika app

Most apps that help you learn languages have features where you select options or swipe away wrong answer cards — you’re more or less interacting with a machine. Language-learning app Praktika is adopting a different approach: It lets you create personalized AI-powered avatars to replicate the experience of having a private tutor, leveraging inflections like tone of voice and emotions to help make learning a language feel more natural.

Praktika claims to have 1.2 million active monthly users across 100 countries and said it generated revenue of almost $20 million in the last 12 months. To keep growing, the startup has now secured a $35.5 million Series A funding round led by Blossom Capital. The round follows a previously unannounced $2.5 million seed fundraise that was led by Creator Ventures and Blue Wire Capital. 

Praktika’s users interact with AI avatars that “tailor” lessons for them and can speak with several accents, such as American, British, Asian and Indian. The more the learner interacts with the avatar, the more tailored the lessons become — at least, that’s the idea.

The company’s founding team, Adam Turaev (CEO), Anton Marin (CTO) and Ilya Chernyakov (CPO), previously built Cleverbots, an AI service business that counted companies like Coca-Cola, Kimberly-Clark, and AstraZeneca among its clients. 

“Most language learning apps are all about human interaction, with a human tutor. Or it is ‘machine-to-human’ interaction involving clicks and drag and drops,” Turaev told TechCrunch. “But we are the only app out there about tone of voice, where you mimic human-to-human interaction. We were the first to master this AI avatar approach, which is very natural to language learning. So that’s what really makes it different from any other app that you can see on the market right now.”

When asked how the startup is using AI, he said, “We orchestrate different LLMs, but we’re an AI-native company. We have used GPT-4, GP Turbo, Gemini, Claude, and Mistral. We experiment with different versions of their models as well. We gathered a lot of training data, and the app still learns. We have terabytes of this human-to-AI interaction data. We use anonymized data to reinforce the models.”

“Praktika’s founding team is bringing its deep knowledge of AI to create a fun, affordable way to learn languages with personalized AI tutors. For too long, other learning apps have taken students for granted and shortchanged them. The team’s determination to build a global challenger has translated into one of the fastest-growing early-stage consumer AI companies globally,” Ophelia Brown, managing partner of Blossom Capital, said in a statement.

Both the seed round and the recent Series A saw participation from prominent figures like Carles Reina (ElevenLabs) and Patrice Evra (five-time Premier League champion).

Sasha Kaletsky, managing partner at Creator Ventures, added in a statement: “Learning a language is a basic human experience, and the Praktika team has successfully injected this human-like element into the product using AI … Over a million learners worldwide are improving their English with Praktika already, and this is just the start.”