X is building a 'dislike' button for downvoting replies

An illustration of a phone and the X logo.

Image Credits: Bryce Durbin/TechCrunch

Elon Musk’s X, formerly Twitter, is continuing to develop a downvoting feature that will be used to improve how replies are ranked. Although the company has not yet officially announced its plans, more recent findings indicate the downvote feature may actually resemble a “dislike” button instead of a Reddit-style downvote icon. Code references found in the X iOS app now show a button that appears as a broken heart icon next to X’s heart-shaped “like” button as well as direct references to a “downvote” feature.

As Twitter, the company tested downvoting in 2021, ahead of Elon Musk’s acquisition. At the time of the original experiment, however, Twitter tested both upvoting and downvoting buttons across all posts. The latest tests indicate that X is only considering allowing downvotes on replies, to help surface the better replies at the top of a long thread while pushing less-liked replies further down. That could discourage users from posting content designed specifically to anger people in order to generate dislikes as a form of engagement.
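X has not said how dislikes would actually feed into ranking, but the basic idea — surface well-received replies, demote widely disliked ones — can be sketched as a simple net-score sort. Everything below (the `Reply` class, the scoring formula, the sample data) is an illustrative assumption, not X’s algorithm:

```python
from dataclasses import dataclass

@dataclass
class Reply:
    text: str
    likes: int = 0
    dislikes: int = 0

def net_score(reply: Reply) -> int:
    # Hypothetical stand-in: net approval is likes minus dislikes.
    return reply.likes - reply.dislikes

def rank_replies(replies: list[Reply]) -> list[Reply]:
    # Better-received replies float to the top of the thread;
    # heavily disliked replies sink toward the bottom.
    return sorted(replies, key=net_score, reverse=True)

replies = [
    Reply("rage bait", likes=5, dislikes=40),
    Reply("helpful answer", likes=30, dislikes=2),
    Reply("mild joke", likes=10, dislikes=3),
]
ranked = rank_replies(replies)
print([r.text for r in ranked])  # ['helpful answer', 'mild joke', 'rage bait']
```

Under a scheme like this, dislike-farming stops paying off: engagement through outrage pushes a reply down rather than up.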

Earlier this month, reverse engineer Aaron Perris, @aaronp613 on X, found references in X’s iOS app that referenced a downvote feature that appeared to be in development. He’s now found additional image files in the iOS app that indicate the button could be styled as a broken heart as well as more direct references to the feature itself.

In screenshots shared on X, Perris found that X’s app now includes several newly added references to a “downvoting” function, as well as strings of text that ask the user to confirm their downvote. For example, one reads “Do you want to downvote this post?” while another simply instructs the user to “Downvote this post.”

Given the wording — which references “posts” and not just “replies” — it’s not clear whether X is now considering bringing a downvote feature to all posts on the platform or just to replies.

Another user, @P4mui on X, also shared videos of the dislike button in action, including one where a user asked them to dislike their reply to the post. The user, who had enabled the dislike button using a feature flag, additionally noted that the button was only available on replies for the time being, but they weren’t sure if that would later change.

The dislike button was also reportedly spotted on the account of an X employee who had shared a video demo of a new way to expand replies. That post was quickly deleted and reposted without the dislike button in view.

Given the growing number of sightings, it seems likely that more public tests of a dislike button are underway.

This isn’t the only change X has made to its “likes” system under Musk’s ownership. More recently, X began to hide likes from public view, allowing people, as Musk put it, to like more “edgy” content and protect their image.

TigerBeetle is building database software optimized for financial transactions

Image Credits: Bortonia (opens in a new window) / Getty Images

After doing some consulting for Microsoft to develop protections against zero-day exploits, software engineer Joran Dirk Greef worked with Coil, a web monetization startup in San Francisco, to help build its payments infrastructure. At the time, Coil was using a traditional database to store and process transactions. But Greef had the insight that a specialized database could prove to be much more nimble — and powerful.

The idea morphed into a skunkworks project at Coil, and Greef became a staff engineer working full-time on a new database design called TigerBeetle. Two years into the project, after customers started requesting enterprise support for the database, Greef spun out TigerBeetle as a startup.

TigerBeetle’s open source database is engineered for financial online transaction processing, Greef says, capable of handling more than 8,000 debit and credit transactions in a single query. One query for 8,000 transactions might not sound like a lot. But most general-purpose databases would require 1 to 10 queries per transaction. And more queries translate to more latency — especially if the database is hosted on a remote server somewhere.
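The arithmetic behind that claim is easy to sketch. Assuming a 1 ms network round trip per query (an illustrative figure, not one from TigerBeetle) and sequential queries, processing 8,000 transactions one query at a time costs three orders of magnitude more wall-clock time than a single batched query:

```python
import math

def per_txn_latency_ms(transactions: int, rtt_ms: float,
                       queries_per_txn: float) -> float:
    # General-purpose database: every transaction pays its own round trip(s).
    return transactions * queries_per_txn * rtt_ms

def batched_latency_ms(transactions: int, rtt_ms: float,
                       batch_size: int) -> float:
    # Batched design: one round trip is amortized over the whole batch.
    return math.ceil(transactions / batch_size) * rtt_ms

txns, rtt = 8_000, 1.0  # illustrative 1 ms round trip to a remote server
print(per_txn_latency_ms(txns, rtt, queries_per_txn=1))  # 8000.0 ms
print(batched_latency_ms(txns, rtt, batch_size=8_000))   # 1.0 ms
```

With 10 queries per transaction — the upper end of the range cited above — the per-transaction approach would cost 80 seconds of round trips for the same workload.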

“TigerBeetle is a financial transactions database that provides debit/credit primitives out of the box and enforces financial consistency in the database without requiring a developer to cobble together a system of record from scratch,” Greef said.

“TigerBeetle is ideal for use cases where you need to count anything of value — not necessarily money, but including money — moving from one person or place to another,” Greef said. A common application is an internal ledger for a company like Transferwise, he added, which has to keep track of lots of money moving between accounts.

Spinning out TigerBeetle was a wise decision in hindsight. TigerBeetle recently closed a $24 million Series A round led by Spark Capital’s Natalie Vais with participation from Amplify Partners and Coil, bringing its total raised to more than $30 million. A source familiar with the matter tells TechCrunch that TigerBeetle is valued at around $100 million post-money.

“We had planned to raise later in the year,” Greef said. “However, after a surge in community growth at the beginning of 2024, and growing commercial interest, we decided to bring the raise forward to invest in engineering, go-to-market and TigerBeetle’s cloud platform, which is under development.”

TigerBeetle, which only has eight employees at present and plans to double the size of its team by 2025, provides its database technology in the form of a managed service. Greef claims that TigerBeetle has had paying customers “since day one” and that the TigerBeetle community — folks using or contributing to the open source release — has grown over 200% year-over-year.

Vais told TechCrunch that TigerBeetle is one of the more exciting database projects that she’s seen recently.

“TigerBeetle rethinks every component from the ground up to handle modern transactional workloads,” she said. “In a world where everything is becoming more online and more transactional, there’s a huge opportunity for TigerBeetle to become a foundational piece of infrastructure for modern systems of record.”

TigerBeetle’s managed service is currently available by invitation only, and the database just reached its first production release in March. But Greef says that growth — in particular acquiring new customers — will be the focus for the foreseeable future.

“TigerBeetle’s use cases extend beyond fintech,” he continued. “Think usage-based billing with real-time spend caps, gaming live ops and energy smart meters, as well as instant payments, core banking, brokering, inventory, shopping carts, trucking and shipping, warehousing, crowdfunding, voting and reservation systems.”

Building a strong startup developer culture requires constant adjustment

Overhead shot of team of startup developers working together at a table.

Image Credits: vgajic / Getty Images

Most tech startups are born from a few early engineers building the company’s initial product. As those first builders work together, they begin to establish a developer culture — sometimes deliberately, sometimes not.

At Web Summit in Lisbon in November, two founders discussed the importance of building a developer culture that’s distinct from a company’s overall culture.

According to Shensi Ding, co-founder and CEO at Merge, a unified API startup, early developer ethos is particularly important inside tech startups, where engineers ultimately control how the product gets built and what gets prioritized. She says her co-founder, CTO Gil Feig, worked to set a positive tone from the start that empowered the team.

“He really instilled in us early on that engineers can, from the very beginning, decide that we can do anything. It just depends on how much time you want to allocate to [a particular task]. And we really wanted to instill that in the developer culture early on,” she said.

Ludmila Pontremolez, CTO and co-founder at Zippi, a Brazilian fintech startup, spent time as an engineer at Square prior to launching Zippi. She wanted to build a team-focused atmosphere: Regardless of who wrote the code, everyone is responsible for it. “Every mistake everyone makes is everyone’s responsibility,” she said. “When there’s something broken in production, Sunday at 1 a.m., it’s probably not the person who wrote the code that’s going to fix it, but whoever is in charge of looking after the servers at the time.”

“So we instill a lot of that communal thinking and how we’re writing code for everyone on the team. It’s not just about whether you should be building something faster. It is about what’s the legacy you’re leaving behind.”

Chronic technical debt could be holding your company back

Ding said that as she and Feig were engineers prior to building Merge, they wanted to create an atmosphere at their company where it was encouraged to take individual initiative and where all engineers owned the product collectively. “Early on, one thing that we really tried to do was lead by example. . . . We would set expectations, like you can just make these decisions for us. And we would also talk a lot about what the impact of what they were doing was because we really wanted to set that foundation early on that they weren’t going to be just task receivers and doers. They were going to be an integral part of the company,” she said.

A big piece of creating that culture is making sure that the group is diverse, and the earlier that can happen the better. Ding said, especially as a woman, that she wanted to deliberately create a tolerant and accepting environment.

“I’ve definitely been in companies where as a young woman, you’ll say something and you feel like people think you’re stupid or they don’t respect what you’re saying,” she said. “And I really want to make sure that everyone in the company is creating a great environment for young women [and others] to feel comfortable saying whatever they want.”

When hiring, it’s important to source and recruit diverse candidates, but that in itself isn’t enough, Ding said. “You can’t bring people in and expect them to change the environment. You have to have a great environment for them in the first place,” she said.

As a startup grows, and other areas of the business begin to take shape, the engineering team becomes a more distinct group, and as such has to define its own culture within the broader organization. Part of that is building its own ways to drive more efficient work.

Whatever workflows are put in place, adjustments will have to be made continually as the group grows. “I don’t know if I have a perfect answer for [finding a single way of working]. I’m sure [our processes] break all the time,” she said. So they keep trying new ways of working, knowing that at some point, it’s going to need to be adjusted again. “I think it’s really hard for companies to find a [single] process that will scale with them over time.”

Pontremolez says that her company recently wrote down its development values and what it believes in a software engineering manifesto to help new engineers understand how they fit in when they come on board. But she says that at their stage she doesn’t necessarily want to put hard and fast rules in place. “I think the toughest part about creating processes is separating individuals doing things that are not what you desire and not turning those episodes into rules that apply to everyone.”

For example, she said that if someone doesn’t properly deploy code, you don’t want to necessarily make a rule that developers can no longer deploy independently, something she would consider an overreaction to a single incident. It’s important to balance the needs of the overall company, especially at an early-stage startup, with whatever rules and processes you put in place.

But Pontremolez says that as you add more people, it’s important to help them “connect the dots” between engineering and what the startup is trying to accomplish, so they understand how their contributions fit into the overall goals of the organization.

What’s clear is that building a developer culture doesn’t ever really end. As the company grows and moves through different stages, the developer group will have to adjust to each new reality, and the founders or other management will have to help facilitate that.

BlueLayer is building the operating system for carbon project developers

Image Credits: Baac3nes / Getty Images

Meet BlueLayer, a new European startup that is building a software platform specifically designed for carbon project developers. In particular, BlueLayer wants to help these companies manage their carbon credits at scale.

While the best way to prevent carbon emissions is by decarbonizing supply chains, carbon credits will also play a role in the coming years as a supplement to these decarbonization efforts.

“We think that about 70 to 80% of emissions today can be reduced by actions directly within industrial and general goods supply chains. But there are things that we’re going to have to do that are not within the supply chain of companies,” co-founder and COO Vivian Bertseka told me.

And this is where carbon credits come in. This system of carbon offsetting with carbon project developers is sometimes also called beyond value chain mitigation.

The startup focuses specifically on that area and recently raised a $5.6 million seed round led by Point Nine. Before that, the company had already raised a pre-seed round from several angel investors and industry experts, bringing BlueLayer’s total amount raised to $10 million.

BlueLayer acts as the software back end for carbon project developers. It helps those companies when it comes to managing carbon credits as it acts as the single source of truth for inventory and orders with the ability to generate reports, leverage data visualization tools and eventually make decisions that could potentially maximize revenue.

BlueLayer aims to focus exclusively on project developers. They could be working on various projects, such as reforestation, forest conservation, peatland restoration and more. The startup also picked a traditional SaaS revenue model instead of charging commissions on carbon credit sales.

“We’re not a broker; we’re not an intermediary. So we don’t take big commissions on sales and that’s because we need to be aligned with one side of the ecosystem, which is the developers,” co-founder and CEO Alexander Argyros told me.

Right now, carbon project developers need better tools to manage their existing projects, as well as integrations with other stakeholders. BlueLayer can also connect with legacy ERP systems, distribution channels and rating agencies. In the future, developers may have other needs, such as pre-financing future projects with forward contracts and pre-allocating carbon credits.

You might think that there is a limited pool of potential clients, but BlueLayer has already held talks with over 200 carbon project developers.

Headquartered in Berlin with teams in London and Athens, BlueLayer was co-founded by Alexander Argyros, who previously co-founded PE investment platform Moonfare; Vivian Bertseka, who was a climate investor with Generation Investment Management and a founding partner at Just Climate; and Gerardo Bonilla, Moonfare’s former head of product.

Kore.ai, a startup building conversational AI for enterprises, raises $150M

Earth (focus on Europe) represented by little dots, binary code and lines

Image Credits: Getty Images

In the midst of a wave of tech industry layoffs, it’s heartening to see some startups succeeding despite the dour market outlook.

Kore.ai, a company developing enterprise-focused conversational AI and GenAI products, today announced that it raised $150 million in a funding round led by FTV Capital, with participation from Nvidia, Vistara Growth, Sweetwater PE, NextEquity, Nicola and Beedie. Bringing the company’s total raised to ~$223 million, the new cash will be put toward product development and scaling up Kore.ai’s workforce, co-founder and CEO Raj Koneru told me in an interview.

Koneru started Kore.ai in 2014 after launching Kony, a mobile app development startup, and several other small companies, including iTouchPoint (an outsourcing firm) and Intelligroup (a tech consultancy). He says he was inspired to found Kore.ai after seeing the potential of AI, particularly large language models (LLMs) along the lines of OpenAI’s ChatGPT, to transform user experiences.

“With the introduction of GenAI and LLMs, the tech landscape turned out to be very chaotic and uncertain due to rapid advancements,” Koneru said via email. “There were more questions than answers … but I saw conversational AI and LLMs as an opportunity to innovate.”

GenAI being a newer discipline, Kore.ai wasn’t developing GenAI products in 2014 per se. But Koneru says that the company was laying the foundations for GenAI products to come — investing heavily in text-generating and text-analyzing models.

So how’s Kore.ai innovating? Well, as Koneru describes it, the startup provides a no-code platform to help companies power various “business interactions” via AI — essentially any customer-to-employee or employee-to-employee interaction over the phone or text (think support chats with an IT/HR service desk). Kore.ai offers workflows and tools designed to give companies in industries such as banking, healthcare and retail the ability to create custom conversational AI apps or deploy pre-built, “domain-trained” chatbots.

“Kore.ai’s platform encompasses intelligent virtual agent, contact center AI, agent AI and search and answer capabilities for all kinds of customer experience and employee experience use cases,” Koneru said. “In addition, Kore.ai’s array of industry and horizontal solutions address the needs of specific industries and enterprise functions.”

But aren’t there lots of vendors building GenAI- and LLM-powered solutions for search, question-answering and the other sorts of applications Kore.ai advertises support for? Indeed, there are.

See Arcee, which hosts a platform for building corporate GenAI apps, and Giga ML, which offers tools to help companies deploy LLMs offline. Reka and Contextual AI both recently emerged from stealth to help create custom AI models for organizations, while Fixie is crafting tools to make it easier for companies to code on top of LLMs.

What Kore.ai does differently, Koneru asserts, is offer greater flexibility in where companies can deploy their AI apps — in the cloud, locally or in virtual machines — and in the degree to which they can fine-tune those apps. For certain applications (e.g., text summarization, finding and generating answers, topic discovery and sentiment analysis), Koneru makes the case that fine-tuned models — Kore.ai’s specialty — are superior to the larger, more powerful models available from vendors like Anthropic and OpenAI, as well as more cost-effective.

There’s a privacy argument to be made, too, for smaller, offline models.

A 2023 Predibase survey found that more than 75% of enterprises don’t plan on using commercial, cloud-hosted LLMs in production over fears that the models will compromise sensitive info. In a separate poll from GenAI platform Portal26 and data research firm CensusWide, 85% of businesses said that they’re concerned about GenAI’s privacy and security risks.

Creating a GenAI or conversational AI workflow using Kore.ai’s web tooling. Image Credits: Kore.ai

“Over the past 18 months, we’ve observed that fine-tuned models are very effective compared to pre-trained models for specific enterprise use cases,” Koneru said. “Compared to a large pre-trained model, it takes less than 2% of the enterprise data to train and create a fine-tuned model that companies can deploy safely for enterprise use cases. We’ve successfully built smaller enterprise LLMs that provide higher efficiency, better accuracy, the ability to control responses and — most importantly — reduce latency and cost.”

Also unlike some rivals, Kore.ai offers ways for organizations to scale up their AI as needed, Koneru says, and expand their use of AI into new and diverse domains.

“Kore.ai sits above the infrastructure and fragmentation of all the LLM layers with a platform-driven approach, offering freedom of choice with built-in guardrails for effective AI implementation,” Koneru added.

Now, the extent to which these capabilities are truly differentiating is subject to debate. Vendors like Google Cloud, Azure and AWS offer robust scaling solutions for conversational AI and GenAI apps, and Kore.ai isn’t the only platform to let customers deploy models in a range of local and cloud compute environments.

But — whether on the strength of its platform, nearly 1,000-person workforce, marketing campaign or all three — Orlando, Florida-based Kore.ai has established an impressive foothold in the competitive AI field. The company’s customer base eclipsed 400 brands (including PNC, AT&T, Cigna, Coca-Cola, Airbus and Roche) last year, and its annual recurring revenue now stands north of $100 million — thanks to income from licensing and usage fees in addition to consulting services.

It probably helps that funding for GenAI startups of all stripes remains strong. According to a recent survey from GlobalData, the London-based data analytics and consulting firm, GenAI startups raised a record $10 billion in 2023 — a 110% increase compared to 2021.

The question is whether the growth is sustainable, given that GenAI isn’t a home run in the enterprise — at least not yet. Koneru argues that it is, pointing to surveys like Gartner’s from last October, which found that 55% of organizations are already piloting or deploying GenAI tech into production for functions such as customer service, marketing and sales.

“We haven’t observed any slowdown in the market,” Koneru said. “The most pressing challenge [we’re facing] is to operate and innovate in a market that’s not just seen rapid growth but also disruption driven by advancements in technology, changing user expectations and a broader integration of newer AI capabilities that are evolving each day. Enterprise players need to take advantage of the benefits of technology while avoiding security, privacy and compliance pitfalls.”

Added FTV Capital’s Kapil Venkatachalam in a statement: “While the advanced AI market has experienced rapid growth in recent years, many enterprises are grappling with how to responsibly and effectively deploy AI across their organizations. We were impressed with Kore.ai’s open platform approach for leveraging AI models, scalability, vertical specific out-of-the-box applications and low-code no-code capabilities, making them well-positioned to take advantage of the growing demand from global brands looking for innovative AI solutions to enhance business interactions and drive value.”

Daedalus, which is building precision-manufacturing factories powered by AI, raises $21M

A fledgling startup founded by one of OpenAI’s first engineering hires is looking to “redefine manufacturing,” with AI-powered factories for creating bespoke precision parts.

Daedalus, as the company is called, is based in the southwestern German city of Karlsruhe, where its sole factory is currently housed. Here, Daedalus takes orders from industries such as medical devices, aerospace, defense and semiconductors, each of which requires unique components for its products. For example, a pharmaceutical company might require a customized metal casing for a valve used in the production of a particular medicine.

As it looks to ramp up operations with a view toward opening additional factories in its domestic market, Daedalus today announced it has raised $21 million in a Series A round of funding led by Nokia-funded NGP Capital, with participation from existing investors Khosla Ventures and Addition.

This takes Daedalus’s total funding past the $40 million mark, with other notable investors including Y Combinator (YC), which became involved after Daedalus participated in YC’s Winter 2020 program.

The Daedalus factory floor
The Daedalus factory floor. Image Credits: Daedalus

Fragmented fabrication

The manufacturing industry — particularly as it relates to precision part fabrication — is hugely fragmented by just about every estimation. While it’s tempting to imagine that a typical manufacturing setup in 2024 is something akin to that of a large automotive assembly plant, this really only applies where high-volume products (like cars) are involved — the reality is somewhat different when you get down to the level of precision parts used in industrial machinery.

A company that has been designing industry-specific valves for decades likely won’t be manufacturing everything itself internally. It will typically rely on an old-school network of manufacturers, which may mean working with a small business consisting of a single expert “craftsman” and a handful of helpers working from a small facility. 

“What this means is they’re not doing much in terms of digitization, and it’s difficult to change that because they’re just used to working with pen and paper, basically,” Daedalus founder and CEO Jonas Schneider told TechCrunch. “So you have these very low-tech manufacturers supplying the most critical components for these extremely high-end products.”

Daedalus founder & CEO Jonas Schneider
Daedalus founder and CEO Jonas Schneider. Image Credits: Daedalus

Founded in 2019, Daedalus uses the same off-the-shelf hardware available to any manufacturer, but its special sauce lies in the software it deploys on top to control and optimize the “shop floor” — that is, it automates many of the manual tasks involved in producing a particular part. So a customer sends their CAD (computer-aided design) drawings as usual, and Daedalus turns those drawings into a finished part, with automation permeating the process.

“It’s about orchestrating all of the workflows across the production, planning and scheduling of those running around on the factory floor doing the work,” Schneider said.

For context, when production begins for a new “part” in a machine, there are typically dozens of steps and hundreds of decisions involved that impact what tooling will be needed, what settings to use to create the precise shape and dimensions of the part, and so on. And this is where Daedalus enters the fray — its software captures the manufacturing decisions data of one “part” and uses that to guide the decisions around how a similar part is created in the future. So a slightly bigger valve, or a valve with an extra fitting, might be substantively the same as an earlier part, thus Daedalus uses pattern matching to apply that previous knowledge to configure its machines for the new part.
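Daedalus hasn’t published its matching algorithm, but the idea described above — reuse the machining decisions of the most similar previously produced part — can be sketched as a simple nearest-neighbor lookup. Everything here (the feature names, the distance weights, the tool settings) is hypothetical and purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class PartSpec:
    # Hypothetical, simplified feature vector for a precision part
    diameter_mm: float
    length_mm: float
    fittings: int

@dataclass
class MachiningPlan:
    tool: str
    spindle_rpm: int

def distance(a: PartSpec, b: PartSpec) -> float:
    # Lower means more similar; the weight on fittings is an arbitrary choice
    return (abs(a.diameter_mm - b.diameter_mm)
            + abs(a.length_mm - b.length_mm)
            + 10 * abs(a.fittings - b.fittings))

def plan_for(new_part: PartSpec,
             history: list[tuple[PartSpec, MachiningPlan]]) -> MachiningPlan:
    # Reuse the plan recorded for the most similar part produced before
    _, plan = min(history, key=lambda rec: distance(new_part, rec[0]))
    return plan

history = [
    (PartSpec(20, 100, 1), MachiningPlan("end_mill_6mm", 8000)),
    (PartSpec(80, 300, 3), MachiningPlan("end_mill_12mm", 4000)),
]
# A slightly bigger valve inherits the configuration of its closest predecessor
plan = plan_for(PartSpec(22, 105, 1), history)
```

In practice a system like this would operate over far richer part representations, but the principle — captured decisions from one part guiding the setup for the next — is the same.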

In many ways, Daedalus extends the basic concept of 3D printing, which has been democratizing the manufacturing process for more than a decade. But with machine learning smarts under the hood, it’s taking things to the next level — it’s like 3D printing on steroids.

“The comparison is very apt — as an outsider to this industry in the beginning, to me it seemed like custom manufacturing had [already] been solved with 3D printing. But it mostly comes down to technical limitations of the process,” Schneider said. “With 3D printing, it still means that you need to design a new part specifically so that it can be 3D-printed, and that actually ends up being quite an expensive process. But for the vast majority of the industrial base, it’s not really feasible, and they can’t do 3D printing because it’s not precise enough, or the materials are not strong enough. You can frame what we’re doing, in a sense, as taking this idea from 3D printing and applying it to industrial grade, high-end parts.”

The story so far

Prior to Daedalus, Schneider was technical lead at OpenAI, where he was instrumental in getting the company’s robotics division off the ground in 2016. Indeed, OpenAI might be better known today for its flagship ChatGPT AI chatbot, but the company also operated a robotics unit that conducted research into things like solving a Rubik’s Cube with a robotic hand, a project that Schneider was directly involved in.

OpenAI’s Rubik’s Cube hand. Image Credits: OpenAI

OpenAI ultimately disbanded this team in 2021, but Schneider had spearheaded the software engineering side of operations for more than three years before he departed to start Daedalus in 2019.

While there were various reasons why Schneider ended up leaving to form his own startup, there was one experience he encountered building the Rubik’s Cube hand that played a big part in his decision to launch Daedalus.

“At one point, the robot hand broke down and we had to get spare parts,” Schneider said. “And guess what? They needed to be precision manufactured. So there were these machines just like ours today, but we had to wait months to get these parts. And I thought, why is it so hard to get spare parts here? All of this contributed to me looking at this whole manufacturing space a bit more.”

For now, Daedalus has a single 50,000-square-foot factory in Karlsruhe from where it largely targets the German-speaking markets, including Austria and Switzerland. In the near term, the plan is to expand to a second factory in Germany, and then farther afield if demand is sufficient.

“This is the blueprint factory, right? This is where we’re learning all of the systems and all of the knowledge and distilling it into our way of producing these parts,” Schneider said. “And then in the long run, we’ll put these factories wherever our customers need them.”

Two business people looking at computer screen with data about the health of the business.

Crux is building GenAI-powered business intelligence tools


Image Credits: Morsa Images / Getty Images

GenAI might be one of the most exciting technologies today. But it’s also one of the fastest-moving — posing a challenge for enterprises keen to deploy it. With each new GenAI innovation, companies have to worry not only about staying on top of trends but validating what’s working, all while maintaining a semblance of accuracy, compliance and security.

There’s an entire cohort of startups tailoring GenAI tools to fit enterprise needs. Arcee, for instance, is creating solutions to help businesses securely train, benchmark and manage GenAI models, and Articul8 AI, an Intel spin-out, is building algorithm-powered enterprise software.

Now another upstart’s joining the crew.

Himank Jain, Atharva Padhye and Prabhat Singh are the co-founders of Crux, which creates AI models that answer questions about business data in plain language along the lines of OpenAI’s ChatGPT. Padhye explains it thusly:

“With a simple natural language command, executives can get any report, insight, root cause analysis or prediction at their fingertips,” he said via email. “For example, marketers using HubSpot can ask a question like ‘Why are my email campaigns to new users getting low conversions?’”

Crux — which Jain, Padhye and Singh pivoted nearly 15 times before arriving at the platform’s current form — converts schema, the structure of databases, into a “semantic layer” that AI models can understand. Beyond this, Crux allows customers to customize question-answering models to their business intelligence needs, terms and policies, thus improving the quality of the outputs, according to Padhye.

Crux
Crux’s platform for building, customizing and deploying enterprise-focused GenAI models and tools. Image Credits: Crux

Crux leverages a multi-model framework that breaks down questions posed by users into individual parts, distributing those parts among specialized, purpose-built models. For example, one model, which Padhye calls the “clarification agent,” asks follow-up questions to get at a user’s intent.
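Crux hasn’t detailed how its framework is implemented, but the routing pattern Padhye describes — a clarification agent that intercepts ambiguous questions, with the rest dispatched to specialized models — might look roughly like this. The heuristics and agent names below are invented for illustration; real agents would be LLM-backed rather than string matchers:

```python
from typing import Callable, Optional

def clarification_agent(question: str) -> Optional[str]:
    # Toy heuristic: ask a follow-up when no time period is mentioned
    period_hints = ("last", "this", "q1", "q2", "q3", "q4", "2023", "2024")
    if not any(hint in question.lower() for hint in period_hints):
        return "Which time period should the analysis cover?"
    return None

# Hypothetical specialized agents, keyed by task type
AGENTS: dict[str, Callable[[str], str]] = {
    "report": lambda q: f"[report for: {q}]",
    "root_cause": lambda q: f"[root-cause analysis for: {q}]",
}

def route(question: str) -> str:
    # Step 1: let the clarification agent resolve ambiguous intent first
    follow_up = clarification_agent(question)
    if follow_up:
        return follow_up
    # Step 2: dispatch to a purpose-built agent based on question type
    agent = "root_cause" if "why" in question.lower() else "report"
    return AGENTS[agent](question)
```

So a question like the HubSpot example earlier would first be checked for missing context, then handed to whichever specialized model fits the intent.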

Crux is deployed on-premise and doesn’t use customer data to train the models, Padhye says.

“Crux aims to launch [an] AI-powered decision-making engine for enterprises,” Padhye said. “Crux is challenging the incumbent business intelligence tools … by iterating faster and rethinking the analytics stack as a decision-making stack.”

Crux makes money by selling subscriptions plus charging setup and maintenance fees that depend on the type and size of a company’s deployment. It’s proven to be a profitable model; Crux’s annual recurring revenue reached $240,000 in four months with a four-company customer base, Padhye claims.

Crux aims to remain small for the foreseeable future, maxing out at a headcount of around 20 people by the end of the year. But with $2.6 million in capital it recently raised from Emergent Ventures, Y Combinator and a number of angel backers, Crux plans to expand “up-market,” Padhye said — focusing on acquiring new enterprise clients.

Added Emergent Ventures’ Anupam Rastogi via email:

“BI and data analytics … is on the brink of a major transformation driven by large language models and advanced AI. This sector is poised for exponential growth over the next decade, transitioning from static dashboards and manual reporting to real-time, accurate and actionable intelligence on demand. Crux has developed a groundbreaking product in this space which helps its customers drive incremental revenues quickly, and is already attracting significant customer interest.”

man interacting with Hello Robotics

Hello is building a platform toward more home robots


Image Credits: Hello Robotics

The road to the home robot is one fraught with peril. The number of success stories it’s delivered can be counted on a single hand. The reasons for this massive disconnect are nuanced and complex — much like the insides of our homes. Twenty years after the first Roomba arrived, the robot vacuum has begun to feel like a fluke — more of the exception than the rule.

Aaron Edsinger, the former Google Robotics director who now serves as Hello Robot’s CEO, isn’t attempting to build the universal home robot — at least not now. The Stretch robot line (not to be confused with the Boston Dynamics truck-unloading robot of the same name) is a platform the company hopes the next generation of home robots will be built around. Watching it cruise around a home in the demo videos brings to mind Nvidia’s line of reference robots.

The newly announced Stretch 3 is a robot with a wheeled base and an adjustable-height gripper. In the promotional video, you’ll see a couple of Stretches cruising around a home, making beds and unloading the dishwasher — exactly the manner of things people have long dreamed of in a home robot.

There are, however, two very important caveats. First is the $24,950 price tag. As someone who has been known to complain about high-end Roombas topping out above $1,000, I find it hard to imagine anyone paying the cost of a low-end new car — especially given the system’s shortcomings for consumers.

That brings us to point number two: The system is controlled by teleop. There’s nothing wrong with teleop in and of itself, of course. I’ve made that point plenty of times. But a one-to-one human-to-robot control scenario is not a sustainable one — particularly in the home, which you probably don’t want to open up to whoever ends up on the other side of the camera.

One place where teleop is great is the robotic learning process. This is where reinforcement learning comes in — walking the robot through the process of performing tasks in different scenarios. This is the kind of thing Tesla is most likely doing in that recent video of Optimus folding laundry — even if the company initially didn’t seem particularly eager to disclose that information.

“Too often, a video offers an exciting glimpse of the future, but the robot isn’t available,” co-founder Charlie Kemp says in a release. “Stretch 3 isn’t vaporware. It’s available today. It’s an invitation to join an amazing community creating an inspiring future. It’s also the most fun I’ve ever had as a programmer.”

All of that is true — save, perhaps, for the last bit. We’ll just have to take the good doctor’s word on that one. But being on sale today doesn’t mean most people will — or should — buy it. Much like the above Nvidia example, it’s most correctly viewed as a reference device third-party developers can access to make the sorts of apps that could — one day — be genuinely useful.

Back to the question posed at the beginning. Why have we been waiting so long for a proper follow-up to the Roomba? That product was designed to do one thing competently and has grown much better at that single task over time. The initial Roomba had a hockey puck design, and honestly hasn’t strayed too far from gen one on that front. There are, however, extreme limitations to that form factor, including height (this matters a lot when it comes to where mounted sensors are placed) and a lack of limbs.

Image Credits: Hello Robotics

As far as that second part goes, Hello tellingly refers to the recent excitement around humanoid robots. The notion of “general purpose” pops up a lot. Remember, for example, when Tesla Bot was first announced and the company’s CEO promised a robot that can work all day in the car factory and then grab you some groceries on the way home?

It would take a lot more words than I’m currently allotting myself here to explain why truly generalized robots are a lot further off than you probably think. I’ve often discussed a middle ground between the two — moving from single- to multi-purpose robots. The path there may, indeed, involve an SDK and an app store-style approach to introducing new functionality.

In this case, one begins to ask the reasonable question of how much the next home robot needs to look like us. The truly compelling argument here is stairs, but we’re far from a point where such mechatronic complexities can be delivered to home users at reasonable rates.

I find this bit from Hello’s press material to be particularly interesting: “Hello Robot has pioneered a middle way between simple single-purpose robots and complex humanoid robots, showing that robots don’t need to be humanoid to perform a wide variety of compelling tasks in homes.”

Mobile manipulation is a huge, huge bottleneck to the development of a proper home robot. Likely the solution will be more than just a couple of arms stuck on a Roomba. Rather than jumping straight to building yet another robot in our image, Stretch offers a manipulator more in line with what I’ve seen from home robot research projects like those found at the Toyota Research Institute.

I would say, at the very least, this is a space worth watching, even though you’re going to have to continue to patiently wait for your next robot pal.

smart speakers graphic

This German nonprofit is building an open voice assistant that anyone can use


Image Credits: Bryce Durbin / TechCrunch

There have been many attempts at open source AI-powered voice assistants (see Rhasspy, Mycroft and Jasper, to name a few) — all established with the goal of creating privacy-preserving, offline experiences that don’t compromise on functionality. But development’s proven to be extraordinarily slow. That’s because, in addition to all the usual challenges that attend open source projects, programming an assistant is hard. Tech like Google Assistant, Siri and Alexa has years, if not decades, of R&D behind it — and enormous infrastructure to boot.

But that’s not deterring the folks at Large-scale Artificial Intelligence Open Network (LAION), the German nonprofit responsible for maintaining some of the world’s most popular AI training data sets. This month, LAION announced a new initiative, BUD-E, that seeks to build a “fully open” voice assistant capable of running on consumer hardware.

Why launch a whole new voice assistant project when there are countless others out there in various states of abandonment? Wieland Brendel, a fellow at the Ellis Institute and a contributor to BUD-E, believes there isn’t an open assistant with an architecture extensible enough to take full advantage of emerging GenAI technologies, particularly large language models (LLMs) along the lines of OpenAI’s ChatGPT.

“Most interactions with [assistants] rely on chat interfaces that are rather cumbersome to interact with, [and] the dialogues with those systems feel stilted and unnatural,” Brendel told TechCrunch in an email interview. “Those systems are OK to convey commands to control your music or turn on the light, but they’re not a basis for long and engaging conversations. The goal of BUD-E is to provide the basis for a voice assistant that feels much more natural to humans and that mimics the natural speech patterns of human dialogues and remembers past conversations.”

Brendel added that LAION also wants to ensure that every component of BUD-E can eventually be integrated with apps and services license-free, even commercially — which isn’t necessarily the case for other open assistant efforts.

A collaboration with Ellis Institute in Tübingen, tech consultancy Collabora and the Tübingen AI Center, BUD-E — recursive shorthand for “Buddy for Understanding and Digital Empathy” — has an ambitious roadmap. In a blog post, the LAION team lays out what they hope to accomplish in the next few months, chiefly building “emotional intelligence” into BUD-E and ensuring it can handle conversations involving multiple speakers at once.

“There’s a big need for a well-working natural voice assistant,” Brendel said. “LAION has shown in the past that it’s great at building communities, and the ELLIS Institute Tübingen and the Tübingen AI Center are committed to provide the resources to develop the assistant.”

BUD-E is up and running — you can download and install it today from GitHub on an Ubuntu or Windows PC (macOS support is coming) — but it’s very clearly in the early stages.

LAION patched together several open models to assemble an MVP, including Microsoft’s Phi-2 LLM, Columbia’s text-to-speech StyleTTS2 and Nvidia’s FastConformer for speech-to-text. As such, the experience is a bit unoptimized. Getting BUD-E to respond to commands within about 500 milliseconds — in the range of commercial voice assistants such as Google Assistant and Alexa — requires a beefy GPU like Nvidia’s RTX 4090.
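Conceptually, a voice assistant assembled this way is a three-stage pipeline: speech-to-text, language model, text-to-speech — and the ~500-millisecond target applies to the whole round trip. The sketch below uses trivial stand-in functions in place of the actual models (FastConformer, Phi-2, StyleTTS2), purely to show the shape of the loop and where the latency budget is measured:

```python
import time

def speech_to_text(audio: bytes) -> str:
    # Stand-in for a speech recognizer such as Nvidia's FastConformer
    return "turn on the light"

def generate_reply(transcript: str) -> str:
    # Stand-in for an LLM such as Microsoft's Phi-2
    return "Okay, turning on the light."

def text_to_speech(reply: str) -> bytes:
    # Stand-in for a TTS model such as StyleTTS2
    return reply.encode("utf-8")

def respond(audio: bytes) -> tuple[bytes, float]:
    # End-to-end latency across all three stages is what the
    # ~500 ms commercial-assistant benchmark is measured against
    start = time.perf_counter()
    transcript = speech_to_text(audio)
    reply = generate_reply(transcript)
    speech = text_to_speech(reply)
    return speech, time.perf_counter() - start

speech, latency = respond(b"<microphone audio>")
```

With real models, each stage carries its own inference cost, which is why hitting the latency target currently demands hardware like an RTX 4090.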

Collabora is working pro bono to adapt its open source speech recognition and text-to-speech models, WhisperLive and WhisperSpeech, for BUD-E.

“Building the text-to-speech and speech recognition solutions ourselves means we can customize them to a degree that isn’t possible with closed models exposed through APIs,” Jakub Piotr Cłapa, an AI researcher at Collabora and BUD-E team member, said in an email. “Collabora initially started working on [open assistants] partially because we struggled to find a good text-to-speech solution for an LLM-based voice agent for one of our customers. We decided to join forces with the wider open source community to make our models more widely accessible and useful.”

In the near term, LAION says it’ll work to make BUD-E’s hardware requirements less onerous and reduce the assistant’s latency. A longer-horizon undertaking is building a dataset of dialogs to fine-tune BUD-E — as well as a memory mechanism to allow BUD-E to store information from previous conversations and a speech processing pipeline that can keep track of several people talking at once. 

I asked the team whether accessibility was a priority, considering speech recognition systems historically haven’t performed well with languages that aren’t English and accents that aren’t Transatlantic. One Stanford study found that speech recognition systems from Amazon, IBM, Google, Microsoft and Apple were almost twice as likely to mishear Black speakers versus white speakers of the same age and gender.

Brendel said that LAION’s not ignoring accessibility — but that it’s not an “immediate focus” for BUD-E.

“The first focus is on really redefining the experience of how we interact with voice assistants before generalizing that experience to more diverse accents and languages,” Brendel said.

To that end, LAION has some pretty out-there ideas for BUD-E, ranging from an animated avatar to personifying the assistant to support for analyzing users’ faces through webcams to account for their emotional state.

The ethics of that last bit — facial analysis — are a bit dicey, needless to say. But Robert Kaczmarczyk, a LAION co-founder, stressed that LAION will remain committed to safety.

“[We] adhere strictly to the safety and ethical guidelines formulated by the EU AI Act,” he told TechCrunch via email — referring to the legal framework governing the sale and use of AI in the EU. The EU AI Act allows European Union member countries to adopt more restrictive rules and safeguards for “high-risk” AI, including emotion classifiers.

“This commitment to transparency not only facilitates the early identification and correction of potential biases, but also aids the cause of scientific integrity,” Kaczmarczyk added. “By making our data sets accessible, we enable the broader scientific community to engage in research that upholds the highest standards of reproducibility.”

LAION’s previous work hasn’t been pristine in the ethical sense, and it’s pursuing a somewhat controversial separate project at the moment on emotion detection. But perhaps BUD-E will be different; we’ll have to wait and see.