Piramidal's foundation model for brain waves could supercharge EEGs

Image Credits: Piramidal

AI models are being applied to every dataset under the sun but are inconsistent in their outcomes. This is as true in the medical world as anywhere else, but a startup called Piramidal believes it has a sure thing with a foundational model for analyzing brain scan data.

Co-founders Dimitris Sakellariou and Kris Pahuja have observed that electroencephalography (EEG) technology, while used in practically every hospital, is fragmented among many types of machines and requires specialized knowledge to interpret. A piece of software that can consistently flag worrisome patterns, regardless of time, location, or equipment type, could improve outcomes for folks with brain disorders, while taking some of the load off overworked nurses and doctors.

“In the neural ICU, there are nurses actually monitoring the patient and looking for signs on the EEG. But sometimes they have to leave the room, and these are acute conditions,” said Pahuja. An abnormal reading or alarm could mean an epileptic episode, or a stroke, or something else — nurses don’t have that training, and even specialist doctors may recognize one but not the other.

The two started the company after working for years on the feasibility of computational tools in neurology. They found there is absolutely a way to automate analysis of EEG data that is beneficial for care but that there’s no simple way to deploy that technology where it’s needed.

“I have experience with this, and I mean I’ve been sitting next to neurologists in the operating room to understand exactly why these brain waves are useful, and how we can build computational systems to identify them,” said Sakellariou. “They’re helpful in many contexts, but every time you use an EEG device, you have to rebuild the whole system for that specific problem. You need to get new data, you need to have humans annotate the data from scratch.”

That would be hard enough if every EEG system, hospital IT setup, and data format were the same, but they vary widely in the most basic elements, like how many electrodes are on the machine and where they’re placed.

Co-founders Dimitris Sakellariou (left) and Kris Pahuja.
Image Credits: Piramidal

Piramidal’s founders believe — and claim to know, though this culmination of their work is not yet published — that a foundational model for EEG readings could make lifesaving brain wave pattern detection work out-of-the-box rather than after months of studies.

To be clear, it’s not meant to be a do-it-all medical platform — a closer analogue may be Meta’s Llama series of (relatively) open models, which foot the initial expense of creating the foundational capability of language understanding. Whether you build a customer service chatbot or a digital friend is up to you, but neither works without the fundamental ability to understand human language.

But AI models aren’t limited to language — they can be trained to work in fluid dynamics, music, chemistry, and more. For Piramidal, the “language” is brain activity, as read by EEGs, and the resulting model would notionally be capable of understanding and interpreting signals from any setup, any number of electrodes or model of machine, and any patient.
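To make the "any number of electrodes" claim concrete: one common channel-agnostic design is to encode each electrode's signal separately and then pool across channels, so the output size never depends on how many electrodes the machine has. The sketch below is a toy illustration of that idea, not Piramidal's actual architecture; `embed_channels` and every detail inside it are hypothetical.

```python
import numpy as np

def embed_channels(eeg, d_model=16, rng=None):
    """Toy sketch of a channel-agnostic EEG encoder.

    Maps a recording with ANY number of channels to a fixed-size vector,
    so downstream layers never depend on the electrode count or montage.
    eeg: array of shape (n_channels, n_samples).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    n_channels, n_samples = eeg.shape
    # Per-channel summary feature (mean power), standing in for a learned encoder
    feats = (eeg ** 2).mean(axis=1, keepdims=True)   # (n_channels, 1)
    proj = rng.standard_normal((1, d_model))          # shared projection, reused per channel
    tokens = feats @ proj                             # (n_channels, d_model)
    # Pool across channels: output shape is (d_model,) regardless of montage
    return tokens.mean(axis=0)
```

Under this scheme a 19-electrode clinical montage and a 64-electrode research cap map to the same fixed-size representation, which is what would let a single model ingest recordings from any machine.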

No one has yet built one — at least, not publicly.

Although they were careful not to overstate their current progress, Sakellariou and Pahuja did say, “We have built the foundational model, we have run our experiments on it, and now we are in the process of productionizing the code base so it is ready to be scaled to billions of parameters. It’s not about research — from day one it’s been about building the model.”

The first production version of this model will be deployed in hospitals early next year, Pahuja said. “We’re working on four pilots starting in Q1; all four of them will test in the ICU, and all four want to co-develop with us.” This will be a valuable proof of concept that the model works in the diverse circumstances presented by any care unit. (Of course, Piramidal’s tech will operate in addition to, not instead of, the monitoring patients would normally receive.)


The foundation model will still need to be fine-tuned for certain applications, work that Pahuja said they will do themselves at first; unlike many other AI companies, they don’t plan to build a foundation model and then rake in fees from API usage. But they were clear that it’s still incredibly valuable as is.

“There’s no world where a model trained from scratch will do better than a pretrained model like ours; having a warm start can only improve things,” Sakellariou said. “It’s still the biggest EEG model that has ever existed, infinitely larger than anything else out there.”

To move forward, Piramidal needs the two things essential to every AI company: money and data. The first they have a start on, with a $6 million seed round co-led by Adverb Ventures and Lionheart Ventures, with participation by Y Combinator and angel investors. That money will go toward compute costs (huge for training models) and staffing up.

As far as data goes, they have enough to get their first production model trained. “It turns out there’s a lot of open source data — but a lot of open source siloed data. So we’ve been in the process of aggregating and harmonizing that into a big integrated data store.”
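Harmonizing siloed EEG datasets typically starts with mundane steps like resampling every recording to a common sampling rate. A minimal sketch of that one step, where the function name and the 256 Hz target are illustrative assumptions rather than anything Piramidal has described:

```python
import numpy as np

def harmonize(recording, fs, target_fs=256):
    """Resample one EEG channel to a common rate via linear interpolation.

    A toy stand-in for real dataset harmonization: recordings collected at
    different sampling rates end up on a shared time grid.
    recording: 1-D array of samples; fs: original sampling rate in Hz.
    """
    n = recording.shape[-1]
    duration = n / fs                                   # seconds of signal
    t_old = np.arange(n) / fs                           # original timestamps
    t_new = np.arange(int(duration * target_fs)) / target_fs
    return np.interp(t_new, t_old, recording)
```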

The partnerships with the hospitals should provide valuable and voluminous training data, though — thousands of hours of it. This and other sources could help elevate the next version of the model beyond human capability.

Right now, Sakellariou said, “We can address confidently this set of defined patterns doctors look out for. But a bigger model will let us pick out patterns smaller than the human eye can consistently and empirically tell exist.”

That’s still a ways off, but superhuman capability is not a prerequisite to improving the quality of care. The ICU pilots should allow the tech to be evaluated and documented much more rigorously, both in scientific literature and likely in investors’ meeting rooms.

How to build the foundation for a profitable AI startup

Image Credits: razum / Shutterstock

Sanjay Dhawan

Contributor

Sanjay Dhawan is the CEO of SymphonyAI, a leader in predictive and generative enterprise AI SaaS. He brings a reputation for rapidly growing technology companies and unlocking market value as an executive at Cerence, Harman, Symphony Teleca and Aricent.

Investment in AI companies has now entered its cautious phase. Following a year when the money directed at AI startups far outpaced any other sector, investment decisions have become more measured and better validated. Investors are warier of AI hype and are looking for companies that will turn a profit.

Building a profitable AI business poses unique challenges beyond those faced when launching a typical tech startup. Systemic issues like the high cost of renting GPUs, a widening talent gap, towering salaries, and expensive API and hosting requirements can cause costs to quickly spiral out of control.

The coming months could be daunting for AI company founders as they watch their fellow leaders struggle or even fail in new businesses, but there is a proven path to profitability. I applied these steps when I joined SymphonyAI at the beginning of 2022, and we just wrapped up a year in which we grew 30% and approached $500 million in revenue run rate. The same formula worked at my previous companies (Cerence, Harman, Symphony Teleca and Aricent, among others): focusing on specific customer needs and capturing value across a particular industry. All along the way, here are the considerations that formed the foundation for our successful efforts.

Build a realistic and accurate cost model

Startups face many challenges, but AI businesses have some unique factors that can skew financial models and revenue projections, leading to spiraling costs down the road. It’s easy to miscalculate here — decisions on big issues may have unintended consequences, while there’s a long list of non-obvious expenses to consider as well.

Let’s begin with one of the most important upfront decisions: Is it more cost-effective to use a cloud-based AI model or host your own? It’s a decision that teams must make early because as you head down your chosen path, you’ll either go deeper into the custom capabilities offered by the AI giants or you’ll begin building your own tech stack. Each of those carries significant costs.

Defining your answer begins with determining your particular use case, but generally, the cloud makes sense for training and inference if you won’t be moving vast amounts of data in and out of data stores and racking up huge egress fees. But be careful: if you expect to sell your solution for $25 per user per month with unlimited queries, and OpenAI is charging you per token behind the scenes, that model will fall flat pretty quickly as your unit economics fail to turn a profit.
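That warning is easy to verify with back-of-envelope arithmetic. A sketch, with every number an illustrative assumption rather than a real price:

```python
def monthly_margin(price_per_user=25.0, queries_per_user=300,
                   tokens_per_query=1500, cost_per_1k_tokens=0.01):
    """Per-seat gross margin for an AI product paying a per-token API fee.

    All defaults are illustrative: a $25/month seat, 300 queries a month,
    1,500 tokens per query, $0.01 per 1,000 tokens.
    """
    token_cost = queries_per_user * tokens_per_query / 1000 * cost_per_1k_tokens
    return price_per_user - token_cost
```

With the moderate defaults above, API costs eat $4.50 of each $25 seat; let "unlimited queries" drift to 5,000 per user and the same seat loses money, which is the unit-economics trap the paragraph describes.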

Interestingly, one of the biggest stories of the past year, the boom in GPUs for AI, isn’t that big a factor in your ultimate gross margin equation. Most startups typically pick up a pre-deployed model and use an available API, with the onus on OpenAI to figure out the GPU allocation and give you the production capacity. It’s much more important to procure high-quality training data than to chase the latest GPU hardware — that’s the real foundation for a successful AI application built on top of an existing model.

Beyond those factors, there’s a host of other costs that can have outsized impacts. Don’t forget to factor ongoing data cleaning and PII (personally identifiable information) removal into your resource and budget allocations, as this is crucial for both model accuracy and risk mitigation. And think critically about your hiring plan: a balanced team of data scientists and industry experts, including remote roles, is essential to optimal growth and contextual decision-making.

Go vertical, not horizontal

Building a broad AI platform or solution may be the biggest pitfall for many promising AI businesses. A horizontal approach with general-purpose capabilities aims for a wide audience, but it leaves the company open to more focused competitors that incorporate specialized domain expertise and workflows, and it puts the onus on your customers to define the product’s fit within their own use cases. Other startups can take the same AI models and APIs and build a similar horizontal solution within a few months. And the latest updates or features from AI giants like OpenAI and Google leave horizontal businesses open to disruption.

A smarter approach is to go narrow and deep — identify a specific industry use case with urgent problems that AI can solve well and bring value (by the way, not an easy task in itself), then channel all your efforts into building vertical-specific models tailored and tuned to deliver maximum value for that specific use case within that industry. That means investing heavily in your technology and hiring subject matter experts to inform your software architecture and go-to-market strategy. Resist the temptation to scale horizontally until you have unequivocally solved your initial use case.

Fine-tune existing models

As part of this vertical approach, there’s no need to spend valuable capital training a model on massive general-purpose datasets. Once you’ve determined the specific vertical problem to solve, you can fine-tune open source variants of GPT to create domain-specific models to underpin your applications.

The use of digital copilots in industrial businesses, financial services, and retail illustrates this approach well. Tailored, vertically optimized predictive and generative AI together provide contextual answers to specific questions or generate and organize data for business insights.

Know when to say when

One of the most critical product decisions on your way to profitability is: How do you know when your AI solution is ready for production? The sooner you can go to market, the sooner you can monetize your hard work. Training and fine-tuning models can go on indefinitely, so creating a standardized benchmark that can serve as both an evaluation and a comparison point is essential.

Begin by comparing your model against existing rule-based engines. Does it perform the work better than what’s in the market today? Does it help upskill less experienced team members to perform more like their highest-performing peers? That’s what makes a compelling value proposition for a prospective customer. You’re aiming to measure real-world results, not merely what’s possible.
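A standardized benchmark of the kind described can be as simple as a fixed, held-out set of labeled cases scored identically for the incumbent rule-based engine and the candidate model. A toy sketch, where all data, thresholds, and names are invented for illustration:

```python
def benchmark(predict, cases):
    """Accuracy of a predictor on a fixed benchmark: list of (input, label)."""
    return sum(predict(x) == y for x, y in cases) / len(cases)

# Held-out cases: (anomaly score, true label), invented for illustration
cases = [(0.2, 0), (0.9, 1), (0.6, 1), (0.4, 0), (0.8, 1)]

rule_based = lambda x: int(x > 0.7)   # stand-in for the existing rule engine
model      = lambda x: int(x > 0.5)   # stand-in for the candidate AI model
```

Because both systems are scored on the same frozen cases, the comparison doubles as the go/no-go gate: ship when the model's benchmark score clears the rule-based baseline.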

There’s always a trade-off between improving the accuracy and relevance of your data and the resulting training costs. At some point, you’ll need to determine the right amount of data and when to stop. There is a balance between data training costs and incremental quality improvement that you get by continuing to train — that is, the benefit the end user will derive from those few additional points of inference quality for that use case. (One example: we have an industrial AI model with 10 trillion available data points for training, but we stopped at 3 trillion for our first release.)
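The stopping rule implied above, keep training only while the marginal quality gain is worth the marginal cost, can be sketched as a simple comparison. All quantities are illustrative, not SymphonyAI's actual calculus:

```python
def stop_point(gains, tranche_cost, value_per_point):
    """Return how many data tranches to train on before stopping.

    gains: estimated quality gain (in points) from each successive tranche,
    in the order you would add them. Stop at the first tranche whose
    marginal value no longer covers its marginal cost.
    """
    for i, gain in enumerate(gains):
        if gain * value_per_point < tranche_cost:
            return i  # train on the first i tranches only
    return len(gains)
```

With estimated gains of [5, 3, 1, 0.5, 0.2] points per tranche, a tranche cost of 2, and a value of 1 per point, training stops after two tranches: the same diminishing-returns logic behind stopping at 3 trillion of 10 trillion available data points.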

The road to profitability

The coming year will mark a dividing line in the growth of enterprise AI. After the hype of 2023, it will take more than an eye-popping product demo to attract investors or close a sale: AI companies will need to demonstrate a thoughtful approach to their business and more fully developed products ready for testing and deployment — with bonus points for having real customers who will provide feedback on requirements and testing that improve the product.

AI companies still have immense potential, but those that succeed will need to stay nimble, contain costs, and resist scope creep in these final shaping stages. Profitability awaits those who move confidently forward.
