SoftBank forms AI healthcare JV in Japan with Tempus

SoftBank Group Corp. founder, Chairman and CEO Masayoshi Son

Image Credits: Alessandro Di Ciommo/NurPhoto / Getty Images

SoftBank Group founder Masayoshi Son announced on Thursday that the Japanese tech giant has set up a joint venture in the country with Chicago-based health tech company Tempus. Together, the pair plan to develop AI-powered personalized medical services by analyzing medical data in Japan, under the name SB Tempus.

The company plans to start with oncology. Cancer remains the leading cause of death in Japan, according to Son, whose father died of the disease last year.

The move underscores Son’s wider ambitions in AI. At Thursday’s hastily arranged press conference, he filled in details on one specific application: the medical industry.

SoftBank’s ties with Tempus precede the JV news. It invested $200 million in Tempus in April, shortly before Tempus’s Nasdaq debut earlier this month. Tempus, once valued at $8.1 billion in 2022, raised nearly $411 million at a valuation of more than $6 billion via its IPO. That valuation, however, has not held up: the company’s market cap currently sits at about $4.5 billion.

The U.S.-based genomic testing and data analysis company was started by serial entrepreneur and billionaire Groupon founder Eric Lefkofsky in 2015, after he noticed that doctors didn’t rely on data during his wife’s treatment for breast cancer.

Tempus competes with industry peers including Foundation Medicine, which uses big data to analyze tumors, and Guardant Health, a biotech company that sells blood tests to track and potentially detect cancer.

SB Tempus will be a vehicle for Tempus to bring its data-driven medical technology to Japan. Tempus will “build clinical sequencing capabilities, organize patient data and build a real-world data business in Japan,” while SB Tempus, Son said, will provide genomic testing, medical data aggregation and analysis (spanning genomic, clinical, pathology and imaging data), and AI-driven insights for personalized treatments and therapies.

Both companies have made substantial investments in the venture. SoftBank and Tempus each hold a 50% stake, with SoftBank set to inject 30 billion yen (around $188 million), Son said at the media briefing.

SB Tempus will start operations in August and, as early as this year, will offer hospitals three medical services that use AI to analyze personal medical data, according to Son.

How it works: The JV will begin by collecting and analyzing patient data from Japanese hospitals and universities. The data, which will include genomic, pathological and clinical information as well as medical images, will be used to train AI models on patients in Japan. The company will then provide hospitals with the processed data for clinical use, and its AI offerings will suggest the best treatment for each patient.
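
To make that workflow concrete, here is a minimal, purely illustrative Python sketch of the kind of pipeline described above: aggregating a patient’s genomic, clinical, pathology and imaging records and producing ranked treatment suggestions. Every name and rule in it, from PatientRecord to the toy scoring in suggest_treatments, is hypothetical and not drawn from SB Tempus’s actual systems.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only; not SB Tempus's actual data model or API.

@dataclass
class PatientRecord:
    patient_id: str
    genomic: dict = field(default_factory=dict)    # e.g. {"EGFR": "L858R"}
    clinical: dict = field(default_factory=dict)   # e.g. {"stage": "IIIB", "age": 64}
    pathology: dict = field(default_factory=dict)  # e.g. {"histology": "adenocarcinoma"}
    imaging: list = field(default_factory=list)    # e.g. ["ct_2024_05_01.dcm"]

def aggregate(records: list[PatientRecord]) -> dict[str, PatientRecord]:
    """Index de-identified records collected from hospitals by patient ID."""
    return {r.patient_id: r for r in records}

def suggest_treatments(record: PatientRecord) -> list[tuple[str, float]]:
    """Return (treatment, score) pairs from toy rules standing in for a trained model."""
    suggestions = []
    if record.genomic.get("EGFR") == "L858R":
        suggestions.append(("EGFR tyrosine kinase inhibitor", 0.9))
    if record.clinical.get("stage", "").startswith("III"):
        suggestions.append(("chemoradiation", 0.6))
    suggestions.append(("standard-of-care chemotherapy", 0.4))
    return sorted(suggestions, key=lambda s: s[1], reverse=True)

if __name__ == "__main__":
    cohort = aggregate([PatientRecord(
        "jp-0001",
        genomic={"EGFR": "L858R"},
        clinical={"stage": "IIIB", "age": 64},
        pathology={"histology": "adenocarcinoma"},
    )])
    for treatment, score in suggest_treatments(cohort["jp-0001"]):
        print(f"{treatment}: {score:.1f}")
```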

Son said that only about 1% of patients in Japan have undergone genomic testing, compared with roughly 30% in the U.S. He added that the company’s goal is to bring Japan up to the same level.

Beyond oncology, the plan is to expand into other areas, such as neuropsychology, radiology and cardiology.

The announcement comes less than a week after Son made a rare public appearance at the group’s annual meeting last Friday.

At that meeting, Son said AI will be 10,000 times smarter than humans within a decade and laid out his vision for a world featuring Artificial Super Intelligence (ASI). He also said that SoftBank’s past investments were “just a warm-up” for his ambition to usher in an era of AI.

Son reiterated that AI will benefit humans across various sectors, citing medicine as one example. SoftBank is reportedly one of the companies interested in investing in Perplexity AI, the U.S.-based AI company, at a valuation of $3 billion, per Bloomberg’s report today. (TechCrunch was the first to report on that round back in April.)

A string of losses at SoftBank’s investment arm, the Vision Fund, led the Japanese tech mogul to switch into “defense mode” and adopt a more conservative investment strategy. Now SoftBank, which has billions of dollars in its war chest, appears ready to invest in AI in full swing.

Tempus rises 9% on the first day of trading, demonstrating investor appetite for a health tech with a promise of AI

J2 Ventures, focused on military healthcare, grabs $150M for its second fund

Image Credits: AFP / Getty Images

J2 Ventures, a firm led mostly by U.S. military veterans, announced on Thursday that it has raised a $150 million second fund. The Boston-based firm invests in startups whose products are purchased by civilians and the U.S. Department of Defense.  

While many emerging VCs are struggling to raise second funds, J2’s latest vehicle is more than double its $67.5 million debut fund from 2021.

At first blush, the firm may seem to be benefiting from VCs’ growing interest in defense tech. But J2 has no interest in positioning itself as a defense tech investor.

“Our portfolio is national-security adjacent, but not defense-focused,” said Alexander Harstrick, J2’s managing partner. The firm does not invest in technologies that protect critical national infrastructure or help deter attacks, such as drones, robotics, or surveillance tech.

Instead, J2 backs companies whose products help maintain the well-being and healthcare of nearly 3 million people employed by the U.S. military.  

Harstrick said that the Department of Defense (DoD) has historically adopted new technologies before they became popular with civilians. And it’s not just the internet, which was partially developed by the military.

“The Department of Veterans Affairs was the first to use telemedicine,” Harstrick said. “They were also the first to adopt electronic health records.”

J2’s healthcare investments include Tasso, a maker of needle-free blood draw tech, and Lumia Health, the maker of a wearable device that measures blood flow to the brain.

The firm also backs cybersecurity, infrastructure, and advanced computing startups like Femtosense, a developer of energy-efficient AI chips for smart devices.

J2 backs companies from pre-seed to Series A and writes checks that range from $1 million to $5 million. The firm’s limited partners include JPMorgan and the New Mexico State Investment Council.

Harstrick served as a military intelligence officer in the U.S. Army Reserve and was deployed in Iraq and Afghanistan. Before starting J2, he was an investor in the Defense Innovation Unit. 

Texas-based care provider HMG Healthcare says hackers stole unencrypted patient data

Flashlight beam shining on medical records in a dark room

Image Credits: Dave Whitney / Getty Images

Texas-based care provider HMG Healthcare has confirmed that hackers accessed the personal data of residents and employees, but says it has been unable to determine what types of data were stolen.

HMG Healthcare is headquartered in The Woodlands, Texas, and provides a range of services, including memory care, rehabilitation and assisted living. HMG’s website says it employs more than 4,100 people and serves approximately 3,500 patients, generating more than $150 million in annual revenues.

In a notice published on its website, HMG chief executive Derek Prince confirmed that hackers in August accessed a server storing “unencrypted files” containing sensitive information belonging to patients, employees, and their dependents. HMG said it learned of the breach months later in November.

HMG said the stolen information “likely contained” personal information, including names, dates of birth, contact information, Social Security numbers and records related to employment, as well as medical records, general health information and information regarding medical treatment, according to the notice. HMG also said that the notice has been published in order to inform “individuals for whom HMG has insufficient or out-of-date contact information” about the incident, suggesting historical patient data may have been impacted.

However, HMG admits that while it attempted to identify the specific data that was compromised, “we have now determined that such identification is not feasible.”

It’s not yet known why HMG couldn’t determine the types of data stolen, and a company spokesperson did not respond to TechCrunch’s questions.

HMG did not say in its notice how many individuals are thought to be affected by the breach. However, a filing with the Texas attorney general submitted by HMG on Monday confirms that approximately 75,000 Texans were impacted, though it’s not known how many people outside the state are affected.

HMG did not describe the nature of the cyberattack, but noted that “HMG worked diligently to ensure that the stolen files were not further shared by the hackers to other sources.” It’s not uncommon for corporate victims of ransomware attacks to pay hackers a ransom demand in an effort to limit the spread of stolen data, despite having no guarantees that the hackers would keep their end of the deal.

TechCrunch asked HMG if it had paid a ransom to the hackers.

Per HMG’s data breach notice, the healthcare provider also has a number of facilities in Kansas — including Tanglewood Health and Rehabilitation, and Smoky Hill Health and Rehabilitation — that were affected by the data breach.

HMG CEO Prince noted that the organization has “increased its data security protocols” in light of the incident, but did not specify what additional security steps were taken.

Why ransomware victims can’t stop paying off hackers

Generative AI is coming for healthcare, and not everyone's thrilled

Concept illustration depicting health data

Image Credits: Nadezhda Fedrunova / Getty Images

Generative AI, which can create and analyze images, text, audio, videos and more, is increasingly making its way into healthcare, pushed by both Big Tech firms and startups alike.

Google Cloud, Google’s cloud services and products division, is collaborating with Highmark Health, a Pittsburgh-based nonprofit healthcare company, on generative AI tools designed to personalize the patient intake experience. Amazon’s AWS division says it’s working with unnamed customers on a way to use generative AI to analyze medical databases for “social determinants of health.” And Microsoft Azure is helping to build a generative AI system for Providence, the not-for-profit healthcare network, to automatically triage messages to care providers sent from patients.  

Prominent generative AI startups in healthcare include Ambience Healthcare, which is developing a generative AI app for clinicians; Nabla, an ambient AI assistant for practitioners; and Abridge, which creates analytics tools for medical documentation.

The broad enthusiasm for generative AI is reflected in the investments in generative AI efforts targeting healthcare. Collectively, generative AI in healthcare startups have raised tens of millions of dollars in venture capital to date, and the vast majority of health investors say that generative AI has significantly influenced their investment strategies.

But both professionals and patients are mixed as to whether healthcare-focused generative AI is ready for prime time.

Generative AI might not be what people want

In a recent Deloitte survey, only about half (53%) of U.S. consumers said that they thought generative AI could improve healthcare — for example, by making it more accessible or shortening appointment wait times. Fewer than half said they expected generative AI to make medical care more affordable.

Andrew Borkowski, chief AI officer at the VA Sunshine Healthcare Network, the U.S. Department of Veterans Affairs’ largest health system, doesn’t think that the cynicism is unwarranted. Borkowski warned that generative AI’s deployment could be premature due to its “significant” limitations — and the concerns around its efficacy.

“One of the key issues with generative AI is its inability to handle complex medical queries or emergencies,” he told TechCrunch. “Its finite knowledge base — that is, the absence of up-to-date clinical information — and lack of human expertise make it unsuitable for providing comprehensive medical advice or treatment recommendations.”

Several studies suggest there’s credence to those points.

In a paper in the journal JAMA Pediatrics, OpenAI’s generative AI chatbot, ChatGPT, which some healthcare organizations have piloted for limited use cases, was found to make errors diagnosing pediatric diseases 83% of the time. And in testing OpenAI’s GPT-4 as a diagnostic assistant, physicians at Beth Israel Deaconess Medical Center in Boston observed that the model ranked the wrong diagnosis as its top answer nearly two times out of three.

Today’s generative AI also struggles with medical administrative tasks that are part and parcel of clinicians’ daily workflows. On the MedAlign benchmark to evaluate how well generative AI can perform things like summarizing patient health records and searching across notes, GPT-4 failed in 35% of cases.

OpenAI and many other generative AI vendors warn against relying on their models for medical advice. But Borkowski and others say they could do more. “Relying solely on generative AI for healthcare could lead to misdiagnoses, inappropriate treatments or even life-threatening situations,” Borkowski said.

Jan Egger, who leads AI-guided therapies at the University of Duisburg-Essen’s Institute for AI in Medicine, which studies the applications of emerging technology for patient care, shares Borkowski’s concerns. He believes that the only safe way to use generative AI in healthcare currently is under the close, watchful eye of a physician.

“The results can be completely wrong, and it’s getting harder and harder to maintain awareness of this,” Egger said. “Sure, generative AI can be used, for example, for pre-writing discharge letters. But physicians have a responsibility to check it and make the final call.”

Generative AI can perpetuate stereotypes

One particularly harmful way generative AI in healthcare can get things wrong is by perpetuating stereotypes.

In a 2023 study out of Stanford Medicine, a team of researchers tested ChatGPT and other generative AI–powered chatbots on questions about kidney function, lung capacity and skin thickness. Not only were ChatGPT’s answers frequently wrong, the co-authors found, but the answers also reinforced several long-held, untrue beliefs that there are biological differences between Black and white people — untruths that are known to have led medical providers to misdiagnose health problems.

The irony is, the patients most likely to be discriminated against by generative AI for healthcare are also those most likely to use it.

People who lack healthcare coverage — people of color, by and large, according to a KFF study — are more willing to try generative AI for things like finding a doctor or mental health support, the Deloitte survey showed. If the AI’s recommendations are marred by bias, it could exacerbate inequalities in treatment.

However, some experts argue that generative AI is improving in this regard.

In a Microsoft study published in late 2023, researchers said they achieved 90.2% accuracy on four challenging medical benchmarks using GPT-4. Vanilla GPT-4 couldn’t reach this score. But, the researchers say, through prompt engineering — designing prompts for GPT-4 to produce certain outputs — they were able to boost the model’s score by up to 16.2 percentage points. (Microsoft, it’s worth noting, is a major investor in OpenAI.)
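
As a rough illustration of what this kind of prompt engineering can look like (this is not Microsoft’s actual Medprompt pipeline), the sketch below assembles a few-shot, chain-of-thought-style prompt for a multiple-choice medical question. The example questions and wording are invented, and the resulting string would still need to be sent to a model and scored against a real benchmark.

```python
# Illustrative sketch of few-shot, chain-of-thought prompt assembly.
# The examples and wording are invented; this is not Microsoft's Medprompt code.

FEW_SHOT_EXAMPLES = [
    {
        "question": "A patient on long-term corticosteroids is at increased risk of which condition?",
        "options": ["A) Osteoporosis", "B) Hyperkalemia", "C) Hypoglycemia"],
        "reasoning": "Chronic corticosteroid use reduces bone density.",
        "answer": "A",
    },
]

def build_prompt(question: str, options: list[str]) -> str:
    """Assemble a prompt with worked examples and an explicit 'reason first' instruction."""
    parts = ["Answer the medical multiple-choice question. Think step by step, then give the letter."]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(
            f"Question: {ex['question']}\n" + "\n".join(ex["options"])
            + f"\nReasoning: {ex['reasoning']}\nAnswer: {ex['answer']}"
        )
    parts.append(f"Question: {question}\n" + "\n".join(options) + "\nReasoning:")
    return "\n\n".join(parts)

if __name__ == "__main__":
    print(build_prompt(
        "Which electrolyte disturbance is most associated with loop diuretics?",
        ["A) Hypokalemia", "B) Hypercalcemia", "C) Hypernatremia"],
    ))
```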

Beyond chatbots

But asking a chatbot a question isn’t the only thing generative AI is good for. Some researchers say that medical imaging could benefit greatly from the power of generative AI.

In July, a group of scientists unveiled a system called complementarity-driven deferral to clinical workflow (CoDoC), in a study published in Nature. The system is designed to figure out when medical imaging specialists should rely on AI for diagnoses versus traditional techniques. CoDoC did better than specialists while reducing clinical workflows by 66%, according to the co-authors. 
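
For intuition, here is a heavily simplified, hypothetical deferral rule in the spirit of such systems: accept the AI read only when its confidence is decisive, and route ambiguous cases to the human specialist. The thresholds and function names are invented for illustration; the published CoDoC system is reported to learn when to defer from data rather than using fixed cutoffs.

```python
# Simplified, hypothetical deferral rule inspired by "learning to defer" systems.
# Not DeepMind's actual CoDoC implementation; thresholds are arbitrary placeholders.

def route_case(ai_confidence: float, lower: float = 0.15, upper: float = 0.85) -> str:
    """Defer ambiguous cases (mid-range confidence) to the human specialist."""
    if ai_confidence >= upper:
        return "accept AI read: likely positive"
    if ai_confidence <= lower:
        return "accept AI read: likely negative"
    return "defer to clinician"

if __name__ == "__main__":
    for conf in (0.05, 0.50, 0.95):
        print(f"confidence={conf:.2f} -> {route_case(conf)}")
```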

In November, a Chinese research team demoed Panda, an AI model used to detect potential pancreatic lesions in X-rays. A study showed Panda to be highly accurate in classifying these lesions, which are often detected too late for surgical intervention. 

Indeed, Arun Thirunavukarasu, a clinical research fellow at the University of Oxford, said there’s “nothing unique” about generative AI that precludes its deployment in healthcare settings.

“More mundane applications of generative AI technology are feasible in the short- and mid-term, and include text correction, automatic documentation of notes and letters and improved search features to optimize electronic patient records,” he said. “There’s no reason why generative AI technology — if effective — couldn’t be deployed in these sorts of roles immediately.”

“Rigorous science”

But while generative AI shows promise in specific, narrow areas of medicine, experts like Borkowski point to the technical and compliance roadblocks that must be overcome before generative AI can be useful — and trusted — as an all-around assistive healthcare tool.

“Significant privacy and security concerns surround using generative AI in healthcare,” Borkowski said. “The sensitive nature of medical data and the potential for misuse or unauthorized access pose severe risks to patient confidentiality and trust in the healthcare system. Furthermore, the regulatory and legal landscape surrounding the use of generative AI in healthcare is still evolving, with questions regarding liability, data protection and the practice of medicine by non-human entities still needing to be solved.”

Even Thirunavukarasu, bullish as he is about generative AI in healthcare, says that there needs to be “rigorous science” behind tools that are patient-facing.

“Particularly without direct clinician oversight, there should be pragmatic randomized control trials demonstrating clinical benefit to justify deployment of patient-facing generative AI,” he said. “Proper governance going forward is essential to capture any unanticipated harms following deployment at scale.”

Recently, the World Health Organization released guidelines that advocate for this type of science and human oversight of generative AI in healthcare, as well as the introduction of auditing, transparency and impact assessments of this AI by independent third parties. The goal, the WHO says in its guidelines, is to encourage participation from a diverse cohort of people in the development of generative AI for healthcare and to give them an opportunity to voice concerns and provide input throughout the process.

“Until the concerns are adequately addressed and appropriate safeguards are put in place,” Borkowski said, “the widespread implementation of medical generative AI may be … potentially harmful to patients and the healthcare industry as a whole.”