MIT develops recyclable 3D-printed glass blocks for construction

Image Credits: MIT

The use of 3D printing has been praised as an alternative to traditional construction. It promises faster build times, more creative designs, and fewer errors, all while reducing carbon footprints. New research out of MIT points to a compelling new take on the concept, relying on 3D-printed glass blocks shaped like a figure eight that snap together like Legos.

The team points to glass’s optical properties and its “infinite recyclability” as reasons for turning to the material. “As long as it’s not contaminated, you can recycle glass almost infinitely,” says mechanical engineering assistant professor Kaitlyn Becker.

The team relied on 3D printers designed by Evenline — itself an MIT spinoff.

MIT researchers release a repository of AI risks

Image Credits: Ariya Sontrapornpol / Getty Images

Which specific risks should a person, company or government consider when using an AI system, or crafting rules to govern its use? It’s not an easy question to answer. If it’s an AI with control over critical infrastructure, there’s the obvious risk to human safety. But what about an AI designed to score exams, sort resumes or verify travel documents at immigration control? Those each carry their own, categorically different risks, albeit risks no less severe.

In crafting laws to regulate AI, like the EU AI Act or California’s SB 1047, policymakers have struggled to come to a consensus on which risks the laws should cover. To help provide a guidepost for them, as well as for stakeholders across the AI industry and academia, MIT researchers have developed what they’re calling an AI “risk repository” — a sort of database of AI risks.

“This is an attempt to rigorously curate and analyze AI risks into a publicly accessible, comprehensive, extensible and categorized risk database that anyone can copy and use, and that will be kept up to date over time,” Peter Slattery, a researcher at MIT’s FutureTech group and lead on the AI risk repository project, told TechCrunch. “We created it now because we needed it for our project, and had realized that many others needed it, too.”

Slattery says that the AI risk repository, which includes over 700 AI risks grouped by causal factors (e.g. intentionality), domains (e.g. discrimination) and subdomains (e.g. disinformation and cyberattacks), was born out of a desire to understand the overlaps and disconnects in AI safety research. Other risk frameworks exist. But they cover only a fraction of the risks identified in the repository, Slattery says, and these omissions could have major consequences for AI development, usage and policymaking.
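
The repository’s three-level grouping reads like a small data model, which makes it easy to picture in code. Below is a minimal sketch of how such a taxonomy could be represented; the class, field names and example entries are illustrative assumptions, not the repository’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One entry in a hypothetical AI risk repository."""
    description: str
    causal_factor: str  # e.g. intentional vs. unintentional
    domain: str         # e.g. "Discrimination"
    subdomain: str      # e.g. "Disinformation"
    source: str         # framework or paper the risk was extracted from

# Illustrative entries only; not taken from the actual repository.
risks = [
    RiskEntry("Model generates persuasive false news articles",
              causal_factor="intentional", domain="Misinformation",
              subdomain="Disinformation", source="Framework A"),
    RiskEntry("Hiring model ranks resumes unevenly across groups",
              causal_factor="unintentional", domain="Discrimination",
              subdomain="Unfair outcomes", source="Framework B"),
]

# The layered grouping makes cross-cutting questions a simple filter:
disinfo = [r for r in risks if r.subdomain == "Disinformation"]
print(f"{len(disinfo)} entries tagged Disinformation")
```

Keeping causal factor, domain and subdomain as separate fields is what would let a user slice the same 700-plus risks along any of the three axes.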

“People may assume there is a consensus on AI risks, but our findings suggest otherwise,” Slattery added. “We found that the average frameworks mentioned just 34% of the 23 risk subdomains we identified, and nearly a quarter covered less than 20%. No document or overview mentioned all 23 risk subdomains, and the most comprehensive covered only 70%. When the literature is this fragmented, we shouldn’t assume that we are all on the same page about these risks.”

To build the repository, the MIT researchers worked with colleagues at the University of Queensland, the nonprofit Future of Life Institute, KU Leuven and AI startup Harmony Intelligence to scour academic databases and retrieve thousands of documents relating to AI risk evaluations.

The researchers found that the third-party frameworks they canvassed mentioned certain risks more often than others. For example, over 70% of the frameworks included the privacy and security implications of AI, whereas only 44% covered misinformation. And while over 50% discussed the forms of discrimination and misrepresentation that AI could perpetuate, only 12% talked about “pollution of the information ecosystem” — i.e. the increasing volume of AI-generated spam.
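
Coverage comparisons like these boil down to tallying which frameworks mention which subdomains. The sketch below shows one way such statistics can be computed; the framework names and subdomain sets are invented for illustration, not the study’s data.

```python
from collections import Counter

# Hypothetical mapping of frameworks to the risk subdomains they mention;
# names and contents are invented, not taken from the MIT study.
frameworks = {
    "Framework A": {"privacy", "security", "misinformation"},
    "Framework B": {"privacy", "discrimination"},
    "Framework C": {"privacy", "security", "information-ecosystem pollution"},
}
total = len(frameworks)

# Share of frameworks mentioning each subdomain (the "over 70% covered
# privacy and security" style of statistic).
counts = Counter(sub for subs in frameworks.values() for sub in subs)
for subdomain, n in counts.most_common():
    print(f"{subdomain}: mentioned by {n / total:.0%} of frameworks")

# Average share of all known subdomains each framework covers (the
# "average framework mentioned 34% of the 23 subdomains" style of statistic).
all_subs = set().union(*frameworks.values())
avg = sum(len(subs) / len(all_subs) for subs in frameworks.values()) / total
print(f"Average coverage per framework: {avg:.0%}")
```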

“A takeaway for researchers and policymakers, and anyone working with risks, is that this database could provide a foundation to build on when doing more specific work,” Slattery said. “Before this, people like us had two choices. They could invest significant time to review the scattered literature to develop a comprehensive overview, or they could use a limited number of existing frameworks, which might miss relevant risks. Now they have a more comprehensive database, so our repository will hopefully save time and increase oversight.”

But will anyone use it? It’s true that AI regulation around the world today is at best a hodgepodge: a spectrum of different approaches disunified in their goals. Had an AI risk repository like MIT’s existed before, would it have changed anything? Could it have? That’s tough to say.

Another fair question to ask is whether simply being aligned on the risks that AI poses is enough to spur moves toward competently regulating it. Many safety evaluations for AI systems have significant limitations, and a database of risks won’t necessarily solve that problem.

The MIT researchers plan to try, though. Neil Thompson, head of the FutureTech lab, tells TechCrunch that the group plans in its next phase of research to use the repository to evaluate how well different AI risks are being addressed.

“Our repository will help us in the next step of our research, when we will be evaluating how well different risks are being addressed,” Thompson said. “We plan to use this to identify shortcomings in organizational responses. For instance, if everyone focuses on one type of risk while overlooking others of similar importance, that’s something we should notice and address.”

MIT scientists are working on a vibrating obesity pill

Image Credits: MIT News

MIT likens a new vibrating capsule to drinking a glass full of water prior to eating. Dietitians recommend the latter as a method for sending signals to your brain to simulate the sensation of being full. The researchers behind the new project further suggest it as a future alternative to surgery and GLP-1s. The latter, which include semaglutides like Ozempic and Wegovy, are both extremely popular and prohibitively expensive, owing in large part to pharma IP laws.

MIT’s capsule has seen some laboratory success. Giving test animals the pill 20 minutes before eating reduced their consumption by around 40%, per the team. Like the glass of water trick, the capsule stimulates mechanoreceptors, which send a signal to the brain through the vagus cranial nerve. Once activated, the brain kicks off the production of insulin, GLP-1, C-peptide and PYY hormones, decreasing hunger while ramping up the digestion process.

“The behavioral change is profound, and that’s using the endogenous system rather than any exogenous therapeutic,” associate professor Giovanni Traverso notes. “We have the potential to overcome some of the challenges and costs associated with delivery of biologic drugs by modulating the enteric nervous system.”

The capsule, which is roughly the size of a standard multi-vitamin, contains a vibrating motor, powered by a silver oxide battery. After reaching the stomach, gastric acid dissolves the outside layer and completes the circuit, kickstarting the vibration.

Beyond efficacy, the team is working to determine the system’s safety. That requires a method for ramping up production and eventual human testing. “At scale, our device could be manufactured at a pretty cost-effective price point,” says postdoc researcher Shriya Srinivasan.

Capsule-based technology treatments have been a hot category in recent years, as researchers explore ingestible sensors and even micro-robotic systems.
