Disinformation may 'go nuclear' rather than 'go viral,' researchers say

Image Credits: Abigail Malate/AIP

We say something “goes viral” because we tend to think of rumors and disinformation spreading the way that an infection spreads. But these days it may be more accurate to say something “goes nuclear,” according to a new paper that models disinfo as a form of fission reaction.

As common wisdom had it even before the age of instantaneous transmission of data, “A lie can get halfway around the world while the truth is still pulling on its boots.” A pithy epigram, yes, but it’s not much help in analyzing the phenomenon.

We often reach for natural processes to represent how humans act as groups. Physically, large crowds and traffic act like fluids and are often accurately modeled as such. Other rules govern our behavior, and when it comes to spreading rumors, the spread of disease is an intuitive analogue. People act as vectors for a lie rather than a virus, and the results provide a lot of insight into how it works and how to stop it.

But the fast-paced and aggressive modern social media and news environment changes things somewhat. As researchers from Shandong Normal University in China, led by Wenrong Zheng, describe it in their paper published in AIP Advances:

The infectious disease model is not yet able to truly reflect the rumor propagation in the network. This is mainly due to the fact that infectious diseases do not propagate actively, while rumors propagate actively, and the model ignores the rationality and subjectivity of the rumor spreaders; secondly, the infectious disease model only takes into account the changes in the group size, but not the resulting social impact and potential risks.

In other words, the disease model imperfectly represents the way those “infected” by a rumor actively propagate it, rather than simply passing it to someone near them at the grocery store. And disease models are often intended to project and prevent death, but perhaps not other important metrics relevant to the study of disinformation.

So what natural process can we use instead? Scientists have proposed wildfires, swarms of insects, and collections of bouncing balls — but today’s stand-in from nature is … nuclear fission.

A quick nuclear reactors 101: Fission is when uranium atoms are forced into an excited state in which they emit neutrons, striking other uranium atoms and causing them to do the same. At a certain level of artificial stimulation, this reaction of atoms exciting other atoms becomes self-sustaining; in a reactor, this process is tightly controlled and the resulting heat from all these neutrons splitting off is harvested for power. In a bomb, however, the reaction is encouraged to grow exponentially, producing an explosion.
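For readers who want that threshold made precise, reactor physics summarizes it with the effective multiplication factor (this framing is standard criticality theory, not something taken from the paper):

\[
k_{\mathrm{eff}} = \frac{\text{fissions in generation } n+1}{\text{fissions in generation } n}
\]

A system with \(k_{\mathrm{eff}} < 1\) is subcritical and dies out, \(k_{\mathrm{eff}} = 1\) is critical and self-sustaining, and \(k_{\mathrm{eff}} > 1\) is supercritical and grows exponentially. In the rumor analogy, \(k_{\mathrm{eff}}\) becomes the average number of new spreaders each spreader activates, playing the same role the basic reproduction number plays in the disease models the authors are critiquing.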

Here’s how the researchers map rumors onto that process:

Firstly, the initial online rumors are compared to neutrons, uranium nuclei are compared to individual rumor receivers, and fission barriers are compared to individual active propagation thresholds; Secondly, the process of nuclear fission is analyzed, and the degree of energy accumulation is used to compare the social impact of online rumors.

The rumors are neutrons, shooting off of people (atoms), which, like different isotopes of uranium, have varying thresholds for activation but, upon reaching a sufficiently excited state, also become active propagators.

Image Credits: Zheng et al

This provides a few more levers and dials for modelers to manipulate when trying to figure out how a rumor will or did spread. For instance, how high-energy is the rumor? What’s the concentration of less-reactive users (U238) versus users ready to be activated by a single stray rumor (U235)? What’s the rate of decay for forward propagation (neutron or retweet), and is the heat (user activity) being captured somehow?

S is stable, E is excited, L is latent (i.e. primed for reaction), G is base (i.e. returns to stability).
Image Credits: Zheng et al

It’s a rich and interesting new way to think about how this kind of thing works, and although it sounds quite mechanical, it arguably assigns its people/atoms more agency than in a passive epidemiological model or one based on fluid mechanics. People may be atoms in this model, but they’re atoms with human qualities: How resistant is one to incoming rumors, how educated is one, how quickly does one return to a receptive state for new disinformation?

Most interestingly, the overall “heat” generated by the system can be made to represent impact on society in general. And this can act as a stand-in for telling not just whether a rumor propagated, but also whether that propagation had an effect; a fission system that is excited but never reaches a chain reaction state may be understood as a rumor that was successfully managed without being outright quashed.
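To make the mechanics concrete, here is a minimal agent-based sketch of how such a model might be simulated. To be clear, this is not the authors' implementation: every name and number in it (RUMOR_ENERGY, EMISSIONS, U235_FRACTION, the thresholds, and so on) is an illustrative assumption, mapping the S/E/L/G states above onto a toy chain-reaction loop.

```python
import random

# A toy reading of the fission analogy. All parameters are illustrative
# assumptions, not values from Zheng et al.'s paper.
RUMOR_ENERGY = 1.0    # "energy" delivered by one rumor exposure (one neutron)
DECAY = 0.5           # fraction of stored energy a latent user loses per step
EMISSIONS = 3         # rumor copies an excited user sends on (neutrons per fission)
U235_FRACTION = 0.2   # share of easily activated users (low fission barrier)

class User:
    def __init__(self):
        # Fission barrier: exposure a user must accumulate before spreading.
        self.threshold = 1.0 if random.random() < U235_FRACTION else 3.0
        self.energy = 0.0
        self.state = "S"  # S stable, E excited, L latent, G returned to stability

def simulate(n_users=10_000, seeds=10, steps=30):
    users = [User() for _ in range(n_users)]
    inbox = random.choices(range(n_users), k=seeds)  # initial stray rumors
    heat = 0.0  # accumulated "social impact" of the rumor
    for step in range(steps):
        next_inbox = []
        for i in inbox:
            u = users[i]
            if u.state == "G":           # already spread; returned to stability
                continue
            u.energy += RUMOR_ENERGY
            u.state = "L"                # latent: primed, not yet spreading
            if u.energy >= u.threshold:  # barrier crossed: the user "fissions"
                u.state = "E"
                heat += u.energy         # each activation adds social impact
                next_inbox += random.choices(range(n_users), k=EMISSIONS)
                u.state = "G"
        for u in users:                  # stored energy decays between steps
            if u.state == "L":
                u.energy *= 1 - DECAY
        print(f"step {step}: {len(next_inbox)} rumors in flight, heat = {heat:.0f}")
        if not next_inbox:               # subcritical: the rumor fizzled
            break
        inbox = next_inbox

if __name__ == "__main__":
    simulate()
```

With these toy numbers, each rumor copy activates roughly 0.6 new spreaders on average, so the run fizzles out after accumulating a little heat; raise EMISSIONS or U235_FRACTION until that figure crosses 1 and the run goes supercritical instead, which is exactly the reactor-versus-bomb distinction the paper leans on.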

Of course, the researchers’ recommendation that “the government and related media should monitor the social network in real-time and check the rumor information at the early stage of rumor development and make corresponding strategies” must be considered in the context of their being under the Chinese regulatory regime. That casts the research in a slightly different light: online rumors represented as weapons-grade uranium that need close government scrutiny!

Still, it’s an exciting (if you will) new way of thinking about how information moves, duplicates, and indeed explodes in this highly volatile era.

As unrest fueled by disinformation spreads, the UK may seek stronger power to regulate tech platforms

Keir Starmer

Image Credits: Toby Melville / POOL / AFP via Getty Images

The U.K. government has indicated it may seek stronger powers to regulate tech platforms following days of violent disorder across England and Northern Ireland fueled by the spread of online disinformation.

On Friday, Prime Minister Keir Starmer confirmed there will be a review of the Online Safety Act (OSA).

The legislation, which was passed by parliament in September 2023 after years of political wrangling, puts duties on platforms that carry user-to-user communications (such as social media platforms, messaging apps etc.) to remove illegal content and protect their users from other harms like hate speech — with penalties of up to 10% of global annual turnover for non-compliance.

“In relation to online and social media, the first thing I’d say is this is not a law-free zone, and I think that’s clear from the prosecutions and sentencing,” said Starmer, emphasizing that those who whip up hate online are already facing consequences: the Crown Prosecution Service reports that the first sentences for hate speech posts related to the violent disorder have been handed down.

But Starmer added: “I do agree that we’re going to have to look more broadly at social media after this disorder, but the focus at the moment has to be on dealing with the disorder and making sure that our communities are safe and secure.” 

The Guardian reported that confirmation of the review followed criticism of the OSA by the London mayor, Sadiq Khan — who called the legislation “not fit for purpose.”

Violent disturbances have wracked cities and towns across England and Northern Ireland after a knife attack killed three young girls in Southport on July 30.

False information about the perpetrator of the attack identified them as a Muslim asylum seeker who had arrived in the country on a small boat. That falsehood quickly spread online, including through social media posts amplified by far-right activists. Disinformation about the killer’s identity has been widely linked to the civil unrest rocking the country in recent days.

Also on Friday, a British woman was reported to have been arrested under the Public Order Act 1986 on suspicion of stirring up racial hatred by making false social media posts about the identity of the attacker.

Such arrests remain the government’s stated priority for its response to the civil unrest for now. But the wider question of what to do about tech platforms and other digital tools that are used to spread disinformation far and wide is unlikely to go away.

As we reported earlier, the OSA is not yet fully up and running because the regulator is still consulting on guidance. Some might therefore say a review of the legislation is premature before at least the middle of next year, when the law will have had a chance to work.

At the same time, the bill has faced criticism for being poorly drafted and failing to tackle the underlying business models of platforms that profit from driving engagement via outrage.

The previous Conservative government also made some major revisions in fall 2022 that specifically removed clauses focused on tackling “legal but harmful” speech (aka, the area where disinformation typically falls).

At the time, digital minister Michelle Donelan said the government was responding to concerns about the bill’s impact on free speech. However, another former minister, Damian Collins, disputed the government’s framing, suggesting the removed provisions had been intended only to apply transparency measures ensuring platforms enforce their own terms and conditions, such as in situations where content risks inciting violence or hatred.

Mainstream social media platforms, including Facebook and X (formerly Twitter), have terms and conditions that typically prohibit such content, but it’s not always obvious how rigorously they’re enforcing these standards. (Just one immediate example: on August 6, a U.K. man was arrested for stirring up racial hatred by posting messages on Facebook about attacking a hotel where asylum seekers were housed.)

Platforms have long applied a playbook of plausible deniability — by saying they took down content once it was reported to them. But a law that regulates the resources and processes they are expected to have in place could force them to be more proactive about stopping the free spread of toxic disinformation.

One test case is already up and running against X in the European Union, where enforcers of the bloc’s Digital Services Act have been investigating the platform’s approach to moderating disinformation since December.

On Thursday, the EU told Reuters that X’s handling of harmful content related to the civic disturbances in the U.K. may be taken into account in its own investigation of the platform as “what happens in the U.K. is visible here.” “If there are examples of hate speech or incitements to violence, they could be taken into account as part of our proceedings against X,” the Commission’s spokesperson added.

Once the OSA is fully up and running in the U.K. by next spring, the law may exert similar pressure on larger platforms’ approach to dealing with disinformation, according to the Department for Science, Innovation and Technology. A Department spokesperson told us that the biggest platforms, which face the most requirements under the Act, will be expected to consistently enforce their own terms of service, including where these prohibit the spread of misinformation.

Meta axed CrowdTangle, a tool for tracking disinformation. Critics claim its replacement has just '1% of the features'

The apps Instagram, Facebook and WhatsApp can be seen on the display of a smartphone in front of the logo of the Meta internet company.

Image Credits: Jens Büttner/picture alliance / Getty Images

Journalists, researchers and politicians are mourning Meta’s shutdown of CrowdTangle, which they used to track the spread of disinformation on Facebook and Instagram.

In CrowdTangle’s place, Meta is offering its Content Library — but is limiting usage to people from “qualified academic or nonprofit institutions who are pursuing scientific or public interest research.” Many researchers and academics, and most journalists, are barred from accessing the tool. 

Those who have been using the Meta Content Library say it is less transparent and accessible, has fewer features, and delivers a worse user experience.

Many people in the community have written open letters to Meta in protest. They question why the company axed a useful tool for combating misinformation three months ahead of the most contentious U.S. election in history — an election that is already threatened by the proliferation of AI deepfakes and chatbot misinformation, some of which has come from Meta’s own chatbot — and replaced it with a tool that academics say is simply not as effective.

In short, if it ain’t broke, why fix it?

Meta hasn’t provided many answers. At an MIT Technology Review conference in May, Meta’s president of global affairs Nick Clegg was asked why the company wouldn’t wait to shut down CrowdTangle until after the election. He called CrowdTangle a “degrading tool” that doesn’t provide complete and accurate insights into what’s happening on Facebook.

“It only measures a narrow cake slice of a cake slice, which is particular forms of engagement,” said Clegg at the time. “It literally doesn’t tell you what people are seeing online.”

His rhetoric paints CrowdTangle as an almost recklessly bad tool for Meta to allow to exist. That’s in stark contrast to Meta’s promotion of the platform in 2020 as a resource provided to Secretaries of State and election boards across the country to help them “quickly identify misinformation, voter interference and suppression” and create custom “public Live Displays for each state.”

Today, Meta’s hard line is that the Content Library provides more detailed insights about what people actually see and experience on Facebook and Instagram. A spokesperson from Meta told TechCrunch the new tools offer a more comprehensive data gathering experience, which now includes multimedia from Reels and page view counts. The spokesperson said MCL will soon include Threads content, as well, and pointed out that CrowdTangle’s data was weighted towards accounts with very large followings and engagement.

Some researchers who were accustomed to the old tool disagree that CrowdTangle was inadequate. They also point out that the accounts with the most engagement are exactly the ones they want data on, as those are clearly the most influential.

“[MCL has] only 10% of the usability of CrowdTangle,” Cameron Hickey, CEO of the National Conference on Citizenship, told TechCrunch. He pointed out that CrowdTangle was “a sophisticated quasi-commercial product” with its own business before Facebook acquired it in 2016. Under the Facebook umbrella, the tool only improved as the team onboarded feature recommendations from a large pool of users. Hickey helped author a report that compares the features on the two platforms, co-published by Proof News and the Tow Center for Digital Journalism at Columbia’s Journalism School.

Hickey said Meta’s content library offers some of the same data from CrowdTangle, but ultimately only “1% of the features.”

“If you wanted to look at the number of followers that CNN’s Facebook page has had over time, that’s something you can’t do in the Meta Content Library, but you can do in CrowdTangle,” said Hickey. “And indicators like that are often very useful for understanding how the prevalence or prominence of an actor on social media changes over time, and connecting those to other things, like, did they make a viral post and then suddenly their total number of followers doubled?”

Research from Proof News, the Tow Center for Digital Journalism and Algorithmic Transparency Institute detailing how Meta Content Library’s features stack up to CrowdTangle, which Meta shut down Wednesday.
Image Credits: TechCrunch | Proof News, Tow Center for Journalism, Algorithmic Transparency Institute

Some of the features that exist across both platforms — like tracking how often political parties post about certain topics and seeing the relative engagement — are simply more tedious to use on MCL, says Hickey, which he attributes to poor user experience design.

Crucially, even though people might be able to access data — say, about posts that mentioned immigration — what they can do with that data is considerably more limited. 

“You can’t build out the kinds of interactive charts that were available with CrowdTangle,” said Hickey. “You can’t build out public dashboards.”

(A spokesperson for Meta told TechCrunch that on August 14, the day CrowdTangle died, the company launched a configurable real-time dashboard feature to let users quickly display post feeds and trend charts based on certain keywords and producers.)

“And most importantly,” Hickey continued, “you can’t download all of the posts.”

Users can only download posts from accounts with more than 25,000 followers, but many politicians fall well short of that count.

“This leaves a lot of researchers with very few options, and one of the only remaining ones is one that has complications, which is scraping the data directly,” said Hickey. 

Another main problem with MCL is that Meta is not granting access to watchdogs that previously used CrowdTangle to track misinformation’s spread. 

Media Matters, a nonprofit watchdog journalism organization, told TechCrunch it doesn’t have access to MCL today. In the past, the organization used CrowdTangle to show that contrary to right-wing media and Republican talking points, Facebook was not actually censoring conservative information. 

In fact, right-leaning pages got considerably more engagement on their content compared to non-aligned or left-leaning pages, research director Kayla Gogarty told TechCrunch.

“CrowdTangle has given us the ability to see the sorts of content that is widely engaged with on the platform,” Gogarty said. “Algorithms are usually a black box, but at least having some of that engagement data could help us learn a little more about the algorithms.”

Gogarty pointed out that ahead of the January 6 attack on the U.S. Capitol, researchers and reporters used the tool to sound the alarm about online organizing and the potential for violence to delegitimize the election.

“What this ultimately is going to mean is just that fewer civil society groups are able to monitor and track what’s happening on Facebook and Instagram during this election year,” Brandi Geurkink, executive director of the Coalition for Independent Technology Research, told TechCrunch.

Hickey contrasted Meta, which did spend time and probably millions of dollars to create the Content Library, with Elon Musk’s actions at Twitter (now X). When Musk bought Twitter, he immediately limited access to the Twitter API, which allows developers, journalists and researchers to access and analyze data from the platform in a similar fashion to CrowdTangle. Now, the price tag on the cheapest enterprise X API package is $42,000 a month, and it provides access to only 50 million posts.

This article has been updated with more information from Meta.
