What does 'open source AI' mean, anyway?

Open Source Initiative (OSI) executive director Stefano Maffulli

Image Credits: Open Source Initiative (OSI) // Stefano Maffulli, OSI Executive Director

The struggle between open source and proprietary software is well understood. But the tensions permeating software circles for decades have shuffled into the artificial intelligence space, in part because no one can agree on what “open source” really means in the context of AI.

The New York Times recently published a gushing appraisal of Meta CEO Mark Zuckerberg, noting how his “open source AI” embrace had made him popular once more in Silicon Valley. By most estimations, however, Meta’s Llama-branded large language models aren’t really open source, which highlights the crux of the debate.

It’s this challenge that the Open Source Initiative (OSI) is trying to address, led by executive director Stefano Maffulli (pictured above), through a series of conferences, workshops, panels, webinars, reports and more, starting some three years ago.

AI ain’t software code

Image Credits: Westend61 via Getty

The OSI has been the steward of the Open Source Definition (OSD) for more than a quarter of a century, setting out how the term “open source” can, or should, be applied to software. A license that meets this definition can legitimately be deemed “open source,” though it recognizes a spectrum of licenses ranging from extremely permissive to not quite so permissive.

But transposing legacy licensing and naming conventions from software onto AI is problematic. Joseph Jacks, open source evangelist and founder of VC firm OSS Capital, goes as far as to say that there is “no such thing as open-source AI,” noting that “open source was invented explicitly for software source code.” Further, “neural network weights” (NNWs) — a term used in the world of artificial intelligence to describe the parameters or coefficients through which the network learns during the training process — aren’t in any meaningful way comparable to software.

“Neural net weights are not software source code; they are unreadable by humans, [and they are not] debuggable,” Jacks notes. “Furthermore, the fundamental rights of open source also don’t translate over to NNWs in any congruent manner.”

These inconsistencies last year led Jacks and OSS Capital colleague Heather Meeker to come up with their own definition of sorts, around the concept of “open weights.” And Maffulli, for what it’s worth, agrees with them. “The point is correct,” he told TechCrunch. “One of the initial debates we had was whether to call it open source AI at all, but everyone was already using the term.”

Meta analysis

Llama illustration
Image Credits: Larysa Amosova via Getty

Founded in 1998, the OSI is a not-for-profit public benefit corporation that works on a myriad of open source-related activities around advocacy, education and its core raison d’être: the Open Source Definition. Today, the organization relies on sponsorships for funding, with sponsors including Amazon, Google, Microsoft, Cisco, Intel, Salesforce and Meta.

Meta’s involvement with the OSI is particularly notable right now as it pertains to the notion of “open source AI.” Despite Meta hanging its AI hat on the open-source peg, the company has notable restrictions in place regarding how its Llama models can be used: Sure, they can be used gratis for research and commercial use cases, but app developers with more than 700 million monthly users must request a special license from Meta, which it will grant purely at its own discretion.

Meta’s language around its LLMs is somewhat malleable. While the company did call its Llama 2 model open source, with the arrival of Llama 3 in April, it retreated somewhat from the terminology, using phrases such as “openly available” and “openly accessible” instead. But in some places, it still refers to the model as “open source.”

“Everyone else that is involved in the conversation is perfectly agreeing that Llama itself cannot be considered open source,” Maffulli said. “People I’ve spoken with who work at Meta, they know that it’s a little bit of a stretch.”

On top of that, some might argue that there’s a conflict of interest here: A company that has shown a desire to piggyback off open source branding is also helping to bankroll the stewards of “the definition.”

This is one of the reasons why the OSI is trying to diversify its funding, recently securing a grant from the Sloan Foundation, which is helping to fund its multi-stakeholder global push to reach the Open Source AI Definition. TechCrunch can reveal this grant amounts to around $250,000, and Maffulli is hopeful that this can alter the optics around its reliance on corporate funding.

“That’s one of the things that the Sloan grant makes even more clear: We could say goodbye to Meta’s money anytime,” Maffulli said. “We could do that even before this Sloan Grant, because I know that we’re going to be getting donations from others. And Meta knows that very well. They’re not interfering with any of this [process], neither is Microsoft, or GitHub or Amazon or Google — they absolutely know that they cannot interfere, because the structure of the organization doesn’t allow that.”

Working definition of open source AI

Concept illustration depicting finding a definition
Image Credits: Aleksei Morozov / Getty Images

The current Open Source AI Definition draft sits at version 0.0.8, comprising three core parts: the “preamble,” which lays out the document’s remit; the Open Source AI Definition itself; and a checklist that runs through the components required for an open source-compliant AI system.

As per the current draft, an Open Source AI system should grant the freedoms to use the system for any purpose without seeking permission; to study how the system works and inspect its components; and to modify and share the system for any purpose.

But one of the biggest challenges has been around data — that is, can an AI system be classified as “open source” if the company hasn’t made the training dataset available for others to poke at? According to Maffulli, it’s more important to know where the data came from, and how a developer labeled, de-duplicated and filtered it. Also important is access to the code that was used to assemble the dataset from its various sources.

“It’s much better to know that information than to have the plain dataset without the rest of it,” Maffulli said.

While having access to the full dataset would be nice (the OSI makes this an “optional” component in its current definition), Maffulli says that it’s not possible or practical in many cases. This might be because the dataset contains confidential or copyrighted information that the developer doesn’t have permission to redistribute. Moreover, machine learning models can be trained without the data itself ever being shared with the system’s developers, using techniques such as federated learning, differential privacy and homomorphic encryption.
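To make one of those techniques concrete, here is a minimal, illustrative sketch of differentially private training in the style of DP-SGD: each example’s gradient is clipped to a fixed norm and calibrated Gaussian noise is added before the parameter update, bounding what the trained weights can reveal about any single training record. The function name and parameters are hypothetical, not taken from any library, and a real implementation would also track a formal privacy budget.

```python
import math
import random

def dp_sgd_step(weights, per_example_grads, clip_norm=1.0, noise_scale=0.5, lr=0.1):
    """One differentially private gradient step (illustrative only)."""
    # Clip each per-example gradient to at most clip_norm (L2).
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append([x * scale for x in g])

    # Average the clipped gradients across the batch.
    n = len(clipped)
    avg = [sum(g[i] for g in clipped) / n for i in range(len(weights))]

    # Add Gaussian noise calibrated to the clipping bound, then update.
    noisy = [a + random.gauss(0.0, noise_scale * clip_norm / n) for a in avg]
    return [w - lr * g for w, g in zip(weights, noisy)]
```

With `noise_scale=0` the step reduces to ordinary clipped SGD, which makes the mechanics easy to verify; in practice the noise is what provides the privacy guarantee.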

And this perfectly highlights the fundamental differences between “open source software” and “open source AI”: The intentions might be similar, but they are not like-for-like comparable, and this disparity is what the OSI is trying to capture in its definition.

In software, source code and binary code are two views of the same artifact: They reflect the same program in different forms. But training datasets and the subsequent trained models are distinct things: You can take that same dataset, and you won’t necessarily be able to re-create the same model consistently.

“There is a variety of statistical and random logic that happens during the training that means it cannot make it replicable in the same way as software,” Maffulli added.

So an open source AI system should be easy to replicate, with clear instructions. This is where the checklist facet of the Open Source AI Definition comes into play; it’s based on a recently published academic paper called “The Model Openness Framework: Promoting Completeness and Openness for Reproducibility, Transparency, and Usability in Artificial Intelligence.”

This paper proposes the Model Openness Framework (MOF), a classification system that rates machine learning models “based on their completeness and openness.” The MOF demands that specific components of the AI model development be “included and released under appropriate open licenses,” including training methodologies and details around the model parameters.

Stable condition

Stefano Maffulli presenting at the Digital Public Goods Alliance (DPGA) members summit in Addis Ababa
Stefano Maffulli presenting at the Digital Public Goods Alliance (DPGA) members summit in Addis Ababa.
Image Credits: OSI

The OSI is calling the official launch of the definition the “stable version,” much like a company will do with an application that has undergone extensive testing and debugging ahead of prime time. The OSI is purposefully not calling it the “final release” because parts of it will likely evolve.

“We can’t really expect this definition to last for 26 years like the Open Source Definition,” Maffulli said. “I don’t expect the top part of the definition — such as ‘what is an AI system?’ — to change much. But the parts that we refer to in the checklist, those lists of components depend on technology. Tomorrow, who knows what the technology will look like.”

The stable Open Source AI Definition is expected to be rubber-stamped by the board at the All Things Open conference at the tail end of October, with the OSI embarking on a global roadshow spanning five continents in the intervening months, seeking more “diverse input” on how “open source AI” will be defined moving forward. But any final changes are likely to be little more than “small tweaks” here and there.

“This is the final stretch,” Maffulli said. “We have reached a feature complete version of the definition; we have all the elements that we need. Now we have a checklist, so we’re checking that there are no surprises in there; there are no systems that should be included or excluded.”

The year of 'does this serve us' and the rejection of reification

human hand raised rejecting robotic hand (concepts of human rejecting AI)

Image Credits: Getty Images

2024 has arrived, and with it a renewed interest in artificial intelligence, which seems likely to enjoy at least middling hype throughout the year. Of course, it’s being cheerled by techno-zealot billionaires and the flunkies bunked within their cozy islands of influence, primarily in Silicon Valley — and derided by fabulists who stand to gain from painting the still-fictional artificial general intelligence (AGI) as humanity’s ur-bogeyman for the ages.

Both of these positions are exaggerated and untenable, e/acc versus decel arguments be damned. Speed without caution only ever results in compounding problems, which proponents often suggest are best solved by pouring on more speed, possibly in a different direction, to arrive at some idealized future state where the problems of the past are obviated by the super-powerful Next Big Thing. Calls to abandon or regress entire areas of innovation, meanwhile, ignore the complexity of a globalized world where cats generally cannot be put back into boxes, among many other issues with that kind of approach.

The long, thrilling and tumultuous history of technology development, particularly in the age of the personal computer and the internet, has shown us that in our fervor for something new, we often neglect to stop and ask, “But is the new thing also something we want or need?” We never stopped to ask that question with things like Facebook, and they ended up becoming an inextricable part of the fabric of society, an eminently manipulable but likewise essential part of crafting and sharing in community dialog.

Here’s the main takeaway from the rise of social media that we should carry with us into the advent of the age of AI: Just because something is easier or more convenient doesn’t make it preferable — or even desirable.

LLM-based so-called “AI” has already infiltrated our lives in ways that will likely prove impossible to wind back, even if we wanted to do such a thing, but that doesn’t mean we have to indulge in the escalation some see as inevitable, wherein we relentlessly rip out human equivalents of some of the gigs that AI is already good at, or shows promise in, to make way for the necessary “forward march of progress.”

The oft-repeated counter to fears about increased automation, or about handing menial work over to AI agents, is that it will always leave people more time to focus on “quality” work — as if dropping the couple of hours per day spent filling in Excel spreadsheets will finally free the office admin who was doing that work to compose the great symphony they’ve had locked away within them, or give the entry-level graphic designer who had been color-correcting photos the liberty to discover a lasting cure for COVID.

In the end, automating menial work might look good on paper, and it might also serve the top executives and deep-pocketed equity holders behind an organization through improved efficiency and decreased costs. But it doesn’t serve the people who might actually enjoy doing that work, or who at least don’t mind it as part of a work life balanced between mentally taxing, rewarding creative and strategic exercises and day-to-day low-intensity tasks. And the long-term consequence of having fewer people doing this kind of work is that you’ll have fewer people overall who are able to participate meaningfully in the economy — which is ultimately bad even for the rarefied few sitting at the top of the pyramid who reap the immediate rewards of AI’s efficiency gains.

Utopian technologist zeal always fails to recognize that the bulk of humanity (techno-zealots included) is sometimes lazy, messy, disorganized, inefficient, error-prone and mostly satisfied with the achievement of comfort and the avoidance of boredom or harm. That might not sound all that aspirational to some, but I say it with celebratory fervor, since for me all those human qualities are just as laudable as less attainable ones like drive, ambition, wealth and success.

I’m not arguing for halting or even slowing the development of promising new technology, including LLM-based generative AI. And to be clear, where the consequences are clearly beneficial — e.g., medical image diagnosis tech that far exceeds the accuracy of trained human reviewers, or self-driving car technology that can actually drastically reduce the incidence of car accidents and loss of human life — there is no cogent argument for turning away from such tech.

But in almost all cases where the benefits are painted as efficiency gains for tasks that are far from life-or-death, I’d argue it’s worth a long, hard look at whether we need to bother in the first place. Yes, human time is valuable, and winning some of it back is great, but assuming that’s always a net positive ignores the complicated nature of being a human being, and how we measure and feel our worth. Saving someone so much time that they no longer feel like they’re contributing meaningfully to society isn’t a boon, no matter how eloquently you think you can argue they should use that time to become a violin virtuoso or learn Japanese.
