The Cosmos Institute, whose founding fellows include Anthropic co-founder Jack Clark, launches grant programs and an AI lab

Image Credits: Getty Images

The Cosmos Institute, a nonprofit whose founding fellows include Anthropic co-founder Jack Clark and former Defense Department technologist Brendan McCord, has announced a venture program and research initiatives to — in the organization’s words — “cultivate a new generation of technologists and entrepreneurs equipped with deep philosophical thinking to navigate the uncharted territory of our AI age.”

In a blog post, McCord, Cosmos’ chair, said that the Institute will found an AI lab at the University of Oxford called the Human-Centered AI Lab, or “HAI Lab” for short. It’ll be led by Oxford philosopher Philipp Koralus — another of the Institute’s founding fellows — and will aim to “translate the philosophical principles of human flourishing into open source software and AI systems.”

What does “translating the philosophical principles of human flourishing” entail, exactly? Unclear. But the crux is to foster AI tech that respects human dignity while avoiding harmful disruption (e.g., automation leading to job loss). A high and nebulous bar, to be sure, but one the HAI Lab is going to strive for.

The Institute is also spinning up a fellowship — the Cosmos Fellowship — with a cohort of four fellows to start. Working at the HAI Lab or other partner institutions for between “a term and a year,” fellows will collaborate with Cosmos mentors and pursue independent projects to “[explore the] intersection of AI expertise and deep philosophical insight,” McCord said.

There’s an investment component to the Institute’s plans, and it’s launching in the form of a venture org: Cosmos Ventures. Led by former DeepMind product lead Jason Zhao, ex-Stripe head of corporate strategy Alex Komoroske, Darren Zhu, and Zoe Weinberg, Cosmos Ventures will support “provocative new prototypes, essays and creative projects that explore fundamental questions around the philosophy of technology,” McCord said.

McCord characterizes Cosmos Ventures as “low overhead,” modeled after Institute fellow Tyler Cowen’s Emergent Ventures. Investments will range between $1,000 and $10,000 per project, and projects — which can be new or existing — must produce a “major deliverable” within three months.

The first group has already been funded, McCord said.

The Cosmos Institute isn’t the first to attempt to advance a more ethical, humanistic vision of AI. OpenAI, founded with the mission of delivering the benefits of advanced AI to all humanity, has dismantled entire safety teams. Clark’s own Anthropic once positioned itself as a safer, more ethical vendor, but in recent months has pushed back on AI regulations and aggressively scraped data without permission.

Perhaps the Cosmos Institute will fare better with its embrace of what McCord calls “accelerationism”: the idea that the future isn’t predetermined and therefore humanity is responsible for what it holds.

“We must build AI that encourages inquiry over complacency and promotes active engagement over passive dependence, especially in education, medicine, the public square and other vital domains,” McCord writes. “We champion AI that decentralizes power and enables bottom-up solutions, allowing individuals and communities to co-create a richer, more diverse society.”

They’re admirable positions. This reporter only hopes that money, influence and power don’t make the temptation to abandon them irresistible.

Menlo Ventures and Anthropic team up on a $100M AI fund

Anthropic Claude 3.5 logo

Image Credits: Anthropic

Silicon Valley VC firm Menlo Ventures, one of the biggest investors in AI startup Anthropic, said Wednesday that the two are teaming up on a $100 million initiative, dubbed “the Anthology Fund,” to invest in pre-seed, seed and Series A AI companies.

Menlo recently climbed to the top of Anthropic’s list of backers after closing a yet-to-be-announced funding round of more than $750 million in the foundation model company, according to a source familiar with the matter.

The capital for the Anthology Fund comes after Menlo closed its latest vehicle, a $1.35 billion fund raised last November, Tim Tully, a partner at Menlo Ventures, told TechCrunch. (Menlo PR told us after publication that the exact source of funds for this is still in discussions with LPs.)

“We’re one of the biggest investors in Anthropic and huge fans of what they’re doing,” Tully said. “We thought this was an opportunity for us to do something together, where we can see the ecosystem and find great companies that are building on Anthropic or AI more broadly.”

The venture firm is essentially leveraging its investment in, and close relationship with, one of the world’s most prominent foundation model companies to identify interesting AI-first startups for future investments.

The Anthology Fund will write checks of at least $100,000 to startups and provide them with $25,000 worth of credits to use Anthropic’s models.

The fund is accepting applications from startups through an online form. Menlo will use the firm’s proprietary machine learning tool to score and rank applications, Tully said, adding that the diligence process on these companies is expected to be more “lightweight” than for a typical investment by the firm.

Menlo will back any subsequent rounds raised by promising Anthology Fund companies, Tully said.

Anthropic publishes the 'system prompts' that make Claude tick

Anthropic Claude 3.5 logo

Image Credits: Anthropic

Generative AI models aren’t actually humanlike. They have no intelligence or personality — they’re simply statistical systems predicting the likeliest next words in a sentence. But like interns at a tyrannical workplace, they do follow instructions without complaint — including initial “system prompts” that prime the models with their basic qualities and what they should and shouldn’t do.

Every generative AI vendor, from OpenAI to Anthropic, uses system prompts to prevent (or at least try to prevent) models from behaving badly, and to steer the general tone and sentiment of the models’ replies. For instance, a prompt might tell a model it should be polite but never apologetic, or to be honest about the fact that it can’t know everything.
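
To make that concrete, here is a minimal sketch of how a developer supplies a system prompt when calling Claude through Anthropic’s Messages API via its Python SDK. The system prompt text here is illustrative, not Anthropic’s actual production prompt, and the model name is just an example.

```python
import anthropic

# The client reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # example model name
    max_tokens=200,
    # The system prompt primes the model's tone and ground rules before any
    # user turn; this text is illustrative, not Anthropic's production prompt.
    system=(
        "Be polite but never apologetic, and be honest about the fact "
        "that you can't know everything."
    ),
    messages=[
        {"role": "user", "content": "What will the weather be in Paris tomorrow?"},
    ],
)

print(message.content[0].text)
```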

But vendors usually keep system prompts close to the chest — presumably for competitive reasons, but also perhaps because knowing the system prompt may suggest ways to circumvent it. The only way to expose GPT-4o’s system prompt, for example, is through a prompt injection attack. And even then, the system’s output can’t be trusted completely.

However, Anthropic, in its continued effort to paint itself as a more ethical, transparent AI vendor, has published the system prompts for its latest models (Claude 3 Opus, Claude 3.5 Sonnet and Claude 3 Haiku) in the Claude iOS and Android apps and on the web.

Alex Albert, head of Anthropic’s developer relations, said in a post on X that Anthropic plans to make this sort of disclosure a regular thing as it updates and fine-tunes its system prompts.

The latest prompts, dated July 12, outline very clearly what the Claude models can’t do — e.g. “Claude cannot open URLs, links, or videos.” Facial recognition is a big no-no; the system prompt for Claude Opus tells the model to “always respond as if it is completely face blind” and to “avoid identifying or naming any humans in [images].”

But the prompts also describe certain personality traits and characteristics — traits and characteristics that Anthropic would have the Claude models exemplify.

The prompt for Claude 3 Opus, for instance, says that Claude is to appear as if it “[is] very smart and intellectually curious,” and “enjoys hearing what humans think on an issue and engaging in discussion on a wide variety of topics.” It also instructs Claude to treat controversial topics with impartiality and objectivity, providing “careful thoughts” and “clear information” — and never to begin responses with the words “certainly” or “absolutely.”

It’s all a bit strange to this human, these system prompts, which are written the way an actor in a stage play might write a character analysis sheet. The prompt for Opus ends with “Claude is now being connected with a human,” which gives the impression that Claude is some sort of consciousness on the other end of the screen whose only purpose is to fulfill the whims of its human conversation partners.

But of course that’s an illusion. If the prompts for Claude tell us anything, it’s that without human guidance and hand-holding, these models are frighteningly blank slates.

With these new system prompt changelogs — the first of their kind from a major AI vendor — Anthropic is exerting pressure on competitors to publish the same. We’ll have to see if the gambit works.

Anthropic takes steps to prevent election misinformation

Anthropic Co-Founder & CEO Dario Amodei speaks onstage during TechCrunch Disrupt 2023 at Moscone Center.

Image Credits: Kimberly White/Getty Images for TechCrunch

Ahead of the 2024 U.S. presidential election, Anthropic, the well-funded AI startup, is testing a technology to detect when users of its GenAI chatbot ask about political topics and redirect those users to “authoritative” sources of voting information.

Called Prompt Shield, the technology, which relies on a combination of AI detection models and rules, shows a pop-up if a U.S.-based user of Claude, Anthropic’s chatbot, asks for voting information. The pop-up offers to redirect the user to TurboVote, a resource from the nonpartisan organization Democracy Works, where they can find up-to-date, accurate voting information.
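
Anthropic hasn’t published how Prompt Shield is implemented, but the “detection models plus rules” pattern it describes is straightforward to sketch. The hypothetical Python snippet below gates a prompt on simple regex rules and a placeholder classifier, returning a TurboVote notice instead of a model answer when the gate trips; every function and constant name here is invented for illustration.

```python
import re

# Hypothetical sketch of a "rules + detection model" gate like the one the
# article describes; this is not Anthropic's actual implementation.
VOTING_PATTERNS = [
    r"\bhow\s+(do|can)\s+i\s+vote\b",
    r"\bpolling\s+(place|location)\b",
    r"\bvoter\s+registration\b",
    r"\bregister\s+to\s+vote\b",
]

TURBOVOTE_NOTICE = (
    "For up-to-date U.S. voting information, see TurboVote from Democracy Works: "
    "https://turbovote.org"
)

def rules_flag(prompt: str) -> bool:
    """Cheap first pass: regex rules for obvious voting-logistics questions."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in VOTING_PATTERNS)

def classifier_flag(prompt: str) -> bool:
    """Stand-in for a learned detection model scoring election-related intent."""
    # A real system would call a trained classifier; this placeholder reuses
    # the rules so the sketch runs end to end.
    return rules_flag(prompt)

def maybe_redirect(prompt: str, user_in_us: bool) -> str | None:
    """Return a redirect notice instead of a model answer when the gate trips."""
    if user_in_us and (rules_flag(prompt) or classifier_flag(prompt)):
        return TURBOVOTE_NOTICE
    return None

print(maybe_redirect("How do I vote in the upcoming election?", user_in_us=True))
```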

Anthropic says that Prompt Shield was necessitated by Claude’s shortcomings in the area of politics- and election-related information. Claude isn’t trained frequently enough to provide real-time information about specific elections, Anthropic acknowledges, and so is prone to hallucinating — i.e. inventing facts — about those elections.

“We’ve had ‘prompt shield’ in place since we launched Claude — it flags a number of different types of harms, based on our acceptable user policy,” a spokesperson told TechCrunch via email. “We’ll be launching our election-specific prompt shield intervention in the coming weeks and we intend to monitor use and limitations … We’ve spoken to a variety of stakeholders including policymakers, other companies, civil society and nongovernmental agencies and election-specific consultants [in developing this].”

It’s seemingly a limited test at the moment. Claude didn’t present the pop-up when I asked it about how to vote in the upcoming election, instead spitting out a generic voting guide. Anthropic claims that it’s fine-tuning Prompt Shield as it prepares to expand it to more users.

Anthropic, which prohibits the use of its tools in political campaigning and lobbying, is the latest GenAI vendor to implement policies and technologies to attempt to prevent election interference.

The timing’s no coincidence. This year, more voters than ever in history will head to the polls globally, as at least 64 countries representing a combined population of about 49% of the people in the world are set to hold national elections.

In January, OpenAI said that it would ban people from using ChatGPT, its viral AI-powered chatbot, to create bots that impersonate real candidates or governments, misrepresent how voting works or discourage people from voting. Like Anthropic, OpenAI currently doesn’t allow users to build apps using its tools for the purposes of political campaigning or lobbying — a policy which the company reiterated last month.

In a technical approach similar to Prompt Shield, OpenAI is also employing detection systems to steer ChatGPT users who ask logistical questions about voting to a nonpartisan website, CanIVote.org, maintained by the National Association of Secretaries of State.

In the U.S., Congress has yet to pass legislation seeking to regulate the AI industry’s role in politics, despite some bipartisan support. Meanwhile, more than a third of U.S. states have passed or introduced bills to address deepfakes in political campaigns.

In lieu of legislation, some platforms — under pressure from watchdogs and regulators — are taking steps to stop GenAI from being abused to mislead or manipulate voters.

Google said last September that it would require political ads using GenAI on YouTube and its other platforms, such as Google Search, to be accompanied by a prominent disclosure if the imagery or sounds were synthetically altered. Meta has also barred political campaigns from using GenAI tools — including its own — in advertising across its properties.

Stainless is helping OpenAI, Anthropic and others build SDKs for their APIs

Female software developers discuss over the computer while sitting at a desk in the workplace. Creative businesswomen discuss the new coding program in the office.

Image Credits: Luis Alvarez / Getty Images

Besides a focus on generative AI, what do AI startups like OpenAI, Anthropic and Together AI have in common? They use Stainless, a platform created by ex-Stripe staffer Alex Rattray, to generate SDKs for their APIs.

Rattray, who studied economics at the University of Pennsylvania, has been building things for as long as he can remember, from an underground newspaper in high school to a bike-share program in college. Rattray picked up programming on the side while at UPenn, which led to a job at Stripe as an engineer on the developer platform team.

At Stripe, Rattray helped to revamp API documentation and launch the system that powers Stripe’s API client SDK. It was while working on those projects that Rattray observed there wasn’t an easy way for companies, including Stripe, to build SDKs for their APIs at scale.

“Handwriting the SDKs couldn’t scale,” he told TechCrunch. “Today, every API designer has to settle a million and one ‘bikeshed’ questions all over again, and painstakingly enforce consistency around these decisions across their API.”

Now, you might be wondering: why would a company need an SDK if it offers an API? APIs are simply protocols, enabling software components to communicate with each other and transfer data. SDKs, on the other hand, are ready-made libraries of language-specific tools that plug into those APIs. Without an SDK to accompany an API, users are forced to read the API docs and build everything themselves, which isn’t the best experience.
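
As a quick illustration of the difference, the sketch below makes the same request to Anthropic’s API twice: first by hand-rolling the HTTP call from the docs, then through the published Python SDK (Anthropic is one of the customers the article names). Header values and the model name are examples current as of this writing and may change.

```python
import os

import requests   # raw HTTP: you hand-roll headers, payload and parsing yourself
import anthropic  # SDK: typed methods wrap the same endpoint

prompt = "In one sentence, what's the difference between an API and an SDK?"

# Without an SDK: read the API docs and construct the request by hand.
resp = requests.post(
    "https://api.anthropic.com/v1/messages",
    headers={
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-3-5-sonnet-20240620",
        "max_tokens": 100,
        "messages": [{"role": "user", "content": prompt}],
    },
)
print(resp.json()["content"][0]["text"])

# With the SDK: the same request as a typed, documented method call.
client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=100,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```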

Rattray’s solution is Stainless, which takes in an API spec and generates SDKs in a range of programming languages including Python, TypeScript, Kotlin, Go and Java. As APIs evolve and change, Stainless’ platform pushes those updates with options for versioning and publishing changelogs.

“API companies today have a team of several people building libraries in each new language to connect to their API,” Rattray said. “These libraries inevitably become inconsistent, fall out of date and require constant changes from specialist engineers. Stainless fixes that problem by generating them via code.”

Stainless isn’t the only API-to-SDK generator out there. There’s LibLab and Speakeasy, to name a couple, plus longstanding open source projects such as the OpenAPI Generator.

Stainless, however, delivers more “polish” than most others, Rattray said, thanks partly to its use of generative AI.

“Stainless uses generative AI to produce an initial ‘Stainless config’ for customers, which is then up to them to fine-tune to their API,” he explained. “This is particularly valuable for AI companies, whose huge user bases include many novice developers trying to integrate with complex features like chat streaming and tools.”

Perhaps that’s what attracted customers like OpenAI, Anthropic and Together AI, along with Lithic, LangChain, Orb, Modern Treasury and Cloudflare. Stainless has “dozens” of paying clients in its beta, Rattray said, and some of the SDKs it’s generated, including OpenAI’s Python SDK, are getting millions of downloads per week.

“If your company wants to be a platform, your API is the bedrock of that,” he said. “Great SDKs for your API drive faster integration, broader feature adoption, quicker upgrades and trust in your engineering quality.”

Most customers are paying for Stainless’ enterprise tier, which comes with additional white-glove services and AI-specific functionality. Publishing a single SDK with Stainless is free. But companies have to fork over between $250 per month and $30,000 per year for multiple SDKs across multiple programming languages.

Rattray bootstrapped Stainless “with revenue from day one,” he said, adding that the company could be profitable as soon as this year; annual recurring revenue is hovering around $1 million. But rather than prioritize profitability, Rattray opted to take on outside investment to build new product lines.

Stainless recently closed a $3.5 million seed round with participation from Sequoia and The General Partnership.

“Across the tech ecosystem, Stainless stands out as a beacon that elevates the developer experience, rivaling the high standard once set by Stripe,” said Anthony Kline, partner at The General Partnership. “As APIs continue to be the core building blocks of integrating services like LLMs into applications, Alex’s first-hand experience pioneering Stripe’s API codegen system uniquely positions him to craft Stainless into the quintessential platform for seamless, high-quality API interactions.”

Stainless has a 10-person team based in New York. Rattray expects headcount to grow to 15 or 20 by the end of the year.