New U.S. Commerce Department report endorses 'open' AI models

Capitol building

Image Credits: Stefani Reynolds/Bloomberg / Getty Images

The U.S. Commerce Department on Monday issued a report in support of “open-weight” generative AI models like Meta’s Llama 3.1, but recommended the government develop “new capabilities” to monitor such models for potential risks.

Authored by the Commerce Department’s National Telecommunications and Information Administration (NTIA), the report said open-weight models broaden generative AI’s availability to small companies, researchers, nonprofits and individual developers. For these reasons, the government shouldn’t place restrictions on access to open models, it suggests — at least not before investigating whether restrictions might harm the market.

The sentiment echoes recent comments from FTC chair Lina Khan, who believes that open models can let more small players bring their ideas to market and, in doing so, promote healthy competition.

“The openness of the largest and most powerful AI systems will affect competition, innovation and risks in these revolutionary tools,” Alan Davidson, assistant secretary of Commerce for Communications and Information and NTIA administrator, said in a statement. “NTIA’s report recognizes the importance of open AI systems and calls for more active monitoring of risks from the wide availability of model weights for the largest AI models. Government has a key role to play in supporting AI development while building capacity to understand and address new risks.”

The report comes at a time when regulators at home and abroad are weighing rules that could restrict, or impose new requirements on, companies that wish to release open-weight models.

California is close to passing bill SB 1047, which would mandate that any company training a model using more than 10^26 floating-point operations (FLOPs) of compute beef up its cybersecurity and develop a way to “shut down” copies of the model within its control. Overseas, the EU recently finalized compliance deadlines for companies under its AI Act, which imposes new rules around copyright, transparency and AI applications.

Meta has said that the EU’s AI policies will prevent it from releasing some open models in the future. And a number of startups and big tech companies have come out against California’s law, which they claim is too onerous.

The NTIA’s model governance philosophy isn’t completely laissez-faire.

In its report, the NTIA calls for the government to develop an ongoing program to collect evidence of the risks and benefits of open models, evaluate that evidence, and act on those evaluations, including imposing certain restrictions on model availability if warranted. Specifically, the report proposes that the government research the safety of various AI models, support research into risk mitigation, and develop thresholds of “risk-specific” indicators to signal if a change in policy might be needed.

These and the other steps would align with President Joe Biden’s executive order on AI, noted Gina Raimondo, U.S. Secretary of Commerce. The order called for government agencies and companies to set new standards around the creation, deployment and use of AI.

“The Biden-Harris Administration is pulling every lever to maximize the promise of AI while minimizing its risks,” Raimondo said in a press release. “Today’s report provides a roadmap for responsible AI innovation and American leadership by embracing openness and recommending how the U.S. government can prepare for and adapt to potential challenges ahead.”

FTC and Justice Department sue TikTok over alleged child privacy violations

A laptop keyboard and TikTok logo displayed on a phone screen are seen in this multiple exposure illustration.

Image Credits: Jakub Porzycki/NurPhoto / Getty Images

The U.S. Federal Trade Commission and the Justice Department are suing TikTok and ByteDance, TikTok’s parent company, for violating the Children’s Online Privacy Protection Act (COPPA). The law requires digital platforms to notify parents and obtain their consent before collecting and using personal data from children under the age of 13.

In a press release issued Friday, the FTC’s Bureau of Consumer Protection said that TikTok and ByteDance were “allegedly aware” of the need to comply with COPPA, yet spent “years” knowingly allowing millions of children under 13 on their platform. TikTok did so, the FTC alleges, even after settling with the FTC in 2019 over COPPA violations; as a part of that settlement, TikTok agreed to pay $5.7 million and implement steps to prevent kids under 13 from signing up.

“As of 2020, TikTok had a policy of maintaining accounts of children that it knew were under 13 unless the child made an explicit admission of age and other rigid conditions were met,” the FTC wrote in the press release. “TikTok human reviewers allegedly spent an average of only five to seven seconds reviewing each account to make their determination of whether the account belonged to a child.”

TikTok and ByteDance maintained and used underage users’ data, including data for ad targeting, even after employees raised concerns and TikTok reportedly changed its policy to no longer require an explicit admission of age, according to the FTC. More damningly, TikTok continued to allow users to sign up with third-party accounts, like Google and Instagram, without verifying that they were over 13, the FTC adds.

The FTC also took issue with TikTok’s Kids Mode, the app’s supposedly more COPPA-compliant mobile experience. Kids Mode collected “far more data” than needed, the FTC alleges, including info about users’ in-app activities and identifiers that TikTok used to build profiles (and shared with third parties) to try to prevent attrition.

When parents requested that their child’s accounts be deleted, TikTok made it difficult, the FTC said, and often failed to comply with those requests.

“TikTok knowingly and repeatedly violated kids’ privacy, threatening the safety of millions of children across the country,” FTC chair Lina Khan said in a statement. “The FTC will continue to use the full scope of its authorities to protect children online — especially as firms deploy increasingly sophisticated digital tools to surveil kids and profit from their data.”

TikTok had this to share with TechCrunch via email: “We disagree with these allegations, many of which relate to past events and practices that are factually inaccurate or have been addressed. We are proud of our efforts to protect children, and we will continue to update and improve the platform. To that end, we offer age-appropriate experiences with stringent safeguards, proactively remove suspected underage users, and have voluntarily launched features such as default screen time limits, Family Pairing, and additional privacy protections for minors.”

The FTC and Justice Department are seeking civil penalties of up to $51,744 per violation, per day, from TikTok and ByteDance, as well as a permanent injunction to prevent future COPPA violations.
