New Senate bill seeks to protect artists' and journalists' content from AI use

QAnon sign at US Capitol

Image Credits: Win McNamee / Staff / Getty Images

A bipartisan group of senators has introduced a new bill that seeks to protect artists, songwriters and journalists from having their content used to train AI models or generate AI content without their consent. The bill, called the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act), also seeks to make it easier to identify AI-generated content and combat the rise of harmful deepfakes. 

Senate Commerce Committee Chair Maria Cantwell (D-WA), Senate AI Working Group member Martin Heinrich (D-NM) and Commerce Committee member Marsha Blackburn (R-TN) authored the bill. 

The bill would require companies that develop AI tools to allow users to attach content provenance information to their content within two years. Content provenance information refers to machine-readable information that documents the origin of digital content, such as photos and news articles. According to the bill, works with content provenance information could not be used to train AI models or generate AI content. 
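To make the idea concrete, here is a minimal, hypothetical sketch of what machine-readable provenance attached to a piece of content might look like. The field names are illustrative only; they come from neither the bill nor any existing standard such as C2PA:

```python
import hashlib
import json

def make_provenance_record(content: bytes, creator: str, created: str) -> dict:
    """Build a hypothetical machine-readable provenance record: who created
    the content, when, and a hash tying the record to the exact bytes."""
    return {
        "creator": creator,        # e.g. the journalist or artist
        "created": created,        # ISO 8601 timestamp
        "sha256": hashlib.sha256(content).hexdigest(),  # binds record to content
        "ai_training_allowed": False,  # the kind of opt-out the bill envisions
    }

article = b"Example news article text..."
record = make_provenance_record(article, "Jane Reporter", "2024-07-11T09:00:00Z")

# A consumer (say, an AI crawler) can check that the record matches the bytes
# and honor the opt-out before ingesting the content for training.
assert record["sha256"] == hashlib.sha256(article).hexdigest()
print(json.dumps(record, indent=2))
```

In practice a real scheme would cryptographically sign the record so tampering is detectable, which is what the bill's provisions against stripping provenance information are aimed at.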

The bill is designed to give content owners, such as journalists, newspapers, artists and songwriters, the ability to protect their work while also setting the terms of use for their content, including compensation. It also gives them the right to sue platforms that use their content without permission or tamper with content provenance information.

The COPIED Act would require the National Institute of Standards and Technology (NIST) to create guidelines and standards for content provenance information, watermarking and synthetic content detection.

These standards would be used to determine if content has been generated or altered by AI, as well as where AI content originated. 

“The bipartisan COPIED Act I introduced with Senator Blackburn and Senator Heinrich, will provide much-needed transparency around AI-generated content,” said Senator Cantwell in a press release. “The COPIED Act will also put creators, including local journalists, artists and musicians, back in control of their content with a provenance and watermark process that I think is very much needed.”

The bill is backed by several artists’ groups, including SAG-AFTRA, National Music Publishers’ Association, The Seattle Times, Songwriters Guild of America and Artist Rights Alliance, among others. 

The introduction of the COPIED Act comes as there has been an influx of bills related to AI as lawmakers look to regulate the technology. 

Last month, Senator Ted Cruz introduced a bill that would hold social media companies like X and Instagram accountable for removing and policing deepfake porn. The Take It Down Act came after AI-generated pornographic photos of celebrities like Taylor Swift made the rounds on social media.

In May, Senate Majority Leader Chuck Schumer introduced a “roadmap” for addressing AI that would boost funding for AI innovation, tackle the use of deepfakes in elections, use AI to strengthen national security and more.

In addition, Axios reported earlier this year that state legislatures are introducing 50 AI-related bills per week. According to the report, there were 407 AI-related bills across more than 40 states as of February, a steep increase from the 67 such bills introduced a year earlier.

Amid the emergence and popularity of AI tools, President Joe Biden issued an executive order last October to set standards for AI safety and security. The standards would require developers of AI systems to share their safety test results and other critical information with the government before deploying their systems to the public. It’s worth noting that former President Donald Trump has vowed to repeal the executive order if re-elected.

How to protect your startup from email scams

Image Credits: Getty Images / anilakkus

Despite years of claims that the “death of email” is fast approaching, the decades-old communication method continues to thrive in business. In particular, the business of hacking.

An email containing a link that looks legitimate but is actually malicious remains one of the most dangerous yet successful tricks in a cybercriminal’s handbook and has led to some of the largest hacks in recent years, including the 2022 breach of communications giant Twilio and last year’s hack of social media platform Reddit. 

While these emails are sometimes easy to spot, be it thanks to bad spelling or an unusual email address, it is becoming increasingly difficult to distinguish a dodgy email from a legitimate one as hackers’ tactics grow more sophisticated.

Take business email compromise (or BEC), for example, a type of email-borne attack that targets organizations large and small with the aim of stealing money, critical information, or both. In this type of scam, hackers impersonate or compromise someone familiar to the victim, such as a co-worker, boss or business partner, to manipulate them into unknowingly disclosing sensitive information.

The risk this poses to businesses, particularly startups, can’t be overstated. Individuals in the U.S. lost close to $3 billion in BEC scams last year alone, according to the latest data from the FBI. And these attacks are showing no signs of slowing down.

How to spot a business email compromise scam

Look for the warning signs

While cybercriminals have become more advanced in their email-sending tactics, there are some simple red flags that you can — and should — look out for. These include an email sent outside of typical business hours, misspelled names, a mismatch between the sender’s email address and the reply-to address, unusual links and attachments, or an unwarranted sense of urgency. 
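One of those red flags, a mismatch between the sender’s address and the reply-to address, is easy to check programmatically. A minimal sketch using Python’s standard email library (the sample message below is made up for illustration):

```python
from email import message_from_string
from email.utils import parseaddr

def flag_reply_to_mismatch(raw_email: str) -> bool:
    """Return True if the Reply-To domain differs from the From domain,
    one of the classic BEC red flags."""
    msg = message_from_string(raw_email)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    if not reply_addr:  # no Reply-To header: nothing to compare
        return False
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    reply_domain = reply_addr.rsplit("@", 1)[-1].lower()
    return from_domain != reply_domain

suspicious = (
    "From: CEO <ceo@example.com>\r\n"
    "Reply-To: ceo-payments@examp1e-mail.com\r\n"
    "Subject: Urgent wire transfer\r\n\r\nPlease process today."
)
print(flag_reply_to_mismatch(suspicious))  # -> True
```

Many mail gateways run checks like this automatically, but it is worth knowing what they are looking for when you eyeball a suspicious message yourself.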

Contact the sender directly

The use of spear phishing — where hackers use personalized phishing emails to impersonate high-level executives within a company or outside vendors — means it can be near-impossible to tell whether a message has come from a trusted source. If an email seems unusual — or even if it doesn’t — contact the sender directly to confirm the request, rather than replying via any email or any phone number provided in the email.

Check with your IT folks

Tech support scams are becoming increasingly common. In 2022, Okta customers were targeted by a highly sophisticated scam that saw attackers send employees text messages with links to phishing sites that imitated the look and feel of their employers’ Okta login pages. These login pages looked so much like the real deal that more than 10,000 people submitted their work credentials. Chances are, your IT department isn’t going to contact you via SMS, so if you receive a random text message out of the blue or an unexpected pop-up notification on your device, it’s important to check if it’s legitimate.

Be (even more) wary of phone calls

Cybercriminals have long used email as their weapon of choice. More recently, however, they have turned to fraudulent phone calls to hack into organizations. A single phone call reportedly led to last year’s hack of hotel chain MGM Resorts, after hackers successfully deceived the company’s service desk into granting them access to an employee’s account. Always be skeptical of unexpected calls, even if they come from a legitimate-looking contact, and never share confidential information over the phone.

Multi-factor all the things!

Multi-factor authentication — which typically requires a code, PIN or fingerprint for logging in along with your regular username and password — is by no means foolproof. However, by adding an extra layer of security beyond hack-prone passwords, it makes it far more difficult for cybercriminals to access your email accounts. Take security one step further by rolling out passwordless technology, like hardware security keys and passkeys, which can prevent password and session token theft by info-stealing malware.
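For the curious, the six-digit codes behind app-based multi-factor authentication are simply time-based one-time passwords (TOTP, standardized in RFC 6238). A self-contained sketch using only Python’s standard library:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6, now=None) -> str:
    """Compute an RFC 6238 time-based one-time password: HMAC the current
    30-second time step with a shared secret, then dynamically truncate."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and authenticator app share the secret; both derive the same code
# for the current time window, so a stolen password alone isn't enough.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret))  # a fresh 6-digit code every 30 seconds
```

This is why a leaked password is useless to an attacker without the current code, though phishing pages that relay codes in real time are exactly the gap that hardware keys and passkeys close.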

Implement stricter payment processes

With any type of cyberattack, a criminal’s ultimate goal is to make money, and the success of BEC scams often hinges on manipulating a single employee into sending a wire transfer. Some financially motivated hackers pretend to be a vendor requesting payment for services performed for the company. To lessen the risk of falling victim to this type of email scam, roll out strict payment processes: Develop a protocol for payment approvals, require that employees confirm money transfers through a second communication medium, and tell your financial team to double-check every bank account detail that changes. 
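Those process rules can even be encoded in internal tooling. A hypothetical sketch of a payment-approval workflow that enforces a second approver and an out-of-band confirmation before any transfer executes:

```python
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    """Hypothetical payment-approval workflow mirroring the rules above:
    a transfer runs only after a second person approves AND the request
    was confirmed over a second channel (e.g. a phone call)."""
    vendor: str
    amount: float
    requested_by: str
    approvals: set = field(default_factory=set)
    confirmed_out_of_band: bool = False

    def approve(self, approver: str) -> None:
        # Separation of duties: the requester can never self-approve.
        if approver == self.requested_by:
            raise ValueError("requester cannot approve their own transfer")
        self.approvals.add(approver)

    def can_execute(self) -> bool:
        return len(self.approvals) >= 2 and self.confirmed_out_of_band

req = PaymentRequest("Acme Supplies", 25_000.0, requested_by="alice")
req.approve("bob")
req.approve("carol")
print(req.can_execute())  # False: not yet confirmed over a second channel
req.confirmed_out_of_band = True
print(req.can_execute())  # True: two approvers plus out-of-band confirmation
```

The point isn’t the code itself but the design choice it encodes: no single employee, however senior the emailed “request” appears, can move money alone.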

You can also ignore it

Ultimately, you can minimize the risk of falling for most BEC scams by simply ignoring the attempt and moving on. Not 100% sure that your boss actually wants you to go out and buy $500 worth of gift cards? Ignore it! Getting a call you weren’t expecting? Hang up the phone! But for the sake of your security team and helping your co-workers, don’t stay quiet. Report the attempt to your workplace or IT department so that they can be on higher alert.

Microsoft emails that warned customers of Russian hacks criticized for looking like spam and phishing

French startup Nijta hopes to protect voice privacy in AI use cases

Image of a girl shouting against a brick wall painted pink with multicolored squiggles on the wall to represent her voice.

Image Credits: Flashpop / Getty Images

A recording of your voice may seem innocuous, but it can actually reveal your identity, as well as additional data about you, such as how you are feeling or even diseases you may suffer from.

People may not have grasped this yet, but companies that process data are increasingly aware that they need to handle voice as personally identifiable information. This is particularly true in Europe in the context of GDPR: While many companies are hoping to build AI on top of voice data, in many cases, this requires removing biometric information first.

This is where Nijta hopes to help: by providing AI-powered speech anonymization technology to clients that need to comply with privacy requirements. While its name is a Hindi word for privacy, it is based in Lille, France, where its Indian CEO, Brij Srivastava, moved for his PhD at Inria, the French Institute for Research in Computer Science and Automation.

Nijta was born from Inria Startup Studio, a program aimed at supporting PhD entrepreneurs who want to start a business. It worked: Nijta is now an award-winning young B2B company with €2 million in funding from various sources, including French deep tech VC fund Elaia and Lille-based investment firm Finovam Gestion.

“Europe is our primary market,” Srivastava told TechCrunch. The main reason is simple: “GDPR is a very strong data privacy law.” While voice anonymization can be relevant to several sectors, Nijta’s sweet spot is a mix of compliance and business opportunity.

“Nijta’s AI-powered voice anonymization technology offers a solution for many enterprises who are increasingly concerned about data privacy and excited about generative AI,” Elaia investment director Céline Passedouet said in a statement.

Growing use cases

Call centers in general are potential customers of Nijta, but even more so when they deal with health data.

One of its early collaborations was around OkyDoky, a project aimed at better handling medical emergency calls. While it is easy to see how AI can help, the voices in the training data first had to be anonymized to remove both the speaker’s identity and personally identifiable information.

Other use cases include defense scenarios, which Srivastava didn’t expand on for obvious reasons, but also edtech, where children’s voices need to be anonymized before leveraging AI to give them pronunciation feedback, for instance.

Content generated by Nijta is watermarked, which is becoming the standard if not the rule for all things generative AI. The startup also says that Nijta Voice Harbor’s protection is irreversible, unlike some of the voice modifications unwisely used by media outlets hoping to protect victims they interview.

A lack of awareness of privacy issues around voice is one of the challenges Nijta will have to face. This is also why starting with B2B and Europe seems to make sense: Even if customers aren’t pushing for voice privacy, risking a hefty fine is turning companies into early adopters.

Eventually, though, Nijta is hoping to expand into B2C, with an eye on securing recorded messages, for instance. “Real-time anonymization for secure communication is also something that we are very actively exploring,” Srivastava said. But B2C is a few years down the line; Nijta’s small team can’t spread itself too thin.

Northern tailwinds

Nijta has seven team members, including Srivastava; his two full-time co-founders, Seyed Ahmad Hosseini and Nathalie Vauquier; and his former professor, senior research scientist and part-time co-founder Emmanuel Vincent. Srivastava hopes that the team will grow to 10 people by June, but it is also receiving external help for efforts the startup wouldn’t pursue on its own.

Business France, in particular, helps Nijta reduce internationalization costs, Srivastava said. “Because we are small, we cannot hire many salespeople in different countries.” Instead, it can rely on prospecting by a Business France representative in a particular country, “and the cost is mostly subsidized by [Lille’s] Hauts-de-France region.” The agency also opened doors for the startup in the region’s sister state of Maryland.

This is one of the reasons why Srivastava has no trouble answering when he (often) gets asked why Nijta is based in Lille, not Paris. While some of the tailwinds it enjoys are more broadly linked to France, it found the country’s northernmost area to be conveniently located within close reach of Paris, as well as Brussels, Amsterdam and London.

To go international, however, Nijta will have to go multilingual. That’s a big R&D challenge, but one the startup is working on, with its sights on Europe and Asia. It should also help that the startup is set to get another €1 million from Bpifrance’s deep tech development aid, a combined grant and repayable advance to finance R&D expenditure; this will also make the question of why Srivastava chose Lille and France even easier to answer.

Biden signs bill to protect children from online sexual abuse and exploitation

US President Joe Biden during a campaign event at the Scranton Cultural Center at the Masonic Temple in Scranton, Pennsylvania

Image Credits: Hannah Beier/Bloomberg via Getty Images

In February 2023, Senators Jon Ossoff (D-GA) and Marsha Blackburn (R-TN) proposed a bipartisan bill to protect children from online sexual exploitation, which then passed the House last month, on April 29.

President Biden officially signed the REPORT Act into law on Tuesday. This marks the first time that websites and social media platforms are legally obligated to report crimes related to federal trafficking, grooming, and enticement of children to the National Center for Missing and Exploited Children’s (NCMEC) CyberTipline.

Under the new law, companies that intentionally neglect to report child sex abuse material on their sites will face hefty fines. For platforms with over 100 million users, a first-time offense would yield a fine of $850,000, for example. To ensure urgent threats of child sexual exploitation are investigated carefully and thoroughly by law enforcement, the law also requires evidence to be retained for up to a year, instead of only 90 days.

The NCMEC faces challenges in investigating the millions of child sex abuse reports it receives each year, as it is understaffed and relies on outdated technology. Although the new law cannot solve the problem entirely, it is expected to make the assessment of reports more efficient by allowing for things like legal storage of data on commercial cloud computing services.

“Children are increasingly looking at screens, and the reality is that this leaves more innocent kids at risk of online exploitation,” said Senator Blackburn in a statement. “I’m honored to champion this bipartisan solution alongside Senator Ossoff and Representative Laurel Lee to protect vulnerable children and hold perpetrators of these heinous crimes accountable. I also appreciate the National Center for Missing and Exploited Children’s unwavering partnership to get this across the finish line.”