Amazon ends California drone deliveries

Image Credits: Amazon

Amazon confirmed it is ending Prime Air drone delivery operations in Lockeford, California. The Central California town of 3,500 was the company’s second U.S. drone delivery site, after College Station, Texas. Operations were announced in June 2022.

The retail giant has not offered details about the decision, noting only: “We’ll offer all current employees opportunities at other sites, and will continue to serve customers in Lockeford with other delivery methods. We want to thank the community for all their support and feedback over the past few years.”

College Station deliveries will continue, along with a forthcoming site in Tolleson, Arizona, which is set to begin deliveries later this year. Tolleson, a city of just over 7,000, is located in Maricopa County, in the western portion of the Phoenix metropolitan area.

Prime Air’s arrival will bring same-day drone deliveries to Amazon customers in the region via a hybrid fulfillment center/delivery station. The company says it will contact affected customers once the service is up and running. There is no specific information on timing beyond “this year,” owing in part to the ongoing negotiations with local officials and the FAA that are required before deploying in the airspace.

Expansion of the offering has been extremely slow going, in part due to regulatory matters. For much of the project’s life, it has seemed as if Amazon was simply dipping its toes in the unproven waters of drone delivery. It seems that Tolleson will be the service’s sole expansion this calendar year, with additional news held off until 2025. It remains to be seen whether the company will re-engage with California locales.

Amazon did reassert its commitment late last year, with the announcement of medication deliveries in College Station, bringing select Amazon Pharmacy orders to customers in less than an hour.

Select local governments clearly see these sorts of deals as an opportunity to advertise an openness to technological innovation outside of traditional hot spots like San Francisco or New York.

“This kind of delivery is the future, and it’s exciting that it will be starting in the Phoenix Metro Area,” Phoenix Mayor Kate Gallego says. “The shift toward zero-emission package delivery will help us reduce local pollution and further cement our city as a hotbed for the innovative technology of tomorrow.”

UK data protection watchdog ends privacy probe of Snap's GenAI chatbot, but warns industry

Image Credits: TechCrunch

The U.K.’s data protection watchdog has closed an almost year-long investigation of Snap’s AI chatbot, My AI — saying it’s satisfied the social media firm has addressed concerns about risks to children’s privacy. At the same time, the Information Commissioner’s Office (ICO) issued a general warning to industry to be proactive about assessing risks to people’s rights before bringing generative AI tools to market.

GenAI refers to a flavor of AI that often foregrounds content creation. In Snap’s case, the tech powers a chatbot that can respond to users in a human-like way, such as by sending text messages and snaps, enabling the platform to provide automated interaction.

Snap’s AI chatbot is powered by OpenAI’s ChatGPT, but the social media firm says it applies various safeguards to the application, including guideline programming and age consideration by default, which are intended to prevent kids from seeing age-inappropriate content. It also bakes in parental controls.

“Our investigation into ‘My AI’ should act as a warning shot for industry,” wrote Stephen Almond, the ICO’s exec director of regulatory risk, in a statement Tuesday. “Organisations developing or using generative AI must consider data protection from the outset, including rigorously assessing and mitigating risks to people’s rights and freedoms before bringing products to market.”

“We will continue to monitor organisations’ risk assessments and use the full range of our enforcement powers — including fines — to protect the public from harm,” he added.

Back in October, the ICO sent Snap a preliminary enforcement notice over what it described then as a “potential failure to properly assess the privacy risks posed by its generative AI chatbot ‘My AI’”.

That preliminary notice last fall appears to be the only public rebuke for Snap. In theory, the U.K.’s data protection regime allows fines of up to 4% of a company’s annual turnover in cases of confirmed breaches.

Announcing the conclusion of its probe Tuesday, the ICO suggested the company took “significant steps to carry out a more thorough review of the risks posed by ‘My AI’”, following its intervention. The ICO also said Snap was able to demonstrate that it had implemented “appropriate mitigations” in response to the concerns raised — without specifying what additional measures (if any) the company has taken (we’ve asked).

More details may be forthcoming when the regulator’s final decision is published in the coming weeks.

“The ICO is satisfied that Snap has now undertaken a risk assessment relating to ‘My AI’ that is compliant with data protection law. The ICO will continue to monitor the rollout of ‘My AI’ and how emerging risks are addressed,” the regulator added.

Reached for a response to the conclusion of the investigation, a spokesperson for Snap sent us a statement — writing: “We’re pleased the ICO has accepted that we put in place appropriate measures to protect our community when using My AI. While we carefully assessed the risks posed by My AI, we accept our assessment could have been more clearly documented and have made changes to our global procedures to reflect the ICO’s constructive feedback. We welcome the ICO’s conclusion that our risk assessment is fully compliant with UK data protection laws and look forward to continuing our constructive partnership.”

Snap declined to specify any mitigations it implemented in response to the ICO’s intervention.

The U.K. regulator has said generative AI remains an enforcement priority. It points developers to guidance it’s produced on AI and data protection rules. It also has a consultation open asking for input on how privacy law should apply to the development and use of generative AI models.

The U.K. has yet to introduce formal legislation for AI, as the government has opted to rely on regulators like the ICO to determine how existing rules apply. European Union lawmakers, by contrast, have just approved a risk-based framework for AI, set to apply in the coming months and years, which includes transparency obligations for AI chatbots.