Google removed a pledge to not build AI for weapons or surveillance from its website this week. The change was first spotted by Bloomberg. The company appears to have updated its public AI principles page, erasing a section titled “applications we will not pursue,” which was still included as recently as last week.
Asked for comment, the company pointed TechCrunch to a new blog post on “responsible AI.” It notes, in part, “we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”
Google’s newly updated AI principles note the company will work to “mitigate unintended or harmful outcomes and avoid unfair bias,” as well as align the company with “widely accepted principles of international law and human rights.”
In recent years, Google’s contracts to provide the U.S. and Israeli militaries with cloud services have sparked internal protests from employees. The company has maintained that its AI is not used to harm humans; however, the Pentagon’s AI chief recently told TechCrunch that some companies’ AI models are speeding up the U.S. military’s kill chain.