Google removed a pledge not to build AI for weapons or surveillance from its website this week. The change was first spotted by Bloomberg. The company appears to have updated its public AI principles page, erasing a section titled “applications we will not pursue” that was still included as recently as last week.
Asked for comment, the company pointed TechCrunch to a new blog post on “responsible AI.” It notes, in part, “we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”
Google’s newly updated AI principles note the company will work to “mitigate unintended or harmful outcomes and avoid unfair bias,” as well as align the company with “widely accepted principles of international law and human rights.”
In recent years, Google’s contracts to provide the U.S. and Israeli militaries with cloud services have sparked internal protests from employees. The company has maintained that its AI is not used to harm humans; however, the Pentagon’s AI chief recently told TechCrunch that some companies’ AI models are speeding up the U.S. military’s kill chain.