Google, following on the heels of OpenAI, published a policy proposal in response to the Trump administration’s call for a national “AI Action Plan.” The tech giant endorsed weak copyright restrictions on AI training, as well as “balanced” export controls that “protect national security while enabling U.S. exports and global business operations.”
“The U.S. needs to pursue an active international economic policy to advocate for American values and support AI innovation internationally,” Google wrote in the document. “For too long, AI policymaking has paid disproportionate attention to the risks, often ignoring the costs that misguided regulation can have on innovation, national competitiveness, and scientific leadership — a dynamic that is beginning to shift under the new Administration.”
One of Google’s more controversial recommendations pertains to the use of IP-protected material.
Google argues that “fair use and text-and-data mining exceptions” are “critical” to AI development and AI-related scientific innovation. Like OpenAI, the company seeks to codify the right for it and rivals to train on publicly available data — including copyrighted data — largely without restriction.
“These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders,” Google wrote, “and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation.”
Google, which has reportedly trained a number of models on public, copyrighted data, is fighting lawsuits from data owners who accuse the company of failing to notify and compensate them before doing so. U.S. courts have yet to decide whether fair use doctrine effectively shields AI developers from IP litigation.
In its AI policy proposal, Google also takes issue with certain export controls imposed under the Biden administration, which it says “may undermine economic competitiveness goals” by “imposing disproportionate burdens on U.S. cloud service providers.” That contrasts with statements from Google competitors like Microsoft, which in January said that it was “confident” it could “comply fully” with the rules.
Importantly, the export rules, which seek to limit the availability of advanced AI chips in disfavored countries, carve out exemptions for trusted businesses seeking large clusters of chips.
Elsewhere in its proposal, Google calls for “long-term, sustained” investments in foundational domestic R&D, pushing back against recent federal efforts to reduce spending and eliminate grant awards. The company said the government should release datasets that might be helpful for commercial AI training, and allocate funding to “early-market R&D” while ensuring computing and models are “widely available” to scientists and institutions.
Pointing to the chaotic regulatory environment created by the U.S.’ patchwork of state AI laws, Google urged the government to pass federal legislation on AI, including a comprehensive privacy and security framework. Just over two months into 2025, the number of pending AI bills in the U.S. has grown to 781, according to an online tracking tool.
Google cautions the U.S. government against imposing what it perceives to be onerous obligations on AI systems, such as liability for how they are used. In many cases, Google argues, the developer of a model “has little to no visibility or control” over how a model is being used and thus shouldn’t bear responsibility for misuse.
Historically, Google has opposed laws like California’s defeated SB 1047, which spelled out the precautions an AI developer should take before releasing a model and the circumstances in which developers might be held liable for model-induced harms.
“Even in cases where a developer provides a model directly to deployers, deployers will often be best placed to understand the risks of downstream uses, implement effective risk management, and conduct post-market monitoring and logging,” Google wrote.
In its proposal, Google also called disclosure requirements like those being contemplated by the EU “overly broad,” and said the U.S. government should oppose transparency rules that require “divulging trade secrets, allow competitors to duplicate products, or compromise national security by providing a roadmap to adversaries on how to circumvent protections or jailbreak models.”
A growing number of countries and states have passed laws requiring AI developers to reveal more about how their systems work. California’s AB 2013 mandates that companies developing AI systems publish a high-level summary of the datasets they used to train those systems. In the EU, to comply with the AI Act as its obligations take effect, companies will have to supply model deployers with detailed instructions on the operation, limitations, and risks associated with the model.