2026-04-04 · Don Ho · 1200 words

The Federal Government Wants "Any Lawful" Use of Your AI. Vendors Are Pushing Back.

The General Services Administration released draft procurement language in March that would grant the federal government the right to use AI tools for "any lawful government purpose." That phrase sounds reasonable until you think about what it actually permits. Industry groups are now warning that the proposed rules could gut vendor protections, eliminate meaningful AI guardrails, and force companies to choose between government contracts and their own terms of service.

This isn't a theoretical debate. It's a direct response to the explosive dispute between Anthropic and the Department of Defense earlier this year, where the AI company refused to allow its products to be used for surveillance of Americans or for lethal autonomous weapons systems. The DOD designated Anthropic as a supply chain risk. Anthropic sued. The fallout has rattled every AI vendor doing business with Washington.

What the GSA Draft Actually Says

The proposed clauses would change federal AI acquisition terms in several ways.

The government would own all input data and any custom developments made to an AI model under contract. Contractors would retain ownership of their base models but lose control over anything built on top of them for government use.

The "any lawful government purpose" provision is the most controversial element. It means that once the government acquires access to an AI tool, it can deploy that tool for any application that doesn't violate federal law. The vendor's terms of service, ethical guidelines, and acceptable use policies become irrelevant.

The draft also includes an "eyes off" rule for data handling. Human review of government data would be restricted to situations "strictly necessary" for system access or incident reporting. That means AI systems processing sensitive government information would operate with minimal human oversight by design, not by accident.
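
To see what "eyes off by design" could look like in practice, here is a minimal sketch of a policy gate that enforces the rule. Everything in it is hypothetical: the GSA draft is contract language, not a technical specification, and the Reason categories and review_government_data function below are illustrative inventions, not anything the draft prescribes.

    from enum import Enum, auto

    class Reason(Enum):
        """Hypothetical justifications a human reviewer might assert."""
        SYSTEM_ACCESS = auto()       # e.g., restoring a failed service
        INCIDENT_REPORTING = auto()  # e.g., investigating a breach
        MODEL_DEBUGGING = auto()     # routine engineering work
        QUALITY_REVIEW = auto()      # output spot-checks, safety audits

    # Under the draft's "eyes off" rule, only the first two reasons
    # would plausibly count as "strictly necessary."
    PERMITTED_REASONS = {Reason.SYSTEM_ACCESS, Reason.INCIDENT_REPORTING}

    def review_government_data(reason: Reason, record: str) -> str:
        """Release a record to a human reviewer only for permitted reasons.

        Any other justification raises PermissionError, so routine human
        oversight of the system's behavior is blocked by design.
        """
        if reason not in PERMITTED_REASONS:
            raise PermissionError(
                f"Human review denied: {reason.name} is not strictly necessary"
            )
        return record

The failure mode is visible in the last two Reason values: the same gate that keeps sensitive data away from human eyes also locks out the debugging and quality-review work that responsible AI frameworks treat as basic oversight.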

Foreign-made AI products would be prohibited entirely. Contractors would also face new incident reporting obligations.

Why Industry Groups Are Alarmed

Americans for Responsible Innovation filed comments calling the proposed changes a threat to civil liberties. Their concern is specific: "lawful" is a low bar. ARI outlined scenarios where AI tools deployed under the "any lawful purpose" standard could psychologically profile benefits applicants, conduct surveillance pattern analysis on American citizens, or screen government employees for "loyalty." All technically legal, and all in violation of every responsible AI framework in existence.

"A policy of enabling 'all lawful use' strips away one of the last public safeguards we have against tyranny," ARI wrote. That's strong language from a nonprofit. It's also not wrong.

The Business Software Alliance, whose members include OpenAI, Microsoft, Palo Alto Networks, and IBM, raised a parallel set of concerns. BSA warned that the draft language could diminish contractor intellectual property rights, create implementation problems, increase False Claims Act liability for vendors, and ultimately make AI procurement more expensive and less competitive.

BSA's core argument: the proposed terms would discourage the best AI companies from bidding on government contracts. If vendors know the government can repurpose their technology for any legal application (including ones the vendor explicitly prohibits), the rational business decision is to stay out of the government market entirely. That outcome would leave federal agencies with inferior tools and less innovation, the opposite of the administration's stated goals.

The Anthropic Problem

None of this can be understood without the Anthropic dispute as context.

Earlier this year, Anthropic discovered that Department of Defense agencies were using its Claude AI for applications the company's terms of service explicitly prohibited, including surveillance of American citizens and integration into lethal autonomous weapons research. Anthropic refused to continue providing access. The DOD responded by designating Anthropic as a supply chain risk, effectively blacklisting the company from federal contracts.

Anthropic sued more than a dozen federal agencies and government officials. The case is ongoing and has created chaos in the federal AI vendor community. Other AI companies are now asking themselves: if we sign a government contract, can we actually enforce our own acceptable use policies?

The GSA's "any lawful purpose" language answers that question. The answer is no.

The proposed procurement terms were drafted specifically to prevent another Anthropic situation. Not by addressing the underlying ethical concerns, but by eliminating the vendor's ability to object. If the government can use an AI tool for any lawful purpose, there's no contractual basis for a vendor to refuse a specific application.

What This Means for AI Vendors and GCs

If you're an AI company considering government contracts, or you're a general counsel advising one, the GSA draft creates a binary choice.

Accept the terms and lose control over how your technology is used. Your acceptable use policy becomes a suggestion. Your ethical guidelines become marketing materials. The government decides the application, and "lawful" is the only constraint.

Reject the terms and lose access to the largest single buyer of technology on earth. Federal AI spending is projected to exceed $15 billion in 2026. Walking away from that market is not a decision any board takes lightly.

BSA recommended a series of modifications: clarifying the foreign AI prohibition, expanding contractor IP rights, streamlining change management, and aligning with existing software acquisition frameworks. Those recommendations are sensible. Whether GSA adopts them depends on whether the administration prioritizes vendor cooperation or government control.

What to Do Now

For AI companies: review the GSA draft language against your current terms of service and acceptable use policies. Identify the specific conflicts. File public comments before the deadline. If your company has ethical boundaries on how your technology can be used, those boundaries are about to be tested.

For enterprise buyers using AI tools that also serve government clients: understand that the same AI model your company uses for contract review or customer service may simultaneously be deployed for government surveillance or weapons research. That dual-use reality affects your vendor risk assessment.

For general counsel: the Anthropic situation created precedent. A vendor that refuses a government application can be designated a supply chain risk. A vendor that complies may violate its own published commitments. If your client is in the AI space, the GSA draft should be on your radar now, not after the final rule publishes.

The comment period is the last chance to shape these terms. Once they're final, the "any lawful purpose" standard becomes the baseline for every federal AI contract. The question every AI company needs to answer: where is the line between selling a product and surrendering control of it?

Don Ho, Esq. is Co-Founder & CEO of Kaizen AI Lab, advising companies on operational growth strategies and the legal aspects of AI integration in their businesses.