In the absence of federal legislation, Washington state is developing its own artificial intelligence regulatory framework, offering recommendations for how lawmakers should govern AI across healthcare, education, law enforcement, workplaces, and other sectors.
A new interim report from the Washington State AI Task Force notes that the federal government’s “hands-off approach” to AI has created “a crucial regulatory gap that leaves Washingtonians vulnerable.”
The report arrives as the Trump administration advances a deregulatory national AI policy and briefly considered an executive order to preempt state AI laws before pausing the idea following bipartisan opposition.
The interim report published this week observes that AI has “grown more powerful and prevalent than ever before” over the past year, driven by technical advances, the emergence of AI agents, and open AI platforms that are transforming work and daily life.
The document presents eight recommendations to the Washington State Legislature, including a proposal to require greater transparency in AI development. It would mandate that AI developers publicly disclose the “provenance, quality, quantity and diversity of datasets” used to train models, and explain how training data is processed to mitigate errors and bias. The recommendation includes provisions protecting trade secrets.
State lawmakers introduced proposals earlier this year addressing AI development transparency and disclosure, but those bills failed to advance through the legislative process.
The task force also recommends establishing a grant program, funded through a mix of public and private sources, to support small businesses and startups building AI that serves the public interest, particularly founders outside the Seattle area and those facing inequitable access to capital.
The report states that such a program would help Washington retain talent and “maintain its relevance as a tech hub.” An earlier bill to create this program, HB 1833, stalled during the 2025 legislative session.
Additional recommendations include promoting responsible AI governance for high-risk systems, defined as those with “potential to significantly impact people’s lives, health, safety, or fundamental rights.”
The task force calls for investment in K-12 STEM education, higher education AI programs, professional development for teachers, and expanded broadband access in rural communities to ensure equitable AI literacy and preparedness.
In healthcare, the recommendations emphasize improving transparency in prior authorization processes. The proposal would require that any decision to deny, delay, or modify health services based on medical necessity be made only by qualified clinicians, even when AI tools are employed in the evaluation process.
The report recommends developing guidelines for AI in workplace settings, including requiring employers to disclose when AI is used for employee monitoring, discipline, termination, and promotion decisions. This transparency would inform workers about how automated systems influence their employment conditions.
For law enforcement, the task force proposes requiring agencies to publicly disclose the AI tools they use, including generative AI for report writing, predictive policing systems, license plate readers, and facial recognition technology. The recommendation aims to increase accountability and public awareness of surveillance technologies.
The framework recommends adopting ethical AI principles from the National Institute of Standards and Technology (NIST) as a guiding structure, building on existing state guidance that already relies on the NIST AI Risk Management Framework.
Most recommendations passed by wide margins among task force members, though the law enforcement transparency proposal drew some dissenting votes, including from a representative of the ACLU.
The interim report does not yet include specific Washington-focused recommendations on generative AI in elections and political advertisements, AI and intellectual property rights, or companion chatbots, despite highlighting those issues as areas of growing state activity elsewhere. These topics may be addressed in the final report.
Washington is entering the AI policy arena behind some peer states that have already implemented broad frameworks, including California and Colorado. Other states have targeted specific use cases with narrower legislation.
Washington lawmakers introduced multiple AI bills during 2025, but only one became law: HB 1205, which makes it a crime to knowingly distribute a forged digital likeness, commonly called a deepfake, to defraud, harass, threaten, or intimidate another person, or for any unlawful purpose.
The task force report notes that 73 new AI-related laws were enacted across 27 states in 2025, covering areas including child safety, transparency, algorithmic accountability, education, labor, healthcare, public safety, deepfakes, and energy consumption.
Washington’s task force comprises 19 members representing technology companies including Microsoft and Salesforce, labor organizations, civil liberties groups, academic institutions, and state agencies. The diverse composition aims to balance innovation interests with consumer protection and civil rights concerns.
The task force, created in 2024, must deliver three reports. A preliminary report was released last year, this interim report represents the second deliverable, and a final report is due by July 1, 2026.
The final report will likely incorporate additional recommendations and potentially address the topics omitted from the interim document. The July 2026 deadline positions findings to inform the 2027 legislative session.
The grant program recommendation acknowledges that AI development has concentrated in major tech hubs, potentially leaving smaller companies and founders in other regions without adequate resources. Targeted funding could help diversify the AI development ecosystem within Washington.
The high-risk AI governance recommendation would establish oversight for systems with significant potential impacts, without stifling innovation in lower-risk applications. This risk-based approach mirrors frameworks adopted in the European Union and other jurisdictions.