Washington state lawmakers are preparing to revisit legislation in 2026 that would mandate public sector employers negotiate with unions before implementing artificial intelligence technology that affects worker wages or performance evaluations, reviving a measure that passed the House this year but stalled in the Senate.
House Bill 1622 would require government employers to engage in collective bargaining with unions over the deployment of AI systems that affect wages or how worker performance is assessed, a proposal reflecting growing concern about how rapidly advancing AI capabilities will reshape public sector workplaces.
During the 2025 legislative session, the bill passed the House of Representatives primarily along party lines with Democratic support before failing to advance in the Senate, where concerns about the measure’s impact on management flexibility and workplace innovation prevented it from reaching a floor vote.
Opponents of the legislation, including business organisations and municipal officials, argued the measure would tilt the balance of power between employees and managers too heavily toward workers. They also warned the bargaining mandate could significantly delay workplace innovation by requiring lengthy negotiations before agencies could implement AI tools that might improve efficiency or service delivery.
With hopes of securing passage in the 2026 legislative session, the bill’s lead sponsor, Representative Lisa Parshley, a Democrat from Olympia, presented the proposal to the state’s artificial intelligence task force on Thursday. The Legislature established the task force in 2024 to examine AI policy issues and make recommendations for potential legislation.
“Public sector bargaining covers wages, hours and working conditions, and agencies are already required to bargain any change that touches those areas, but without legislation, that bargaining happens after implementation,” stated Washington State Labor Council President April Sims. “With legislation like House Bill 1622, it would happen before,” she added, emphasising that the bill would move negotiations ahead of deployment rather than after workers experience an AI system’s effects.
A state law enacted in 2002 prohibits collective bargaining over technology for classified employees of state agencies and higher education institutions, a restriction that labour advocates argue has become outdated given technological advances in the subsequent two decades.
“The biggest technology decisions made by management was, what kind of desktop, what kind of fax, what kind of phone,” Parshley stated, describing the technological landscape when the 2002 law took effect. “Is that fair when we have a technology that now will actually impact our workers in ways that we have not even begun to realise?” she asked, arguing that AI represents a fundamentally different category of workplace technology from the office equipment contemplated when the earlier law was written.
By contrast, a separate statute governing workers employed by cities, counties, and other local government agencies already requires bargaining over technology when it affects wages, hours, or working conditions, creating an inconsistency between state employees covered by the 2002 prohibition and local government workers who retain bargaining rights over technology.
Many workers express concerns about what the rapid advancement of artificial intelligence means for their job security and workplace conditions, fears reflected in recent survey data documenting widespread anxiety about AI’s labour market impacts.
A Pew Research Center survey conducted late last year reported more than half of workers are worried about the future impact of AI on workplaces, and approximately one-third believe the technology will lead to fewer available jobs. About one in six workers indicated AI was already performing some of their work tasks, suggesting that concerns about displacement are not merely hypothetical.
States are beginning to deploy AI in consequential ways that directly affect residents’ access to services. Maryland, for example, is partnering with AI company Anthropic to assist residents in applying for food aid, Medicaid, and other social welfare programmes, automating processes that were previously handled entirely by human caseworkers.
In early 2024, then-Governor Jay Inslee issued an executive order outlining a framework for state government’s use of generative artificial intelligence. The order noted that Washington “seeks to harness the potential of generative AI in an ethical and equitable way for the benefit of the state government workforce,” establishing principles intended to guide agency adoption of the technology whilst protecting worker interests.
Consistent with that executive guidance, a September directive from the state’s Office of Financial Management requires providing union-represented state employees with six months’ advance notice of any use of generative AI if deployment “will result in a consequential change in employee wages, hours, or working conditions.” Under the memorandum, unions can file demands to bargain over implementing the technology, creating a process for negotiation even without legislative mandate.
“Including workers at the beginning is not a courtesy. It is a practical necessity,” Sims stated. “It identifies risk. It ensures human oversight where it is needed, and it builds trust amongst staff, who will ultimately have to operate, troubleshoot and rely on these systems,” she added, arguing that worker input improves AI implementation rather than merely protecting jobs.
The Office of Financial Management directive also mandates human review for AI systems when they are used for employment-related decisions including hiring, promotion, discipline, or termination, a safeguard against fully automated personnel decisions that could perpetuate bias or produce unjust outcomes.
Parshley characterised the executive directive as an “excellent first step” but argued her proposed legislation “would allow future administrations to be held accountable” by codifying the policy requirements in statute rather than leaving them vulnerable to reversal by subsequent governors who might issue different executive orders.