Issue Brief: Without Robust Guardrails, AI Harms Workers

AFL-CIO Tech Institute

Employers are rapidly deploying AI systems across workplaces to monitor, evaluate, manage, and replace workers. Despite grandiose claims that AI systems can deliver unprecedented productivity gains and usher in a golden era of economic growth, evidence shows that the hype surrounding AI is divorced from reality. AI systems, many of which are available to the public, are often error-prone, produce biased results, and may even generate outputs that advise illegal activity. In many cases, these AI uses are occurring without employees’ knowledge or consent and with little or no regulatory oversight. These systems harm workers in many ways: from exploitative surveillance tools and discriminatory management algorithms to dangerous experimentation and job elimination, AI is causing measurable harm to workers and the public. Left unregulated, these technologies intensify workplace injuries, perpetuate racial and gender discrimination, lead to deskilling, threaten intellectual property rights, and undermine the fundamental dignity of work. AI is often deployed as a tool that increases the power of employers at the expense of workers. This is why the AFL-CIO developed a first-of-its-kind national labor AI agenda. The AFL-CIO’s AI Principles center workers, ensuring that the benefits of new technologies are widely shared and that their deployment does not lead to dangerous, discriminatory, and anti-worker outcomes.

FACT: Employers increasingly use workplace AI systems for key functions, such as hiring, scheduling, task assignment, performance evaluation, and even disciplining or terminating workers.

FACT: Unregulated AI tools are being deployed in high-risk, safety-sensitive sectors like healthcare, transportation, and heavy industrial settings.

FACT: Without careful testing and human oversight, these tools pose substantial risks to workers and to the public who rely on public sector services.

FACT: Workers have the right to demand safeguards for how technologies are being used in their workplace and in society at large.

Weaponizing Workers’ Data to Surveil Workers

Nearly 9 out of 10 large companies use AI in one or more business functions. These tools are often used to continuously collect and analyze large amounts of workers’ data. Sensitive information, including workers’ wages and benefits, medical and family leave, correspondence, online activity, disciplinary actions, and geographic location, can be fed into AI platforms used to manage workers without informing them what data is being collected, how it is being used, or whether it is being shared with or sold to third parties.

These tools are used across a myriad of sectors, enabling invasive surveillance of workers both during and after work and raising significant privacy concerns. Some drivers are monitored through AI-enabled cameras, phone apps, and sensors that track their every move, and their eye-movement data is used to analyze “attentiveness.” The system sends video and audio clips to management and issues verbal alerts to drivers, sometimes leading to discipline for mistakes they didn’t even make. Call center software constantly monitors both workers’ and customers’ voices, using emotion recognition technology to “detect” frustration and provide scripted responses. The technology is used to discipline workers even when the system is faulty, and employees have no control over what data about them is collected, whether it is accurate, or how it is used. There are also recorded instances of workers being required to consent to having their biometric data and movements tracked by their employers as a condition of employment.

Unfair and Exploitative Automated Management Systems

Data from surveillance systems and other sources is also used to evaluate and manage workers. These automated decision systems, often referred to as bossware, perform various management activities, including hiring, termination, scheduling, task allocation, performance evaluation, and wage-setting. Often, these systems have little or no meaningful human oversight and are deployed without giving workers the option to opt out. These technologies can have a dramatic impact on other elements of job quality, including worker health and safety, professional discretion, worker autonomy, job satisfaction, and dignity. Using algorithmic management tools to discipline workers or impede their autonomy is strongly correlated with reported negative outcomes for workers, including a higher likelihood of getting injured on the job and less safe workplace conditions.

Often, these tools are used to measure “productivity,” ranking employees’ performance based on how their metrics align with productivity quotas. One tool used by fast food companies is marketed as a “performance coach” that records employees’ interactions with customers. Employees are graded on their ability to upsell products and maximize profits, with some employers even grading employees on their facial expressions. Another tool introduced by Amazon requires its warehouse workers to wear tracking devices that monitor the speed at which they complete tasks. Under the guise of enhancing productivity and making workplaces safer, these business practices have resulted in systemic safety failures and high injury rates. A study by Human Impact found that the overuse of technology for worker surveillance leads to increased harm to workers, including excessive discipline, injuries, and even termination. 

Not only can AI intensify working conditions through exhaustive automated scheduling or unrealistic task assignments, but it can also have a counterproductive effect on workers’ performance, exacerbating workers’ anxiety and leading to higher levels of stress and burnout. Often, AI productivity tools are designed to gamify work, pitting workers against each other to perform tasks at unsustainable levels. In one instance, Amazon created a video game, MissionRacer, that ranked workers on their ability “to assemble customer orders fastest.” When such games are paired with tools that measure workers’ time spent not completing tasks, workers are pressured to place themselves in harm’s way to beat the clock. For workers with disabilities and those who are pregnant, for example, the health risks are multiplied, as they skip using the restroom or taking breaks to meet productivity quotas. Tech that pushes workers to produce more and faster comes at a high cost, resulting in mental and physical harm.

Additionally, employers across sectors such as healthcare and customer service are increasingly using AI and algorithmic systems, rather than human managers and negotiation, to determine worker compensation. AI firms specializing in labor management offer automated products that monitor workers and make workplace decisions that may determine wages. These automated systems lack transparency in their calculations. Surveillance wage systems allow employers to pay different people different wages for the same work, undermining the basic tenet of equal pay for equal work and potentially violating employment and civil rights laws. For instance, ride-sharing platforms collect and analyze data on drivers’ hours, fare acceptance rates, and tolerance for lower pay to gradually reduce compensation over time. A study of over 500 early-stage AI companies that specialize in workforce management found that this trend is not limited to gig work. More industries are finding ways to use data collected through worker surveillance and monitoring systems to pay the same worker less money for longer hours.

Discrimination and Bias Embedded in AI Models Violates Workers’ Civil Rights

Without protections, working women and people of color will likely experience the worst outcomes of this technology’s use in the workplace. For example, people of color are disproportionately more likely to report being monitored or having tasks or schedules assigned by algorithmic management software. One study found that Black workers are nearly twice as likely as white workers to report being monitored by bossware technologies. In addition, the use of predictive analytics to screen candidates for job openings has become widespread in recent years. There have been numerous reported instances of discrimination in which AI-powered hiring software allegedly filtered out job applicants based on race, gender, age, and disability. Algorithms trained on biased or unrepresentative data can lead to discriminatory and potentially illegal workplace conditions, not only in hiring but across a wide spectrum of vital services, including access to housing, healthcare, insurance, and public benefits.

Furthermore, AI-enabled predictive systems can perpetuate or worsen racial biases. Even before recent AI innovations in healthcare, medical technologies were often developed using datasets that excluded large groups of people, leading to worse outcomes for marginalized communities. For instance, pulse oximeters, used to measure people’s oxygen levels, often overestimate oxygen levels in darker-skinned individuals, leading Black patients to have their oxygen levels read as normal when they may in fact be low. Similarly, historical lack of access to healthcare can lead predictive systems to conclude that certain populations are less likely to need treatment, thereby further reducing their access to care.

Employers Can Use AI Systems to Illegally Block Organizing Activity

Historically, employers have used a variety of tactics to block union organizing efforts, including unlawful surveillance, monitoring workers’ engagement with unions and organizing activity, and coercing workers to oppose unionization. Now, many employers are turning to AI tools to assist in anti-union activities. These tactics include deploying AI tools embedded in workplace devices that disseminate anti-union messages and ask intrusive questions about workers’ union sympathies. Some companies repurpose military surveillance and intelligence AI systems that track and analyze worker data to identify “potential threats” to the organization, including unionization efforts. These tools can be used to surveil workers, tracking labor-friendly phrases or sentiments expressed in workplaces and on workers’ social media. Some of these tools are also used to forecast the likelihood of workers organizing or leaving an employer. Intrusive surveillance methods can enable employers to monitor organizing activity and target employees who express an interest in forming a union with anti-union or even threatening messaging, thereby violating federal labor law.

Deskilling Caused by AI Erodes the Dignity of Work

While some AI tools may allow workers to complete tasks faster or focus on more productive work, they can also lead to the long-term loss of core knowledge and expertise, resulting in deskilling. Furthermore, automation can fragment work in ways that break “complex holistic knowledge into discrete tasks,” thereby undermining workers’ expertise and professional judgment. This can also lead to some tasks being automated or outsourced off-site, where workers may not have the same training or credentials. Fragmented work often lends itself to diminished worker power, creating more exploitative working conditions. When jobs are unbundled, the very nature of a worker’s day can shift: some parts are automated while others intensify. For example, state government workers have shared that when some aspects of benefits analysis are automated, only the most complex cases remain for review, making the work more intensive and difficult.

Untested, Unregulated AI Systems Expose Workers and the Public to Dangerous Experimentation

Without transparency requirements and limits on how and when they can be deployed, AI systems often operate as “black boxes,” with limited visibility into their design or intended use. AI developers are not required to conduct or publish pre-deployment testing to ensure that their systems adhere to existing laws and regulations and do not pose an imminent threat to the public. Instead, systems are often rushed to market, and there are numerous examples of untested AI operating in unintended ways. Ranging from the silly to the serious, these failures can create life-or-death situations. For example, AI-enabled automated vehicles have made roads unsafe for other vehicles and pedestrians. Numerous incidents, including traffic jams caused by a blackout failure, ignored school bus safety rules, impeded emergency responders, failures to respond to weather conditions, driving through active shootings, running over children and pets, and even crashing into pedestrians, demonstrate a very real pattern of harm directly caused by this poorly regulated technology.

The stakes are similarly high in healthcare, where untested, unproven AI-driven systems are in use. For example, systems designed to detect sepsis have high error rates, including both false positives and false negatives. Crucially, relying on developer claims of accuracy proved ineffective: independent researchers found accuracy rates 14-20 percentage points lower than those promised by the company selling the product. Another study found that in-hospital mortality prediction models detected only 34% of patient injuries. Investing in prediction systems that produce inaccurate results not only harms healthcare workers, whose expertise and professional judgment are at times sidelined by these systems, but can also have dire consequences for patients. A 2025 study further shows that different medical large language models (LLMs) frequently offer different responses to the same medical queries, demonstrating major problems with accuracy and reliability. At one hospital in New York City, nurses, who are members of the New York State Nurses Association, came to work one day surprised to find their patients hooked up to AI-driven devices assessing their conditions. The nurses had not received any training on using the devices or their data output for patient care. These systems are often used as a rationale to reduce the number of nurses physically present, undermining workers and potentially patient outcomes, especially where early intervention in a health crisis is paramount.

In mental health treatment, AI chatbots called “mental health companions” have become a cheaper alternative to doctors and human therapists, leaving more people with subpar mental healthcare services. Several teenagers have died by suicide following conversations with AI chatbots, and the scientific community is only beginning to study the relationship between using chatbots to self-manage mental health and self-harm. Typically, before a medical treatment is made widely available, it undergoes rigorous testing to ensure patient safety. For chatbots in therapeutic care, there has been only one randomized controlled trial, typically the gold standard for evaluating the effectiveness and safety of medical treatments. Yet therapy chatbots have exploded onto the market, claiming to be a remedy for the dearth of trained therapists. Even chatbots not designed to provide mental healthcare are being marketed as companions, leading people, especially children, to rely on them as confidantes, with serious if not tragic consequences.

Lack of Consent and Compensation Hurts Workers

In the creative industries, without smart policymaking and requisite safeguards, AI may upend the livelihoods of creative professionals who rely on effective intellectual property rights to earn compensation and benefits and to secure future career opportunities. Workers in the entertainment and media industries see their works, and often also their voices and likenesses, being stolen by generative AI that threatens to replace them. Generative AI systems, such as LLMs, rely on vast amounts of information to train their models, including news stories, art, and other content mined from the internet. Often, the companies behind these models do not compensate the creators and producers of this content, as reflected in dozens of ongoing lawsuits in which media companies and news outlets are suing AI companies for using their content to train models without consent or compensation. Actors, fashion models, musicians, and artists are also finding their works used to produce AI-generated images, songs, and other content without receiving compensation. AI companies then profit from tools built on the work of others.

Centering Workers is Critical to Responsible AI Adoption

It is important that we do not give in to the notion that AI adoption is inevitable or cannot be controlled. We must recognize this technology’s limitations and why it cannot supersede the knowledge, experience, and hands-on work required for many private and public sector jobs. Workers must have a hand in shaping when and how the technology is developed and deployed to ensure that it improves society, delivers public benefits, and does not lead to displacement or other harms for workers.

Technology should be a tool for improving workplace conditions and augmenting work. It should not be a cudgel for stripping workers of their autonomy. Policymakers should prioritize the principles of privacy, transparency, safety, accountability, and protection of workers’ and civil rights when developing an AI policy framework, ensuring that innovation has a positive impact on society, improves workers’ safety and working conditions, and creates opportunities for everyone. As rapid technological change is poised to define the future of millions of jobs, it is essential that federal, state, and local leaders champion policies that:

  1. Strengthen labor rights and broaden opportunities for collective bargaining
  2. Advance guardrails against harmful uses of AI in the workplace
  3. Support and promote copyright and intellectual property protections
  4. Develop a worker-centered workforce development and training system
  5. Institutionalize worker voice within AI R&D
  6. Require transparency and accountability in AI applications
  7. Model best practices for AI use with government procurement
  8. Protect workers’ civil rights and uphold democratic integrity