The U.S. House of Representatives recently convened its sixth congressional hearing focused on the burgeoning integration of Artificial Intelligence (AI) into the American workplace. The session, titled “Building an AI-Ready America: Understanding AI’s Economic Impact on Workers and Employers,” was spearheaded by the House Education and Workforce Subcommittee, chaired by Representative Ryan Mackenzie (R-Pa.), and provided a platform for diverse perspectives on navigating this transformative technological shift. The proceedings highlighted a significant schism between employer advocates, who largely favor a unified federal approach to AI regulation, and worker advocates, who champion the continued autonomy of state-level protections.
The debate over AI regulation in the workplace has intensified as businesses increasingly adopt AI-powered tools for recruitment, performance management, and operational efficiency. This latest hearing underscored the tension between fostering innovation and ensuring robust worker protections in this rapidly evolving landscape.
The Case for Federal Preemption: Easing Administrative Burdens and Promoting Innovation
A central theme advanced by employer-side witnesses was the urgent need for federal preemption of the patchwork of state laws emerging around AI in employment. Matthew Paul Gizzo, a shareholder at Ogletree Deakins and co-chair of the firm’s Technology Practice Group, presented a case for a streamlined federal framework. Gizzo, with extensive experience in wage and hour compliance and AI governance, argued that the complexity of existing federal, state, and local regulations already poses significant compliance challenges for businesses of all sizes. He pointed to the substantial amounts recovered by the Department of Labor (DOL) in back wages—over $259 million in Fiscal Year 2025 alone—as evidence not necessarily of widespread employer malfeasance, but of the inherent difficulty of navigating these intricate legal landscapes.
Gizzo highlighted AI’s potential as a powerful compliance tool, particularly for small and mid-sized employers who often lack the extensive resources of larger corporations. He detailed specific use cases where AI can proactively mitigate compliance risks. These include AI-assisted timekeeping systems that instantly flag missed punches or calculation errors, payroll platforms that automatically adapt to varying overtime rules across different jurisdictions, and scheduling tools designed to prevent violations of predictive scheduling laws. "AI does not replace human judgment; it complements it," Gizzo asserted, emphasizing that technology can serve as a critical support mechanism for ensuring adherence to labor laws. He urged Congress to establish a balanced federal framework that encourages AI innovation in the compliance space, arguing that in this domain, "the benefits greatly outweigh the risks."
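The compliance checks Gizzo describes can be illustrated with a minimal sketch. The code below is a hypothetical, rule-based simplification of what such a tool might do—the overtime thresholds and data shapes are illustrative assumptions, not legal guidance or any vendor's actual product:

```python
from dataclasses import dataclass
from typing import Optional

# Simplified, hypothetical overtime rules per jurisdiction:
# (weekly threshold in hours, daily threshold in hours or None).
# Real rules are far more nuanced; these values are for illustration only.
OVERTIME_RULES = {
    "federal": (40, None),  # FLSA-style weekly overtime only
    "CA": (40, 8),          # California also applies daily overtime
}

@dataclass
class Shift:
    day: str
    punch_in: float            # hours since midnight
    punch_out: Optional[float] # None models a missed punch-out

def flag_missed_punches(shifts):
    """Return days with a missing punch-out, flagged for payroll review."""
    return [s.day for s in shifts if s.punch_out is None]

def overtime_hours(shifts, jurisdiction):
    """Compute overtime under the simplified rules for one jurisdiction."""
    weekly_cap, daily_cap = OVERTIME_RULES[jurisdiction]
    total = daily_ot = 0.0
    for s in shifts:
        if s.punch_out is None:
            continue  # skip incomplete records rather than guess hours
        worked = s.punch_out - s.punch_in
        total += worked
        if daily_cap is not None and worked > daily_cap:
            daily_ot += worked - daily_cap
    weekly_ot = max(0.0, total - weekly_cap)
    return max(weekly_ot, daily_ot)

week = [
    Shift("Mon", 9, 19),    # 10-hour day
    Shift("Tue", 9, 17),
    Shift("Wed", 9, None),  # missed punch-out
    Shift("Thu", 9, 17),
    Shift("Fri", 9, 17),
]

print(flag_missed_punches(week))   # ['Wed']
print(overtime_hours(week, "CA"))  # 2.0 (daily overtime on Monday)
```

The point of the sketch is the instant, per-jurisdiction flagging Gizzo describes: the same punch data yields different overtime under federal and California rules, and incomplete records are surfaced rather than silently paid out.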
Chatrane Birbal, senior vice president of public policy and government relations for the CHRO Association, echoed Gizzo’s call for federal preemption. Representing nearly 400 large U.S. employers, Birbal described the significant compliance headaches created by the growing divergence in state laws. Conflicting definitions, varying audit mandates, and differing notification requirements across states like New York, California, and Colorado present substantial operational hurdles for multi-state employers. Birbal advocated for a federal, principle-based framework built upon transparency, accountability, and risk management, suggesting that instead of creating new AI-specific statutes, policymakers should focus on clarifying how existing laws apply to AI technologies.
The New York City Automated Employment Decision Tool Law (Local Law 144), which took effect in January 2023, was frequently cited as a cautionary example. This law requires employers to conduct bias audits of their automated employment decision tools and provide notice to candidates and employees. However, a December 2025 audit by the New York State Comptroller revealed that the city lacks an effective system for enforcing this law, raising questions about the practicality and efficacy of such state-specific mandates. Employer representatives argued that a fragmented regulatory environment hinders efficient business operations and could stifle the adoption of beneficial AI technologies.
Worker Advocates’ Concerns: Safeguarding Rights in the Age of AI
In stark contrast to the employer perspective, worker advocates stressed the critical need for robust, and often state-led, regulations to protect employees from potential harms associated with AI in the workplace. Ranking Member Ilhan Omar characterized the push for federal preemption as "a blank check to big tech," arguing that states must retain the flexibility to enact and enforce protective measures while federal policy catches up to the pace of technological advancement.
Sara Steffens, worker power director for Rebuild Progress, brought two decades of experience in worker advocacy to the hearing. She highlighted the escalating issue of workplace surveillance, warning that AI amplifies the trend by making monitoring faster, cheaper, and more difficult for employees to detect. Citing a study of 1,500 U.S. employers and 1,500 employees, she reported that 74% of companies now employ online monitoring, including real-time screen tracking (59%) and web browsing logs (62%).
Steffens articulated several specific concerns relevant to HR leaders formulating monitoring policies. These included the potential for AI to facilitate discriminatory hiring and promotion practices by analyzing employee data in ways that perpetuate existing biases. She also raised alarms about the erosion of employee privacy, as AI can monitor productivity, sentiment, and even personal communications, creating a pervasive sense of being under constant scrutiny. Furthermore, she cautioned against the use of AI for performance evaluations without human oversight, which could lead to unfair assessments based on potentially flawed algorithms. The risk of AI being used to suppress unionization efforts by identifying and targeting organizers was also a significant concern.
"Without regulation, AI allows employers and data brokers to accumulate, analyze, and sell workers’ highly specific data in ways that can never be erased," Steffens stated emphatically. She called for federal disclosure requirements and clear limitations on the types of data employers can collect, store, and share, specifically mentioning biometrics, location data, and communications. Her perspective underscored the belief that proactive regulation is essential to prevent the unchecked exploitation of worker data and maintain a balance of power between employers and employees.
The Economic Landscape: Data Gaps and Policy Risks
Rachel Greszler, an economist with Advancing American Freedom, offered a data-driven perspective, emphasizing the challenges of formulating effective policy in the face of significant data gaps. Greszler pointed out that most organizations lack the granular data necessary to accurately assess where AI is displacing tasks, which jobs are most vulnerable, and what specific skills will be in demand in the future. This uncertainty makes it difficult to anticipate the full economic impact on the workforce and to develop appropriate support mechanisms.
Greszler recommended two key actions: enhancing data collection and analysis regarding AI’s impact on jobs and skills, and investing in workforce development programs that focus on adaptable skills rather than narrow, task-specific training. She argued that AI more frequently replaces specific tasks rather than entire jobs, a nuance with direct implications for workforce planning, reskilling initiatives, and the redesign of job roles. Understanding this task-level impact is crucial for HR leaders to effectively guide their organizations through this transition.
A related data challenge identified by Greszler concerns the undercounting of independent contractors. She cited the Bureau of Labor Statistics (BLS) contingent worker survey, which reported approximately 12 million independent contractors in 2023, a figure significantly lower than other estimates that range up to 72 million. This discrepancy in data collection methods can lead to incomplete analyses of the workforce and the impact of AI on different employment arrangements.
The Evolving Role of HR and the Path Forward
The hearing illuminated the multifaceted challenges and opportunities that HR leaders are currently confronting. Birbal described an emerging governance model in leading companies, where AI adoption is a collaborative effort between Chief People Officers and Chief Technology Officers, supported by substantial investments in employee training, clear communication strategies, and mechanisms for two-way employee feedback. This human-centric approach to AI integration aims to maximize benefits while mitigating risks.
Practical use cases shared by Birbal included AI’s application in enhancing employee onboarding by personalizing content and identifying potential skill gaps early on. AI can also be used to improve employee engagement by analyzing sentiment from surveys and communication channels, enabling proactive interventions. Furthermore, AI tools can streamline internal communications by ensuring that relevant information reaches the right employees at the right time.
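The sentiment-analysis use case above can be sketched in miniature. A production system would rely on a trained language model; the keyword heuristic below, with an entirely hypothetical word list, only shows the shape of the workflow—triaging survey comments and surfacing recurring negative themes for proactive follow-up:

```python
from collections import Counter

# Hypothetical theme keywords for illustration; a real deployment would
# use a trained sentiment model, not a hand-written word list.
NEGATIVE = {"burnout", "overworked", "unclear", "ignored"}
POSITIVE = {"supported", "clear", "growth", "flexible"}

def triage(comments):
    """Flag predominantly negative comments and tally recurring themes."""
    flagged, themes = [], Counter()
    for comment in comments:
        words = set(comment.lower().split())
        negative_hits = words & NEGATIVE
        if len(negative_hits) > len(words & POSITIVE):
            flagged.append(comment)
            themes.update(negative_hits)
    return flagged, themes

comments = [
    "Feeling overworked and close to burnout",
    "My manager keeps goals clear and I feel supported",
    "Priorities are unclear this quarter",
]

flagged, themes = triage(comments)
# Flags the burnout and unclear-priorities comments for HR follow-up,
# while the positive comment passes through untouched.
print(len(flagged), dict(themes))
```

Even this toy version reflects the governance concern raised by worker advocates: which comments get flagged depends entirely on choices embedded in the system, which is why human oversight of such tools was a recurring theme at the hearing.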
The contrasting viewpoints presented during the hearing underscore the complex task before policymakers. While employer groups advocate for a unified federal approach to avoid regulatory fragmentation and foster innovation, worker advocates emphasize the need for strong safeguards to protect employees from potential exploitation and surveillance. The insights from economists highlight the critical need for better data to inform policy decisions.
Ultimately, the “Building an AI-Ready America” hearing served as a crucial step in the ongoing national conversation about AI’s transformative influence on the workplace. The divergent testimony suggests that any future federal policy will likely need to strike a delicate balance, aiming to promote technological advancement while robustly defending worker rights and ensuring equitable economic outcomes in the age of artificial intelligence. The coming months and years will reveal how Congress and regulatory bodies navigate these competing interests to shape the future of work in America.
