May 9, 2026
Employers Still Playing Catch-Up on AI Risk Management, Littler Report Finds

Artificial intelligence (AI) has emerged as the foremost source of workplace policy and regulatory concern for U.S. employers in 2026, marking a significant shift in corporate priorities. This finding, published Wednesday by the prominent law firm Littler Mendelson in its latest employer survey report, indicates that AI-related challenges now overshadow previously dominant topics such as immigration and diversity, equity, and inclusion (DEI). The rapid advancement and widespread adoption of AI technologies across various business functions have propelled these concerns to the forefront, demanding urgent attention from C-suite executives, HR professionals, and in-house legal counsel.

The Shifting Landscape of Employer Concerns

The Littler Mendelson 2026 Employer Survey Report, which queried over 300 U.S.-based C-suite executives, in-house lawyers, and HR professionals, underscores a dramatic acceleration in AI’s perceived impact. The proportion of respondents anticipating AI-related shifts in their workplaces roughly doubled from the previous year’s survey, when fewer than half of employers expected such developments — a jump that reflects growing awareness of, and direct experience with, AI’s transformative potential and associated risks. This pronounced increase signals a critical inflection point: AI is no longer a futuristic concept but an immediate and tangible factor reshaping the operational and legal fabric of American businesses.

This reordering of priorities highlights the unprecedented speed at which AI has integrated into daily business operations and the subsequent regulatory void or fragmentation it has exposed. While immigration and DEI have historically generated considerable policy and compliance complexities, the emergent and multifaceted nature of AI’s implications—ranging from data privacy and algorithmic bias to job displacement and intellectual property—presents a novel set of challenges that employers are scrambling to address.

Rising AI Adoption and Governance Efforts

The survey data also revealed a substantial increase in AI adoption within organizations. A significant 54% of respondents reported actively using AI in their human resources (HR) functions specifically, indicating that AI is not merely an IT or operational tool but is directly influencing critical aspects of talent management, from recruitment and onboarding to performance evaluation and employee relations. Furthermore, only a small minority, 6%, stated they were not using AI for any function whatsoever, illustrating the pervasive integration of this technology across the corporate landscape.

In response to this widespread adoption, employers are increasingly implementing formal AI governance policies. The 2026 survey found that 68% of organizations had established such policies, a stark contrast to the 38% reported in Littler’s 2025 report. This near doubling of governance efforts within a single year is a positive indicator, suggesting that businesses are recognizing the imperative to manage AI’s deployment responsibly. Littler characterized this finding as "encouraging," acknowledging the proactive steps taken by many employers.

However, the report also highlighted significant gaps in comprehensive AI governance. Despite the increase in formal policies, fewer than half of organizations had instituted critical accompanying measures. These include procedures for vetting third-party AI vendors, providing tool-specific training for AI applications, or designating an internal AI oversight committee. This discrepancy suggests that while many employers have a foundational policy in place, the practical implementation of robust safeguards and oversight mechanisms is still lagging.

Niloy Ray, co-chair of Littler’s AI and technology practice group, noted this ongoing "catch-up" phenomenon. "That mismatch could leave employers vulnerable to significant risk, especially given the complexity around compliance," Ray cautioned in the report. This vulnerability stems from the rapidly evolving nature of AI technology itself, coupled with an equally dynamic and often inconsistent regulatory environment.

Key Litigation Fears and Regulatory Fragmentation

Employers’ concerns regarding AI-related litigation over the next 12 months are multifaceted, with data privacy emerging as the paramount worry. This encompasses sensitive employee and candidate data, as well as data derived from videos and images processed by AI. The potential for misuse, breaches, or non-compliance with existing and emerging privacy regulations like GDPR, CCPA, and future state-level privacy laws is a significant source of anxiety.

Other common concerns include discrimination and algorithmic bias. AI systems, if not carefully designed and monitored, can perpetuate or even amplify existing societal biases present in their training data. This can lead to discriminatory outcomes in hiring, promotions, performance management, and even termination decisions, exposing employers to considerable legal risk under anti-discrimination statutes such as Title VII of the Civil Rights Act. The "black box" nature of some AI algorithms further complicates efforts to identify and remediate such biases.

State-specific regulations also rank high among litigation fears. The absence of a comprehensive federal framework for AI governance has led to a patchwork of state and local laws, particularly concerning AI’s use in hiring. Jurisdictions like New York City (Local Law 144), Illinois (AI Video Interview Act), and Maryland have enacted legislation restricting or regulating the use of AI in employment decisions, requiring transparency, bias audits, and informed consent. Navigating these disparate requirements adds a layer of complexity for multi-state employers.

Finally, recordkeeping and documentation requirements related to AI usage are another area of concern. Employers need to ensure they can demonstrate compliance with all applicable laws, which often necessitates detailed records of AI system design, training data, usage logs, and impact assessments. The lack of standardized best practices for AI recordkeeping creates further compliance challenges.

Broader Context: The AI Revolution and Regulatory Response

The current surge in AI concerns and regulatory activity can be traced to the rapid advancements in generative AI, particularly following the widespread public availability of tools like OpenAI’s ChatGPT in late 2022. This breakthrough democratized access to powerful AI capabilities, quickly shifting AI from a specialized technology to a ubiquitous tool with immediate business applications. The subsequent proliferation of AI across industries necessitated a rapid re-evaluation of existing policies and the development of new ones.

Chronologically, the timeline of AI’s integration into the workplace and the regulatory response has been remarkably compressed. Prior to 2020, AI discussions largely revolved around automation and future workforce impacts. By 2021-2022, initial regulatory frameworks began to emerge, often at the state or municipal level, focusing on specific applications like AI in hiring. The period from 2023 onwards has seen a sharp increase in both AI adoption and legislative proposals, with the 2026 survey recording the highest level of employer concern to date as a result of these concurrent trends.

The regulatory landscape remains highly fragmented. While a growing number of states and localities have adopted AI-related legislation, particularly in the hiring domain, the federal approach has been less unified. The Biden administration issued an Executive Order on AI in October 2023, aiming to set standards for AI safety and security, protect privacy, and promote equity. The Trump administration, by contrast, has pursued a more industry-friendly AI agenda, including proposed restrictions on states’ ability to regulate the technology. This tension between federal and state approaches, and between competing federal philosophies, creates an unstable and complex regulatory environment for employers trying to ensure compliance.

Implications for the Workforce and Business Strategy

The report also examined AI’s impact on job displacement. While 15% of employers indicated they had either eliminated or were planning to eliminate headcount due to AI, a substantial 63% stated they had not and were unlikely to do so. This suggests that while AI is undoubtedly causing shifts, a widespread "robot apocalypse" scenario of mass job destruction is not the immediate reality for most employers. However, respondents were more likely to report that they had either reduced hiring for certain roles or reassessed job responsibilities in response to AI. This indicates a more nuanced transformation, where AI is augmenting roles, changing skill requirements, and influencing staffing patterns rather than simply replacing entire workforces. This ongoing redefinition of job roles necessitates significant investment in reskilling and upskilling initiatives to prepare employees for an AI-augmented future.

The implications for business strategy are profound. Employers must move beyond reactive compliance and adopt proactive, strategic approaches to AI integration. This includes:

  • Developing Comprehensive AI Governance Frameworks: Beyond basic policies, organizations need detailed procedures for AI tool selection, ethical impact assessments, ongoing monitoring, and incident response.
  • Investing in Training and Education: Employees, managers, and HR professionals require training not only on how to use AI tools effectively but also on the ethical considerations, potential biases, and legal implications.
  • Establishing Cross-Functional AI Oversight: Creating internal committees involving legal, HR, IT, and business units can ensure a holistic approach to AI strategy and risk management.
  • Vetting Third-Party AI Vendors Rigorously: Given the reliance on external AI solutions, thorough due diligence on vendor compliance, data security, and ethical practices is critical.
  • Monitoring the Evolving Regulatory Landscape: Staying abreast of federal, state, and international AI regulations is paramount to maintaining compliance and anticipating future requirements.

Marko Mrkonich, also co-chair of Littler’s AI and technology practice group, emphasized the multifaceted expertise now required. "Employers will need a combination of technical knowledge, business judgment and compliance focus to navigate AI as it further reorients work," Mrkonich stated. The firm’s data further suggests that while larger employers are generally more advanced in making AI-related workplace changes, the overall transformation driven by AI is still in its nascent stages across the broader economy.

In conclusion, the Littler survey paints a clear picture: AI is no longer an emerging technology but a central force reshaping the American workplace, bringing with it a complex web of opportunities and regulatory challenges. The significant increase in employer concern and the accelerating adoption of AI governance policies underscore a growing recognition of these realities. However, the identified gaps in comprehensive oversight mechanisms indicate that many organizations are still playing catch-up. As AI continues to evolve and integrate, employers face an ongoing imperative to develop sophisticated, agile, and ethically sound strategies. As Mrkonich aptly put it, "AI is opening new frontiers, redefining job responsibilities, changing the way we hire, and modifying staffing patterns. The old way of doing things is no longer good enough." Navigating this new frontier successfully will require continuous vigilance, adaptability, and a proactive commitment to responsible AI deployment.
