A high-profile AI policy report commissioned by California Governor Gavin Newsom has just set the stage for potential new AI regulation that could soon impact your hiring processes, workplace surveillance, and AI-driven decision-making. While the March 18 draft report is open for feedback until April 8 and could be revised before finalization, its recommendations are already shaping legislative discussions, including a proposed AI safety bill (SB 53) that could introduce new AI-related compliance and disclosure obligations. AI regulation is coming soon to California, and employers must prepare for this new wave. What are the four biggest takeaways from this report, and what should you do about them?
4 Biggest AI Policy Proposals That Could Affect Employers
You can read the entire 41-page report from the Joint California Policy Working Group on AI Frontier Models here – but we’ve made it easy and pulled the four biggest policy proposals that impact the workplace below.

Mandatory AI Risk Assessments and Third-Party Audits
The report strongly emphasizes independent, third-party AI safety assessments to prevent potential harms.
What This Means for Employers:
- Businesses using AI for hiring, promotions, performance reviews, and terminations may soon be required to conduct formal risk assessments.
- Companies may need to engage third-party auditors to verify that AI tools are not introducing bias, privacy risks, or unfair employment practices.
- AI-powered systems may need to demonstrate compliance with risk-mitigation protocols to avoid liability.
Transparency Requirements for AI Development and Deployment
California policymakers are increasingly focused on requiring AI companies and employers to disclose how AI models function, what data they use, and how decisions are made.
What This Means for Employers:
- Your HR and compliance teams may soon be required to explain how AI-driven hiring and workplace decisions are made.
- AI developers and deployers may be required to disclose the data sources behind AI models, ensuring they do not rely on biased or unlawfully obtained information.
- AI-powered workplace tools may need to include explainability features that clarify how they reach decisions.
AI Whistleblower Protections and Compliance Oversight
The report advocates for stronger legal protections for employees who expose AI-related risks. This means businesses could face new liabilities if they retaliate against workers who report AI-related issues.
What This Means for Employers:
- AI whistleblowers may be protected under expanded labor laws, similar to those covering workplace safety violations.
- Employers could face penalties for failing to investigate AI-related complaints.
- Internal compliance teams will need to update whistleblower policies to incorporate AI concerns.
Adverse Event Reporting and AI Incident Disclosure
The report calls for mandatory reporting systems that require companies to disclose AI-related failures, discrimination, or harm.
What This Means for Employers:
- If AI causes harm (like biased hiring decisions or data breaches), employers may soon be required to report incidents.
- Companies using AI in workforce management could face stricter documentation and reporting requirements.
- Regulators could impose penalties for failing to disclose known AI risks.
What’s Next?
As noted above, this draft report was prepared to seek feedback from stakeholders, including employers. Your organization can submit comments through an online form by April 8. The Joint California Policy Working Group on AI Frontier Models will review all comments and incorporate them into a final report, expected by June 2025.
What Else is Brewing?
Meanwhile, several pieces of AI-related legislation are working their way through the state’s legislative process, including:
- Assembly Bill 1018 seeks to regulate AI decision-making tools in employment and other key areas, imposing strict oversight on automated decision systems (ADS) in an attempt to prevent discrimination in the workplace and elsewhere. You can read about that bill here.
- The “No Robo Bosses” Act (Senate Bill 7) also seeks to regulate the use of ADS in employment, aiming to strictly limit AI-driven tools used in hiring, promoting, disciplining, and terminating workers. You can read all about that legislation here.
- Senate Bill 53 builds on failed legislative efforts (which you can read about here) by introducing whistleblower protections for AI workers, increasing transparency requirements for AI models, and potentially mandating independent risk assessments to ensure AI systems do not pose significant societal or workplace harms.
What Should You Do?
With over half of the world’s top AI companies headquartered in California, state regulations are likely to influence other states’ laws. Even businesses operating outside California should monitor these developments, as similar laws may soon emerge in your area. Make sure you engage your Legal and HR teams to consider the recommendations listed above.