In America: How Mobley v. Workday Inc. Is Quietly Reshaping Legal And Legislative AI Accountability In The United States

Claire Anderson

American civic identity rests on faith in the judiciary: its sanctity, its impartiality, and its role as guardian of prosperity under the Constitution. One enduring constitutional tension lies in the collision between individual economic rights and the state's regulatory ambitions. Economic liberty, far from an abstraction, has long served as a fulcrum in the contest over labor policy.

Institutions such as the U.S. Equal Employment Opportunity Commission (EEOC) exist for the explicit purpose of enforcing laws that prohibit discrimination based on protected statuses such as age, sex, race, and religion. Statutes such as Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act (ADEA), and the Americans with Disabilities Act (ADA) have served as a bulwark against discriminatory treatment and have grounded countless lawsuits over the past six decades. However, it remains contested to what extent these laws can adapt to fundamental shifts in labor relations in the digital age and to the omnipresent, seemingly inexorable use of artificial intelligence in all aspects of human activity.

But how do these developments affect the average person? Consider Derek Mobley, a 50-year-old Black American who suffers from depression and anxiety. He was rejected for jobs hundreds of times, not by a person, but by a machine, or rather, an algorithm. According to Mobley, the AI-based hiring software sold by the human resources technology company Workday contained algorithmic biases within its job applicant screening technology that discriminated against him on the basis of his race, his age, and his disabilities, all legally protected statuses. Despite the significant implications this case holds for the US workplace, it has garnered lukewarm media attention at best.

Mr. Mobley initially filed a charge of discrimination with the EEOC’s Oakland Field Office on June 3, 2021, followed by an amended charge a month later. Roughly a year later, the EEOC issued him a Dismissal and Notice of Right to Sue. When his original complaint was dismissed for failing to allege sufficient facts showing that Workday qualifies as an “employment agency”, a status that would make it liable under the ADEA and ADA, Mobley’s Amended Complaint added two further theories of liability that satisfied the prerequisites for the action. Namely, that Workday should be held liable as either:

  1. An indirect employer, a theory of indirect liability under which a party that exercises significant control or influence over employment decisions, without directly hiring or firing, can be held responsible for discriminatory practices.

  2. Or an employer’s agent, meaning “One who represents and acts for another under the contract or relation of agency”, which is a stronger claim that would increase liability for companies like Workday.

Both assertions challenge the traditional notion that only direct employers are responsible for discrimination in hiring practices. If either one is upheld by the higher courts, it would set a precedent exposing AI vendors to direct liability for algorithmic bias.

Yet Mobley’s initial complaint, filed on February 21, 2023, in the U.S. District Court for the Northern District of California, was dismissed, though the court again granted him leave to file an amended version. And he was not the only one affected.

Enough people were allegedly affected by this pattern that it gave rise to a collective action lawsuit against the firm. According to the court filings, the plaintiffs assert that hundreds of applications submitted through Workday’s platform were promptly rejected because of the algorithm’s hard-coded biases, an allegation that Workday categorically denies.

On May 16, 2025, Judge Rita Lin of the U.S. District Court for the Northern District of California issued a preliminary order allowing the lawsuit to proceed as a nationwide collective action under the ADEA. For context, collective actions require affected individuals to voluntarily opt in to the lawsuit, whereas class actions automatically include affected persons.

This case could have far-reaching implications for civil rights and AI regulation, potentially establishing that both third-party vendors like Workday and the employers who delegate a core hiring function to their AI-based screening tools can be held liable under Title VII, the ADA, and the ADEA. Such a ruling would expand the scope of these federal anti-discrimination laws to cover the consequences of biased labor practices and AI products. This contrasts with the previous scope, which was limited to “intent”, something these products cannot, by design, be said to possess. Not needing to prove intent against a defendant in employment discrimination cases could also set a dubious precedent for limiting freedom of contract and at-will employment, both of which give businesses significant leeway but are already heavily regulated by anti-discrimination law. Supreme Court cases such as Santa Clara County v. Southern Pacific Railroad Co. (1886) and Citizens United v. Federal Election Commission (2010) further complicate the constitutionality of a ruling in favour of Mr. Mobley, as they extended to corporations legal protections afforded to individuals, under the Fourteenth and First Amendments respectively. “Intent” thus becomes paramount in a legal argument over workplace discrimination, however challenging it may be to prove against AI systems whose indiscernible algorithms are intellectual property and thus difficult to scrutinize.

Furthermore, from a legal perspective, Workday can argue that the efficient, context-sensitive screening of applications actually increases fairness, since the system applies its rubric consistently across all applicant profiles, and that responsibility chiefly lies with the employer. This does not discount the biases allegedly ingrained in the algorithm; it speaks only to the system’s convenience as a service that allows a hiring company to look through more applications.

From a consequentialist standpoint, the ability to prove discriminatory intent might appear less important in cases of algorithmic discrimination, but the distinction carries far-reaching regulatory implications for workplace and third-party service liability, some of which cannot yet be anticipated given the uncharted legal horizons brought about by the digital age.

Furthermore, the mere fact that Mobley v. Workday, Inc. has been permitted to proceed, with any appeal of a final decision from the U.S. District Court for the Northern District of California going before the Ninth Circuit, further incentivizes private-sector stakeholders to be proactive about ensuring the ethical use of AI and to limit legal exposure to algorithmic bias before legal action occurs. A well-known example is Amazon’s removal of its automated applicant-screening software once the tool was discovered to favour men over women, an ostensible swing towards algorithmic prudence in the workplace.

On a related note, Mobley v. Workday, Inc. also adds both positive and negative dimensions to the legislative push towards a comprehensive legal framework for artificial intelligence writ large. For example, Virginia’s House Bill 2094, which aimed to sharply curtail and penalize the use of “high-risk” AI systems, was recently vetoed after gaining noteworthy traction on the legislative floor. Without a comprehensive federal bill regulating AI systems and data governance, state lawmakers are reactively crafting local AI legislation. Another example, perhaps the most prominent of these restrictive bills, is Texas Representative Giovanni Capriglione’s algorithmic-discrimination bill, the Texas Responsible Artificial Intelligence Governance Act, or TRAIGA, which would impose legal liability on developers, distributors, and users of AI systems in a broad sense.

This has led to questions about how to craft appropriate, consistent protections for individuals. Free speech advocates such as the Foundation for Individual Rights and Expression (FIRE) argue that existing laws and First Amendment doctrine already address the vast majority of the concerns legislators are seeking to resolve. Conversely, the Washington, DC-based advocacy group the Future of Privacy Forum (FPF), which takes a pro-regulatory stance on emerging technologies, posits that a policy-forward approach and the creation of new laws are needed for effective and responsible AI governance. Mobley v. Workday will provide, and already provides, a judicial perspective that informs stakeholders on the future of AI and federal anti-discrimination law.

With all this in mind, the federal collective action lawsuit has not yet received a final decision from the U.S. District Court for the Northern District of California. The case awaits the completion of discovery, which will compile the necessary evidence and allow both sides either to resolve the case or to pursue additional rulings before the trial proceeds in full. It is important to note that should the case continue to the Ninth Circuit, each legal argument advanced by Mobley’s collective action would face stricter scrutiny than in the lower court, making the case harder to win but more salient to the public.

