

When we wrote about the legal and ethical implications of A.I. in hiring in 2019, we focused on the assessment of job candidates before they had been hired or consented to an ongoing relationship with an organization. What has become increasingly clear since then is that the far more consequential, and far less scrutinized, deployment of A.I. may be happening deep within the employment relationship itself.
Over the past few years, digital innovations and advances in A.I. have turbocharged remote work through data capture, producing a new generation of workplace monitoring, performance analytics and employee profiling tools. Many of these technologies promise to help organizations improve productivity, identify high-potential talent, reduce unwanted turnover and allocate compensation more efficiently. The pitch is compelling: why rely on the inevitably subjective judgment of a manager who observes an employee for a few hours a week when you can have an A.I. system that synthesizes thousands of behavioral data points continuously and in real time?
But this power asymmetry between organizations armed with sophisticated predictive tools and employees who are largely unaware of how they are being profiled raises profound ethical and legal questions that the business community has not yet adequately considered or confronted. Whether they know it or not, most people have now been subject to “surveillance pricing” as consumers. For example, the airline that offers a specific fare bundle because loyalty-program data signals you are likely to buy it, or the website that charges more for infant formula because an algorithm has sensed the desperation of a new parent. The same logic, applied to the employment relationship, produces what labor advocates and researchers have begun to call “surveillance wages”: a system in which pay is set not by an employee’s performance or market value, but by formulas that use personal data—often collected without the employee’s knowledge or consent—to identify the minimum compensation she will accept before looking elsewhere. This is only the beginning.
To be sure, performance management has always been imperfect. Alan Colquitt’s research cited in Next Generation Performance Management consistently shows that performance ratings tell us nearly as much about the rater as about the person being rated, reflecting idiosyncratic biases, attribution errors and halo effects as much as actual job performance. Organizations have long recognized this problem and invested in calibration sessions, 360-degree feedback systems and structured rating scales in an attempt to reduce subjectivity. Now, A.I. promises to replace biased human judgment with objective, data-driven evaluation, but the transition from bias-laden human assessment to algorithm-driven appraisal introduces its own set of distortions. The added danger is that those distortions are invisible, self-reinforcing and cloaked in the authority of “objective” data science.
Before examining the specific temptations that employers will face and what can be done to address them, it is worth noting that the legal framework governing the employment relationship was not designed with these tools in mind. Employment law in the United States rests on a foundation of statutes like the Americans with Disabilities Act (ADA), Title VII of the Civil Rights Act, the Age Discrimination in Employment Act (ADEA), the National Labor Relations Act (NLRA) and an increasingly active patchwork of state privacy laws that were drafted to govern the conduct of human decision-makers, not algorithmic systems trained on behavioral data. As A.I. becomes a substitute for managerial judgment, the legal protections these statutes were designed to afford employees may be quietly circumvented.
What temptations will companies face in using A.I. to monitor and evaluate employee performance?
The first and most straightforward temptation is to use A.I. to monitor and evaluate employee behavior in ways that go far beyond what any manager could observe directly. Modern workplace monitoring tools can log keystrokes, track mouse movements and active screen time, analyze email and messaging patterns for sentiment and engagement signals, flag extended periods of inactivity, transcribe and interpret video calls and track an employee’s physical location through mobile phones or badge swipe data. Productivity platforms increasingly use machine learning to synthesize these digital outputs into a single performance score that is fed, often invisibly, into compensation, promotion and termination decisions. This is no longer a niche practice: a 2022 New York Times examination found that eight of the ten largest American companies surveil their employees with tracking software, while global demand for employee monitoring tools increased 65 percent between 2019 and 2022—a figure that has only grown as remote and hybrid work normalized continuous digital observation.
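To make the mechanics concrete, the sketch below shows, in deliberately simplified form, how such a platform might normalize and weight monitored signals into a single number. The signal names, population statistics and weights are invented for illustration; no particular vendor’s formula is implied.

```python
# Illustrative only: the signals, population statistics and weights below are
# invented; no specific vendor's scoring formula is implied.
signals = {
    "keystrokes_per_hour": 1450,
    "active_screen_minutes": 380,
    "messages_sent": 42,
    "avg_email_response_minutes": 35,
    "idle_minutes": 55,
}
population_stats = {          # assumed workforce mean and std for each signal
    "keystrokes_per_hour": (1200, 300),
    "active_screen_minutes": (350, 60),
    "messages_sent": (30, 12),
    "avg_email_response_minutes": (45, 20),
    "idle_minutes": (70, 25),
}
weights = {                   # negative weights penalize slow replies and idleness
    "keystrokes_per_hour": 0.3,
    "active_screen_minutes": 0.3,
    "messages_sent": 0.2,
    "avg_email_response_minutes": -0.1,
    "idle_minutes": -0.1,
}

score = 0.0
for name, value in signals.items():
    mean, std = population_stats[name]
    score += weights[name] * (value - mean) / std   # z-score each signal, then weight

print(f"Composite 'productivity' score: {score:.2f}")
# Every input measures activity, not the quality of work: an employee who pauses
# to think, works offline or takes medically necessary breaks scores lower.
```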
Microsoft’s Viva Insights platform, deployed across thousands of enterprises globally, tracks employees’ email response times, meeting attendance, focus hours and collaboration patterns, synthesizing these into dashboards visible to managers and HR. Commercial monitoring vendors such as Teramind and Hubstaff offer SaaS tools enabling any employer to log keystrokes, take random screenshots and generate per-employee productivity scores; Teramind’s platform additionally analyzes email content and web browsing behavior for “insider threat” detection.
Additionally, Amazon’s algorithmic management system in its warehouse operations tracks worker activity to the second via its “Time Off Task” (TOT) system: employees who accumulate more than 30 minutes of inactivity receive automated warnings, and those exceeding two hours face automatic termination workflows, entirely without manager involvement. In January 2024, France’s data protection authority (CNIL) fined Amazon €32 million for this “excessively intrusive” surveillance system.
The problem is that such systems measure activity, not performance. And they do so in ways that can systematically disadvantage employees with disabilities, caregiving responsibilities or non-traditional work styles. An employee who processes information slowly due to a learning disability, who takes frequent short breaks to manage anxiety or who thinks best in extended periods of offline focus may score poorly on an A.I. system calibrated on the behavioral signatures of historically top-rated employees. These top-rated employees also may have been rated highly due to factors unrelated to their actual contribution, such as gender, race or similarity to their supervisors. If historical performance ratings are biased, and the research suggests they frequently are, then training an A.I. model on those ratings will simply launder and amplify those biases at scale, with the additional complication that the resulting discrimination becomes harder to detect and challenge.
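A toy simulation makes the laundering dynamic visible. In the hypothetical sketch below (all data is synthetic and the single behavioral feature is invented), the model never sees the protected attribute, yet because it is trained to predict biased historical ratings from a feature that correlates with group membership, it reproduces the original rating gap.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic workforce: the protected attribute is never shown to the model,
# but a behavioral feature (say, logged after-hours activity, constrained by
# caregiving responsibilities) correlates with it.
group = rng.integers(0, 2, n)                        # 0 or 1; skill is identical
true_skill = rng.normal(0.0, 1.0, n)
after_hours = true_skill - 0.8 * group + rng.normal(0.0, 1.0, n)

# Historical ratings are biased: group 1 was systematically under-rated.
rating = true_skill - 0.7 * group + rng.normal(0.0, 0.5, n)
top_rated = (rating > np.quantile(rating, 0.8)).astype(int)

# Train only on the apparently "neutral" behavioral feature.
model = LogisticRegression().fit(after_hours.reshape(-1, 1), top_rated)
pred = model.predict_proba(after_hours.reshape(-1, 1))[:, 1]

for g in (0, 1):
    print(f"Group {g}: mean predicted 'high performer' probability = "
          f"{pred[group == g].mean():.2f}")
# Although true skill is identical across groups, the model inherits the
# historical rating gap, now laundered through an apparently neutral input.
```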
There is also a more insidious temptation: to use A.I.-generated performance profiles not merely to evaluate employees but to categorize them in ways that invisibly shape how they are managed, communicated with and developed over time. If an algorithm flags an employee as “low potential” or “high flight risk,” that categorization may subtly recalibrate every subsequent interaction she has with the organization, reducing the developmental investment she receives, limiting her access to stretch assignments and potentially creating a self-fulfilling prophecy of disengagement and exit. The A.I. doesn’t terminate the employee directly, but it reorganizes the environment around her until she leaves of her own accord. Under the ADA, an employer cannot take adverse action against an employee because it perceives her to have a disability or impairment. But if an A.I. system, trained on behavioral patterns correlated with depression, anxiety or ADHD, flags that employee for reduced investment, the legal and ethical boundaries become deeply blurred. This all means that A.I. is now not only empowered to identify or infer disabilities or disadvantages, but in some sense to create them.
What temptations will companies face in using employees’ personal data to profile and manipulate compensation?
A second, and more recently visible, temptation involves the use of data that extends far beyond the four corners of the employment relationship to calibrate an employer’s leverage over individual workers. This includes consumer data such as spending patterns and subscription services, which can reveal whether an employee is living paycheck to paycheck or has a financial cushion. It may include real estate records indicating the size of an employee’s mortgage. It may include signals from professional networks suggesting whether an individual is passively browsing external opportunities. When aggregated and processed by a predictive model, these data points can give an employer a remarkably granular estimate of the lowest salary a given employee will accept before seeking employment elsewhere.
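Mechanically, such a “reservation wage” estimate is nothing more exotic than a regression over personal-data features. The sketch below is purely illustrative, with invented feature names and synthetic data; it depicts the logic that labor advocates describe, not any specific vendor’s product.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 2000

# Hypothetical features a data broker might supply about employees.
features = np.column_stack([
    rng.integers(0, 2, n),          # has_payday_loan
    rng.uniform(0, 1, n),           # credit_utilization
    rng.integers(0, 2, n),          # recently_updated_linkedin
    rng.uniform(0, 40, n),          # commute_minutes
])

# Training target: the lowest offer each past employee actually accepted
# (synthetic here; financial distress lowers the floor, job search raises it).
lowest_accepted_offer = (
    85_000
    - 9_000 * features[:, 0]
    - 12_000 * features[:, 1]
    + 6_000 * features[:, 2]
    + rng.normal(0, 3_000, n)
)

model = GradientBoostingRegressor().fit(features, lowest_accepted_offer)

# The model now estimates, for any individual, the minimum she is predicted
# to accept -- which is exactly what "surveillance wages" optimize against.
new_employee = np.array([[1, 0.9, 0, 25.0]])
print(f"Predicted reservation wage: ${model.predict(new_employee)[0]:,.0f}")
```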
A first-of-its-kind August 2025 audit of hundreds of labor-management A.I. vendors by Veena Dubal and Wilneida Negrón, published by the Washington Center for Equitable Growth, found that employers in healthcare, customer service, logistics and retail are among the customers of vendors whose tools are specifically designed to enable this practice, with major U.S. companies including Intuit, Salesforce and Colgate-Palmolive identified as clients.
According to Nina DiSalvo, policy director at the labor advocacy group Towards Justice, some of these systems use signals directly associated with financial vulnerability, including data on whether a prospective or current employee has taken out a payday loan or is carrying a high credit-card balance, to infer the minimum pay she might accept. Employers can also scrape candidates’ public social-media pages, DiSalvo has noted, to determine whether they are more likely to seek to join a union or may become pregnant: data points that, if used in employment decisions, veer directly into discrimination territory under Title VII and the Pregnancy Discrimination Act.
This practice represents a fundamental distortion of what labor economists have traditionally understood as the employment bargain. A functioning labor market assumes that both parties to the wage negotiation are operating with roughly comparable information: that the employee knows her market value and can advocate for it, while the employer evaluates its options and competes accordingly. When employers gain access to fine-grained medical, psychological and financial profiles of individual employees, this equilibrium collapses. The employee believes she is negotiating, while in practice she is being managed to a predetermined outcome.
The gig economy offers the clearest early illustrations of surveillance wages in action. In on-demand healthcare staffing, platforms such as CareRev, Clipboard Health, ShiftKey and ShiftMed have been found to adjust pay for individual shifts based on what the algorithm knows about each worker, including how often a nurse accepts shifts, how quickly she responds to postings and what pay she has accepted in the past, rather than on the skill or seniority the shift requires. The Roosevelt Institute, drawing on interviews with gig nurses, found that this individualized pay-setting routinely produces situations where workers are paid materially different amounts for identical work, even within the same facility. ShiftKey even charges nurses fees for every shift worked.
In rideshare, comparable dynamics have been documented for years: Fordham University law professor Zephyr Teachout has written that Uber “uses data-rich driver profiles to match the wage to the individual incentives of the driver,” and drivers themselves report being offered different base fares for the same trip at the same time, with no transparency about how the starting rate is set. “It’s judging our desperation rate,” Nicole Moore, president of Rideshare Drivers United, has said—a phrase that captures, with uncomfortable precision, what surveillance wages are designed to do. The same logic is now migrating from gig platforms into traditional employment, as the tools that perfected algorithmic pay-setting for contingent workers become available to any employer willing to purchase them.
A 2023 article in the Columbia Law Review by Veena Dubal documented that Uber and Lyft use behavioral profiling to offer different base fares to different drivers for identical trips—a practice she termed “algorithmic wage discrimination.” Her research found that between 2021 and 2023, driver income declined substantially while Uber’s effective take rate from drivers reached 40 percent of gross bookings. The article identifies this as a structural shift from uniform wage-setting to individualized, behavior-contingent compensation.
Uber itself has acknowledged as much: in 2024, the U.K.’s Worker Info Exchange published evidence that Uber had admitted to using individualized driver behavioral profiles to set both pay rates and task allocation.
The legal status of such practices is, at best, uncertain. The NLRA guarantees employees the right to discuss their wages and working conditions with one another, a right that Congress specifically enacted because it recognized the inherent information asymmetry between employers and employees. Using A.I. to deepen that asymmetry in ways that effectively suppress individual wages may not constitute a technical violation of the statute, but it is difficult to argue that it honors its animating purpose. Moreover, if the personal data used to profile employees includes information that correlates with protected characteristics—financial precarity that proxies for race, consumer behavior that correlates with disability status or location data that reflects national origin—then the employer may be inadvertently encoding protected-class information into its compensation model, raising potential liability under Title VII, the ADA and the ADEA. As we have noted in prior work, the fact that a decision is not made directly on the basis of a protected characteristic does not immunize an employer from liability if the practical outcome is discriminatory.
What temptations will companies face in using A.I. to monitor employee psychological states and predict behavior?
Perhaps the most consequential, and least publicly examined, frontier of workforce A.I. is the application of behavioral and sentiment analysis to infer employees’ physical condition, psychological states, loyalty and future behavior. Tools already exist and are being commercially deployed that analyze patterns in employee email and messaging communications to produce individual engagement scores, identify potential flight risks before they materialize, flag early indicators of burnout or dissatisfaction and detect what vendors sometimes euphemistically describe as “cultural misalignment.” More sophisticated systems use passive sensing data from workplace devices to infer mood, cognitive load and interpersonal dynamics. Some organizations are now deploying continuous micro-surveys processed by natural language processing algorithms to monitor the psychological temperature of the workforce in real time and at the level of individual employees.
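Stripped to its essentials, a “flight risk” or engagement score of this kind is a weighted blend of linguistic and behavioral signals. The sketch below uses a deliberately crude word list and invented weights to show the shape of the computation; commercial systems rely on far more sophisticated language models, but the inferential leap from communication patterns to psychological state is the same.

```python
from dataclasses import dataclass

# Toy negative-signal lexicon; commercial tools use far richer language models.
NEGATIVE_TERMS = {"burned", "exhausted", "frustrated", "quit", "overwhelmed"}

@dataclass
class WeeklySignals:
    messages: list[str]          # employee's outbound chat and email text
    avg_response_minutes: float  # how quickly she replies to colleagues
    after_hours_messages: int    # activity outside scheduled working hours

def flight_risk_score(s: WeeklySignals) -> float:
    """Hypothetical 0-1 'flight risk' score inferred from communication data."""
    words = " ".join(s.messages).lower().split()
    negative_rate = sum(w.strip(".,!?") in NEGATIVE_TERMS for w in words) / max(len(words), 1)
    negativity = min(negative_rate * 10, 1.0)              # scaled word-list signal
    slow_replies = min(s.avg_response_minutes / 120, 1.0)  # slower = higher risk
    withdrawal = 1.0 if s.after_hours_messages == 0 else 0.0
    return round(0.5 * negativity + 0.3 * slow_replies + 0.2 * withdrawal, 2)

week = WeeklySignals(
    messages=["I'm exhausted and frustrated with this release."],
    avg_response_minutes=95.0,
    after_hours_messages=0,
)
print(flight_risk_score(week))  # 0.94 for this sample week
```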
Verint, an A.I. platform deployed by major insurance and financial services companies in their contact centers, analyzes employees’ voice tone and speech patterns in real time during calls, inferring emotional state and alerting supervisors when an employee appears disengaged, stressed or emotionally flat. The system provides managers with a continuous emotional readout of the workforce.
Microsoft’s Viva Insights, integrated into Microsoft 365 and used by millions of employees globally, provides managers with aggregated and individual-level data on employee “well-being” and “focus time,” and its 2025 Copilot Analytics extension tracks which employees are adopting A.I. tools and at what pace, creating a new behavioral metric that can feed into performance assessments. Perceptyx and Glint, the latter acquired by Microsoft, deploy continuous “listening” tools that analyze patterns in pulse surveys and open-ended text responses to generate individual-level engagement scores and flight-risk predictions, surfacing these to managers and HR.
These tools are often sold to employers as instruments of employee well-being, a way to identify and support those who are struggling before burnout or attrition occurs. That framing is not entirely disingenuous; there are legitimate use cases for understanding the aggregate health of a workforce and directing well-being resources toward those who need them most. But the granularity of individual-level data, combined with the inherent power imbalance of the employment relationship, creates conditions that are ripe for manipulation rather than support. An employer that knows which employees are most stressed, most financially vulnerable, most disengaged and most likely to leave is positioned not merely to help those employees, but to manage them by identifying and neutralizing sources of potential dissent, selectively investing retention resources in those deemed most valuable and quietly initiating the managing-out of those who do not fit the desired profile.
The ADA’s prohibition on disability-related inquiries extends to current employees, not only to job candidates. An employer cannot ask an employee whether she suffers from depression, anxiety or any other condition that might qualify as a disability under the Act. But if an A.I. system infers these conditions from behavioral data and routes those inferences into management decisions, limiting opportunity, reducing investment and accelerating separation, the employer may be effectively circumventing a legal protection that Congress deliberately enacted. The absence of a direct question does not eliminate the legal exposure, nor does it resolve the ethical one.
HireVue, one of the largest A.I.-powered hiring assessment companies, serving over 700 enterprise clients, deployed facial expression analysis technology using Affectiva’s emotion recognition software for several years to generate “employability scores” that weighted nonverbal cues from video interviews before discontinuing the feature in 2021 following documented concerns about disability bias and discriminatory outcomes. In 2023, the ACLU filed a complaint with the Colorado Civil Rights Division against Intuit and HireVue after research demonstrated that the A.I. system performed significantly worse when evaluating deaf, hard-of-hearing and non-white speakers, a finding with direct implications for ADA-covered applicants and employees. The same underlying technology, which infers characteristics correlated with protected conditions from behavioral data, is now commercially available not only for pre-employment screening, but also for post-hire performance monitoring.
A still more troubling frontier involves the deliberate use of employee psychological profiles to actively engineer behavior. Once an A.I. system has assembled a sufficiently detailed portrait of an employee’s psychological traits—the stable, enduring characteristics such as introversion, need for achievement, risk aversion or sensitivity to fairness—and psychological states—the more transient and situational conditions such as current stress levels, financial anxiety, career frustration or heightened ambition—the temptation to deploy that knowledge as a tool of behavioral influence becomes very real. This represents a qualitative shift from A.I. as a surveillance and prediction instrument to A.I. as a manipulation engine, operating at the level of the individual employee and calibrated to her specific psychological vulnerabilities.
The distinction between traits and states matters enormously in this context. Traits are relatively stable features of personality and temperament that predict behavior across situations: an employee’s characteristic level of conscientiousness, her chronic need for status recognition, her sensitivity to equity and fairness or her tendency toward self-doubt. States, by contrast, are transient psychological conditions shaped by recent experience: the employee who just received a difficult performance review, the recently divorced colleague who is suddenly financially precarious or the ambitious manager who has just been passed over for a promotion. Both are exploitable, but in different ways. While traits enable long-term, systematic profiling and targeting, states enable real-time, opportunistic leverage. An A.I. system sophisticated enough to track both is positioned to influence an individual employee’s behavior with a precision that no human manager could match, and that most employees would find deeply alarming if they understood it was occurring.
A few concrete examples illustrate how this might work in practice. For the employee whose profile reveals a high need for recognition but limited financial negotiating leverage, an employer might substitute effusive public praise for meaningful compensation increases: a recognition award, a “spotlight” feature in the company newsletter, a personal call from a senior leader. For the employee whose profile suggests a competitive disposition and strong status sensitivity, management communications might be systematically framed in terms of peer rankings and relative performance standings, driving effort without increasing pay.
For the employee with a highly developed sense of organizational justice and loyalty, appeals to mission and fairness may be deployed to discourage job searching without improving actual working conditions. For the employee whose profile reflects fragile self-esteem paired with a strong need to prove herself, a task might be introduced with language deliberately engineered to trigger a psychological reaction: “This is an unusually difficult challenge—not everyone has what it takes to pull it off, but we thought you might be bold enough to try.” For the employee whose profile indicates a strong need for group belonging and affiliation, the framing might instead appeal to collective identity and team loyalty: “The whole team is counting on you—you’re the kind of person who doesn’t let people down.” And for the employee whose behavioral data suggests a thin boundary between professional and personal identity, that very characteristic may be quietly exploited, with the A.I. routing high-visibility weekend assignments, last-minute deadline projects and holiday-adjacent demands to her calendar, because her profile predicts she will comply without complaint.
None of these interventions necessarily violates any existing employment statute in isolation. No law prohibits an employer from publicly praising an employee, framing a task as a challenge, appealing to team spirit or scheduling weekend work. But when these decisions are made systematically, at scale, on the basis of covertly assembled psychological profiles that employees did not consent to share, and in ways specifically designed to extract maximum effort or minimize compensation cost at the individual level, the cumulative effect is a form of institutional manipulation that strikes at the foundation of trust on which the employment relationship depends.
It is also worth noting that the scientific reliability of A.I.-generated psychological profiles is itself open to serious question. Psychological traits inferred from email sentiment analysis, calendar patterns or messaging response times are not the same as traits measured by validated psychometric instruments administered with informed consent. The gap between what the A.I. “knows” about an employee and what is actually true about her creates the additional risk that manipulation strategies will be built on inaccurate foundations, causing harm not only to the employee’s dignity and autonomy, but potentially to the organization’s own legitimate interests as well.
A parallel that has only recently received the legal attention it deserves offers a sobering preview of where this trajectory leads. In courts across the U.S. and in regulatory proceedings around the world, social media companies have faced, and in a growing number of cases now lost, judgments holding them accountable for knowingly designing their platforms to exploit the psychology of users, particularly minors, in ways that generated engagement and advertising revenue while causing documented harm to mental health and well-being. The core finding in these cases is not simply that the platforms were harmful; it is that their architects knew they were harmful, and that this knowledge prompted not a redesign but further optimization.
Internal research at companies like Meta confirmed that Instagram worsened body image and anxiety in teenage girls, and features like infinite scroll and variable-ratio notification schedules were modeled explicitly on the psychology of slot machines. The same mechanisms that caused harm also drove the engagement metrics on which the business model depended. In a precise and disturbing sense, the psychological harm to the user was not a side effect of the product, but the product itself.
In September 2021, the Wall Street Journal published internal Facebook documents provided by whistleblower Frances Haugen showing that Meta’s own research had confirmed Instagram worsened body image and mental health in teenage girls, findings the company chose not to act upon. By 2024, dozens of U.S. states had filed lawsuits against Meta alleging knowing design of addictive and harmful platform features. Federal courts have allowed these cases to proceed on product liability and consumer protection theories. This litigation is the closest existing legal precedent for what liability might look like when an employer is shown to have deliberately engineered workplace conditions to exploit employees’ psychological vulnerabilities, which is precisely what A.I.-driven behavioral profiling and manipulation, as described in this article, enables.
The logic of A.I.-driven employee profiling and manipulation is structurally identical, and the commercial incentives are, if anything, stronger. An employer that uses a continuously learning A.I. system to identify which employees respond to competitive framing, which are sustained by social approval, which can be retained through status recognition rather than pay and which have so thoroughly fused their professional identity with their work that they have no effective psychological off-switch, is engaged in precisely the same optimization logic as a social media platform maximizing session length. The goal is not employee flourishing; it is the extraction of maximum “discretionary” effort at minimum cost.
And just as social media companies discovered that the psychological mechanisms most effective for that extraction (variable reward, social comparison, fear of exclusion and the need for belonging) were also the ones most damaging to users’ long-term well-being, employers pursuing this path may find that the same is true of their workforces. Employees whose behavior has been systematically shaped by A.I.-calibrated stimuli, whose work habits, emotional responses and sense of self have been gradually engineered to serve organizational ends, are not engaged. They are, in the most clinically precise sense of the word, conditioned. In the most extreme version of this trajectory, the workplace becomes not merely a place of employment, but a carefully designed psychological environment engineered to make productive compliance feel not like an obligation but like a compulsion or an addiction, one that benefits the employer while the employee’s diminished autonomy becomes the cost that never appears on any balance sheet.
Whether the legal frameworks now being developed to address manipulative and addictive platform design will ultimately extend to the employment context remains to be seen. The behavioral science underlying both phenomena is identical. The power imbalance in employment, where the stakes include livelihood, advancement and professional identity, is considerably greater than that between a social media platform and a teenage user. The commercial incentive for employers to exploit psychological vulnerability is no less intense than the advertising incentive that drove platform design. It took more than a decade of documented harm, internal whistleblowers and sustained investigative journalism before courts began to hold social media companies accountable for what they knowingly built. Regulators and legislators who are only now grappling with that reckoning would do well to look ahead to the employment context before, rather than after, an equivalent body of harm accumulates and before the infrastructure of algorithmic psychological conditioning becomes as embedded in the workplace as the infinite scroll is in the smartphone.
What can organizations do to manage and mitigate these temptations?
The foregoing is not an argument against the use of A.I. in the assessment and management of human capital. These technologies are powerful, they are already widely deployed and they will only become more sophisticated. Refusing to engage with them is not a realistic option for most organizations. But there is a meaningful and consequential difference between using A.I. as a tool that augments human judgment in transparent, accountable and validated ways, and using it as an instrument of surveillance and manipulation that deepens the power asymmetry between employers and employees. The former is consistent with both ethical employment practice and long-term organizational health, while the latter is clearly not.
Several principles can help organizations navigate this distinction.
First, transparency: employees should know, in plain language, what data is being collected about them, how it is being used and which decisions it informs or influences. The fiction that behavioral monitoring is an incidental byproduct of routine workplace technology is no longer tenable, and employers who maintain it are accumulating significant trust deficits that will ultimately manifest as exactly the attrition and disengagement they are seeking to prevent.
Second, consent: there is a meaningful ethical distinction between an employee who knowingly participates in a performance monitoring program and one who is unaware that her communications and movements are being continuously analyzed. The fact that monitoring may be technically permitted under an employment agreement does not make it ethical to conduct it without meaningful disclosure.
Third, validation: just as pre-employment assessments should be validated against actual job performance, A.I. systems used to evaluate and profile employees should be regularly and independently audited for accuracy, adverse impact on protected groups, and alignment with legitimate organizational objectives. An A.I. system that produces biased or inaccurate outputs is not a neutral tool simply because a machine generated the result, and its perceived objectivity may make it more damaging than an equivalent human bias, precisely because it is harder to identify and challenge.
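Such audits need not be elaborate to be useful. As one example, the EEOC’s four-fifths rule of thumb for adverse impact can be checked in a few lines, assuming the organization can export each employee’s algorithmic outcome alongside a demographic attribute; the data in this sketch is invented.

```python
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group_label, selected_bool) pairs."""
    counts = defaultdict(lambda: [0, 0])          # group -> [selected, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / tot for g, (sel, tot) in counts.items()}

def adverse_impact_ratios(records):
    """Ratio of each group's selection rate to the highest group's rate.
    Ratios below 0.8 (the EEOC 'four-fifths' rule of thumb) warrant scrutiny."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items()}

# Hypothetical export: which employees the algorithm flagged for a raise.
records = ([("A", True)] * 48 + [("A", False)] * 52
           + [("B", True)] * 30 + [("B", False)] * 70)
print(adverse_impact_ratios(records))   # {'A': 1.0, 'B': 0.62} -> investigate
```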
Fourth, human oversight: consequential decisions about compensation, advancement, discipline, and termination should not be fully delegated to algorithmic systems. A.I. can and should inform these decisions, but it should not replace the human accountability that employment law and basic organizational justice demand.
Fifth, legislative engagement: organizations should actively support, rather than resist, the development of clear regulatory frameworks governing the use of algorithmic tools in employment decisions. Legislators are beginning to act. New York state has passed a rule requiring companies to disclose to consumers when prices are set algorithmically using their personal data, a model that advocates are pushing to extend to wages. More ambitiously, Colorado has introduced the Prohibit Surveillance Data to Set Prices and Wages Act, which would ban companies from using intimate personal data, including payday-loan history, location data or search behavior, to set what someone is paid algorithmically, while carving out performance-based compensation. Whether or not these specific measures become law, they signal the direction of regulatory travel. Employers who get ahead of this trajectory by auditing their own systems, adopting voluntary transparency standards and engaging constructively with policymakers will be better positioned than those who wait for enforcement to force their hand.
In Mobley v. Workday (N.D. Cal.), a federal class action filed in 2023, plaintiffs allege that Workday’s A.I.-powered applicant screening tools systematically discriminated against candidates based on race, age and disability, making it the first major U.S. class action seeking to hold an A.I. vendor (rather than the employer) directly liable for algorithmic employment discrimination. The court denied Workday’s motion to dismiss in July 2024; in May 2025 the court conditionally certified a nationwide ADEA collective action.
There is, however, a critical gap in this emerging legislative landscape that deserves explicit attention: every measure that has been proposed or enacted, from the E.U.’s General Data Protection Regulation and AI Act to California’s Consumer Privacy Act, from New York’s algorithm-disclosure rule to Colorado’s proposed wage-surveillance prohibition, addresses monitoring. They require that companies disclose what data they collect, how they process it and, in some cases, what automated decisions flow from it. These are meaningful protections, and they represent genuine legislative progress. But they say nothing about manipulation, and no legislature anywhere in the world has yet proposed a measure that does.
A company could comply fully with every transparency and disclosure requirement currently on the books, inform its employees in plain language that it has assembled a detailed psychological profile of each of them, and then proceed to use those profiles to craft individualized behavioral interventions: engineering which employees are praised in lieu of being paid, which are baited with competitive framing, which are assigned weekend work because their profiles predict compliance, which are told a project “might be too hard for you” because their profiles identify a reactive need to prove themselves. None of that is currently unlawful anywhere.
This gaping omission, in which legislative attention falls comprehensively on the inputs of algorithmic employment systems and virtually not at all on their behavioral outputs, represents the most significant unaddressed frontier in employment law today. Closing it will require regulators and legislators to grapple with a question that has no clear precedent: not merely what an employer knows about you, but what it is permitted to do with that knowledge to shape your behavior without your awareness and against your interests.
A.I.’s expansion from the recruitment process into the daily life of the employment relationship represents one of the most consequential and underexamined developments in the contemporary world of work. The same technologies that promised to make talent decisions fairer and more objective at the point of hiring can, in the hands of organizations operating without appropriate governance, become instruments of intrusion, control, manipulation and discrimination.
The legal frameworks designed to protect employees are straining to adapt to a technological environment that moves far faster than legislation, case law, organizational policies or employee handbooks. The possible harm that may result from surveillance wages and algorithmic psychological manipulation needs to be discussed, debated and mitigated.
Employees subject to it won’t be able to identify it or effectively inoculate themselves against it. That is precisely what makes it so dangerous, and precisely why it demands attention now, before it becomes the permanent infrastructure of the contemporary workplace. There are no simple answers to the questions we have raised here. But they are questions that urgently require honest engagement from technology companies, business leaders, HR leaders, legal scholars, regulators and workers themselves.

