Scale-AI’s Predatory Labor Practices

Second in the series, “Stealing and Hoarding Power from the Most Vulnerable: Authoritarian Practices in U.S. Business Cultures” (originally published 8 May 2024 on LinkedIn)

The Relational Democracy Project
13 min read · May 20, 2024
Image by Pradeep, with permission (standard license)

“Over time interviewing, I’ve found that I mainly screen for one key thing: giving a shit. To be more specific, there’s actually two things to screen for: (1) they give a shit about Scale, and (2) they give a shit about their work in general.” — Alexandr Wang, CEO & Co-Founder, Scale-AI (parent company to Outlier and Remotasks and operating under HireArt, HireDigital, RemoteWorker US, Work From Home, and more), https://alexw.substack.com/ (comments disabled)

Alexandr Wang gives a shit about Scale-AI. He gives a shit about his work, who he hires for management, and who he invites into his C-Suite. However, given the evidence, Mr. Wang does not appear to give a shit about the workplace conditions at the bottom of his artificial intelligence labor hierarchy. Both in the US and abroad, humans working for Outlier and Remotasks who “teach” Amazon’s and Meta AI’s LLMs — making possible the $7 billion start-up’s considerable profits* — are not included in Mr. Wang’s philosophy of giving a shit.

I worked full-time for four months remotely “training” chatbots on Scale’s Outlier platform. The workplace cultural conditions I experienced, observed, and documented reflect what the Wall Street Journal, Oxford University’s Internet Institute, and thousands of former employees recount: Scale-AI’s predatory labor practices create authoritarian cultural conditions for workers, not just abroad, but also here in the US.

The Wall Street Journal reported in August 2023 that Scale AI has helped create a “vast underbelly” of “digital sweatshops” abroad:

While AI is often thought of as human-free machine learning, the technology actually relies on the labor-intensive efforts of a workforce spread across much of the Global South and often subject to exploitation. … Scale AI has paid workers at extremely low rates, routinely delayed or withheld payments and provided few channels for workers to seek recourse, according to interviews with workers, internal company messages and payment records, and financial statements.

In 2023, the Oxford Internet Institute took note of AI’s expanding predatory labor domain and named Scale-AI’s Remotasks one of the Institute’s top violators of fair labor practices:

Last year, the Oxford Internet Institute, which scores digital work platforms on labor standards, highlighted Scale AI for “obfuscating” its labor process. In its assessment this year, the institute, part of Oxford University, gave Remotasks a score of 1 out of 10, failing the company on key metrics including its ability to fully pay workers.

As one former US Scale employee wrote, standards for performance that determine work status and compensation are opaque and shift constantly:

I recently started working at Scale AI and let me tell you, the concept of job security doesn’t exist to them. You will be working very well; even receiving a 4/5 performance rating and the next you’ll see a mail in your inbox stating you are laid off due to low performance rating.

Another former US Scale worker echoes this experience:

The work I was doing as a prompt engineer and subject expert was fulfilling and paid well. I had passed my training exam with flying colors and got a bonus for it. I even got bonuses after for my regular work as well. Then suddenly got kicked off the platform and management told me I would get put back on but no update was ever given.

In the US and abroad, complaints about Scale-AI’s workplace practices abound. The lack of communication channels for workers, inaccurate information offered by management, lack of communication from management, withheld or missing compensation without cause or recourse, lack of consistent performance standards, lack of consistent work quality standards, and the constant threat of losing access to the Outlier or Remotasks platform without justification or recourse are norms for workers in these workplace cultures.

How do these practices make the workplace cultures authoritarian?

Democratic vs Authoritarian Practices

At bottom, democratic cultures — whether workplace, family, or government — are created to share power with everyone. At bottom, authoritarian cultures — whether workplace, family, or government — function to steal power from those with the least and hoard it. What is power? Contrary to popular belief, power is not money, influence, authority, or bandwidth, although each functions to fuel power.

Power, at bottom, is the ability to generate and maintain forward momentum. All beings on the planet are born with power: we all have the inherent ability to generate and maintain our forward momentum. Our forward momentum belongs to us: we own our power. When management’s workplace practices create unjustified and unnecessary barriers, they block our forward momentum, functionally stealing our power.

A common example of a power-stealing practice — that is also a norm in authoritarian workplace cultures — is intentionally offering inaccurate information. If a team lead or other manager offers me inaccurate information, and I base a decision on it, my decision won’t be sound, and the inaccurate information will take me in the wrong direction. When that becomes apparent, I must find better information, make a new plan, and shift gears in a productive direction. The time, energy, creativity, and resources necessary to re-orient after consumption and use of inaccurate workplace information functions to slow, stagger, or stop dead my forward momentum. In the aggregate, power-stealing practices — like offering inaccurate information to workers — create toxic power-scarcity conditions at the bottom of the workplace hierarchy.

Authoritarian workplace cultures are created when the few who hold authority steal and hoard power through their practices, forcing those with less or very little power to adapt to the power-scarcity at the bottom of the hierarchy. The adaptation practices in hyper-competitive power-scarcity conditions often mimic — for survival — the power-stealing practices employed by management. In other words, the toxic conditions of the culture grow humans oriented to power-stealing.

Additionally, those with the least power are forced to comply with authority or face punishment in some form, the threat of which is directly expressed or implied. In authoritarian workplace cultures, punishment most often comes in the form of practices that inflict economic and psychological violence on workers at the bottom of the hierarchy.

What do authoritarian workplace cultures look like?

Authoritarian Workplace Conditions

Marlies Glasius, in “What authoritarianism is … and is not: A practice perspective,” uses the term authoritarianness to argue for an approach to understanding how authoritarianism functions in the everyday practices of ostensibly democratic states. Glasius’ orientation is from a political science perspective, and she notes that “professional political scientists can give little guidance as to whether there are such things as ‘everyday acts of authoritarianism’” that constitute authoritarian cultures.

Following Glasius’ lead, my critical ethnographic research — from philosophy of communication and cultural studies perspectives — focuses on a range of authoritarian and democratic expression, including everyday relational practices that in the aggregate constitute either democratic or authoritarian workplace, family, academic, and community cultures. The research ultimately offers recommendations for how to shift those infected cultures in healthy democratic directions.

Findings from seven years of ethnographic data collection and analysis in a variety of US cultures show that four baseline conditions are necessary for relationally democratic workplace cultures to grow and thrive: (1) openness, (2) transparency, (3) accurate information, and (4) nonviolence. Healthy relationally democratic workplace cultures create the conditions for worker safety, trust, and well-being.

Conversely, the findings show that the baseline conditions for relationally authoritarian workplace cultures to grow and thrive are (1) closedness, (2) inaccurate information, (3) non-transparency, and (4) violence (physical, psychological, economic, and/or discursive). Authoritarian workplace cultures — by functioning to steal power from workers at the bottom and plunging them into toxic power-scarcity conditions — destroy worker safety, trust, and well-being.

The data I collected and analyzed over three months (January–April 2024) show that Scale-AI’s Outlier workplace culture is unmistakably authoritarian.

Outlier’s Workplace Culture

I read the ad on LinkedIn: $40 an hour to help AI function better for humans. If I commit full-time to write for Outlier, the ad suggests, I can earn $1600 a week. The ad tells me I can work when I want — any hours, any days — and I will be paid weekly. I need only complete coursework and pass an assessment — for which I’ll be compensated, along with a bonus — and I can start earning immediately. It seems like the perfect gig: I can avoid being locked down in a management role while staying buried in words so I can finish writing up findings and monetize my field research.

Outlier seemed almost too good to be true. Sadly, it was.

I passed the assessment after completing the coursework. After onboarding, Outlier (and Remotasks) workers join Slack, are assigned a project group, and are supposed to be assigned a team lead for support. When I started at Outlier at the end of January, there were approximately 33K members in the Slack channels. Currently, there are approximately 173K members, a 424% worker increase over just three months. The mass of workers at the bottom far outnumber team leads, so many new workers are left without a team lead, a project or group, or any support. Scale’s industry employment numbers, however, appear impressive from the outside.

After workers join Slack, admins move them to the specific project to which they’ll contribute, along with related project and admin channels. Reasons or justifications for specific project assignments are rarely given. The opaque aims of these projects are also reflected in their names and multiple iterations: Flamingo Ultimate, Flamingo SFT, Bulba Peppermint, Bulba Spearmint, Seal, OTS single-turn, OTS multi-modal, etc.

I was added first to a project called “Flamingo.” Over two months, I was moved 18 times to different projects. Workers are expected to “train” on the new project to which they’ve been moved by reading long, complex, seemingly committee-generated conglomerations of task instructions. The last set of instructions I read — before deciding to leave Outlier last week — was 85 PowerPoint slides long. Some are even longer than that. At the beginning of April when paid tasks were being “throttled” (or stopped without notice or justification), my former team lead — now a team lead coordinator — wrote that I would love a project called “Ostrich” because it had an “evergreen queue,” and she offered to move me to that project.

Extensive training and four evaluation tasks were required before I was allowed to work on the Ostrich project. Before starting my first two tasks, the only training was reading the convoluted instructions. We were all promised feedback on our first and second tasks so that we could adjust and improve our performance on the following two. No evaluation criteria were offered, and the promised reviews were not accessible.

After the Ostrich team admitted to losing the first two tasks from workers who completed them — each task takes up to 6 hours to complete — anxiety, fear, frustration, and chaos ensued on the Slack channels. No reviews of work — or rushed, hostile reviews that made no sense — were the norm for hundreds working toward admission to the Ostrich project.

Without apology for the lost work, Jad Faraj, Scale’s new Strategic Projects Lead, unilaterally decided to “adjust the parameters” by throwing away the first two tasks and considering only the third and fourth, which were undertaken without training beyond reading instructions and without feedback on previous work. Not only did this choice create fear, confusion, and anxiety, it also devalued and demeaned the workers whose labor was lost.

Mr. Faraj is directly responsible for modeling the relationally authoritarian practices that create chaos and human suffering in Outlier and Remotasks workplace cultures.

As of this writing, lower hourly earners (Tier 1 and 2, $15–18 an hour) have been moved to all projects except Ostrich. Meanwhile, thousands of Tier 3 of 3 contributors — professional experts in their fields — are ostensibly moving toward Ostrich project admission and are waiting, without work, in virtual lines behind a backlog of lost tasks, missing or specious reviews from opaque sources, and inaccurate information offered to keep them hoping and waiting. (UPDATE: On May 10th, Scale’s Outlier cut pay by 37.5% for all T3 experts on its platform, without justification and without recourse.)

Scale-AI’s Relationally Authoritarian Practices

“Scale AI was conceived in 2016 as a one-stop shop for supplying human labor to perform tasks that could not be done by algorithms — essentially, the antithesis of AI. Cofounders Alexandr Wang and Lucy Guo realized humans were vital in labeling the data needed to train the AI used in self-driving cars.”

That quote is from an article in Forbes, where coverage of tech companies like Scale-AI is overwhelmingly positive because profits and funding rounds drive the news cycle. As Disconnect illustrated this week, however:

One of the reasons coverage can be so positive toward tech companies comes from bias; from people liking those companies or believing in a broader ideology that tech ultimately does good in the world (if they think it does much bad at all). Those biases often go unacknowledged, even though they can be crucial to how a journalist or an entire outlet approach a company.

When you’re a leader in tech, it’s nearly impossible to see what damage the company’s processes create outside the cash-packed tech bubble. This is especially true for workers whose lives are dictated by the chaotic cultures of perpetual start-ups like Scale-AI.

It is possible, however, to prompt change by providing a variety of mirrors — like this article and others — into which tech CEOs like Alexandr Wang might look and see themselves from the perspective of those exploited for his financial gain. It’s certainly worth a try given the human suffering that’s been scaled by AI companies so that other humans (clients and users) have semi-functional chatbots to sell and use.

The RLHF (reinforcement learning from human feedback) phase of training LLMs is the final crucial step in aligning how an AI chatbot interacts with human users and what its responses produce. The humans doing this vital work must not be forced to endure authoritarian conditions that create their suffering. Not only are the conditions a form of psychological violence, but the mass human suffering directly translates to lower-quality data for LLM alignment, as Labelbox notes.
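The connection between worker well-being and data quality is mechanical, not just moral. In a standard RLHF pipeline, human labelers compare pairs of model responses, and a reward model is trained to score the human-preferred response higher; noisy or careless comparisons distort that reward signal, and the distortion propagates to the final chatbot. A minimal sketch of the standard pairwise (Bradley–Terry style) reward-model loss — names and numbers here are illustrative, not Scale’s implementation:

```python
import math

def reward_model_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise preference loss used in RLHF reward modeling:
    -log(sigmoid(r_chosen - r_rejected)). The loss shrinks as the
    model scores the human-preferred response above the rejected one."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A rushed or demoralized labeler produces inverted or random preference
# pairs; the reward model then learns from a corrupted signal.
print(reward_model_loss(2.0, 0.5))  # correct ranking -> small loss (~0.20)
print(reward_model_loss(0.5, 2.0))  # inverted ranking -> large loss (~1.70)
```

The point of the sketch: every preference pair a worker labels is a training signal, so conditions that degrade the workers’ judgment degrade the model itself.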

Countless times on the Slack channels, team leads scold workers and threaten them for “poor quality” or claim that the client “wasn’t pleased” and may not do more business with Outlier.

The pressure on workers to continue producing quality text data while enduring authoritarian conditions that steal their power to move forward is enormous. Whether young and naive, experienced and wise, young and wise, or experienced and naive, no human put in these conditions is oriented toward other humans via relationally democratic — or power-sharing — practices. Simply quitting is not a realistic option for laid-off tech workers or displaced academics and other experts, who are often without healthy, diverse options in the employment marketplace. Big tech gigs monopolize.

Ultimately, I decided I was no longer interested in Scale’s Outlier, even for the promise of a $40-an-hour gig with an unlimited number of paid tasks. I calculated my earnings over the 13 weeks. When I started, I planned for 40 hours a week, based on the information I was given in the ad, during onboarding, and from team leads.

Based on the inaccurate information, I planned to earn $20,800 for the first 90 days and budgeted accordingly. When I calculated 13 weeks — even being available and waiting 7 days a week — my total earnings were only $13,940.27. That’s a $6859.73 loss facilitated by bad information from Scale and Outlier employees.

Openness. Transparency. Information Accuracy. Nonviolence. These are not aspirational, but baseline conditions for a democratic workplace culture that supports humans and broader democratic norms. The workplace conditions described in this article are not cultural bugs: they are features of Scale-AI’s authoritarian workplace culture and they must change.

Scale and similar company cultures create the conditions for the possibility of more authoritarian orientations, enablers, leaders, and regimes. This struggling democracy is only further undermined by workplace cultures like Scale’s Outlier and Remotasks.

Users of tech company products need to ask themselves: Do we support with our purchases workplace cultures that strengthen US democracy or do we support workplace cultures that undermine it?

My hope in offering this analysis is that Alex Wang gives a shit and does something to change conditions for the workers at the bottom who make his profits and successful funding rounds possible.

*Wang and his team at Scale-AI just completed a $1B round of funding from Amazon and Meta, which will enable Scale to grow its predatory practices alongside Amazon and Meta, both also notorious for their predatory labor practices. Big tech’s human predation, in the aggregate, significantly contributes to growing the conditions for the possibility of a US authoritarian regime.

Got a story about working for one of Scale’s companies like Outlier or Remotasks? We’d love to hear it! Reach out to relationaldemocracy@gmail.com.

First in the series, “Stealing and Hoarding Power from the Most Vulnerable: Authoritarian Practices in U.S. Business Cultures”: Octapharma Plasma

Cathy B Glenn, PhD is an independent researcher at The Relational Democracy Project. She is currently completing an extensive white paper that outlines findings and makes recommendations for democratic cultural change based on seven years of original research. The paper is called “The Human Relational Basis of Democracy,” and this article draws from that work.


Native of the San Francisco Bay area, Cathy B Glenn, PhD is an independent researcher, educator, creative, and founder of The Relational Democracy Project.