AI firing

AI tools developed by white-collar workers now being turned against them

The swift dismissal of 12,000 Google employees sparked fears that AI was involved in the process. "When white-collar workers dedicate their lives to building tools designed to monitor, control, sort, and decide people's lives, they shouldn't be surprised when those same tools are turned against them."

The Washington Post was first to report last week what many in the high-tech sector had already been discussing: Did Google use an algorithm to decide who would be fired as part of its cutbacks?
The concern spread through various channels among employees after the technology giant announced the layoff of 12,000 workers. To some, the decisions seemed unusually quick, at times random, and made without any consultation with direct managers or department heads. Google was quick to respond, announcing that "an algorithm was not involved" in the decision-making process, but the horse had already left the barn, and a full-blown, anxious discussion began.
AI firing (Illustration credit: Yonatan Popper)
Although Google hastened to distance itself from the idea that it fired thousands using a machine, almost all human resources managers at the largest companies in the United States made no such effort. In a January survey of 300 of them, cited by The Washington Post, 98% said they would rely on an algorithm in the coming year to make these types of decisions.
Using an algorithm lets HR managers justify a decision as "hands-off" or based on "multiple data points," which seemingly allows for boundless objectivity. It also saves significant time and manpower in what would otherwise be a complex process of employee evaluation and dialogue.
Of course, the objectivity is merely a pretext. Algorithms embedded in decision-making processes, whether searching and sorting job candidates, recommending a purchase, or deciphering data, are notoriously riddled with biases. These biases creep in at every stage: labeling the data, sorting it, setting the decision parameters, and the human choice of when and where to deploy the technology. In fact, the biases these tools produce are so numerous and so hidden that regulators around the world continually scrutinize their deployment, and have long debated banning or limiting their use in sensitive processes such as employee selection or decisions on social benefits and financial services.
But the problem runs deeper than the question of objectivity, and it is well reflected in the reaction of Google employees to the idea of being laid off by a machine: a response with nothing laconic about it, but full of fear. For them, an already difficult event had become even harder to stomach.
Their feelings are understandable, as is the public's fascination with the story that followed the article. But these reactions have an ahistorical component. This is not the first time an algorithm has been entrusted with determining the course of employees' lives. For years, software has dictated an inhuman pace of work in warehouses and factories. Algorithms have long been deployed to set work quotas and measure output. There are algorithms that track drivers' movements in their vehicles, that count how many times an employee took a bathroom break and how long each break lasted. Cameras equipped with artificial intelligence have long followed workers' movements on factory floors, alerting managers if one worker has been talking with a co-worker in a suspicious manner, lest they unionize.
Other software measures keystrokes and mouse clicks to rate workers' productivity. Other tools detect lapses in productivity and reduce wages accordingly. There are systems that monitor the tone of employees in call centers so that managers can intervene in conversations in real time as they see fit. And in some factories, software keeps a log of targets met and missed, and after a set number of violations automatically issues a dismissal letter.
Algorithms, then, were handed the sensitive, life-altering, anxiety-inducing task of firing employees years ago. Those workers were also diligent and skilled; they just weren't based on luxurious campuses. They earned low wages in factories and warehouses, behind the wheels of cars and delivery trucks, in giant retail chains and supermarkets.
Not only has this been happening for years, but it was the employees of technology companies who conceived, built, and sold these programs. So when white-collar workers dedicate their lives to building tools designed to monitor, control, sort, and decide people's lives, they shouldn't be surprised when those same tools are turned against them. When programmers are unconcerned with the consequences of the code they write, with what product they are helping to build and how those products will affect the society or community in which they live, they have no grounds to resent those products' harmful consequences.
When you work silently in a sector that produces tools designed to make workers easy to control, you should expect that workers will eventually be treated as a commodity. And when we sanctify the activities of these companies, crowning their managers as prophets, saviors, and people of the year, we contribute to the long march back to a time when workers had no voice, power, or value beyond what their superiors determined for them. Without the high-tech companies, it is hard to imagine how small and large firms alike could pursue their ambitions for profit at any cost, away from public view, and complete the dehumanization of workers. Without the code, the algorithms, and the machines, it is hard to imagine how the massive exploitation factories could have operated as elegantly and efficiently as they have for decades.
The driving force behind these companies and the products they chose to develop has always been one thing: capital. The aggressive push to develop products meant to "optimize" processes, "supervise" employees, and save costs or time arose from the competitive pressure on corporations to maximize profit. Once managers could no longer simply extend the working day further and further, as in the old days, they began looking for other ways to extract more value from employees. Over the last century, two main tools served this task: introducing new technologies to reduce production time, and increasing pressure on workers to make them work harder in the same amount of time.

The end result, and indeed the goal, of these approaches is, as noted, maximizing profits, without any regard for workers' standard of living and while weakening their status. This is automation and technological innovation that does not serve progress, is not intended for the common good or the improvement of life, and is not a positive force for the society and community in which it is deployed.
Technological innovation is not innocent. But it doesn't have to be this way. Technology can increase labor productivity and also empower workers; this is not a question of technology but a broader question of economy and society. Technology can be built and deployed to serve the common good and promote social wealth, but for that to happen, those who build it, or at least help build it, must recognize that it is not external to social relations, and that its design and goals are closely tied to those who develop it. As long as we continue to remove people from processes that concern and affect people, whether to shirk responsibility behind a veneer of objectivity or to push for quick fixes, people will always get hurt.
First published: 12:55, 01.03.23