AI at Work: Zooming In and Out
Tristan Harris | David Streitfeld | Christopher Mims | Phoebe Moore | Peter Cappelli
Quote of the Moment
The tragedy of investment is that it causes crisis because it is useful. Doubtless many people will consider this paradoxical. But it is not the theory which is paradoxical, but its subject – the capitalist economy.
| Michał Kalecki
There is a broad contrast in the way that different commentators orient their perspectives on the issues of the day. One metaphor I use is to ask whether they are looking through a microscope or a telescope.
A microscope zooms in on the fine grain of an issue: for example, describing a unionization campaign through the day-to-day concerns of the specific individuals in a single Starbucks and the activities of Starbucks management in response.
A telescopic view would position that particular shop's union campaign in the context of other Starbucks actions across the country, or place it in the wider world of unionization, such as the United Auto Workers (UAW) push in the past few years to organize non-union plants in the southern U.S.
Some writers can shift back and forth across the microscope/telescope divide, complementing one with the other.
And of course, issues are also framed by the scale of time: is a writer attempting to zoom into the next few weeks of a strike action, or setting context across the past and future decades of unionization?
I was struck by these thoughts while reading three writings about AI — from Christopher Mims, David Streitfeld and Tristan Harris — that I encountered in the past weeks.
Mims
Christopher Mims of the Wall Street Journal adopts a narrow aperture in his analysis of the business of AI in The AI Revolution Is Already Losing Steam. Mims tries to connect various dots in the AI business puzzle -- how much it costs to run and use large language models (LLMs), adoption rates in business, slowing progress by the developers, growing consolidation in features, and the failure and sale of various AI startups -- to make the case that AI's impact is at the very least a long way off, and may evolve into something much less sweeping than others envision.
Mims never mentions AI's role in social media algorithms and hedge funds, the impacts those have made on society, or the fortunes generated there.
He cites Peter Cappelli, who offers perhaps the best comparison in the article:
While these systems can help some people do their jobs, they can’t actually replace them. This means they are unlikely to help companies save on payroll. [Cappelli] compares it to the way that self-driving trucks have been slow to arrive, in part because it turns out that driving a truck is just one part of a truck driver’s job.
If AI were to have a major impact on trucking, the breakthrough might not be fully automating the truck driver, but breaking apart the various parts of truck-centered logistics and finding the pieces that can be automated. For example, delivering food to a grocery store in the future might require automating how grocery stores operate: how could AI speed up (or lower the cost of) getting food from the truck onto the shelves? A grocery store in ten years might seem more like an automated delivery warehouse than today's Aldi or Walmart.
Mims provides no non-LLM examples of major AI breakthroughs, like AI vision systems in agriculture cutting pesticide use by 90%, or AI's devastating conquest of zero-sum games like chess and Go. And, of course, social media algorithms.
If I could pose a question for Mims, about time scale and the telescopic view of the AI market, it would be this: What if a new AI breakthrough jumps past the LLM chokepoint based on AI developing knowledge of the physical world and its people, instead of just analyzing words?
Streitfeld
David Streitfeld takes a very different tack in If A.I. Can Do Your Job, Maybe It Can Also Replace Your C.E.O, looking microscopically at the possible AI-ification of business management. He embraces the concept from the outset:
The chief executive is increasingly imperiled by A.I., just like the writer of news releases and the customer service representative.
[…] This is not just a prediction. A few successful companies have begun to publicly experiment with the notion of an A.I. leader, even if at the moment it might largely be a branding exercise.
[…]
The change delivered by A.I. in corporations will be as great or greater at the higher strategic levels of management as the lower ranks. | Saul Berman, former senior consulting partner, IBM.
The examples he offers seem very much like PR stunts, but companies may be keeping any serious experiments quiet, for many reasons: if an experiment is a real breakthrough, it could be a competitive advantage, while a failed one could look bad to investors, employees, and customers.
He discusses the need for fiduciary accountability: someone has to be accountable for business decisions. But couldn't a standalone corporation be formed to own an AI? Since corporations have personhood, the AI corporate 'person' could enter into contracts -- like an employment contract -- and exercise other rights as well.
But I found his citations of research into the opinions of C-level executives compelling:
EdX [...] surveyed hundreds of chief executives and other executives last summer about the issue. Respondents were invited to take part and given what edX called “a small monetary incentive” to do so. The response was striking. Nearly half — 47 percent — of the executives surveyed said they believed “most” or “all” of the chief executive role should be completely automated or replaced by A.I. Even executives believe executives are superfluous in the late digital age.
And it appears that employees might go along:
In a 2017 survey of 1,000 British workers commissioned by an online accounting firm, 42 percent said they would be “comfortable” taking orders from a computer.
He doesn’t mention those forced to accept algorithmic bosses already, like Uber drivers and Amazon warehouse workers.
He cites some smart insights from Anant Agarwal (who believes that in the modern age anybody ‘could be a CEO’) and Phoebe Moore, who believes that many senior employees may not need managing, anyway:
Someone who is already quite advanced in their career and is already fairly self-motivated may not need a human boss anymore. In that case, software for self-management can even enhance worker agency.
Streitfeld buys that:
The pandemic prepared people for this. Many office workers worked from home in 2020, and quite a few still do, at least several days a week. Communication with colleagues and executives is done through machines. It’s just a small step to communicating with a machine that doesn’t have a person at the other end of it.
Streitfeld ends telescopically, it seems. But he fails to connect this to the Uber and Amazon workers already reporting to algorithmic bosses.
Harris
Tristan Harris takes the most telescopic view, looking back to our first contact with AI -- 'curation AI' -- and says that its impacts could have been predicted.
[A] warped incentive structure within social-media platforms—an invisible engine that would come to drive the psychological experience of billions of people. Darker realities emerged. As social media tightened its grip on our everyday existence, we witnessed the steady shortening of attention spans, the outrage-ification of political discourse and big increases in loneliness and anxiety. Social-media platforms fostered polarisation, pushing online harms into offline spaces, with at times tragic, fatal results.
And now, a second contact: meet 'generative' AI.
I believe that we can predict the outcome now, the same way those who looked closely enough were able to predict the outcome with social media, by examining the incentives that drive the development and roll-out of the technology.
[…]
Financial scams against the elderly have been enhanced with voice clones of loved ones; “nudification” apps are being weaponised against teens; and deepfake audio content is being used to blackmail people of all ages.
Remember Jamais Cascio's warning: to imagine how a new technology will transform society, just imagine how criminals will use it.
I can summarize Harris’s predictions:
Generative AI heightens pre-existing dysfunction in the digital ecosystem.
And the only workable response is regulation by the government and checks by other institutions:
Politicians need to find a way out of this gridlock. Tech giants need to be held accountable for the harms their products cause, not just encouraged to innovate. [...] These companies must be subjected to a liability framework that exposes them to meaningful financial losses, should they be found responsible for harms. Only then will they take safety more seriously in both the AI-development process and “downstream”, once it is deployed. Liability has the power to wire new incentives into the foundation of AI businesses.
Of course, the tech giants are rallying their lobbying might to avoid regulation. Harris says we should not be swayed by taglines and marketing campaigns.
What’s driving AI research, development and deployment is already clear: a dangerous incentive to race ahead. If we want a better outcome this time, we cannot wait another decade—or even another year—to act.
Except that we are in the earliest phase now, where the monster tech companies are running wild, and there are few initiatives with the scale to counter what they are up to.
…
AI is one of the three pillars of The Human Spring I've been writing about for ten years: the destabilizing rise of AI, economic inequality, and climate change. AI's impact on the human right to work -- which Harris and the others allude to only parenthetically -- may be the trigger — along with climate change and inequality — of a global self-organizing movement to make governments and institutions (including giant corporations) listen to the voice of the people.
We are, after all, the final check.