Can AI help democratise the economy?

Autumn 2023 #43
written by
Daniel Stanley
illustration by
Hanna Norberg-Williams

A critique sometimes levelled at advocates of a new economy – or new economics – is that many of the changes they propose are not exactly ‘new’, often being based on alternative philosophies and theories with long histories behind them. Less common, but perhaps more cutting, is the related complaint that new economy work looks backwards as often as it looks forwards, proposing solutions for today’s or even yesterday’s problems rather than genuine alternatives to emerging economic trends and challenges.

One such emerging challenge, preoccupying much of the political and policy establishment and dominating media coverage this year, is the risk posed by a new wave of Artificial Intelligence technology, in particular Large Language Models (LLMs), of which ChatGPT is by far the best-known example.

Ironically though, for all its status as the cutting-edge topic of the day, the most common themes of the recent round of AI debate have a very familiar flavour, dominated by Hollywood-tinged concerns about ‘existential risk’ to the future of humanity. Unfortunately, as critical voices have pointed out, the dominance of such doom-laden narratives about AI, rather than galvanising action from governments and legislators, has instead distracted their attention from the very real harms that irresponsible use of AI and related technologies is doing today. This no doubt explains the leading role AI company leaders have played in issuing such warnings.

In this situation, the question arises: how should advocates of new, more democratic models of the economy – and of economics – respond to these changes?

This was explored at ‘Using AI to Democratise the Economy’, a recent Stir to Action event with speakers from the Institute for the Future of Work (IFOW) and the Ada Lovelace Institute, two of the leading lights in the current debate over the shape of emerging AI technologies.

As a recent Ada Lovelace Institute report, Regulating AI in the UK, points out, AI is not a singular technology, and people’s experience of it varies widely according to how it is used. There are extremes of concern, like autonomous weapons, and more obviously benign examples, such as robot vacuum cleaners, but most of the practical, immediate impact occurs between these poles. Automated decision-making systems are already being used by government bodies to decide outcomes like benefit sanctions, with little recourse or accountability, and police forces are making use of facial recognition systems with proven racial bias.

Another place where the impact of AI is already being felt – and where it is likely to be acute in future – is the workplace. Along with fears of robot takeover, one of the inherited tropes about automation and AI is that of robots ‘taking our jobs’.

Fortunately, organisations like IFOW have been taking a more considered and comprehensive approach, looking at how automated and algorithmic systems have been steadily implemented in workplaces in a range of different settings. Of particular interest and concern are the systems in places like Amazon warehouses, where invasive AI tech is already tracking, measuring, and directing performance down to minute details of hand movements. Implementations like this make clear the urgent need for systems of oversight and transparency, such as IFOW’s own Good Work Algorithmic Impact Assessments, and suggest a troubling divergence between professionals enjoying the benefits of using new AI tools and others increasingly finding themselves being used by them.

So is the picture mainly negative? It's an understandable impression. But there is emerging research that suggests some potential positive impacts, and ways in which the opportunities of AI, and the effects of its increased usage, might be harnessed in service of making our economy more democratic.

Harnessing AI to support existing efforts

Firstly, there is the most obvious opportunity: to put what is claimed to be a uniquely powerful new set of technologies at the service of the many existing ways in which organisations are working to build fairer and more just economic conditions.

While the technology is of course rapidly changing and developing, there is already a relatively clear set of established types of work that LLMs seem particularly well-equipped to assist with, namely:

• Standard writing tasks, or at least the generation of relatively generic copy for non-specialist usage and outputs

• More strategic planning, research, and ideation tasks, where the ability of the technology to generate structured documents and numerous connections between discrete topics can be a useful starting point

• Generating new or hybrid images according to an existing style, using software such as Midjourney

• Most recently, performing more complex data analysis and sorting, such as through the plugins available for ChatGPT

Each of these has its own particular challenges – for instance, the ongoing tendency of all major LLM systems to ‘hallucinate’ and make up facts means most details will need checking, while the copyright status of much of this work remains a matter of debate (and legal conflict). But they can undoubtedly be major time-saving devices, freeing up badly needed resources for other areas of our work.

What is more, the wider benefits to employees and organisations are starting to become clear, with research showing that the technology allows newer, lower-skilled workers in particular to gain skills much more quickly – and there is also initial evidence that job satisfaction can be increased in many roles (see Noy & Zhang, Science, 2023). Technology writer Alberto Romero memorably described Generative AI as ‘a bullshit tool for bullshit jobs’ – its tendency to say what people want to hear makes it particularly well-equipped to take on the kind of formulaic reporting work that people least enjoy doing.

With this in mind, training community organisations, democratic businesses, and others to be confident in the use of AI, and to see where it can help their work while being aware of its limitations and dangers, seems an obvious and urgent necessity for the wider movement, if it is not to be left behind. Stir to Action is exploring this currently, and will have more to share soon.

Democratic technology – or not

While the practical use of AI could represent an immediate opportunity for those looking to democratise the economy, the business models of the companies providing the technology are anything but democratic. This has not always been the case – OpenAI was famously founded as a nonprofit, stating in its 2015 founding charter that its goal was “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return”. Fast forward to the present day, and OpenAI is a long way from these initial principles, having agreed a $10 billion deal with Microsoft, projecting revenue of $1 billion in 2024, and regularly demonstrating a lack of transparency that makes its very name seem obsolete.

There is little better to be found among the other leading AI companies, the most prominent being a familiar rogues’ gallery of Big Tech operators such as Meta and Alphabet (the parent companies of Facebook and Google), with Apple and Amazon waiting in the wings. There are of course smaller companies looking to break into the sector, but given the huge infrastructure and data requirements for training modern LLM systems, it seems very likely that the incumbents will dominate this next round of technology-led change.

Exploiting opportunities created by AI adoption

There is, then, a particular tension for organisations that are looking to make the economy more democratic and equal while considering using AI. But other opportunities are also starting to emerge from the technology’s spread that could produce positive outcomes.

One feature of the rise of generative AI has been the sheer speed of this spread – ChatGPT in particular is among the fastest-adopted technologies in history, reaching 100 million users far more quickly than the social networks of the last two decades. This pace has left many businesses and other organisations struggling to keep up, uncertain how to deal with a technology that is often unpredictable and risky, not least in its tendency to confidently assert made-up information. These and other inherent characteristics of the technology have led many organisations to restrict or even ban their employees from using it. But such is its time-saving appeal that many employees have continued to use it anyway, on their own devices, to make their jobs easier – leading to a situation where research suggests there are growing numbers of what Professor Ethan Mollick calls ‘secret cyborgs’: individuals successfully using AI in their jobs without revealing it to their employers.

This situation produces a new and interesting power dynamic between employees and management or ownership, as a tension arises between management’s urgent need to know how AI is being used to increase productivity (before their competitors do), and workers’ interest in holding on to this knowledge. With the right support and preparation, this situation could be leveraged so that workers negotiate more control, power, and even ownership over their work and organisations.

At the moment this is only a hypothetical opportunity. But given the increasing evidence that the current round of AI tools benefits lower-skilled and less experienced workers the most, it offers a tantalising glimpse of how tools provided by companies that are anything but committed to greater economic fairness might still create the opportunities for it to occur.

What it will require, though, is real engagement not only with the immediate uses of AI, or the future dangers, but with understanding and foreseeing the indirect, less predictable outcomes that might result from its increased use. Alongside the continuing push for better regulation, and alongside efforts to combat the ways in which AI is already being used to exploit and dehumanise, this is another front in the fight to ensure the technology improves rather than worsens our economy and society.

Daniel Stanley is a Senior Associate and co-owner of Stir to Action. He is CEO at the Future Narratives Lab, and an Associate Director at Cohere Partners. He has a background in community organising and social psychology, and writes and lectures on framing, narrative, and social change strategy.

