The Political Economy Of Artificial Intelligence
Dr Marcel Mbamalu


A paper presented by Dr Marcel Mbamalu, CEO of Newstide Publications Limited (Publishers of Prime Business Africa) during the Jacksonite Annual Lecture Series and International Conference organised by the Mass Communication Department, University of Nigeria Nsukka (UNN) with the theme “Artificial Intelligence, Communication and Knowledge Economy in the 21st Century”, at the Faculty of Arts Auditorium, UNN on April 18, 2024.

 

Introduction

In late December 2023, the New York Times, a foremost global newspaper, sued OpenAI and Microsoft for infringing its copyright by using “unauthorized published work to train artificial intelligence – AI” (Natasha, 2023). The lawsuit, which stakes billions of dollars in claims, “contends that millions of Times articles were used to train automated chatbots, which now compete with the news outlet. The complaint cites several examples in which a chatbot provided users with near-verbatim excerpts from Times articles that would otherwise require a paid subscription to view”.

In 2023, the writers’ union and the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) in Hollywood went on strike together for the first time since 1960 to protest the threat AI poses to the acting profession. The strike jeopardized the production and release of hundreds of film and television productions. The unions protested poor remuneration and their replacement in films by “digital replicas” generated by artificial intelligence.

Since its dramatic rise in 2019, AI has deepened the disruption of media industries started by the computer revolution and confirmed digital technologies as the mother of all technological disruptions. Discussions about AI’s disruptive tendencies occur amid other issues, such as the risks associated with AI, how AI differs from other computer programs, and AI control and regulation. Importantly, however, the discussions cannot be fully understood outside the context of global technological competition and, in fact, the political economy of AI, which is likely to drive the discussions in the coming years.


It is worthy of note that no sooner did the technological revolution of the 1980s occur than attention shifted to the economics of the information revolution. For instance, the craze about digitization over the past two decades has quickly gone beyond the amazing technologies underlying it. What matters most is consumption and market edge, and this is where all the legal, diplomatic and political games lie.

As technologies unravel at great speed, firms, industries and countries scramble to dominate the markets driving the knowledge economy. In music, video, telecommunications, computing, transport, industry, and even warfare, the race for domination is feverish. Fewer than 10 firms in each case control the global market for film, music and computing. Yet more than 75% of the market share for these industries comes from outside the countries that own the firms, which are mainly in the global north, e.g., the US, Japan, and China.

Regrettably, Africa remains a testing ground for technology, such that national control over the production, acquisition, training and use of technological wares is very weak. The same music appears to be playing in the area of artificial intelligence, which is still scarcely understood among many internet users, especially in developing countries. This is worsened by scary talk about the dangers AI poses to human survival and the job market. It is said, however, that much of this talk is propaganda and plain politics to control global production and advancements in AI. How true? The rest of the paper looks in this direction, beginning with some conceptual explications.


The AI-Computer Program Dichotomy

Computer systems and programs are built to perform definite tasks according to the codes written into the programs. Such codes can be used to automate complex systems and robots. Much of what computers do today was unthinkable a few years before the 1980s. There is nothing trivial about the technological wonders powering computerized systems.

It is basically the same with artificial intelligence. However, while computers perform defined tasks, they are not programmed to think or to contrive solutions in situations outside the scope of their codes. This is where humans, as creators of technology, have an eternal edge over technology. But now, the same humans want to compete with themselves by empowering technology to think and to attempt to imagine, feel, and innovate. That is artificial intelligence, and while it may still seem like science fiction, it is here with us.

Computer programs, once created, do the same thing over and over, with occasional, human-powered updates. In contrast, AI uses its own code to gather data while in operation, learning new things from that data and adjusting its operation accordingly. That is simply why it is called intelligent.
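The contrast can be sketched in a few lines of code. This is an illustrative toy, not taken from the paper: a conventional program applies the same fixed rule forever, while a "learning" component changes its own behaviour as it observes data, with no human rewriting its code.

```python
def fixed_fahrenheit(celsius):
    """A conventional program: the same rule, every time, until a human edits it."""
    return celsius * 9 / 5 + 32

class AdaptiveEstimator:
    """A toy 'learning' component: it refines its estimate from each new observation."""

    def __init__(self):
        self.estimate = 0.0
        self.count = 0

    def observe(self, value):
        # Update a running average: the behaviour is shaped by the data seen
        # in operation, not only by the original code.
        self.count += 1
        self.estimate += (value - self.estimate) / self.count
        return self.estimate

est = AdaptiveEstimator()
for reading in [10.0, 12.0, 14.0]:
    est.observe(reading)

print(fixed_fahrenheit(100))  # always 212.0, no matter what data flows past
print(est.estimate)           # 12.0, a value the code alone did not contain
```

The running average is deliberately trivial; real AI systems replace it with statistical models trained on vast data, but the principle of self-adjustment in operation is the same.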

 

It’s Just like Analogue and Digital Production

When digital technologies took over from analogue technology, it was a takeover of self-creation from mimicry. That is, analogue equipment, a camera, for instance, tries to recreate the image it sees by letting light radiation expose the film in the camera according to the pattern of the frequencies of light reaching it. Conversely, digital technologies have their own inbuilt codes that help the camera translate the light sensation into codes incorporated into the digital camera. This is why digital reproductions are far more detailed and easier to manipulate when outputting the image, in terms of sequence, synchrony and detail. In the analogue situation, there is direct signal-to-image formation. In the digital scenario, the device first recodes the signal, then reproduces the object using its own code.

In like manner, AI uses codes just as computer programs do. But instead of using codes directly to perform defined tasks like computer programs, AI uses its code to generate data, sequence data, seek out patterns, interpret data and tailor data to whatever task is requested of it. This still has limits, of course: AI cannot yet create humans or do more complex things, like building entire airplanes, all on its own.

The idea is that beyond just being programmed for a task, no matter how complex, as computers are, AI aims to think and to perform tasks not even directly reflected in the codes it uses. It is like using the same wood to build different pieces of furniture, or the same data/information to write different communication texts. While computers can record and transcribe speech, AI will go further: it can develop questions based on an inputted topic, conduct an interview, and write different news genres for different outlets. It can even monitor other news outlets, fact-check their stories, and rewrite them with updates. Yet it does much more than news. AI is not simply an advancement in the automation of computerized systems. It can also operate machines, generate information, take decisions for organisations, create models for production, and make projections.

AI image. Credit: Britannica

AI Systems do not Necessarily Need to Store All the Data

Computers need different codes and stored data for specified tasks. AI depends on the near-limitless cloud data owned by computing giants like Amazon, Google and Facebook, in addition to new cloud entrants that help firms tap into cloud data to use AI systems. Thus, AI is cloud-based; a self-driving car, for instance, does not need to store all the data required to learn about every possible road sign and traffic situation it may encounter. It only needs to communicate continuously with the cloud, learning to do new things with its almost limitless capacity to process data. This means that all the intelligence generated by the technological giants for internet searches (search engines), social networking, online commerce and banking, research data hubs, digitized library information, media archives, health data, military intelligence, satellite data, geological information, etc., is being harnessed by cloud migration firms to create AI systems for potential markets.

This is where the main difference lies. AI learns to do new things on its own, based on data it generates and processes. Computer systems use inputted codes to do specified tasks and no more. Note that there are rule-based chatbots, which operate according to fixed rules: as long as a user follows the rules, they respond in line with their software. Such chatbots operate much like computer programs and robots, so if the rules are not followed, they are helpless. For example, if a user does not phrase a question exactly as it was inputted into the frequently-asked-questions section, they will not respond favourably. In contrast, AI chatbots try to figure out what the user wants or is trying to do, and respond accordingly. Over time, they gather enough data to guess correctly how to help users whose questions do not conform to system guidelines. That is why AI engineers fear that, over time, AI may become smarter than humans because of the information available to it.
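The chatbot distinction above can be made concrete with a hypothetical sketch. The FAQ entries, bot names and fallback message here are invented for illustration; the "inferring" bot stands in for real AI chatbots by using simple string similarity rather than a trained language model, but it shows the difference in kind: one demands an exact rule match, the other tries to guess the user's intent.

```python
import difflib

# A tiny, invented FAQ "rule book" for the sketch.
FAQ = {
    "how do i reset my password": "Visit the account page and click 'Reset password'.",
    "what are your opening hours": "We are open 9am-5pm, Monday to Friday.",
}

def rule_based_bot(question):
    """Answers only if the question exactly matches an inputted rule."""
    return FAQ.get(question.lower().strip(), "Sorry, I don't understand.")

def inferring_bot(question):
    """Looks for the closest known question instead of demanding an exact match."""
    matches = difflib.get_close_matches(question.lower().strip(), FAQ, n=1, cutoff=0.5)
    return FAQ[matches[0]] if matches else "Sorry, I don't understand."

# The rule-based bot is helpless once the wording deviates from its rules:
print(rule_based_bot("How can I reset my password?"))  # "Sorry, I don't understand."
# The inferring bot guesses the user's intent from similarity:
print(inferring_bot("How can I reset my password?"))   # the password-reset answer
```

A real AI chatbot would go further still, improving its guesses as it accumulates data from user interactions, which fixed similarity matching cannot do.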

Algorithms Do the Magic

AI systems have complex algorithms to access, read, collate, and learn from data in order to perform any task required of them. A self-driving car, for instance, will use inputted codes but will immediately learn to react to anything new it encounters beyond the codes that help it self-drive. Instead of depending on its initial codes alone, the car will collect data, label it, use algorithms to analyze it, and learn or build models to perform tasks from the analysed data, all in a matter of microseconds.

Algorithms are simply the components that help AI recognise signals and organize and analyze them, in a process of learning patterns for task performance. Algorithms enable AI to perform complex functions because they can analyze complex data from a myriad of sources and make predictions or models for decision-making.

This means that AI continually collects data, analyses it in connection with other data, and makes recommendations or predictions. Based on user responses to its projections, AI continues to self-improve through feedback. This is the system your computer uses to recommend videos, stories and ads to you after observing your pattern of reading, watching or surfing the internet. Your interactions with, or responses to, the recommendations help the computer make adjustments, including automatic updates, without anyone needing to reprogram it. This is why data science, rather than computer programming, is set to be among the most important jobs in the emerging AI markets. Data labelling is very important in AI development, because it determines the efficiency of algorithms in data analytics.
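The recommend-observe-adjust loop described above can be sketched minimally. All item names and score numbers here are assumptions for illustration; production recommenders use far richer models, but the cycle is the same: score, recommend, take feedback, adjust, with no one reprogramming the system.

```python
class Recommender:
    """A toy feedback-driven recommender: scores drift with user responses."""

    def __init__(self, items):
        # Every topic starts with the same neutral score.
        self.scores = {item: 1.0 for item in items}

    def recommend(self):
        # Suggest the item the system currently believes the user prefers.
        return max(self.scores, key=self.scores.get)

    def feedback(self, item, clicked):
        # Self-improve: reinforce items the user engages with, decay the rest.
        self.scores[item] *= 1.5 if clicked else 0.5

rec = Recommender(["sports", "politics", "technology"])
rec.feedback("technology", clicked=True)   # user watched a technology video
rec.feedback("sports", clicked=False)      # user skipped a sports video
print(rec.recommend())  # "technology" now outranks the others
```

Each pass through the loop is an automatic update driven entirely by observed behaviour, which is the sense in which such systems "learn" a user's pattern of reading, watching or surfing.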

Investment Edge

According to the 2019 AI Index Report, published by the Stanford Institute for Human-Centered Artificial Intelligence in California, global private investment in AI in 2019 was more than US$70 billion, with the US, China and Europe taking the lion’s share of the investments. The report also says that start-ups founded on AI technologies are a major part of the ecosystem, garnering more than $37 billion globally in investments in 2019, up from $1.3 billion raised in 2010 (Savage, 2020).

According to Armstrong (2023), global corporate investment in artificial intelligence has grown significantly over the past decade (Figure 2). A total of $934.2 billion worth of investment went into AI around the globe from 2013 to 2022, according to estimates from a Stanford University analysis. This included mergers and acquisitions, minority stakes, private investments and public offerings. The analysis, which tracked the investments of over 8 million global public and private companies, also shows that commitments reached a record high in 2021, when businesses around the world put $276.1 billion worth of investment into AI. Investments slowed in 2022. However, with the release of OpenAI’s generative AI tool ChatGPT in November 2022, expert forecasts suggest that AI is set to be the rave of the future.

Further estimates indicate that global investment in AI could hit $200 billion by 2025, which may nudge global GDP up by 1 percentage point. However, a lot of investment in physical and human capital is needed to achieve this (Briggs and Kodnani, 2023). The US is in top gear on AI investments and companies globally, as the country controls the major technological giants whose cloud data investments are driving AI systems, e.g., Amazon, Facebook and Google.


The Legal Imperatives, Economics and Politics of AI Regulation

The New York Times lawsuit against OpenAI and Microsoft, cited in the introduction, tends to justify the strident calls for AI regulation, both for the protection of copyright and for protection against risks.

Recently, the US, UK, Germany, France, Italy, Canada and Japan, the Group of Seven (G7) largest and most powerful economies, unveiled a path to an international framework for AI regulation. The UN leans towards global guidelines, while the G7 focuses on clear guidelines. However, with China and Russia in the mix, the stage appears set for competition, especially in the inputs the two countries will make at the UN level as avowed rivals of the G7. The rivalry will be another dimension of the NATO-BRICS economic and military feud, extended to AI. It is little wonder that the foremost risks feared about AI concern global health, economic and military competitiveness.

The counter-arguments between the US, EU and China on the origins of Covid-19 come to mind in the emerging AI regulatory minefield. As reported by UNESCO, “while the G7 is geared towards clear guidelines, the UN aims to ignite a global dialogue on responsible AI use” (Gelman, 2023). It is difficult to see the difference between the two focal points, except perhaps that while the G7 wants to set clear guidelines for the US, EU and their trade partners, the United Nations is bracing for a legal spat with China and Russia.

 

Conclusion

The race for AI dominance is on. It is happening through real and controversial calls for protections and controls, through law, against perceived risks. For now, all the stakeholders (the three Cs, namely companies, countries and consumers) believe that AI is too important and too risky a technology not to be strictly regulated. China insists that the state must regulate AI, which it says must respect the socialist perspective. The US and EU are generally more liberal in their approaches. Beyond ideologies, however, laws also target AI research funding and child-related risks. Healthcare, financial services, housing and the workforce are other areas receiving legislative and competitive interest. Forecasters project trade wars emanating from these areas, especially between the US and China. Will the stakeholders run according to the set rules, or will competition blight reason and set the world on a blind date with an AI apocalypse? Like the arms race, will the world begin another journey towards building artefacts that can end humanity in seconds, like a nuclear war? The doomsday scientists, the sceptics and the ordinary consumer are watching.

 

Dr Mbamalu, a Jefferson Fellow and Member of the Nigerian Guild of Editors (NGE), is a Publisher and Communications/Media Consultant. His extensive research works on Renewable Energy and Health Communication are published in several international journals, including SAGE. Follow on X: @marcelmbamalu

 

 

Dr. Marcel Mbamalu is a communication scholar, journalist and entrepreneur. He holds a Ph.D in Mass Communication from the University of Nigeria, Nsukka and is the Chief Executive Officer Newstide Publications, the publishers of Prime Business Africa.

A seasoned journalist, he honed his journalism skills at The Guardian Newspaper, rising to the position of News Editor at the flagship of the Nigerian press. He has garnered multidisciplinary experience in marketing communication, public relations and media research, helping clients deliver bespoke campaigns within Nigeria and across Africa.

He has built an expansive network in the media and has served as a media trainer for World Health Organisation (WHO) at various times in Northeast Nigeria. He has attended numerous media trainings, including the Bloomberg Financial Journalism Training and Reuters/AfDB training on Effective Coverage of Infrastructural Development of Africa.

A versatile media expert, he won the Jefferson Fellowship in 2023 as Africa’s sole representative on the program. He was part of a global media team that covered the 2020 United States presidential election. On the 2023 Jefferson Fellowship, he toured the United States and Asia (Japan and Hong Kong) as part of a 12-member global team of journalists on a travel grant to report on inclusion, income gaps and migration issues between the US and Asia.

