Will artificial intelligence really replace human workers and take our jobs?

Original article posted here on 10th May 2023

Ready or not

Although artificial intelligence has been a topic of interest and speculation for decades, it wasn't until the launch of ChatGPT by OpenAI in late 2022 that AI truly captured the public's attention. With the ability to engage in natural language conversations and answer complex questions, ChatGPT provided a glimpse into a future where machines are more human-like in their interactions with us. Similarly, the introduction of other AI models such as DALL-E and Midjourney has only served to fuel our curiosity and enthusiasm for this emerging technology.

AI image generated with Midjourney

While AI offers immense potential to revolutionise our lives and benefit humankind, there are also valid concerns about the ethical and societal implications it poses. The prospect of a dystopian future, where machines interact with humans in ways that blur the lines between reality and fantasy, has long been a theme in science fiction. This future now seems closer than ever before, and even though the benefits of AI are undeniable, it is crucial to acknowledge and prepare for the negative impacts it will have on society. One of the biggest concerns, aside from killer AI robots taking over the world, is AI replacing human workers, which would lead to significant job displacement and economic disruption. Developing strategies that ensure a smooth transition to an AI-driven economy, while protecting workers' rights and providing support for those who may be negatively affected, will be vital. It will be important to ensure that AI is used fairly and in a way that benefits society as a whole rather than just a select few. Ready or not, the status quo is changing and you'd best not be asleep at the wheel. Then again, it may not matter, as AI will probably be driving your car.

Key Players

A staggering number of AI-related products and startups are emerging every month, with more than 1,000 new AI tools released this month alone. However, most of these lack the computational power and resources to run their own AI models. Instead, they rely on pre-existing AI technologies, connecting via APIs to the services of big players such as OpenAI's ChatGPT and DALL-E. Making sense of the realm of artificial intelligence can be frustrating due to the abundance of jargon and the lack of simple explanations. This article covers various aspects of AI, including the key players, common terminology, controversies, and the significant question of whether AI will replace human jobs.
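
To make that API reliance concrete, here is a minimal sketch of how a typical wrapper tool might call OpenAI's hosted models, assuming the openai Python package as it existed at the time of writing (the v0.x ChatCompletion interface); the prompt is illustrative and the API key is a placeholder.

    import openai

    # Placeholder key; a real application would load this from secure config.
    openai.api_key = "sk-..."

    # Many "new AI tools" are essentially thin wrappers around a call like
    # this one to a big player's hosted model.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Summarise this article in one sentence."}],
    )
    print(response.choices[0].message.content)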

OpenAI

OpenAI was founded as a result of concerns about the safety of AI technology. Its co-founder Elon Musk had discussed AI safety with Larry Page, the co-founder of Google, and felt that Page was not taking the issue seriously. Page had expressed his desire for a digital, god-like superintelligence and had made public statements about the potential of AI for good. Musk, however, believed that precautions should be taken to minimise the potential for bad outcomes; when he mentioned the need to ensure the safety of humanity, Page called him a "specist." Musk wanted to create an open-source, transparent, non-profit organisation to promote AI safety, which led to the founding of OpenAI in 2015 with Sam Altman, Greg Brockman, Ilya Sutskever, John Schulman, and Wojciech Zaremba. At the time, OpenAI's goal was to "advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." Musk wanted to avoid a for-profit organisation that focused only on maximising profits without regard for the potential consequences; with OpenAI, he hoped to address the potential risks of AI technology and promote its safe development. Google had by then acquired DeepMind, meaning the two companies between them employed roughly three quarters of the world's AI talent, along with a tremendous amount of money and computing power, giving them something close to a monopoly on AI. Musk's idea with OpenAI was to create the antithesis of Google: instead of being closed and for-profit, OpenAI would be open and not-for-profit. The "Open" in the name refers to "open source" and "transparency".

In 2018, Musk left OpenAI citing "conflicts of interest" with his involvement in other AI ventures, such as Tesla and Neuralink. After leaving, he expressed criticism of the organisation. In March 2019, OpenAI announced a shift to a "capped-profit" model, a hybrid of a non-profit and a for-profit company. Musk was concerned that since the creation of this for-profit branch, the organisation had been "training AI to be woke" and was effectively controlled by Microsoft, which has invested a reported $10 billion in the company. He felt that OpenAI was no longer in line with his original intentions, as it had become a closed-source, profit-driven entity. He also disagreed with some of the decisions made by the OpenAI team and admitted he had "taken his eye off the ball" and that the organisation had fallen short of his expectations. In an interview with The Verge, Ilya Sutskever, one of OpenAI's co-founders, spoke of the change in OpenAI's approach to sharing information about its AI language models: "If you believe, as we do, that at some point, AI (AGI) is going to be extremely, unbelievably potent, then it just does not make sense to open-source. I fully expect that in a few years it's going to be completely obvious to everyone that open-sourcing AI is just not wise." Musk has said he is now planning to build "TruthGPT," which he described as a "maximum truth-seeking AI that tries to understand the nature of the universe." Altman, OpenAI's CEO, has called Musk a "jerk" and said Musk is "feeling very stressed about what the future is going to look like for humanity."

DeepMind (Google DeepMind)

DeepMind is a British AI research company founded in 2010 by Demis Hassabis, Shane Legg, and Mustafa Suleyman. The company is widely regarded as one of the world's leading AI research organisations, with a particular focus on deep learning and reinforcement learning. DeepMind's stated goal is to "solve intelligence", with the ultimate aim of creating machines that can think, learn, and reason like humans. In its early years, DeepMind focused on developing AI systems that could master games, from Atari video games to Go. In 2016, the company made headlines when its AlphaGo program defeated the world champion Go player Lee Sedol, marking a major breakthrough in the field of AI. Since then, DeepMind has expanded its research into a range of areas, including healthcare, climate science, and robotics. The company has developed a number of AI systems that have been used to improve patient care in hospitals and to predict the structure of proteins, among other applications. DeepMind was acquired by Google in 2014 and operates as a subsidiary of the tech giant. The company is known for its strong emphasis on ethical AI research and has established an ethics and society research unit to explore the societal implications of its work. On 20 April 2023, Sundar Pichai announced that DeepMind and the Brain team from Google Research would be joining forces as a single, focused unit called "Google DeepMind", having previously operated as separate entities.

Stability AI

Stability AI is touted as "the world's leading open source generative AI company", with Emad Mostaque at the helm as CEO. The company, based in London, has positioned itself as an open source rival to OpenAI, who, despite their name, rarely release open source models and keep their neural network weights proprietary. Stability AI released its first AI software product, Stable Diffusion, in August 2022. Stability AI also funded LAION, a German organisation that is creating ever-larger image datasets for AI companies to use in training, and trained Stable Diffusion using a LAION dataset. Since the release of Stable Diffusion, Stability AI has also released DreamStudio, a paid app that packages Stable Diffusion in a web interface. In 2023, Stability AI released a new AI language model called StableLM, which it hopes could someday be used to build an open source alternative to ChatGPT. Like the large language model (LLM) GPT-4, StableLM generates text by predicting the next token, or word fragment, in a sequence. That sequence starts with information provided by a human in the form of a prompt. As a result, StableLM can compose human-like text and write programs.
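
As a rough illustration of that next-token loop, the sketch below generates text one token at a time with the Hugging Face transformers library. The model identifier is an assumption based on Stability AI's published StableLM releases, and greedy selection is used for simplicity where production systems usually sample.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "stabilityai/stablelm-tuned-alpha-3b"  # assumed model id
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    ids = tokenizer("Open source AI matters because", return_tensors="pt").input_ids
    for _ in range(40):
        with torch.no_grad():
            logits = model(ids).logits        # a score for every vocabulary token
        next_id = logits[0, -1].argmax()      # greedy: pick the most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    print(tokenizer.decode(ids[0]))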

Midjourney

David Holz founded Midjourney in San Francisco in 2021, with the aim of creating a text-to-image generator that produces beautiful and artistic images. Despite positioning itself as a research lab, Midjourney has attracted a significant customer base, including professionals who rely on its image generator. Like DALL-E and Stable Diffusion, Midjourney generates images from natural language descriptions, or prompts. The company has been continuously improving its algorithms, with new model versions released every few months. Midjourney offers three subscription tiers and previously offered a free trial, but discontinued it due to high demand and trial abuse; as of April 2023, users must pay for a subscription to access the image generation services. In January 2023, three artists filed a copyright infringement lawsuit against Midjourney, Stability AI, and DeviantArt, claiming that these companies infringed upon artists' rights by training AI tools on billions of images scraped from the web without the original artists' consent.

X Corp

X Corp is the parent corporation into which Elon Musk recently merged Twitter. The CEO of Tesla, SpaceX and Twitter has made no secret of his desire to create an "everything app" called X, and has said that the social network will be an "accelerant" for building it. Musk has also reportedly incorporated a separate company, X.AI Corp, as the vehicle for his AI ambitions, including the "TruthGPT" plans mentioned above.

LAION

LAION, short for "Large-scale Artificial Intelligence Open Network," is a vast data collection initiative that lies at the heart of the current artificial intelligence revolution capturing the world's imagination. Christoph Schuhmann, a physics and computer science teacher from Hamburg, Germany, leads the modest team of volunteers building the largest free AI training dataset in the world. His passion project has been used to train text-to-image generators like Stable Diffusion. Two years ago, Schuhmann helped establish LAION with a group of volunteers he met on an AI enthusiast Discord server. He was alarmed when OpenAI's DALL-E was first released and feared it would lead to big tech companies monopolising data. Schuhmann believed that such centralisation would have negative societal impacts, so he and the group created an open-source dataset for training text-to-image diffusion models. They used raw HTML code gathered by the non-profit Common Crawl to locate images on the web and associate them with descriptive text. The process was similar to teaching a language with millions of flashcards and involved no human curation. In a matter of weeks, they generated three million image-text pairs. Three months later, the group released a dataset with 400 million pairs. LAION's reputation grew, and the team worked without pay, receiving only one-off donations. Eventually, former hedge fund manager Emad Mostaque offered to cover the costs of the computing power needed to use LAION's data for his open-source generative AI business. The LAION team initially had doubts but eventually accepted access to cloud-based GPUs that would otherwise have cost around $10,000. When Mostaque launched Stability AI in 2022, he used LAION's dataset to train Stable Diffusion and hired two of LAION's researchers. Thanks in part to LAION's data, Stability AI is currently seeking a $4 billion valuation. Schuhmann has not profited from LAION and has rejected job offers in order to remain independent.
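
The pairing step can be sketched in a few lines of Python: scan raw HTML for image tags and keep those that carry descriptive alt text. This is a heavily simplified illustration of the idea, not LAION's actual pipeline, which runs over Common Crawl dumps at enormous scale and further filters the results.

    from html.parser import HTMLParser

    class ImageTextPairs(HTMLParser):
        """Collect (image URL, alt text) pairs from raw HTML."""
        def __init__(self):
            super().__init__()
            self.pairs = []

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                attrs = dict(attrs)
                src, alt = attrs.get("src"), (attrs.get("alt") or "").strip()
                if src and alt:  # keep only images that come with a caption
                    self.pairs.append((src, alt))

    parser = ImageTextPairs()
    parser.feed('<p><img src="cat.jpg" alt="a tabby cat asleep on a sofa"></p>')
    print(parser.pairs)  # [('cat.jpg', 'a tabby cat asleep on a sofa')]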

AI Terminology

AI image generated with Midjourney

Generative AI

Products like ChatGPT, DALL-E, Midjourney and Stable Diffusion all belong to a category of AI systems called generative AI. These systems are trained on certain kinds of creative work like bodies of text, software code, images or music and then remix these works to derive or "generate" more works of the same kind. Generative AI systems use algorithms that can learn from these large datasets and generate outputs that resemble the examples they have been trained on.

Deep Learning

Deep learning is a subfield of AI in which computers learn by processing large amounts of data through layered neural networks and identifying patterns within it, much as people learn by practising and gaining experience.

Diffusion

Diffusion in AI is a process by which a program learns to create new content from a large set of training data. During training, a diffusion model repeatedly corrupts its training examples with noise and learns to reverse that corruption; at generation time, it starts from pure noise and iteratively "denoises" it, guided by a prompt, into a new image. Critics have characterised the result as, in its simplest terms, a type of collage tool that stores compressed representations of the training material and recombines them to derive other images. The resulting generated material may, or may not, resemble the training material.
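
For the technically curious, here is a toy NumPy sketch of the forward half of that process, in which a training image is progressively blended with Gaussian noise according to a schedule; the generative model is then trained to reverse this corruption. The image, schedule and step count here are arbitrary stand-ins.

    import numpy as np

    rng = np.random.default_rng(0)
    x0 = rng.random((8, 8))                   # stand-in for a training image
    alpha_bar = np.linspace(0.999, 0.01, 50)  # fraction of signal kept at each step

    def add_noise(x0, t):
        """Forward diffusion: blend the image with Gaussian noise at step t."""
        eps = rng.standard_normal(x0.shape)
        return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps

    early, late = add_noise(x0, 5), add_noise(x0, 49)  # mostly image -> mostly noise
    # Correlation with the original falls as more noise is added.
    print(np.corrcoef(x0.ravel(), early.ravel())[0, 1],
          np.corrcoef(x0.ravel(), late.ravel())[0, 1])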

Large Language Model (LLM)

A large language model is an artificial intelligence system that has been trained on a massive amount of text data, such as books, articles, and websites, so that it can learn the rules of language and generate new sentences that make sense. It can understand language patterns, generate human-like text, and perform natural language processing tasks like translation, summarisation, and sentiment analysis.
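
The underlying idea, learn which words tend to follow which and then sample, can be caricatured with a toy bigram model like the one below. Real LLMs operate on subword tokens with transformer networks and billions of parameters, so this is only a sketch of the principle.

    import random
    from collections import defaultdict

    # "Training": count which word follows which in a tiny corpus.
    corpus = "the cat sat on the mat and the dog sat on the rug".split()
    next_words = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        next_words[a].append(b)

    # "Generation": repeatedly sample a plausible next word.
    word, output = "the", ["the"]
    for _ in range(8):
        word = random.choice(next_words[word] or corpus)
        output.append(word)
    print(" ".join(output))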

Neural Networks

A neural network is a type of machine learning algorithm inspired by the structure and function of the human brain. It is made up of a series of interconnected nodes called neurons that process information. These neurons work together to solve problems or recognise patterns in data. Each neuron is connected to many other neurons and can make simple decisions based on the data it receives. Neural networks are widely used in AI applications such as image recognition, natural language processing, and predictive modelling. They are trained on large datasets using a process called backpropagation, which adjusts the weights and biases of the neurons in the network to minimise the difference between the predicted output and the actual output. Once trained, the neural network can be used to make predictions on new data that it has not seen before.
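
To ground the description, here is a minimal two-layer network trained with backpropagation to learn the XOR function, using plain NumPy; the layer sizes, learning rate and iteration count are arbitrary choices for this toy problem.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR truth table

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)     # hidden layer weights/biases
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)     # output neuron weights/bias

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        hidden = sigmoid(X @ W1 + b1)                 # forward pass
        pred = sigmoid(hidden @ W2 + b2)
        # Backpropagation: push the prediction error back through the layers
        # and nudge every weight to reduce it.
        d_pred = (pred - y) * pred * (1 - pred)
        d_hidden = d_pred @ W2.T * hidden * (1 - hidden)
        W2 -= 0.5 * hidden.T @ d_pred
        b2 -= 0.5 * d_pred.sum(axis=0)
        W1 -= 0.5 * X.T @ d_hidden
        b1 -= 0.5 * d_hidden.sum(axis=0)

    print(pred.round(2).ravel())                      # approaches [0, 1, 1, 0]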

Artificial General Intelligence (AGI)

Artificial General Intelligence (AGI) refers to a hypothetical type of artificial intelligence that would be capable of understanding or learning any intellectual task that a human being can. Unlike current AI systems, which are designed for specific tasks, an AGI system would be able to adapt to new tasks and situations without being explicitly programmed for them. It would essentially be a machine that could think and reason like a human. However, the development of AGI is still in the realm of science fiction, and it is not clear when, or if, it will ever be achieved.

The Singularity

The Singularity is a hypothetical future event in which artificial intelligence surpasses human intelligence, leading to an exponential acceleration in technological progress. It is often described as a point of no return, after which it becomes difficult to predict the future course of human civilisation. Some futurists and experts believe The Singularity could bring about tremendous benefits, such as eliminating disease, poverty, and scarcity, while others warn of the potential risks, such as the loss of control over advanced AI systems and the possibility of AI surpassing human values and goals.

AI Controversy & Ethical Concerns

AI image generated with Midjourney

In 2015, Oxford University philosopher Nick Bostrom gave a TED Talk on artificial intelligence and said, "Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing than we are." Even earlier, in 2014, Stephen Hawking told the BBC, "The development of full artificial intelligence could spell the end of the human race. It would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded." Max Tegmark, an MIT professor and AI researcher, wrote in Time, "I now feel that we're living the movie "Don't Look Up" for another existential threat: unaligned superintelligence." With warnings like these circulating since 2014 and earlier, it's no surprise that the recent advancements in AI have generated significant concern and controversy. One of the major concerns is the potential loss of jobs as AI and automation become more prevalent. There are also worries about bias in AI systems, particularly with regard to race, gender, and other characteristics. Additionally, there have been ethical concerns about the use of AI for military and surveillance purposes, as well as the potential for AI to be used for malicious purposes such as deepfake videos or autonomous weapons. The lack of transparency and accountability in some AI systems has also raised concerns about their potential misuse or unintended consequences.

In March 2023, an open letter signed by more than one thousand experts, including Elon Musk and Steve Wozniak, called for a six-month halt to the training of advanced AI models. The letter highlighted the potential risks that AI could pose to society and humanity and emphasised the need for safety protocols to be developed by independent experts. The experts urged the industry to take a cautious approach and prioritise safety over speed, recognising the profound risks that unchecked AI could bring. The letter also detailed the dangers that AI could present without proper oversight, highlighting the importance of ethical considerations and accountability in the development and deployment of AI. According to the letter, the potential risks include: the spread of propaganda and untruth, job losses, the development of nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us, and the risk of the loss of control of our civilisation. The letter mentioned that even OpenAI had said it may soon be necessary to "get independent review before starting to train future systems." The letter continued: "Therefore, we call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. Active AI labs and experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts to ensure the systems are safe beyond a reasonable doubt. Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities." Sam Altman did not sign the letter.

Let's take a look in more detail at some of the concerns surrounding AI.

  1. Creators and copyright

  2. AI replacing human workers

  3. The spread of misinformation

  4. The use of AI technologies by bad actors for nefarious purposes

  5. The AI singularity

  6. There is currently no AI regulation

  7. A few big corporations having a monopoly on AI

1. Creators and copyright

In 2018, a digital portrait printed on canvas was sold at Christie's auction house in New York for almost $500,000. The artwork became the first piece created by AI to be sold at auction and generated a great deal of attention and controversy in the art world. It challenged traditional notions of authorship and creativity, raising questions about the role of AI in the creative process. Made by the Paris-based art collective "Obvious", the portrait, named "Edmond de Belamy", was created using a Generative Adversarial Network (GAN). The GAN was trained on a dataset of 15,000 portraits painted between the 14th and 20th centuries, and then generated the final portrait of a fictional 18th-century man named Edmond de Belamy. Looking back at that image today, against the capabilities of the current line-up of AI image generators, it looks basic at best. Now, anyone can enter a few words of instruction, or prompts, and create articles, stories, photos, paintings, digital imagery, music, videos and audio of incredible quality in a matter of seconds. Artists and digital creators are rightly worried about the advent of such powerful systems. There is no doubt generative AI is a paradigm-shifting event for knowledge workers across the globe. And, to make matters worse for many digital creators, it seems that much of their own copyrighted material was used to train the very systems they are now competing with.

Edmond de Belamy AI Portrait
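
For a mechanical sense of how a GAN like the one behind Edmond de Belamy works, here is a toy PyTorch version: a generator learns to mimic a simple one-dimensional Gaussian "dataset" while a discriminator learns to tell real samples from generated ones. Obvious's model worked on portrait images; every design choice below is illustrative only.

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(3000):
        real = torch.randn(64, 1) * 0.5 + 3.0      # the "training data"
        fake = G(torch.randn(64, 1))               # generated samples
        # Train D: label real samples 1 and generated samples 0.
        loss_d = (bce(D(real), torch.ones(64, 1))
                  + bce(D(fake.detach()), torch.zeros(64, 1)))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # Train G: try to fool D into labelling fakes as real.
        loss_g = bce(D(G(torch.randn(64, 1))), torch.ones(64, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # The generator's output mean should drift toward ~3.0, the real data's mean.
    print(G(torch.randn(1000, 1)).mean().item())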

Companies like Stability AI and Midjourney have come under much criticism and are currently the subject of litigation due to the methods used in training their AI systems. Billions of images and pieces of copyrighted material were copied without the consent or knowledge of the original creators. Even Stability AI's Emad Mostaque has forecast that "future AI models will be fully licensed". In the field of data collection, it has become common practice to assume that consent is not needed, and that people do not need to be informed of, or even made aware of, the collection. There seems to be a prevailing sense of entitlement that anything available on the web can be gathered and included in a dataset. Much of this is driven by the fact that the clarity and precision of an AI-generated image depend directly on the size and diversity of the dataset the AI model is trained on, as well as the quality of the images in that dataset. Christoph Schuhmann, founder of LAION, is of the opinion that anything freely available online is fair game for copying. In order to build the LAION datasets, the group scraped visual data from companies like Amazon Web Services, Shopify, YouTube, Pinterest, EyeEm and DeviantArt, as well as the US Department of Defense and many news websites.

At present, it is uncertain whether the actions of AI companies, including those affiliated with them such as LAION, are unlawful, infringe upon copyright, violate terms and conditions, or at the very least are unethical. There are no regulations pertaining to AI in the European Union and the upcoming AI Act is not expected to address the inclusion of copyrighted materials in large datasets. Instead, lawmakers are considering a provision that would require companies responsible for AI generation to disclose the materials utilised to train their algorithms, thereby giving the creators of those materials an opportunity to take action. The underlying concept behind this requirement is that there is an obligation for developers of generative AI to document and be transparent about the copyrighted material used in the training of algorithms.

Many people think that AI systems trained the way Stable Diffusion was will cause irreparable harm to artists in both the short and long term, to the point where working artists may eventually cease to exist. Only time will tell whether the wild west of data collection will be changed by regulation and litigation, but while that may slow down the inevitable, it certainly won't stop it. The generative AI cat is out of the bag and there's no putting it back in.

2. AI replacing human workers

One of the most discussed and controversial aspects of AI is its potential to replace human workers. With the rapid advancements in machine learning and robotics, it is becoming increasingly feasible for machines to perform tasks that were previously done exclusively by humans. This has led to concerns that the widespread adoption of AI could result in significant job losses and widespread unemployment.

3. The spread of misinformation

AI images generated with Midjourney

There are also concerns that AI can be used to facilitate the spread of misinformation and propaganda on a massive scale. With the help of AI, it is possible to create highly convincing deepfakes that can spread false information, manipulate public opinion, and undermine trust in democratic institutions. AI algorithms can also be trained to generate and amplify fake news stories by manipulating online conversations and creating fake social media accounts, making them appear more credible and widespread than they actually are. This can be done on a huge scale, with AI-powered bots generating and amplifying false information and misleading narratives, and creating the illusion of widespread support for a particular cause or viewpoint.

4. The use of AI technologies by bad actors for nefarious purposes

AI images generated with Midjourney

The power of AI could also be exploited by bad actors to conduct various malicious activities, such as cyberattacks, fraud, and social engineering. AI-powered bots can launch sophisticated phishing attacks that trick users into revealing sensitive information or downloading malware. AI algorithms can create convincing fake identities that are hard to detect, making it difficult to prevent fraudulent activities. The use of AI to create deepfakes can be exploited for blackmail, propaganda, and other malicious purposes. AI can also be used for sophisticated cyberattacks, including brute-force attacks, data theft, and ransomware, which are harder to detect and prevent due to AI's ability to probe for vulnerabilities and adapt to changing security measures. AI can even automate the production of counterfeit goods, which can be sold on the black market or used to fund criminal activities. As AI becomes more advanced, it could also be used to create autonomous weapons, raising ethical and legal concerns. To help prevent the misuse of AI's power for nefarious purposes, it is vital to establish strong ethical guidelines and regulations for its development and deployment, with appropriate safeguards in place.

5. The AI singularity

Geoffrey Hinton speaks with CNN's Jake Tapper about his AI concerns

The singularity is a hypothetical future point in time when artificial intelligence surpasses human intelligence and becomes capable of improving itself recursively, leading to an intelligence explosion. At this point, AI could rapidly evolve beyond human control and comprehension, potentially resulting in a significant shift in the balance of power between humans and machines. Some experts have warned that the singularity could have catastrophic consequences, such as the extinction of humanity or the creation of a dystopian world ruled by machines. Others argue that the singularity is a far-fetched scenario and that AI will remain under human control. However, the concept of the singularity has generated significant concern and debate within the scientific community and the wider public.

6. There is currently no AI regulation

The lack of AI regulation has led to growing concerns about the potential risks associated with the development and deployment of AI systems. Without appropriate regulation and laws, there is a risk that AI could be developed and deployed in ways that are harmful to society or individuals. For example, AI could be used for biased decision-making, surveillance, or autonomous weapons, leading to discrimination, violation of privacy, and loss of life. The lack of regulation and laws could also result in a lack of accountability and transparency, making it difficult to hold individuals or organisations responsible for the consequences of AI systems. It is essential to establish clear ethical guidelines and legal frameworks for the development and deployment of AI to ensure that it is done in a responsible and transparent manner that prioritises the safety and well-being of society as a whole.

7. A few big corporations having a monopoly on AI

The concentration of AI power in the hands of a few big corporations has raised concerns about the potential for monopolistic behavior and abuse of power. Such concentration could lead to limited innovation, higher prices, and reduced consumer choice, as smaller competitors struggle to compete against dominant firms with vast resources and control over critical data. The lack of competition could also stifle diversity and creativity in the development of AI systems, leading to a narrower range of applications and solutions that may not fully serve the needs and interests of society. It is crucial to promote a healthy and competitive ecosystem for AI innovation and development, with clear rules and regulations to prevent monopolistic practices.

Will AI Take Your Job?

AI image generated with Midjourney

The sky was painted with streaks of red and orange as the sun began its descent beyond the horizon. Thunderous footsteps of colossal beasts shook the earth as mighty herbivores lumbered across the plains. Roars of ferocious carnivores, hunting down their next meal, echoed throughout the mountains. The wild and untamed ancient ecosystem was alive with activity and teeming with life; a vibrant tapestry carpeted in the day's fading sunlight. But as the long summer day began its journey to night, the attention of all creatures, great and small, was drawn to the sky. All fixated on the magnificent glowing ball of light, shimmering with an otherworldly radiance, growing larger and more brilliant with each passing minute. The thunderous footsteps and the mighty roars ceased and, for a moment, the world stood still. Eyes of all shapes and sizes were mesmerised by the beauty of the celestial object hurtling toward them.

AI paragraph generated with ChatGPT

While not exactly Shakespearean prose, the paragraph above is certainly acceptable as a piece of creative writing. It was generated using one prompt that was refined twice in ChatGPT, and three different responses were then combined to create the best possible paragraph. The process took less than five minutes. It may seem obvious, but the power of AI is truly staggering, and we are merely scratching the surface of what it can do. In terms of a journey, we could compare AI's development to that of the automobile: it is currently at the stage where someone has just invented the wheel. Could the wheel's inventors ever have imagined a Tesla, a Bugatti Chiron, a Formula 1 car or even a BelAZ 75710 (a 70-foot-long, 360-tonne, 4,600-horsepower earth-moving truck with two 65-litre diesel engines that weighs around 800 tonnes fully loaded)? It's difficult to fathom the potential impact that AI will have on the world in the next five to ten years and beyond. The thought is mind-boggling.

BELAZ-75710

The creation of AI is a monumental achievement, ranking alongside some of the most transformative inventions in human history. It is often compared to the Gutenberg press, which revolutionised the way we access information by making books widely available, and to the discovery of electricity, which powered the Industrial Revolution and modernised countless aspects of our lives. AI has the potential to impact nearly every facet of human existence, from the way we work and communicate to the way we diagnose and treat diseases. It has already made significant strides in fields such as finance, transportation, and manufacturing, where its ability to analyse and process data at lightning speed has resulted in increased efficiency and profitability. The possibilities for the future of AI are vast, and some experts predict that it will lead to unprecedented advances in science, medicine, and technology. It has the potential to help solve some of the world's most pressing challenges, such as climate change, hunger, and disease.

But like the dinosaurs who stared in awe and fixated on the beautiful glowing ball in the sky, until it hit and fried them all to a crisp, we are now marvelling at our ability to create incredible digital material with hardly any skill at all. We can be artists if we've never held a paintbrush or a piece of charcoal, we can be writers without ever having read a book, we can create incredible videos without knowing what a J Cut is, we can be voiceover artists regardless of our accents and so much more. We are patting ourselves on the back for being highly skilled prompt engineers even though that's a bit of a misnomer. But will all of that come at a cost? Will AI burn us all to a crisp in a 5,000 degree inferno?

"AI will not replace you. A person using AI will," is a popular phrase being circulated at the moment and, while it's not quite wrong, it's not quite right either. A more precise version could be... "AI will not replace you. A person using AI will (replace 20 or 30 or 50 or 100 of you)." But that doesn't have the same ring to it. I've also heard comparisons that suggest AI is not a threat to human expertise, much like how calculators didn't replace mathematicians. However, equating AI to a calculator is like comparing a log rolling down a hill to Max Verstappen's RB19. These types of analogies are dangerous and either based on ignorance or intentional agendas. Make no mistake about it, AI and automation will eventually replace a huge percentage of knowledge workers. One human overseer using AI could potentially do the work of hundreds of people. It is impossible to predict how it will play out exactly but it's a safe bet that AI will get rid of many more jobs than it creates in the medium and long term. And it's already begun. Not long after the launch of ChatGPT companies and organisations announced job cuts and hiring freezes specifically due to AI.

According to Bloomberg News in early May 2023, IBM's CEO, Arvind Krishna, revealed that the company plans to halt or reduce hiring for certain positions because approximately 7,800 jobs could be replaced by AI in the near future. Specifically, recruitment for back-office functions such as human resources will be suspended or slowed, and up to 30 percent of non-customer-facing roles could be automated within the next five years. This reduction in staff may also involve not filling positions that become vacant through attrition. Dropbox, a publicly traded and profitable tech company, has also announced a 16% reduction in its headcount, citing the integration of AI technology as one of the reasons; despite strong financial performance over the past few years, the company deemed the significant layoff necessary. And earlier this year, in January 2023, only two months after the official launch of OpenAI's ChatGPT, BuzzFeed, one of the largest entertainment sites in the US, announced that it was laying off 12% of its staff and replacing them with OpenAI tools. Meanwhile, joint research by Stanford and MIT found that GPT-3-based software improved the performance of customer service agents by up to 35%, further suggesting that AI's integration into various industries could significantly change the roles of knowledge workers.

AI is already transforming the job landscape at breakneck speed across multiple professions, and as the technology improves and proliferates through other industries, the transformation will become even more pronounced.

Final Thoughts

We are now at an inflection point with artificial intelligence, and it's vital that we consciously guide its development. In 2022, the sector received $4.5 billion in investment across 269 deals, and that's only on an upward trajectory. Whether we like it or not, AI will become a permanent part of our lives, and as we face a potentially dystopian future, it's increasingly probable that we will need to integrate it even further, despite potential concerns and controversies.

It is likely that many of these concerns and controversies will eventually be addressed through the creation of laws and regulations, copyright protection for training material and datasets, the development of effective AI models to counteract malicious ones, and so on. However, the one unavoidable outcome of AI's integration is job displacement. With some analyses projecting that AI will redefine two-thirds of global jobs, job security is understandably a valid concern. Nonetheless, it is a certainty that those who fully embrace and utilise the power of AI will be much better positioned than those who do not, as understanding AI will be a critical skill for workers as we move into the future.

"Success in creating effective AI could be the biggest event in the history of our civilisation. Or the worst. We just don’t know. So, we cannot know if we will be infinitely helped by AI, or ignored by it and sidelined, or conceivably destroyed by it." - Stephen Hawking, Web Summit 2017


Further Reading

The Hot New Job That Pays Six Figures: AI Prompt Engineering (Forbes)

ChatGPT took their jobs. Now they walk dogs and fix air conditioners (The Washington Post)

Raspberry Pi Camera Takes Photos Using AI Instead of Lens (Tom's Hardware)

This Bricklaying Robot Is Changing the Future of Construction (Redshift by Autodesk)

It's over for product photographers. AI can now create professional product images for free. (Twitter @moritzkremb)

AI-Threatened Jobs Are Mostly Held by Women, Study Shows (Bloomberg)

Bill Gates says the winner of the AI race will be whoever creates a personal assistant (Fortune)

Software company CEO says using ChatGPT cuts the time it takes to complete coding tasks from around 9 weeks to just a few days (Business Insider)

Sci-fi author says he wrote 97 books in 9 months using AI tools (Business Insider)

China is using AI to raise the dead, and give people one last chance to say goodbye (Business Insider)

Microsoft Says New A.I. Shows Signs of Human Reasoning (The New York Times)

Irish Teacher: AI is going to transform how we teach and how we learn (Irish Examiner)

Artificial intelligence to hit workplace 'like a freight train', energy boss warns (Sky News)

BT to cut 55,000 jobs with up to a fifth replaced by AI (BBC News)

Managers who use AI will replace managers who don't, says an IBM exec (Business Insider)

The AI race heats up: Google announces PaLM 2, its answer to GPT-4. PaLM 2 can code, translate, and "reason" in ways that best GPT-4, says Google (Ars Technica)

AI Expert Says ChatGPT Is Way Stupider Than People Realize (Futurism.com)

Noam Chomsky: AI Isn't Coming For Us All, You Idiots (Futurism.com)