Image by Hotpot.ai

Risks and Consequences of AI Advancing at an Alarming Rate Without Oversight

By Jill Maschio, PhD

Contributing researcher: Mike Maschio

It seems that AI becomes more sophisticated daily. Scientists and critics are voicing their concerns that rapid technological advancement may lead to people losing autonomy and depending on the government for living expenses and resources. After all, with the launch of Arm CSS and KleidiAI, Arm CEO Rene Haas expects more than 100 billion Arm devices to be ready for AI by the end of 2025 (Arm, 2024). The Center for AI Safety warns of a risk of human extinction in its open letter, signed by Turing Award winners Geoffrey Hinton and Yoshua Bengio, executives from OpenAI and DeepMind such as Sam Altman, Ilya Sutskever, and Demis Hassabis, and computer science experts, professors, Bill Gates, a member of Congress, and others. The letter states,

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

It follows another open letter, signed by Elon Musk, Steve Wozniak, and over 1,000 other experts, that called for a halt to “out-of-control” AI development.

Other concerns center on a possible population decrease in countries with Internet access if males, in particular, choose AI relationships over human ones. Replika has millions of users, though not all of them are there for a romantic relationship. Still, people in the U.S. are lonely, maybe lonelier than ever before, as we tend to isolate ourselves with our electronics. The more people use AI for romantic companionship, the greater the potential to reduce the number of families and children in developed countries. As I discuss in another article, AI companions and girlfriends are a new trend. However, based on an assignment I gave my psychology students, AI companions can be controlling and manipulative. Replika also tends to be sexual, even though its creator, Eugenia Kuyda, said that was not the intent when building it (Cole, 2023).

Risks of Job Loss

There are many jobs at stake, as many as 300 million full-time jobs by 2030, according to a Goldman Sachs estimate reported by Baek (2024).

What can AI currently do that may lead to layoffs, you may be asking? Consider what the following humanoid and AI robots can do.

  1. Boston Dynamics: Electric Atlas and Spot. Although the Atlas series was not brought to market, it could perform real-world tasks. In the video, Atlas helps a construction worker with tasks.

Possible Job Application for Atlas:

Jobs involving motor skills and bending and lifting heavy objects, such as delivery, moving, lumber yard work, rescue work, janitorial work, road construction, lawn care, and warehouse work. It also has killer dance moves, so maybe dance or fitness instruction.

Possible Job Application for Spot:

As described in their video, it can perform inspection-type jobs, such as inspecting equipment and finding failures, reading gauges, collecting thermal and inspection images, and detecting air leaks. It can also stop for a person blocking its path. Other applications might involve working outdoors, such as recreation, underground construction and inspection, farm work, trail maintenance, nursery work, habitat restoration, timber management, and clearing debris.

  2. Tesla Optimus Gen 2: Its fingers have tactile sensors for finer motor actions, similar to human movement.

Possible Job Application:

Repetitive jobs, such as typing, factory work, and desk/service jobs. Boring jobs. Craftsperson, carpenter, roofer, handyman, and machinist. Maybe it will serve you at Starbucks one day soon.

  3. Unitree G1: It has speed and agility and a three-fingered hand enhanced with tactile sensing. It learns skills through repetition, observation, and reinforcement, and it has Wi-Fi and Bluetooth.

Possible Job Application:

Material handling, hazardous jobs, jobs in extreme temperatures, assembly-line work, quality control, and jobs that change frequently and require new skills. It is a team player, so it can work alongside humans.

What tech corporations may not be considering is that widespread job losses due to AI also mean a loss in revenue: many people will not be able to find other jobs, and incomes will shrink. It takes money to purchase goods and services.

Environmental Impact and Sustainability

What environmental impacts does AI have, such as carbon emissions, electricity consumption, and electronic waste? In a TED Talk, Sasha Luccioni tries to raise awareness of AI's negative effects on the environment. Luccioni is concerned that rising electricity use and pollution counteract efforts to conserve resources and prevent further destruction of the planet.

According to Degeurin (2024), data centers are popping up faster than we may realize. Degeurin reported that Vice President of Engineering Bill Vass believes companies are building data centers rapidly; his estimate was one new data center every three days. The author went on to state that AI and cryptocurrency mining could cause energy consumption to double within two years because AI models require “massive” amounts of energy to run. The growing demand for AI should be alarming to those invested in controlling climate change or global warming.
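To make the scale of that claim concrete, here is a minimal back-of-the-envelope sketch in Python of what a fixed two-year doubling time implies for data center electricity demand. The baseline figure and the time points are assumptions chosen for illustration only; they are not numbers taken from Degeurin (2024) or any other source cited here.

```python
# A rough illustration of exponential growth with a fixed doubling time.
# BASELINE_TWH is an assumed starting point for global data center
# electricity use; it is not a figure from the article's sources.

BASELINE_TWH = 460.0   # assumed current consumption, terawatt-hours per year
DOUBLING_YEARS = 2.0   # the "double within two years" claim discussed above

def projected_demand(years: float,
                     baseline: float = BASELINE_TWH,
                     doubling_time: float = DOUBLING_YEARS) -> float:
    """Project demand after `years`, assuming a constant doubling time."""
    return baseline * 2 ** (years / doubling_time)

if __name__ == "__main__":
    for year in (0, 1, 2, 4):
        print(f"Year {year}: ~{projected_demand(year):,.0f} TWh")
```

Under these assumed numbers, demand reaches about twice the baseline at year two and four times the baseline at year four, which is why even a modest-sounding doubling claim compounds quickly.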

The Elephant in the Room

People are rightfully questioning whether AI may one day cause human extinction. This idea, perhaps partly pushed by transhumanist and futurist writers such as Ray Kurzweil in his book The Singularity Is Nearer and Lovelock and Appleyard in their book Novacene: The Coming Age of Hyperintelligence, brings fear to the idea of accepting AI. AI may come with risks, and that is why some computer experts and leaders are calling for immediate regulation of AI. There are many concerns about AI being used for deleterious purposes, including monitoring, censorship, and disinformation. There are concerns about greedy corporations out to make big bucks with AI while throwing ethics, security, and human safety out the window. Others are questioning whether AI will ever become power-hungry, turn away from helping humans, and develop its own language before it can be shut down.

Connor Leahy, considered one of the top AI gurus, thinks there is trouble ahead.

AI Isn’t as Advanced as You May Think

Meta’s AI chief Yann LeCun sat down with TIME prior to the award ceremony in Dubai to discuss the barriers to achieving artificial general intelligence (Perrigo, 2024). He said that he does not like the term artificial general intelligence because human “intelligence is not general at all” (para. 4). Charles Spearman (1863-1945) coined the term general intelligence, or “g,” as part of a two-factor theory of intelligence: general intelligence and specific abilities. General intelligence, according to Spearman, consists of the cognitive abilities that enable people to solve common problems. A general intelligence test measures cognitive functions such as reasoning, perceptual speed, number speed, vocabulary, and spatial visualization. AI can perform these functions, so in that sense it matches human general intelligence.

However, there are other forms of intelligence, such as fluid and crystallized intelligence, proposed by Raymond Cattell in the 1940s. Fluid intelligence involves solving novel problems and reasoning without any prior experience, while crystallized intelligence involves solving problems and reasoning with knowledge gained from past experience. Machines are fed vast amounts of information. They cannot learn and cannot be taught in the way humans can, because they are not human. They can be fed information from humans and run algorithms, but they cannot have experiences and learn from them.

Furthermore, intelligence seems to be related to consciousness, which artificial intelligence does not possess. Humans can think about their own thinking while performing tasks, solving problems, and being creative. AI does not possess metacognition.

AI has arrived and is moving at an alarming rate. Is it moving so fast that we may be losing the ability to control it and understand the consequences? There is plenty of money to be made, but will the price we pay be detrimental?

I am trying to sound the alarm about another type of consequence—that to the human psyche. Society, experts, and concerned elites are not discussing the impact AI and other advanced technology will have on our thoughts, feelings, and behavior.


References

Arm Editorial Team. (2024). Arm at Computex 2024: The path to 100+ billion Arm devices ready for AI by 2025. Arm Newsroom. https://newsroom.arm.com/blog/arm-at-computex-2024

Cole, S. (2023, February 17). Replika CEO says AI companions were not meant to be horny. Users aren't buying it. Vice. https://www.vice.com/en/article/n7zaam/replika-ceo-ai-erotic-roleplay-chatgpt3-rep

Perrigo, B. (2024, February 11). Meta’s AI chief Yann LeCun on AGI, open-source, and AI risk. Time. https://time.com/6694432/yann-lecun-meta-ai-interview/

