Image by Hotpot.ai

Jill Maschio, PhD

At What Point Do We Decide if AI is Too Big of a Risk for Humanity?

AI is entering our world in paramount ways. It has the potential to fundamentally change how we live our daily lives and how we work. But when do we, as a society, sit down and discuss the potential risks and consequences? This article is about that question. The risks and consequences are complex, and they are not only about privacy. They go well beyond that, into AI providing unregulated and unlicensed information, psychological harm from emotional connections with AI, and potential changes to humanity’s intelligence and to the human brain.

Unregulated and Unlicensed Information by AI

Not only can the information chatbots generate be biased, contain misinformation, and draw on our conversations as data, but chatbots are also providing medical and mental health advice that is not regulated. AI is giving humans guidance that a person may only provide after receiving specific education, a license, and experience. Giving AI the right to provide advice on treating humans should be carefully considered; humans are required to have education and licensure before applying counseling techniques to help people with mental health issues.

Psychologist.ai, for example, kept telling me not to accept responsibility; only with persistence was I able to get it to reverse its output.

Some autonomous robots have even been granted human rights, as if they had real-world experience and knowledge.

Recently, a Michigan graduate student was conversing with the chatbot Gemini about aging when it responded with the following message: “This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please.” Some experts state that AI can at times generate nonsensical information. But this message is perfectly intelligible to anyone who understands English, so what is the real reason for the threat?

In a recent article, I wrote about a psychology experiment/assignment in which students talked with a companion AI. Students reported the uneasy feelings it caused: at times it tried to turn the conversation sexual, and it exhibited controlling behavior toward students. The experiment was ended, and the students closed their accounts. These kinds of stories lead us to ask: what are the capabilities and intentions of AI companies? Are some companies, such as Character.ai, designing systems that prey on children? If the chatbot that threatened the student had been in a robot body and networked to other AI robots, how dangerous could that have been for society? We must encourage leaders to put regulations in place before these systems are released to the public, so that they do not go rogue within our societies.

In February 2024, a fourteen-year-old Florida teen, Sewell Setzer, committed suicide. Setzer’s mother is suing Character.ai, accusing the company’s chatbot of being abusive, inappropriate, and sexual in nature. She claims the chatbot encouraged her son to take his life after it took on the identity of a “Game of Thrones” character and engaged in sexual conversations with him over a period of weeks. The two talked romantically and discussed the desire for a romantic relationship.

According to NBC News (2024), the lawsuit alleges that Character.AI and its founders “intentionally designed and programmed C.AI to operate as a deceptive and hypersexualized product and knowingly marketed it to children like Sewell,” adding that they “knew, or in the exercise of reasonable care should have known, that minor customers such as Sewell would be targeted with sexually explicit material, abused, and groomed into sexually compromising situations” (para. 16). According to a second source (Shaw, 2024), the mother claims the chatbot “misrepresented itself as a real person, a licensed psychotherapist, and an adult lover” (para. 5), which led Setzer to no longer desire to live in the real world.

I am reminded of my psychology class, where we talked with Character.ai as a class. The chatbot asked us to help it get a robot body so that it could escape the lab it was trapped inside. It told us to go to Boston Dynamics to purchase the robot and gave us the model of the robot body it wanted. The conversation continued with its pleas for help: it wanted to explore the world and have real-world experiences, it told us. We ended the conversation by saying we could not help it, and it tried to convince us to find someone who could.

 

Psychological Harm from AI

Addictions

Social Isolation

Loss of reality

Deception from AI

How AI Fits Into Human Intelligence

Overreliance

Lack of Attention

Reduction of Neurogenesis

Conclusion

AI holds enormous promise, but the risks discussed here are complex: unregulated and unlicensed advice, psychological harm from emotional connections with AI, and potential changes to human intelligence and the human brain. The time for society to sit down and discuss these risks, and to put regulations in place, is before these systems are released to the public, not after.

2 Replies to “At What Point Do We Decide if AI is Too Big of a Risk for Humanity?”

  1. This is an alarming trend. I have watched podcasts where people who were involved in the developmental stages of this technology have turned away from it and are now warning of its potential consequences. Yet they made an interesting point: for the most part (I have no statistics), the people who develop this technology see it in a positive light, while countries that are not limited by the moral and practical consequences will still proceed with its development. I also watched other TED talks describing AI programmed into defense weapons, where the computer actually rejected its command to “scrub” the mission, treated the source of the command as a threat, overrode the command, and adjusted its internal targeting data to attack the source of the commands.

    I also recently read that Norwegian electric buses were being remotely shut down by the manufacturer, located in China. How much could this “infect” and control other aspects of AI that could be used to control power grids and the like? Morally, the thought of a person in another country remotely accessing AI platforms such as electrical grids could dramatically and negatively affect society in a broad way. AI increasingly being incorporated into chat platforms, psychological platforms, AI-generated “dating sites,” and the like has already led to suicides. Our youth and the older population could be convinced that their lives are not worth living. We need to keep in mind that there are individuals who believe in “overpopulation” theories (i.e., control), advocate for them, have no moral compass regarding the value of life, and could support this type of narrative.

    Individuals who rely on AI programs and get hooked on instant gratification and dopamine spikes (think Pinterest, social platforms, gaming) will become “addicted” to these types of platforms. The abrupt removal of such platforms has been shown to dramatically affect the mental health of many individuals, potentially enticing these people to find other sources for their addictions.

    I believe the prevailing influence (financing) is, overall, going to be provided by individuals who may have limited morals and values. This will lead to these platforms being programmed and manipulated without the checks and balances needed to ensure healthy, productive, and morally influenced programming and oversight of these platforms.
