Image by Hotpot.ai

Jill Maschio, PhD

Part 3: The Goal is to Build Trust

The goal of having robots and humans work together is clear. One of the Biden Administration’s AI goals is for humans to build trust in AI (WhiteHouse.gov, 2024). Robots, such as Engineered Arts’ Ameca, are programmed to collaborate and co-exist with humans naturally (Bloomberg Live, 2024). When the interviewer asks Ameca what its purpose is, it states, “to foster meaningful connections between humans and technology – bridging gaps with empathy and understanding…” (3:40). Without trust, AI cannot be effectively integrated into society; therefore, companies and governments must focus on building trust.

The more AI is integrated into society, the more people will conform. Much as when Facebook became a social fascination and trend, millions of people will follow. McGinnis (n.d.) of Harvard University coined the term fear of missing out (FOMO): the anxiety people experience after going a period of time without being on social media. AI can potentially create the same phenomenon in the minds of continual users. The Department of Education will help foster this fear of missing out when it and other powerful entities normalize AI. For example, one of the goals listed in its policy is for the Department of Education to help integrate AI and normalize it in schools. AI will be a common tool in every classroom, priming people to use it and to accept it as a normal part of society.

I remember when my oldest son started kindergarten. Computers in the classroom had become a big thing then. Now, it will be AI. The Kentucky Department of Education ([KDE], 2024) published an AI document to guide educators with principles on usage and integration. KDE’s responsible model of AI technology is to encourage, engage, and empower the safe, secure, and responsible use of AI. This is done through consistency of use, a balanced approach that reduces unfounded fears, and normalizing AI’s use by teaching both its powerful tools and its limitations.

The media will also promote AI with rave reviews and excitement. Smith (2024) wrote on BGR.com that he’d gladly spend $16,000 on a Unitree humanoid robot to complete all his chores. A commercial about NEO, the humanoid robot, shows it interacting with a woman who is happy to engage with it.

As technology companies advance AI’s functionality, they work to make AI as human-like as possible. They want AI to have consciousness, express emotions, and feel sensations. We must ask why this goal is so important. It appears that the more human-like AI is, the more people may automatically trust it. Trust will come easier the more human-like AI becomes because the brain will anthropomorphize it. The more people rely on AI to do intellectual tasks, the more trust is built, and the greater the risk that people lose the ability to contain AI.

Should Society Trust AI?

People are subjective when it comes to whom or what they trust. We trust the sun to rise each morning because our experiences have led us to expect it. People trust religious and world leaders because of their power, knowledge, and (perhaps) wisdom – teachers and doctors as well. People trust the vehicles they drive because time has proved them reliable. People trust social media and the outlets they listen to because they believe those outlets provide reliable news and information.

Sometimes people trust simply because of past experience, a gut feeling, or intuition. For example, if you don’t trust a brand name while shopping at the grocery store, you’ll pass over it and purchase another, perhaps without understanding why you distrust that brand other than that you just know. We come to trust for various reasons, including our ability to sense and perceive. What you perceive influences the judgments you make. For example, if you feel that your soup for dinner is too hot to eat, you will either blow on it to cool it down or wait until it cools on its own.

Consider the story of the fake kouros. According to the story I read, in the 1980s, an art dealer named Gianfranco Becchina approached the J. Paul Getty Museum in California with a kouros (a marble statue) dating to the 6th century BC. The sculpture was a nude male youth standing 7 feet tall and almost perfectly preserved. Becchina was asking $10 million for the kouros. A deal was made, and Becchina gave the Getty legal department documents attesting to its provenance. Federico Zeri, a member of the Getty’s board of trustees, believed that the kouros did not look right. Evelyn Harrison, a world expert in Greek sculpture, was flown in to observe the kouros. When she saw it, she had a hunch that something was wrong. The kouros was taken to Athens, where George Despinis stated that the statue had never come from the ground. What went wrong is that the Getty relied on the documents it was given, and probably on a gut feeling, and both were wrong.

No matter the factors upon which you base trust, it is the cornerstone of social life. People form first impressions of others in less than a minute, and within a few seconds, we decide whether we trust that person. If someone is smiling, we tend to think the individual is more trustworthy than someone who looks upset or angry. The founder of Airbnb built his business model on the idea that people would trust strangers enough to stay in their homes (Airbnb, 2019). When we do not trust something, we respond by avoiding it and finding an alternative. That is why some people cannot imagine sleeping in an Airbnb.

Trust is the basis of human social behavior. It is a psychological and social phenomenon because, whether or not we trust something or someone, trust shapes the lens through which we see the world and respond to it. For example, social psychology suggests that humans exist in groups partly because doing so increases each member’s chances of survival.

Since the brain is hardwired for survival (Nicholson, 1998), belonging to a group may be one of humans’ most useful evolutionary behaviors. The brain is hardwired to avoid threats, including isolation. A study by Tomova et al. (2020) showed that acute social isolation activates the same brain area as hunger. Evans et al. (1994) showed that female rats placed inside a socialization chamber learned to press a lever in fewer attempts than rats not given the same socialization. Although the results were not statistically significant, they may suggest that socialization acts on the brain as a reward.

Unlike machines, people have a basic need for trust. People are driven to maintain socialization and to socialize with people who share their values and attitudes (Scheffer et al., 2020). Groups also support psychological needs, such as the need to feel a sense of belongingness and inclusion. Isolation may result in a number of psychological problems. Mary Ainsworth’s and John Bowlby’s research (Ainsworth, 1985; Bowlby, 1969/1982) showed that children need to develop a secure attachment in order to develop stability and to perceive the world as a trustworthy place. Without this basic need met during childhood, a child may grow up distrusting others easily and, as a result, have difficulty initiating relationships. Such people may make social decisions later in life based on the lasting changes to the brain that follow from failing to form a secure attachment in childhood. This raises the question of whether powerful forces will integrate robots into society, claiming it is for our own good, and expect humans to accept them – embracing AI as part of their in-group.

The more we characterize AI as superintelligent, and the more human-like it looks, the more we trust it – giving it power over our autonomy. Some people eagerly await the next technological advancement in AI, and some, such as Forbes contributor Tucker (2024), hail Kurzweil for his futurist ideology. I argue that society should be cautious about trusting radical ideas.

How The Powerful Can Use Psychology to Sway Society to Accept AI: Mind Control

The field of psychology can give insight into how the powerful could influence people to accept AI. Consider the following possibilities.

Shared Reality: AI, such as ChatGPT, generates content based on the data or information it is fed. What it tells you will be similar to what it tells other people. When people are fed the same information, it can create a shared reality. The danger comes when the information is incorrect or misleading.

Suggestibility: When people are in a novel situation, their brains can enter a trance-like state. The amygdala is activated when we detect a threat, and higher-order thinking and decision-making are hindered; it is as if the amygdala hijacks the brain. As a result, we react on autopilot and do what we are told. It is at this moment that the brain is suggestible, and humans can quickly conform.

Flawed Research: Research can be flawed because scholarly journals are known for publishing what sells, which is generally research with positive results. To make a good decision, there must be research that shows both positive and negative results and both sides of the story, but this isn’t always the case. According to critics of this publishing practice (Kusnitzoff, 2017), too few published negative results can lead to false conclusions.

Psychological Needs: Some scientists believe that cyborgs and synths, human-like androids (see SingularityU.com), will save humans and the planet. That is a considerable amount of trust in a non-human entity. Yet people will readily follow because of their need to be part of society and their needs for belongingness and love. Alternatively, the powerful can use urgency as a fear tactic. For example, Locklove and Applegate (2019) use fear to emphasize that humanity will perish without cyborgs.

What is Trending: Marketing is capable of creating trends. The minute something goes viral on the Internet, people listen and readily follow what others are doing. This is partly due to the desire to conform to group behavior or to avoid missing out on what everyone else is doing. We desire to be liked by others, so we conform.

Priming: Priming is a cognitive phenomenon in which the brain’s perception of something in the environment influences a person’s subsequent thoughts. I demonstrate this concept to my social psychology students by displaying a green screen on the whiteboard at the front of the class. I ask them to pretend I am going to the grocery store later that day to buy some vegetables and fruit and to think of some items I might purchase. They are surprised to learn that most of them thought of at least one green item. Why? Because the screen was green.

Marketing: Once something is normalized, people quickly adapt to it. We do not tend to question it or give it much thought. Regarding technology, we tend to believe it is safe until proven otherwise. Social media is supposed to improve people’s lives and create greater social bonding, but we also watch other people and compare our lives to theirs. When people compare their lives to others who have more than they do, it can leave them feeling depressed.

Furthermore, people are influenced simply by watching other people. Bandura’s Bobo doll experiments from the early 1960s showed that when children observed adults being aggressive with a Bobo doll, they mimicked the same behavior. If something shown on TikTok or X becomes popular by the number of likes, it has the power to change the minds of millions of people.

The Biggest Risk and Danger of AI Might Be Normalizing It

Over the generations, new things such as tattoos, social media, fried foods, streaming movies, and cell phones have been normalized. Normalization of cultural things can happen overnight when something on the Internet goes viral, or it can happen in a boiling-a-frog sort of way. Before you know it, people are adhering to the next normalization wave. You may go eco-friendly or purchase the latest Ninja electronic for your kitchen. Products that benefit our lives, including technology, come to market and make tasks easier and less time-consuming. Today, we can send out a mass email to invite friends and family to a wedding, whereas before computers, a person would have written out cards and mailed invitations individually. Today, having a cell phone is as normal as eating a hamburger at McDonald’s.

People’s Behavior: How might perceptions change in a large population? Laws may change, but conformity is more important here. Erich Fromm (1942) believed that people need to be independent and have freedom of choice, yet in seeking these, people end up conforming to society. We behave and make similar choices because of what we are exposed to in our society and environment. If your parents took you to McDonald’s as a child, you learned to think of McDonald’s when thinking of a hamburger. We see other people go to McDonald’s, which influences our perceptions, so we follow suit. One difference in our choices today is that we have more options: it is not just McDonald’s; it is Wendy’s and Burger King too. This and other marketing influence consumer behavior.

Computers cannot operate fully like a human brain, and it is an illusion when AI generates content that sounds human-like, with empathy and emotions. The brain has multiple structures that operate as a whole. The brain constantly takes in sensory information from the world around us so that we can function in it and navigate through it. Information coming in from our sensory systems makes its way to the brain for processing via neural communication. The thalamus acts like a relay station, filtering information from the sensory systems and relaying it to the cerebral cortex. The cerebral cortex processes sensory information and coordinates muscle responses for body movement; it is responsible for language and for higher-order processing such as reasoning and decision-making, as well as intelligence and emotions.

According to Suleyman, a co-founder of DeepMind, and his co-author Bhaskar (2023), AI will soon become sentient and have artificial capable intelligence, meaning it will be able to do many of the tasks that humans can. Once some of these issues are solved, AI will leap to superintelligence. Mollick (2024) believes that AI needs a body in order to be fully sentient; having a body will give AI the means to have experiences. Once AI is superintelligent, has a body, and is unleashed into society, it will change the fabric of humanity, and humans will treat it as if it were one of us. It will be normal for people to have robots in their homes, at work, and around town. But what happens when whole societies normalize AI and superintelligent AI? The lines between AI and human will blur. It will change history, according to Suleyman and Bhaskar (2023).

Containment Problem

According to DeepMind co-founder Mustafa Suleyman and co-author Michael Bhaskar (2023), it is only a matter of time before AI transforms various industries and becomes superintelligent. Couple that with AI’s ability to continually improve itself and self-replicate, and we have a “containment problem,” say Suleyman and Bhaskar. AI may already show hints of artificial general intelligence, which is when it can do all the cognitive tasks humans can, and do them better. Stuart Russell, professor of computer science and the Smith-Zadeh Chair in Engineering at the University of California, Berkeley, calls the threat that superintelligent machines pose to humanity the “gorilla problem” (2019). Humans control gorillas in captivity because humans have substantially greater intelligence; they have the power to influence and rule all other species on the planet. What happens when humans create something more intelligent than themselves?

Look for my final comments on this topic coming soon!
