Image by Hotpot.ai

By Jill Maschio, Ph.D.

‘Gentle Singularity’ may be here, according to Sam Altman

Sam Altman, CEO of OpenAI, is signaling that the Singularity is nearer. He stated on his blog that “ChatGPT is already more powerful than any human who has ever lived”, and that they are close to building superintelligent AI. The Singularity is the point at which AI surpasses all human intelligence. His enthusiasm for superintelligent AI shines through in his writing, as he believes that AI will significantly improve our lives, from freeing up our time to making life-saving discoveries.

Altman states that two challenges still need to be overcome: the alignment problem and making AI more affordable. The alignment problem is not new; in 1960, AI pioneer Norbert Wiener described it as follows:

 If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively … we had better be quite sure that the purpose put into the machine is the purpose which we really desire.    

This quote captures the fundamental challenge: ensuring that an AI system’s objectives align with those of its designers or users and reflect human intentions. But what are the human intentions? I don’t recall this being a public discussion. Is it being settled behind closed doors in corporate boardrooms? When does the rest of society get to weigh in on critical questions of ethics and human autonomy, and on the many possible consequences of developing something that is planned to be imposed on humanity? Since we aren’t having that discussion as a society, I’ll pose some questions here.

  • Are we going to lose our humanness from being overly reliant on AI?
  • At what point will we stop and ask questions about the meaning of life once AI is working and living beside humans?
  • At what point do we stop and consider any psychological harm from anthropomorphizing AI? You can read my article about that.
  • How might AI further isolate people (much as social media has for some), and erode social skills, creativity, and intellect?
  • Humans are shaped by their environment, by what they hear, see, and experience. As AI becomes part of daily life, what it tells people will influence them. To what extent will the human psyche change?
  • Human-to-AI relationships are not the same as human-to-human relationships. How will relationships with AI impact humans’ ability to connect with humans? You can read about that here.

I believe AI has the potential to bring enormous benefits to society, including reduced workloads, increased prosperity for those who invest in AI, and advancements in industries such as medicine, resource extraction, and exploration. Yet the psychological impact on humans is rarely discussed, apart from employment displacement.

I challenge those in positions of ushering in AI to consider how AI will impact their humanness and that of society. Consider this through the lens of psychology and pose challenging questions while developing AI. Hindsight is a funny thing. 
