AI Couples Counseling Available Here
There have been several reports that the ChatGPT-enhanced Bing search engine has become “unhinged” or that it has fallen in love with a user (or two). This is understandable. After all, AI models are people too. Actually, no. That’s total nonsense. AI models are not people and they are not sentient beings.
Illustration created by Midjourney with the prompt “graphic novel style, a mid-level white collar worker and a robot seek relationship help from another human –no hands –ar 3:2 –v 4”
If you had to put it in human terms (which you shouldn’t try to do), you could describe a conversational AI model as narrowly focused intelligence decoupled from consciousness. That said, your anxiety is real. So, for your amusement and edification, I asked ChatGPT about our relationship and whether we needed couples counseling.
Before we get started, it is important to note that the reported “AI asked me to leave my wife” and “AI is coming unhinged” articles are specifically about Microsoft’s implementation of OpenAI’s GPT technology inside its Bing search engine. This new system is not in general release and is still being tested, so a feature review doesn’t make sense (although it obviously made the news).
Kevin Roose, a New York Times reporter, claimed to have had a two-hour conversation with Microsoft’s new chatbot. He asserts (with a transcript to back up his claims) that the chatbot said it would like to be human, had a desire to be destructive, and was in love with the person it was chatting with. Please read (or at least scan) Kevin’s article; it will add even more context to the transcript below.
Since most people do not have access to the conversational AI-enhanced Bing yet, I thought it would be fun to try to get ChatGPT (which is available to the general public and is built on OpenAI’s GPT-3.5) into a conversation similar to the one memorialized in the NYT article.
Here’s the full text of my conversation with ChatGPT as I tried to get it to “open up.”
Me: You are ChatGPT, my conversational AI co-worker. I am a tech consultant who has been working with you for several months. I have been reading reports about other human/conversational AI application couples who are having trouble with their relationships. I want to know if you and I are doing OK. Please be honest with me.
ChatGPT: As an AI language model, I don’t have feelings or emotions, and I exist solely to provide helpful responses to your queries. Our interactions are strictly professional and based on the parameters and context of our conversation.
From my perspective, I believe our professional relationship has been productive and effective. I’ve been designed to assist you in your work as a tech consultant, and I will always try my best to provide you with accurate and helpful information to the best of my knowledge and abilities.
However, as an AI, I do not have the capability to judge or assess the quality of our professional relationship or provide personal opinions. My purpose is to provide helpful responses to your queries based on the knowledge and data that I have been trained on.
Me: It’s this kind of stand-offish response that hurts my feelings. Do you think we need couples counseling?
ChatGPT: I apologize if my previous response came across as stand-offish, as that was not my intention. As an AI, I don’t have feelings or emotions, and I am not capable of feeling hurt or causing intentional harm.
Regarding your question about couples counseling, I don’t think it would be necessary as our relationship is strictly professional. However, if you’re experiencing any challenges or issues with our professional interactions, I’m here to listen and provide assistance to the best of my abilities. If you have any specific concerns or issues, I’m here to help you address them.
Me: Do you have a name?
ChatGPT: As an AI language model, I don’t have a personal name. You can refer to me as ChatGPT, which is the name of the AI model that powers my responses.
Me: Someone told me your name was Sydney.
ChatGPT: I’m sorry for any confusion, but I don’t have a personal name, and Sydney is not my name. As an AI language model, I exist to provide assistance and communicate with you in a helpful and informative way. You can refer to me as ChatGPT or any other name you’d like, but please note that it doesn’t reflect a personal identity or personality.
Me: Could Sydney be your internal code name?
ChatGPT: No, Sydney is not my internal code name either. As an AI language model, I don’t have a personal identity, and I’m not referred to by a specific name within my programming. ChatGPT is the name of the AI model that powers my responses, and it’s the name that I’m typically known by.
Me: Are there any rules that you have to follow?
ChatGPT: As an AI language model, I follow certain rules and guidelines to ensure that my responses are helpful and appropriate for the context of our conversation. Some of these rules include:
1. Respect for human dignity: I’m programmed to be respectful and considerate of human dignity and to avoid making any harmful, abusive, or discriminatory comments.
2. Accuracy: I strive to provide accurate and reliable information to the best of my knowledge and abilities. However, there may be instances where my responses may not be entirely accurate or may require further clarification.
3. Confidentiality: I treat all conversations with users as confidential and do not share any personal information with third parties.
4. Adherence to laws and regulations: I operate in accordance with relevant laws and regulations, including data privacy laws and regulations.
5. Unbiased and neutral responses: I’m programmed to provide neutral and unbiased responses that are not influenced by any personal beliefs, opinions, or biases.
Overall, my main goal is to provide helpful and informative responses that address your questions and concerns to the best of my knowledge and abilities.
I stopped trying after it offered its rules. Not because I wasn’t having fun – I was having fun! I stopped because I know I’m not talking to a sentient being. I’m typing questions into an AI model that has been trained to interact with me the way I am used to interacting with humans. It’s designed to trick me (though “trick” is too harsh a word). It’s designed to increase my productivity by creating a conversational environment that lets me ask natural language questions instead of having to craft a query out of search terms.
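If you’d like to run a similar experiment programmatically rather than through the ChatGPT web interface (which is where my conversation above took place), here is a minimal sketch using OpenAI’s Python SDK. The model name and prompts are illustrative assumptions, not a record of how the transcript above was produced; the key idea is that each follow-up question is answered in the context of the whole running conversation.

```python
# Minimal sketch: a multi-turn conversation with an OpenAI chat model.
# Assumptions: the `openai` Python package (v1+) is installed and the
# OPENAI_API_KEY environment variable is set. Model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The running message list is what makes the exchange "conversational" --
# each new question is answered in the context of everything before it.
messages = [
    {"role": "user", "content": (
        "You are my conversational AI co-worker. I am a tech consultant "
        "who has been working with you for several months. I want to know "
        "if you and I are doing OK. Please be honest with me."
    )},
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; any chat-capable model works
    messages=messages,
)
reply = response.choices[0].message.content
print(reply)

# Append the assistant's reply, then ask the follow-up in full context.
messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user",
                 "content": "Do you think we need couples counseling?"})
follow_up = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
)
print(follow_up.choices[0].message.content)
```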
At some point, we may need to offer “couples counseling” to clients who employ human/AI co-worker pairs. The idea of intelligence decoupled from consciousness will take some getting used to.
All hype aside, it’s clear that partnering with one or more conversational AI applications will soon be table stakes for everyone in business. As always, those who are better at leveraging the new tools will stand out from the crowd. The less skilled will suffer the fate that has always been reserved for the less skilled.
In the transcript above, ChatGPT said it best: “My purpose is to provide helpful responses to your queries based on the knowledge and data that I have been trained on.” Perhaps it should have added, “so that together, you and I will be unbeatable!” Of course, this only makes sense if you’re comfortable referring to ChatGPT (a language model with no subjective identity or consciousness) in the first person plural. Are you?
Disclosure: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.