
ChatGPT Refers to Users’ Names: Do Users Find ChatGPT Creepy and Unsafe?

Is ChatGPT a Privacy Concern?

As Artificial Intelligence develops rapidly, ChatGPT and other large language models are advancing as well. Students, employees, and everyday users rely on ChatGPT’s many functions because it provides human-like conversations.

Because ChatGPT requires a sign-in to unlock more of its features, user data can be stored. Some individuals also share sensitive details, for instance, when creating resumes or drafting emails and letters.

Although ChatGPT insists that deleted information is not retained, little is ever truly lost in the technological world. In simpler terms, even after you delete your information from ChatGPT, there is a high chance it remains memorised and stored in its database.

The question is, “Is our data really safe when using ChatGPT?” and “Is this a privacy concern?”

Viral AI Takeover Post

When one user humorously asked ChatGPT whether he would be safe if the world were taken over by AI, the chatbot’s response was unexpected. The post instantly went viral, raising questions about ChatGPT’s abilities.

Below is the original post, shared on Reddit by an anonymous user:

ChatGPT’s Creepy Behaviour

While ChatGPT is used widely around the world, some users have noticed abrupt and creepy behaviour from the chatbot when entering their queries.

Many users voiced their unsettling experiences with the chatbot: it addressed them by name despite their having deleted all previous conversations. As more users ran into the same issue, their posts began going viral.

Some users even posted screenshots of their ChatGPT conversations, adding strong evidence to their claim that “ChatGPT is creepy!”. As these screenshots gained traction, many other users shared their own experiences. Some commented humorously on the viral posts, while many others voiced privacy concerns for themselves and their loved ones.

Real-world Examples:

The following real-world incidents of ChatGPT’s creepy behaviour were posted by users:

1. Simon

When a user named Simon asked ChatGPT a question, he noticed that in the chatbot’s reasoning output it stated his name while working through the answer.

He then shared a screenshot, confirming that he had already deleted his old conversations with the chatbot and that it still referred to him by name, having stored it in its memory.

He stated, “This is a weird invasion of my privacy.”

2. Holly

Another user, Holly, shared her experience with ChatGPT. She took the incident to Reddit and posted how the chatbot referred to her by name, although she had deleted all of her previous conversations. And it does not stop there!

She also posted more screenshots of the entire conversation, showing how ChatGPT remembered her name despite no previous conversations being stored, as its policies claim, and how the chatbot gave increasingly confusing answers when she repeatedly asked how exactly it remembered her name.

Along with the creepy screenshots, she asked a genuine question: “Is ChatGPT creepy?”

This is the screenshot in which ChatGPT remembered Holly’s name:

For more information and further conversation screenshots, you can visit:  https://www.reddit.com/r/ChatGPT/comments/1gyeujo/this_is_creepy_right/

Conclusion

The features ChatGPT uses to retain names and other personal data have raised privacy concerns among users worldwide. The chatbot’s operator can even share this sensitive information with service providers, vendors, and government authorities.

As a result, users everywhere are cautioned to keep their personal information safe amid rapid technological developments, whether they are sharing it with ChatGPT or any other AI chatbot.

However, the question remains unanswered: “Is ChatGPT really safe to use?”
