Artificial intelligence has been a major topic of discussion both on- and off-line in recent months. Internet users have shared their amazement about the texts and images it can produce. Others have expressed worries about how the technology is being used.
Specifically, some have privacy concerns about the data that’s being collected, both from the texts used to train large language models and the information users input into them. For example, one man filed a complaint against ChatGPT after the LLM falsely claimed that he killed two of his children, per the BBC.
Now, a user on TikTok has gone viral after sharing her own spooky experience with the service. What’s really going on here?
Did this woman gain access to another person’s conversation?
TikTok user Liz (@wishmeluckliz) says that something strange happened as she was creating a grocery list using the speech function of ChatGPT. Her video on the matter has over 354,000 views.
“The short version is that somebody else’s conversation made its way into my conversation. And ChatGPT tells on itself and tells me that this happened,” Liz explains.
As the video progresses, the TikToker details how it all allegedly went down. According to her, she was using voice mode to create a grocery list. Voice mode allows users to speak with the chatbot instead of typing.
However, after she completed her grocery list, she remained silent, not realizing that she had left the microphone on.
When she went to check the transcription of her conversation, she noticed something bizarre.
An unusual conversation
“It says, ‘Hello, Lindsey and Robert, it seems like you’re introducing a presentation or a symposium. Is there something specific you’d like assistance with regarding the content or perhaps help with structuring your talk or slides? Let me know how I can assist,’” the TikToker says, showing the screen. “I never said that, and I never said anything leading up to this.”
As she scrolls up, she reveals that during her silence, the service somehow transcribed her as introducing herself as a woman named Lindsey May, a VP at Google, alongside a man named Robert, with the two supposedly giving a symposium.
Confused, Liz gave ChatGPT the following prompt, “I was just randomly sitting here planning groceries, and you asked if Lindsey and Robert needed help with their symposium. I’m not Lindsey and Robert. Am I getting my wires crossed with another account right now?”
After this, ChatGPT appeared to admit that Liz was, in fact, “crossing her wires” with another account.
“This was really scary,” Liz states. “I hope I’m overreacting and that there’s a simple explanation for this.”
Why did this happen?
So, did Liz’s ChatGPT really give her someone else’s prompt? The answer is that it’s very unlikely.
On Reddit, numerous users have noted that silence, whispers, or unintelligible speech can produce bizarre prompts from ChatGPT.
For example, one user on Reddit claimed that their transcriptions kept producing the phrase “Thank you for watching,” even when no such phrase was said. Another said that they coughed, and the transcription said, “This transcript was provided by Transcription Outsourcing, LLC.”
While ChatGPT does appear to learn from user input to a limited degree, it is more likely that the information seen by Liz was not someone else’s prompt but a hallucination generated from ChatGPT’s training data.
Proving this is difficult, as OpenAI is notoriously reticent regarding the specifics of its training data. However, given that there is no record of a symposium featuring the two names mentioned in Liz’s transcript, it’s likely that the service simply latched onto patterns in its data and reproduced them in a hallucinatory manner, rather than transcribing a real conversation that was taking place.
As for why ChatGPT appeared to admit that it gave Liz someone else’s prompt, this too is likely a hallucination prompted by Liz directly asking it whether it had done so.
ChatGPT is known to occasionally be overly agreeable. A recent version of the program was pulled after users complained that the software was behaving in a sycophantic manner.
Because Liz speculated that she was having her “wires crossed” with another account, ChatGPT may have simply hallucinated an agreeable response, even if that is not what was actually occurring.
If Liz is still concerned that this really occurred, however, she can reach out to OpenAI’s privacy contacts to discuss the incident.
Commenters are unsettled
In the comments section, some users took the video at face value, while others speculated that the response was simply a hallucination.
“This is spooky but not unheard of – the model is hallucinating,” wrote a user. “When you leave voice mode on but don’t speak, the model will attempt to extract language from the audio – in the absence of spoken word it will hallucinate. it also isn’t crossing wires, but is oriented towards hallucinating in agreement, so you suggested that wires got crossed and it agreed with you in an attempt to successfully ‘answer your query.’”
“A good lesson here is that we all need to assume that any conversation with ChatGPT will never be private. Anything you do online isn’t private,” offered another.
“LLMs will make up stuff all the time. They know literally nothing,” declared a third.
That user continued, “They literally don’t think. They just regurgitate data based on the probability of using one word after the previous one. That’s all they are. And when they don’t know what should go after a word, they just make up something and say it like it’s truth. People really have to stop believing anything that they ‘say.’”
The Daily Dot reached out to OpenAI via email and Liz via Instagram direct message and TikTok direct message and comment.