UPDATE: Jan. 30, 2024, 4:01 p.m. EST This story has been updated to include a statement from OpenAI about its investigation as well as a confirmation regarding the source of the issue.
What was initially believed to be a ChatGPT data leak is the work of a hacker, according to OpenAI.
According to Ars Technica, a user named Chase Whiteside unwittingly received login credentials and personal information from what appeared to be a pharmacy customer on a prescription drug portal. Since this was in response to an unrelated query, Whiteside shared it with the tech news site.
SEE ALSO: OpenAI releases ChatGPT data leak patch, but the issue isn't completely fixed
"I went to make a query (in this case, help coming up with clever names for colors in a palette)," wrote Whiteside in an email. "When I returned to access moments later, I noticed the additional conversations." In a statement to Mashable, OpenAI said the "misuse" was due to the account being hacked.
"Based on our findings, the users’ account login credentials were compromised and a bad actor then used the account. The chat history and files being displayed are conversations from misuse of this account, and was not a case of ChatGPT showing another users’ history."
The responses that appeared to leak information were the result of conversations created in Sri Lanka, not Brooklyn, where Whiteside is located, and they fit within the time frame of a login from that same place.
Per Ars Technica, Whiteside is skeptical that his account was compromised. He claims that he uses a nine-character password with special symbols, and a mix of lowercase and uppercase letters. Plus, he only uses it for his Microsoft account — nowhere else.
OpenAI said it hasn't seen this issue anywhere else.
The conversations appear to be from a frustrated employee troubleshooting issues with an app (name redacted by Ars Technica) used by the pharmacy. In addition to the entire text disparaging the app, the leak included a customer's username and password, as well as the employee's store number. It's unclear how it happened, but it appears the entire feedback ticket was included in ChatGPT's response.
ChatGPT has raised concerns over privacy and data security. Hackers and researchers have discovered vulnerabilities that enable them to extract sensitive information, either through prompt injection or jailbreaking.
Last March, a bug was discovered that revealed ChatGPT Plus users' payment information. Although OpenAI has addressed certain vulnerabilities affecting ChatGPT users, nothing protects personal or confidential information that users themselves share with ChatGPT. This was the case when Samsung employees using ChatGPT to help with code accidentally leaked company secrets, and it's why many companies have banned ChatGPT usage.
According to OpenAI's privacy policy, input data is supposed to be anonymized and stripped of any personally identifiable information. But the makers themselves can't always pinpoint what leads to certain outputs, which underscores the inherent risks of LLMs.
This instance may have been a hacker's handiwork, but it always bears mentioning: don't share any sensitive or personal information with ChatGPT.
Topics: Artificial Intelligence, ChatGPT