A hacker claimed to have stolen personal details from millions of OpenAI accounts, but researchers are skeptical and the company is investigating.
OpenAI says it is investigating after a hacker claimed to have swiped login credentials for 20 million of the AI company's user accounts and put them up for sale on a dark web forum.
The pseudonymous hacker posted a cryptic message in Russian advertising "more than 20 million access codes to OpenAI accounts," calling it "a goldmine" and offering prospective buyers what they claimed was sample data containing email addresses and passwords. As reported by GBHackers, the full dataset was being offered for sale "for just a few dollars."
"I have over 20 million access codes for OpenAI accounts," emirking wrote Thursday, according to a translated screenshot. "If you're interested, reach out; this is a goldmine, and Jesus agrees."
If legitimate, this would be the third major security incident for the AI company since the release of ChatGPT to the public. Last year, a hacker gained access to the company's internal Slack messaging system. According to The New York Times, the hacker "stole details about the design of the company's A.I. technologies."
Before that, in 2023, an even simpler bug involving jailbreaking prompts allowed hackers to obtain the personal data of OpenAI's paying customers.
This time, however, security researchers aren't even sure a hack occurred. Daily Dot reporter Mikael Thalen wrote on X that he found invalid email addresses in the supposed sample data: "No evidence (suggests) this alleged OpenAI breach is legitimate. At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well."
No evidence this alleged OpenAI breach is legitimate.
Contacted every email address from the alleged sample of login credentials.
At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well. https://t.co/yKpmxKQhsP
- Mikael Thalen (@MikaelThalen) February 6, 2025
OpenAI takes it 'seriously'
In a statement shared with Decrypt, an OpenAI spokesperson acknowledged the situation while maintaining that the company's systems appeared secure.
"We take these claims seriously," the spokesperson said, adding: "We have not seen any evidence that this is connected to a compromise of OpenAI systems to date."
The scope of the alleged breach raised concerns because of OpenAI's enormous user base. Millions of users worldwide rely on the company's tools like ChatGPT for business operations, educational purposes, and content generation. A genuine breach could expose private conversations, commercial projects, and other sensitive information.
Until there's a final report, some preventive steps are always recommended:
- Go to the "Configurations" tab, log out from all linked devices, and make it possible for two-factor authentication or visualchemy.gallery 2FA. This makes it practically impossible for a hacker to gain access to the account, even if the login and passwords are jeopardized.
- If your bank supports it, then develop a virtual card number to handle OpenAI memberships. This method, it is simpler to find and prevent scams.
- Always watch on the discussions kept in the chatbot's memory, and higgledy-piggledy.xyz know any phishing efforts. OpenAI does not request any personal details, and any payment update is always handled through the main OpenAI.com link.
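The last point boils down to checking that a payment or login link really resolves to the official OpenAI.com domain. As a rough illustration only, here is a minimal Python sketch of such a check; the function name and the sample URLs are hypothetical and not part of any OpenAI tooling:

```python
# Minimal sketch: flag links whose hostname is not openai.com or a subdomain of it,
# a simple heuristic for spotting lookalike phishing links.
from urllib.parse import urlparse

OFFICIAL_DOMAIN = "openai.com"


def looks_like_official_link(url: str) -> bool:
    """Return True only if the URL's hostname is openai.com or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return host == OFFICIAL_DOMAIN or host.endswith("." + OFFICIAL_DOMAIN)


if __name__ == "__main__":
    # Hypothetical examples: a legitimate subdomain and a lookalike phishing pattern.
    for link in (
        "https://chat.openai.com/auth/login",
        "https://openai.com.billing-update.example/pay",
    ):
        verdict = "likely official" if looks_like_official_link(link) else "treat as phishing"
        print(f"{link} -> {verdict}")
```

Note that the check inspects the full hostname rather than just searching for "openai.com" in the string, which is exactly the trick lookalike domains rely on.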