Key facts at a glance:
- OpenAI launches Advanced Account Security for ChatGPT
- Requires physical security key (like YubiKey) or passkey
- Password-based login is replaced entirely
- Session lengths are shortened to limit stolen access
- Alerts sent on new account sign-ins
- Conversations from enrolled accounts are automatically excluded from AI training
- Email and SMS recovery is disabled; losing the key means permanent lockout
- OpenAI partners with Yubico for discounted two-key bundles
If you’ve ever worried about someone getting into your ChatGPT account, OpenAI has finally introduced something worth paying attention to. The company has rolled out a new opt-in feature called Advanced Account Security, and it is exactly what it sounds like: you can now lock down your account with a physical security key, and the option is open to regular ChatGPT users.
What happens when you turn it on
The feature bundles several protections together rather than making you hunt through settings menus. Password-based login is disabled entirely and replaced by passkeys or physical security keys. Session lengths get shorter, so a stolen login can’t be used indefinitely. You get alerts when someone signs into your account. And conversations from enrolled accounts are automatically excluded from model training — no need to dig around for that toggle separately.
The account recovery side is where things get serious. Email and SMS recovery are disabled, so if you lose your keys, OpenAI Support cannot help you regain access. The most common way accounts get hijacked is through compromised email or phone numbers, so cutting that off is a meaningful step up.
Hardware security keys: why they matter
Physical security keys, such as those made by Yubico, represent one of the strongest forms of two-factor authentication (2FA) available today. Unlike SMS codes or authenticator apps, hardware keys use public-key cryptography to verify your identity. They are effectively immune to phishing because the key only responds to the exact domain it was registered with. Even if a fake website looks identical to ChatGPT’s login page, the key will refuse to authenticate. This makes them far stronger than other 2FA methods, whose codes can be intercepted by sophisticated phishing kits or man-in-the-middle proxies.
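The domain binding described above can be sketched with a toy model. To be clear, this is illustrative Python only, not real FIDO2: actual keys use asymmetric signatures rather than the HMAC stand-in below, and in a real flow the browser, not the website, supplies the origin, which is exactly why a phishing page cannot lie about it.

```python
import hashlib
import hmac
import os

class SimulatedSecurityKey:
    """Toy model of a security key's origin binding.

    NOT real WebAuthn/FIDO2 crypto: HMAC stands in for
    'a per-site secret that never leaves the device'.
    """

    def __init__(self):
        self._credentials = {}  # relying-party domain -> per-site secret

    def register(self, rp_id: str) -> None:
        # A fresh secret is minted for each relying party at enrollment.
        self._credentials[rp_id] = os.urandom(32)

    def sign_challenge(self, rp_id: str, challenge: bytes) -> bytes:
        # The key only answers for a domain it was registered with,
        # so a look-alike phishing domain gets no usable response.
        if rp_id not in self._credentials:
            raise PermissionError(f"no credential for {rp_id}")
        return hmac.new(self._credentials[rp_id], challenge, hashlib.sha256).digest()

key = SimulatedSecurityKey()
key.register("chatgpt.com")

challenge = os.urandom(16)
key.sign_challenge("chatgpt.com", challenge)  # legitimate domain: succeeds

try:
    key.sign_challenge("chatgpt-login.example", challenge)  # phishing domain
except PermissionError as e:
    print(e)  # prints "no credential for chatgpt-login.example"
```

The point of the sketch is the lookup by domain: an SMS code or TOTP digit can be typed into the wrong site, but the key’s response is bound to the domain it was enrolled against.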
Major technology companies have been pushing hardware keys for years. Google famously reported zero successful phishing-based account takeovers among its employees after requiring physical keys. Apple, Microsoft, and Meta have all built support for FIDO2 security keys across their platforms. OpenAI’s move brings ChatGPT into this elite security tier, which is especially important given the sensitive nature of conversations that many users now store in their accounts.
OpenAI partners with Yubico to lower the barrier
Rather than just pointing users to Google search results, OpenAI has partnered with Yubico — one of the most trusted names in hardware authentication — to offer discounted bundles of YubiKeys. The bundle includes two keys: one small enough to live permanently in your laptop port, and one with NFC for mobile use. It’s a smart move. The biggest barrier to hardware-based security has always been the friction of getting started, and removing the pricing hurdle helps.
The discounted bundle is a significant incentive. YubiKeys typically cost between $25 and $55 each, depending on the model. By offering a bundle, OpenAI makes it easier for users to protect both their primary device and their phone. The NFC-enabled key is particularly useful for mobile users who log into ChatGPT on the go: tap the key against your phone and you’re in, no typing required.
This partnership also signals that OpenAI is serious about security at scale. Yubico’s keys undergo rigorous certification, and its FIPS series is validated to FIPS 140-2 for government use. For enterprise ChatGPT users who handle proprietary data or legal documents, this level of assurance is critical.
Who should enable this now?
While this is a good initiative, most casual ChatGPT users probably don’t need this yet. But the landscape is shifting. People are using ChatGPT for sensitive work conversations, legal research, medical questions, and business strategy. An account that holds months of that context is a valuable target. OpenAI offering this now, before a major account-breach headline forces their hand, is the right call — and it’s a sign that AI companies are starting to take security as seriously as the data they’re actually holding.
For journalists, lawyers, doctors, executives, or anyone with confidential data in their chat history, this feature is nearly essential. Even if you think your account is low-risk, consider the long tail of information that accumulates over time: meeting summaries, personal reflections, draft documents, even code snippets. A single compromised account could leak intellectual property or private communications.
Additionally, users who travel frequently or log in from different devices will benefit from the shorter session lengths. If a laptop is stolen while you’re at a coffee shop, the thief will have only a limited window to access your account — and they won’t be able to log in at all without your physical key.
The evolution of AI account security
OpenAI’s move is part of a broader trend across the artificial intelligence industry. As large language models become integrated into daily workflows, the accounts that store prompts and outputs become prime targets. Rivals like Google have long enforced hardware key usage for high-risk accounts, but many AI startups still rely on traditional password-plus-SMS 2FA. By offering Advanced Account Security from the outset, OpenAI sets a standard that others will likely follow.
It’s worth noting that the feature is opt-in, not mandatory. That’s appropriate for now, because hardware keys still carry a learning curve and a cost. But as phishing attacks grow more sophisticated and AI-powered scams become harder to detect, the day when physical security keys become a default requirement may not be far off. OpenAI’s partnership with Yubico also hints at future integration: perhaps an OpenAI-branded YubiKey, or baked-in key support for the mobile app.
Behind the scenes, the technical implementation relies on the FIDO2 standards: WebAuthn for the browser-to-server handshake and CTAP2 for communicating with the key itself. Both are supported by all major browsers and operating systems, which means you can use any FIDO2-compliant key, not only Yubico products. But the partnership ensures that users get a seamless experience when buying keys through the ChatGPT interface.
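On the WebAuthn side, a site asks the browser for an assertion by handing it a set of request options. As a rough illustration (the field names come from the WebAuthn specification, but this helper and the domain value are hypothetical, and a real server would use a maintained WebAuthn library rather than building the dict by hand):

```python
import base64
import os

def make_assertion_options(rp_id: str, credential_ids: list) -> dict:
    """Build a dict mirroring WebAuthn's PublicKeyCredentialRequestOptions.

    Illustrative only: shows the shape of the sign-in request a site
    sends to the browser's navigator.credentials.get() call.
    """
    return {
        # A fresh random challenge defeats replay of old signatures.
        "challenge": base64.urlsafe_b64encode(os.urandom(32)).decode(),
        # rpId binds the request to one domain; keys refuse mismatches.
        "rpId": rp_id,
        # Only credentials previously registered for this account.
        "allowCredentials": [
            {"type": "public-key",
             "id": base64.urlsafe_b64encode(cid).decode()}
            for cid in credential_ids
        ],
        "userVerification": "preferred",
        "timeout": 60000,  # milliseconds
    }

options = make_assertion_options("chatgpt.com", [os.urandom(16)])
```

The `rpId` field is what makes the flow phishing-resistant: the browser checks it against the page’s actual origin before the key is ever asked to sign.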
What remains the same
It’s important to understand that Advanced Account Security does not change how ChatGPT works once you are logged in. The interface, model behavior, and conversation history remain identical. The only difference is how you prove your identity when signing in, and how your data is treated for training purposes. The automatic exclusion from model training is a welcome privacy bonus, especially for users in regulated industries who cannot consent to their conversations being used as training data.
OpenAI has not announced any plans to make this feature mandatory, nor has it disclosed how many users have already enrolled. But early adopters will likely find peace of mind knowing that their ChatGPT account is as secure as their banking or email account.
For anyone who relies on ChatGPT for professional work, the time to enable Advanced Account Security is now. Grab a YubiKey bundle, follow the setup instructions, and lock down your account. The few minutes of configuration are a small price for protection against the growing wave of AI-targeted account theft.
Source: Digital Trends News