
Token Mechanics & JWT Security

JWTs in practice: signing, encryption, standard claims, validation pitfalls, key rotation, and sender-constrained tokens.


Why Token-Based Authentication Exists and What It Solves

Imagine you've just built a web application with a login screen. A user enters their credentials, you verify them against your database, and now you face an immediate problem: HTTP is stateless. The server that processed that login request has no memory of it the moment the response is sent. The next request from that same user arrives as if they were a complete stranger. Something has to bridge that gap — something has to carry proof of who the user is from one request to the next. How that proof is carried, where it lives, and who is responsible for verifying it are not incidental implementation details. They are architectural decisions with deep consequences for security, scalability, and system design. This section frames exactly that problem, so that the mechanics of JSON Web Tokens and their signing algorithms in the sections that follow have genuine weight rather than feeling like arbitrary spec memorization.

The Classic Approach: Server-Side Sessions

For much of the web's history, the standard solution to HTTP's statelessness was the session. Here is how it works in its most common form:

  1. The user submits credentials.
  2. The server validates them and creates a session record — a small blob of data (user ID, roles, expiry time) stored somewhere the server can retrieve it.
  3. The server generates a random, opaque session ID and sends it to the client, typically as a cookie.
  4. On every subsequent request, the client sends that session ID back.
  5. The server looks up the session ID in its store, retrieves the associated data, and proceeds.
  CLIENT                          SERVER                      SESSION STORE
    |                               |                               |
    |  POST /login {user, pass}     |                               |
    |-----------------------------> |                               |
    |                               |  Create session record        |
    |                               |-----------------------------> |
    |                               |  session_id = "abc123"        |
    |  Set-Cookie: sid=abc123       |                               |
    |<----------------------------- |                               |
    |                               |                               |
    |  GET /dashboard               |                               |
    |  Cookie: sid=abc123           |                               |
    |-----------------------------> |                               |
    |                               |  Lookup "abc123"              |
    |                               |-----------------------------> |
    |                               |  {userId:42, role:"admin"}    |
    |                               |<----------------------------- |
    |  200 OK                       |                               |
    |<----------------------------- |                               |

This model works elegantly at small scale, on a single server. The session store might be in-process memory, a database table, or a dedicated cache like Redis. The session ID itself is meaningless — it is just a random lookup key. All the real data lives on the server.
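The store-and-lookup cycle is small enough to sketch directly. The following is a minimal in-memory illustration (function names are invented for this sketch; a real deployment would back the dictionary with Redis or a database):

```python
import secrets

# Minimal in-memory session store (illustrative; production would use
# Redis or a database table instead of a plain dict).
sessions = {}

def create_session(user_id, role):
    """Credentials were already validated; keep state server-side,
    hand the client only an opaque lookup key."""
    sid = secrets.token_urlsafe(32)   # random, unguessable session ID
    sessions[sid] = {"userId": user_id, "role": role}
    return sid

def lookup(sid):
    """Every request: resolve the opaque ID back to the real data."""
    return sessions.get(sid)          # None -> treat as 401

def revoke_all(user_id):
    """Forced logout: delete every record for this user. Instant."""
    for sid in [s for s, rec in sessions.items() if rec["userId"] == user_id]:
        del sessions[sid]

sid = create_session(42, "admin")
print(lookup(sid))    # {'userId': 42, 'role': 'admin'}
revoke_all(42)
print(lookup(sid))    # None -- the next request gets a 401
```

Note that `revoke_all` is what makes the forced-logout scenario below trivial: the authority is entirely server-side.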

The trust problem in session-based auth is centralized: the server is the authority. If the session record says the user is an admin, they are an admin. To revoke access, you delete the session record. Done.

💡 Real-World Example: A user's account is compromised. The security team forces a logout by deleting all session records associated with that user ID. Every active session — from every device, every browser tab — is immediately invalidated. The next request from any of those sessions gets a 401, and the user is redirected to login. This is clean, immediate, and entirely server-controlled.

The Scaling Problem That Sessions Create

Session-based auth carries a hidden cost that only becomes visible as systems grow. If your application runs on a single server, the session store is just a fast in-memory map or a local database. Simple. But modern deployments rarely run on a single server. They run on fleets of servers behind load balancers, in containers that spin up and down, across multiple geographic regions.

Now the session model strains:

🔧 Sticky sessions are one workaround — the load balancer always routes a given user's requests to the same server. But this defeats the purpose of horizontal scaling and creates single points of failure. If that server goes down, the session is gone.

🔧 Shared session stores are the more principled solution — every server reads from and writes to a centralized Redis cluster or similar. This works, but it introduces a new piece of infrastructure that every server depends on. That store becomes a bottleneck, a potential single point of failure, and an operational burden to maintain, replicate, and secure.

🔧 Cross-domain authentication adds another wrinkle. Cookies are scoped to a domain. If your authentication server lives at auth.example.com and your application lives at app.example.com, sharing session state requires careful CORS configuration, subdomain cookie scoping, or explicit token passing. Expand to a true microservices architecture where Service A needs to make authenticated calls to Service B on behalf of a user, and session-based auth becomes genuinely awkward.

🎯 Key Principle: Session-based authentication requires shared state between the authenticating server and every server that needs to verify identity. Anything that requires shared state is inherently harder to scale horizontally than something that carries its own verification material.

Tokens: Moving State to the Client

Token-based authentication inverts the session model. Instead of storing state on the server and giving the client a lookup key, the server encodes the relevant identity information directly into a token and gives that token to the client. On every subsequent request, the client presents the token, and the server verifies it cryptographically — without consulting any external store.

  CLIENT                          SERVER A                      SERVER B
    |                               |                               |
    |  POST /login {user, pass}     |                               |
    |-----------------------------> |                               |
    |                               |  Validate credentials         |
    |                               |  Build token payload          |
    |                               |  Sign with private key        |
    |  token = "eyJ..."             |                               |
    |<----------------------------- |                               |
    |                               |                               |
    |  GET /api/resource            |                               |
    |  Authorization: Bearer eyJ... |                               |
    |-------------------------------------------------------------> |
    |                               |                               |  Verify sig
    |                               |                               |  with pub key
    |                               |                               |  (no lookup!)
    |  200 OK                       |                               |
    |<------------------------------------------------------------- |

Notice what Server B is not doing: it is not calling back to Server A. It is not querying a shared database. It is applying a cryptographic verification algorithm to the token itself. If the signature checks out and the token's claims are valid, the request proceeds. This is what stateless authentication means — the verification logic is self-contained.

This architecture has concrete implications:

  • Horizontal scaling becomes straightforward. Any server with the right verification key can process any request. There is no session store to synchronize.
  • Cross-domain and cross-service authentication is natural. A token issued by your auth server can be verified by your API, your partner's API, or a third-party service — as long as they share the verification key or can retrieve it from a known location.
  • Microservices benefit directly. Service-to-service calls can carry the same token the user originally received, or a derivative token with reduced scope, without any centralized session state.
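Server B's side of the diagram can be sketched as a pure function of the token and the verification key. This sketch uses the HS256 shared-secret variant so it fits in the standard library (the diagram's RS256 flow would hold a public key instead); the key value is illustrative:

```python
import base64, hashlib, hmac

# Server B's entire auth dependency: the verification key. No session
# store, no callback to Server A. (HS256 shared-secret variant shown;
# with RS256 this would be a public key and an RSA verify call.)
VERIFY_KEY = b"key-distributed-to-every-server"   # illustrative value

def is_authentic(token):
    """Purely local check: recompute the MAC over header.payload and compare."""
    try:
        header, payload, sig = token.split(".")
    except ValueError:
        return False                               # wrong shape: reject
    digest = hmac.new(VERIFY_KEY, (header + "." + payload).encode(),
                      hashlib.sha256).digest()
    expected = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return hmac.compare_digest(expected, sig)

# Simulate what the auth server issued with the same key:
h, p = "eyJhbGciOiJIUzI1NiJ9", "eyJzdWIiOiJ1c2VyXzEyMyJ9"
s = base64.urlsafe_b64encode(
        hmac.new(VERIFY_KEY, (h + "." + p).encode(), hashlib.sha256).digest()
    ).rstrip(b"=").decode()
print(is_authentic(h + "." + p + "." + s))    # True: no lookup anywhere
print(is_authentic(h + "." + p + "X." + s))   # False: payload tampered
```

Any number of servers holding `VERIFY_KEY` can run `is_authentic` independently, which is the whole scaling argument in miniature.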

💡 Mental Model: Think of session-based auth like a coat check. You hand over your coat (your identity), get a ticket (session ID), and the coat remains at the venue. The token model is more like a passport. The document itself carries your identity information, and any border agent (server) with the right verification tools can authenticate it on the spot, without calling back to the issuing country's central database.

This is a useful starting model, but like all analogies it has limits — a passport can be visually inspected and physically revoked; a token has its own distinct trust mechanics that the passport metaphor eventually obscures. Those mechanics are precisely what the rest of this lesson covers.

The Trust Problem Shifts, Not Disappears

Here is the key insight that learners often gloss over: token-based authentication does not eliminate the trust problem. It relocates it.

In session-based auth, trust is grounded in server-side state. The session record is authoritative. In token-based auth, trust is grounded in cryptographic verification. The token is trusted because it was signed by a key that only the legitimate issuer holds. If you can verify the signature, you trust the claims in the token.

This shift has profound implications:

❌ Wrong thinking: "Tokens are more secure than sessions because the server doesn't have to store anything."

✅ Correct thinking: "Tokens transfer the security burden from state management to key management and token validation. Neither is inherently more or less secure — they have different threat models."

The session model's security depends on:

  • The session store being protected
  • Session IDs being unguessable
  • The server correctly associating session records with requests

The token model's security depends on:

  • The signing key being protected and never exposed
  • The token validation logic being implemented correctly
  • The token's claims being checked thoroughly on receipt

⚠️ Common Mistake: Treating the token as inherently trustworthy because it was present in the request. A token being present is not the same as a token being valid. A token is only trustworthy after it has been cryptographically verified and its claims have been checked. We will examine exactly what those checks entail in Section 4.

The Revocation Problem: The Real Tradeoff

The architectural advantages of stateless tokens come with a genuine cost that is worth understanding clearly before building on top of them.

In session-based auth, revoking access is trivial: delete the session record. On the next request, the lookup fails, and access is denied. The revocation is instantaneous and complete.

With stateless tokens, there is no record to delete. The token carries its own validity. Once issued, a token remains cryptographically valid until its expiry claim (exp) is reached — regardless of what has happened on the server side in the meantime. If an account is compromised, a user is terminated, or a permission is revoked, the token does not know.

  Timeline:

  T=0:00   Token issued, valid for 1 hour
  T=0:15   User's account is suspended
  T=0:20   Attacker uses the token   --> TOKEN IS STILL VALID
  T=0:30   Admin revokes all access  --> TOKEN IS STILL VALID
  T=1:00   Token expires             --> Access finally denied

This is not a flaw in JWT specifically — it is a fundamental property of any stateless token. The statelessness that makes tokens scale well is the same property that makes immediate revocation impossible without reintroducing some server-side infrastructure.

Common mitigations include:

  • Short token lifetimes paired with a refresh token mechanism — if the access token expires in 15 minutes, the damage window is bounded.
  • Token revocation lists (sometimes called blocklists) — a fast, distributed lookup (typically in Redis or similar) that the server checks against. This reintroduces state, but a much smaller and more targeted form of it: instead of storing full session data, you store only revoked token IDs.
  • Short-lived tokens with frequent rotation — combined with server-side validation of the refresh token, this approximates session-like revocation control while retaining much of the scaling benefit.
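The blocklist mitigation is compact enough to sketch. The names and the in-memory set are illustrative; production would typically use Redis entries whose TTLs match the token lifetime, so the blocklist never grows past one expiry window:

```python
import time

# Targeted revocation state: only revoked token IDs (jti), not full
# session records. (In-memory for the sketch; production would use a
# shared store such as Redis with TTLs.)
revoked_jtis = set()

def revoke(jti):
    revoked_jtis.add(jti)

def is_active(claims, now=None):
    """Assumes the signature was ALREADY verified; adds the stateful check."""
    now = time.time() if now is None else now
    if claims["exp"] <= now:
        return False                      # expired regardless of blocklist
    return claims["jti"] not in revoked_jtis

claims = {"sub": "user_123", "jti": "abc123", "exp": time.time() + 900}
print(is_active(claims))    # True
revoke("abc123")
print(is_active(claims))    # False: blocked well before natural expiry
```

The key point: the reintroduced state is a set of short strings, not a copy of every user's session data.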

🤔 Did you know? The OAuth 2.0 specification defines a Token Introspection endpoint (RFC 7662) that allows a resource server to call back to the authorization server to check whether a token is currently active. This is effectively a stateful check bolted onto a stateless system — a practical acknowledgment that pure statelessness and immediate revocation are fundamentally in tension.

🎯 Key Principle: There is no free lunch with stateless tokens. You are trading revocability for scalability. The right tradeoff depends on your threat model: a low-risk API with short-lived tokens may be perfectly safe; a financial system with high-privilege tokens needs a revocation strategy.

JWTs Are a Format, Not a Protocol

Before going further, it is important to be precise about what a JWT actually is — and what it is not — because conflation here leads to real misunderstandings in system design.

JSON Web Token (JWT) is a compact, URL-safe format for representing claims as a JSON object that can be cryptographically signed or encrypted. That is all it is. A format. A data structure with rules about how it is encoded and verified.

JWTs are used inside larger protocols and frameworks:

  • OAuth 2.0 is an authorization framework that describes how to grant third parties limited access to resources. It does not require JWTs — access tokens in OAuth 2.0 can be opaque strings, JWTs, or other formats. JWTs are commonly used as OAuth access tokens because they can carry claims that resource servers can verify without calling back to the authorization server.
  • OpenID Connect (OIDC) is an identity layer built on top of OAuth 2.0. It does specify JWTs for its ID token — the token that carries user identity information to the client application. Here the JWT format is part of the spec.
  • SAML is a different, older federation protocol that uses XML-based assertions rather than JWTs. It solves some of the same problems.

❌ Wrong thinking: "We're using JWTs, so we're using OAuth."

✅ Correct thinking: "We're using JWTs as our token format. Whether we're also using OAuth depends on whether we've implemented the OAuth authorization flows and endpoint contracts."

This distinction matters operationally. A team that issues their own JWTs using a shared secret, without any of the OAuth grant flows, is not getting OAuth's security properties — particularly its separation of concerns between the authorization server, resource server, and client. They are getting a signed data format, and the security of their system depends entirely on how carefully they handle that format.

📋 Quick Reference Card: Session vs. Token Comparison

|  | 🔒 Session-Based | 🎫 Token-Based |
|---|---|---|
| 📦 State location | Server (session store) | Client (the token itself) |
| 🔍 Verification method | Lookup by session ID | Cryptographic signature check |
| 🗑️ Revocation | Delete session record (instant) | Wait for expiry, or maintain blocklist |
| 📡 Cross-domain support | Requires shared store or session sharing | Natural (token travels with the request) |
| 📈 Horizontal scaling | Requires synchronized session store | No shared state required |
| ⚠️ Key attack surface | Session store compromise, ID guessing | Key compromise, validation bugs |
| 🔄 Typical use case | Traditional web apps, monoliths | APIs, microservices, federated identity |

Putting It Together: Why This Foundation Matters

The mechanics of JWTs — the three-part structure, the signing algorithms, the claims, the validation steps — are not arbitrary trivia. They are direct answers to the problems framed in this section.

The header exists because the verifier needs to know which algorithm was used to produce the signature. The payload exists because the token needs to carry identity and authorization claims without a server-side lookup. The signature exists because any party that modifies the payload would invalidate it, giving the verifier confidence that the claims were issued by a trusted source and haven't been tampered with.

Each of the validation pitfalls you will encounter in Sections 4 and 5 maps back to one of the trust problems described here. The notorious alg: none vulnerability is a direct exploitation of what happens when a server fails to enforce which signing algorithm it will accept. The confusion between signed and encrypted tokens maps directly to the difference between integrity (the claims haven't been tampered with) and confidentiality (the claims can't be read by a third party).

💡 Pro Tip: When you encounter a JWT-related security issue in the wild — in a CVE, a pen test report, or a code review — train yourself to ask: "Which part of the verification chain did this break?" Almost every JWT vulnerability can be traced back to either a key management failure, a skipped validation step, or a misunderstanding of what the token format itself guarantees versus what the implementation must enforce.

The rest of this lesson builds precisely that understanding, piece by piece, starting with the structure of the token itself.

Anatomy of a JWT: Header, Payload, and Signature

A JSON Web Token arrives as a compact string — three segments of text, joined by dots. That structure is not incidental; every part of it carries a specific role in the token's security model. Before you can reason about what a JWT protects, what it exposes, and where implementations go wrong, you need to understand exactly what those three segments are and how they relate to each other.

The Three-Part Structure

The dot-separated format of a JWT looks like this:

<header>.<payload>.<signature>

A real token in the wild might look like:

eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJ1c2VyXzEyMyIsImlzcyI6Imh0dHBzOi8vYXV0aC5leGFtcGxlLmNvbSIsImF1ZCI6Imh0dHBzOi8vYXBpLmV4YW1wbGUuY29tIiwiZXhwIjoxNzA5MDAwMDAwLCJpYXQiOjE3MDg5OTY0MDAsImp0aSI6ImFiYzEyMyJ9.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c

Each segment is independently Base64URL-encoded — a variant of Base64 that substitutes + with - and / with _, and omits padding characters (=), making the result safe to embed in URLs and HTTP headers without further escaping.

⚠️ Common Mistake 1: Treating Base64URL as encryption. Base64URL is an encoding, not encryption. It transforms binary data into a printable ASCII string, but it provides zero confidentiality. Anyone who receives a JWT can decode the header and payload by reversing the transformation — no key required. The example token above, decoded, yields plain JSON that is immediately readable. Keep this in mind: the payload is an open book to anyone who handles the token.
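You can confirm this in a few lines of standard-library code. The token below reuses the example's header with a shortened two-claim payload; the signature segment is carried along but plays no part in decoding:

```python
import base64, json

# Same header as the example token above; payload abbreviated to two claims.
token = ("eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9"
         ".eyJzdWIiOiJ1c2VyXzEyMyIsImlzcyI6Imh0dHBzOi8vYXV0aC5leGFtcGxlLmNvbSJ9"
         ".SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c")

def decode_segment(seg):
    # Restore the padding that Base64URL strips, then decode.
    # No cryptography is involved at any point.
    return json.loads(base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4)))

header_seg, payload_seg, _sig = token.split(".")
print(decode_segment(header_seg))    # {'alg': 'RS256', 'typ': 'JWT'}
print(decode_segment(payload_seg))   # claims in the clear -- no secret needed
```

Decoding succeeded without any key, which is exactly why nothing secret belongs in the payload.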

 Base64URL decode              Base64URL decode
       │                             │
       ▼                             ▼
 ┌─────────────┐             ┌───────────────────────┐
 │   Header    │             │       Payload         │
 │  (JSON)     │             │  (JSON claims object) │
 └─────────────┘             └───────────────────────┘

 ┌───────────────────────────────────────────────────┐
 │                   Signature                       │
 │  HMAC or RSA/ECDSA over encoded header + payload  │
 └───────────────────────────────────────────────────┘

The signature segment is not Base64URL-decoded into readable JSON — it is the raw output of a cryptographic function (bytes), which happens to be Base64URL-encoded for transmission. It cannot be "decoded" into plaintext the way the header and payload can.


The Header

The header is a small JSON object that describes the token itself β€” specifically, how to process it. After Base64URL decoding the first segment of the example above, you get:

{
  "alg": "RS256",
  "typ": "JWT"
}

The two fields here are the most common ones:

  • typ (Type): Identifies the media type of the token. The value "JWT" is conventional, but its presence is not required by the specification, and many libraries work correctly without it.
  • alg (Algorithm): Declares which cryptographic algorithm was used to produce the signature. This is the critical field. In the example, RS256 means RSA signature with SHA-256.

Other header parameters exist — kid (Key ID, to hint which key to use for verification), x5c (an X.509 certificate chain), and others defined by the JWA (JSON Web Algorithms) specification — but alg is the one that most directly affects how verification works and where vulnerabilities concentrate.

🎯 Key Principle: The alg header tells a validator which algorithm to use for verification. But here is the trap: a validator that reads alg from the token and then blindly applies whatever algorithm is named there has handed the attacker control over the verification process. The correct behavior is to check the incoming alg claim against a list of algorithms the validator explicitly permits — and reject anything outside that list. This is not a theoretical concern; it is one of the most reliably exploited implementation mistakes in JWT libraries. (Section 5 covers the concrete attack patterns this enables.)
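One way to make "explicitly permits" concrete is a dispatch table: the validator's code fixes which verification functions exist, and the token's alg can only select among them, never add to them. This is a stdlib-sized sketch (HS256 only; the secret is illustrative):

```python
import base64, hashlib, hmac, json

# The validator's policy lives in code, not in the token: a fixed table
# of the only algorithms it will ever run. (HS256-only keeps the sketch
# within the standard library; the secret is illustrative.)
SECRET = b"demo-secret"

def verify_hs256(signing_input, sig):
    expected = hmac.new(SECRET, signing_input, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

VERIFIERS = {"HS256": verify_hs256}        # the allowlist

def verify(token):
    h_seg, p_seg, s_seg = token.split(".")
    header = json.loads(base64.urlsafe_b64decode(h_seg + "=" * (-len(h_seg) % 4)))
    verifier = VERIFIERS.get(header.get("alg"))
    if verifier is None:
        return False                       # alg not permitted: reject outright
    sig = base64.urlsafe_b64decode(s_seg + "=" * (-len(s_seg) % 4))
    return verifier((h_seg + "." + p_seg).encode(), sig)

enc = lambda obj: base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()
h, p = enc({"alg": "HS256"}), enc({"sub": "user_123"})
s = base64.urlsafe_b64encode(
        hmac.new(SECRET, (h + "." + p).encode(), hashlib.sha256).digest()
    ).rstrip(b"=").decode()
print(verify(h + "." + p + "." + s))             # True
print(verify(enc({"alg": "none"}) + "." + p + "."))  # False: not in VERIFIERS
```

Because the attacker can only name keys that already exist in `VERIFIERS`, declaring a surprise algorithm buys them nothing.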


The Payload

The payload is the claims object — a JSON document containing statements about a subject and the token itself. After decoding the second segment of the example, you get something like:

{
  "sub": "user_123",
  "iss": "https://auth.example.com",
  "aud": "https://api.example.com",
  "exp": 1709000000,
  "iat": 1708996400,
  "jti": "abc123"
}

Registered Claims and Their Semantics

The JWT specification (RFC 7519) defines a set of registered claims — names that have standardized, interoperable meanings. Validators are expected to understand and enforce these. The core set is:

| Claim | Full Name | Type | Meaning |
|---|---|---|---|
| 🔑 iss | Issuer | String (URI) | Who created and signed this token. Validators should check this matches the expected issuer. |
| 👤 sub | Subject | String | The entity the token makes claims about — typically a user ID. |
| 🎯 aud | Audience | String or array | Who this token is intended for. A validator must confirm it is a named recipient. |
| ⏰ exp | Expiration Time | NumericDate (Unix timestamp) | After this time, the token must be rejected. |
| 🕐 nbf | Not Before | NumericDate | Before this time, the token must be rejected. Less commonly used. |
| 📅 iat | Issued At | NumericDate | When the token was issued. Useful for age checks and key rotation logic. |
| 🔒 jti | JWT ID | String | A unique identifier for this token. Used to prevent replay attacks by tracking used tokens. |

Beyond registered claims, the payload can contain public claims (names registered in the IANA JSON Web Token Claims registry to avoid collisions) and private claims (application-specific names agreed upon between issuer and consumer). A real-world payload often mixes registered claims with application data:

{
  "sub": "user_123",
  "iss": "https://auth.example.com",
  "aud": "https://api.example.com",
  "exp": 1709000000,
  "iat": 1708996400,
  "jti": "abc123",
  "email": "alice@example.com",
  "roles": ["editor", "commenter"]
}

💡 Real-World Example: Imagine an API gateway receiving this token. Because the payload is plain JSON (just Base64URL-encoded), the gateway can read aud and exp without a cryptographic key — useful for quick routing or logging. However, it must not trust those values for access control decisions until the signature has been verified. The readable payload is convenient; it is not a security boundary.
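The checks a validator applies to these claims after signature verification can be sketched as follows. The expected issuer and audience values mirror the example payload; the function name and error messages are invented for this sketch:

```python
import time

# What THIS validator expects; values match the example token above.
EXPECTED_ISS = "https://auth.example.com"
EXPECTED_AUD = "https://api.example.com"

def check_claims(claims, now=None):
    """Assumes the signature has ALREADY been verified. Raises on failure."""
    now = time.time() if now is None else now
    if claims.get("iss") != EXPECTED_ISS:
        raise ValueError("wrong issuer")
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]   # aud: string or array
    if EXPECTED_AUD not in audiences:
        raise ValueError("token not intended for this audience")
    if now >= claims.get("exp", 0):
        raise ValueError("token expired")

claims = {"sub": "user_123", "iss": EXPECTED_ISS,
          "aud": EXPECTED_AUD, "exp": time.time() + 3600}
check_claims(claims)          # passes silently
try:
    check_claims({**claims, "aud": "https://other.example.com"})
except ValueError as err:
    print(err)                # token not intended for this audience
```

A real validator would also handle nbf, clock skew, and jti replay tracking; the point here is that every registered claim maps to an explicit check.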

🧠 Mnemonic: Think of the payload as a postcard. Anyone who handles it can read it. The signature is the wax seal — it proves the postcard hasn't been altered, but it doesn't hide the message.



The Signature

The signature is what transforms a JWT from a self-described data structure into a verifiable one. It is computed by taking the Base64URL-encoded header, appending a dot, appending the Base64URL-encoded payload, and then running that concatenated string through a cryptographic function using a secret or private key:

Signature = SIGN(
    base64url(header) + "." + base64url(payload),
    key
)

Visualized as a flow:

┌──────────────┐    base64url    ┌─────────────────────┐
│   Header     │ ──────────────► │ eyJhbGciOiJSUzI1..  │
│   (JSON)     │                 └──────────┬──────────┘
└──────────────┘                            │
                                            │  concatenate with "."
┌──────────────┐    base64url    ┌──────────▼──────────┐
│   Payload    │ ──────────────► │ eyJzdWIiOiJ1c2Vy..  │
│   (JSON)     │                 └──────────┬──────────┘
└──────────────┘                            │
                                            │
                                ┌───────────▼───────────┐
                                │  signing input string │
                                │  (header.payload)     │
                                └───────────┬───────────┘
                                            │
                                     SIGN(input, key)
                                            │
                                ┌───────────▼───────────┐
                                │      Signature        │
                                │  (base64url encoded)  │
                                └───────────────────────┘
Why the Signature Covers Both Header and Payload

This design has a precise consequence: any alteration to either the header or the payload invalidates the signature. Changing a single byte in either segment — say, changing "sub": "user_123" to "sub": "user_admin", or changing "alg": "RS256" to "alg": "HS256" — produces a different signing input string, which means the signature will no longer verify against the original key. A validator that catches this mismatch has caught tampering.

This is the core integrity guarantee a JWT provides: if the signature verifies, the header and payload are exactly as the issuer produced them.
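Both the signing computation and the tamper consequence can be reproduced with the standard library. This sketch uses HS256 so it is self-contained; the key and claim values are illustrative:

```python
import base64, hashlib, hmac, json

def b64url(data):
    # Base64URL: Base64 with the -/_ substitutions and padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

key = b"demo-signing-key"              # illustrative HS256 shared secret

header  = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "user_123"}).encode())

# Exactly the formula above: SIGN(base64url(header) + "." + base64url(payload), key)
signing_input = header + "." + payload
signature = b64url(hmac.new(key, signing_input.encode(), hashlib.sha256).digest())
token = signing_input + "." + signature

# One changed byte in the payload changes the signing input, so the
# signature a verifier recomputes no longer matches the original:
tampered = b64url(json.dumps({"sub": "user_admin"}).encode())
recomputed = b64url(hmac.new(key, (header + "." + tampered).encode(),
                             hashlib.sha256).digest())
print(recomputed == signature)         # False: tampering is detectable
```

Swapping the sub value, the alg value, or any other byte in either segment has the same effect: the recomputed signature diverges, and verification fails.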

💡 Mental Model: The signature is computed over both header and payload together. This means the algorithm declaration in the header is itself covered by the signature — it cannot be changed without breaking verification. The problem arises only when a validator trusts the header's alg claim to select the verification algorithm before confirming authenticity. At that point, the attacker can substitute a different algorithm in a freshly constructed token (with no valid signature) and attempt to confuse the validator.

Symmetric vs. Asymmetric Signing

The alg value determines what kind of key is involved:

  • HMAC algorithms (HS256, HS384, HS512): Use a single shared secret. The same key signs and verifies. Both issuer and validator must hold this secret.
  • RSA algorithms (RS256, RS384, RS512, PS256, etc.): Use an asymmetric key pair. The issuer signs with a private key; validators verify with the corresponding public key. The private key never needs to leave the issuer.
  • ECDSA algorithms (ES256, ES384, ES512): Also asymmetric, using elliptic curve cryptography. Generally produces smaller signatures than RSA at equivalent security levels.

(This is a simplified picture — the JWA specification defines additional algorithm families, and the choice between them involves considerations like key management complexity and performance that Section 3 addresses in the context of JWS.)



The alg: none Case

The JWT specification permits a token with no signature at all. When alg is set to "none", the signature segment is an empty string, and the token looks like:

eyJhbGciOiJub25lIiwidHlwIjoiSldUIn0.eyJzdWIiOiJ1c2VyXzEyMyJ9.

Decoded header:

{
  "alg": "none",
  "typ": "JWT"
}

Note the trailing dot — the empty signature segment is still present. This is syntactically valid per the spec.

⚠️ Common Mistake 2: Accepting alg: none tokens in production. The alg: none case exists in the specification for contexts where integrity is guaranteed by other means (such as a direct TLS-protected channel with no intermediaries). In practice, production validators must explicitly reject tokens with alg: none. Historically, several JWT libraries accepted these tokens by default or could be tricked into treating them as verified — effectively letting an attacker forge arbitrary payloads. The correct behavior is to maintain a strict allowlist of acceptable algorithms, and none should never appear on it.

❌ Wrong thinking: "If the library doesn't throw an error when it sees alg: none, the token must be safe."

✅ Correct thinking: "My validator must actively check the alg claim against an allowlist of algorithms I accept. If alg: none is not on that list — and it should never be — the token is rejected before any further processing."
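To make the forgery risk concrete, here is how trivially such a token can be minted with the standard library (the claim values and the allowlist contents are illustrative):

```python
import base64, json

def seg(obj):
    """Base64URL-encode a JSON object, padding stripped."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

# No key, no secret, no signing step: anyone can produce this token.
forged = (seg({"alg": "none", "typ": "JWT"}) + "."
          + seg({"sub": "user_123", "role": "admin"}) + ".")
print(forged)          # three segments, the last one empty

# A strict validator rejects it on the alg check alone, before any crypto:
head_seg = forged.split(".")[0]
header = json.loads(base64.urlsafe_b64decode(head_seg + "=" * (-len(head_seg) % 4)))
print(header["alg"] in {"RS256", "ES256"})   # False -> reject
```

The forged token parses cleanly and its claims decode to whatever the attacker chose; only the allowlist check stands between it and acceptance.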


Putting It Together: A Complete Picture

Here is how the three parts of a JWT relate to each other as a single coherent artifact:

          ┌─────────────────────────────────────────────────────┐
          │                        JWT                          │
          │                                                     │
          │  HEADER          PAYLOAD            SIGNATURE       │
          │  ────────        ──────────────     ──────────────  │
          │  alg: RS256      sub: user_123      cryptographic   │
          │  typ: JWT        iss: auth.ex.com   bytes covering  │
          │                  aud: api.ex.com    header+payload  │
          │                  exp: 1709000000                    │
          │                  iat: 1708996400                    │
          │                  jti: abc123                        │
          │                                                     │
          │  [visible]       [visible]          [verifiable]    │
          │  [tamper-        [tamper-           [covers both]   │
          │   evident]        evident]                          │
          └─────────────────────────────────────────────────────┘

📋 Quick Reference Card: What Each Part Provides

| Part | 🔍 Readable? | 🔒 Confidential? | ✅ Tamper-evident? | 📌 Purpose |
|---|---|---|---|---|
| 🗂️ Header | Yes | No | Yes (via signature) | Declares token type and signing algorithm |
| 📄 Payload | Yes | No | Yes (via signature) | Carries identity and session claims |
| 🔐 Signature | Encoded bytes | N/A | This is the mechanism | Binds header and payload to the issuer's key |

πŸ€” Did you know? The fact that the payload is readable β€” combined with the fact that it's tamper-evident β€” is actually an intentional design trade-off. It allows intermediate parties (like API gateways or load balancers) to inspect claims for routing and logging without needing a cryptographic key, while the signature still ensures that by the time the token reaches a validator, any modification will be detected. This is a real architectural benefit, and it explains why JWT was designed this way rather than defaulting to encryption. But it means that anything sensitive β€” passwords, PII beyond what's minimally necessary, secrets β€” should never be placed in a JWT payload unless the token is also encrypted (covered in Section 3).


What Base64URL Encoding Is Actually For

It is worth being explicit about why Base64URL encoding is used at all, since it is sometimes misunderstood as a form of obfuscation. The reason is purely mechanical: JSON objects contain characters β€” curly braces, colons, quotes, Unicode β€” that are not safe to embed directly in HTTP headers, URL parameters, or cookie values without escaping. Base64URL encoding produces a string that consists only of alphanumeric characters, hyphens, and underscores, making it safe to transmit in any of these contexts without percent-encoding or other transformation.

The choice of Base64URL specifically (rather than standard Base64) avoids the + and / characters that carry special meaning in URLs, and the removal of padding (=) avoids issues in some query string parsers. That is the entirety of the reason for the encoding. It carries no security property whatsoever.
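The mechanics can be shown in a few lines of standard-library Python. The helper name is illustrative; the point is that decoding a segment requires restoring padding and nothing else, and no key is involved.

```python
import base64
import json

def decode_segment(segment: str) -> dict:
    """Decode one Base64URL JWT segment back into its JSON object.

    Base64URL strips '=' padding, so we restore it before decoding.
    This is pure encoding reversal -- no keys or secrets involved.
    """
    segment += "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(segment))

# Example: a payload round-trips through the encoding unchanged.
encoded = base64.urlsafe_b64encode(b'{"sub": "user_123"}').rstrip(b"=").decode()
print(decode_segment(encoded))  # -> {'sub': 'user_123'}
```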

πŸ’‘ Pro Tip: When debugging a JWT, you can decode any segment by pasting it into a Base64URL decoder β€” or simply use a tool like jwt.io in a browser. The entire header and payload become immediately readable. This is useful during development, but it is also a reminder: treat JWTs as public documents for anything in the payload, and never store sensitive data there unless encryption is layered on top.



Section Summary

A JWT is three Base64URL-encoded JSON structures joined by dots. The header identifies the token type and, crucially, the algorithm used to produce the signature. The payload carries claims β€” both registered claims with standardized semantics (iss, sub, aud, exp, nbf, iat, jti) and application-specific ones. The signature is a cryptographic binding over the concatenation of the encoded header and payload; it guarantees that neither has been altered since the issuer produced the token.

Two properties are worth fixing firmly in mind before proceeding:

πŸ”§ The payload is not encrypted. Base64URL encoding is reversible by anyone. Confidentiality requires a separate mechanism β€” encryption, covered in Section 3.

πŸ”’ The alg header is part of what the signature protects, but it is also what the validator reads to select its verification logic. This creates a chicken-and-egg dependency that, if handled naively, becomes the root cause of several well-documented attack classes. Section 4 details the correct validation sequence; Section 5 catalogs what happens when it is skipped.

With the structure clearly in mind, the next section examines what it means to sign a JWT versus encrypt it β€” and when each property is the one your application actually needs.

Signing vs. Encryption: JWS and JWE Compared

A JWT on its own is not inherently secure. The security properties you actually get depend entirely on what you do to the token β€” and there are two fundamentally different operations available: signing and encryption. Many developers treat these as equivalent or interchangeable, but they protect against completely different threats. Understanding the distinction is not a pedantic detail; it directly determines whether private user data leaks to unintended parties, and whether a malicious actor can forge claims your server will trust.

The JOSE (JSON Object Signing and Encryption) family of standards formalizes these two operations into distinct specifications. A JWS (JSON Web Signature) is what most people mean when they say "JWT" β€” a token whose payload is signed but remains readable. A JWE (JSON Web Encryption) is a token whose payload is encrypted and therefore opaque to anyone who doesn't hold the decryption key. Conflating the two is one of the most common architectural mistakes in token-based systems.

What a Signed Token (JWS) Actually Gives You

When you sign a JWT, you are making a cryptographic promise about the token's integrity and authenticity. Integrity means the payload has not been tampered with since it was issued. Authenticity means you can verify who issued it. What you are not providing is confidentiality β€” the ability to keep the payload secret from parties who shouldn't read it.

This surprises many people because Base64URL encoding looks vaguely encrypted. It is not. Base64URL is a text-encoding scheme that transforms binary bytes into URL-safe ASCII characters. Any party who receives the token can decode the header and payload trivially β€” in a browser console, with a command-line tool, or on a site like jwt.io β€” without knowing any keys at all. The signature only protects against modification, not inspection.

Signed JWT (JWS) β€” Three dot-separated parts:

eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9   ← Header   (Base64URL encoded, readable)
.eyJzdWIiOiJ1c2VyXzEyMyIsInJvbGUiOiJhZG1pbiJ9  ← Payload  (Base64URL encoded, readable)
.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV...          ← Signature (Base64URL-encoded cryptographic bytes)

Any party holding this token can decode parts 1 and 2.
Only a party with the correct key can VERIFY part 3.

The practical implication: if your JWT payload contains a user's email address, medical record ID, financial account number, or any other sensitive attribute, every intermediate system that handles the token β€” load balancers, logging infrastructure, CDN edge nodes, browser local storage β€” can read that data. The signature does nothing to prevent this.

🎯 Key Principle: A signed JWT (JWS) solves the problem of trust β€” "can I rely on these claims?" β€” not the problem of secrecy β€” "should only certain parties see these claims?"

πŸ’‘ Real-World Example: Consider an access token issued by an authorization server that includes the claim "role": "billing_admin". A properly verified JWS guarantees that this claim was set by the issuer and hasn't been altered β€” a user cannot change "billing_admin" to "superadmin" without invalidating the signature. However, the role claim is visible to anyone who intercepts or receives the token. In many designs, this is perfectly acceptable β€” role names aren't secret. The design becomes problematic when developers add fields like "ssn": "123-45-6789" to the payload without recognizing that the signature provides zero protection against disclosure.

What an Encrypted Token (JWE) Actually Gives You

A JWE (JSON Web Encryption) encrypts the payload so that only a party holding the correct decryption key can read its contents. This addresses the confidentiality gap that signing leaves open. Where a JWS has three parts, a JWE has five dot-separated parts: a protected header describing the encryption algorithms used, an encrypted key, an initialization vector, the ciphertext, and an authentication tag.

Encrypted JWT (JWE) β€” Five dot-separated parts:

[Protected Header]
.[Encrypted Key]       ← The content encryption key, wrapped with recipient's public key
.[Initialization Vector]
.[Ciphertext]          ← The actual payload, encrypted β€” opaque without the decryption key
.[Authentication Tag]  ← Ensures the ciphertext hasn't been tampered with

Without the decryption key, the ciphertext reveals nothing about the payload.

JWE uses a two-layer encryption approach in the asymmetric case: a random Content Encryption Key (CEK) encrypts the actual payload using a symmetric algorithm (commonly AES-GCM), and then the CEK itself is encrypted with the recipient's public key. This design means you can encrypt the same payload for multiple recipients efficiently β€” each gets their own encrypted copy of the CEK, but the ciphertext is shared. (This multi-recipient case is less common in typical API designs but is a standard part of the JWE specification.)

⚠️ Common Mistake: Assuming JWE automatically means the content is also authenticated. Most standard JWE constructions do include an authentication tag that detects tampering (this is the fifth part of the compact serialization), but the issuer identity is not proven unless you also apply a signature. A recipient can confirm the ciphertext hasn't been corrupted, but they can't necessarily confirm who encrypted it without additional structure.

Combining Signing and Encryption: Nesting Order Matters

When you need both confidentiality and authenticated authorship, you combine JWS and JWE β€” but the order in which you nest them has real security consequences.

The two approaches are:

Sign then Encrypt (recommended in most cases):

  1. Create a signed JWS (the payload plus its signature).
  2. Encrypt the entire JWS as the payload of a JWE.

Encrypt then Sign (use with care):

  1. Create a JWE (encrypted payload).
  2. Sign the JWE as a JWS.

Sign-then-Encrypt flow:

  [Plaintext Payload]
        β”‚
        β–Ό
  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
  β”‚  Sign (JWS) β”‚  ← issuer private key
  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
        β”‚
        β–Ό
  [Signed JWS token]
        β”‚
        β–Ό
  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
  β”‚ Encrypt (JWE)    β”‚  ← recipient public key
  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
        β”‚
        β–Ό
  [Encrypted JWE β€” ciphertext contains the entire signed JWS]

Recipient decrypts β†’ reveals signed JWS β†’ verifies signature β†’ reads payload.
Both confidentiality and authentic authorship are achieved.

Why does order matter? With Sign-then-Encrypt, the signature is inside the encryption envelope. When the recipient decrypts and finds a valid signature from the expected issuer, they have strong confidence that the original signer produced this specific payload. With Encrypt-then-Sign, the signature is on the outside of the encryption envelope β€” meaning the signer is attesting to the ciphertext, not to the plaintext it contains. A subtle but real attack exists where an intermediary strips the outer signature and re-signs with their own key, making it appear they issued the content. For most cases where you care about proving authorship of the payload itself, Sign-then-Encrypt is the safer choice.

🧠 Mnemonic: Think of it like a letter: you write and sign the letter first (JWS), then seal it in an envelope (JWE). Opening the envelope reveals the signed letter, proving who wrote it. If you instead put a blank signed cover sheet on the outside of a sealed envelope, the signature proves nothing about what's inside.

πŸ’‘ Mental Model: JWS is a tamper-evident glass container β€” everyone can see what's inside, but they'll know if someone broke in. JWE is an opaque locked box β€” no one can see what's inside without the key. Combined (Sign-then-Encrypt), it's a signed letter inside a locked box: confidential delivery with provable authorship.

Symmetric vs. Asymmetric Signing: Who Can Verify?

Within JWS, there is a second consequential choice: whether to use symmetric or asymmetric signing. This choice determines which parties can verify the token β€” and that has architectural implications that are easy to underestimate.

Symmetric Signing (e.g., HMAC-SHA256, algorithm identifier HS256)

With symmetric signing, the same secret key is used to create the signature and to verify it. This means every party that needs to verify a token must hold a copy of the secret.

Symmetric signing:

  Issuer                     Verifier A          Verifier B
  ──────                     ──────────          ──────────
  [secret key K] ──────────► [secret key K]  +  [secret key K]
       β”‚
       β–Ό
  Signs token with K
                             Verifies with K     Verifies with K

  ⚠️ Every verifier must be trusted with K.
  If any verifier is compromised, K is compromised β€” and an
  attacker can then FORGE tokens, not just read them.

The security boundary collapses when the secret is shared broadly. If you have a single authorization server issuing tokens and a single resource server consuming them, symmetric signing is simple and perfectly appropriate. However, if you need five different microservices to verify the token, all five must hold the secret β€” and any one of them being compromised means an attacker can now mint arbitrary tokens.
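The symmetric case can be sketched with Python's standard library. This is a minimal HS256 implementation for illustration only (real systems should use a maintained JWT library); notice that the same `secret` parameter appears in both functions, which is exactly why every verifier is also a potential forger.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64URL without padding, as used in JWT compact serialization."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(payload: dict, secret: bytes) -> str:
    """Produce a compact JWS using HMAC-SHA256 (HS256)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_hs256(token: str, secret: bytes) -> bool:
    """Recompute the MAC over header.payload and compare in constant time."""
    header, body, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)
```

Any tampering with the payload changes the recomputed MAC and verification fails; but anyone holding `secret` can call `sign_hs256` and mint a valid token.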

Asymmetric Signing (e.g., RS256 using RSA, ES256 using ECDSA)

With asymmetric signing, there is a key pair: a private key used only to sign, and a corresponding public key used only to verify. The issuer keeps the private key secret and publishes the public key β€” often via a well-known endpoint like a JWKS (JSON Web Key Set) URI.

Asymmetric signing:

  Issuer                       Any Verifier (N of them)
  ──────                       ────────────────────────
  [private key] β€” SECRET       [public key] β€” published openly
       β”‚
       β–Ό
  Signs token with             Verifies with public key
  private key                  (cannot forge β€” no private key)

  The issuer's signing ability remains exclusive.
  Verifiers can confirm authenticity without gaining forgery ability.

This asymmetry is the key architectural advantage. Publish your public key, and any service β€” internal microservice, third-party partner, edge function β€” can verify your tokens without ever being able to create one. Compromising a verifier does not compromise the signing authority. This is why most public OAuth 2.0 authorization servers publish JWKS endpoints: they issue tokens with a private key and let any resource server fetch and cache the public key for verification.

🎯 Key Principle: The choice between symmetric and asymmetric signing is fundamentally a question of trust topology. Symmetric signing creates a shared-secret trust boundary that expands with every new verifier. Asymmetric signing keeps the trust boundary at the issuer β€” the private key never leaves.

πŸ’‘ Real-World Example: An identity provider issuing ID tokens to third-party relying parties must use asymmetric signing. The relying parties are external organizations β€” sharing a symmetric secret with them would give them the ability to forge tokens for any user on the platform. With asymmetric signing, the identity provider publishes its JWKS endpoint, relying parties fetch the public key, and verification works without any shared secret.

⚠️ Common Mistake: Using HS256 (symmetric) in a microservices architecture because it's the default in many JWT libraries. The moment a second service needs to verify the token, you have a secret distribution problem. If any of those services runs third-party code, has weaker deployment security, or is operated by a different team, you have implicitly granted that team the ability to forge tokens for your entire system.

Choosing the Right Tool for the Threat You're Solving

Deciding between JWS, JWE, or a combination requires being precise about which threats you are defending against.

πŸ“‹ Quick Reference Card:

| | πŸ”’ What it Protects | πŸ”§ Use When | ⚠️ Does NOT Protect |
| --- | --- | --- | --- |
| JWS (signed) | Integrity, Authenticity | Claims must be trustworthy but don't need to be secret | Payload confidentiality |
| JWE (encrypted) | Confidentiality | Payload contains sensitive data that must be hidden | Issuer authenticity (alone) |
| JWS + JWE (Sign-then-Encrypt) | Integrity, Authenticity, Confidentiality | Both trust and secrecy are required | Nothing significant β€” this is the full combination |
| HS256 (symmetric) | Same as JWS; simpler key setup | Single issuer + single verifier | Scales poorly; secret sharing expands trust boundary |
| RS256 / ES256 (asymmetric) | Same as JWS; scalable verification | Multiple verifiers; third-party consumers | Performance vs HS256 (minor in practice) |

A few concrete scenarios help anchor this:

  • Access tokens in a first-party single-service API: JWS with HS256 is often sufficient. The token is short-lived, the secret is shared between your auth module and your resource module in the same deployment, and the payload typically contains non-sensitive claim names like user IDs and scopes.

  • ID tokens issued to third-party applications: JWS with asymmetric signing (RS256 or ES256) is the standard approach. Third parties can verify without receiving any secret from you.

  • Tokens that traverse logging infrastructure or are stored in browser local storage and contain sensitive personal data: JWE is warranted. Even with HTTPS protecting data in transit, tokens stored in client-accessible storage or appearing in server logs should not expose sensitive fields in plaintext.

  • Tokens in a high-security federated health data exchange: Sign-then-Encrypt with asymmetric keys. The originating system's signature proves authorship; the encryption ensures only the intended recipient's system can read the payload.

❌ Wrong thinking: "My token is transmitted over HTTPS, so I don't need to worry about who can read the payload."

βœ… Correct thinking: "HTTPS protects the token in transit, but once the token arrives, it may be logged, cached, forwarded to downstream services, stored in browser storage, or passed through middleware. If the payload contains sensitive data, TLS alone is not sufficient β€” I need to consider JWE or redesign what claims belong in the token."

πŸ€” Did you know? The reason to prefer ES256 (ECDSA with P-256) over RS256 (RSA) in many modern designs is not primarily security level β€” both provide strong security at their recommended key sizes β€” but rather token compactness. ECDSA signatures are significantly shorter than RSA signatures, which matters when tokens are included in HTTP headers on every request across high-throughput APIs. This is a practical engineering tradeoff, not a security one, and the right choice depends on your environment's constraints.

What This Section Does Not Cover

This section has deliberately focused on the what and why of JWS versus JWE β€” which security properties each provides and what architectural consequences follow from symmetric versus asymmetric key choices. It has intentionally stayed at the conceptual and structural level on algorithm selection.

The specifics of which RSA key size to use, why certain ECDSA curves are preferred, which algorithms are considered deprecated, and how to configure algorithm allow-lists in your JWT library are covered in the JWT Structure & Algorithms child lesson. Those choices matter a great deal in practice β€” using an undersized RSA key or a deprecated algorithm like RS1 undermines everything discussed here β€” but they build on the conceptual foundation this section establishes. Understand why you're signing versus encrypting and which parties need to verify before optimizing the specific algorithm parameters.

The validation steps that a relying party must execute when receiving a JWS or JWE β€” including checks that are frequently omitted β€” are covered in the next section, Token Validation in Practice.

Token Validation in Practice: What Must Be Checked

Receiving a JWT is not the same as trusting one. A token arrives as three Base64URL-encoded segments separated by dots, and nothing about that structure guarantees the claims inside are legitimate. The relying party β€” whatever service consumes the token β€” must perform a specific sequence of checks before it acts on anything in the payload. The order of those checks matters as much as the checks themselves, and the ones most commonly skipped are also the ones that create the most exploitable vulnerabilities. This section walks through each required validation step in the order it should be performed, with attention to where implementations go wrong in practice.

The Validation Sequence: Order Is Not Optional

Before exploring individual checks, it is worth establishing why sequence matters. A JWT payload is just JSON β€” it can be decoded and read by anyone without any keys or secrets. This means a relying party could read the exp field and say "this token hasn't expired" before checking whether the signature is valid. That would be a mistake.

🎯 Key Principle: The signature check is a gate, not a detail. No claim in the payload should be trusted until the signature is verified. Reading exp, aud, or any other claim before verifying the signature means you are making trust decisions based on data that could have been crafted by an attacker.

The correct sequence looks like this:

Incoming Token
      β”‚
      β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  1. Parse the token          β”‚
β”‚     (split on '.')           β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
               β”‚
               β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  2. Identify the signing key β”‚
β”‚     (from header: alg, kid)  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
               β”‚
               β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  3. Verify the signature     β”‚  ◄── GATE: stop here if invalid
β”‚     (reject if mismatch)     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
               β”‚
               β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  4. Validate iss (issuer)    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
               β”‚
               β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  5. Validate aud (audience)  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
               β”‚
               β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  6. Check exp (expiry)       β”‚
β”‚     and nbf (not before)     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
               β”‚
               β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  7. Check jti for replay     β”‚
β”‚     (if replay prevention    β”‚
β”‚      is required)            β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
               β”‚
               β–Ό
        Token is trusted

This ordering means that by the time you check exp, you already know the token was signed by a key you trust. An attacker cannot manipulate the expiry without invalidating the signature.

🧠 Mnemonic: PKSIANEJ β€” Parse, Key lookup, Signature, Issuer, Audience, Not-before, Expiry, Jti replay check. Covers the primary checks in sequence; additional application-specific validations may follow.

Step 1: Signature Verification

Signature verification is the process of confirming that the token's header and payload have not been modified since they were signed by the issuer. For a signed JWT (JWS), the signature is computed over the concatenation of the Base64URL-encoded header and payload, separated by a dot. The verifier recomputes this operation using the appropriate key and compares the result to the signature in the token.

Two distinct failure modes exist here. The first is using the wrong key β€” for example, fetching a key from a JWKS endpoint without matching it against the kid (key ID) in the token header, and instead just using whatever key happens to be first in the set. The second, and more dangerous, failure mode is the alg: none vulnerability: some earlier JWT libraries would skip signature verification if the algorithm field in the header was set to none. An attacker could strip the signature, change the payload, and set alg: none to produce a token that a naïve verifier would accept.

⚠️ Common Mistake β€” Mistake 1: Trusting the alg field in the token header to determine which algorithm to use for verification without validating it against an allow-list of expected algorithms. The verifier should know which algorithm(s) it expects and reject tokens that specify anything else.

❌ Wrong thinking: "I'll use whatever algorithm the token header says."
βœ… Correct thinking: "I'll verify using the algorithm I expect, and reject
   any token whose header specifies something different."

A concrete approach: before verification, check that header.alg is one of a small explicit allow-list β€” for example, ["RS256", "ES256"] β€” and reject the token immediately if it is not. Never include none in that list.

πŸ’‘ Pro Tip: When using an asymmetric algorithm like RS256, the verifier needs only the public key, not the private key. If your relying party has access to the private signing key, that is usually a sign of a system design problem β€” the key should be isolated to the issuer.

Step 2: Issuer Validation

The iss (issuer) claim identifies who created the token. After the signature is verified, the relying party should confirm that the issuer is one it recognizes and trusts. In a system with multiple issuers β€” for example, an identity provider for human users and a separate service-account issuer for machine-to-machine tokens β€” failing to validate iss could allow a token from one context to be accepted in another.

Issuer validation is straightforward: compare the iss claim against an expected value or list of values. The check should be an exact string match, not a substring match. A check that accepts any issuer whose value contains auth.example.com could be bypassed with a hostname like auth.example.com.evil.io.
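The substring pitfall is worth seeing concretely. The hostnames and function name here are illustrative:

```python
TRUSTED_ISSUER = "https://auth.example.com"  # illustrative trusted issuer

def issuer_ok(iss: str) -> bool:
    # Exact match -- NOT "TRUSTED_ISSUER in iss", which a lookalike
    # hostname such as auth.example.com.evil.io would satisfy.
    return iss == TRUSTED_ISSUER

print(issuer_ok("https://auth.example.com"))          # True
print(issuer_ok("https://auth.example.com.evil.io"))  # False
```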

Step 3: Audience Validation

Audience validation is one of the checks most commonly omitted or implemented incorrectly, and the consequences are significant. The aud (audience) claim names the intended recipient(s) of the token. When a token is issued for Service A and Service B also accepts it, Service B has no way to know whether this token was legitimately obtained by its caller or was stolen and replayed from a different context.

This is a concrete instance of a confused-deputy attack: Service B is being used as a deputy to perform actions on behalf of a caller who holds a token that was never meant for Service B. The caller might have legitimately obtained a token for Service A through a low-privilege flow, and then present that same token to Service B to gain elevated access.

A concrete scenario: imagine an authorization server that issues tokens with aud: ["reports-service"]. A billing service that fails to validate aud will also accept this token. If the billing service performs more sensitive operations, an attacker who obtained a token for the reports service now has unauthorized access to billing.

Attacker flow (no aud validation):

  Attacker ──► reports-service  ──► gets token (aud: reports-service)
      β”‚
      └──────────────────────────► billing-service
                                        β”‚
                                  (no aud check)
                                        β”‚
                                  βœ… ACCEPTED β€” confused deputy
Correct flow (with aud validation):

  Attacker ──► reports-service  ──► gets token (aud: reports-service)
      β”‚
      └──────────────────────────► billing-service
                                        β”‚
                            (aud β‰  "billing-service")
                                        β”‚
                                  ❌ REJECTED

The aud claim can be a string or an array of strings. The validation rule from the JWT specification (RFC 7519) is that if the verifier's identifier is present anywhere in the audience array, the check passes. A relying party must know its own identifier and check for it explicitly β€” not just check that aud is non-empty.
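That string-or-array rule is easy to get subtly wrong, so here is a sketch of the check (the function name is illustrative):

```python
def audience_ok(aud, my_id: str) -> bool:
    """Pass only if OUR identifier appears in aud (RFC 7519 rule).

    aud may be a single string or an array of strings; merely being
    present or non-empty is not enough.
    """
    auds = [aud] if isinstance(aud, str) else list(aud or [])
    return my_id in auds
```

For example, `audience_ok(["reports-service"], "billing-service")` fails even though the claim exists and is non-empty, which is exactly the case the confused-deputy scenario above depends on.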

⚠️ Common Mistake β€” Mistake 2: Checking only that the aud claim exists, rather than checking that it contains the expected value for this specific service. An aud claim set to ["some-other-service"] is not a valid audience for your service, even though the claim is present.

Step 4: Expiry and Not-Before Checks

Time-based claims exist to bound a token's validity window. The exp (expiration time) claim specifies the time after which the token must not be accepted. The nbf (not before) claim specifies the time before which the token must not be accepted. Both are expressed as Unix timestamps (seconds since the epoch).

The validation logic is straightforward:

  • Reject if current_time >= exp
  • Reject if current_time < nbf (when nbf is present)

But the practical complexity lies in clock skew β€” the difference in system time between the issuer and the verifier. Even with NTP synchronization, clocks across distributed systems are not perfectly aligned. A token issued with exp set to exactly 60 seconds from now might arrive at a verifier whose clock is 5 seconds ahead, causing it to appear expired immediately.

The standard mitigation is a clock skew tolerance (sometimes called a leeway): a small window, typically 30 to 60 seconds, added to both sides of the time window. In practice, this means the verifier accepts a token as valid if current_time < exp + skew_tolerance and current_time > nbf - skew_tolerance.

Time validity window (with skew tolerance):

  nbf - skew     nbf                exp              exp + skew
     β”‚             β”‚                 β”‚                   β”‚
─────┼─────────────┼─────────────────┼───────────────────┼─────
     β”‚  [grace]    β”‚   VALID WINDOW  β”‚     [grace]        β”‚
     │◄────────────►◄────────────────►◄──────────────────►│

πŸ’‘ Real-World Example: A mobile application that issues a token at noon with a 15-minute expiry sends it to a backend server whose NTP synchronization has drifted by 45 seconds. Without a skew tolerance, requests arriving in the last 45 seconds before expiry could fail unpredictably. A 60-second leeway absorbs this variance without materially extending the effective validity window.

⚠️ Common Mistake β€” Mistake 3: Setting clock skew tolerance too generously. A tolerance of 5 minutes or more defeats much of the purpose of short-lived tokens. If your tolerance is longer than your token's intended validity window, you have effectively removed the time-based protection. Keep the tolerance small β€” 30 to 60 seconds is the typical range β€” and fix the underlying clock synchronization problem instead.

One subtlety worth noting: the exp claim is registered and widely supported, but nbf is optional and not all issuers include it. Verifiers should check it when present and otherwise proceed. A missing nbf is not an error.

πŸ€” Did you know? The iat (issued-at) claim is a timestamp that records when the token was created, but it is not itself a validity check β€” RFC 7519 does not require verifiers to reject tokens based solely on iat age. Some applications use iat to implement their own maximum token age policy on top of exp, but this is application-specific logic rather than a standard validation step.

Step 5: Replay Prevention with the JTI Claim

Even a fully valid token β€” correctly signed, with the right audience, not yet expired β€” can be abused if an attacker intercepts it and reuses it before it expires. This is called a replay attack. The jti (JWT ID) claim is a unique identifier assigned to each token by the issuer, and it is the standard mechanism for replay detection.

The logic is: if the verifier tracks which jti values it has already seen, it can reject any token whose jti appears in that set, even if every other claim is valid.

Replay attack scenario:

  Legitimate user ──► presents token (jti: abc123) ──► Service βœ… accepts

  Attacker intercepts token (jti: abc123)
  Attacker ──────────────────────────────────────────► Service ?

  Without jti tracking:  βœ… accepted (replay succeeds)
  With jti tracking:     ❌ rejected (jti already seen)

The critical implementation requirement is the short-lived store of seen token IDs. The store only needs to retain jti values until the corresponding token has expired β€” once exp has passed, that token cannot be reused anyway, so the jti can be safely evicted from the store. This makes the storage requirement proportional to the token's validity window, not unbounded.

For a token valid for 15 minutes, the store needs to hold jti values for only 15 minutes (plus the clock skew tolerance). A Redis cache with a TTL matching the token lifetime is a common implementation choice.
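A minimal single-process version of such a store, with eviction tied to token expiry, might look like this (the class and method names are hypothetical; a production deployment would use a shared store such as Redis with a TTL, as noted above):

```python
import time

class SeenJtiStore:
    """In-memory jti store for a single verifier instance.
    Illustrative only: a horizontally scaled service needs a shared store."""

    def __init__(self):
        self._seen = {}  # jti -> token expiry timestamp

    def check_and_record(self, jti: str, exp: float, leeway: int = 60) -> bool:
        """Return True if this jti is fresh; remember it until the token expires."""
        now = time.time()
        # Evict jtis whose tokens have expired -- they cannot be replayed anyway,
        # so storage stays proportional to the token validity window.
        self._seen = {j: e for j, e in self._seen.items() if e + leeway > now}
        if jti in self._seen:
            return False             # replay: jti already seen
        self._seen[jti] = exp
        return True

store = SeenJtiStore()
exp = time.time() + 900  # 15-minute token
print(store.check_and_record("abc123", exp))  # True: first presentation accepted
print(store.check_and_record("abc123", exp))  # False: replay rejected
```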

⚠️ Common Mistake β€” Mistake 4: Issuing tokens with jti claims but never building the verification store. A jti claim that the verifier ignores provides no replay protection whatsoever. The issuer and verifier must coordinate: if jti is included, verification is the verifier's responsibility.

πŸ’‘ Pro Tip: Replay prevention via jti is most important when tokens are used in high-value or sensitive operations β€” financial transactions, one-time authorization grants, or anything where a second use of the same credential should be impossible. For general session tokens with short expiry windows, the risk of replay within a brief window may be acceptable without jti tracking, depending on your threat model. Be deliberate about whether you need it.

There is one simplification in the above picture worth naming: this replay detection approach assumes the verifier has a shared store accessible to all instances of the service. In a horizontally scaled deployment with multiple verifier instances, each instance needs access to the same jti store β€” local in-memory storage per instance will not provide reliable replay protection. This is a distributed systems coordination problem, not just a JWT problem.

Putting It Together: A Validation Checklist

The checks described above form a coherent sequence. Here is a consolidated reference:

πŸ“‹ Quick Reference Card: JWT Validation Steps

πŸ”’ Step πŸ”’ Check ⚠️ Common Failure
πŸ”§ 1 Parse token structure (3 segments) Accepting malformed tokens
πŸ”‘ 2 Resolve signing key via kid / JWKS Using wrong key; not validating alg
βœ… 3 Verify signature against allow-listed algorithm Trusting alg: none; skipping verification
🏷️ 4 Validate iss against expected issuer(s) Accepting tokens from unknown issuers
🎯 5 Validate aud against this service's identifier Skipping check; checking only for presence
⏱️ 6 Check exp (and nbf if present) with skew tolerance No tolerance; excessive tolerance
πŸ” 7 Check jti against seen-token store (if required) Including jti in tokens but never verifying it

Note that key rotation and JWKS endpoint validation β€” specifically how the verifier fetches, validates, and caches public keys β€” interact deeply with step 2 and are covered in depth in the Key Rotation & Validation Strategy section.
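The full sequence can be expressed as one ordered function. This stdlib sketch assumes HS256 for brevity (an RS256 deployment would instead verify with the public key resolved via kid); every name here is illustrative, and in production this work belongs to a maintained JWT library:

```python
import base64, hashlib, hmac, json, time

def b64url_decode(seg: str) -> bytes:
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

def b64url_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(claims: dict, key: bytes) -> str:
    """Issuer side: build a signed HS256 token (for demonstration only)."""
    header = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url_encode(json.dumps(claims).encode())
    sig = hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url_encode(sig)}"

def validate_hs256(token: str, key: bytes, expected_iss: str,
                   expected_aud: str, leeway: int = 60) -> dict:
    """Run the checklist in order; raise ValueError at the first failure."""
    parts = token.split(".")
    if len(parts) != 3:                                    # 1. structure
        raise ValueError("malformed token")
    header = json.loads(b64url_decode(parts[0]))
    if header.get("alg") != "HS256":                       # 2. algorithm allow-list
        raise ValueError("algorithm not allowed")
    expected_sig = hmac.new(key, f"{parts[0]}.{parts[1]}".encode(),
                            hashlib.sha256).digest()
    if not hmac.compare_digest(expected_sig, b64url_decode(parts[2])):
        raise ValueError("bad signature")                  # 3. signature
    claims = json.loads(b64url_decode(parts[1]))
    if claims.get("iss") != expected_iss:                  # 4. issuer
        raise ValueError("untrusted issuer")
    aud = claims.get("aud")
    if expected_aud not in (aud if isinstance(aud, list) else [aud]):
        raise ValueError("wrong audience")                 # 5. audience
    now = time.time()
    if now > claims.get("exp", 0) + leeway:                # 6. expiry with leeway
        raise ValueError("expired")
    if "nbf" in claims and now < claims["nbf"] - leeway:
        raise ValueError("not yet valid")
    return claims      # 7. jti check is application policy (see Step 5 above)

key = b"demo-secret"
token = sign_hs256({"iss": "auth.example.com", "aud": "my-service",
                    "exp": int(time.time()) + 900}, key)
print(validate_hs256(token, key, "auth.example.com", "my-service")["aud"])  # my-service
```

The order matters: the signature check runs before any claim is trusted, and claims are only read from the payload after the signature over that payload has been verified.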

Why Libraries Don't Make This Automatic

A reasonable question arises: if this sequence is well-known, why don't JWT libraries handle it automatically? The short answer is that they often cannot β€” some of these checks require application-specific configuration that the library cannot know on its own.

A library can perform signature verification once you give it the right key. But it cannot know which audience your service expects, because that is your service's identity. It cannot know which issuers you trust, because that is your deployment's policy. Some libraries will skip aud validation unless you explicitly pass the expected audience. Some will skip jti checking because maintaining a distributed seen-token store is infrastructure the library cannot provide.

This means configuration choices matter as much as library choice. Before using a JWT library in production, verify that:

πŸ”§ Algorithm restriction is configurable β€” you should be able to specify an explicit allow-list.
πŸ“š Audience validation is enabled and requires you to pass an expected value, rather than merely checking that an aud claim exists.
🎯 Clock skew is configurable and defaults to a reasonable value (not zero, not five minutes).
πŸ”’ nbf validation is performed when the claim is present.

πŸ’‘ Mental Model: Think of a JWT library as a tool that provides the cryptographic and parsing primitives, while your application code is responsible for the policy decisions β€” which issuers, which audiences, and what to do with the jti. The library cannot substitute for understanding the validation sequence yourself.

Sequencing Errors in Practice

To make the sequencing point concrete: imagine a service that reads the sub (subject) claim to look up a user record in the database, performs a database query, and then verifies the signature. An attacker who can craft a token with an arbitrary sub β€” for example, the ID of an administrator β€” can cause the service to load the wrong user record before the signature check fails. Depending on what happens between the database lookup and the validation failure, this could leak information, trigger side effects, or in a poorly structured code path, accidentally grant access.

The fix is not just conceptual but structural: verification should occur at the perimeter of the system, in middleware or a dedicated validation layer, before any application logic runs. Application code should receive a validated token object, not a raw JWT string that it is expected to validate itself.

❌ Wrong thinking: Validate inline as part of business logic

  function handleRequest(rawToken, userId) {
    claims = jwt.decode(rawToken)          // decoded but NOT verified
    user = db.lookup(claims.sub)           // database hit on unverified data
    jwt.verify(rawToken, publicKey)        // too late
    ...
  }

βœ… Correct thinking: Validate at the boundary, pass verified claims inward

  middleware: verifiedClaims = jwt.verify(rawToken, {algorithms: ["RS256"],
                                                     audience: "my-service",
                                                     issuer: "auth.example.com"})
  function handleRequest(verifiedClaims, userId) {
    user = db.lookup(verifiedClaims.sub)   // safe: claims are verified
    ...
  }

(This pseudocode is illustrative of the structural pattern; exact API signatures vary by library and language.)

The validation sequence is not just a checklist to complete β€” it is an architectural boundary. Token validation should be the first thing that happens when a request arrives, and nothing else should be allowed to proceed until every required check has passed. Once that discipline is in place, the individual checks described in this section become reliable gatekeepers rather than scattered assertions spread through application code.

Common Vulnerabilities and Pitfalls in JWT Implementations

A JWT library that accepts a token is not the same as a JWT library that validates a token. The gap between those two statements is where most real-world JWT vulnerabilities live. The attacks in this section are not theoretical edge cases β€” they have appeared repeatedly in production systems, in open-source libraries, and in security audits. What makes them particularly instructive is that each one has the same structural root cause: the system used attacker-controlled data inside the token to make a security decision about the token itself. Once you internalize that pattern, you can recognize the failure mode even in contexts you have not seen before.

The Algorithm Confusion Attack

The most classically elegant JWT vulnerability is the algorithm confusion attack, sometimes called the alg confusion or key confusion attack. To understand it, you need to hold two facts in mind simultaneously.

First, RSA-based JWT signing (RS256, RS384, RS512) uses asymmetric cryptography: the server signs with a private key and verifiers check the signature with the corresponding public key. The public key is, by definition, public β€” it can be freely distributed and is often published at a well-known endpoint.

Second, HMAC-based JWT signing (HS256, HS384, HS512) uses symmetric cryptography: the same secret is used for both signing and verification. When a library verifies an HS256 token, it takes whatever bytes are configured as the "key" and uses them in the HMAC computation.

The attack exploits the interaction between these two facts:

Normal RS256 flow:
  Server signs with:    [private key]      β†’ produces RS256 token
  Server verifies with: [public key]       β†’ checks RS256 signature

Attack flow (alg confusion):
  Attacker receives:    [public key]       (legitimately, from a JWKS endpoint)
  Attacker crafts:      HS256 token, signed with that public key as the HMAC secret
  Server receives token, reads alg=HS256 from header
  Vulnerable library:   uses [public key] as the HMAC secret β†’ verification passes βœ“

The server verified the signature correctly β€” against the wrong algorithm with the wrong key material, but it passed the check. The attacker forged a token the server will accept.

⚠️ Common Mistake: The root cause is allowing the token header to determine which algorithm the server uses. A token is attacker-controlled data. Letting that data influence the cryptographic operation used to validate it is the mistake.

The fix is exactly one rule: the expected algorithm must be hardcoded in the server configuration, never read from the token header. In practice this means passing the allowed algorithm explicitly to your JWT library's verification function, not using any mode that auto-detects from the header.

❌ Wrong:
  verify(token)  // library reads alg from header, uses it

βœ… Correct:
  verify(token, algorithm="RS256", key=public_key)
  // if the token header says anything other than RS256, reject immediately

🎯 Key Principle: The algorithm is a server policy, not a token property. It belongs in your configuration, not in the token.
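The attack fits in a few lines of stdlib Python. The PEM string below is a stand-in for a real published RSA public key (its contents do not matter; only the fact that an attacker can fetch those bytes), and the verifier names are hypothetical:

```python
import base64, hashlib, hmac, json

def b64url_enc(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_dec(seg: str) -> bytes:
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

# Stand-in for the server's published RSA public key -- public by design.
PUBLIC_KEY_PEM = b"-----BEGIN PUBLIC KEY-----\nMIIBIjANBg...\n-----END PUBLIC KEY-----\n"

# Attacker forges an HS256 token using the PUBLIC key bytes as the HMAC secret.
header = b64url_enc(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload = b64url_enc(json.dumps({"sub": "admin"}).encode())
sig = hmac.new(PUBLIC_KEY_PEM, f"{header}.{payload}".encode(), hashlib.sha256).digest()
forged = f"{header}.{payload}.{b64url_enc(sig)}"

def naive_verify(token: str, key_material: bytes) -> bool:
    """VULNERABLE: lets the token header choose the verification algorithm."""
    h, p, s = token.split(".")
    if json.loads(b64url_dec(h)).get("alg") == "HS256":   # attacker picked this branch
        expected = hmac.new(key_material, f"{h}.{p}".encode(), hashlib.sha256).digest()
        return hmac.compare_digest(expected, b64url_dec(s))
    return False  # (RS256 verification path elided)

def strict_verify_alg(token: str) -> bool:
    """SAFE: the algorithm is server policy; reject before any crypto runs."""
    alg = json.loads(b64url_dec(token.split(".")[0])).get("alg")
    return alg == "RS256"  # then verify with the RSA public key (elided)

print(naive_verify(forged, PUBLIC_KEY_PEM))  # True  -- forgery accepted
print(strict_verify_alg(forged))             # False -- rejected immediately
```

The naive verifier performs its HMAC computation flawlessly; the flaw is entirely in letting the header select which computation to perform.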

Accepting alg: none β€” Permissive Library Defaults

Closely related to algorithm confusion is the alg: none vulnerability. The JWT specification includes none as a valid algorithm value, intended for situations where the token's integrity is guaranteed by some outer transport mechanism. In practice, very few real deployments have a legitimate use for unsigned tokens.

The danger arises when a JWT library ships in a permissive mode where it does not reject alg: none by default. An attacker who intercepts or constructs a token can set "alg": "none" in the header and strip the signature entirely (or replace it with an empty string). A vulnerable library, seeing alg: none, skips signature verification and accepts the token.

Malicious token structure:
  Header:    {"alg": "none", "typ": "JWT"}
  Payload:   {"sub": "admin", "role": "superuser", ...}
  Signature: (empty)

  Encoded: eyJhbGciOiJub25lIiwidHlwIjoiSldUIn0.eyJzdWIiOiJhZG1pbiIsInJvbGUiOiJzdXBlcnVzZXIifQ.
  (note the trailing dot with nothing after it)

The library receives this, reads alg: none, and returns "valid" β€” because there is nothing to check.

πŸ’‘ Real-World Example: This class of vulnerability has been found in multiple JWT libraries across different languages. Library maintainers have issued patches that disable none by default, but codebases using older pinned versions or explicit configuration flags remain exposed. The vulnerability is not theoretical β€” it has been exploited in the wild against authentication systems.

The mitigation has two layers:

πŸ”’ At the library level: Always explicitly configure the list of allowed algorithms. If your system uses RS256, the allowed list is ["RS256"]. The string "none" should never appear on that list.

πŸ”’ At the architecture level: Treat unsigned JWTs as a feature you have deliberately disabled, not a case you handle gracefully. If a token arrives claiming alg: none, reject it immediately without further processing.

⚠️ Common Mistake: Some developers assume their library handles this safely by default without checking. Always verify the default behavior of the specific version of any JWT library you use, and set the allowed algorithms list explicitly regardless.
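A verifier that enforces an explicit allow-list rejects an unsigned token before touching any signature logic. The helper names below are illustrative (stdlib Python), and the token is the same unsigned example shown earlier in this section:

```python
import base64, json

def b64url_dec(seg: str) -> bytes:
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

ALLOWED_ALGS = {"RS256"}  # server policy; "none" is never on this list

def check_alg(token: str) -> bool:
    """Reject any token whose header algorithm is not on the allow-list."""
    alg = json.loads(b64url_dec(token.split(".")[0])).get("alg")
    return alg in ALLOWED_ALGS

# The unsigned token from the example above: note the empty third segment.
evil = ("eyJhbGciOiJub25lIiwidHlwIjoiSldUIn0"
        ".eyJzdWIiOiJhZG1pbiIsInJvbGUiOiJzdXBlcnVzZXIifQ.")
print(json.loads(b64url_dec(evil.split(".")[0])))  # {'alg': 'none', 'typ': 'JWT'}
print(check_alg(evil))                             # False: rejected immediately
```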

Token Storage: localStorage vs. HttpOnly Cookies

Once a JWT is issued, it has to be stored somewhere on the client so it can be attached to subsequent requests. The two dominant choices β€” localStorage and HttpOnly cookies β€” each carry a distinct risk profile, and the choice is a genuine trade-off, not a clear winner.

Storage location and attack surface:

  localStorage
  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
  β”‚ Readable by JavaScript running on the page   β”‚
  β”‚  β†’ XSS attack can read and exfiltrate token  β”‚
  β”‚  β†’ Stolen token usable from attacker's serverβ”‚
  β”‚ NOT automatically sent cross-origin          β”‚
  β”‚  β†’ CSRF is not a concern                     β”‚
  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

  HttpOnly Cookie
  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
  β”‚ NOT readable by JavaScript                   β”‚
  β”‚  β†’ XSS cannot read the token value           β”‚
  β”‚  β†’ XSS can still make requests that carry it β”‚
  β”‚ Automatically sent by browser                β”‚
  β”‚  β†’ CSRF attacks can trigger requests with it β”‚
  β”‚  β†’ Requires CSRF mitigations (SameSite, etc.)β”‚
  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Storing a JWT in localStorage exposes it to any JavaScript executing in the same origin. A successful XSS (Cross-Site Scripting) attack β€” injecting malicious script into your page β€” can read the token with localStorage.getItem(...) and exfiltrate it to an attacker-controlled server. The attacker now has a fully portable token they can use from anywhere, until it expires.

Storing a JWT in an HttpOnly cookie prevents JavaScript from reading the cookie value at all, which closes the XSS exfiltration path. However, browsers automatically attach cookies to same-origin requests, which means a CSRF (Cross-Site Request Forgery) attack β€” tricking a user's browser into making a request to your API from a malicious page β€” will carry the cookie along. The request arrives at your server with a valid JWT, even though the user did not intend to make it.

πŸ’‘ Mental Model: localStorage gives the attacker a copy of the token they can use anywhere. HttpOnly cookies keep the token inaccessible, but a forged request can use the token while it's still in the browser. Different threat, different mitigation required.

Mitigating CSRF for cookie-stored tokens typically involves one or more of:

  • SameSite=Strict or SameSite=Lax cookie attribute: prevents the browser from sending the cookie on cross-origin requests
  • Double-submit cookie or synchronizer token patterns for state-changing requests
  • CORS policy configured to reject cross-origin requests from untrusted origins

⚠️ Common Mistake: Switching from localStorage to HttpOnly cookies and considering security solved. You have traded one attack vector for another. CSRF protection must be implemented alongside the cookie approach.

🎯 Key Principle: There is no storage option that eliminates all risk. The question is which risk profile matches your threat model and which mitigations you are prepared to implement correctly.

Embedding Sensitive Data in a JWS Payload

A signed JWT (a JWS, JSON Web Signature) provides integrity β€” you can verify the payload has not been tampered with. It does not provide confidentiality β€” the payload is Base64URL-encoded, which is an encoding scheme, not an encryption scheme.

JWS structure (what an observer with network access sees):

  eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9
        ↓ base64url decode β€” no key needed
  {"alg": "RS256", "typ": "JWT"}

  .eyJzdWIiOiJ1c2VyXzEyMyIsImVtYWlsIjoidXNlckBleGFtcGxlLmNvbSIsInJvbGUiOiJhZG1pbiJ9
        ↓ base64url decode β€” no key needed
  {"sub": "user_123", "email": "user@example.com", "role": "admin"}

  .SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
        ↑ signature β€” verifiable only with the key, but the payload above
          is fully readable without it

Anyone who can observe the token β€” in browser developer tools, in server logs, in a proxy or load balancer access log, in a debug trace β€” can decode the payload in seconds. Base64 is trivially reversible.
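You can confirm this with nothing but the standard library; decoding the payload segment from the example above requires no key at all:

```python
import base64, json

# The middle (payload) segment of the JWS shown above.
payload_seg = ("eyJzdWIiOiJ1c2VyXzEyMyIsImVtYWlsIjoidXNlckBleGFtcGxlLmNvbSIs"
               "InJvbGUiOiJhZG1pbiJ9")

# Restore padding, then decode -- an encoding step, not decryption.
decoded = base64.urlsafe_b64decode(payload_seg + "=" * (-len(payload_seg) % 4))
print(json.loads(decoded))
# {'sub': 'user_123', 'email': 'user@example.com', 'role': 'admin'}
```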

The practical consequence: never embed data in a JWS payload that should not be visible to any party who might handle the token in transit. This includes:

  • 🚫 Passwords or password hashes (obviously, but it happens)
  • 🚫 Social Security numbers, national ID numbers, or other government identifiers
  • 🚫 Full credit card numbers or financial account details
  • 🚫 Sensitive health or medical information
  • 🚫 Private keys or API secrets
  • 🚫 Detailed internal system identifiers that aid privilege escalation

When you need a token that carries data which must remain confidential, the correct tool is JWE (JSON Web Encryption), which encrypts the payload. Section 3 of this lesson covers the distinction between JWS and JWE in detail. As a quick decision rule: if you are about to embed a field in a JWT and you would not be comfortable seeing that field in a server log, either use JWE or move that data off the token entirely.

πŸ’‘ Pro Tip: Even with JWE, think carefully before embedding sensitive PII in a long-lived token. Revocation becomes harder when sensitive data is distributed across tokens in flight. Often the cleaner architecture is a short-lived token that carries only a pseudonymous identifier, with the sensitive data retrieved from a server-side store at access time.

⚠️ Common Mistake: Assuming that because a JWT must be validated to be used, its contents are therefore protected. Validation protects integrity. It says nothing about confidentiality.

πŸ€” Did you know? Server logs are one of the most common places sensitive JWT payload data surfaces unexpectedly. Many logging configurations capture the full Authorization header value, which includes the raw token. If that token contains a user's email address or government ID, it is now in your log aggregation system β€” likely indexed, searchable, and retained for months.

Bearer Tokens and the Problem They Cannot Solve

Every JWT discussed so far is, by default, a bearer token β€” a token that grants access to whoever presents it. The name is deliberately old-fashioned: like a bearer bond or bearer check, possession is the only credential required. There is no binding between the token and the specific client that was originally issued the token.

This creates a straightforward and serious risk: if an attacker obtains your bearer token through any means β€” XSS, a compromised log, a man-in-the-middle on an insecure connection, an over-permissive copy-paste β€” they can replay it from their own system and impersonate you until it expires.

Bearer token replay attack:

  Legitimate client                 Attacker
        β”‚                               β”‚
        β”‚ POST /api/data                β”‚
        β”‚ Authorization: Bearer <token> β”‚
        β”‚ ─────────────────────────────►│  (token intercepted)
        β”‚                               β”‚
        β”‚                               β”‚  POST /api/data
        β”‚                               β”‚  Authorization: Bearer <token>
        β”‚                               β”‚ ─────────────────────────────► Server
        β”‚                               β”‚                                  β”‚
        β”‚                               β”‚          200 OK β—„β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
        β”‚                               β”‚
  Server cannot distinguish these two requests.
  Same token = same access.

Sender-constrained tokens solve this by cryptographically binding the token to a specific client key or certificate. The server can then verify not only that the token is valid, but that the presenter is the same entity the token was issued to.

Two prominent mechanisms for sender-constraining tokens are:

DPoP (Demonstrating Proof of Possession)

DPoP (defined in RFC 9449) works by having the client generate an asymmetric key pair and prove possession of the private key on each request. The flow works roughly as follows:

DPoP flow:

  1. Client generates key pair: (pub_key, priv_key)

  2. During token request, client sends pub_key to authorization server
     Authorization server embeds a thumbprint of pub_key in the token:
     { ..., "cnf": { "jkt": "<hash of pub_key>" } }

  3. On each API request, client:
     a. Creates a short-lived DPoP proof JWT, signed with priv_key
        (includes the request method, URL, and a nonce)
     b. Sends: Authorization: DPoP <access_token>
               DPoP: <proof_jwt>

  4. Resource server:
     a. Extracts pub_key thumbprint from access_token's 'cnf' claim
     b. Verifies proof_jwt signature with the corresponding pub_key
     c. If they match β†’ presenter has the private key β†’ binding confirmed

  Attacker who steals the access_token cannot use it:
  they do not have priv_key, so they cannot produce a valid DPoP proof.

The DPoP proof is request-specific (it includes the method and URL) and short-lived, so replaying a captured proof against a different endpoint or after a brief window also fails.
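The jkt value in the cnf claim is a JWK thumbprint as defined in RFC 7638: a base64url-encoded SHA-256 hash over the key's required JWK members, serialized as JSON with the members in lexicographic order and no whitespace. A stdlib sketch of the computation (the EC coordinates below are example values, not a real client key):

```python
import base64, hashlib, json

def jwk_thumbprint(jwk: dict) -> str:
    """RFC 7638 thumbprint: SHA-256 over the canonical JSON of required members."""
    required = {"EC": ("crv", "kty", "x", "y"), "RSA": ("e", "kty", "n")}
    members = {k: jwk[k] for k in required[jwk["kty"]]}
    canonical = json.dumps(members, separators=(",", ":"), sort_keys=True)
    digest = hashlib.sha256(canonical.encode()).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# Example public key coordinates for a P-256 key (illustrative values).
client_jwk = {"kty": "EC", "crv": "P-256",
              "x": "f83OJ3D2xF1Bg8vub9tLe1gHMzV76e8Tus9uPHvRVEU",
              "y": "x_FEzRu9m36HLN_tue659LNpXW6pCyStikYjKIWI5a0"}

# What the authorization server embeds in the access token:
print({"cnf": {"jkt": jwk_thumbprint(client_jwk)}})
```

Because the serialization is canonical, issuer and resource server compute the same thumbprint regardless of how the JWK's members were ordered when it was transmitted.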

mTLS-Bound Tokens

Mutual TLS (mTLS) token binding (RFC 8705) operates at the transport layer. During the TLS handshake, the client presents a certificate. The authorization server binds the access token to that certificate by embedding a thumbprint of it in the token. On subsequent resource requests, the resource server confirms the TLS client certificate matches the thumbprint in the token.

mTLS-bound tokens are common in high-security API environments, particularly where clients are services rather than browsers β€” the certificate management overhead is manageable in machine-to-machine contexts but less practical for end-user browser clients.

πŸ’‘ Mental Model: A plain bearer token is like a bus pass: whoever holds it can ride. A sender-constrained token is like a contactless payment card with a PIN: holding the card is necessary but not sufficient β€” you also need to demonstrate you know the PIN (or hold the private key).

⚠️ Common Mistake: Assuming that short token lifetimes solve the same problem as sender-constraining. Short lifetimes reduce the window of a replay attack, but they do not eliminate it. A 15-minute token stolen at minute 0 is still fully usable for 15 minutes. Sender-constraining means the stolen token is useless regardless of how much lifetime remains.

🎯 Key Principle: Bearer tokens shift the security burden entirely onto the confidentiality of the token in transit and at rest. Sender-constrained tokens make the token useless without also possessing a bound private key or certificate β€” a much stronger guarantee.

πŸ“‹ Quick Reference Card: Vulnerability Summary

πŸ”’ Vulnerability 🧠 Root Cause πŸ”§ Fix
πŸ”‘ Algorithm confusion Server reads alg from attacker-controlled header Hardcode expected algorithm in server config
❌ alg: none accepted Library default is permissive Explicitly configure allowed algorithm list
πŸ“¦ localStorage XSS Token readable by any JS on page Use HttpOnly cookies + handle CSRF separately
πŸ‘οΈ Sensitive data in JWS Base64URL β‰  encryption Use JWE, or keep sensitive data off the token
πŸ”“ Bearer token replay No binding between token and presenter Use DPoP or mTLS-bound tokens

The Underlying Pattern

Looking across these five vulnerabilities, a single structural root cause recurs: using attacker-influenced data to make security decisions about that same data. The algorithm confusion attack reads the algorithm from the token to decide how to verify the token. Accepting alg: none follows the same logic. Implicit trust in the payload structure without signature verification is the same mistake at a different layer.

The second recurring pattern is conflating encoding with protection. Base64URL encoding, URL encoding, HTML entity encoding β€” these are all reversible transformations for safe transport. They provide no security property. When you see data encoded, the questions to ask are: is it signed? is it encrypted? can the presenter tamper with it?

These patterns repeat because they are conceptually tempting shortcuts. Verifying a signature is work; reading a header field is easy. Encryption adds complexity; a field being encoded looks protected. The defense against both patterns is the same: be precise about what security property each mechanism actually provides, and resist the intuition that something that looks protected is protected.

πŸ’‘ Remember: The security of a JWT system is only as strong as the weakest validation step in your verification chain. A perfectly signed token with a correctly validated signature can still be exploited if it carries sensitive data in a plaintext payload, is stored where XSS can reach it, or grants access to any bearer regardless of who they are. Validation correctness and storage security and binding are separate, independent requirements β€” satisfying one does not substitute for the others.

Key Takeaways and What Comes Next

By the time a JWT arrives at a relying party, it has already traveled through a signing operation, an encoding step, a transport channel, and at least one library call on the receiving end. Each of those steps can either uphold or undermine the token's security guarantees β€” and the guarantees only hold if all of them work correctly together. That interdependence is the central insight of this lesson, and it's the lens through which every principle below should be read.

This section consolidates what you've covered, surfaces the handful of principles that carry the most practical weight, and maps the open questions to the lessons where they're addressed in depth.


The Three-Legged Foundation of JWT Security

A JWT's security is not a property of the token format itself. It emerges from three things acting in concert, and weakening any one of them collapses the others.

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚              JWT Security: Three Required Legs              β”‚
β”‚                                                             β”‚
β”‚   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚
β”‚   β”‚  Trustworthy  β”‚  β”‚   Rigorous    β”‚  β”‚   Protected   β”‚  β”‚
β”‚   β”‚  Signing Key  β”‚  β”‚  Validator    β”‚  β”‚   Transport   β”‚  β”‚
β”‚   β”‚               β”‚  β”‚               β”‚  β”‚               β”‚  β”‚
β”‚   β”‚ β€’ Key secrecy β”‚  β”‚ β€’ All claims  β”‚  β”‚ β€’ TLS in      β”‚  β”‚
β”‚   β”‚ β€’ Key size    β”‚  β”‚   checked     β”‚  β”‚   transit     β”‚  β”‚
β”‚   β”‚ β€’ Rotation    β”‚  β”‚ β€’ alg locked  β”‚  β”‚ β€’ Sender-     β”‚  β”‚
β”‚   β”‚   hygiene     β”‚  β”‚ β€’ aud/iss/exp β”‚  β”‚   constrained β”‚  β”‚
β”‚   β”‚               β”‚  β”‚   enforced    β”‚  β”‚   binding     β”‚  β”‚
β”‚   β””β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜  β””β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜  β””β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚
β”‚           β”‚                  β”‚                  β”‚           β”‚
β”‚           β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜           β”‚
β”‚                              β”‚                              β”‚
β”‚                    β–Ό Security Guarantee β–Ό                   β”‚
β”‚         Token proves origin, integrity, and binding         β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

The signing key is the root of trust. If it leaks, every token ever signed with it is compromised β€” past, present, and future. If it's too short for the chosen algorithm, it can be brute-forced. If it never rotates, a breach you don't know about gives an attacker indefinite validity.

The validator is where most real-world JWT failures actually happen. A correctly issued token is worthless if the party receiving it accepts tokens with the wrong audience, skips expiry checks, or trusts the header's alg field to decide how to verify the signature. Validation is not a formality; it's the last line of defense.

The transport layer determines who can intercept the token in the first place. A perfectly signed, perfectly validated token delivered over plain HTTP is visible to any observer on the network path. Beyond basic TLS, sender-constrained mechanisms (such as binding a token to a specific client certificate or proof-of-possession key) ensure that intercepting a token doesn't make it usable.

🎯 Key Principle: JWT security is a system property. Auditing the token format, the signing algorithm, or the validation code in isolation will miss failures that only appear at the boundaries between those three legs.


Signed vs. Encrypted: Choosing the Right Property

One of the most common conceptual errors in JWT usage is conflating integrity with confidentiality. They are distinct properties, they require different mechanisms, and choosing the wrong one for your use case produces a system that appears secure but isn't.

JWS (JSON Web Signature) gives you integrity and origin. The payload is Base64URL-encoded, not encrypted β€” any party in possession of the token can decode and read the payload without knowing the signing key. What they cannot do (without the key) is forge or alter the token without detection.

JWE (JSON Web Encryption) gives you confidentiality. The payload is encrypted so that only the intended recipient with the correct decryption key can read it. JWE alone does not prove who created the token β€” for that, you'd typically nest a JWS inside a JWE.

πŸ“‹ Quick Reference Card: JWS vs. JWE

πŸ”’ JWS (Signed) πŸ” JWE (Encrypted)
🎯 Primary property Integrity + Origin Confidentiality
πŸ‘οΈ Payload visible? Yes (to anyone) No (only to recipient)
βœ… Prevents tampering? Yes Not directly
πŸ”‘ Key used for Signing/verification Encryption/decryption
πŸ“¦ Typical use Auth tokens, ID tokens Tokens carrying PII or secrets
πŸ”— Can be combined? Yes β€” JWS nested inside JWE Yes β€” for both properties

❌ Wrong thinking: "The token is safe because it's signed β€” no one can read the user's email inside it."

βœ… Correct thinking: "The signature proves the token hasn't been tampered with, but anyone who intercepts the token can read the payload. If the payload contains sensitive data, I need JWE, or I should not include that data in the token at all."

πŸ’‘ Real-World Example: An identity provider issues ID tokens that include the user's email address and phone number. If those tokens are transmitted via a redirect URL (as they sometimes are in certain OAuth flows), they may appear in browser history, server logs, and referrer headers. A signed-only token exposes that data to any system that sees the URL. The fix is either to use JWE for the ID token or β€” more practically β€” to limit what PII goes into the token and retrieve sensitive attributes from a protected endpoint instead.



The Real Attack Surface: Implementation, Not Cryptography

The underlying cryptographic primitives used in JWTs β€” HMAC-SHA256, RSA, ECDSA β€” are not the failure point in practice. The failures appear in the layer between the cryptography and the application: the code that parses the header, selects a verification algorithm, and decides which claims to check.

Three implementation errors account for the majority of JWT-related vulnerabilities seen in real systems:

1. Trusting the alg header parameter. When a library reads the verification algorithm from the token's own header, an attacker can forge a token by replacing "alg": "RS256" with "alg": "none" and stripping the signature. Some libraries have historically accepted "none" as a valid algorithm. The correct pattern is to configure the expected algorithm explicitly on the server side and reject any token whose header specifies anything different.

2. Skipping aud validation. A token issued for one service in a system is not automatically valid for every other service. If the recipient doesn't verify that the aud claim matches its own identifier, a token legitimately issued to Service A can be replayed against Service B β€” a class of attack sometimes called audience confusion. This is especially easy to miss when all services in a system share the same signing key.

3. Accepting tokens signed with the wrong key type. The classic variant is the RS256-to-HS256 confusion: an attacker takes the server's RSA public key (which is often published), crafts a token whose header claims "alg": "HS256", and signs it with HMAC using the public key bytes as the secret. A server that lets the header drive algorithm selection then verifies the HMAC against the same key material, its own public key, and the forged signature matches.
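All three errors share one fix: the server, not the token, decides how verification happens. A stdlib sketch of a verifier with a pinned algorithm — the secret, the helper names, and the demo issuer are illustrative, not taken from any particular library:

```python
import base64
import hashlib
import hmac
import json

# Illustrative values: a real deployment loads the secret from a key store
# and the expected algorithm from server-side configuration.
SERVER_SECRET = b"demo-secret-do-not-use"
EXPECTED_ALG = "HS256"  # pinned on the server, never read from the token

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(segment: str) -> bytes:
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def mint(payload: dict) -> str:
    """Issue a demo HS256 token (stands in for a real issuer)."""
    header_b64 = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload_b64 = b64url(json.dumps(payload).encode())
    sig = hmac.new(SERVER_SECRET, f"{header_b64}.{payload_b64}".encode(),
                   hashlib.sha256).digest()
    return f"{header_b64}.{payload_b64}.{b64url(sig)}"

def verify_pinned(token: str) -> dict:
    """Verify a token while ignoring whatever algorithm its header claims."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    # One configured algorithm, everything else rejected: this single check
    # closes both "alg": "none" and cross-family key confusion.
    if header.get("alg") != EXPECTED_ALG:
        raise ValueError(f"unexpected alg: {header.get('alg')!r}")
    expected_sig = hmac.new(SERVER_SECRET, f"{header_b64}.{payload_b64}".encode(),
                            hashlib.sha256).digest()
    # Constant-time comparison avoids leaking signature bytes via timing
    if not hmac.compare_digest(expected_sig, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(payload_b64))

# A forged token claiming "alg": "none" with the signature stripped is
# rejected before any signature check is even attempted:
_, payload_b64, _ = mint({"sub": "alice"}).split(".")
none_header = b64url(json.dumps({"alg": "none"}).encode())
forged = f"{none_header}.{payload_b64}."
try:
    verify_pinned(forged)
except ValueError as exc:
    print(exc)  # unexpected alg: 'none'
```

The same shape works for asymmetric algorithms: pin "RS256" (or "ES256"), hand the verifier only the public key, and refuse to route key material based on anything the token says about itself.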

⚠️ Common Mistake β€” Mistake 1: Treating JWT library defaults as safe defaults. Many libraries prioritize flexibility over security in their default configuration, accepting a wide range of algorithms and performing minimal claim validation unless explicitly instructed otherwise. Read your library's documentation for the specific calls that lock algorithm selection and enable claim enforcement.

🧠 Mnemonic: "Lock the Algorithm, Check the Audience, Trust No Header" — three phrases for the three most exploitable gaps. If your validation code does all three, it's already ahead of most real-world implementations.



Consolidating the Validation Checklist

Validation is the point where all of the preceding concepts converge into executable behavior. The following summarizes the required checks, grouped by what they protect against. This is a practical starting point β€” individual deployments may require additional checks (such as jti uniqueness for replay prevention).

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                  JWT Validation Checklist                      β”‚
β”‚                                                                β”‚
β”‚  STRUCTURAL                                                    β”‚
β”‚  βœ“ Token is well-formed (three dot-separated segments)         β”‚
β”‚  βœ“ Header, payload, and signature all decode without error     β”‚
β”‚                                                                β”‚
β”‚  ALGORITHM                                                     β”‚
β”‚  βœ“ alg in header matches server-configured expected algorithm  β”‚
β”‚  βœ“ alg is not "none" (explicitly rejected)                     β”‚
β”‚  βœ“ Key type matches algorithm family                           β”‚
β”‚                                                                β”‚
β”‚  SIGNATURE                                                     β”‚
β”‚  βœ“ Signature verifies against known, trusted key material      β”‚
β”‚                                                                β”‚
β”‚  CLAIMS                                                        β”‚
β”‚  βœ“ exp: token has not expired (with reasonable clock skew)     β”‚
β”‚  βœ“ nbf: token is not being used before its valid period        β”‚
β”‚  βœ“ iss: issuer matches expected value                          β”‚
β”‚  βœ“ aud: audience includes this service's identifier            β”‚
β”‚  βœ“ (optional) jti: token ID not previously seen                β”‚
β”‚                                                                β”‚
β”‚  TRANSPORT                                                     β”‚
β”‚  βœ“ Delivered over TLS                                          β”‚
β”‚  βœ“ If sender-constrained: proof-of-possession verified         β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

πŸ’‘ Pro Tip: The order matters. Structural checks fail fast on malformed input before you attempt cryptographic operations. Algorithm checks happen before signature verification so you don't accidentally verify a signature with the wrong method. Claim checks happen after signature verification β€” there's no point enforcing expiry on a token whose signature you haven't yet confirmed.

⚠️ Common Mistake β€” Mistake 2: Checking expiry before verifying the signature. An expired token with an invalid signature should fail on the signature check, not pass the expiry check and then fail later. Process checks in order: structure β†’ algorithm β†’ signature β†’ claims.


Summary Table: Core Concepts at a Glance

πŸ“‹ Quick Reference Card: Lesson Concepts

🔑 Signing key hygiene
   ✅ In practice: secret keys stay secret; key size matches algorithm requirements; rotation is scheduled
   ⚠️ Without it: leaked or weak key → all tokens forgeable

🔒 JWS vs. JWE distinction
   ✅ In practice: choose based on whether you need confidentiality, not just integrity
   ⚠️ Without it: PII exposed in signed-only tokens, or unnecessary encryption complexity

📋 Claim validation
   ✅ In practice: all relevant claims checked on every request; none skipped for convenience
   ⚠️ Without it: token replay across services; expired tokens accepted

🚫 Algorithm lockdown
   ✅ In practice: server enforces the expected alg; "none" always rejected
   ⚠️ Without it: alg: none or key confusion attacks succeed

🌐 Transport security
   ✅ In practice: TLS minimum; sender-constraining where bearer token risk is too high
   ⚠️ Without it: token interception → full impersonation

🔄 Key rotation
   ✅ In practice: keys have defined lifetimes; rotation doesn't break existing valid tokens
   ⚠️ Without it: long-lived keys → larger blast radius on compromise


What This Lesson Has Given You

Before working through this material, it's easy to treat JWTs as a black box β€” paste in a library, generate a token, assume the framework handles the rest. What this lesson has equipped you to do is reason about each layer independently and identify where a given system's security claims might not match its implementation.

Specifically, you can now:

🧠 Explain why a signed token isn't a confidential token, and which mechanism (JWS vs. JWE) provides which property β€” and why you'd sometimes nest one inside the other.

πŸ“š Enumerate the validation steps that must happen on the receiving end, in the right order, and identify which are most commonly omitted in real implementations.

πŸ”§ Recognize the root cause of the three most exploited JWT vulnerabilities: trusting the alg header, skipping aud validation, and key-confusion between algorithm families.

🎯 Frame JWT security as a system property spanning the signing key, the validator, and the transport β€” not a property of the token format alone.

πŸ”’ Apply the checklist above to audit a JWT validation implementation, regardless of which language or library it uses.


Open Questions and Where They're Answered

This lesson has deliberately deferred two significant topics. They require focused treatment and belong in their own lessons.

Algorithm Selection, Key Sizes, and JWKS Distribution

This lesson has named algorithms (RS256, HS256, ES256) without specifying how to choose between them, what key sizes are appropriate for each, or how to publish and rotate public keys using a JWKS endpoint. Those questions have non-obvious answers β€” for example, the trade-offs between RSA and elliptic curve algorithms involve not just security level but performance characteristics, library support, and key size on the wire.

The JWT Structure & Algorithms lesson addresses exactly this: how each algorithm family works at a mechanical level, what key lengths correspond to what security margins, and how JWKS-based key distribution allows issuers to publish keys that validators can discover and cache automatically.

πŸ’‘ Mental Model: Think of this lesson as having taught you that algorithm selection matters and why it matters. The JWT Structure & Algorithms lesson teaches you what to select and how to operationalize it.

Revocation, Rotation Workflows, and Validation Strategy at Scale

JWTs are stateless by design — and that statelessness trade-off bites hardest when you need to invalidate a token before it expires. A user changes their password, an admin revokes a session, or a signing key is suspected to be compromised: in all of these cases, the token is cryptographically valid but should no longer be trusted. Handling this without reintroducing per-request server-side state requires deliberate design.

The Key Rotation & Validation Strategy lesson covers revocation approaches (blocklists, short-lived tokens, token families), key rotation workflows that avoid invalidating all existing sessions simultaneously, and how to structure validation logic in distributed systems where multiple services are consuming tokens from the same issuer.

πŸ’‘ Pro Tip: If your current system uses long-lived tokens (hours or days) without any revocation mechanism, the Key Rotation & Validation Strategy lesson is the highest-priority next step. Long expiry combined with no revocation is the configuration that makes key compromise or session hijacking most damaging.


Three Practical Next Steps

Before moving to the child lessons, there are three things worth doing with what you've learned here:

1. Audit your library's default validation behavior. Pick the JWT library your current or most recent project uses and read its documentation specifically for: (a) whether algorithm selection is configurable or defaults to trusting the header, (b) which claims it validates by default versus which require explicit configuration, and (c) whether it has a history of security advisories related to alg: none or key confusion. Most mature libraries have addressed these issues, but often only when configured explicitly.

2. Map your tokens to a threat model. For each JWT your system issues, answer three questions: Who is the intended audience? What would happen if the token were intercepted? What would happen if it were replayed to a different service? The answers tell you whether you need aud validation, JWE, or sender-constraining β€” and they surface assumptions you may not have known your system was making.

3. Trace a token through all three legs. For one representative flow in your system, trace a JWT from issuance through transport to validation. Identify the specific component responsible for each leg: which service holds the signing key, which code performs each validation check, and what the transport mechanism is. Gaps in this trace β€” steps you can't clearly assign to a specific component β€” are the places most likely to hide vulnerabilities.


⚠️ Final critical point to carry forward: The JOSE specifications (the family of standards that defines JWTs, JWS, and JWE) are well-designed. The cryptographic primitives are sound. The vulnerabilities that appear in practice emerge from the gap between what the specification requires and what an implementation actually enforces. When you evaluate a JWT implementation β€” your own or someone else's β€” the right question is not "does it use JWTs?" but "does it enforce everything the spec requires, in every case?" That question is harder to answer, and it's the right one to ask.


Continue to JWT Structure & Algorithms to go deep on algorithm mechanics, key sizing, and JWKS-based key distribution. When you're ready to address operational concerns around revocation and rotation, Key Rotation & Validation Strategy picks up where this lesson leaves off.