Authenticating Clients without Mutual TLS

by Parteek Saran

Transport Layer Security

TLS is an important cryptographic foundation for the secure transmission of bytes on the internet. It allows your computer to authenticate the remote peer on the other end of a connection. When implemented properly, it prevents adversaries from listening in on your internet communications, for example, with your bank’s servers. It even prevents adversaries from pretending to be your bank, or your hospital, or your email provider, etc., and making off with your API credentials and thus access to your account. TLS is web scale.

TLS allows your computer to make sure it’s talking to the correct server. But what prevents an adversary from simply connecting themselves and telling the server that they are, in fact, you? The prevailing mechanism used to identify a client to a server is a bearer token. When you give the server a username and password during login, it verifies the password is correct and, if so, issues a token to your client that your client (e.g. browser or mobile app) presents on all future requests. Passwords and tokens are strictly worse than the asymmetric cryptography that happens during a TLS handshake.

The details of why asymmetric crypto is preferable to symmetric secrets are not in scope for this essay. In short, TLS doesn’t require a secret to be shared between parties over an insecure channel. Your client can verify who the server is without the server presenting any information a third party could use to masquerade as it.

The asymmetry of the situation is weird. Clients authenticate servers with TLS but servers authenticate clients with passwords and tokens. Of course, TLS can be made to work both ways such that clients and servers mutually authenticate each other. Thankfully this practice is becoming increasingly common in the infrastructure realm and you absolutely should be using it everywhere you can. Networks are inherently untrustworthy and the proliferation of convenient TLS deployment tooling is allowing the industry to hoist itself out of a rut where applications end up trusting their VPC network. But mutual TLS is not application scale.

TLS only works at the connection level. However, applications don’t care about connections. They primarily run on top of infrastructure that handles connections independently and forwards along the contained requests. Application servers are usually designed to handle requests, with connection logic buried deep in the http framework of choice. By the time a request lands at your actual application server’s handler code, it is essentially unauthenticated, but trusted, plaintext.

There are other issues with using TLS to handle application level user authentication. The certificate authority system that TLS enshrines at the application/browser to service edge boundary is truly the web’s PKI (and not yours). Any network of trust you merge in comes with CA oligarchy baggage. Additionally TLS is pretty tightly coupled to the DNS. These may seem like the big issues at first but they’re workable. If you take a step back, the more prevailing practical issue is simply the impedance mismatch between the abstraction layers.

The important point is that while TLS does provide a way for servers to authenticate clients, and that is definitely a good thing in many scenarios, the process happens at the transport layer, not the application layer where it’s actually ultimately needed.

The solution to securely authenticating clients without mutual TLS is to sprinkle on cryptographic assertions about the integrity of each request hitting the application. Colloquially, these are called signed requests.

Signed requests are not a new trick. Security conscious applications often sign sensitive requests. When interacting with AWS, for example, the client signs requests using a keyed digest scheme. ACME, the automatic TLS certificate management protocol, naturally implements signed requests with replay prevention (at a fundamental level ACME is bootstrapping TLS and thus needs other independent security measures). More generally, RFC 7515 defines a standard for applying both HMAC and asymmetric DSA style signatures to the JOSE family of JSON-based structures. There is even a Digest mode specified for the HTTP WWW-Authenticate header which supports keyed signatures.

Despite its age, rudimentary browser support, and a recent uptick in tooling thanks to JWS, the idea of signing requests has struggled to gain adoption and is generally not part of common API design parlance. I’m going to try to demystify the concept by walking through an existing implementation, because signed requests are great. They’re more secure than bearer tokens. And everyone should consider them for new applications.

Building Signed Requests

To give you an idea of how signed requests work, I’m going to cover the design of Uno’s request authentication and authorization system. The Uno API service ensures 3 important things:

  1. Every request is independently authenticated in such a way that sensitive credentials are never transmitted on the wire
  2. Requests cannot be replayed
  3. Unfettered resource creation is discouraged

The first is covered by using a user’s existing signing keys to sign requests, allowing the server to asymmetrically verify a user’s identity. The second is achieved by including a per-request nonce in the signed data. And the third involves a hash difficulty negotiation between the client and server.

Server Challenge

Let’s look at the logistics of how the server primes the client for signature auth and how the client responds to the authentication challenge.

The very first request in a flow results in a 401. The server sees the client has not provided any Authorization header and tells the client about the authentication options available. The client cannot preempt because it does not have a nonce yet. The standard Www-Authenticate header is used to communicate authentication options:

Www-Authenticate: Tuned-Digest-Signature 
    nonce=X8F3RvU55PwO2Keiferd5P1F5UClfPZ8xsMQj2VqSkI;
    algorithm=$argon2d$v=19$m=65536,t=3,p=8;
    actions=read

Uno uses an authorization scheme we crafted for our use case called tuned-digest-signature. Notice the argon2 parameters used to specify the hash difficulty. The name comes from the fact that we ask the client to compute a tuned digest using the specified parameters and then sign it. It’s really just ordinary digest auth but with a slow signature and difficulty parameter negotiation.
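As a rough sketch, a client might split such a challenge (folded onto one line) into its fields like this; the field names follow the example header above:

```python
def parse_challenge(header: str) -> dict:
    """Split a Tuned-Digest-Signature challenge into its fields."""
    scheme, _, params = header.partition(" ")
    if scheme != "Tuned-Digest-Signature":
        raise ValueError("unsupported auth scheme")
    fields = {}
    for part in params.split(";"):
        # Split on the first '=' only; the argon2 parameter string
        # itself contains '=' characters.
        key, _, value = part.strip().partition("=")
        fields[key] = value
    return fields

challenge = ("Tuned-Digest-Signature "
             "nonce=X8F3RvU55PwO2Keiferd5P1F5UClfPZ8xsMQj2VqSkI;"
             "algorithm=$argon2d$v=19$m=65536,t=3,p=8;"
             "actions=read")
fields = parse_challenge(challenge)
```

The client now holds the server nonce, the argon2 tuning parameters, and the actions the nonce is valid for.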

The client has made its initial request and received a nonce and tuning parameters. Next it must complete the challenge by binding the nonce to the request it is trying to make and thus proving to the server that it has performed unique work for the request in flight.

Binding Requests

At a high level, signing a request simply involves collecting sensitive request parameters including http path, method, body digest, and a per-request client nonce and server nonce, concatenating them in a deterministic order, hashing them, and then computing a signature over that digest. In other words, determine the important pieces of the request, normalize them, and then sign them and attach the signature.

Requirements as to what gets signed may vary based on the application or even section of the API. In one case just the URI might be sufficient and in another the application might want to sign the entirety of the request (save the signature piece itself). It’s even possible the client and server dynamically specify which data should be signed on a per-request basis. In Uno’s case, we apply the same strategy consistently across the API surface.

If you are building an application and don’t know what to include, just use JWS.

It is important that both the client and server are in agreement on which parameters to sign and the normalization strategy to apply so that each one can compute or verify the digest independently.

The client gathers the:

  • Nonce
  • Request http method
  • Request path, and
  • Base64(blake3(request body))

It parses the argon2 parameters from the header, sets up the hash context, generates client salt, and then hashes the string:

"nonce|method|path|body_digest"
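A minimal sketch of this tuning step, using standard library stand-ins: blake2b substitutes for blake3 and scrypt for argon2 (neither blake3 nor argon2 ships with Python), and the request path and body are purely illustrative:

```python
import base64
import hashlib
import os

def b64(data: bytes) -> str:
    # Unpadded base64, matching the header examples
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def tuned_digest(nonce: str, method: str, path: str, body: bytes,
                 salt: bytes, n: int = 2**12, r: int = 8, p: int = 1) -> bytes:
    # blake2b stands in for blake3 here
    body_digest = b64(hashlib.blake2b(body, digest_size=32).digest())
    message = f"{nonce}|{method}|{path}|{body_digest}".encode()
    # scrypt stands in for argon2; n, r, p play the role of the
    # server-specified tuning parameters
    return hashlib.scrypt(message, salt=salt, n=n, r=r, p=p, dklen=32)

salt = os.urandom(16)  # client-generated salt
digest = tuned_digest("X8F3RvU55PwO2Keiferd5P1F5UClfPZ8xsMQj2VqSkI",
                      "POST", "/v2/vaults", b"{}", salt)
```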

The resulting hash binds the nonce to the request. The final step is to bind the user to the request. The client signs the resulting hash, the challenge response, with its signing key and constructs the following Authorization header:

Authorization: Tuned-Digest-Signature
  identity="51cBN9gxEge6aTv4yvF0IgSsV6ETCa+puinqlpRj4pg";
  nonce="ij4SWiKZAkdL0SftSavftcuKJJUX9ZOutn4zg56cPDo";
  response="Zm9vZGJhYmU$/fwnKozofi8OfqZEt0+3z3n10GZG3pekDvE0WvW66NE";
  signature="N+xFiSOAJWIx5JGwRrNvlWVXD+3vzv0NZASETEdfDm61nY...(64)"

The identity field is the base64 Ed25519 public key of the user. The response is the argon salt and the base64 hash of the previously constructed request string. The signature is the result of signing the response using the private key corresponding to the public key specified in the identity field.

The response is close to being redundant with the signature, since a signature is just a fancy hash and could be computed directly over the request string instead. But it is important for the client to incorporate its own entropy into the signature to prevent chosen plaintext attacks on the client. And it is nice to keep the mechanism for binding a nonce to a request separate from binding a user to a request.
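Assembling the header might look like the following sketch. HMAC-SHA256 stands in for the Ed25519 signature (which is not in the Python standard library); a real client would sign with the private key matching the public key carried in the identity field, and the field values here are placeholders:

```python
import base64
import hashlib
import hmac

def b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def authorization_header(identity: bytes, signing_key: bytes,
                         nonce: str, salt: bytes, digest: bytes) -> str:
    # response carries the client salt and the tuned digest,
    # '$'-separated as in the example header
    response = f"{b64(salt)}${b64(digest)}"
    # HMAC stands in for an Ed25519 signature over the response
    signature = b64(hmac.new(signing_key, response.encode(),
                             hashlib.sha256).digest())
    return ("Tuned-Digest-Signature "
            f'identity="{b64(identity)}";'
            f'nonce="{nonce}";'
            f'response="{response}";'
            f'signature="{signature}"')

header = authorization_header(b"\x01" * 32, b"\x02" * 32,
                              "ij4SWiKZAkdL0SftSavftcuKJJUX9ZOutn4zg56cPDo",
                              b"\x03" * 16, b"\x04" * 32)
```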

The client attaches the Authorization header to the request and retries.

Server Response

When the server receives the updated request, it sees the Authorization header and looks up whether it previously issued the nonce, rejecting the request if it did not. If it did issue the nonce, it will also find the associated argon parameters and valid actions for the nonce, since the client can’t be trusted to specify those. It checks that the action the client is trying to perform is contained in the valid list of actions for the nonce. It then proceeds to duplicate the client’s challenge response by constructing the same normalized request string the client used and performing the same argon hash using the parameters from its nonce database and the salt from the client. If the response cannot be duplicated byte for byte, the request fails. If it does match, the final step is to verify the signature is valid for the challenge response hash and subsequently fail or pass the request.
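The server-side checks can be sketched as follows, with the same stand-ins as before (scrypt for argon2, HMAC for Ed25519 verification) and a hypothetical in-memory nonce database; the nonce value is the one from the Authorization example:

```python
import hashlib
import hmac

# Issued nonces with their tuning parameters and permitted actions,
# recorded when the server handed out the challenge
NONCE_DB = {
    "ij4SWiKZAkdL0SftSavftcuKJJUX9ZOutn4zg56cPDo":
        {"n": 2**12, "r": 8, "p": 1, "actions": {"read"}},
}

def verify(nonce, action, method, path, body_digest, salt,
           response, signature, client_key):
    entry = NONCE_DB.get(nonce)
    if entry is None:                   # never issued, or already burned
        return False
    if action not in entry["actions"]:  # the client can't choose its actions
        return False
    # Reconstruct the normalized request string and repeat the tuned hash
    # using the stored parameters and the client's salt
    message = f"{nonce}|{method}|{path}|{body_digest}".encode()
    expected = hashlib.scrypt(message, salt=salt, n=entry["n"],
                              r=entry["r"], p=entry["p"], dklen=32)
    if not hmac.compare_digest(expected, response):
        return False
    # Finally, check the signature over the response (Ed25519 in practice)
    expected_sig = hmac.new(client_key, response, hashlib.sha256).digest()
    return hmac.compare_digest(expected_sig, signature)
```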

In this scenario the nonce would only be a good nonce if it expired after one use. Otherwise the client could do the hard work once for each type of request and replay it in the future. Hashing the body bytes kinda helps, but we need real handling for this.

Naively the server would just store all the nonces it generates and check if one has ever been used before when a request comes in. That would result in remembering a bunch of nonces only to make sure they weren’t used again in the future. Instead we invert the idea: we only remember active nonces. Anything else is disallowed. This works because our nonces are statistically incredibly unlikely to collide since they’re 32 bytes of entropy.
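A minimal sketch of such an active-nonce store, with the one-use and one-day expiry behavior described here (names and structure are illustrative, not Uno’s actual implementation):

```python
import base64
import os
import time

class NonceStore:
    """Remember only active nonces; any nonce not present is rejected."""

    def __init__(self):
        self.active = {}  # nonce -> time issued, for expiry sweeps

    def issue(self) -> str:
        # 32 bytes of entropy makes collisions statistically negligible
        nonce = base64.urlsafe_b64encode(os.urandom(32)).rstrip(b"=").decode()
        self.active[nonce] = time.time()
        return nonce

    def burn(self, nonce: str) -> bool:
        # A nonce is consumed on first sight, valid challenge or not
        return self.active.pop(nonce, None) is not None

    def sweep(self, max_age: float = 86400.0) -> None:
        # Drop stale nonces after a day
        cutoff = time.time() - max_age
        self.active = {n: t for n, t in self.active.items() if t >= cutoff}
```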

When any request comes in, the nonce used is burned regardless of whether the challenge was successful or not, and a new nonce is issued. After a day, stale nonces are removed. On all requests, the server attaches a next-nonce which allows the client to chain requests and avoid experiencing 401s on every single request. The Authentication-Info header looks like:

Authentication-Info: 
    nextnonce=O4AaqraoK28Ad0S8hwZZDTYX72mFWoWUkLK9sPspFLE
    argon=v=19$m=65536,t=3,p=8
    scopes=read,update

We attach one auth-info header for the create scope and one for all other scopes.
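A client chaining requests needs to pull the next-nonce and its scopes out of this header; a minimal parser for the example above (assuming whitespace-separated fields) might look like:

```python
def parse_auth_info(header: str) -> dict:
    """Parse whitespace-separated Authentication-Info fields."""
    fields = {}
    for part in header.split():
        # Split on the first '=' only; the argon parameter string
        # contains further '=' characters
        key, _, value = part.partition("=")
        fields[key] = value
    return fields

info = parse_auth_info(
    "nextnonce=O4AaqraoK28Ad0S8hwZZDTYX72mFWoWUkLK9sPspFLE "
    "argon=v=19$m=65536,t=3,p=8 "
    "scopes=read,update"
)
# The client keeps info["nextnonce"] for its next request in the chain
```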

Scopes

Until this point, I’ve described a pretty standard signed requests strategy. In this final section I’m going to cover the mechanism we use to deter at-scale API abuse like unfettered account creation. Simply, the idea is to ask clients to do a lot of work for requests that consume new resources on our end. Right now we’ve generalized this to any restful create action.

Scopes are how the server models how difficult the argon tuning parameters need to be. When a request comes in, the server determines whether the resource the request pertains to exists. For example, a resource is created when a new user installs the app for the first time and claims their vault. In this case the server ramps up the tuning parameters to the point where the client spends seconds computing the hash. For everything else, the server sees that the resource already exists and specifies very easy parameters so that day to day use of the service remains snappy.
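The selection logic reduces to a small policy function. The tiers below are hypothetical; real parameters would be tuned per deployment so that creation costs seconds and everything else stays cheap:

```python
# Hypothetical tuning tiers (argon2 parameter strings as sent in the
# Www-Authenticate header); not Uno's actual production values
EASY_PARAMS = "$argon2d$v=19$m=4096,t=1,p=1"   # snappy day-to-day use
HARD_PARAMS = "$argon2d$v=19$m=65536,t=3,p=8"  # seconds of client work

def tuning_for(action: str, resource_exists: bool) -> str:
    # Creating a brand-new resource gets the expensive parameters;
    # requests against existing resources stay cheap
    if action == "create" and not resource_exists:
        return HARD_PARAMS
    return EASY_PARAMS
```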

Even in the existing resource case, the proof of work mechanism is an effective way of throttling requests. Though not yet implemented, a dynamic request throttling mechanism is conceptually easy to incorporate into the flow, since there are no hardcoded difficulty parameters or even algorithms.

You may be getting antsy about the fact that we don’t use an asymmetric “crypto” style proof of work scheme, since it’s not hard to imagine the server’s duplication of effort ultimately becoming an easy denial of service vector for an adversary. Rest assured, we are currently working on adding support for an asymmetric proof of work hashing scheme so we avoid the issue altogether.

Wrapping Up

Mutual TLS is an effective and robust tool for authenticating client connections. The same even goes for Wireguard and building cryptographically secure networks. However, at the application layer, the analog is signed requests. Next time you’re designing an API surface, leave mTLS as an infrastructure concern and consider picking up a standard request signature implementation like JWS. Your server framework most likely already supports it and it’s a nice way to up the security of your API (and thus your clients) and take a step beyond old school bearer token authentication.
