12 Nov, 2018

OAuth 2.0 Threat Landscape

  • Prabath Siriwardena
  • Senior Director - Security Architecture - WSO2

Not long ago — on a day in May, I got a mail from a good friend of mine, which I opened without hesitation. He was sharing a Google doc — and once I clicked on the link to open it, a suspicious-looking screen appeared, asking for permission to read, delete, send, and manage my emails. Why on earth was ‘Google Docs’ asking for permission to access my emails? Well, yes — the screen said it was ‘Google Docs’. I didn’t bother to check whether it was the real Google Docs or a fake, but I didn’t proceed any further. I have developed the habit of being quite suspicious whenever an application asks for permission to access my accounts. This happens quite a lot with Facebook. Any application asking for permission to post to my Facebook wall never gets past the consent screen. Not everyone is as suspicious, though — on 3rd May, 2017, many people didn’t think twice before giving the fake ‘Google Docs’ app permission to access their emails.

Google Docs Phishing Attack

The fake Google Docs OAuth 2.0 app was used by the attacker as a medium to launch a massive phishing attack targeting Google users. The first targets were media companies and PR agencies, which have a large number of contacts — and the attacker used email addresses from their contact lists to spread the attack. It went viral for about an hour — before the app was removed by Google.

Is this a flaw in the OAuth 2.0 protocol that was exploited by the attacker or a flaw in how Google implemented it? Is there something we could have done better to prevent such attacks?

Let’s take a look at OAuth 2.0 first. The following diagram depicts the most common usage of OAuth 2.0. Almost all the applications you see on the web today use this authorization code grant flow in OAuth 2.0. The attacker exploited step 2 in the diagram by tricking the user with an application name (Google Docs) known to them. In addition, the attacker used an email template close to what Google uses when sharing docs, to make the user click on the link. Anyone who carefully looked at the email, or even the consent screen, could have realized that phishing was happening — but unfortunately very few care.
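To make the flow concrete, here is a minimal sketch (in Python) of the authorization request that kicks off the code grant flow (step 1 in the diagram). All endpoint URLs, the client ID, and the scopes below are hypothetical placeholders, not Google's actual values:

    from urllib.parse import urlencode

    # Hypothetical endpoint and client registration values, for illustration only.
    authorize_endpoint = "https://accounts.example.com/oauth2/authorize"
    params = {
        "response_type": "code",                 # ask for an authorization code
        "client_id": "my-client-id",             # issued at app registration
        "redirect_uri": "https://client.example.org/callback",
        "scope": "email contacts.read",          # what the consent screen shows
        "state": "af0ifjsldkj",                  # CSRF protection, discussed later
    }

    # The client redirects the user's browser to this URL; the consent screen
    # the user sees (step 2) is rendered by the authorization server.
    print(authorize_endpoint + "?" + urlencode(params))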

It’s neither a flaw of OAuth 2.0 nor of how Google implemented it. Phishing is a prominent threat in cyber security. Does that mean there is no way to prevent such attacks, other than proper user education? There are basic things Google could do to prevent such attacks in the future. Looking at the consent screen, ‘Google Docs’ is the key phrase used there to win the user’s trust. When creating an OAuth 2.0 app in Google, you can pick any name you want, which makes it easy for an attacker to mislead users. Google could easily filter out known names and prevent app developers from choosing them to trick users.

Another key issue is that Google shows only the application name, not the domain name of the application, on the consent page. Having the domain name prominently displayed on the consent page would give the user some hint of where he is heading. The image of the application on the consent page also misleads the user: the attacker intentionally used the Google Drive image there. If all these OAuth applications had to go through an approval process before being launched to the public, such mishaps could be prevented. Facebook already follows such a process: when you create a Facebook app, only the owner of the application can log in at first, and the app has to go through an approval process before it is launched to the public.

G Suite is widely used in the enterprise. Google could give domain admins more control to allowlist which applications domain users can access with corporate credentials. This protects users who might otherwise unknowingly share access to important company docs with third-party apps.

The phishing attack on Google is a good wake-up call to evaluate how phishing-resistance techniques can be employed in different OAuth flows. One event I attended in the Bay Area featured an engineer from the Google Chrome security team, who explained how much effort they put into designing the Chrome warning page for invalid certificates. They do tons of research even to pick the color, the alignment of the text, and the images to be displayed. Surely, Google will bring more bright ideas to the table to fight phishing.

Identity Provider Mixup

OAuth 2.0 is the most commonly used identity standard on the web today for access delegation — and of course for login. Even though OAuth 2.0 is about access delegation, people still work around it to make it serve login purposes; that’s how login with Facebook works. Then again, OpenID Connect, which is built on top of OAuth 2.0, is the right way of using OAuth 2.0 for authentication. Recent research done by one of the leading vendors in the IAM domain confirmed that many new development projects over the past two years at the enterprise level picked OAuth 2.0/OpenID Connect over SAML 2.0. That’s the trend I’ve witnessed too, working with many WSO2 customers. All in all, OAuth 2.0 security is a hot topic. In 2016, Daniel Fett, Ralf Küsters, and Guido Schmitz published a research paper on OAuth 2.0 security titled "A Comprehensive Formal Security Analysis of OAuth 2.0." Identity Provider (IdP) mixup is one of the attacks highlighted in this paper.

Let’s try to understand how IdP mixup works.

  1. The OAuth 2.0 client application provides multiple IdP options for login. Let’s say foo.idp and evil.idp. We assume that the client application knows nothing about evil.idp being malicious. It can also be the case that evil.idp is a genuine identity provider, which could possibly be under attack.
  2. The victim picks foo.idp in the browser, but the attacker intercepts the request and changes the selection to evil.idp. Here we assume the communication between the browser and the client application is not protected with TLS. The OAuth 2.0 specification does not talk about it, and it’s purely up to the web application developers. Since no confidential data is passed in this flow, most of the time the web application developer may not worry about using TLS. At the same time, a few vulnerabilities have been discovered in the past in TLS implementations (mostly OpenSSL). An attacker could possibly use such vulnerabilities to intercept the communication between the browser and the client application (web server), even if TLS is used.
  3. The client application thinks it’s evil.idp and redirects the user to evil.idp, because all it sees is the modified request from the attacker who intercepted the communication.
  4. The attacker intercepts the redirection and modifies it to go to foo.idp. This is how redirection works: the web server (in this case the client application) sends back a response to the browser with a 302 status code and a Location HTTP header. If the communication between the browser and the client application is not on TLS, then this response is not protected, even if the Location header contains an HTTPS URL. Since we have already assumed that the communication between the browser and the client application can be intercepted by the attacker, the attacker can modify the Location header in the response to point to foo.idp — which is the original selection — so the user notices nothing.
  5. The client application gets either the code or the token (based on the grant type) and now talks to evil.idp to validate it. The authorization server sends the authorization code (if the code grant type is used) to the callback URL on the client application. Just by looking at the authorization code, the client application cannot decide which identity provider the code belongs to. We can assume it tracks the identity provider in some session variable — so, as per step 3, it thinks it’s evil.idp and talks to evil.idp to validate the token.
  6. evil.idp gets hold of the user’s access token or authorization code issued by foo.idp. If it’s the implicit grant type, it gets the access token; otherwise the authorization code. In mobile apps, people tend to embed the same client ID and client secret into all instances of an app, so an attacker with root access to his own phone can figure out what the keys are and then, with the authorization code, get an access token.

There is no record of the above attack being carried out in practice, but at the same time we cannot totally rule it out. There are a couple of options available to prevent such attacks:

  1. Have a separate callback URL per identity provider. With this, the client application knows which identity provider the response belongs to, since a legitimate identity provider will always respect the callback URL associated with the client application. The client application should also attach the value of the callback URL to the browser session and, once the user is redirected back, check whether it arrived at the right place by matching against the callback URL stored in the browser session (see the sketch after this list).
  2. Follow the mitigation steps defined in the IETF draft specification OAuth 2.0 IdP Mix-up Mitigation. This specification proposes sending a set of mitigation data from the authorization server back to the client, along with the authorization response. The mitigation data includes an issuer identifier, which identifies the authorization server, and a client ID, which is used to verify that the response is from the correct authorization server and is intended for the given client. This way, the OAuth 2.0 client can verify which authorization server the response came from and, based on that, identify the token endpoint or the endpoint to validate the token.
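Here is a minimal sketch of option 1, assuming a dictionary-backed session store and hypothetical IdP names. The point is that the authorization response is honored only if it arrives on the callback URL recorded for this login attempt:

    # One callback URL per identity provider; CALLBACKS and session are
    # hypothetical placeholders standing in for real configuration and storage.
    CALLBACKS = {
        "foo.idp": "https://client.example.org/callback/foo",
        "evil.idp": "https://client.example.org/callback/evil",
    }

    def start_login(session, idp):
        # Remember which callback (and hence which IdP) this login expects.
        session["expected_callback"] = CALLBACKS[idp]
        session["idp"] = idp

    def handle_callback(session, request_url):
        # The code is only honored if it arrives on the callback attached to
        # this session; a mixed-up IdP response lands on the wrong URL.
        if not request_url.startswith(session.get("expected_callback", "")):
            raise PermissionError("callback/IdP mismatch - possible mix-up")
        return session["idp"]  # safe to exchange the code with this IdP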

I gave a talk on this subject at the Cloud Identity Summit (now renamed Identiverse) in Chicago this year — where Nat Sakimura, the chair of the OpenID Foundation, was in the audience. He confirmed that approach 1 is the recommended one, and that the draft specification mentioned in approach 2 is now expired or obsolete.

Cross Site Request Forgery (CSRF)

Generally, a Cross Site Request Forgery (CSRF) attack forces a victim who is logged into a vulnerable web application to send a forged HTTP request, including the victim’s session cookies and any other automatically included authentication information. Such an attack allows the attacker to force the victim’s browser to generate requests that the vulnerable application thinks are legitimate requests from the victim. The Open Web Application Security Project (OWASP) identifies this as one of the key security risks in web applications in its 2017 report.

Let’s see how CSRF can be used with OAuth 2.0 to exploit a vulnerable web application.

  1. The attacker tries to log in to the target website (OAuth 2.0 client) with his account at the corresponding identity provider. We assume the attacker has a valid account at the identity provider trusted by the corresponding OAuth 2.0 client application.
  2. The attacker blocks the redirection to the target website and captures the authorization code. The target website never sees the code. In OAuth 2.0, the authorization code is good for one-time use only: if the OAuth 2.0 client application sees it and exchanges it for an access token, it is no longer valid. The attacker therefore has to make sure the authorization code never reaches the client application. Since the authorization code flows through the attacker’s own browser to the client, it can be blocked easily.
  3. The attacker constructs the callback URL for the target site and makes the victim click on it. In fact, it is the same callback URL the attacker copied in step 2. The attacker can send the link to the victim’s email or somehow trick him into clicking on it.
  4. The victim clicks on the link, logs into the target website with the account attached to the attacker, and adds his/her credit card information. Since the authorization code belongs to the attacker, the victim logs into the target website with the attacker’s account. This is a pattern many websites follow to authenticate users with OAuth 2.0; login with Facebook works the same way. Once the website gets the authorization code, it talks to the authorization server and exchanges the code for an access token. Then, using that access token, the website talks to a special endpoint on the authorization server to find user information. In this case, since the code belongs to the attacker, the user information returned from the authorization server relates to him, so the victim ends up logged into the target website with the attacker’s account.
  5. The attacker too logs into the target website with his/her valid credentials and uses the victim’s credit card to purchase goods.

The above attack can be mitigated by following these best practices:

  1. Use short-lived authorization codes. Making the authorization code expire quickly gives the attacker only a very short window to launch an attack. For example, the authorization code issued by LinkedIn expires in 30 seconds. Ideally, the lifetime of the authorization code should be measured in seconds.
  2. Use the state parameter as defined in the OAuth 2.0 specification. This is one of the key parameters for mitigating CSRF attacks in general. The client application has to generate a random value and pass it to the authorization server along with the grant request. Furthermore, the client application has to add the generated state value to the current user session (browser session) before redirecting the user to the authorization server. According to the OAuth 2.0 specification, the authorization server has to return the same state value with the authorization code to the redirect_uri (the client application). The client must validate the state value returned from the authorization server against the value stored in the user’s current session; on a mismatch, it must not proceed. Going back to the attack: the crafted link the attacker sends to the victim won’t carry the state value attached to the victim’s session (most probably the victim’s session has no state value at all), and the attacker has no way of generating the exact same state value. Therefore the attack won’t succeed, and the client application will reject the request (see the sketch after this list).
  3. Use PKCE (Proof Key for Code Exchange). PKCE was introduced to protect OAuth 2.0 client applications from the authorization code interception attack, mostly targeting native mobile apps. PKCE also protects users from CSRF attacks, once the code_verifier is attached to the user’s browser session. I will discuss PKCE in more detail towards the latter part of this article.
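As a rough illustration of option 2, the following sketch generates a state value, binds it to the user's session (a plain dictionary here, standing in for a real session store), and validates it on the callback. The endpoint and client values are hypothetical:

    import secrets

    def build_authorization_request(session):
        state = secrets.token_urlsafe(32)     # cryptographically strong random
        session["oauth_state"] = state        # bind it to the user's session
        return ("https://idp.example.com/authorize"
                "?response_type=code&client_id=my-client-id"
                "&redirect_uri=https://client.example.org/callback"
                f"&state={state}")

    def handle_callback(session, returned_state, code):
        expected = session.pop("oauth_state", None)
        # Compare in constant time; reject if missing or mismatched.
        if expected is None or not secrets.compare_digest(expected, returned_state):
            raise PermissionError("state mismatch - possible CSRF")
        return code  # safe to exchange for an access token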

Token Reuse

OAuth 2.0 tokens are issued by the authorization server to a client application, to access a resource on behalf of the resource owner. The token is to be used by that client, and the resource server will make sure it’s a valid one. But what if the resource server is under the control of an attacker who wants to reuse the token sent to it to access another resource, impersonating the original client? The basic assumption here is that there are multiple resource servers that trust the same authorization server. For example, in a microservices deployment there can be multiple microservices protected with OAuth 2.0 that trust the same authorization server.

How do we make sure, at the resource server side, that a given token is good enough to access it? One approach is to have properly scoped access tokens. The scopes are defined by the resource server and registered with the authorization server. If we qualify each scope with a URN specific to the corresponding resource server, there cannot be any overlapping scopes across resource servers, and each resource server knows how to uniquely identify a scope corresponding to it. Before accepting a token, it should check whether the token was issued with a scope known to it.
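For example, a resource server could enforce this with a check like the following sketch, where the URN prefix is a hypothetical value owned by that server:

    # Accept a token only if at least one of its scopes was minted for this
    # resource server; MY_SCOPE_PREFIX is a hypothetical URN for illustration.
    MY_SCOPE_PREFIX = "urn:example:orders:"

    def token_good_for_me(token_scopes):
        # A token scoped only for another resource server is rejected.
        return any(s.startswith(MY_SCOPE_PREFIX) for s in token_scopes)

    print(token_good_for_me(["urn:example:orders:read"]))    # True
    print(token_good_for_me(["urn:example:billing:read"]))   # False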

This does not completely solve the problem. If the client decides to get a single access token (with all the scopes) to access all the resources, a malicious resource server can still use that access token to access another resource by impersonating the original client. To overcome this, the client can first get an access token with all the scopes and then exchange it for multiple access tokens with different scopes, following the OAuth 2.0 Token Exchange specification. A given resource server will only see an access token carrying scopes related to that particular resource server.

Let’s look at another example of token reuse. Assume that you log in to an OAuth 2.0 client application with Facebook. Now the client has an access token, which is good enough to access the user info endpoint (https://graph.facebook.com/me) of Facebook and find out who the user is. This client application is under the control of an attacker, and now the attacker tries to access another client application, which uses the implicit grant type, with the same access token, as shown below.

https://target-app/callback?access_token=

The above lets the attacker log in to the client application as the original user. How do we overcome this? There are multiple options available:

  1. Avoid using OAuth 2.0 for authentication; instead, use OpenID Connect. The ID token issued by the authorization server (via OpenID Connect) has an element called aud (audience), and its value is the client ID of the client application. Each application should make sure that the value of aud is known to it before accepting the user. If the attacker tries to replay the ID token, it will not work, since the audience validation will fail at the second client application.
  2. Facebook login does not use OpenID Connect, and the above attack can be carried out against a Facebook application that does not have a proper implementation. There are a few options introduced by Facebook to overcome the above threat. One way is to use the undocumented API (https://graph.facebook.com/app?access_token=) to get access token metadata. This returns, in a JSON message, the details of the application the corresponding access token was issued to. If it’s not yours, reject the request.
  3. Use the standard token introspection endpoint of the authorization server to find the token metadata. The response will include the client_id of the OAuth 2.0 application, and if it does not belong to you, reject the login request (see the sketch below).
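The following sketch illustrates option 3 against a standard token introspection endpoint (RFC 7662). The endpoint URL and all credentials are hypothetical placeholders:

    import requests

    MY_CLIENT_ID = "my-client-id"  # the client ID of *this* application

    def token_issued_to_me(access_token):
        # Introspection requests must themselves be authenticated; the
        # credentials below stand in for whatever the authorization server issues.
        resp = requests.post(
            "https://idp.example.com/oauth2/introspect",
            data={"token": access_token},
            auth=("resource-client-id", "resource-client-secret"),
        )
        meta = resp.json()
        # Reject tokens that are inactive or were issued to some other app.
        return meta.get("active", False) and meta.get("client_id") == MY_CLIENT_ID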

There is another flavor of token reuse; rather, I would call it token misuse. When the implicit grant type is used with a single-page application (SPA), the access token is visible to the end user, as it’s on the browser. It’s the legitimate user, so the user seeing the access token is no big deal. The issue is that a user could take the access token out of the browser (or the app) and automate or script some API calls, generating more load on the server than is expected in a normal scenario. There’s also the cost of making API calls. Most client applications are given a throttle limit, meaning a given application can only do n number of calls during a minute or some fixed time period. If one user invokes APIs with a script, that could eat up the complete throttle limit of the application, making an undesirable impact on the other users of the same application. To overcome such scenarios, the recommended approach is to introduce throttle limits per user, per application, not just per application. That way, if a user wants to eat up his own throttle limit, he is free to do so!

Token Leakage/Export

More than 90% of OAuth 2.0 deployments are based on bearer tokens — not just at the public/internet scale, but also at the enterprise level. Using a bearer token is just like using cash. When you buy a cup of coffee from Starbucks and pay in cash, no one wonders how you got that ten-dollar note, or whether you are its real owner. OAuth 2.0 bearer tokens are similar: if someone takes the token off the wire (just like stealing a ten-dollar note from your pocket), he/she can use it just like the original owner would, no questions asked.

Whenever you use OAuth 2.0, using TLS is not just recommended but a must. Even when TLS is used, a man-in-the-middle attack can still be carried out using various techniques. Most of the time, vulnerabilities in TLS implementations are used to intercept TLS-protected communication channels. The Logjam attack, discovered in May 2015, allowed a man-in-the-middle attacker to downgrade vulnerable TLS connections to 512-bit export-grade cryptography, letting the attacker read and modify any data passed over the connection.

Also, not everyone is fully confident in using OAuth 2.0 bearer tokens by simply trusting the underlying TLS communication. I’ve met several people, mostly from the financial domain, who are reluctant to use OAuth 2.0 just because of bearer tokens.

There are a few precautions we should take to keep attackers from gaining access to tokens:

  1. Always be on TLS (use TLS 1.2 or later).
  2. Address all TLS-level vulnerabilities at the client, the authorization server, and the resource server.
  3. The token value should be at least 128 bits long and constructed from a cryptographically strong random or pseudo-random number sequence.
  4. Never store tokens in cleartext; store a salted hash instead (see the sketch after this list).
  5. Never write access/refresh tokens into logs.
  6. Prefer TLS tunneling over TLS bridging.
  7. Decide the lifetime of each token based on the risk associated with token leakage, the duration of the underlying access grant (SAML grant or JWT grant), and the time required for an attacker to guess or produce a valid token.
  8. Allow the authorization code to be used just once.
  9. Use one-time access tokens. Under the OAuth 2.0 implicit grant type, the access token comes as a URI fragment, which remains in the browser history. In such cases, it can be immediately invalidated by exchanging it for a new access token from the client application (which is an SPA).
  10. Avoid using OAuth 2.0 implicit grant type.
  11. Use strong client credentials. Most applications use only the client ID and client secret to authenticate the client application to the authorization server. Rather than passing credentials over the wire, the client can use a SAML or JWT assertion to authenticate.
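As one way to realize items 3 and 4, the sketch below builds the token from a cryptographically strong random source and persists only a salted hash, splitting the token into a lookup ID and a secret part so the hash can be found again at validation time. The db dictionary is a stand-in for real storage:

    import hashlib
    import secrets

    def new_token(db):
        token_id = secrets.token_urlsafe(8)    # public lookup key
        secret = secrets.token_urlsafe(32)     # 256 bits of randomness
        salt = secrets.token_bytes(16)
        digest = hashlib.sha256(salt + secret.encode()).digest()
        db[token_id] = (salt, digest)          # never persist the raw secret
        return f"{token_id}.{secret}"          # the raw value goes to the client

    def verify_token(db, presented):
        token_id, _, secret = presented.partition(".")
        salt, digest = db.get(token_id, (b"", b""))
        # Recompute the salted hash and compare in constant time.
        return secrets.compare_digest(
            hashlib.sha256(salt + secret.encode()).digest(), digest)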

In addition to the above measures, we can also cryptographically bind OAuth 2.0 access/refresh tokens and authorization codes to a given TLS channel, so that they cannot be exported and used elsewhere. There are a few specifications under discussion in the IETF Token Binding working group to address this aspect.

The Token Binding Protocol allows client/server applications to create long-lived, uniquely identifiable TLS bindings spanning multiple TLS sessions and connections. Applications are then enabled to cryptographically bind security tokens to the TLS layer, preventing token export and replay attacks. To protect privacy, the Token Binding Identifiers are only conveyed over TLS and can be reset by the user at any time.

A Token Binding is established by a user agent (client) generating a private-public key pair (possibly within a secure hardware module, such as a TPM) per target server (for example, the authorization server), providing the public key to the server, and proving possession of the corresponding private key on every TLS connection to the server. For example, if your browser supports TLS token binding, it will generate a key pair and keep it in memory. The proof of possession involves signing the exported keying material (EKM) from the TLS connection with the private key. The corresponding public key is included in the Token Binding identifier structure. Token Bindings are long-lived, i.e., they encompass multiple TLS connections and TLS sessions between a given client and server. To protect privacy, Token Binding IDs are never conveyed over insecure connections and can be reset by the user at any time.

When issuing a security token to a client that supports Token Binding, a server includes the client’s Token Binding ID (or its cryptographic hash) in the token. Later on, when a client presents a security token containing a Token Binding ID, the server ensures the ID in the token matches the ID of the Token Binding established with the client. In the case of a mismatch, the server rejects the token (details are application-specific). In order to successfully export and replay a bound security token, an attacker needs to also be able to use the client’s private key, which is hard to do if the key is specially protected, for example generated in a secure hardware module.

In the course of the TLS handshake, the client and server can use the Token Binding Negotiation TLS Extension to agree on the Token Binding protocol version and the parameters (signature algorithm, key length) of the Token Binding key; this negotiation does not require additional round trips. The client uses the “token_binding” TLS extension to indicate the highest supported Token Binding protocol version and key parameters, and the server uses the same extension to indicate support for the Token Binding protocol and to select the protocol version and key parameters.

The Token Binding over HTTP specification describes a collection of mechanisms that allow HTTP servers to cryptographically bind security tokens (such as cookies and OAuth tokens) to TLS connections. The document describes both first-party and federated scenarios. In a first-party scenario, an HTTP server is able to cryptographically bind the security tokens it issues to a client, and which the client subsequently returns to the server, to the TLS connection between the client and server. Such bound security tokens are protected from misuse, since the server can generally detect if they are replayed inappropriately, for example over other TLS connections. Federated token bindings, on the other hand, allow servers to cryptographically bind security tokens to a TLS connection that the client has with a different server than the one issuing the token.

While the Token Binding Protocol defines a message format for establishing a Token Binding ID, it does not specify how this message is embedded in higher-level protocols. The purpose of this specification is to define how Token Binding messages are embedded in HTTP. The Token Binding messages are only defined if the underlying transport uses TLS. This means that Token Binding over HTTP is only defined when the HTTP protocol is layered on top of TLS (that is HTTPS).

The OAuth 2.0 Token Binding specification defines how to apply Token Binding to access tokens, authorization codes, and refresh tokens. This cryptographically binds these tokens to a client’s Token Binding key pair, possession of which is proven on the TLS connections over which the tokens are intended to be used. This use of Token Binding protects these tokens from man-in-the-middle and token export and replay attacks.

Open Redirector

An open redirector is an endpoint hosted on the resource server (or the OAuth 2.0 client application) end, which accepts a URL as a query parameter in a request and then redirects the user to that URL. An attacker can modify the redirect_uri in the authorization grant request going from the resource server to the authorization server to include an open redirector URL pointing to an endpoint he owns. To do this, the attacker has to intercept the communication channel between the victim’s browser and the authorization server, or between the victim’s browser and the resource server.

Once the request hits the authorization server, and after authentication, the user is redirected to the provided redirect_uri, which also carries the open redirector query parameter pointing to the attacker’s endpoint. To detect any modifications to the redirect_uri, the authorization server can check it against a preregistered URL. But then again, some authorization server implementations only check the domain part of the URL and skip an exact one-to-one match, so any changes to the query parameters go unnoticed.

Once the user is redirected to the open redirector endpoint, it will again redirect the user to the value (URL) given in the open redirector query parameter, which takes him/her to the attacker’s endpoint. In this request to the attacker’s endpoint, the Referer HTTP header could carry confidential data, including the authorization code (which is sent to the client application by the authorization server as a query parameter).

How to prevent an open redirector attack:

  1. Enforce strict validation at the authorization server against the redirect_uri. It can be an exact one-to-one match or a regex match (see the sketch after this list).
  2. Validate the redirect URL at the open redirector and make sure you only redirect to domains you own.
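A minimal sketch of the first mitigation, using an exact one-to-one match: a domain-only check would let an extra open redirector query parameter slip through, while an exact string comparison does not. The registered URL and client ID are hypothetical values:

    # Preregistered redirect URIs, keyed by client ID (illustration only).
    REGISTERED = {"my-client-id": "https://client.example.org/callback"}

    def redirect_uri_ok(client_id, redirect_uri):
        # Exact match: any added query parameter makes validation fail.
        return REGISTERED.get(client_id) == redirect_uri

    print(redirect_uri_ok("my-client-id",
          "https://client.example.org/callback"))                        # True
    print(redirect_uri_ok("my-client-id",
          "https://client.example.org/callback?next=https://evil.com"))  # False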

Code Interception Attack

A code interception attack could possibly happen in a native mobile app. OAuth 2.0 authorization requests from native apps should only be made through external user agents, primarily the user’s browser. The OAuth 2.0 for Native Apps specification explains the security and usability reasons why this is the case, and how native apps and authorization servers can implement this best practice.

The way you do single sign-on in a mobile environment is by spinning up the system browser from your app and then initiating the OAuth 2.0 flow from there. Once the authorization code is returned to the redirect_uri (from the authorization server) on the browser, there should be a way to pass it over to the native app. This is taken care of by the mobile OS: each app registers for a URL scheme, and when a request comes to that particular URL, the mobile OS passes control to the corresponding native app. The danger here is that multiple apps can register for the same URL scheme, so there is a chance a malicious native app could get hold of the authorization code. Then again, since many mobile apps embed the same client ID and client secret into all instances of that particular app, the attacker can find out what they are. Knowing the client ID and client secret, and having access to the authorization code, the malicious app can now get an access token on behalf of the end user.

PKCE (Proof Key for Code Exchange) was introduced to mitigate such attacks. Let’s see how it works:

  1. The OAuth 2.0 client app generates a random value (the code_verifier) and computes the SHA-256 hash of it, which is called the code_challenge.
  2. Sends the code_challenge along with the hashing method in the authorization grant request to the authorization server.
  3. The authorization server records the code_challenge (against the issued authorization code) and replies with the code.
  4. The client sends the code_verifier along with the authorization code to the token endpoint.
  5. The authorization server finds the hash of the provided code_verifier and matches it against the stored code_challenge. If it does not match, it rejects the request.

With this approach, a malicious app that has access only to the authorization code cannot exchange it for an access token without knowing the value of the code_verifier.
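Here is a minimal sketch of the client-side computation in PKCE (RFC 7636), using the S256 method: the code_challenge is the base64url-encoded SHA-256 hash of the code_verifier, with padding stripped:

    import base64
    import hashlib
    import secrets

    code_verifier = secrets.token_urlsafe(64)   # kept secret by the client
    code_challenge = base64.urlsafe_b64encode(
        hashlib.sha256(code_verifier.encode("ascii")).digest()
    ).rstrip(b"=").decode("ascii")              # sent in the authorization request

    # The authorization request carries code_challenge and
    # code_challenge_method=S256; the later token request carries the raw
    # code_verifier, which the server hashes and compares against the stored value.
    print(code_challenge)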

Summary

OAuth 2.0 is an excellent standard for access delegation that caters to real production use cases. There is a huge ecosystem building around it, with a massive adoption rate. Whenever we use OAuth 2.0, it’s best to follow and adhere to security best practices — and always use proven libraries and products that already take care of enforcing them.

 
