
Oh, the Trouble With OAuth

This article has been sitting on my desk for the past week or so, and recent activities around the Twitter/Facebook/LiveJournal/Blogger DDoS attacks have made it even more applicable, so it’s good I waited. The problem centers around the “Open” authorization protocol, OAuth, and how I believe it is keeping companies like Twitter, who want to be “Open”, from becoming, as they call it, “the pulse of the Internet”. The problem with OAuth is that, while it is indeed an “Open” protocol, it is neither federated nor decentralized. We need a decentralized authorization protocol that doesn’t rely on just the likes of Twitter or Flickr or Google or Yahoo.

Let’s start by covering a little about what OAuth is. OAuth centers, as the name implies, on Authorization. This is not to be confused with identity, which other decentralized solutions like OpenID focus on. The idea behind OAuth is that any website, or “Service Provider”, will accept a certain set of HTTP requests, handle them, and send responses back to the developer, or “Consumer”, in exactly the same way as any other OAuth implementation would. OAuth tries to solve the problems of phishing and plain-text storage of usernames and passwords by sending the user from the Consumer’s website to the Service Provider’s website to authenticate (through the provider’s own means, or means such as OpenID) and then authorize. On Twitter this is done via an “Allow” or “Deny” button the user clicks to let an application make API calls on their behalf. Once the user has authorized the app, the Service Provider sends them back to the Consumer’s website, and the Consumer is given a set of tokens it can use to make API calls on behalf of that user.
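To make that dance a bit more concrete, here’s roughly what it looks like from the Consumer’s side. This is just a minimal sketch using the requests_oauthlib Python library; the consumer key, secret, and callback URL are placeholders, and I’m using Twitter’s classic OAuth 1.0a endpoints purely for illustration.

```python
# Minimal sketch of the three-legged OAuth 1.0a flow from the Consumer's side,
# using the requests_oauthlib library. Keys, secrets, and the callback URL are
# placeholders; the endpoints shown are Twitter's classic OAuth 1.0a ones.
from requests_oauthlib import OAuth1Session

CONSUMER_KEY = "your-consumer-key"        # issued by the Service Provider
CONSUMER_SECRET = "your-consumer-secret"

# Step 1: get a temporary request token and send the user off to authorize.
oauth = OAuth1Session(
    CONSUMER_KEY,
    client_secret=CONSUMER_SECRET,
    callback_uri="https://yourapp.example.com/oauth/callback",  # placeholder
)
request_token = oauth.fetch_request_token("https://api.twitter.com/oauth/request_token")
print("Redirect the user to:", oauth.authorization_url("https://api.twitter.com/oauth/authorize"))

# Step 2: the Service Provider shows its Allow/Deny page, then redirects the
# user back to the callback URL with an oauth_verifier. Exchange it for the
# long-lived access tokens used to call the API on the user's behalf.
def finish_authorization(oauth_verifier: str) -> dict:
    session = OAuth1Session(
        CONSUMER_KEY,
        client_secret=CONSUMER_SECRET,
        resource_owner_key=request_token["oauth_token"],
        resource_owner_secret=request_token["oauth_token_secret"],
        verifier=oauth_verifier,
    )
    return session.fetch_access_token("https://api.twitter.com/oauth/access_token")
```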

OAuth’s strength is that it is easily deployable by any site that wants a standard, secure, and well-understood authorization architecture. Any developer can build a Consumer that talks to an OAuth-enabled API, because libraries have been written for most developers’ preferred programming languages, and adapting to a new site that implements OAuth is usually just a matter of swapping a few endpoint URLs, keys, and callback URLs. I’m afraid that’s where OAuth’s strengths end, though.
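To show just how little changes between providers, here’s a sketch of the same flow driven by a small config table. Everything in it is a placeholder, and the second Service Provider (and all of its URLs) is entirely hypothetical.

```python
# Sketch of what "adapting to a new OAuth site" amounts to in practice: the
# flow code stays the same, only endpoints, keys, and the callback change.
# The second provider's URLs below are hypothetical.
from requests_oauthlib import OAuth1Session

PROVIDERS = {
    "twitter": {
        "request_token_url": "https://api.twitter.com/oauth/request_token",
        "authorize_url": "https://api.twitter.com/oauth/authorize",
        "access_token_url": "https://api.twitter.com/oauth/access_token",
        "consumer_key": "twitter-consumer-key",
        "consumer_secret": "twitter-consumer-secret",
    },
    "someothersite": {  # hypothetical Service Provider
        "request_token_url": "https://someothersite.example.com/oauth/request_token",
        "authorize_url": "https://someothersite.example.com/oauth/authorize",
        "access_token_url": "https://someothersite.example.com/oauth/access_token",
        "consumer_key": "other-consumer-key",
        "consumer_secret": "other-consumer-secret",
    },
}

def start_flow(provider_name: str, callback_uri: str) -> str:
    """Return the authorization URL to send the user to, for any provider."""
    p = PROVIDERS[provider_name]
    session = OAuth1Session(
        p["consumer_key"],
        client_secret=p["consumer_secret"],
        callback_uri=callback_uri,
    )
    session.fetch_request_token(p["request_token_url"])
    return session.authorization_url(p["authorize_url"])
```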

Let me just put this out there: The User Experience behind OAuth is horrible! From a user’s perspective, having to go to an entirely new website, log in, land on another authorization page, and then come back to the originating website is quite a process, especially for an e-commerce or web company whose business depends on keeping that user moving toward a sale. No e-commerce company in their right mind would put their users through that process; half the users who tried it would abandon the sale. Not to mention the fact that (and I don’t know if this has anything to do with the actual OAuth protocol itself) with most OAuth implementations there is no way to customize the process the user goes through. For example, on Twitter I can’t show my users a message specific to my app when they authorize it. I can’t customize the page to match my look and feel. I completely lose control the moment the user leaves my site to authenticate and authorize.

Let’s add to that the problem of the iPhone, desktop apps, and other mobile apps. Sure, you can redirect the user from within the app to a website to authorize, but again, you’re pulling them out of the app’s flow to do it. It’s a pain and a headache for users to log in that way! Not to mention they have to do it EVERY. SINGLE. TIME. they log in through your app, since there’s a good chance they weren’t logged into Twitter or Flickr or whatever other OAuth-enabled site in the first place. It’s a huge problem for OAuth developers on these devices, and it’s less than ideal for their users.

Now, back to my original point. The biggest problem with OAuth is that it requires a centralized architecture to properly authorize each application. We see why this is a problem when entire apps like my own SocialToo.com can’t authenticate users because Twitter is being bombarded by a DDoS attack. The need for centralized control of each app on the platform is understandable: in the end, the companies implementing OAuth still need a way to “turn off” an application if it gets out of hand. Of course, one solution from the developer’s (Consumer’s) perspective is to implement their own authentication and authorization scheme rather than relying on Twitter’s. That is less than ideal, though, since most of our users already belong to some other network that handles this process for us. Why force our users to repeat the “account creation” process just to work around centralization?

I think there is a better solution, though. What if a distributed group of “controlling sources” handled this instead, giving each company admin control over its own authorization? What I propose is that a new layer be added to OAuth (or a new protocol created, either way), enabling trusted “entities” to sync, on a peer-to-peer (federated) basis, pools of authorized users and their distinct permissions between each Consumer app and Service Provider. Companies/Service Providers could then register with these “controlling sources” and would have admin access to turn Consumer apps on or off in the event of abuse within an app.
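Nothing like this exists today, so treat the following as a purely hypothetical sketch of the kind of records a “controlling source” might sync between its peers; every type and field name here is made up for illustration.

```python
# Purely hypothetical sketch of the records a "controlling source" might hold
# and replicate, peer-to-peer, between Consumers and Service Providers. This
# is not an existing protocol; all names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ConsumerApp:
    consumer_id: str
    name: str
    enabled: bool = True   # the Service Provider's admin "off switch" lives here

@dataclass
class AuthorizationGrant:
    provider: str          # e.g. "twitter.com" -- the Service Provider's domain
    consumer_id: str       # the Consumer app holding this grant
    user_id: str           # the user who authorized the app on the provider
    permissions: list[str] = field(default_factory=list)  # e.g. ["read", "post"]
    revoked: bool = False  # flipped by the provider to cut off the app

# A controlling source would hold pools of these records and sync them with its
# peers, so any of them could answer: "is this app still allowed to act for
# this user, and with which permissions?"
```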

So let’s say you’re Twitter and you want to let developers authorize against your API. You register with one of these “controlling sources”; they confirm you’re legit (this could possibly be done via technology in some form, perhaps OpenID and FOAF) and let you create your own “domain” on the controlling source. Twitter would now have its own key on the controlling source to give developers, and the controlling source would divvy out tokens to developers wanting to access Twitter’s API. Twitter’s API could verify with the controlling source on each call that the call is legit. To kill an application, Twitter would just log into the controlling source and deny it; the application would be denied at the controlling source before its calls ever hit Twitter’s API.
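Again, this is all hypothetical, but here’s a rough sketch of what that per-call check and the admin “kill switch” might look like from the Service Provider’s side. The controlling-source URL and its /verify and /deny endpoints are invented for the example.

```python
# Hypothetical sketch of the per-call check described above: the Service
# Provider asks its controlling source whether a Consumer's token is still
# valid before serving the API request. The URL, domain key, and the /verify
# and /deny endpoints are all invented for illustration.
import requests

CONTROLLING_SOURCE = "https://controlling-source.example.com"   # placeholder
PROVIDER_DOMAIN_KEY = "twitter-domain-key"  # issued when the provider registered its "domain"

def call_is_legit(consumer_token: str, user_id: str) -> bool:
    """Ask the controlling source whether this Consumer may act for this user."""
    resp = requests.get(
        f"{CONTROLLING_SOURCE}/verify",
        params={
            "domain_key": PROVIDER_DOMAIN_KEY,
            "consumer_token": consumer_token,
            "user_id": user_id,
        },
        timeout=2,
    )
    return resp.ok and resp.json().get("allowed", False)

def kill_application(consumer_id: str) -> None:
    """Admin kill switch: deny the app at the controlling source, so its calls
    are rejected there before they ever reach the provider's API."""
    requests.post(
        f"{CONTROLLING_SOURCE}/deny",
        data={"domain_key": PROVIDER_DOMAIN_KEY, "consumer_id": consumer_id},
        timeout=2,
    )
```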

What makes this open is that, if it were itself written as an open protocol, anyone could theoretically run one of these “controlling sources”. So long as they spoke the same protocol, they would all work exactly the same, no matter who operated them. Developers could then pick and choose which “controlling source” they wanted to authorize through, and if one went down, they could switch to another. Of course, there are security issues and questions about the authenticity of a “controlling source” that would need to be worked out, but you get the idea. This would essentially decentralize the entire authorization process. Authorization itself would quickly become a federated process.
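And here’s a rough sketch of that failover idea from the developer’s side: because every controlling source would speak the same (still hypothetical) protocol, a Consumer could simply try the next one when its first choice goes down. The URLs and the /authorize endpoint are placeholders.

```python
# Sketch of failover across interchangeable "controlling sources": since each
# one (hypothetically) speaks the same protocol, a Consumer can try them in
# order until one answers. URLs and the /authorize endpoint are placeholders.
import requests

CONTROLLING_SOURCES = [
    "https://cs-one.example.com",
    "https://cs-two.example.com",
    "https://cs-three.example.com",
]

def authorize_via_any_source(consumer_id: str, provider: str, user_id: str) -> dict:
    """Try each controlling source in turn until one responds."""
    for base_url in CONTROLLING_SOURCES:
        try:
            resp = requests.post(
                f"{base_url}/authorize",   # invented endpoint, identical on every source
                data={"consumer_id": consumer_id, "provider": provider, "user_id": user_id},
                timeout=2,
            )
            if resp.ok:
                return resp.json()
        except requests.RequestException:
            continue  # that source is down; fall through to the next one
    raise RuntimeError("No controlling source is reachable")
```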

Now, that still doesn’t solve the User Experience issues I mentioned earlier. To solve those, I think we should look at Facebook and what they’re doing with Facebook Connect. With Facebook Connect, the user never leaves the Consumer’s website to authenticate and authorize. They click a button, a popup comes up, they log in, and a JavaScript callback notifies the app that the user has been authenticated and authorized. It’s essentially a simple, three-step process that leaves the website owner completely in control. In addition, Facebook provides JavaScript methods that let the developer confirm the user’s various authentication states without the user ever having to leave the website. I’d like to see OAuth emulate this model more. Right now, I’d rather implement Facebook Connect than OAuth for these reasons.

I think, as both Dave Winer and Rob Diana point out, the recent DDoS attacks against Twitter and other sites raise some serious issues. Twitter’s inability to handle the attacks, compared with the other sites, shows, I think, that we need much more Federation from the site, as well as from the “Open” protocols it is trying to build around. Twitter wants to become a utility. There is no way that will ever happen until they Federate, and I think that has to start with a change to the OAuth protocol.