OSS – Stay N Alive

Developers, It’s Time to Open Up Twitter’s API

If you’ve read my previous post on this, you’ll notice that I’ve re-worded the title of this article.  That’s because it would be delusional to think Twitter is going to open source their API any time soon – I’ve been requesting this for over a year now.  I’ve come to a new understanding: if we’re to see an open standard built around Twitter’s API, it’s going to be we, the developers, who implement it.

It won’t be Twitter.

I mentioned earlier that developers are starting to catch on to this idea.  It all started almost two years ago, when Laconi.ca introduced a Twitter-compatible API into their platform, allowing any client library built for the Twitter platform to add Laconi.ca instances to a preferred Twitter client with minimal effort.  Unfortunately, it took two years for that idea to catch on.  Finally, we’re seeing some big players in this space, and my predictions are coming true.  Automattic just released their own Twitter-like API into WordPress.com.  Tumblr just released their own Twitter-like API.  The problem is that all these developers re-invent the wheel every time they reproduce Twitter’s API, and any time Twitter releases a new feature they are stuck re-configuring and re-coding their server code to keep up with Twitter’s new API features.  That’s fine though – this is all a step in the right direction.

The Vision

Imagine now if there were a standard that, at first, duplicated what Twitter produces on their end, but that other developers could code against.  Open source software could be built around this standard, and any provider would be able to install code that integrates well with their own environment.  We’d see many more providers than just WordPress, Tumblr, and Laconi.ca instances like TodaysMama Connect (of which I am an advisor) integrate this.  We’d see big brands and big companies start to implement it.

Soon Twitter will be in the minority amongst the services these “Twitter clients” (like TweetDeck, Tweetie, or Seesmic) support.  The Twitter clients will no longer feel obligated to cater to just Twitter, and new layers, such as real-time and meta APIs, could be added to this API standard in a way that benefits the community, not a single company.  Twitter would no longer have a choke-hold on this, and we would have a new, distributed architecture that any developer can implement.

The Proposal

What I’m proposing is that we work together to build an open source set of libraries and services, perhaps a gateway of some sort, all built on a standard that we set (it will most likely copy Twitter’s API at the start).  I’ve created a Google Group, called “OpenTwitter” for now, with the purpose of opening up Twitter’s APIs.  The group’s primary focus will be determining how we want to build this software, establishing documentation for it, and defining a completely open standard on top of it all that we can modify as it makes sense.  The goal here is that the public, not a single company, will control how this data gets distributed.

But What About RSS?

The question always comes up: why not just push these clients to support RSS and rssCloud or PubSubHubbub?  The answer is that we’ve been trying this for too long.  It may still happen, but it’s going to take Twitter clients a lot longer to modify their code to support RSS than an open, Twitter-compatible standard.  Ideally, a Twitter client, of which there are many, ought to be able to quickly and easily change the base domain that calls are sent to, and everything with the providing service should “just work”.  RSS complicates this too much.  The fact is that Twitter has taken over, and we need to accept that and work with it.
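To make the “just change the base domain” point concrete, here is a minimal sketch of what a Twitter-compatible client looks like in practice.  The endpoint paths follow the conventions Twitter documented for its REST API; the class and provider URLs here are illustrative, not a real library.

```python
# Hypothetical sketch: a "Twitter-compatible" client where the provider
# is nothing more than a base URL. Swap the domain and every call
# targets the new service, because the paths and parameters are shared.


class MicroblogClient:
    """Builds Twitter-API-style request URLs against any compatible host."""

    def __init__(self, base_url):
        self.base_url = base_url.rstrip("/")

    def home_timeline_url(self, count=20):
        # Same path Twitter documents; a compatible provider serves it too.
        return f"{self.base_url}/statuses/home_timeline.json?count={count}"

    def update_url(self):
        return f"{self.base_url}/statuses/update.json"


# Switching providers is just switching the base domain:
twitter = MicroblogClient("https://api.twitter.com/1")
identica = MicroblogClient("https://identi.ca/api")

print(twitter.home_timeline_url())
print(identica.home_timeline_url())
```

The client code, authentication aside, never has to know which service it is talking to – which is exactly why an agreed-upon standard would be so cheap for client authors to adopt.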

The Invitation

If you can, I’d like to invite you to join our “OpenTwitter” list on Google.  Let’s get some conversations going and get this thing off the ground.  My hope is that we can get people like Dave Winer, Matt Mullenweg, Chris Messina, David Recordon, Joseph Smarr, DeWitt Clinton, and Mike Taylor joining this effort.  My goal is that we can even get Twitter involved – this isn’t meant to snub Twitter by any means.  The entire goal here is to build a much more open, distributed web in as simple a manner as possible.

You can join the “OpenTwitter” list here.  I’ll be posting a kickoff message there very soon.

Twitter, It’s Time to Open Source Your API

With the recent launch of a “Twitter API” by both Automattic (WordPress.com) and Tumblr, it is evident that developers see a need to implement similar APIs on similar platforms, reducing the effort to retrieve data from multiple platforms in a single client.  With Tweetie, for instance, you can simply change a single URL to “WordPress.com” or “Tumblr.com” or “Identi.ca” and immediately receive updates from your friends on those services, and even post back to them.  I argue this approach is actually very closed, though: for each and every implementation of a “Twitter API” (which ironically has nothing to do with Twitter), the developers need to completely re-invent the wheel, copying what Twitter has done based on the documentation of Twitter’s own API.  ReadWriteWeb even went to the extent of calling this approach “open”.  There’s nothing open about it.  Each developer implementing their own “Twitter API” (and especially calling it such) is blatantly ripping off Twitter’s API under no license whatsoever, and Twitter’s just standing back and watching.  I think it’s time Twitter released their API under an open source license to relieve this mess and protect their IP.
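The reason one client can serve all of these services is that the response payloads mimic Twitter’s status JSON, so a single parser handles them all.  A rough sketch, with a fabricated sample payload standing in for any provider’s response:

```python
import json

# Because WordPress.com's and Tumblr's "Twitter API" endpoints copy the
# shape of Twitter's status objects, one parser works against any of
# them. The sample payload below is made up for illustration.

SAMPLE_TIMELINE = json.dumps([
    {"id": 1, "text": "Hello from any compatible service",
     "user": {"screen_name": "jesse"}},
    {"id": 2, "text": "Same fields, different host",
     "user": {"screen_name": "example"}},
])


def parse_timeline(raw_json):
    """Extract (screen_name, text) pairs from a Twitter-shaped timeline."""
    return [(s["user"]["screen_name"], s["text"])
            for s in json.loads(raw_json)]


for name, text in parse_timeline(SAMPLE_TIMELINE):
    print(f"@{name}: {text}")
```

Every service that copies the format gets this parser for free – which is precisely the duplication an open, shared implementation would eliminate.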

Open-sourcing APIs is nothing new.  Google did it with OpenSocial, even standardizing an API for “containers” to easily implement the same API across multiple sites.  All the code was provided for developers to do this, and we quickly saw sites such as MySpace, Hi5, Orkut, and others implement the same standard, reducing the code needed to port an app from platform to platform.

Facebook did the same with their platform.  A little-known fact is that any developer can go to http://developers.facebook.com/opensource.php and download the Facebook Open Platform, along with many other very useful open source tools.  They immediately have access to FBML, FBJS, and other aspects of the Facebook API on their own sites, standardizing the Facebook platform amongst the sites that implement it.  Bebo was one of those who took Facebook up on this offer.  Others can too.

What we need now is a standardized platform for sharing micro-content.  Some have proposed RSS for this, which is fine with me, but since developers already have apps built on Twitter’s API, it makes sense to also enable a standardized platform they can code against for these types of apps.  Such an open-sourced code base would let developers reach similar sites beyond just Twitter without changing their code.  Twitter right now is a closed platform, plain and simple.  With the exception of OAuth, they are based on a proprietary API, do not support open content protocols, and even their real-time stream is proprietary.

A good step for Twitter would be to open source this API.  Enable sites such as WordPress, Tumblr, Status.net, and others to easily integrate it into their own platforms without the need to re-invent the wheel.  Put it under an open license, and your IP remains protected.  Until that point, developers are going to continue ripping off Twitter’s API, and Twitter’s IP will slowly go down the drain.  I’d love to see Twitter take the lead in this process – it took Facebook just about 6 months to open source their API.  Why haven’t we seen this yet from Twitter?

Or are they the next Compuserve?

The Future Has No Log In Button

Graphic Courtesy Chris Messina - http://factoryjoe.com/blog/2009/04/06/does-openid-need-to-be-hard/

Since last week’s Kynetx Impact Conference I have gained an entirely new vision for the open web.  I now foresee a web which the user completely controls, which lives in the browser, syncs with the cloud, and has no boundaries.  This new web makes the entire Social and Real-time paradigms seem minuscule in significance.  What I see is an internet where, regardless of what website you visit, you will never have to enter your login credentials again.  I see the end of the log in button.

It all centers around identity.  The idea comes with a technology called Information Cards, and a term called the “Selector”.  With these technologies, websites will rely on the client to automatically provide the experience you want, without the need for you to log in ever again.  It relies on OpenID and doesn’t really need OAuth (since all the authorization ought to happen on the client), but the best part is that you, the user, never have to know what those technologies are.  It “just works”.

OpenID

Let’s start with what you might already be familiar with.  You’ve probably heard about OpenID before.  If not, you might notice its logo on some sites you visit: a little vertical orange line with a small gray arrow circling from it.  Google just announced today that their profiles are now OpenIDs.  The basic concept is that you can designate, via a URL you control, a “provider” for your identity.  When you log in via OpenID, all you have to enter is your preferred website that specifies this provider.  The website you’re logging in to then redirects you to that provider, you provide your password, and it takes you back to the authenticating site.  It’s a simple authentication mechanism that enables sites to know who you are, just via a simple URL.  StayNAlive.com is an identifying URL for me, and points to my provider, myopenid.com.
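The “points to my provider” step is mechanically simple: in OpenID 1.x, the identity page’s HTML declares the provider with a `<link rel="openid.server">` tag that the authenticating site discovers.  A rough sketch of that discovery step, scanning a static HTML string rather than fetching a live page:

```python
import re

# OpenID 1.x discovery, roughly: the identity URL's HTML names the
# provider in a <link rel="openid.server" href="..."> tag. A real
# relying party fetches the identity URL; this sketch just scans a
# string. (OpenID 2.0 added richer discovery via XRDS/Yadis.)

LINK = re.compile(r'<link[^>]+rel="openid\.server"[^>]+href="([^"]+)"', re.I)


def discover_provider(html):
    """Return the provider endpoint declared by an identity page, if any."""
    m = LINK.search(html)
    return m.group(1) if m else None


page = '''<html><head>
  <link rel="openid.server" href="https://www.myopenid.com/server">
</head></html>'''

print(discover_provider(page))
```

So a personal domain like StayNAlive.com needs only one line of HTML to delegate sign-in to a provider such as myopenid.com.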

In addition, utilizing technologies such as FOAF (Friend of a Friend), the Google Social Graph API, and others, you can do cool things with identity.  Since your website links to your provider ID, I know both your website and that provider belong to the same person.  You can link sites together, and now you know which profiles around the web are truly you – it becomes much harder to spoof identity this way, especially as more and more sites adopt this methodology.  The problem with OpenID is that it’s still a little confusing (even for me), and not everyone is familiar with entering a URL into a log in box to identify themselves.
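The anti-spoofing property comes from reciprocity: two pages are treated as the same person only when each links to the other, typically with an XFN `rel="me"` link.  A toy sketch of that check, using static HTML strings in place of the live crawling a service like the Google Social Graph API performs:

```python
import re

# Sketch of identity consolidation via XFN: two pages claim the same
# person only if each carries a rel="me" link to the other, so neither
# side can unilaterally spoof the connection. The regex and sample
# pages are simplified illustrations, not a production parser.

REL_ME = re.compile(r'<a[^>]+rel="me"[^>]+href="([^"]+)"', re.I)


def rel_me_links(html):
    """All URLs a page claims as 'me'."""
    return set(REL_ME.findall(html))


def same_person(url_a, html_a, url_b, html_b):
    """True only when the rel="me" claims are reciprocal."""
    return url_b in rel_me_links(html_a) and url_a in rel_me_links(html_b)


blog = '<a rel="me" href="http://twitter.com/jesse">my twitter</a>'
profile = '<a rel="me" href="http://staynalive.com">my blog</a>'
print(same_person("http://staynalive.com", blog,
                  "http://twitter.com/jesse", profile))
```

A phisher can put `rel="me"` on their own page all day long; without the link back from the real profile, the claim never consolidates.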

Information Cards


Enter Information Cards.  This is a new space for me, but a fascinating one.  An information card is a local identity, stored in your browser or on your operating system, which you can “plug in” to any website, and it tells that website about you.  Theoretically, they could even sync off of a local server somewhere, but Information Cards (so I understand) are controlled on the client.

The cool thing about Information Cards is that you can store lots of different types of information on them (again, if I understand correctly).  At a very minimum, information cards allow you to store an identity about an individual.  In an ideal environment, you would be able to download an information card program like Azigo, visit a site like Yahoo.com, select your Yahoo information card, and just by clicking the information card it would immediately log you into Yahoo.  The cool thing is that, ideally, this completely avoids the phishing problem, because Yahoo is the only one that can read your information card for Yahoo.com.

Here’s the kicker though – you can store more than just the log in for an individual in an information card.  Imagine storing privacy preferences.  What if I don’t want Yahoo to have access to my birth date, for instance?  Or what if I wanted to go even further and completely customize my experience?  What if I wanted Microsoft to provide updates for me right on top of Yahoo.com?  What if I wanted to get a completely customized experience based on the websites I really like around the web?  This is where the next part comes in.
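The idea of per-site privacy preferences can be sketched as a card that holds a full set of claims but releases only what the user has approved for each site.  This is a toy model for illustration – real Information Cards are signed claim sets mediated by a selector, not a Python dictionary:

```python
# Toy model of selective disclosure with an information card: the card
# knows everything, but each relying site only receives the claims the
# user explicitly approved for it. Class and site names are illustrative.


class InformationCard:
    def __init__(self, claims):
        self.claims = claims          # everything the card knows
        self.policies = {}            # site -> set of allowed claim names

    def allow(self, site, *claim_names):
        self.policies.setdefault(site, set()).update(claim_names)

    def release(self, site):
        """Return only the claims the user approved for this site."""
        allowed = self.policies.get(site, set())
        return {k: v for k, v in self.claims.items() if k in allowed}


card = InformationCard({"name": "Jesse", "email": "me@example.com",
                        "birth_date": "1977-01-01"})
card.allow("yahoo.com", "name", "email")   # deliberately withhold birth_date

print(card.release("yahoo.com"))
```

The withholding happens on the client, before anything leaves the machine – the site never gets a chance to see the birth date, rather than merely promising not to use it.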

The End of the Cookie and Birth of “the Selector”

Imagine a web where you, the viewer or user or consumer, are able to browse and get a completely customized experience that you control.  What if you are a Ford user and want to see comparable Ford cars on Chevy’s website? (I talked about this earlier.)  Or here’s one I’ve even seen in production: I’m a big Twitter user.  What if I want to learn what others are saying on Twitter about the websites I visit, without ever having to leave those websites?  Or say I’m a AAA member and want to know which of the hotels I’m searching for are AAA-supported?  What if I don’t like the way a website renders content and I want to customize it myself?  All this is possible with the Selector.

Azigo Action Cards in Action

In the past you were usually at the mercy of these websites unless they provided some way for you to create your own context.  This is because these sites are all reliant on “cookies” – pieces of information stored in the browser that are only readable by the websites that generated them.  With a cookie there is no identity; there is only IP.  With a cookie the website controls the experience – each website is in its own silo.  The user is at the mercy of each silo.

Kim Cameron and Craig Burton have been big proponents of a new identity technology intended to replace the cookie.  It’s called “the Selector”.  The idea of the Selector is that you, the user, use Information Cards in a manner that allows you to fully control the experience you have as you peruse the web.  The idea uses an extension to information cards, called “action cards”, which enables users and consumers to specify their own preferences about who shows them data, and when, around the web.  The cool thing is that businesses have a part in this as well, which users can opt into.

For instance, Ford could provide an action card (or “Selector”) using technologies like Kynetx to display comparisons of Ford products right next to Chevy’s right on the Chevy.com website.  Chevy.com can do nothing about it (other than provide their own selector) – it is 100% user-controlled, and the user’s choice to enable such.  Or, let’s say I’m a big Mac user and I want to see what Dell products are compatible with my Macbook – I could simply go to Dell.com and find out because hopefully Apple has created a Selector for Dell.com.  Not only that, but these sites, Dell.com, Apple.com, Ford.com, Chevy.com can all track my interest based on preferences I set and customize the experience even further so I am truly gaining a “purpose-based” experience around the web.
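The dispatch logic behind this is simple to sketch: the user enables cards, and each card fires only on the domains it is scoped to.  Kynetx’s real engine uses its own rule language (KRL); this Python stand-in, with hypothetical class names, only illustrates the user-controlled, client-side matching idea:

```python
# Hedged sketch of "action cards": user-enabled rules that fire on
# matching domains and contribute content to the page being viewed.
# The engine, card scoping, and providers here are illustrative.


class ActionCard:
    def __init__(self, provider, domains, augment):
        self.provider = provider
        self.domains = set(domains)   # where this card is allowed to act
        self.augment = augment        # content generator for the page

    def applies_to(self, domain):
        return domain in self.domains


class Selector:
    """Holds only the cards the user has chosen to enable."""

    def __init__(self):
        self.cards = []

    def enable(self, card):
        self.cards.append(card)

    def augmentations(self, domain):
        """Content to overlay on this domain, per the user's cards."""
        return [c.augment(domain) for c in self.cards if c.applies_to(domain)]


selector = Selector()
selector.enable(ActionCard("Ford", {"chevy.com"},
                           lambda d: f"Ford comparison panel on {d}"))

print(selector.augmentations("chevy.com"))
print(selector.augmentations("ford.com"))  # empty: card not scoped here
```

Note who holds the list: the selector lives with the user, so Chevy.com never gets a vote on whether the Ford card runs there.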

All of a sudden I’m now visiting “the web” instead of individual sites on the internet, and the entire web becomes the experience instead of a few websites.  The possibilities are endless, and now imagine what happens when you add a social graph full of truly contextual identities on top of all this.  Now I can feed my friends into this contextual experience, building an experience also based on the things they like, added onto the things I like.  There are some really cool possibilities when the web itself is a platform and not individual websites.

Ubiquity

The future of the web is ubiquity – the state or capacity of being everywhere, especially at the same time.  Users will be ubiquitous.  Businesses will be ubiquitous.  There are no boundaries in the web of the future.  I’ve talked about the building block web frequently, but that just touches the surface.  In the future these building blocks will be built and controlled by the users themselves.  Businesses will provide the blocks, and the users will stack them on top of each other to create their own web experience.

Businesses will have more sales because consumers will be getting what they want, and consumers overall will be more productive.  This new approach to the web will be a win-win for both sides, and we’re just getting started.

Where We Are At

Here’s the crazy thing that blew me away last week – we’re so close to this type of web!  We see Google building an operating system entirely out of a browser.  We have Information Card and action card/selector platforms such as Azigo, which enable users to seamlessly integrate these experiences into the browser.  We have developer platforms like Kynetx which enable the creation of such an experience.

Imagine if Google were to integrate information and action cards right into ChromeOS.  What if Kim Cameron were to get Microsoft to integrate this into IE and Windows? (hint – they will)  What if Apple integrated Information Cards into the Keychain so you actually had context with your log on credentials?  All this is coming.

Where We Still Need to Go

We’re not there yet, but we’re so close!  I want to see more focus on this stuff and less on the Social web and real-time technologies.  For those technologies to fully succeed we need to stop, take a deep breath, step back, and get identity right.  We’re not quite there yet.

I want to see technologies such as Mozilla Weave integrate Information Cards into the browser (rather than reinvent the wheel, which is what they appear to be doing).  We need more brands and companies writing contextual experiences on the Kynetx platform (which is all open source, btw).  We need more people pushing companies like Google, Microsoft, and Apple to integrate these technologies so the user can have a standardized, open, fully contextual experience that they control.  I want to see Facebook create an experience on these platforms using Facebook Connect.  I want Twitter to build action cards.

For this to happen we need more involvement from all.  Maybe I’m crazy, but this future is as clear as day for me.  I see a future where I go do what I want to do, when I want to, and I get the exact experience I asked for.  This is entirely possible.  Why aren’t we all focusing on this?

Sign in Graphic Courtesy Chris Messina

All Your OS Are Belong to Google – Why Aren’t We Worried?

I’m following the stories of the Google Chrome OS release today, and I’m a bit concerned about some of the claims being made.  Mashable even goes to the extent of predicting Google is going to “destroy the desktop” with it.  Google is banking on the fact that many users use their computers solely for accessing Twitter, Facebook, and e-mail through a browser.  They’re right – we’re becoming more and more of a web-reliant society, and the cloud is rendering much of the fluff that happens on the traditional operating system unnecessary.  However, it concerns me when a company so known for wanting to run that operating system fully from the cloud is the one pushing this model.  Let’s not kid ourselves here – Google wants you to run as many of their services as possible (since they’re a web company) so they can own more information about you.  That’s not always a bad thing – the more they know about you, the better an experience they can provide with as little effort on your part as possible.  I argue it’s the wrong approach, though, and it’s harmful to user-controlled and open identity approaches on the web.  My hope is that Google has a plan for this.

A Client-based OS vs. a Web-based OS

Let’s look at the old (well, I guess it’s not old yet) approach to operating systems.  It was all about the user.  A user booted up a computer they could very well have built themselves.  The user logged in to that computer.  On Windows machines they have a Control Panel where they can adjust their system settings.  On Macs they have System Preferences.  On *nix they have the command line (okay, I’m joking there, mostly).  They can install the programs they like.  They can adjust who can and can’t log in to the computer.

The problem with putting the user in control is that they have to be responsible for their data.  They have to keep their hard drive from dying, or lose an entire life history to that lack of attention.  Most users don’t know how to do that.  Not only that, but user control adds overhead to the operating system, slowing boot times, adding complexity, and increasing the learning curve for users who just want to access their e-mail or visit Facebook.

This is why a web-based OS could make sense.  The web OS focuses on one thing and one thing only – moving the user experience wholly to the cloud.  The cloud becomes the new OS, and services can be provided from there to shift the burden from the user to the cloud in storing their data.  Great!  Where do I sign up?

The Problem With a 100% Cloud Solution

There’s still a problem with this model though – with a 100% cloud-based solution the user loses all control over the experience and puts it into the hands of one or two very large entities.  The only approach to ubiquity for users is for those entities to have their hands across every website those users visit and every web app those users run.  That’s a little scary to tell you the truth.  With a 100% web-based approach the user loses control of their identity and puts it in the hands of the BigCo.  As Phil Windley puts it, this puts the focus back on Location, which is business focused, rather than Purpose, which is consumer focused.

Let’s try to look at this from another angle.  What if we were all on 100% Web-based Operating Systems and Facebook were to successfully get Facebook Connect into the hands of every single website and every single company on the web in some sort of open manner?  You’d be able to visit any website, bring your contacts from Facebook and other data from Facebook to those sites and they’d be able to customize the experience to you and provide context, right?  That’s partially true.  A server-based approach can provide some context.

However, let’s say I’m a huge Ford fan and I want to see what types of Ford cars compare with the cars I’m viewing on Chevy’s website.  Sure, Ford could provide an API to enable other websites to integrate their own context into other websites, but do you think Chevy is ever going to integrate this?

Heck, if we go back to the standpoint of Facebook, even Google and Facebook are having issues working together on that front (look at Google Friend Connect – see Facebook in any of their providers?).  The fundamental flaw of a server-based approach is there is absolutely no way organizations are going to cooperate enough to be able to provide context across 100% of the web.  No matter how many foundations are formed there will always be some disconnect that hurts the user.  The only way that’s going to happen is via the client.

Enter Information Cards and the Selector

As I mentioned earlier, I’ve been attending the Kynetx Impact conference here in Utah, hosted by Phil Windley, author of Digital Identity published by O’Reilly, and attended by such identity superstars as Kim Cameron (who probably made Microsoft more open than it has ever been with his pioneering of the Information Card concept), Doc Searls (author of The Cluetrain Manifesto), Drummond Reed, and Craig Burton.  My eyes have truly been opened – before anything social can truly perfect itself we have to get identity right, and a 100% web-based approach just isn’t going to do that.  I’ll be talking about this much more on this blog over the next bit – this is the future of the web.

Kim Cameron pioneered a concept called Information Cards, in which you, as a user, can store different profiles and privacy data about yourself for each website you visit.  When you visit the websites you frequent around the web, your client or browser can present you with previously used Information Cards that you can choose to identify yourself with.  This can be a very useful and secure approach to combating phishing (when users rely on information cards, the authenticating site can’t obtain their log in credentials), for instance.  Check out the “Good Tweets” section of his blog post here for context.
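One way to see why this resists phishing: the credential a selector presents is bound to the site it was issued for.  The toy below illustrates that binding by deriving a per-site token from a local master secret – note this HMAC scheme is my own illustrative stand-in, not how Information Cards actually work (they use signed security tokens), but the property shown is the same: a look-alike domain receives a token that is useless on the real site.

```python
import hashlib
import hmac

# Illustrative only: a per-site token derived from a client-held master
# secret. Because the domain is mixed into the derivation, a phishing
# domain can never learn the token for the genuine site.


def site_token(master_secret: bytes, domain: str) -> str:
    """Derive a deterministic, site-bound token."""
    return hmac.new(master_secret, domain.encode(), hashlib.sha256).hexdigest()


secret = b"users-local-master-secret"
real = site_token(secret, "yahoo.com")
phish = site_token(secret, "yah00-login.com")

print(real != phish)  # the look-alike domain gets a different token
```

The user never types a reusable password into a page, so there is nothing for a spoofed login form to harvest.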

Another great use of Information Cards, a client-based approach, is the ability to provide browser- or OS-based context for each user.  This is something Kynetx is working to pioneer.  Craig Burton has talked about the concept of the “Selector”, and how the next evolution of identity has moved from the cookie to user-controlled context as the user accesses the web.  The idea is that, as you select an information card, a service such as Kynetx can run on the browser (right now via extension, but future browsers will most likely have this built in) and provide a contextual experience based on the “Selector” for each website that user visits.  The user sets the privacy they want to maintain for those sites, and they are given a contextual experience based on the selectors they have enabled, regardless of where they are visiting on the web.

One example of this, as I mentioned earlier: at the Kynetx Impact conference, when I visited Facebook.com I was presented with HTML in the upper-right corner of Facebook asking me to become a fan of Kynetx and showing me the latest Tweets about the conference.  Among the other examples shown: for AAA auto service, members could enable a selector so that when they’re searching for hotels, AAA can customize the experience on Google.com or Hotels.com or anywhere else, letting the user know which hotels provide a AAA discount and what the discount is.  AAA doesn’t need to provide an API to these sites.  They don’t need to negotiate deals.  They can just do it, and let users turn it on at their full discretion.  The consumer is in full control with these technologies, and they’re available to any brand right now.  Kynetx has an open API for this that they just launched yesterday.

This form of ubiquitous context for the user can’t happen in a full web-based model.  Users will always lose some sort of context if the entire experience is controlled by the web.  There has to be some involvement by the client to allow the user to truly own their identity and control the experience they have on the web.

Google Has a Responsibility to Do This Right

Google hasn’t revealed their end game in this yet, but my hope is that they continue their “Don’t Be Evil” approach and take this as an opportunity to give the user more control in the Web OS experience.  There is a huge opportunity for Google to lead in this space, and that goes beyond just OpenID.  Google could integrate Information Cards and selectors right into the Chrome browser, for instance, forcing an open, user-controlled approach to identity and introducing a new, consumer-controlled approach to marketing on the web.

I hope the leaders in open standards take note and continue to push Google in this process.  The user deserves this control.  I still think the Web OS has a huge place in our future, but my hope is that we do it right from the start and keep the user in control.  The way it stands, it’s looking a little too Google-controlled.

Be sure to check out my Twitter stream from tonight for a few more links and thoughts on this subject.

Information Card Image Courtesy Kim Cameron

Kynetx Kills the Portal, Launches Identity Platform for Developers

Today at the Kynetx Impact Conference, Kynetx is changing the future of web identity and privacy as we know it by taking the power away from the server and moving it to users’ desktops, mobile phones, or other client-based technology.  Dr. Phil Windley, company CTO and co-founder, shared in his keynote that the web client is the “forgotten edge” when it comes to open software development and identity management.  The traditional model in identity has been location-based instead of purpose-based, which Dr. Windley suggests is the future of internet activity.  Today Kynetx is releasing a developer platform intended to enable that purpose-based identity on the web.

About a year ago I wrote on LouisGray.com about how sites like Twitter have become the “portal” of Web 2.0.  The idea is that users are starting to use Twitter as a gateway to post content to the other sites that they actually use.  Portals have been around for a while – Yahoo is perhaps one of the most prominent – bringing content into one location and aiming to personalize the aggregation of content for the user.  In that sense, sites like FriendFeed are also modern portals.

Identity Solution #1: The Silo

The weakness of the traditional portal is that it is location-based.  Dr. Windley suggested that users who visit websites aren’t there to visit a location – they have a purpose for visiting, and portals can’t solve this problem.  Server-based solutions cannot determine the purpose of users visiting each website, as they are only capable of tracking an IP address for that user, which in and of itself isn’t even always reliable.  Sites like Facebook have tried to resolve this problem by bringing the user into a silo, enabling them to tell others in that silo about themselves, which allows better privacy since it is all controlled in one place.

The problem with the silo method is that a single entity owns the user’s data.  Users are at the mercy of the silo to get their data out, and if the silo ever goes away, or the user ever leaves it, so does their identity.  Kynetx is working to remove the need for that silo, hopefully enabling sites like Facebook that intend to respect user privacy and user choice (something I defined earlier as another definition of “open”) to take the user’s identity information and allow them to store it on their desktop or in the browser itself.

Identity Solution #2: The Client and “Information Cards”

Through an open technology called “Information Cards”, users are able to store identity information for the various websites they visit on their own desktop.  This information is owned by the user, never gets stored on a developer’s server, and gives an even more detailed view of the user than any other source can.  Kynetx is looking to bridge these Information Cards to the browser via an API through which developers can utilize the cards and customize the browsing experience as the user pursues a purpose on the web.

One example Dr. Windley shared was that of AAA (triple-A) automobile service.  Using the Kynetx engine, a developer can take AAA data, mesh it with search results on Google.com and Yahoo.com, and, based on a user’s Information Card, identify the search results pertinent to that user in relation to AAA.  Another example is on the actual wireless network at the Kynetx Impact Conference, where they are placing various markers to give more information about the conference.  For instance, as I type this, I am seeing a little “Schedule” tab to the right that I can click at any time to have the conference schedule pop up.  Anyone can implement this technology, and Kynetx is enabling any developer to write their own layer on the web, utilizing a user’s true identity and bringing that identity on top of the web itself.  This stuff is powerful!
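The AAA meshing example reduces to a small client-side transform: take the results the page already shows, consult the user’s card, and annotate.  This is a sketch under stated assumptions – the hotel names, discount table, and card fields are all made up, and the real Kynetx engine expresses this as rules, not Python:

```python
# Illustrative sketch of meshing membership data with search results on
# the client. No API deal between AAA and the search engine is needed:
# the user's card gates the annotation. All data below is fabricated.

AAA_DISCOUNTS = {"Hotel Alpha": "10% off", "Hotel Gamma": "15% off"}


def annotate_results(results, card):
    """Tag each result with its AAA discount, but only for members."""
    if not card.get("aaa_member"):
        return results
    return [{**r, "aaa": AAA_DISCOUNTS.get(r["name"])} for r in results]


results = [{"name": "Hotel Alpha"}, {"name": "Hotel Beta"}]
card = {"aaa_member": True}

for r in annotate_results(results, card):
    print(r["name"], "-", r.get("aaa") or "no AAA discount")
```

Because the membership check happens against the local card, non-members see the page untouched and AAA never learns what anyone searched for.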

Imagine these applications in the mobile space – what if a developer could take the conference technology Kynetx is using here and apply it to a mobile browser, showing the location of everyone else at the conference on a map, along with their identities, perhaps grouping people by experience and interests?  Or, taking this to the shopping experience, a vendor could cater a completely customized shopping experience that is entirely controlled by the user.  With Kynetx, the customer truly is the boss.

Kynetx is doing some amazing things in the identity space.  It’s amazing to watch as the leaders of this space – Phil Windley, Craig Burton, Doc Searls, Drummond Reed, and Kim Cameron – all work here to change the way we view identity.  True identity belongs on the client.  True identity belongs in the hands of the user.  Kynetx has just changed everything with their new platform, and I encourage you to check it out.  You can learn more at http://developer.kynetx.com.

Here’s an interview I did at a dinner they invited me to last night with Phil Windley where he explains the concept:

[youtube=http://www.youtube.com/watch?v=IyC3fUbo3X0&w=425&h=344]

Ebay Suggests Identity API – Can They Do It Alone?

Ebay’s CTO, Mark Carges, today announced at Paypal X Innovate 2009 plans by Ebay, Inc. to begin incorporating the Paypal login process as an identity platform for consumers, eventually opening it up to developers.  The platform, Carges said, aims to use the existing Paypal login ID – which includes address and phone number verification, bank account attachment, and more – to identify individuals as real people.  He stated Paypal already goes to great lengths to protect these users’ identities, suggesting this is a natural move toward identity in the cloud.

The move makes sense, but searching Twitter during the keynote revealed a different story.  Audience members were skeptical, saying things like “scary morning talk by the Paypal CTO. all your ID belonging to us. a closed OpenID?” and “wonder if this is what @timOreilly is afraid of – platforms becoming the OS?”.  In many ways these audience members have a point – can Paypal go it alone in the identity space when they could be leading or joining existing identity efforts such as OpenID?  I may be wrong, but I do not recall any mention of the word “open” in his proposal.  And when he mentions things like “they are working with Government”, it gets a little scary that a single company may control all of this along with government.

At the same time, maybe this is the solution.  Will the answer to identity be a closed platform with dedicated ways of verifying identity, like Paypal and Ebay can provide?  Does the web need a “more secure” closed platform to finally solve the identity problem?

I’m very interested to see how Paypal progresses on this.  My hope is that they either lead or join existing open standards in this effort, and that they approach others rather than going it alone.  A platform is always a good thing, but a platform is not “open” until it is based on open technologies and the technologies themselves are built by the community.  This is especially applicable in the identity space.

Paypal’s CEO yesterday reiterated that through the years payment itself has been controlled by a few big entities.  Paypal’s vision is “Into the hands of many”, intending to pass that control to developers.  He even compared it to Linux and how the future is in the community, with no one company having control.  My hope is that Paypal carries this same standard into the identity arena.  Based on their vision so far it looks hopeful – let’s hope they don’t feel the need to take the identity platform alone.

When it’s uploaded, you can listen to the whole keynote in my Cinch folder.

The Open Web – Is it Really What We Think it is?

Yesterday was OneWebDay, a day to celebrate the open web and bring more awareness to open technologies.  I just wrote about one thing Google is doing to make the web more open, something I strongly support.  Now I want to touch on something Facebook is doing that I don’t think is being fully appreciated.  And it’s not what you think it is.  First, I want you to watch this video – it’s Mark Zuckerberg’s keynote from Facebook’s F8 conference for developers last year.  Don’t read on until you’ve seen it, or you may not understand what I’m trying to get at here.

In the video, Mark Zuckerberg states that Facebook’s mission is “giving people the power to share in order to make the world a more open and connected place.”  I want you to give that some thought.  We’ve always talked about the open web as the opening up of content so everyone has access to it.  That’s the essence of the web: it has no borders or boundaries, and no controls over it.  That is how it was built and how it should be.  The web is about linking documents to each other and indexing those documents so they are easily accessible and retrievable by those who want to find them.  The traditional open web is about the power to receive.

Enter the social web.  Now we have all these social networks – Facebook, MySpace, Twitter, Orkut, Hi5, LinkedIn, and many others – all striving to redefine the web, each in their own way.  Each of these networks adds a layer to the web which connects people instead of documents and, in the end, brings people together.  At the same time we’re indexing people, and from those people come relevancy and documents which others can share with one another.  Many argue that this method of indexing is even more accurate, because it spreads from person to person, and it’s real-time.

There’s one problem with the social web in terms of openness: people don’t want their lives exposed.  They only want the documents they choose to share with the world exposed.  Because we’re dealing with people, there still need to be some bounds of privacy, yet people should still have the control to make what they want open, open.  Without these controls there is no freedom, as people would be required to completely expose their lives just to share even a bit of content with the rest of the world.

This is why I think on the Social Web, “Open” is defined much differently, and I think Facebook sees this.  In a social environment, the role of technology should be making relationships more open and making the ability to share more open, not necessarily the documents people are sharing.  On the Social Web, “Open” is about how open you are to letting your users decide whether to make their documents public, and fully enabling them to do so if they want to.  The thing is, a Social ecosystem is not “Open” unless it also gives users the freedom to keep those documents private if they want to.

Facebook takes this new layer of “Open” to another level, though.  As of last year they have been branching out beyond their walls, enabling other websites to take these tools and giving each website the ability to extend this level of control to its own users.  Now websites can take users’ existing social graphs and enable those users to automatically share what they want with their friends, while respecting the privacy controls of those friends.  I should note that Google Friend Connect is doing similar things in that realm (albeit with fewer privacy controls, IMO making it a less “open” or “free” ecosystem for giving users full control of that data).

I think what we have been calling a “Walled Garden” or “closed ecosystem” may in fact be the actual definition of “Open” on the social web.  Remember, it’s about opening up the user’s control to share all, some, or none of the content they want to share.  The more “Open” a system is in this sense, the more willing users are to share data, the more open it is to their friends seeing that data and to others searching for it, while at the same time letting the users who want to control that data keep it behind closed walls.  The web has lacked this ability until recently.  In a true “Open” Social Ecosystem, if data is not available via search and other means, that is the choice of the users, not the fault of the network; likewise, data that is available to the web is the responsibility of the users, not the network.  I think Facebook is the closest to this definition of “Open” out there right now, and I think that’s why they have over 300 million users and are still growing.

On the Social Web, “Open” is about the power to give.

<img class="aligncenter size-full wp-image-2489" title="I <3 the web." src="http://staynalive.com/files/2009/09/3929246011_9776c72b28_o.png" alt="I <3 the web." />

Let’s Take This Just One Step Further, Google

I think I speak for all developers when I say that having to develop for IE browsers sucks.  Internet Explorer, unfortunately still the most widely used browser on the internet, has failed the development community and the web in general by not keeping up with web standards.  While developers can do some really cool stuff with HTML 5 in browsers built on open source engines like Chrome, Firefox, and the WebKit-based Safari, IE misses the mark.  Unfortunately this goes for even the most recent versions of Microsoft’s browser.

This is why I was really happy to see Google produce a plugin for IE called Chrome Frame which, when installed, loads Chrome’s rendering engine within IE, giving the user all the added functionality of a modern, HTML 5-compliant browser without having to switch to a new environment or fiddle with the default browser settings.  I think it’s a pretty clever idea.

What I think is even more clever is that Google is now requiring users to install the plugin if they are going to use their upcoming product, Google Wave.  When Google Wave launches, if users visit the product in Internet Explorer, they will get a message that looks like this:

[Image: Chrome Frame install message]

I think most users won’t even blink before installing it; just like Flash or Quicktime or any other Internet Explorer plugin, they’ll have no problem agreeing and installing it within their browser.  This is especially true if they want to use Google Wave, something I predict could very well replace Gmail and the way we communicate today.  But I think Google should do more.

Let’s take this one step further.  I think it would be really cool if Google provided simple HTML/JavaScript code producing the exact message you see above, which any developer could install on their own website.  Any developer can do that now by writing their own browser-detection code in JavaScript, but let’s make it as easy as possible and standardize it.  If users become familiar with this look and style, they will be much less likely to complain and much more likely to install.
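As a sketch of what the detection half of that snippet could look like, here’s a hand-rolled version in plain JavaScript.  Google’s actual Chrome Frame loader ships its own helper for this, so treat the function below as my own stand-in, and the user-agent checks as assumptions about how IE and Chrome Frame identify themselves rather than anything official:

```javascript
// Sketch: decide from the user-agent string whether to show a
// "this page works better with Chrome Frame" prompt.  Assumptions:
// IE advertises "MSIE <version>" in its UA string, and Chrome Frame
// appends a "chromeframe" token once installed.

function needsChromeFramePrompt(userAgent) {
  const isIE = /MSIE \d+/.test(userAgent);
  const hasChromeFrame = /chromeframe/i.test(userAgent);
  // Prompt only users on IE who have not already installed the plugin.
  return isIE && !hasChromeFrame;
}

// In a real page this would run on load and inject the prompt markup:
if (typeof navigator !== "undefined" &&
    needsChromeFramePrompt(navigator.userAgent)) {
  // document.body.appendChild(buildPromptBanner()); // site-specific markup
}
```

The value of Google standardizing this isn’t the logic, which is trivial, but the shared look of the prompt itself, so users learn to recognize and trust it.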

As a developer I would be more than happy to install such code on my site.  It would reduce the time I spend switching computers to test in IE and wrestling with entirely different standards, and increase the time I have to develop my app.  As an entrepreneur and business owner it’s simply too costly to worry about so many different browsers at once.  If I could focus on the standards alone and get all the new HTML functionality right now, without duplicating my effort across two browser environments, that would be a huge win for me, and definitely worth the investment.  I’d install it in a heartbeat.

So how about it, Google?  Let’s provide that message and plugin-install widget for all developers and make this a much more open and modern web, outside of the control of Microsoft.  I’m loving where Google is going with this.

Facebook Development for Beginners

This morning I had the opportunity to present an O’Reilly webinar on Facebook development.  I covered the basics of how to get started in Facebook development and the resources that will get you going.  I mentioned I’d post the slides online, so here they are.  I was hoping to get audio attached to them, but we’re still waiting on that.  Regardless, if you want me to present this to your organization or group, feel free to contact me.

Facebook Development for Beginners[swfobj style="margin:0px" width="425" height="355" src="http://static.slideshare.net/swf/ssplayer2.swf?doc=fb-development-beginners-1226133333982294-8&stripped_title=facebook-development-for-beginners-presentation" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true"]


Open Source – Do You Share Your Experiences for This Life or the Next?

This is a picture of my Great-Grandfather, Joseph Stay.  With a son named after him, I’ve spent some time reading about him and learning about the experiences of his life that I can pass down to my son.  One of my favorite things to do in my spare time (when I get any) is to read about the lives of my ancestors.  My faith teaches of life both before and after this one, and as such, it’s important for me to know who came before me and how I came to be.  Besides that, it’s just plain fun.

Some of my ancestors were very good at recording their lives and what they did.  Some kept journals and records so that their descendants could learn about them after they passed away.  I keep a journal like this, as do my parents and grandparents.  These journals offer a glimpse into our successes, trials, and failures, and what we did to overcome them, in hopes that our children and those who come after us can learn from our mistakes and make their lives better.

This concept is great, except it only benefits those who come after this life – only they can learn from us, because we often keep these details secret while we’re alive.  What if we could share the skills we have and let others try them out, play with them, and learn from them, just as we’re able to do with the experience we’ve inherited from our ancestors, but in this life?

This is the reason I like the concept of “Open Source”, which started with software but really could be applied to any area of expertise.  The concept of “Open Source” is all about sharing the experiences we have in this life and allowing others, still in this life, to try those experiences out, apply their own experience, and continue to share with others.  It’s just what our ancestors did for us, applied to this life.

What if we all, in everything we did, shared what we did with those in this life, instead of only planning for the next, so that we could start that legacy of learning right here and right now?  What if we as a society were learning together, instead of just us and those that follow us after this life?  Why do we have to wait until we’re dead to let others learn from what we’ve done?