
I’ve Been a Little Rough on Google Lately

For a blog with just under 1,000 posts, this one has gotten a lot of attention in the last year or two. It used to be that when I posted something I would get few comments (I still wish I had more), little traffic, and I knew it was only reaching perhaps a few hundred eyes at most in an RSS reader somewhere. But it took off. I'm not saying this to gloat, and I accept that I'm nowhere near a TechCrunch or a Mashable in terms of readers or traffic, but I've quickly learned that when I say things here, they can have a lot of influence. Sometimes my articles end up on Techmeme. Sometimes people like TechCrunch and Mashable mention what I say. Sometimes Google employees talk about it. Not only that, but it goes out to nearly 25,000 people on Twitter and thousands on FriendFeed, not to mention the thousands of subscribers who read this in their RSS readers. I tend to forget that when I talk here, a lot of people have the potential to read what I say. These aren't the old days, when I would just strive to get someone to read my stuff. For that, I apologize – I've been a little negative on Google lately without realizing the implications, and I want to make amends.

The truth is, I like Google for a lot of things. My main e-mail client is Gmail. In fact, I also use it as my FriendFeed, Facebook, and Twitter client. Despite my frustrations, I still use Google Reader as my main RSS reader – not because it's Google, but because it's still by far the best reader out there. Chrome has, release after release, been among the fastest and best browsers out there. Google Calendar is my favorite scheduling application – the best of any tool I've come across. I've even replaced the phone icons on my iPhone with Google's Google Voice client (http://voice.google.com).

Truth be told, I still love Google. They're an amazing company, full of amazing talent and smart people. Perhaps I hold them to a higher standard, and hence my criticism.

I think it's obvious that I also have a bit of a Facebook bias. I've written many apps on the Facebook platform, both for myself and for others, written two books about it, and I'm very close with many of the team over there. Most of my business is helping other businesses integrate Facebook technology into their products – with over 400 million users and still growing, a very accessible API, and a lot of rules that go with that API, my help is often needed, and I'm happy to provide it. I'm just as passionate about Facebook as I am about Google, if not more so, and I think some of that gets to me at times.

I am also passionate about open standards. I admit Facebook is not open across the board as others like to define it, but neither is Google. Ideally, I guess I'd like to see a web that is completely free of the big guys like Facebook and Google – sure, they'll still have a presence, but the user will be in control, not these companies or even developers. There is no one perfect solution right now; this is why I talk about Kynetx a lot. I don't think any of the open standards available right now completely tackle this, so I get passionate, perhaps too passionate, about it at times.

So, to Google, DeWitt, and anyone on the team there I may have offended, I apologize. I'd like to make amends. Sure, we may disagree at times, but as my Mom always taught me, "If you don't have anything nice to say, don't say anything at all." I'm going to be much more careful with what I say online from now on, especially when I disagree. I'd like the things you take from here to be positive. I want to influence, but in a good way. To start the mending process, I'm re-creating my Google profile and re-opening my Buzz account, which you can find here.

Let's clear the air here – what else can I improve about this blog and what I share online? Am I making the right choice in backing down on my criticism?

Image courtesy http://www.youpimped.com/comment_graphics/i_am_sorry

Did Google Reinvent the Wheel by Adopting the Protocols They Chose?

In a response to my article here, DeWitt Clinton of Google laid out what he deems the definition of "open" to be. According to DeWitt, "the first is licensing of the protocols themselves, with respect to who can legally implement them and/or who can legally fork them." I'd argue that if this were the case, why didn't Google clone and standardize what Facebook is doing, which many, many more developers are already integrating and writing code for? Facebook itself is part of the Open Web Foundation, and applies the same principles as Google in allowing others to clone the APIs it provides to developers.

DeWitt's second definition of "open" revolves around, in his words, "the license by which the data itself is made available. (The Terms and Conditions, so to speak.) The formal definitions are less well established here (thus far!), but it ultimately has to do with who owns the data and what proprietary rights over it are asserted." Even Facebook makes clear in its terms that you own your data, and they're working to build protocols that let website owners host and access this data on their own sites. Why did Google have to write their own Social Graph API or reach for lesser-used protocols (such as FOAF or OpenID) when they could, in reality, be standardizing what millions (or more?) of developers are already using with Facebook Connect and the Facebook APIs to access friend data? Google could easily duplicate the APIs Facebook has authored (even using the open source libraries Facebook provides for them), and have a full-fledged, "open" social network built from APIs many developers are already building upon. I would argue there are many more developers writing for Facebook than developing under the open protocols and standards Google chose to adopt – I'd like to see some stats if that's not the case. Granted, even Facebook is giving way to Google and adopting some of these other "open" standards so developers have a choice in the matter, even if they were among the few adopting them.

I still think Google is adopting these standards because it benefits Google, not the user or the developer. If Google wanted to benefit the majority of developers, they would have cloned the already "open" Facebook APIs rather than adopting the much less-adopted protocols they chose. This is a matter of competition, being the "hero," and a brilliant marketing strategy. Is Google evil for doing this? Of course not. Do I hate Google for this? Only in that I now have to adapt all the apps I've written for Facebook to the new "open" APIs Google is choosing to adopt.

IMO, if Google truly wanted to benefit developers, they would have cloned the existing "open" APIs developers were already writing for. This is a marketing play, plain and simple. It may have started with geeks not wanting to get into the Facebook world, but management agreed because, in the end, it benefits Google, not their competitors. If you don't think so, ask Dave Winer why Google is implementing Atom and PubSubHubbub (PSHB) instead of RSS and rssCloud (I'm completely baffled by that one, too).

Image courtesy http://northerndoctor.com/2009/04/17/re-inventing-the-wheel/

The Web is No Longer Open

“So it can benefit everyone.”

That's what a Google employee said today as he tried to explain Google's recent push to have websites use rel="me" HTML tags to identify the pages a user owns on the web. It's not a bad strategy: index the entire web, know every single website out there and when it changes, and now the web is your network. The thing is, since the "open" web has never had a natural way of identifying the websites users own, Google, the current controller of this network, needed a way to do it. Why not have people identify their websites to Google's SocialGraph network, and call it "open" so it "benefits everyone"? I'm sorry, but the "open" web we all grew up in is dead now that 2 or 3 entities have indexed it all. This is now their network.
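For those who haven't seen the markup, this is roughly what that looks like – a sketch with placeholder URLs, not copied from any particular site:

```html
<!-- In the <head> of a site you own: a link claiming another page
     on the web as "me" (the XFN rel="me" value that Google's
     SocialGraph crawler consumes). URLs are placeholders. -->
<link rel="me" href="http://twitter.com/yourusername" />

<!-- The same claim can be made inline in page content: -->
<a rel="me" href="http://yourblog.example.com">my blog</a>
```

Google crawls these links to stitch together which pages belong to the same person – which is exactly how the tags feed Google's index of the social web.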

Let's contrast that with Facebook, the "walled garden," criticized as closed due to tight privacy controls and an unwillingness to open up to the outside web. Of course, all that is a myth – Facebook too has provided ways for website owners to identify themselves to Facebook on the "open" web, making Facebook itself the controller of that social graph data and giving Facebook a new role in who "owns" the "open" web. Facebook has even made known in its developer roadmap its intention to build an "OpenGraph API," making every website owner's site a Facebook Fan Page in the Facebook network. Don't kid yourself – Facebook wants a role in this as well, and because of it they're a major threat to Google, too.

Then there's Twitter, just starting to figure out how to play this game, now collecting user data for search in its own network. Don't count them out just yet – they too will soon be looking for ways to get you to identify your website on their network.

So we'll soon have three ways of identifying our websites on the "open" web. I can identify my site through Facebook, as you can see by the Facebook Connect login buttons scattered around. I can identify myself to the Google SocialGraph APIs – if you view the source of this site, you'll see a rel="me" tag identifying my site so Google can index it. And who knows what Twitter will provide to bring my site into its network. Each network is providing its own easy way of identifying your site within its own social graph, and calling it "open" so other developers can bring their stuff into its network easily, without rewriting code.

I think it's time we stop tricking ourselves into thinking the web is open at all. Google is in control of the web – they have it all indexed. Now that he who owns the social graph has a new way of controlling and indexing the web – witness Facebook's massive growth (400+ million users!) – I think Google feels threatened. They'll play every "open" term in the book to gain that control back. Of course the new tags are beneficial – but are they really beneficial to "everybody"? I argue the one entity they benefit most is Google. Yes, they benefit developers, if we can get everyone to agree on what "open" is, but that will never happen. I think it's time we accept that now that the web is controlled and indexed by only a few large corporations, it is far from "open." "Open" is nothing more than a marketing term, and I think we can thank Google for that. No, that's not a bad thing – it's just reality.

Do these technologies really "benefit everyone" when no other search startup has a remote chance of competing to own the "open web" network?

Further note:

How do we solve this? I truly believe the only solution for giving users control of the web again is client-side, truly user-controlled technologies like what Kynetx offers. Action Cards, Information Cards, Selectors, and browser-side technologies that put context back into the user's hands are the only way we're going to make the web "open" again. The future will be the battle for the client – I hope the user wins that battle.

Image courtesy Leo Reynolds

UPDATE: DeWitt Clinton of Google, who wrote the quote this post responds to, issued his own response here. The comments there are interesting, albeit largely current and former Google employees trying to defend their case. I still hold that no matter what Google does now, due to the size of their index, any promotion of the "open web" is still to their benefit. I don't think Google should be denying that.

UPDATE 2: My response to DeWitt’s response is here – why didn’t Google just clone Facebook’s APIs if their intention was to benefit the developer and end-user?

A Christmas Story: OpenID, OAuth, My Home, and Your Privacy

Here it is, Christmas Eve, almost time to celebrate Christmas with all the traditions it brings in our household. We usually go visit my wife's family, then follow that up by telling the Christmas story out of the Bible and singing Christmas songs, and each of us opens one present from another sibling or family member. In our household, Christmas is all about spending time with family. It's all about home. It's all about spending personal time with those you're closest to and maintaining the traditions you hold private and dear.

Thinking about home and family and Christmas, I realized today there's a disconnect on the open web right now. The kind of privacy I'm describing is available on the web in places like Facebook and Gmail (to an extent), and in various forms among other web services. When it comes to real life, however, there's a missing link: maintaining the privacy of where you physically are, and sharing that on the web so only your close friends and family know your exact location.

For instance, let's say I want to have a Christmas party for just my immediate family and maybe some close friends I know follow me on Twitter or Facebook. Right now the only way to do that is either to e-mail each of them individually, revealing my exact location to each one, or to blast it out publicly, potentially compromising the intimate experience we were trying to create. At the same time, I would be putting my family at risk by letting unknown people know where they are.

Another example is mail. Let's say this Christmas I want an easier way for my friends to send me gifts. I publish some of the things I want for Christmas (I'm of course not so greedy as to actually do that), and then I need a way for you to send me those gifts. Or let's take a more humble approach – perhaps I want to arrange sending money to a friend in need. Or let's say it's my wedding and I want all my friends to know where they can send wedding gifts. Right now there is absolutely no way to blast that out publicly without compromising your physical location in some way.

Paul Carr of TechCrunch wrote about this exact issue several weeks ago. He cited examples of people coming to his apartment for parties or get-togethers (on Halloween, in this instance) and all checking in on Foursquare. Immediately, Paul's exact coordinates could be made available to the world, all without his permission. This is dangerous, especially for a writer at a publication whose employees have been known to receive constant threats, even death threats! There has to be a solution. Let's move on to a few technologies I think could solve this.

DNS – the Router for the Web

DNS is the technology that, from your perspective as a user, pretty much powers the web. I mentioned earlier that we are about to see a "war" on the same level as the browser wars of the late 90s and early 2000s, where companies like Google and Microsoft and others are all going to be fighting for a piece of the DNS pie. Here's how DNS works: you type in a domain name, and that name gets translated, through a sequence of "name servers" across the web, into the IP – the location – of that content on the web. Once your browser knows the location, it knows where to retrieve the content it needs to render for you.

The advantage of DNS for you as a user is that you don't need to know where each server is located. You simply have to know an easy-to-remember name that the web "just knows" how to translate into an actual location (an IP). You type in staynalive.com, and the web just knows how to find the servers producing the page you're reading this on. In fact, many domains map to multiple locations, so having a single name to remember is advantageous, and it provides a routing layer that can easily be changed. I actually do this with my e-mail address: jesse@staynalive.com currently points to my Gmail account. Because I own the domain staynalive.com, I can point that address at just about any e-mail provider I like, and I completely control where my mail gets routed. You, the user, only have to know the e-mail address – it doesn't matter where it ends up. The web takes care of that based on how I set it up.
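To make that routing idea concrete, here's a minimal sketch using Node.js's built-in dns module (the module calls are standard; the domain is just the one from my example):

```javascript
// Sketch: the name-to-location translation, from code.
const dns = require('dns').promises;

async function whereIs(domain) {
  // A records: the IP(s) a browser actually connects to.
  const ips = await dns.resolve4(domain);
  console.log(`${domain} lives at ${ips.join(', ')}`);

  // MX records: where mail for user@domain gets routed.
  // The owner can repoint these without the address ever changing.
  const mx = await dns.resolveMx(domain);
  for (const record of mx) {
    console.log(`mail -> ${record.exchange} (priority ${record.priority})`);
  }
}

whereIs('staynalive.com').catch(console.error);
```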

There's one problem with DNS, though – it's too anonymous. Right now it's all or nothing: if you put something on the web, anyone can find your location on the web, and in turn anyone can access your content. At the same time, there's no way with DNS alone to identify actual people. Your website just maps to a location, and anyone can see that location with no other measures in place. Right now, if you want to prevent a certain user from accessing your site, you're stuck guessing their IP, which they can technically change if they like. It's not a real person visiting your site – it's just an IP, just a location mapping back to your site.

Solving the Identity Problem Through OpenID

To solve the anonymity problem, another layer had to be added. A protocol called OpenID was invented, with which you, the website owner, can "identify" your website with a specific identity provider using just your DNS identifier (your domain). With your website linked to an identity provider, that domain (which, remember, maps to a location or IP) can now identify you as a real person. By simply typing your domain into a participating OpenID-supporting website, that site can automatically verify with your identity provider that it really is you logging in as the owner of that domain. Now every website can be associated with an actual individual, not just a location. Now you know with reasonable certainty that the content my location is producing is actually coming from me.
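The linking itself is just a couple of tags in your site's <head>. Here's a sketch of classic OpenID delegation – the provider URLs are placeholders, modeled on how the hosted providers of the day worked:

```html
<!-- OpenID 1.x delegation: tell consuming sites which identity
     provider vouches for this domain. URLs are placeholders. -->
<link rel="openid.server"   href="https://openid.example.com/server" />
<link rel="openid.delegate" href="https://you.openid.example.com/" />

<!-- OpenID 2.0 expresses the same idea with these rel values: -->
<link rel="openid2.provider" href="https://openid.example.com/server" />
<link rel="openid2.local_id" href="https://you.openid.example.com/" />
```

Because the delegation lives at my domain, I can switch identity providers at any time without changing the identifier I hand out – the same routing-layer trick DNS gives my e-mail.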

There's still a problem with this, though. You can know the content is coming from me, but there's no way for me to control who sees it. Sure, with OpenID I could in theory identify each and every visitor to my website as an actual person (assuming I provide the means to do so), but how do I filter that traffic so only the people I want seeing my content actually see it?

This goes back to the exact same problem I was mentioning with real-life locations – privacy.

The Future of the Open Web is Open Privacy Standards

The web still needs an open, standardized way to protect user privacy. Facebook has built this into their API, but they haven't standardized it so others can integrate it into the traditional web experience. You have to be a Facebook user to get full privacy from Facebook.

Currently, several open standards in the works are trying to attack this head-on. One of OAuth's successors, WRAP, which Facebook is very involved in at the moment, strives to do this. It is also in the vision for OAuth 2.0 (if I understand correctly), another successor to OAuth. The success of the future Open Web, ironically, lies in privacy. It lies in customized roles and authorization. We're going right back to the same problems Novell was trying to solve for the enterprise market back in the 90s, but this time on a much larger, global scale.

Ubiquity

Now I'd like to step back to my little Christmas story, and to the privacy I'd like to maintain, especially around the holiday season. It's time we stop thinking about just the web itself and start looking toward a future where the web and our real lives are meshed into one. Privacy is critical in that not-so-distant world.

For the Open Web to succeed, it needs to be ubiquitous. It needs to stretch far beyond the browser and into our everyday lives. When I was visiting the Kynetx offices last week, Craig Burton shared a vision he has of people going from room to room in a house, with each room identifying who they are. Once identified, the room can provide a contextual experience for that user (adjust the lights, turn on their favorite TV channel, adjust the chair comfort, etc.). This is another reason I like what the Kynetx team is working on – open technologies must stretch far beyond the browser! You will see this in the next 5 years or less, by the way.

My hope is that we can keep privacy in mind, in not just a browser context but a real-life context, as the Open Web grows and is discussed and architected. I want to be able to give the Post Office my OpenID on an envelope and have them immediately verify my identity and know where to route my mail. I want to be able to change, on a whim, where that mail is routed without changing the OpenID I give the Post Office. I want to give certain close friends and family permission (which I could revoke at any time) to look up my physical location based on my OpenID, if I so choose. I want to provide only my OpenID to apps like Foursquare and have them respect it, not revealing my physical location to people I choose not to share it with. OpenID – and at its foundation, DNS – should be the router, and at the same time the protector, of our physical locations and our real-life experiences.

This Christmas I want a web that thinks beyond its borders. I want a ubiquitous web that travels with me and gives me full power over the context I choose to receive, not just on the web but in my real life. I want the limits of DNS to extend far beyond IPs and into the walls of my own home. Most of all, I want all this to happen with open standards. I want a web that protects my family.

My hope this Christmas is that you can be inspired.  May you spend a little more time thinking about how you can contribute to this effort.  How can you understand these technologies a little more?  How can you sacrifice a little to make the world a little more open?

May you all have a Merry Christmas and Happy Holiday Season. Hopefully in 5 years I'll even be able to tell you where I'll be and where you can spend it with me, without worrying about it getting into the hands of the wrong people. Even in an Open Web, it's all about Location, Location, Location!

Kynetx Launches Chrome Extension Support for Their Platform

Editor's note: Kynetx is something you have to use to fully understand! If nothing in this article makes sense, please skip down to the bottom, try out the extensions these guys have built in their app directory, and see the power of what this platform can do! This is very powerful technology – I really believe this is the future of the web!

Friday afternoon, Kynetx launched support in their developer platform for building extensions for the Google Chrome browser. The company, which provides a standardized, open framework for building web browser extensions (among other supported technologies, such as Action Cards), became the first extension-building platform to support all three of the top browsers on the web. The move is unprecedented: with Kynetx, in contrast to Greasemonkey, possibly its closest comparison here, you can write code once and immediately have extensions and plugins that work in Firefox, Chrome, and even IE with the click of a button. Kynetx makes customizing the user experience in the browser a cinch.

I visited Kynetx on Friday for their weekly Kynetx developers lunch (which they invite the public to, asking only that you let them know in advance), and they were hard at work getting the final quirks worked out of the Chrome extension. Developers like me are rejoicing, as Chrome, with the backing of Google, is very quickly proving to be one of the most responsive, most extensible browsers on the internet. It also has an integrated development environment, so extensions such as Firebug for Firefox don't even need to be installed – the tools come with the browser, providing a much smoother and faster experience for the developer.

Kynetx is positioning itself to become the ubiquitous controller of user experience and context on the web. With their technology, users have the potential to fully control what they do and don't allow to be displayed on the web. At the same time, businesses are each given the opportunity, with the user's permission, to change the experience for that user on the web.

Kynetx recently launched a tool with the Better Business Bureau that, with the installation of a simple extension (in any of your favorite browsers now!), displays BBB accredited-business seals in Google local search results. When a business has been approved by the Better Business Bureau, a little seal appears next to its name in the search results, giving users a more educated experience in the browser. All of this is done without any need for a special relationship with Google to customize those results. Because of the ease of development and the broad install base for extensions like the new Chrome extension launched Friday, any business has the potential to customize the experience for the user in a similar manner.
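To give a feel for what an extension like that is doing under the hood, here's a plain-JavaScript sketch of the same idea. To be clear, this is not Kynetx's actual KRL code – the result selector, the lookup function, and the seal URL are all hypothetical:

```javascript
// Hypothetical accreditation lookup; a real extension would call a
// web service here. Stubbed so the sketch is self-contained.
async function checkAccreditation(domain) {
  const accredited = new Set(['example-plumber.com']);
  return accredited.has(domain);
}

// Sketch: annotate search results with a seal for accredited businesses.
async function annotateResults() {
  for (const link of document.querySelectorAll('h3.r a')) { // selector is a guess
    const domain = link.hostname; // e.g. "example-plumber.com"
    if (await checkAccreditation(domain)) {
      const seal = document.createElement('img');
      seal.src = 'https://seals.example.com/bbb.png'; // placeholder asset
      seal.alt = 'BBB Accredited Business';
      link.after(seal); // the seal appears right next to the result
    }
  }
}

annotateResults();
```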


The new Chrome extension works across all versions of Chrome that support extensions. While the official Chrome for Mac does not support extensions yet, the PC version does, as do developer builds of Chromium for Mac. It is rumored that Chrome for Mac will support extensions very soon. The other advantage Chrome brings to the Kynetx environment is that each extension runs in its own jail: developers can rely on extensions not being able to talk to or affect one another. This introduces some interesting secure-identity and authentication/authorization implications, which I'm sure we'll be seeing from the Kynetx team in the future.

If you're a developer with some knowledge of the DOM and Javascript, you should really check out the power the Kynetx platform can bring to your company and business. This goes way beyond the browser, making context-aware applications a user-controlled standard that travels with the user anywhere. Be sure to check out a little glimpse of what this stuff will enable in my previous article. You can get started developing for the platform immediately on their AppBuilder site.

Just a user?  Be sure to check out their App Directory here, download the extensions and try them out in your favorite browser!

Developers, It’s Time to Open Up Twitter’s API

If you've read my previous post on this, you'll notice I've re-worded the title of this article. That's because it's delusional to think Twitter is going to open source their API any time soon – I've been requesting it for over a year now. I've come to a new understanding: if we're to see an open standard built around Twitter's API, it's going to be we, the developers, who implement it.

It won’t be Twitter.

I mentioned earlier that developers are starting to catch on to this idea. It all started almost 2 years ago, when Laconi.ca introduced a Twitter-compatible API into their platform, allowing any client built on the Twitter platform to very simply add Laconi.ca instances alongside Twitter itself. Unfortunately, it took 2 years for that idea to catch on. Finally, we're seeing some big players in this, and my predictions are coming true. Automattic just released their own Twitter-like API for WordPress.com. Tumblr just released their own Twitter-like API. The problem is that all these developers are re-inventing the wheel every time they reproduce Twitter's API, and any time Twitter releases a new feature, they are stuck re-configuring and re-coding their server code to keep up. That's fine, though – it's all a step in the right direction.

The Vision

Imagine if there were a standard that at first duplicated what Twitter produces on their end, but that other developers could code against. Open source software could be built around this standard, and any provider could easily install code that integrates well with their own environment. We'd see many more providers than just WordPress, Tumblr, and Laconi.ca instances like TodaysMama Connect (of which I am an advisor) integrate it. We'd see big brands and big companies start to implement it.

Soon Twitter would be in the minority among the services these "Twitter clients" (like TweetDeck, Tweetie, or Seesmic) support. The Twitter clients would no longer feel obligated to cater to just Twitter, and new layers, such as real-time and meta APIs, could be added to this API standard in a way that benefits the community, not a single company. Twitter would no longer have a choke-hold, and we would have a new, distributed architecture that any developer could implement.

The Proposal

What I'm proposing is that we work together to build an open source set of libraries and services – perhaps a gateway of some sort – all built on a standard that we set (it will most likely copy Twitter's API at the start). I've created a Google Group, called "OpenTwitter" for now, with the purpose of opening up Twitter's APIs. The group's primary focus will be determining how we want to build this software, establishing documentation for it, and attaching a completely open standard on top of it all, which we can modify as it makes sense. The goal is that the public will control how this data gets distributed, not a single company.

But What About RSS?

The question always comes up: why not just push these clients to support RSS and rssCloud or PubSubHubbub? The answer is that we've been trying that for too long. It may still happen, but it will take Twitter clients a lot longer to modify their code to support RSS than to support an open, Twitter-compatible standard. Ideally, a Twitter client – and there are many – ought to be able to quickly and easily change the base domain its calls are sent to and have everything with the providing service "just work." RSS complicates this too much. The fact is that Twitter has taken over, and we need to accept that and work with it.
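Here's what "just change the base domain" looks like in practice – a sketch using the v1-era public_timeline endpoint that both Twitter and Laconi.ca instances like identi.ca exposed at the time:

```javascript
// A Twitter-compatible API means the client code never changes;
// only the base URL does. (Endpoint follows the Twitter v1-era
// REST pattern; identi.ca was Laconi.ca's flagship instance.)
const API_BASE = 'https://identi.ca/api'; // was: 'https://twitter.com'

async function publicTimeline() {
  const res = await fetch(`${API_BASE}/statuses/public_timeline.json`);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

publicTimeline().then((statuses) =>
  statuses.forEach((s) => console.log(`${s.user.screen_name}: ${s.text}`))
);
```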

The Invitation

If you can, I'd like to invite you to join our "OpenTwitter" list on Google. Let's get some conversations going and get this thing off the ground. My hope is that we can get people like Dave Winer, Matt Mullenweg, Chris Messina, David Recordon, Joseph Smarr, DeWitt Clinton, and Mike Taylor all joining this effort. My goal is to even get Twitter involved – this isn't meant to snub Twitter by any means. The entire goal here is to build a much more open, distributed web in as simple a manner as possible.

You can join the “OpenTwitter” list here.  I’ll be posting a kickoff message there very soon.

DNS is the New Browser War

Today Google decided to go head-to-head with OpenDNS, announcing their own "Public DNS" that users can configure to bypass their current DNS provider, get faster speeds, and "improve the browsing experience for all users." The announcement comes on the heels of their announcement a couple weeks ago that they are creating their own operating system built around the browser. Make no mistake: this is a play by Google to take one more step toward having their hands in every bit of the internet experience they can. This is just one more "building block" for them.

The move sounds eerily similar to Microsoft's early days when, with Windows 98 (or was it 95?), they started bundling Internet Explorer as the default browser for the OS, making it impossible to uninstall and difficult to replace as the default. Antitrust lawsuits ensued from the likes of Netscape, and eventually Novell and other companies seeing similar moves. Microsoft's browser is still in place as the default today. Becoming the "default" and controlling the experience is a natural move for any company building an operating system – except this one has the internet as its foundation.

While at the Kynetx Impact conference a couple weeks ago (ironically, during the Google Chrome OS announcement), Kynetx had set up their rule engine on the conference network so that everyone who joined would have their internet experience customized to brand Kynetx into it. Every page I visited had a little link I could expand to view the conference schedule. Every time I visited Facebook.com, a little piece of code popped up asking me to fan Kynetx, along with the latest tweets for the conference. All of this was built on the Kynetx engine, and it was pretty cool to see the potential! The difference with Kynetx is that it all depends on users installing the code to customize their experience. While that may not have been true on the conference network itself, the technology isn't meant to let one single entity control the experience across the entire internet.

Now that you see the potential of controlling the network, you realize that on the "open web," he who controls the network controls the entire internet. That's powerful from a monetization, marketing, and especially advertising standpoint (in which Google has a vested interest). When one company controls DNS, that company has the potential to control everyone who connects through it. What happens when Google makes this "Public DNS" the default for users of Chrome OS? Not only will Google have an edge in the desktop market, but they'll also have an edge on the internet itself.
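For the record, switching resolvers is a two-line change on most machines. On a Unix-like system it's /etc/resolv.conf (Google's resolvers are 8.8.8.8 and 8.8.4.4; the commented lines show the OpenDNS equivalents they replace):

```
# /etc/resolv.conf -- hand your lookups to Google's resolvers
nameserver 8.8.8.8
nameserver 8.8.4.4
# versus OpenDNS:
# nameserver 208.67.222.222
# nameserver 208.67.220.220
```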

I predict DNS will become the new browser war. Now that the players in the window to the internet are settled (IE, Firefox/Mozilla, Chrome, Safari), the competition is shifting to the internet itself, and to who controls the actual browsing experience for the user. You'll see players like Microsoft, maybe Apple, and maybe even Facebook enter this race. Let's hope Google continues to follow its motto, "Don't be evil," as they approach this. I hope they build open architectures that let users control their data and their experience, rather than Google controlling it. I hope Google stays competitive rather than knocking services like OpenDNS out of business. I hope they find ways to work with others as they do this.

There's a new "war" a-brewing, and we've moved beyond the browser to who controls the web itself. Does Google get the first-mover advantage?

It’s Time to Free the Twitter Client

Dave Winer wants a programmable Twitter client. I think it's a great idea – it's something the browser has had for quite a while now via extensions, frameworks, and plugins. Up until this point, Twitter clients have been closed systems that can't really be extended in any way. Loic Le Meur thinks he has the answer with the ability to extend his company's Seesmic Desktop client – I applaud them for this – and it's something I think would allow apps like my SocialToo.com to help clean up the stream both in and out of Twitter. This way the Twitter client isn't stuck with exclusive relationships where partners have to pay large sums of money to participate; developers and users have full control over the experience they get from the client. I have a recommendation for Loic, Iain, and other social browser developers, though: if you're going to extend your browsers, do it using open standards.

Up to this point we've been talking about server-based open architectures. You have OpenID, OAuth, Wave, rssCloud, PubSubHubbub – heck, you even have HTTP, SMTP, and TCP/IP. But until now there haven't been many client-based architectures that extend across clients, enabling developers to easily write code once for the client side and have it port from one AIR client to another, to the browser, and to any other app that touches the web. Fortunately, that technology is here now, and I think Twitter and Facebook client developers have the opportunity to push this stuff mainstream and put pressure on the generic web browser developers to do the same with their own extension architectures. That technology is the Selector and its Action Cards.

Craig Burton has said the cookie is dead, and this is why: cookies can't extend across multiple applications on a single computer. The Selector has that potential. Imagine a plugin architecture that reads an Information Card to identify you on Twitter or Facebook. You could add to it an Action Card from a site like SocialToo (my site), and based on that Action Card and the settings the user has stored in it, the entire Seesmic Desktop experience could be customized to that user's preferences. The cool thing is that it can all be done in simple (and open) Javascript using frameworks like Kynetx's KRL.

If I were Loic Le Meur, I would seriously be studying the open standards of Information Cards, and especially Action Cards, right now. He has the opportunity to follow an open standard in this plugin architecture, one that would extend across his app into other apps and even the browser. This is Seesmic's opportunity to lead. If they don't, other clients will take the ball by embracing these standards – developers will flock to this if it's done right.

My hope is that Seesmic and the other Twitter and Facebook clients do these plugin architectures the right way. Information Cards and Action Cards are currently the most open and extensible way for any desktop (or even mobile) client to put control back in developers' – and more importantly, users' – hands. I hope they do the right thing.

I commented on Loic's blog post but did not receive a response – hopefully we'll hear more about their plans for this new architecture soon, and let's hope it's built on open standards. If you write the first Twitter client to support Information Cards or Action Cards, let me know and you'll get a big fat blog post here promoting the heck out of it. As far as I'm concerned, this is the future of the web, and we need to be pushing it as much as possible. I'm calling all client developers to action.

Be sure to read my article on my vision for no log in buttons here – it will help you understand this stuff, and more of my vision, even further.

[youtube=http://www.youtube.com/watch?v=ISWOrI0WaLs&w=425&h=344]

When Did Facebook Remove RSS for Friends’ Status Updates?

Yesterday, in an interview with Facebook iPhone developer Joe Hewitt on Dave Winer and Marshall Kirkpatrick's podcast, "Bad Hair Day," it was mentioned that Facebook did not have the ability to retrieve your friends' status updates via RSS. I was taken aback by this, as it was something I wrote about back in January, and it was indeed possible, with a special key known only to the profile owner, to get an RSS feed of your Facebook friends' status updates. So I tweeted out the link last night, thinking it would be useful.

I tested the link today, and it turns out that Facebook, at some point in the last several months, seems to have removed the friends' RSS updates feature. Joe Hewitt, who works for Facebook, seemed surprised by it yesterday as well – he too had thought it was possible until he went looking for it.

My hope is that this was just overlooked and that Facebook will release the friends' status RSS feed again in the near future. Facebook has made many changes to their News Feed and Wall lately, and my assumption is that the feed just got dropped at some point. Having it available via RSS, at a user-controlled level, should be no problem so long as it only displays status updates marked with the "everyone" privacy control. Maybe that was the problem, and they're fixing it.

So for now, it would seem you'll have to get down and dirty with the API to have any sort of access to a person's friends' status updates. That's okay in my book, but more standards-controlled ways of retrieving this information would also be useful on a different level. Help me, David Recordon – you're my only hope!
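If you do go the API route, the old-school way is FQL, Facebook's SQL-like query language. Here's a sketch from memory – the table and field names may have shifted, so verify against the current documentation; <your-uid> is your numeric Facebook ID:

```sql
-- FQL sketch: pull your friends' latest status updates via the API.
-- Table/field names are from memory of the old REST API; verify first.
SELECT uid, time, message
FROM status
WHERE uid IN (SELECT uid2 FROM friend WHERE uid1 = <your-uid>)
ORDER BY time DESC
```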

UPDATE: It looks like if you previously subscribed to your friends via RSS, the link still works. It seems just the link to it has been removed – does anyone know how to find it now?
