open standards Archives - Stay N Alive

Better Noise Control on Google+ is Coming, but It's More Beautiful Than You Think


Over the last two or three years, Facebook, Google, Plaxo, MySpace, and others have all been working on a standard to make fixing the noise problem easier. It's called ActivityStrea.ms (pronounced "Activity Streams"), and its intention is to keep companies from duplicating the effort of re-creating stream formats for their news feeds or "activity streams," providing an open format that makes implementation simple for both companies and developers. The standard encompasses multiple types of formats for presenting data, some even mimicking Facebook's combined story syntax ("collections"), where you'll see multiple shares in a single post, or collapsed commenting. In addition, it provides a simple API for accessing this data in Atom and MediaRSS formats.
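To make the format concrete, here is a rough sketch of what a single activity looks like in the JSON flavor of the standard – an actor, a verb, and an object. The field names follow the JSON Activity Streams draft as I understand it, and the values are made up for illustration:

```python
import json

# A minimal, illustrative Activity Streams entry: an actor performs a verb on
# an object. Field names follow the JSON Activity Streams 1.0 draft; the
# values are placeholders, not real data.
activity = {
    "published": "2011-07-15T12:34:56Z",
    "actor": {
        "objectType": "person",
        "id": "tag:example.org,2011:jesse",
        "displayName": "Jesse Stay",
    },
    "verb": "share",
    "object": {
        "objectType": "article",
        "url": "http://example.org/posts/123",
        "displayName": "Better Noise Control on Google+ is Coming",
    },
}

print(json.dumps(activity, indent=2))
```

Because every service emits the same shape, a client that understands this one structure can render a Facebook share, a Google+ post, or a Status.net notice without any per-site code.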

The key players?

There were others, but at the forefront were Chris Messina and Joseph Smarr, both now major players on the Google+ team in directing its design. See this video from 2009, before either worked for Google:

Here's the deal, though: ActivityStrea.ms isn't just about a common API or noise control. See this presentation Chris Messina gave about a year ago:

In it he talks about the vision of using ActivityStrea.ms for distributed social networks. So not only would you have a beautiful, noise-controlled format on sites like Facebook and Google+, but you would also get to bring your content to other networks. Through this format, developers can bring content from Google+ over to Facebook, or from Facebook over to Status.net, and on any site that supports the standard, users get to choose what content they want to share and where they want to share it. It's federation at its finest.

So as we talk about noise problems on Google+ (I think the choice to launch with a noisy stream was smart, because it means users always see a constant flow of information and followers, and that makes people feel good), keep in mind that the people behind fixing the noise problem in the open standards world are also in charge of the design of Google+. I have no doubt that noise control, and much more than that, distributed social networks, are on their way to Google+, and that the solution is going to be beautiful.

Oh, and Google Buzz already supports ActivityStrea.ms in case you were wondering (as does MySpace, and Facebook used to).

You can follow me on Google+ at http://profiles.google.com/jessestay.

Facebook Listens. RSS Added Back to Pages. Will Twitter be next?


In perhaps one of my most (unintentionally) controversial articles, I wrote a week or two ago about how Twitter and Facebook both quietly removed RSS from user accounts and Pages. With Facebook, removing it from user accounts made sense, since those are intended to be private; but with Pages, which are 100% public versions of the site, it didn't make sense to remove the links and the ability to subscribe to updates via RSS. It appears that Facebook listened, though: there is now a "Subscribe via RSS" link on Facebook Pages, and the page source now links to an Atom feed for clients that want to auto-discover the feeds. You can see it by looking at the bottom left of any Page.
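For the curious, "auto-discovery" just means the Page's HTML head carries a link element pointing at the feed, and a client scans for it. Here is a rough sketch of how a client might do that; the URL is a placeholder, and Facebook may not serve the same markup to an unauthenticated script:

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class FeedLinkFinder(HTMLParser):
    """Collects feed URLs advertised via <link rel="alternate"> tags."""
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "alternate" and \
           attrs.get("type") in ("application/atom+xml", "application/rss+xml"):
            self.feeds.append(attrs.get("href"))

# Placeholder URL -- point this at any public Page you want to check.
html = urlopen("http://www.facebook.com/pages/SomePage/12345").read().decode("utf-8", "ignore")
finder = FeedLinkFinder()
finder.feed(html)
print(finder.feeds)  # any advertised Atom/RSS feed URLs
```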

David Recordon, Senior Open Programs Manager at Facebook, mentioned in the comments of my previous article: "I actually think you're misinterpreting the reasoning here. Today JSON based APIs are quite a bit more powerful than RSS feeds and have become preferred by the vast majority of developers when building on the platforms you mentioned. This means that it's worth investing more time and energy into APIs over feeds. So I don't think it's that anyone is looking to actively remove feeds, rather they're just stagnating over time as more functionality is built into APIs." Of course, he had a point. It was also something I mentioned in my previous article: sites are moving more and more towards proprietary JSON APIs rather than openly available and reproducible RSS. The problem is that, API or not, Facebook's Graph API (not to be confused with the Open Graph Protocol) is still closed – until they open it up as a standard, it will not be easily accessible across clients and content consumption programs.
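To illustrate Recordon's point about JSON APIs, this is roughly what pulling a Page's posts from the Graph API looks like – a single HTTP request returning JSON. The Page name is a placeholder, and access rules can change (Facebook may require an access token for this call), so treat it as a sketch rather than a guaranteed recipe:

```python
import json
from urllib.request import urlopen

PAGE = "somepage"  # placeholder Page username
url = "https://graph.facebook.com/%s/feed" % PAGE

# Sketch only: fetch the Page's feed as JSON. Depending on Facebook's current
# rules, this call may require appending an access_token parameter.
with urlopen(url) as resp:
    data = json.load(resp)

for post in data.get("data", []):
    print(post.get("created_time"), "-", post.get("message", "")[:80])
```

Powerful for developers, yes – but unlike an RSS or Atom feed, there's nothing here an ordinary user can paste into a feed reader.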

It's really good to see Facebook, on top of their existing (and really easy to use) Graph API, move towards something that not just developers can easily consume, but that any user can consume and do things with in a simple fashion. Until (and unless) Facebook opens up its own API as a standard, this is the right approach to take, and they should be commended.

There is a glimmer of hope with this move by Facebook. Of course RSS isn’t dead, but my worry is that as we see Twitter and others slowly removing remnants of the protocol one bit at a time, these open standards may be swallowed up in favor of more proprietary APIs and formats. I’m really proud of Facebook taking a lead here in open standards adoption as they have done in the past – let’s hope they continue to do so in the future.

The question now is, in this regard, does this make Facebook more open than Twitter? (I argue Facebook has always been more open than Twitter in various capacities, but in this regard, I think it says something about Facebook's motivations vs. Twitter's.) I'd really like to see Twitter follow suit and reconsider their stance on removing RSS going forward.

Twitter and Facebook Both Quietly Kill RSS, Completely


Last year I shared how Twitter was moving more and more towards a closed, less standards-oriented model of sharing content as they upgraded their design to bring more people to the Twitter.com website. At that time, they removed the prominent RSS icons and made it possible to access an individual's RSS feed only by logging completely out of Twitter and visiting that individual's profile page. After reading my post, Isaac Hepworth, a developer for Twitter, tried to comfort me in a response to my post on Buzz, saying:

“I’ve been talking to people internally to work out what happened here so that I could untangle it properly.
Here’s the scoop: the RSS itself is still there (as Jesse’s roundabout method for finding it shows). Two things were removed in #NewTwitter:
1. The hyperlink to the RSS on the profile page; and
2. The link to the RSS in the profile page metadata (ie. the <link> element in the <head>).
(2) was wholly accidental, and we’ll fix that. In the meantime, Jesse’s way of finding the RSS is as good as any, and you can still subscribe to user timelines in products like Google Reader by just adding a subscription to the profile URL, eg. http://twitter.com/isaach.
(1) on the other hand was deliberate, in line with the "keep Twitter simple" principle which we used to approach the product as a whole. Identifying RSS for a page and exposing it to users per their preferences is a job which most browsers now do well on their own based on <link>s.
Hope that helps!”

Unfortunately, it seems #2 was not accidental, as it was never fixed. Now #1 is also removed as far as I can see (and looking at the HTML source I see no evidence of any RSS feed). It seems Twitter has completely removed the ability to consume their feeds via the open standard of RSS in favor of their more proprietary API formats.

At the same time, Facebook seems to have done the same. Facebook has gone back and forth on this, though, so it is no surprise on their part. They started with an RSS link you could subscribe to on profiles (for a while this was how you added your feed to FriendFeed), but didn't seem to offer the same for Pages. Later, in a profile redesign, they completely removed the RSS link for profiles. Then, in a recent Page redesign, they added the ability to subscribe to Pages via RSS. I know because I had several Pages added to Google Reader, and I remember fishing through the HTML source and seeing the RSS link in the code. It would seem that Facebook has again removed the ability to subscribe via RSS on Pages, completely removing any ability to subscribe via RSS on the site (also in favor of their proprietary Graph API).

People have been speculating that "RSS is dead" for some time now. I've written that RSS isn't dead, but that the concept of "subscribing" is. However, as more and more sites quite literally move away from RSS in favor of these proprietary APIs, I fear RSS could in fact be dying, not only as a subscription interface, but as a protocol in general.

My hope is that both of these sites simply overlooked keeping RSS subscription in place as they upgraded their interfaces. But seeing as I'm the only one who noticed, I have a feeling they have little reason to add the open protocol back into their interfaces. Personally, I think it's a shame, as it means only developers like me can write code to extract that data – the average user has no way of pulling that data out of Twitter or Facebook.

It seems that in 2011, in the era of Facebook and Twitter, we've completely lost any care for open standards. Maybe it's not just RSS that is dying – it's the entire premise of open standards, and I think that's really sad, and really bad not just for developers but for users in general.

Am I missing something here? Where can I subscribe, via RSS, to Facebook or Twitter?

UPDATE: Dave Stevens shared a hack around this in the comments that you can use with the Twitter API. It’s not readily available to users, and based on Twitter’s current trend, could go away, but it works for now:

"You can access RSS through the Twitter API; if you read the documentation you are able to choose rss/atom for the feed options in some of the cases; for example: https://api.twitter.com/1/statuses/user_timeline.rss?screen_name=daveisanidiot is my home timeline in rss format. So although they may have removed links from the pages there is still a method to get at it. (http://dev.twitter.com/doc)"
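If you want to try that yourself, a few lines are enough to pull and print the feed. As noted above, this relies on Twitter's v1 API and could disappear at any time; the screen name is just an example:

```python
from urllib.request import urlopen
from xml.etree import ElementTree

# Sketch only: fetch the RSS flavor of a public user timeline from Twitter's
# v1 API, as described in Dave's comment. Twitter could remove this endpoint.
url = "https://api.twitter.com/1/statuses/user_timeline.rss?screen_name=jessestay"
tree = ElementTree.parse(urlopen(url))

for item in tree.iter("item"):
    print(item.findtext("title"))
```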

UPDATE 2: In case you were wondering about Twitter's attitude towards RSS, read this article in their Help section titled "How to Find Your RSS Feed":

“Twitter recently stopped supporting basic authentication over RSS in favor of OAuth, an authentication method that lets you use applications without giving them your password. You can read more about the change here: http://blog.twitter.com/2010/08/twitter-applications-and-oauth.html 

Because of this change, we no longer directly support RSS feeds on Twitter. 

  • If you would like to continue using RSS feeds from Twitter accounts, we recommend using a 3rd-party service.
  • Or, if you are comfortable with coding, use our developer resources to retrieve statuses."

Privacy is Not an On and Off Switch – "Do Not Track" is Not the Answer

Victoria Salisbury wrote an excellent blog post today on "Who's Creepier? Facebook or Google?". I've been intrigued by the hypocrisy of the criticism of Facebook's very granular privacy controls when sites like Google, Foursquare, Gowalla, Twitter, and others take an all-or-nothing approach with some things (location and email in particular) that are even more private than anything Facebook is currently making available (if you want some good examples, read Kim Cameron's blog). The fact is that Facebook, despite the amount of private data available, will always be my last resort as a hacker when I want to track data about an individual online, due to the granular control of data available and the lack of default public data. However, despite all this, even Facebook isn't at the ideal place right now in terms of privacy. The fact is my private data is still stored on Facebook's servers, and with that there will always be some level of risk, no matter where the data lives. So what's the solution?

Browsers such as Firefox and Chrome are now beginning to implement "fixes" around this problem of sites tracking data about users across online services (note my article on how even the Wall Street Journal is tracking data about users), called "Do Not Track." The extension, or in some cases native browser functionality, seeks to give users the option of completely turning off the ability for sites to track them around the web, removing any personalization of ads and in some cases removing ads completely from the browsing experience. This is fine and dandy – it gives the user an option. But as my friend Louis Gray puts it, "all it does is ensure off-target ads with a crappy experience." It is clear an on-or-off approach is the wrong approach, and I fear those behind these extensions and browser integrations are missing an important opportunity.

So where can we go from here if "Do Not Track" is not the answer? The answer lies in the problem I stated above – individual user information is being stored on 3rd party servers, outside the control of users and with the assumed risk of relying on a 3rd party. We saw this when Facebook made a temporary mistake earlier in 2010: they launched Instant Personalization on 3rd party websites along with other 3rd party website features, but in doing so accidentally opened up a majority of their users' private information with little notice to users (I did get an email warning of the change, however). Facebook quickly fixed the privacy problem with even better privacy controls than before, but by that point the damage was done. It was proof positive that there is huge risk in storing private information on 3rd party websites. The advice I give to customers, users, and news organizations in interviews is, "if you're not okay sharing it with the world, don't share it at all, regardless of privacy controls." It's an on-or-off solution at the moment, and I'm afraid there are no better choices.

There is a solution, though. Chrome, Firefox, IE, and every other browser out there should be working towards it. We need to take the granular controls that sites like Facebook provide and put them in the browser.

A while back I spoke of a vision of mine I call "the Internet with no login button." The idea is that using open technologies (we already have Information Cards, for instance), the more private information about users can be stored in the browser, reducing the risk of that information being shared by accident with 3rd party websites. Rather than something like Facebook Connect (or the Graph API), for instance, a browser-driven version of OpenID would control the user authentication process, identify the user with a trusted provider (Facebook, Google, religious institutions, government institutions, you choose), and then be able to retrieve private information about individuals directly from the browser itself.

The fact is I already use tools to do some of this. 1Password, for instance, allows me to keep a highly encrypted store of my passwords, credit cards, and other data on my hard drive and provide that data, as I choose, to the websites I visit. A browser-native experience like this would make the process automatic. I would specify which sites I give permission to have my data – name, address, phone number, email, location data, etc. – and I would also be able to choose which users have access to that data. I could then choose to store my more public data on services such as Facebook and elsewhere, with the option to still store it on my own hard drive if I choose. With such a fine-tuned integration my more private information is completely in my own control. My browser controls access to the data, not any 3rd party website or developer.
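To show the kind of granularity I mean, here is a toy sketch of a browser-side store with per-site, per-field grants. Every name and structure here is hypothetical – it's the idea, not a spec:

```python
# Hypothetical browser-side profile store with per-site, per-field grants.
PROFILE = {
    "name": "Jesse Stay",
    "email": "jesse@example.com",
    "location": "Salt Lake City, UT",
}

GRANTS = {
    "facebook.com": {"name", "email"},   # this site may read name and email
    "ads.example.net": {"location"},     # an ad network may read location only
}

def data_for_site(site):
    """Return only the profile fields the user has granted to this site."""
    allowed = GRANTS.get(site, set())
    return {field: value for field, value in PROFILE.items() if field in allowed}

print(data_for_site("facebook.com"))    # {'name': ..., 'email': ...}
print(data_for_site("unknown.com"))     # {} -- nothing granted, nothing shared
```

The dimmer I describe later in this post is exactly this: field-by-field, site-by-site control, living on my machine rather than on anyone's servers.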

At the same time, keys could be given to 3rd party websites to store my data on their servers. In order to render that data, they would need my computer's permission. The solution is not quite evident yet, but somehow a trusted, separate service should be able to provide the permission to render that data, and when that permission is revoked, all data, across all 3rd party websites, becomes disabled. Or maybe just a few sites become disabled. The goal is that control is handled completely by the user, and no one else. Maybe sites get disabled by my browser sending a "push" to them, forcing my data to be deleted completely from their servers (or rendered useless).

Chrome and Mozilla have a huge opportunity here, and it's not to provide an on-or-off switch for privacy. I should be able to decide what information I want to provide to the ads displayed to me, and that data shouldn't come from Facebook, Twitter, or Google. My browser should be controlling that access and no one else. Privacy belongs on the client.

I'm afraid "Do Not Track", in the browser or by government, is not the answer. There are better, much more granular solutions that browsers could be implementing. It is time we focused on a dimmer, not an on-and-off switch, for the open, world wide web. I really hope we see this soon.

Are Toll Roads Open?

Twitter proved me wrong. Well, sorta.

After my last article I received a whole slew of rebuttals from Twitter employees suggesting that article had "serious factual errors" and that the move by Twitter to charge $360,000 a year for 50% access to their full firehose through Gnip actually made Twitter "more accessible" and "open", not more closed as I was claiming. Before I start, I want to make sure it's clear to those Twitter employees: business is business – I have made no personal attacks here, guys. Please take this constructively. I'm only stating my viewpoint as one of your developers, and I think if you look at the replies to my post and retweets (and the comments on that post), you'll see many other devs that agree with me.

I'll give Twitter that credit, and I applaud them for it. Compared to yesterday, even with a paywall, Twitter's firehose is "more accessible". In addition, Twitter is one of the only content sites out there that even provides an API to their full firehose of data, and for that they should be applauded. It doesn't matter that two years ago all this data was supposedly available for free via an XMPP feed (and that claim really isn't correct anyway) – Twitter is still one of the only sites at least giving an option to scan their massive database. I think that's a powerful thing, and I'm definitely not discouraging it. I want to make sure we're absolutely clear on that: what Twitter did today was a good thing.

However, let me explain what I was getting at in my previous article. Even though Twitter is one of the only sites making this data available, they're setting a dangerous precedent for "open data". In essence, they're saying, "You can have access to an individual's Tweet stream (with limits). You can have access to the Tweet stream of your site's users (with limits). But to access all our data, you have to pay us." Now, let's go back to my "Pulse of the Planet" reference and compare it to a highway system. If Twitter were a highway, anyone could have access and go where they want, as they please, all for free. All destinations are possible as a result. However, by closing their firehose to only those that pay, they are offering only one road, to one destination. The thing is, anyone else can still get to that destination for free via other highways – it's just more difficult to do so. By creating a "toll road", Twitter is, in essence, creating a single route that guarantees direct access to the full data Twitter provides. Everyone else is stuck finding their own way, and what happens as a result is that they plan new destinations that are cheaper to get to. Which route is more open: the toll road, or the free highway system? This is actually a big debate in many cities – it's not an easy question to answer, so you may decide for yourself what it means, and maybe I was wrong in calling it closed earlier. However, I will argue that the "open" web is a highway. Twitter, at the moment, along with Facebook, Google's search index, Google Buzz, MySpace, and many others' data, are toll roads. Which is more open? I'm not even saying it's wrong to be a toll road. Maybe you can debate it in the comments.

What I'm getting at is that now that Twitter is charging for the full firehose, your data has a specific value to them. Their bottom line now relies on charging for access to half of their users' data. My concern is that now that Twitter is profiting off the full firehose, what happens when they realize this is making them money and they start charging for other pieces of their data? Money is tempting, and my concern is that this path leads towards more paywalls and more areas that just aren't open to the general public or normal developers. Call that "open" or not; as a developer, I'm very worried about it. I'd almost rather Twitter keep their firehose closed than charge exorbitant fees for it. Or just charge for the whole thing already and put us all out of our misery. On a site where it's very unclear how they're making, or going to make, money, this is a very scary thought for a developer that has been relying on a free API.

I’d like some comfort in this matter.  Can Twitter guarantee they won’t charge for any more of their data?  Or is this the path they are moving towards?  What’s the roadmap so we, as developers, can prepare for it?

I hope Twitter employees that disagree can do so in the comments this time – it’s much easier to have a sane conversation when your limit isn’t 140 characters.  Let’s keep this conversation going.  I hope there is some clarification on the matter.

Image courtesy http://www.carandhomeinsurance.co.za/home-insurance/articles/open-road-tolls-will-change-driver-habits_319

Twitter’s Gnip Deal Ensures a Closed Ecosystem

Today at the Defrag conference, Twitter announced a new deal with the real-time stream proxy Gnip, under which, for $360,000 per year, customers can search 50% of all content posted to Twitter. This follows the trend, as I mentioned earlier, of Twitter moving further and further away from an open platform and more towards one they fully control, which, as I'm sure is their opinion, will hopefully bring them more revenue in the future. The problem with this move is that rather than opening up the data of users on Twitter by embracing a real-time standard such as PubSubHubbub or RSS Cloud, where middleware proxies that anyone can set up filter the traffic coming through, they've entirely blocked that potential by ensuring their revenue source instead comes from what should be open data.
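For contrast, here is roughly what subscribing through one of those open standards involves: a single HTTP request to a hub that anyone can run. The hub, feed, and callback URLs below are placeholders, and the parameter names follow the PubSubHubbub 0.3 spec as I understand it:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# Sketch of a PubSubHubbub subscription request. The hub, feed, and callback
# URLs are placeholders; after verifying the callback, the hub pushes new
# entries to it as they are published.
params = urlencode({
    "hub.mode": "subscribe",
    "hub.topic": "http://example.org/some-user/feed.atom",   # the feed to follow
    "hub.callback": "http://my-app.example.com/push",        # where updates arrive
    "hub.verify": "async",
})

urlopen("http://hub.example.com/", data=params.encode("utf-8"))
```

No contracts, no $360,000 – just a feed, a hub, and a callback.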

This move is troubling. It means Twitter's revenue is entirely reliant on them being a closed ecosystem. The more they block data from the open web, the more they profit. This sets a very bad precedent that could very well seep into other systems of theirs in the future. In fact, I bet it will.

Twitter has every right to protect their main firehose – they have pretty much done so already.  The Gnip deal seals that direction even further though, and builds Twitter’s entire business model around content that should be free in the first place.  To be “the pulse of the planet” you cannot be a toll road.

Twitter has made similar moves recently, with their move towards their own user interface that only "preferred partners" can integrate with (meaning you have to pay to provide an interface to users on Twitter.com that might have an interest in your product), and with removing the obvious RSS feed links for users logged into Twitter, links that made it easier for users to retrieve and parse the content of those they want to follow on the service. This also comes after Twitter reduced rate limits and removed capabilities for applications to implement popular features on the site.

If one thing is obvious, it's that Twitter wants more control over its ecosystem. Unfortunately, control means a more closed environment, and the deal with Gnip seals that closed environment in stone for some time to come. I think we can pretty much count on Twitter being a closed, walled garden in the years to come.

I hope they prove me wrong.

UPDATE: Twitter did prove me wrong, sorta – read how here.

Mobile, Tablets, and the Need for an Extended E-Reading Experience

Imagine buying a book from the bookstore and only being allowed to highlight it with a yellow highlighter, without being able to add any notes as you read. Seems pretty ridiculous, doesn't it? Yet we're forced into exactly that with today's default readers on devices such as the iPhone and iPad, or even Amazon's Kindle and many readers on Android devices. Right now when you read books, you're forced into the experience the manufacturer of the device you're reading on has decided they want you to have.

On the iPhone and iPad, we're provided with iBooks, a beautiful reading experience and a great store to go with it that will even let you import PDFs and ePub-formatted books and documents. However, for the static content we read on these devices, we're stuck with only the ability to highlight in the colors they give us, copy, select, and a limited set of features to extend that reading experience. What if I want to draw a picture on the book? What if I want to add a text note? What if I want to share the text I just highlighted to Facebook? The same goes for other devices like the Kindle, and even Android, and I bet the same will hold for upcoming Windows smartphones. It has been this way since the PDA readers of the Palm and Handspring days. The reading experience on these readers of static, published content simply isn't extensible, and it hasn't evolved much in ages.

We need a Reader that has an API attached to it.  The API should tie into the highlighting, the selecting, the turning of the pages, the rendering of the content, the bookmarking, and more, so app developers can alter the reading experience beyond what comes with the device.  I’m talking about a plugin-type architecture for Reader apps that render static content.
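To make that concrete, here is a toy sketch of what such a plugin hook could look like. Every class and method name is hypothetical – the point is simply that the reader exposes events (highlighting, page turns, rendering) that third-party code can extend:

```python
# Hypothetical plugin architecture for an e-reader.
class ReaderPlugin:
    def on_highlight(self, book, selection):
        pass
    def on_page_turn(self, book, page_number):
        pass

class ShareHighlightsPlugin(ReaderPlugin):
    """Example extension: push highlighted passages to a social stream."""
    def on_highlight(self, book, selection):
        print("Sharing from %s: %r" % (book, selection))

class Reader:
    def __init__(self):
        self.plugins = []
    def register(self, plugin):
        self.plugins.append(plugin)
    def highlight(self, book, selection):
        # The reader does its built-in highlighting, then gives every
        # registered plugin a chance to react.
        for plugin in self.plugins:
            plugin.on_highlight(book, selection)

reader = Reader()
reader.register(ShareHighlightsPlugin())
reader.highlight("FBML Essentials", "FBML lets you embed Facebook widgets...")
```

A publisher like O'Reilly could then ship a plugin rather than a whole app – which is exactly the point of the paragraphs that follow.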

Currently just about all modern web browsers support plugins.  If I want to render a website in a slightly different manner than what the website owner intended for my personal uses, I can do so, and it sticks to my browser and my browsing experience.  Currently, in Gmail I use Rapportive to provide more information about the people who are e-mailing me.  It uses a simple browser plugin that reads, identifies, and alters the content of Gmail in a manner that is relevant to me, in a manner that the makers of Gmail probably never considered (nor did the makers of my browser).

Imagine, as you're reading a book, being able to pull in the relevant Tweets of other people reading that book at the same time. Imagine being able to share bits of what you're reading with your Twitter and Facebook friends. Imagine reading a book that automatically notices your Facebook account, reads information about you from it, and alters the content of the book based on who you are, perhaps even bringing you into the experience. Imagine the ramifications of this for textbooks that can learn about you as they present information you can learn from.

Currently we're reinventing the wheel over and over again as developers create new mobile apps that recreate the reader experience in various ways. My publisher, O'Reilly, for instance, is creating individual applications in the App Store just so they can have more control over the publishing experience for their books (at least I'm guessing that's why they do it), and so their readers get the experience they want to provide. (Search for "FBML Essentials" in the App Store to find my book.) What would happen if Apple instead provided the basic reader, and O'Reilly could then provide just the extension necessary to customize the experience for their readers?

By extending the basic book reader on mobile and tablet devices, I think we'll see a new revolution in the way books are published that print books simply cannot provide. It's time we break out of the static book-reading experience and provide an open, extensible experience that any developer can use to alter the way books are presented, while you, the reader, get to choose the best way to read them. This is the future. This is the "Internet with no login button" I talked about earlier. It's the Building Block Web, applied to books.

I wonder if Kynetx could power such an experience.

Pornography and Choice – The Dilemma Over the Future of Open

I've been following Ryan Tate's late-night rant (language warning) over Steve Jobs' desire for a world "free from porn" and his objections to it (though I'm still not completely sure of the purpose of his rant). While pornography was only one of the things Jobs highlighted, Tate, who has no children of his own, seemed to focus on it, considering a world "free from porn" an infringement on his own liberties. I'd like to take a different angle and share my own views, as a parent of 4 children, on how the web as we know it infringes my own freedom as a parent. It also infringes on my children's freedom, in the native, technology-level choices I have access to in order to protect my children and my family from pornography. That's right, I said it (well, I've said it before) – the web, while open, is not entirely free. Let me explain.

Let me start with the point that, while outside this blog I may have my own opinions and beliefs, I am not saying in any way, shape, or form whether porn is "evil" or "not evil", or whether it is "good" or "bad" for society. That is not the purpose of this article, and I'll leave it for you to decide. One thing I think we can all agree on, however, is that, for good or for bad, pornography affects us all, and, as an individual and a father of 4 children, I don't have much choice in the matter. Let's face it – whether I want it or not, my children are going to see porn, probably many, many times in their lives, perhaps well before they are old enough to even know what it is. As a parent, at least the way the open web works, at a native level I don't have any choice in the matter. Is that freedom?

Right now we live on a very open web. It's a vast web, linked together from website to website, which enables sites like Google and MSN and others to index that content and provide answers to many questions. We have a whole lot more knowledge because of that. At the same time, it's a very wild-west atmosphere – the very openness we're all fighting for under "Net Neutrality" leaves parents and families without the control they so desperately want to keep their children from accidentally stumbling onto things they don't want them to see. This is probably why much more closed environments like Facebook are thriving – we're being given some level of control, as parents and individuals, over this very open atmosphere. We need an open way to fix this problem. Or maybe closed is the only solution…

Let me share an example: My daughter, who is 9 (not even starting puberty yet), told us about her friends at school talking about various sexual topics. She told us about one friend, a boy, who wanted to know what sex was, so he Googled "sex" on the internet – something he knew how to do from school when he had a question about how something works or what something is. Needless to say, we were fortunate that our daughter, at age 9, asked us about this before Googling it herself, but we were now forced to give "the talk" to a 9 year old. I can only imagine that boy's parents – I hope he talked with them about what he found.

As a father of 4, I'm scared to death of what my kids are going to have to go through. I certainly don't want to shelter them from the world, but at the same time I want to be the one introducing them to the world, not the world getting to them first. We need innovation in this area. I'm worried it's an area that gets little attention because the innovators in this space either aren't parents themselves or have no objections to their children seeing it. The thing is, this isn't a "good" vs. "bad" battle. This is a battle about true "freedom". This isn't about anyone telling you that you can't watch porn. This is about those on the web who don't want to watch it or come across it being able to avoid it entirely, as a native component of the web.

Right now all the solutions out there are hacks. Solutions like (my favorite – I'll be doing a review soon) Net Nanny, Norton Internet Security, and others are great at helping parents monitor what their kids are doing and even protecting them from things their parents don't want them to see, but in reality they're just solving a problem the web should have solved in the first place. Pornography, sexual content, violence, or anything else we, as parents and individuals, want a handle on should be handled at the core of the web. The web needs elements to identify this type of content, and ways to penalize those that don't identify their content; otherwise the freedom that should be inherent to the web is taken away. The web should be about choice. It's not at the moment.

At the same time, operating systems like Windows, OS X, the iPad, Android, and the iPhone all need layers built in that give parents and individuals more control over the content they want to see. I should note that Facebook, at the moment, has no way for me as a parent to monitor what my child is doing on the site – I can't let my kids on it until I have that control. Don't even get me started on Google Chat.

I'm not quite sure what the solution is, but we need innovation in this area. Perhaps XRD or the new JRD and identifiers for content are the solution. Maybe Google and Microsoft and others that index this content could reward sites that properly identify their data with higher search rankings. Maybe a ".xxx" TLD is the solution. At the same time we have to take into account chat and how people interact online. Maybe verified identity is the solution in that area. On the open web we can't give up on this effort, though, or the more closed solutions, like the one Jobs implied with the iPad, are going to win, and rightfully so.

Steve Jobs is right, whether Ryan Tate likes it or not – as a parent I am not free on the web right now. The only freedom I have is to just turn off the computer, keep my kids from learning technology at a young age, and hope they don't see it at school, at a friend's house, or elsewhere (which they will). Freedom is about choice – we should all have a choice in this matter, and that choice just doesn't exist on the web at the moment. I hope the Open Web can fix these problems before Apple, or Microsoft, or Facebook does it in a closed environment. Either way, I welcome the extra freedom I will get from it.

From one parent to another:  Thank you Steve, for trying to make my life as a parent a little more “free”.

Facebook Launches OpenGraphProtocol.org: Adds Second Product to the OWFa

Just two years ago at OSCON, Facebook, Google, MySpace, and others joined forces to create the Open Web Foundation, a sort of GPL-like agreement for platform builders, giving them a common agreement users could understand. Facebook announced their first support of the OWF agreement in November of 2009 with the launch of the OAuth WRAP protocol, an experimental protocol intended to lead to a more open authentication and authorization platform for Facebook. Today, with the launch of a new, non-Facebook-centric protocol page at OpenGraphProtocol.org, Facebook announced their second entry under the Open Web Foundation Agreement. According to Tantek Celik, and confirmed by Facebook's David Recordon on the OWF mailing list, 'Facebook's "The Open Graph Protocol" is the most recent user/adopter of the OWFa'.

What does this mean? Basically, it means that the new Open Graph Protocol announced by Facebook yesterday is under a completely open license agreement that other platform creators can adopt, use, and freely distribute without worry of patent. As I said, in many ways it is similar to the GPL, in that platforms created under this agreement are intended to be re-used and distributed across the web, keeping the license intact.

The Open Graph Protocol defines specific meta tags which sites can integrate to identify themselves as a "Page" on Facebook's social graph. Doing so, and identifying the page with Facebook, enables it to receive likes, activity updates, and more from Facebook users, along with "Social Widgets" the site can incorporate from Facebook. I'm still unclear how this benefits anyone but Facebook.
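For reference, the protocol boils down to a handful of meta tags a site places in its head to describe itself. The values below are placeholders; the property names are the core ones the protocol defines:

```python
# The core Open Graph Protocol properties, rendered as the <meta> tags a site
# would place in its <head>. All values are placeholders for illustration.
og_properties = {
    "og:title": "FBML Essentials",
    "og:type": "book",
    "og:url": "http://example.org/fbml-essentials",
    "og:image": "http://example.org/covers/fbml-essentials.jpg",
    "og:site_name": "Example Books",
}

for prop, content in og_properties.items():
    print('<meta property="%s" content="%s" />' % (prop, content))
```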

While Facebook's internal APIs still appear to remain proprietary, it's good to see Facebook starting to open up. The good thing about this protocol is anyone can mimic it or duplicate its functionality for their own purposes. This is something, other than OAuth WRAP, that Facebook just hasn't had up until this point. Let's hope this trend continues.