Technology – Page 32 – Stay N Alive

Facebook Prepares to Compete With Gmail: Launches Messaging API

Facebook is moving too fast this week for me to keep up.  On the heels of the acquisition of FriendFeed, founded by the creator of Gmail, along with the launch of their new search interface, Facebook took it one step further today.  In an announcement on their blog, along with associated documentation on their developer wiki, Facebook released a set of new APIs that let developers begin writing software to read and display a user's inbox and messages from Facebook Platform.

While Facebook has offered a rich set of APIs since the launch of their developer platform in 2007, Facebook's messaging system has remained stagnant and seemingly untouched during that entire period.  Developers have been itching to get at a user's messages on their behalf to help fix this.  Not only is Facebook opening this up for developers, but they are also getting ready to launch an entirely new messaging system, currently being tested by a small group of users and set to launch "in the coming weeks".  Facebook also launched an interface into their notifications API, enabling developers to read a user's notifications and alert them when new ones arrive.  I expect this to be used in desktop applications such as Seesmic.
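
For developers curious what reading the new Inbox API might look like once whitelisted, here's a rough Python sketch against Facebook's old REST endpoint. The method name, parameters, and response fields below are my assumptions from skimming the wiki documentation, so treat it purely as illustration.

```python
# Illustrative sketch only -- the method name, parameters, and response shape
# are assumptions based on my reading of the developer wiki, not verified API.
import hashlib
import json
import time
import urllib.parse
import urllib.request

API_URL = "https://api.facebook.com/restserver.php"
API_KEY = "YOUR_APP_API_KEY"        # placeholder
APP_SECRET = "YOUR_APP_SECRET"      # placeholder
SESSION_KEY = "USER_SESSION_KEY"    # obtained after the user authorizes your app

def call_method(method, **params):
    """Sign and send a request to the old-style Facebook REST API."""
    params.update({
        "method": method,
        "api_key": API_KEY,
        "session_key": SESSION_KEY,
        "call_id": str(time.time()),
        "v": "1.0",
        "format": "JSON",
    })
    # Old REST API signature: md5 of the sorted key=value pairs plus the secret
    sig_base = "".join(f"{k}={params[k]}" for k in sorted(params)) + APP_SECRET
    params["sig"] = hashlib.md5(sig_base.encode()).hexdigest()
    data = urllib.parse.urlencode(params).encode()
    with urllib.request.urlopen(API_URL, data) as resp:
        return json.loads(resp.read())

# Hypothetical inbox call: folder 0 is assumed here to be the inbox
threads = call_method("message.getThreadsInFolder", folder_id=0)
for thread in threads:
    print(thread.get("subject"), "-", thread.get("snippet"))
```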

While people are speculating about the fate of FriendFeed after the acquisition, there seem to be two things on Facebook's mind lately: messaging and search.  With the creator of Gmail, and co-founder of one of the best real-time search engines on the internet (which just so happens to have a superior direct messaging system as well), now on their team, you can bet Facebook is already putting him hard at work on such features.  I hope and expect to see this new API implemented into FriendFeed's own messaging system as well – hopefully enabling you to import your Facebook inbox into FriendFeed's own DM box.  We'll wait and see.

It's no secret that e-mail is an old and outdated technology.  It goes without saying that we're now in a race for the fastest, most real-time, and most responsive messaging system to replace it.  While Google moves forward with Wave, you can bet Facebook will be doing the same with their own messaging.  With the ability to now truly identify individuals socially without the need for an actual "address", e-mail may actually be going by the wayside.

Gmail has yet to launch any sort of API into its own messaging (that I’m aware of) – this move by Facebook is unprecedented.  While Facebook will not allow developers to actually send messages on behalf of users (a wise and careful move, I’m sure), this makes Facebook even more “open” in my book.

Developers can sign up for the new messaging platform by signing up for the Inbox API whitelist.

Facebook to FriendFeed: "You Complete Me"

I have to admit I'm a little behind on the news of Facebook acquiring FriendFeed. My day today consisted of driving through upstate New York, and tonight I sit here typing just about a 15-minute walk from the beautiful Wonder of the World, Niagara Falls. There are so many better things to think about! (Just see the pictures I took tonight.) Yet, as I got the initial text via Tweet from Louis Gray today stating Facebook had acquired FriendFeed, I couldn't get my mind off of what that would mean for both services.

For those that know my background, I've written two books on Facebook, one from a developer/platform perspective and one from a marketing/user perspective (if you want to see what those books are, check out the upper-right nav of my blog). I've written numerous apps for Facebook, even sold one of them just a few months after developing it, and I spend much of my time consulting and helping major companies and app developers understand Facebook better.

At the same time, I have always been extremely bullish on FriendFeed. I love the open nature of FriendFeed and how it allows me to express myself publicly in ways I couldn't before. I love the messaging capabilities, and especially the search and notification options FriendFeed offers me. I almost wrote a post about the perfect trifecta of FriendFeed, Twitter, and Facebook, and how the three just work well together. I even predicted earlier this year that FriendFeed would be acquired this year. I have been rigorously researching and working on the developer platform that FriendFeed offers, and I love the open nature of it and their relationship with developers.

It’s because of this background that I had to think seriously about what I thought about this new relationship between Facebook and FriendFeed. I came to the conclusion that the two complete each other.  They’re like two puzzle pieces just waiting to be joined. The thing is, I can’t think of any reason why they shouldn’t be together. Here are just a few reasons why:

FriendFeed Needs Privacy Controls

I was actually about to put together a post on this exact topic.  I’ll try to keep it short here.  I noticed recently that Louis Gray’s wife joined FriendFeed.  I was excited, because this meant my wife, who is friends with Louis’s wife (albeit virtually), could actually have a chance at being convinced to do the same.  The problem is that we have children old enough that I would prefer we kept their identities off the internet as much as possible.

At the same time, I have had threats of physical assault before, via my blog and elsewhere.  I want to be more careful about what I type and who sees it.  The problem with FriendFeed is that not only do my friends see it, but so do their friends, and their friends' friends, and that pattern has the potential to go on forever.  We saw this with the "mob" mentality that drove Michael Arrington off the site earlier.  Yes, this is also a strength and part of what makes FriendFeed powerful, but on occasion I want some control over who gets to see what I post, based on the friend lists I have on the site.

Privacy is Facebook's main strength.  Imagine FriendFeed educating the Facebook team from their own experience with letting "friends of friends" see your data and cycling newly "liked" items back to the top, while at the same time Facebook lends its own expertise in letting those same users make their items more private.

I think there’s a lot of power in that, and it’s something FriendFeed doesn’t have yet.

FriendFeed Needs Profiles

I've asked for profiles for quite awhile now.  I want a way to identify myself on FriendFeed, so people can know who is behind the feeds they're reading.  This one is simple – there's no doubt that Facebook is good at this.

Facebook Needs Search

Facebook just proved today that this is a focus for them.  And guess what – one of the first things they announced in the rollout of their new search engine was their recent acquisition of the FriendFeed team!  This opens a lot of potential, not only for search in general, but for broad, real-time search across all of Facebook and beyond, something the FriendFeed team is really good at.  The FriendFeed team will also be very good at making this search more accessible to the public, all while respecting users' privacy preferences.

Facebook Needs Better Notifications

We all know FriendFeed is good at this.  On almost every page (and it was soon also going to be on search pages) you have the option to have posts, or posts and comments, sent to you via e-mail or IM.  Also, each page has its own RSS feed, with support for PubSubHubbub, something the FriendFeed team helped instigate.  This enables real-time updates via RSS.  Facebook enables RSS on only a few pages – the FriendFeed team is very aware of this, as they have been trying to import that data from Facebook themselves!  The FriendFeed team knows the headaches of the Facebook platform better than anyone!

Now, what if we could take Facebook's current SMS capabilities, something FriendFeed does not have, and apply them to the posts, aggregation, updates, and search notifications we see on FriendFeed currently?  I know I would certainly become a happy user, as this is something I've been asking for on FriendFeed since day one!

Facebook Needs Better Messaging

I saved the best for last.  Facebook has had its static Inbox for quite awhile now.  We know they're working on a new solution from posts by TechCrunch, AllFacebook, InsideFacebook, and others, but we also know this is something Facebook just isn't very good at.  The FriendFeed team re-invented messaging with their bare hands.  Let's put this in simple terms: Paul Buchheit, co-founder of FriendFeed, is the creator of Gmail.  He now works for Facebook.  End of story.

Be at Ease

Sure, this news took a lot of people by surprise (although it shouldn't have if you read my blog).  Some people dislike Facebook.  FriendFeed was the new early adopters' playground.  Early adopters don't like going back to old things.  It's a little scary for some.

I suggest you wait a little.  FriendFeed has a very competent team.  We still don't know what was in that contract they signed.  Sure, we have some hints, but FriendFeed has yet to let us down.  They have a perfect track record with long-time users of their service.  They know what's going to happen at Facebook, and while, sure, things can change, you can bet they'll be fighting for their existing loyal users.  You now know you have a team you can trust at Facebook, if you didn't have that trust before.

Also, here's what we do know: FriendFeed.com is still up and working just as well as always.  Facebook now owns FriendFeed.  FriendFeed may start integrating more deeply with Facebook – we don't know that for sure.  That's all we know.  Before you jump, think of your trust in the FriendFeed team and their track record so far.  Let's see what happens, and when they do take action that affects us, we can make our decisions.  Until then I'll be waiting, supporting them, and celebrating their recent success.

The two businesses could not have asked for a better relationship.  I think because of this, Facebook has all of a sudden put out the announcement that they want to become more open, like FriendFeed.  We should be applauding them!  FriendFeed, meanwhile, has just leaped over Twitter and become more mainstream than they could ever have imagined, while remaining the exact same service they always were.  We should be applauding them, too!  Now think about that for awhile and let's see what happens.

Now, if you’ll excuse me, I’m going to go spend some more time at the ‘Falls.

Photo courtesy http://blog.friendfeed.com/2009/08/friendfeed-accepts-facebook-friend.html

Oh, the Trouble With OAuth

This article has been sitting on my desk for the past week or so, and recent activities around the Twitter/Facebook/LiveJournal/Blogger DDoS attacks have made it even more applicable, so it's good I waited. The problem centers on the "Open" authorization protocol, OAuth, and how I believe it is keeping companies like Twitter, who want to be "Open", from becoming, as they call it, "the pulse of the Internet". The problem with OAuth is that, while it is indeed an "Open" protocol, it is neither federated nor decentralized. We need a decentralized authorization protocol that doesn't rely on just the likes of Twitter or Flickr or Google or Yahoo.

Let's start by covering a little about what OAuth is. OAuth centers, as the name implies, on authorization. This is not to be confused with identity, which other decentralized solutions like OpenID focus on. The idea behind OAuth is that any website, or "Service Provider", will accept a certain set of HTTP requests, handle them, and send responses back to the developer, or "Consumer", in exactly the same way as any other OAuth Service Provider does. OAuth tries to solve the issue of phishing and the storage of plain-text usernames and passwords by sending the user from the Consumer website to the Service Provider's website to authenticate (through their own means, or means such as OpenID), and then authorize. On Twitter this process is done via an "Allow" or "Deny" button the user can click to enable an application to make API calls on their behalf. Once authorized, the Service Provider sends the user back to the Consumer's website, and the Consumer is given a series of tokens to make API calls on behalf of that user.
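
To make that dance concrete, here's a minimal sketch of the three-legged flow from the Consumer's side, written in Python with the requests_oauthlib library and Twitter's OAuth endpoints as the example Service Provider. The consumer key and secret are placeholders you'd receive when registering your application.

```python
# A minimal sketch of the three-legged OAuth 1.0a flow (Consumer side),
# using Twitter's endpoints as the example Service Provider.
from requests_oauthlib import OAuth1Session

CONSUMER_KEY = "your-consumer-key"        # placeholder from app registration
CONSUMER_SECRET = "your-consumer-secret"  # placeholder from app registration

# Step 1: the Consumer asks the Service Provider for a temporary request token.
oauth = OAuth1Session(CONSUMER_KEY, client_secret=CONSUMER_SECRET,
                      callback_uri="https://example.com/callback")
oauth.fetch_request_token("https://api.twitter.com/oauth/request_token")

# Step 2: the user is sent to the Service Provider to log in and Allow/Deny.
authorize_url = oauth.authorization_url("https://api.twitter.com/oauth/authorize")
print("Send the user to:", authorize_url)

# Step 3: after the redirect back, exchange the verifier for an access token.
verifier = input("Paste the oauth_verifier from the callback: ")
oauth.fetch_access_token("https://api.twitter.com/oauth/access_token",
                         verifier=verifier)

# The Consumer can now make signed API calls on the user's behalf.
resp = oauth.get("https://api.twitter.com/1.1/account/verify_credentials.json")
print(resp.status_code)
```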

OAuth's strengths are that it is easily deployable by any site that wants a central, secure, and well-understood authorization architecture. Any developer can deploy an OAuth Consumer to communicate with any API that provides OAuth, because libraries have been built around the protocol in most developers' preferred programming languages, and adapting to a new site that implements OAuth is only a matter of changing a few URLs, tokens, and callback URLs. I'm afraid that's where OAuth's strengths end, though.

Let me just put this out there: the user experience behind OAuth is horrible! From a user's perspective, having to go to an entirely new website, log in, go through another authorization page, and then come back to the originating website is quite a process, especially for an e-commerce or web company focused on selling to that user. No e-commerce company in their right mind would put their users through it, as the sale would be lost on half the users who tried. Not to mention the fact that (and I don't know if this has anything to do with the actual OAuth protocol) with most OAuth implementations there is no way to customize the process the user goes through. For example, on Twitter, I can't show my users a message specific to my app when they authorize it. I can't customize it in any way to match my look and feel. I completely lose control when the user leaves my site to authorize and authenticate.

Let's add to that the problem of the iPhone, desktop apps, and other mobile apps. Sure, you can redirect the user within the app to a website to authorize, but again, you're taking them away from the app's flow during that process. It's a pain and a headache for users to log in using that method! Not to mention they have to do that EVERY. SINGLE. TIME. they log in through your app, since there's a good chance they were not logged into Twitter or Flickr or the other OAuth provider in the first place. It's a huge problem for OAuth developers on these devices, and less than ideal.

Now, back to my original point. The biggest problem with OAuth is that it requires a centralized architecture to properly authorize each application. We saw what a problem this is when entire apps, like my own SocialToo.com, couldn't authenticate users while Twitter was being bombarded by a DDoS attack. The need for centralized control of each app on their platform is understandable, in that in the end the companies implementing OAuth still need a way to "turn off" an application if it gets out of hand. Of course, one solution to this from the developer's (Consumer's) perspective is to implement their own authentication and authorization scheme rather than relying on someone like Twitter's. This is less than ideal, though, since most of our users already belong to some other network that handles this process for us. Why require our users to repeat the "account creation" process just to overcome centralization?

I think there is a better solution, though. What if a distributed group of "controlling sources" handled this instead, giving each company admin control over their own authorization? What I propose is that a new layer to OAuth be created (or a new protocol entirely, either way), enabling trusted "entities" to sync, on a peer-to-peer (federated) basis, authorization pools of users and their distinct permissions between each Consumer app and Service Provider. Companies/Service Providers could then register with these "controlling sources", and they would have admin access to turn Consumer apps on or off in the event of abuse within their app.

So let's say you're Twitter and you want to let developers authorize against your API. You register with one of these "controlling sources", they confirm you're legit (this could possibly be done via technology in some form, perhaps OpenID and FOAF), and they let you create your own "domain" on the controlling source. Twitter would now have their own key on the controlling source to give developers, and the controlling source would divvy out tokens to developers wanting to access Twitter's API. Twitter's API could verify with the controlling source on each call that the call is legit. To kill an application, they would just need to log into the controlling source and deny the application. The application would get denied at the controlling source before it ever hit Twitter's API.
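
Since no such protocol exists yet, here's a purely hypothetical Python sketch of how a Service Provider might verify an incoming call against one of these "controlling sources". Every endpoint, field, and token in it is invented for illustration.

```python
# Purely hypothetical sketch of the proposed federated authorization layer.
# Every endpoint, field name, and response shape here is invented to
# illustrate the idea -- no such "controlling source" protocol exists.
import requests

# A Service Provider could trust more than one controlling source; if one
# goes down (say, under a DDoS), verification fails over to the next peer.
CONTROLLING_SOURCES = [
    "https://auth-source-a.example.org",
    "https://auth-source-b.example.org",
]

def verify_consumer_call(consumer_token, user_id, requested_scope):
    """Ask the federated controlling sources whether this Consumer app is
    still authorized for this user before the call touches our own API."""
    for source in CONTROLLING_SOURCES:
        try:
            resp = requests.get(
                f"{source}/v1/authorizations/{consumer_token}",
                params={"user": user_id, "scope": requested_scope},
                timeout=2,
            )
        except requests.RequestException:
            continue  # this source is unreachable; try the next peer
        if resp.status_code == 200:
            body = resp.json()
            # The Service Provider admin can flip "enabled" at the controlling
            # source to kill an app before traffic ever reaches the API.
            return body.get("enabled", False)
    return False  # no controlling source reachable; fail closed

if verify_consumer_call("abc123-consumer-token", "user42", "read_timeline"):
    print("Serve the API request")
else:
    print("Reject: app disabled or authorization unverifiable")
```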

What makes this open is that, if this were itself written under an open protocol, anyone could theoretically create one of these “controlling sources”. So long as they operated under the same protocol, they would operate and work exactly the same, no matter who they were. Developers could then pick and choose what “controlling source” they wanted to authorize through. If one went down, they could switch to another. Of course, there are some security issues and authenticity of “controlling source” issues that need to be worked out, but you get the idea. This would essentially completely de-centralize the entire authorization process. Authorization itself would quickly become a federated process.

Now, that still doesn't solve the user experience issues I mentioned earlier. To solve those, I think we should look at Facebook and what they're doing with Facebook Connect. With Facebook Connect, the user never leaves the Consumer's website to authorize and authenticate. They click a button, a popup comes up, they log in, and a JavaScript callback notifies the app that the user has been authorized and authenticated. It's essentially a simple, three-step process that leaves the website owner completely in control. In addition, Facebook provides JavaScript methods allowing the developer to confirm various states of authenticity without the user having to leave the website. I'd like to see OAuth emulate this model more. Right now I'd rather implement Facebook Connect than OAuth for these reasons.

I think, as both Dave Winer and Rob Diana point out, the recent DDoS attacks against Twitter and other sites raise some serious issues. Twitter's inability to handle the DDoS attacks, compared to the others, shows we need much more federation from the site, as well as from the "Open" protocols it is trying to build around. Twitter wants to become a utility. There is no way that will ever happen until they federate, and I think that has to start with a change to the OAuth protocol.

Taking a Stand on Twitter’s Auto-DM Policy #endautodm

I've long mentioned my annoyance with the automatic DMs people send after you follow them, and elsewhere. It's one of the reasons I built SocialToo, and we're doing things there to combat the practice. Unfortunately it's not perfect. In fact, even with the anti-spam measures SocialToo has in place, it's getting to the point that most of the DMs I receive are non-legitimate messages that many of the senders probably have no idea were sent on their behalf. My other followers get hurt by that, because I can often miss their messages. Chris Brogan mentions fun140.com, which, I admit, has recently been a major source of the DMs I receive. But there are others too: Tweetlater (which has the ability to opt out, and we'll even do it for you on SocialToo), Twollow, Twollo, Mob Wars, SpyMaster, and many others. Too many to know which one to go to and opt out of, assuming they even offer a way to (which most do not).

Twitter could fix this easily. Facebook already does this – they allow users to indicate that they do not want to receive any more messages or invites from a specific app. The minute they do, the app can keep sending invites, but that user will never see them again. In addition, the app gets dinged with a worse "spaminess score", reducing the number of app invites it can send out per user. Users have full control, and they can still be friends with people who like to use these apps.

Twitter needs a similar system – it wouldn't be very hard to require all apps to identify themselves under a developer Terms of Service (I've talked about this before), either via OAuth or some other means, and then provide the tools necessary to let users opt out of receiving the DMs and @mentions generated by these applications if they don't want them. Based on a current discussion on Twitter's developer mailing list, I'm guessing developers wouldn't be opposed, either. At the very least, open up the API to allow the identification of these applications while requiring them to identify themselves. Blacklist and ban the applications not willing to comply.
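
To show how little machinery this would actually take on Twitter's end, here's a hypothetical sketch of the delivery check, keyed off the OAuth consumer key that identifies each app. Again, nothing here is a real Twitter feature; it's just the shape of the idea.

```python
# Hypothetical sketch of the proposed opt-out check. Twitter has no such
# feature today; this just shows how simple the server-side logic could be.

# Per-user set of OAuth consumer keys (i.e., apps) the user has blocked.
dm_optouts = {
    "jesse": {"fun140-consumer-key", "mobwars-consumer-key"},
}

# Per-app "spaminess" counters, in the spirit of Facebook's invite throttling.
spaminess = {}

def deliver_dm(sender, recipient, text, consumer_key):
    """Deliver an app-generated DM only if the recipient hasn't opted out
    of messages from that app."""
    if consumer_key in dm_optouts.get(recipient, set()):
        # Silently drop the message and ding the app's spaminess score.
        spaminess[consumer_key] = spaminess.get(consumer_key, 0) + 1
        return False
    print(f"DM to {recipient} from {sender} (via {consumer_key}): {text}")
    return True

deliver_dm("somefollower", "jesse", "Thanks for the follow! Buy my ebook!",
           "fun140-consumer-key")      # dropped, spaminess incremented
deliver_dm("somefollower", "jesse", "Hey, great to connect.",
           "tweetdeck-consumer-key")   # delivered
```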


I'm getting sick of the auto-DMs. Chris Brogan is sick. Sean Percival is sick. Robert Scoble is sick. Jeremiah Owyang is sick. The list goes on and on, and we're not the only ones. Starting today I'm taking a stand. I want to show what my inbox looks like right now. For that reason, I've taken a screenshot of my Twitter DM box and posted it as my Twitter avatar. If you are against this practice, change your avatar to your own DM inbox, and retweet this to your followers (click on it, and it will auto-populate Twitter for you):

I’m changing my avatar to my Twitter DM inbox in protest of automated DMs on Twitter #endautodm

You can include a link to this article if you like, but that’s not required. I want to send a message to Twitter that this is a serious problem. It’s time to end automated direct messages once and for all. I’m done with them. Will you join me?

Here’s a FriendFeed Real-time search of #endautodm – will you contribute? Just end your tweets with #endautodm:

http://friendfeed.com/search?q=endautodm&embed=1

Twitter’s New Monetization Strategy?

[Screenshot of the tweet from Twitter CEO Ev Williams]

This is just too good not to share. Twitter CEO Ev Williams tweeted this tonight – I'm not quite sure if it was supposed to be a DM, a drunk Tweet, or if he really did mean to send it, but I'm clueless as to what it's supposed to mean.  Is this Twitter's new monetization strategy?  Or did Twitter just find a new way to make itself a "utility"?  Let's see who can come up with the most creative explanation.

There’s More Than One Way to Store a Password – PerlMonks Hacked

Hackers are in a state of Nirvana, as it would appear they have hit the gold mine of programmer passwords in a hack of the popular Perl forum and resource site, PerlMonks.com, yesterday.  The hackers claim to have gained access to a database of more than 50,000 passwords, which, insanely, were stored in plain text for anyone to see.  They subsequently published the list to several mirrored servers (I can't find a link to verify, but it's not something I would publish anyway), along with the following statement:

“There is a really simple reason we owned PerlMonks: we couldn’t resist more than 50,000 unencrypted programmer passwords.

That’s right, unhashed. Just sitting in the database. From which they save convenient backups for us.

Believe it or not, there is actually debate at perlmonks about whether or not this is a good idea. Let’s just settle the argument right now and say it was an idea that children with mental disabilities would be smart enough to scoff at. We considered patching this for you but we were just too busy and lazy. I’m sure you can figure it out yourselves.

This isn’t a bad set of passwords, either. Programmers have access to interesting things. These Perl guys are alright, just a little dumb apparently. A lot of them reuse. You can explore them yourselves, I really do not want to point out anyone in particular.

In case you guys are worried, we did NOT backdoor dozens of your public Perl projects. Honest. Why would we want to do that?

Not worth our time ;)”

It's unclear exactly who, and how many, were compromised, but the site is recommending that all who have previously had accounts on PerlMonks.com change their passwords immediately.  In addition, one of the world's largest repositories of open source code, the CPAN network, has also recommended that its authors change their passwords, as evidently the two sites are somehow connected.

As a Perl developer, and CPAN author, this is a bit concerning.  First, it would be one issue if this were just some random group of people whose passwords had been hacked, but this is a database of tens of thousands of developers, probably most with root access to the machines they write code on, and according to the hackers, many using passwords that are being re-used elsewhere.  These are the passwords of developers like Chromatic, Brian D Foy, Andy Lester, engineers at major corporations and government entities, and more.  The hackers couldn’t have picked a worse server to crack and expose.

I'm baffled at what the PerlMonks developers and admins were thinking storing their passwords in plain text, something that, in my own opinion, is amateurish and should carry some sort of repercussions for their lack of responsibility in handling their users' passwords.  Password hashing is something that has not only been in Perl since version 1.0, but has also been integrated natively into almost every database environment on the planet.  That said, there is no privacy policy that I can see on the PerlMonks website, so maybe the users should have paid better attention.  I don't expect the PerlMonks admins to say that, though.  I'm ashamed as a Perl developer, and this gives a huge black eye to the entire Perl community.  It only gives further validation to the rest of the world's claims that Perl is for messy code.
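
For what it's worth, salting and hashing passwords takes only a handful of lines in any modern language. Here's one way to do it in Python using nothing but the standard library (PBKDF2 via hashlib); the iteration count is an arbitrary example value.

```python
# One way to store passwords safely: a random per-user salt plus a slow
# key-derivation function, instead of the plain text PerlMonks kept around.
import hashlib
import hmac
import os

ITERATIONS = 200_000  # arbitrary example value; tune to your hardware

def hash_password(password: str) -> str:
    """Return a salted PBKDF2 hash suitable for storing in the database."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return f"{salt.hex()}${digest.hex()}"

def check_password(password: str, stored: str) -> bool:
    """Re-derive the hash with the stored salt and compare in constant time."""
    salt_hex, digest_hex = stored.split("$")
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                    bytes.fromhex(salt_hex), ITERATIONS)
    return hmac.compare_digest(candidate.hex(), digest_hex)

record = hash_password("correct horse battery staple")
print(check_password("correct horse battery staple", record))  # True
print(check_password("letmein", record))                       # False
```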

I hope the PerlMonks developers and admins can make this situation right and not only fix their database, but also make amends with the community, and the rest of the world, whose trust they just violated. After this, I'm seriously considering switching to another language for my next project.

Hey Utah, You Have a Tech PR Problem

Those like myself who live in Utah know there is a thriving tech startup community here.  From early startups like Omniture, Freeservers, and WordPerfect, to newer ventures like SocialToo, TweetBeep, TodaysMama.com, FusionIO, i.TV (previously number 1 in the iTunes App Store), and FamilyLink (the makers of the Facebook app We're Related, one of the top 5 apps on Facebook), there's no shortage of innovation in Utah's tech community.  Add to that some very talented investors like Bryce Roberts, co-founder of O'Reilly AlphaTech Ventures, Peterson Partners, the entire Sorenson Capital team, and a vast array of angel investors and private equity options, and there's no shortage of capital to support that innovation, either.  Unfortunately, though, money and innovation are only part of the equation.  A company needs eyes.  It is extremely difficult to grow a tech company without the attention of Silicon Valley and the technorati out there.  So why is it that we so rarely see Utah companies on TechCrunch, or Mashable, or Gizmodo, or even ReadWriteWeb?

What amazes me is the vast amount of attention Boulder, Colorado startups get.  I think they know how to generate news, because the main "incubator" (for lack of a better term) of those companies is TechStars, and TechStars has an amazing success rate at cranking out fairly successful companies in a relatively short amount of time.  But I really don't think Utah has any shortage of tech startups on similar timeframes when compared to Boulder.  In fact, our startups have in many ways shaped the internet (the University of Utah was one of the first 4 nodes of the internet, after all).  On FriendFeed, I compiled a list of all the tech startups I could think of that either started in Utah and are now flourishing, or are brand new and working to get off the ground – this is what I came up with:

Of course, that list is just off the top of my head – there are many more that I'm sure will come up in the comments.  I look at this list of companies, and at the bustling activity of jam-packed rooms full of people at iPhone dev garages, social media developer garages, Tweetups, Social Media Club meetings, Launchups, and more, and I have to ask: why in the world is Utah having such a hard time getting into the tech press of Silicon Valley?  Utah has a serious tech PR problem, and I'd like to help fix it if I can.

So why the PR problem?  Well, for one, correct me if I'm totally wrong here, but I'm not aware of many tech bloggers in the area who are visible in the Silicon Valley scene and have over 1,000 subscribers who can get the word out easily.  I'm aware of three right now – please correct me if I've missed you: Matt Asay, Phil Windley, and myself.  Are there any more?  I think this could change if more people in Utah focused on technology in their blogging.  I've noticed a trend in Utah recently of many bloggers completely giving up on that, and it's depressing, personally.

Secondly, of those three bloggers (sorry Matt and Phil – you're going to hate me after this, I know), we're not getting pitched by Utah companies.  The majority of my blog audience right now, as you can see, is in Silicon Valley and states outside of Utah.  Chances are that if you're reading this you're not even in Utah, and I think that's sad, personally.  Utah companies have a huge opportunity to tap the natural bias of their local tech bloggers, which in turn could lead to TechCrunch mentions, Techmeme exposure, and more, and they're not even taking advantage of it.  If you run an open source company, you should be pitching Matt Asay to write about you in his Open Road blog on CNET.  Phil Windley is also very interested in that (as am I, occasionally), along with interesting startups and people for his IT Conversations podcast.  If you're building a social, real-time, or otherwise just plain cool tech startup, you should be pitching me to write either here or on LouisGray.com, where I occasionally write.

[Map of visitor traffic to StayNAlive.com – the darker states represent the higher-traffic areas.]

If you run a tech startup in Utah, money is hard to come by these days.  Exposure is easier than you think, though.  If you're hiring an expensive PR company to do this for you, you're doing it wrong.  You should start by pitching locally, and if that doesn't work (sorry – like an investor, bloggers have to turn down pitches as well), get on Twitter, build an audience, and most importantly, start your own blog.  If you ever want any advice on doing that, please don't hesitate to contact me.

There are hundreds, if not thousands, of new startups in Utah right now.  I don't know who you are.  There are hundreds of tech bloggers in the area, I'm sure, who could easily build an audience and help these startups.  I don't know who you are.  I'm not sharing this to boast of my own subscribers, but rather to offer a call for help.  Utah, let's work together to let Silicon Valley know we're out here.  I think if we do it right, we could, and should, very well be considered the next "Boulder" of the Mountain West.  How can I help Silicon Valley know more about you?

If you live in Utah, or run a business in Utah, let’s retweet this around so we can help each other out.  Please be sure to share it with your friends.

FriendFeed Opens Up the Firehose to Developers

FriendFeed seems to be staying one (or two or three) step(s) ahead of Twitter in everything they do. Today FriendFeed released their real-time stream of data in beta to any and all developers wishing to write applications. Unlike Twitter, there is no application process, no NDA to sign, and it is all controlled by simple OAuth. This also means users of FriendFeed-based applications will no longer need to retrieve a special key to enter manually, as was previously required.

The real-time stream is based on long-polling techniques to receive near-immediate updates of data from FriendFeed. With long-polling, a developer sends a request to a given address, and the server holds that request open until data is ready for it. The result is real-time data from the polled source, in this case FriendFeed. It is also less server-intensive than the typical push updates similar to what Twitter uses for its /track and real-time streams, so in theory it will scale better (and to me it shows the maturity of the FriendFeed team as compared to Twitter's).
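
In practice a long-polling client is just a loop: issue a request, block until the server answers or the request times out, process whatever came back, and immediately reconnect. Here's a minimal Python sketch; the endpoint, auth, and cursor parameter below are placeholders rather than FriendFeed's exact documented values.

```python
# Minimal long-polling loop. The endpoint and cursor parameter below are
# placeholders -- check the FriendFeed API docs for the real URL and auth.
import requests

STREAM_URL = "https://friendfeed-api.example.com/v2/updates"  # placeholder
AUTH = ("username", "remote-key-or-oauth-token")              # placeholder

cursor = None
while True:
    params = {"timeout": 60}          # ask the server to hold the request open
    if cursor:
        params["cursor"] = cursor     # resume from where the last poll left off
    try:
        # The server blocks until it has new entries or the timeout expires.
        resp = requests.get(STREAM_URL, params=params, auth=AUTH, timeout=75)
        resp.raise_for_status()
    except requests.RequestException:
        continue                      # transient error; reconnect immediately
    data = resp.json()
    for entry in data.get("entries", []):
        print(entry.get("body"))      # handle each new item as it arrives
    cursor = data.get("cursor", cursor)
```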

In addition to their real-time stream, FriendFeed released an OAuth solution to developers, enabling users one-click access to the FriendFeed data stream for compatible apps using the platform. SocialToo, my service currently using the Twitter and Facebook platforms, will be using this authentication as well as we integrate FriendFeed into our environment. It will enable simple, one-click login and registration into our system, making it much easier for users to use socially-based applications.

My favorite addition is the integration of social graph data into the stream returned by FriendFeed. Previously, only the list of people a user subscribes to was available via the FriendFeed API. Now, both the list of those a user subscribes to and the list of those subscribed to that user are provided, enabling apps like my own SocialToo to very soon provide useful analytics around those following you on FriendFeed. Yes, this will also enable auto-follow and auto-unfollow (to keep out spammers) if users opt to use them.

Other features released in the API are the ability to upload almost any file attachment to a user's FriendFeed stream, access to the powerful (and more-than-140-character) direct message features of FriendFeed, sharing to multiple streams at once, and more. In addition, FriendFeed is returning the HTML for users and groups, so developers don't have to differentiate between the two. Hopefully this will also enable FriendFeed to maintain control of the API and, if you ask me, provide advertising and monetization opportunities via the API in the future as well – something Twitter has completely lost control over.

FriendFeed's API has proven in the past to be a much more flexible option for developers than Twitter's, and I think they're proving that again with the new features. In addition to the features launched today, developers can also customize the requests they send to FriendFeed, specifying query parameters for exactly what information they want to retrieve about users, allowing much smaller and far fewer requests to the platform. This is a welcome sight compared to the Twitter platform, which requires entire requests to pull information about a user and their friends, resulting in much larger data transfers and higher costs for developers in the end.

FriendFeed is putting the pressure on Twitter with this release. My hope is that developers will see this, and try the platform out, giving Twitter more pressure to fix their own platform issues. If you haven’t tried it, today is the day for Social Platform developers to try FriendFeed’s API.

With No Notice, Twitter Adds More Limits – Password Trouble Ensues

Twitter is up to their old antics of adding limits, changing the API, and not telling developers as they do so.  This morning Twitter released into production new limits around the verify_credentials() method in the API, only allowing users to verify their usernames and passwords through Twitter applications 15 times per hour.  The problem is they didn't tell any of the developers.

Sure enough, searching Twitter (the issues are intermittent), users across the Twittersphere are having password issues and wondering what is going on.  It even affected my service, SocialToo, as we were using that method as a backup to verify users were indeed authenticated (and hence enabling us to notify them if they forgot to change their password with us).  I e-mailed Twitter, and while very respectful as always, they seemed surprised at the issues we were having.  When I asked if it had been announced anywhere, they responded, "It wasn't, no, because [we] assumed (apparently incorrectly) that people were only using this method occasionally."  There has still been no announcement by Twitter on the new limits.

Apparently, on June 29th, new text was added to the Developer API Wiki stating (regarding the verify_credentials() method in the API), "Because this method can be a vector for a brute force dictionary attack to determine a user's password, it is limited to 15 requests per 60 minute period (starting from your first request)."  The new limits don't appear to have been put in place until this morning, however, as that is when we noticed them at SocialToo.

So if you're using the verify_credentials() method in your app, you may want to consider finding some other way to be sure your users are verified – if you find one, I'm happy to announce it here.  It now takes only a few runs by a few apps to hit that limit for each user, leaving users dead in the water until the next hour starts, at least until apps adapt to these new limits.  That is why we're seeing the issues across all of Twitter.  According to Twitter, the best way is to look for a 401 response code in your API calls, as unauthenticated users will return as such when using the API.  Twitter suggests using verify_credentials() only for new users.  My conversation with Twitter ended with their suggestion, "Migrating to OAuth avoids the risk of a user changing her password, FWIW."
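
Here's the kind of change we made at SocialToo, sketched in Python: skip the up-front verify_credentials() call, make the API call you actually need, and treat a 401 as "these credentials are no longer valid". The endpoint and error handling reflect the basic-auth-era API as I understand it, so adjust them to whatever your app actually calls.

```python
# Sketch: instead of burning one of the 15 hourly verify_credentials() calls,
# make the API call you actually need and treat a 401 as "bad credentials".
import requests

def post_status(username, password, text):
    resp = requests.post(
        "https://api.twitter.com/1/statuses/update.json",  # basic-auth-era endpoint
        data={"status": text},
        auth=(username, password),
    )
    if resp.status_code == 401:
        # Credentials are stale -- flag the account instead of retrying.
        return {"ok": False, "reason": "unauthorized; ask the user to update their password"}
    if resp.status_code == 400:
        # In this era, a 400 often meant you had hit a rate limit.
        return {"ok": False, "reason": "rate limited; back off and retry later"}
    resp.raise_for_status()
    return {"ok": True, "tweet": resp.json()}

result = post_status("exampleuser", "examplepass", "Hello from my app")
print(result["ok"], result.get("reason", ""))
```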

FWIW, OAuth is still in beta and not yet suggested for use in Production. In their exact words, “For us, ‘beta’ really means ‘still in testing, not suitable for production use’.” In other words, use the Twitter API at your own risk.

You can follow the password problems as they happen in real-time on FriendFeed below:

http://friendfeed.com/search?q=password+service%3Atwitter&embed=1

Twitter Looking to Raise the Dead With Previous Tweets

One of the biggest complaints about Twitter is that it is only a "present-tense" service. To pull up a previous conversation or post, I either have to have favorited it, or it has to be within a user's most recent 3,200 Tweets. Anything beyond that disappears forever, or so it would seem. Twitter has previously said they are still archiving these old Tweets, giving comfort to some that maybe their conversations are not gone forever. Today Twitter gave further evidence of that, adding "Get the full archive of a user's tweets" to the V2 platform roadmap on their developer wiki (under "Users").

While the V2 roadmap is not set in stone, nor is it intended as an announcement platform for Twitter, it does suggest that some time in the near future we may see access to our previous Tweets opened up for public consumption. It also suggests that this is something Twitter is currently working on, or has plans to work on.

One of the reasons I joined Twitter was that it could be used as a journaling service. It was a public way I could record the little tidbits of life that, while perhaps insignificant to most, would enlighten and entertain generations to come who would like to learn more about me and my life. It lost that value, however, when I hit my limit of 3,200 (or so) Tweets and could no longer retrieve my past posts, thoughts, and conversations. My hope is that Twitter releases this soon so I can again use Twitter as an archival as well as a communications platform.

Have an old conversation you just wish would go away? It looks like getting it off your stream may not be permanent after all. Twitter seems to be getting ready to bring those zombies back again.  Now where's Buffy when you need her?