The launch of Facebook’s Graph Search has led to the expected feeding frenzy from the media. The product had long been anticipated, but what was delivered differed from what many expected. The hope was that the company would launch a product that would take Google head-on; what was delivered instead was a query parser that understands a restricted dialect close to natural language, along with considerably improved result display and filtering options.
The buildup to this event was visible in the company’s stock price, which rose over the past week to its highest point in a while. Going by the reaction to the launch, the product has not been received well: the stock is down, though still holding the $30 mark it had risen to. While the stock market is hardly a good indicator of the health of a company (ask Apple), it would seem that all is not well.
The fundamental problem with the Facebook vs. Google narrative is that until Facebook starts crawling the open web, it does not represent a threat to Google as far as search goes. The same holds true in the opposite direction: until Google explicitly starts building a social network to pull people off Facebook, it does not represent a threat to Facebook. In short, Facebook Graph Search is not search as you know it, and Google+ is not the social network you already use on Facebook.
Most of the divergence in the search approaches of Facebook and Google boils down to two things:
1. Intent & Context: Most of the content posted on Facebook is posted with an active intent to be consumed within Facebook. This context is vastly different from content on the open web where the context is determined by Google using their secret sauce.
2. Universe: In Facebook, the universe of data is what is created and shared within the network of users. If something is not shared or liked by someone in the network, it does not exist in Facebook. For Google, the universe is every page out there that can be crawled.
For both companies, battling the other is not the most significant challenge they face. Google needs a framework in place that will, over time, reduce its dependence on open crawling (pull) and move in a direction where content publishers intentionally push data into its index. This has the additional benefit of allowing Google to fend off lawsuits regarding sourcing (crawling) and preferential display (using Google+ pages for local data).
For Facebook, user retention and overcoming Facebook fatigue is the big challenge. It can build many wonderful things on mobile and elsewhere, but it will all come to naught if a good chunk of its users find it is no longer fun to be on the service or to stay active on it. The company has a long way to go to de-risk the core part of its business.
Coming back to the product specifics, it will be interesting to see user reactions once the feature is rolled out across the entire user base. Natural language querying has been an interesting niche for a while. It was once considered a panacea for all search ailments, but we discovered that query parsing is only one half of a brilliant search experience; the other (most critical) part is result quality and relevance.
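To make the "restricted dialect" idea concrete, here is a toy sketch of what parsing such queries can look like. The grammar, entity names, and output structure are entirely hypothetical and much simpler than anything Graph Search actually does; the point is only that a constrained phrase pattern is mapped to a structured query, and anything outside the dialect is rejected.

```python
import re

# Hypothetical restricted grammar: "<noun> of my friends [who live in <city>]".
# This is an illustration, not Facebook's actual parser or query model.
PATTERN = re.compile(
    r"^(?P<what>photos|posts) of my friends"
    r"(?: who live in (?P<city>[a-z ]+))?$"
)

def parse_query(query):
    """Map a restricted natural-language query to a structured filter dict."""
    match = PATTERN.match(query.strip().lower())
    if match is None:
        return None  # query falls outside the restricted dialect
    parsed = {"entity": match.group("what"), "relation": "friend"}
    if match.group("city"):
        parsed["filter"] = {"lives_in": match.group("city")}
    return parsed
```

A free-form query like "best pizza nearby" would return `None` here, which is precisely why parsing is only half the problem: the dialect decides what can be asked, while result quality decides whether the answer is any good.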
On Facebook, quality is going to be excellent when results are available, but I’ll wait and see how availability works out across a broad spectrum of queries. The trouble with socially networked data is that the results I see may not be the results you get to see. Having designed and run a private network for a while, I can tell you that this is a significant challenge that few understand clearly.
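The per-viewer divergence can be sketched in a few lines. In the toy model below (the data model and names are illustrative assumptions, not Facebook's), every item carries an audience, and search results are filtered against the viewer, so two users running the identical query get different result sets.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    audience: set = field(default_factory=set)  # user ids allowed to see it

# Illustrative corpus: "*" marks a post visible to everyone.
POSTS = [
    Post("alice", "hiking photos from the weekend", {"alice", "bob"}),
    Post("carol", "hiking trip report", {"carol", "dave"}),
    Post("alice", "public hiking guide", {"*"}),
]

def search(viewer, keyword, posts=POSTS):
    """Return only the matching posts this particular viewer may see."""
    return [
        p.text
        for p in posts
        if keyword in p.text and ("*" in p.audience or viewer in p.audience)
    ]
```

Running `search("bob", "hiking")` and `search("dave", "hiking")` returns different lists from the same index, which is the availability problem in miniature: relevance ranking can only work with whatever survives the visibility filter for that one viewer.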