Revisiting Linux With elementaryOS, Acer Aspire V5

With the old MacBook getting on in age (it is an early 2008 MacBook4,1), the move to find a replacement for it was always on the cards. The machine has served me well, travelling with me to different parts of India, including high-altitude passes in the Himalayas. Of late, even after a complete reinstall, it has been showing its age, and with persistent heating problems and lock-ups, the writing was quite clearly on the wall. I could get it repaired, which I eventually will, but the board only supports DDR2 and the memory is maxed out as it is at 4GB. The only other option is to upgrade to an SSD, fix the problems and hope for the best after that.

The primary candidate for the replacement was the 13″ MacBook Air. After the millionth (failed) attempt to find a reasonably priced Linux laptop that just stayed out of the way, I was pretty sure that I'd have to stick to OS X and Apple, and have no choice but to gulp down the high premium that Apple charges for the fire-and-forget experience it is more than justifiably famous for. In the midst of all of this, I ran into an interesting, so-called Linux laptop from Acer. It is called the Aspire V5-431 and I found it at a pretty decent price on Flipkart.

At this point, I must digress a bit about the non-Apple laptops. Dear god, some of them, especially the Lenovo ultrabooks, are such a 'slavish' ripoff of the Apple laptop line-up. I can understand smartphones looking much like each other these days; there are not too many different ways in which you can design a phone. That is not the case with laptops, and the extent to which the copying happens here is really shameful. I guess none of these copies are much of a threat to Apple in the market, so it is probably not worth suing the manufacturers over, but it still is not a great thing to see. The V5-431 also suffers from a bit of this 'inspiration' problem, but it is hard to mistake it for an Apple unit.

The laptop comes pre-installed with Linpus Linux, which is instantly discarded by most users. But having a Linux laptop meant that I could have some degree of certainty that most of the bits and pieces would work well should I run some other Linux distro on it. It has been a while since I have used a Linux desktop as my main platform and it seems that while the underlying platform has changed a lot (and for the better), the user experience is still ghastly and inconsistent, featuring interfaces and UX that can only be created and loved by engineers.

That was when I came upon this project called elementaryOS. It is based on Ubuntu (the current version is built on Precise: 12.04.2), but an awful lot of work has gone into making the front-end user experience clean, consistent and fast. It is hard to miss the very obvious OS X inspiration in a lot of the visual elements, but once you start using it a bit more, the differences start to show up, and in a nice way. Linux on the desktop/laptop has been begging for something like this for years and I am really thrilled to see someone finally do it right. If you care to take apart the bits on top, you'll find a familiar Ubuntu installation underneath, but you really should not bother doing that.

I have gone through some three reinstalls of the OS so far, for various reasons. One thing you need to watch out for while sorting out eOS on the V5-431 is to stick to the 32-bit OS, as things get quite crazy should you attempt mixing i686 and x86_64 platforms while using virtualization. The eOS 32-bit kernel is PAE-enabled, so you can use more than 4GB of RAM on the machine, but I would highly recommend sticking to 32-bit for everything (OS, VirtualBox, any guest OS) and you'll not have a reason to complain. I discovered all of this the hard way, as my primary requirement is to have a working Vagrant installation on the laptop, and in the end I had to redo the base box in 32-bit (the original from the MacBook was 64-bit CentOS 6.4).

The experience with the laptop has been pleasant so far. I have ordered more memory (8GB, to be precise) and even at 2GB the machine feels a lot faster and more stable than the ailing MacBook. I will hold off on getting an SSD for now, as I feel the machine is quick enough for me at the moment and the extra memory will only make things better. After many attempts at customizing the interface, what I have realized is that it is best left alone. The developers have done a great job of selecting the defaults, and nine times out of ten the modifications you make are not going to improve on them. The only thing you'll need to do is install the non-free TTF fonts, enable them in your browser's font selection and get on with the rest of it.

Other than that, the main issue is color calibration of the monitor. The default install has a blue-ish tint and the blacks don't render true, which is infuriating on a glossy screen. I finally fixed the problem by calibrating the display on a Windows installation and pulling out the ICC profile from it. I'll share a link to the profile at the end of this post; if you have the same machine and are running Linux on it, use it. It makes a world of difference. You will have to install GNOME Color Manager to view the profiles.

After all of that, the machine seems quite a good deal for me. It does not heat up too much, is extremely quiet and weighs a bit over 2 kilos. The 14″ screen is real estate I appreciate a lot, coming from the 13″ MacBook. The external display options are standard VGA and HDMI. My primary 22″ monitor has only DVI-D and D-Sub inputs, so I'm waiting for the delivery of a converter cable to hook it up to that one. The battery is not the best, though. Acer has cut some corners there, but you can't have everything at such a low price. Even with the memory upgrade, the machine will still cost me less than a third of what a new MacBook Air (the base model, that is) costs right now. I'm getting around 2.5 hours of really heavy usage, which is not bad at all.

The stack is otherwise quite stable. It reads something like this:

  • Google Chrome
  • LibreOffice
  • Virtualbox
  • Vagrant
  • Sublime Text 2
  • Skype
  • Dropbox
  • VLC
  • Darktable

I'm not exactly a power user and 90% of my work is done in a text editor, a web browser and VLC, but the combination of eOS and the Aspire V5-431 is something I can easily suggest to a lot of people looking to break away from regular Linux/Windows/OS X, and that too at a good price. There is a new version of the laptop out with the next generation of the chip, but I have not seen any great benefit from that upgrade, which will cost a bit more. You can spend that money on getting more RAM instead.

eOS is also a nice surprise for such a young project. With time it will only get better and eventually become quite distinct from being an OS that merely looks similar to OS X.


Encryption Is Not The Answer

The strangest reaction to the entire privacy mess that is unravelling is the quest for even stronger encryption. If you ask me, that is trying to solve the problem at the wrong end. We can, in theory, go in for unbreakable-grade encryption and hope to keep everything away from prying eyes. That would be fine if the problem we are dealing with were limited to the expectation that your communications will be private by default.

The question we need to ask is whether all this snooping actually delivers the results that we are looking for. And it is a particularly tricky one to answer, as a certain amount of force, once applied, will produce at least a minimum level of results. Any sort of enforcement, once deployed, will have at least some impact on crime anywhere in the world. So, yes, you will catch at least a few bad guys by doing things to catch the bad guys.

Thus, things turn a bit more nuanced than the binary “will it” or “won’t it”. It also becomes a question of efficiency and effectiveness. And this is where the tenuous contract between the state and its subjects comes into play. Historically, this has always been a clearly understood tradeoff. In exchange for giving up absolute freedoms and an absolute right to privacy, the state provides you a stable and secure socio-economic environment.

You Are With Us, But The Data Says You Are Against Us

The efficiency and effectiveness of any system are not determined only by how wide a coverage the system can aim for. A good example of this is Prohibition, a system that worked by outlawing the production of alcoholic beverages. The coverage was complete, yet it was hardly foolproof and led to other major problems. In instances like these, the contract is greatly strained and, barring the exceptions of war or episodes of tyrannical rule, it inevitably breaks.

The power of any state, especially a democratic one, draws heavily on allowing the majority of the population to feel that the state looks after their best interests. This keeps the state and the subjects on the same side of the divide, even though the state has always been more powerful than the individual. It works well only when the system assumes that the majority of the participants are good people, with a reasonable margin for error.

The same tradeoff, in free societies, allows you to keep knives at home without being suspected of being a killer, even though some (albeit a small number) have killed others using a knife. If, one fine morning, the state starts treating anyone who has a knife as a potential killer, the system will eventually break down. A state's power may be considerable, but it is still power granted by the majority of its subjects. The moment a state makes almost all of its subjects suspects in crimes that may or may not happen, the contract breaks, and it breaks for good.

If you concern yourself with systems, their design or their study, one thing that will stand out before long is that there is no perfect system or law. The best ones are those that aim to get it wrong the least number of times, with allowances for fair redressal, rather than those that aim to get it right all the time and try to be absolute. In a healthy system, the subjects don't expect the state to always be right, and the state does not expect the subjects to always be wrong. This is what keeps the tradeoff viable for both parties and, like any good bargain, it requires both parties to behave within expected lines.

A healthy system is less likely to punish the innocent, even at the cost of letting more of the guilty escape punishment.

The breakdown aside, there is the question of efficiency. Systems that try to examine every interaction will always provide initial rounds of success. Over time, though, the participants in any evolving system adapt (consciously or subconsciously) to the examination, and soon you have a system that tracks everything yet catches nothing, as you have now given the majority of the population an incentive to be evasive (for fear of wrongful prosecution). It is easier to find 50 bad apples in a batch of 200 than in a batch of 200,000.

In one fell swoop, you have made every subject a potentially bad person, leaving the utterly distasteful task of proving the negative as the default. Even if you ignore the issue of false positives, such systems are impossible to sustain over longer periods, as they get more and more expensive over time while becoming less efficient.

Role Of Computing

Major developments in computing in the new millennium can be broken down into two things. First is the ability to capture vast amounts of data. Second is the ability to process them in parallel and find patterns in them. Collectively, we have come to call this “big data” inside and outside tech these days.

We have always had the ability to capture data. The concept of accounting itself is almost as old as human civilization. Data has always been part of our lives; it is only the extent of the data captured that has grown over time. Given enough resources, you can capture pretty much everything, but data itself is worthless if you can't process it. This is the reason why we never thought much of big data until now.

One of the greatest silent revolutions of computing in the past decade has been the shift from identification through the specific to identification through patterns. In the late 1990s, when the internet was taking its baby steps to becoming the giant it is today, the identification of you, as an individual, was dependent on what you chose to declare about yourself.

There were other subtle hints in use, but most of anyone's idea of who you were depended on what you chose to disclose. Over time, that changed to observing everything you do and figuring out who you are really likely to be, based on the actions of a known group of people whose actions match yours, even if what they have declared about themselves has nothing in common with what the system has decided they are about.

In daily life, you see this in action in contextual advertising and recommendation systems. In fact, almost the entire sub-industry of predictive analytics depends on making inferences such as these. This, aided by the vast amount of public data we produce these days, has meant that profiling a person as being of a particular type (provided there exists a vast amount of profiled, known data) can now be done in seconds, compared to weeks or months earlier.

“If he looks like a traitor, walks like a traitor, and talks like a traitor, then he probably is a traitor”

The above line could easily fit how any overly suspicious state thinks of its subjects, but it is just an adaptation of the most famous example of inductive reasoning, the 'Duck Test'. The earlier point about knives in societies makes a lot more sense when seen in the light of this test and big data.

Even in earlier times, we could collect information about every knife made and sold in a country, but mining useful intelligence out of it was a hard job, and doing it at a reasonable speed was even harder. After all, there was no point in finding out only now that Person A, who bought a knife six months ago, was likely to commit a murder, which he in fact did four months ago.

The advances in computing now enable us to predict who is likely to buy a knife in the next four months and, given the profile of activity of murderers in our records, who among the knife buyers of the last three months is likely to commit murder in the coming months, at what time of the day and on which day of the week.

That has to be a good thing, right?

Not really.

How Wrong Does Wrong Have To Be To Be Really Wrong?

If you are smart, the truth that you quickly learn from dealing with large amounts of data is that it is an imperfect science. The science is good enough to build an advertising business that will wrongly recommend tampons to someone who is very much a male or wrongly suggest an ex-husband as a potential mate on a social networking site; but it is nowhere close to being good enough to identify potentially bad people, based on patterns and inferences.

If we go back to the earlier point about what constitutes a good system, something that gets it wrong the least number of times, systems built on aggregating data (or metadata) are terrible ones. It is not that these systems don't get it right; they do, probably even 70-80% of the time, but they also get it terribly wrong the remaining 20-30% of the time. When an advertising or recommendation system gets it wrong, it causes a bit of embarrassment and maybe much ire; when a surveillance system gets it wrong, you wind up putting way too many innocent people behind bars and destroying their lives.

People who work with big data in advertising and other online operations will be the first ones to tell you that these systems need constant tweaking and that they’re always prone to known and unknown biases based on the sampling and collection. In working with big data sets, the first assumption you make is that you are probably seeing what you want to see as what you are collecting often has the bias of your desired outcome built into it.

The Sordid Tale Of Bad Outcomes Born Of Good Intentions

With all of these flaws, why is there this major attraction in the intelligence and law-enforcement communities to wholly embrace these flawed technologies? The answer lies in how the nature of conflict has changed in the 21st century.

Once upon a time, wars were simple affairs. A strong army would, more often than not, decimate a weak one, take over the lands, wealth and people of the defeated and expand their kingdom. These used to be pretty isolated and straightforward affairs.

Modern warfare bears little resemblance to any of that. For one, absolute might has become less relevant in these times. The fear of a lone bomber these days causes more invisible damage than an actual bomb that kills many. This asymmetry has brought about a substantial shift towards placing absolute importance on prevention rather than retaliation.

The good intention is prevention. The bad outcome is all the snooping and data collection.

Enforcement and intelligence agencies, anywhere, love preventive measures. The fine balancing act of imprisoning 20 innocents to catch two really guilty parties to save 20 million is a debate that rarely finds a conclusion agreeable to everyone.

What makes the outcome so dangerous is that such profiling is based on actions that are performed by the majority of the population who have absolutely nothing in common with a person looking to blow up something.

The problem is that drawing such inferences gives enforcement and intelligence a magical shortcut: identifying subsets of people who can be investigated further on the basis of their belonging to the same bucket. Given how the inferences are made, it is easy to be bucketed in the same group if you have the same usage profile on a handful of harmless websites as a known suspect.

And given that pretty much everyone has done something not entirely right at some point in their lives, this also opens up a vast avenue for abuse by an overactive arm of enforcement, purely on the basis of suspicion rather than any actual fact.

More Encryption Is Not The Answer

Coming back to where we started, the fact is that encrypting anything and everything does not keep you safe from any of this. In fact, using so much encryption will probably mark you as suspicious from the outset, and that suspicion can be used to procure permissions that will force either you or intermediary organizations (ISPs, web hosts, the list is endless) to cooperate.

Another reason why encryption fails is this: even on a fully encrypted loop, if the other party you are communicating with is susceptible to pressure, all that is required is for that party to silently cooperate with whoever is investigating them. That requires no brute-forcing or any other fancy tech. It just requires a single weak link in the chain and, unfortunately, the chain has many weak links.

In conclusion, the problem at hand is not a quandary that is technical in nature. It is one that is about the relationship between the state and its subjects. In a rather strange twist of fate, this is exactly what the terrorists wanted the modern state to become — one that lives in fear and lets that fear oppress its subjects.

Once we reach that point, it is an endless slide down the rabbit hole, and I am afraid we won't realize the extent of that slide before a lot more damage is done.


On The Content Business

Fair disclosure: I have no idea why Jeff Bezos bought WaPo. You won’t find much about that in this post. This is going to be a rambling, ill-focused post.

Much of the discussion around the content business eventually comes around to the question of paywalls and subscriptions. I feel this is the wrong approach to finding a future for an industry that has always had a key role to play in society. The business of content has not been supported by subscriptions for a long time, and that was the case even before the internet became as big as it is right now.

The scale that the bigger content businesses achieved in their glory days was not there because the consumers were paying a price close to what it took to produce the content. The scale was there because of the advertising the content producers could bring in. The majority of the damage has happened on that front, and trying to repair it by getting subscriptions to cover for it is bound to fail.

The business of content is really quite simple:

  • What do you publish?
  • How is it consumed?
  • Who gets to consume what is published?

Together, the three factors make a publication a platform play for advertising. Yes, subscriptions are there, but they only make for a bit of nice loose change in the larger picture.

A hypothetical publication, if it attempts to explain its business of content, may wind up looking like this:

What do you publish: A weekly magazine on automobiles.

How is it consumed: Print and internet.

Who gets to consume it: 20-45 years old, 80% male, from the top three metros in the country.

Where the internet has been destroying the old content business is in the 'who gets to consume it' part. You are not going to make up for the losses on that front by trying to fix the subscriptions part of the business. That horse bolted long, long ago, and the fact is that, as a large publication, you cannot hope to survive and revive based on how much more you can charge your subscribers.

The key question is: how can you deliver a better audience for your advertisers, without compromising the quality of what you publish? There seems to be little effort being put into addressing that crucial question. Audiences these days, like good content, need to be curated and nurtured.

It won't be an easy thing to do, though, as traditional advertising is used to picking quantity over quality, and a historical lack of instrumentation in the industry has allowed it to get away with this. So even the newer products and models essentially reinvent the older, flawed way of doing things, and a genuinely different way forward seems to be nowhere in sight.

Javascript Corruption In Vagrant Shared Folders

If you are serving JavaScript files from a typical LAMP stack in Vagrant using shared folders, you will hit a problem where the JS files are served truncated at arbitrary lengths. Curiously, this does not seem to affect other static text file types, and it could be a combination of headers and caching that is responsible.

By the looks of it, the problem is not new. This thread from the VirtualBox forums describes the same issue, going back all the way to 2009. The last post in the thread provides the right solution, which is to turn off sendfile in the httpd config.

Curiously, EnableSendfile defaults to 'off' in the stock installation, but setting it off explicitly gets rid of the problem. This should be fun to dig into and unravel, but I will leave that for another day.
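For reference, the fix is a one-line directive in the httpd configuration; the file path below is purely illustrative, any loaded conf file will do:

```apache
# /etc/httpd/conf.d/virtualbox-workaround.conf (illustrative path)
# sendfile() serves files straight out of the kernel page cache, which
# is not invalidated reliably on VirtualBox shared folders, leading to
# truncated or stale JS files. Turn it off explicitly.
EnableSendfile Off
```

Restart httpd after the change so the directive takes effect.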

Quick Tip On Shared Folders And Logging In Vagrant

Continuing with the recent posts on Vagrant, today, we’ll look at the tricky issue of shared folders and using them as locations to store logs.

My idea with using Vagrant was to keep all development-related files, including logs, on the host machine through shared folders, while the guest would access these files only through the shares. This gives you the best of both worlds: you can use your editor of choice on the host, while the files are executed on the guest. This works fine on a set-up that has only a few shares and not more than a port or two forwarded.

For a bit of background, this is how Vagrant goes through its start-up cycle.

The first cycle is all network-related. Vagrant detects any forwarding conflicts, cleans up old forwarding settings and, once the coast looks clear, sets up all the forwards specified in the Vagrantfile.

The next cycle is the actual VM boot, where a headless instance of the VM is kicked into life.

Lastly, Vagrant mounts all the shared folders.

The problem starts when the guest machine begins processing its init.d directives during the second cycle. The shared folders often take a good chunk of time to mount, and depending on the level of panic triggered in the software started by init.d when it encounters missing files (missing because, well, the shared folder that has them has not been mounted yet), life may move on peacefully (with adequate warnings) or the software may just error out and die.

One such piece of software is the Apache HTTPD daemon. It can start up without issues if it can't find the documents it has to serve, but it simply throws up its hands and quits if it can't find the log files it is supposed to write to. And a good developer always logs everything (as she/he should).

The solution, in the case of HTTPD, is to ensure that you log to a volume that is on the guest machine and not on the host. This does mean that you can't tail the log file from the host to watch errors and requests stream by, but that is a small price compared to debugging the mysterious deaths of an HTTPD daemon that starts up fine when you do a 'restart' once the VM is fully up and running.
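A minimal sketch of that arrangement in the httpd config (all paths here are illustrative): the logs go to the guest's own disk, while the document root can stay on the late-mounted shared folder, since httpd only warns about missing documents.

```apache
# Logs on the guest's local disk; httpd dies at boot if these paths
# are unavailable, so keep them off the shared folders
ErrorLog  /var/log/httpd/dev-error.log
CustomLog /var/log/httpd/dev-access.log combined

# The document root can live on the shared folder; a missing
# DocumentRoot only produces a warning at start-up
DocumentRoot "/vagrant/www"
```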

Port Forwarding Small Port Numbers With Vagrant On OS X

While working with a Vagrant set-up it is easy to forward ports with the forwarded_port directive.

This is accomplished by making entries in the format below in your Vagrantfile:

config.vm.network :forwarded_port, guest: _guest_port_number, host: _host_port_number

The catch here is that Vagrant won't forward small port numbers on the host machine. This means you will have to access the service on a higher port number, which is a bit of a downer considering that we are going through all of this pain to have a development environment that is nearly an exact clone of what we will find on production.

The solution is to use ipfw (the humble IP firewall on BSD-derived systems, including OS X) to forward the low port to a higher port, and then forward that higher port to the corresponding low port on the VM.

Let us assume that you want to forward both HTTP (Port 80) and HTTPS (Port 443) to the Vagrant VM.

First, use ipfw to forward the ports with the host:

sudo ipfw add 100 fwd 127.0.0.1,8080 tcp from any to me 80

sudo ipfw add 101 fwd 127.0.0.1,8443 tcp from any to me 443

Then map the higher host ports to the corresponding lower guest ports in the Vagrantfile.

#forward httpd
config.vm.network :forwarded_port, guest: 80, host: 8080

#forward https
config.vm.network :forwarded_port, guest: 443, host: 8443

I do realize that this is a bit of a loopy way to go about it, but when you have to juggle port numbers in a complex deployment environment, the overhead of keeping the differences in mind (and the set-up/code changes that handle them) and the propensity to make mistakes only increase over time.

As far as I know, you can do the same with iptables on Linux, if ipfw is not your poison of choice, but I have not tested it.
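For completeness, here is a hedged sketch of what the iptables equivalent might look like, mirroring the same 80→8080 and 443→8443 mapping as the ipfw rules above. As noted, I have not tested this myself:

```shell
# Redirect incoming low ports to the high ports Vagrant forwards
# (rules live in the NAT table's PREROUTING chain)
sudo iptables -t nat -A PREROUTING -p tcp --dport 80  -j REDIRECT --to-port 8080
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8443

# PREROUTING does not see locally generated traffic; add an OUTPUT
# rule if you also want to reach the service from the host itself
sudo iptables -t nat -A OUTPUT -p tcp -o lo --dport 80 -j REDIRECT --to-port 8080
```

Remember that these rules are not persistent across reboots unless you save them with your distribution's mechanism (e.g. `service iptables save` on CentOS).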

Vagrant, FreeNas And An Automated Life

Somewhere during the week my OS X Leopard installation became so broken that it made more sense to do a clean install than attempt another workaround. The installation was well over two years old, and for the amount of fiddling and development work that gets done on the machine, it had held up quite well. But after a point, the debt accrued from numerous hacks and fixes around problems started eating up more time and thought, and I felt it was better to dedicate a day or two and start from scratch with OS X.

Having chosen that path, I decided to change the manner in which I do my development. Having heard a lot of good things about Vagrant, I wanted to give it a shot. One of the broken things on the old installation was Qt, which meant that I could not install VirtualBox, as it depends on Qt. Thus, moving to a Vagrant-based development set-up would have necessitated a reinstall in any case. The other change I made was to convert my old netbook into a FreeNAS machine, which means it handles DLNA, storage and file transfers on its own and also consumes very little power, being an Atom-based machine.

Instead of using one of the many available boxes for Vagrant, I built my own base box with the following set-up:

CentOS 6.4 (64-bit)

Once I can find a bit of time, I will upload the base box and related Vagrant files for anyone who would like to use the same.

The current set-up forwards everything on port 80 to the VM's port 80 using ipfw and, using Vagrant's folder sync, all my web directories remain on my main hard disk. This has given me a very consistent development environment, much like my production one, and it also means saying goodbye to the confusion one goes through every time something has to be compiled and run on OS X, even though that has become so much easier compared to 6-7 years ago.
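For the curious, a set-up like this can be sketched in a Vagrantfile roughly as follows; the box name and the paths are mine and purely illustrative:

```ruby
# Vagrantfile (sketch; box name and paths are illustrative)
Vagrant.configure("2") do |config|
  config.vm.box = "centos-6.4-x86_64"   # the custom-built base box

  # Host port 8080 is where ipfw redirects port 80 on the host
  config.vm.network :forwarded_port, guest: 80, host: 8080

  # Web directories stay on the host disk, mounted into the guest,
  # so any editor on the host works on the files the guest serves
  config.vm.synced_folder "~/work/www", "/var/www/html"
end
```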

This also has the benefit of making the primary OS a much simpler affair to handle. It now has only a code editor, Git, Subversion, Vagrant, a couple of browsers, Dropbox and LibreOffice as its main components. My development stack is almost at a level now where I have nearly complete device independence and portability, as I can always move my development Vagrant box to another machine and get started there in less than 5 minutes.

The timing for doing this, though, was not great. I lost a good two days to it, but the longer it was put off, the more it would have cost in time and money along the way, and this is one of the many investments that need to be made towards running a smart and lean operation. The next steps will be to fully automate provisioning, testing, logging and deployment, which will let me focus more on the key aspects of running the business than spend many hours on repetitive tasks while waiting to find talented people who can do all of this.

Farewell To An IP

Slicehost, the company, stopped existing a while ago after it was acquired by Rackspace. So the title is a bit inaccurate that way, but having kept the same VPS with them since 2008 (04/04/2008, to be precise), I still consider the server a Slicehost box rather than a Rackspace one. It had gone through a couple of rebuilds in the years since 2008, but the IP remained the same. It had not hosted anything worthwhile in a long time, though, and I had kept it going only because I was still using the DNS management, an item on the to-do list that was finally dealt with in the past couple of days.

So much of the technology stack has changed in the past 4-5 years. There is now an ocean of high-quality managed services available for so little that it is ushering in a new generation of companies that do only integration and value addition, by connecting the dots together. Once you dip your toes into those waters, either as a customer or as someone who builds the integrated solutions, the possibilities are endless. Those possibilities are also rapidly changing the world of enterprise, especially at the smaller end.

What used to cost thousands of dollars and at least a small team to build and maintain can now be put together and maintained for a fraction of the cost, if you know how to go about it. The playing field changes almost every 8-12 months as new technology and pricing become available in the market. This, though, may not always be a good thing, as the choices can often be quite confusing and cost alone is not the only determining factor. Organizations that leverage these changes early and effectively will have a huge advantage over the ones that are slow.

And it should be a worrying fact that the cycle from hyper growth, to consistent growth, to plateauing growth, to declining growth is rapidly shrinking these days. Multi-billion dollar companies are created and disappear well within a decade, while it used to take close to that long just to reach hyper growth at one point in time. The high-end consulting and service companies won't have it that easy for a lot longer either.

Go West, Young Man And Other Tales From The Entrepreneurial Crypt

Washington is not a place to live in. The rents are high, the food is bad, the dust is disgusting and the morals are deplorable. Go West, young man, go West and grow up with the country. — Horace Greeley

The context may be different, but the theme — that the fight is simply not worth it here, aim for the Western market — is a recurring one in the digital entrepreneurial space in India. The difficulties in starting up in India are well known and documented. The most recent notable account was Dev Khare's 'The Silent Killers of Startup Growth'.

The popular thesis seems to be that it is better not to build a product aimed specifically at the Indian market, but at the global one. This thesis is backed by two kinds of proof: the first being the success story of Wingify, and the second being stories like Linea, which has reportedly raised $4 million recently. An app like that would not stand a chance in India, no matter how well executed it may be.

The problem has different parts:

1. Lack of funding.

2. Lack of an existing market.

3. Lack of exits, M&A activity.

4. Product DNA that’s not tailored to the Indian audience.

Most of these factors actually compound each other, so the effect is rather drastic on both activity and perception of the market.

But, Hold That Thought

The story is not all gloom and doom, as shown by the SAIF Partners story. The fund apparently made 4x returns on its first fund and is on course to do 5x on its second. Not bad for a country that seems to be a bad bet for entrepreneurs, eh?

The devil in any story (positive or negative), though, is always in the details. SAIF's portfolio is not limited to digital and is spread across different domains. They also struck out with iStream, which recently shut shop, and the prospects for the e-commerce plays are not too bright at the moment (Zovi may be an exception due to their manufacturing background).

Even then, their willingness to make big bets across sectors and land more hits than misses in a market like ours is remarkable. And, having met the team a couple of times, I have to say that they are very approachable and low key.

Let us be honest. The Indian story is not a straightforward one. As Archit Gupta rightly pointed out on a Hacker News thread, success here can often be about having the right connections. A good product and a great team addressing a potentially huge market opportunity are absolutely no guarantee of success here. Connections, above everything else, matter.

Even when corruption and regulation are not determining factors, who you know in a company and how much you can influence them is often more critical to closing a sale here than having an excellent product. Unfortunately, it is also a reality that we cannot choose to ignore if we have to grow in the market.

The way out of this morass is neither simple nor easy. There are some really excellent people in every part of the ecosystem who are good and who are looking to do good, but they are nowhere close to being empowered to do it. For all of us who care enough, it is imperative to make whatever changes we can, even if it looks hopeless. It is even more important for those in influential positions to drive this change.

It will take time, it will be hard, but we can break this wall down, one brick at a time.

Please Don’t Stop The Music

And just like that, Flipkart announced the demise of Flyte, their digital music offering. The numbers are pretty damning: 100K paying customers is not a great number when you consider that even a single track purchase at Rs. 5 counts as a paying customer, and we don't know the detailed breakup of the numbers.

The biggest downside of this development is that it will now set a benchmark of sorts at 100K users for any paid digital content product in India, at a really low ARPU. This will have a chilling effect on anyone looking to get into this segment, as Flipkart's failure will loom large for a long time to come; at least until the fundamentals of the market change.

While it is hard to figure out what exactly caused Flipkart to shut Flyte down within a year (sorry, no insider info), from the outside it would seem that the company miscalculated the market size and costs. The product probably made sense two years ago, when it was critical for the company to widen its base of offerings and its topline; much has changed (drastically) since then.

Even if you set aside the licensing costs (the minimum guarantee mess), it still costs a lot to deliver the product. Going by NBW's 2.5 million downloads/100K users number and Medianama's Rs 9-12 ARPU, the revenue barely touches Rs. 1.5 crore. Another, slightly more liberal, calculation does not push the revenue over Rs. 3 crore for the same period. Even the most optimistic scenario barely covers the licensing cost, in a segment with little hope of hitting hyper growth.
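For what it is worth, here is one way to read those numbers. This is an assumption on my part — treating the Rs 9-12 ARPU as a monthly figure sustained over a year of operation — since the actual breakup was never made public:

```python
# Back-of-envelope reading of the Flyte numbers quoted above.
# Assumption (not insider data): the Rs 9-12 ARPU is per user per month,
# sustained over roughly a year of the service being live.
users = 100_000               # reported paying customers
arpu_low, arpu_high = 9, 12   # Rs per user per month (Medianama's range)
months = 12

revenue_low = users * arpu_low * months    # Rs 1.08 crore
revenue_high = users * arpu_high * months  # Rs 1.44 crore

CRORE = 10_000_000  # 1 crore = 10 million
print(f"Rs {revenue_low / CRORE:.2f} to {revenue_high / CRORE:.2f} crore")
```

Under that reading, the top end lands just under the Rs 1.5 crore figure; a more liberal reading (higher ARPU, longer window) roughly doubles it, which still stays under Rs 3 crore.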

None of this should have come as a surprise to the company, as these are well known facts about the digital goods market in India. What has changed is the outlook in the primary business Flipkart is in and the bleak prospects there. With their road ahead firmly set (grow massively big or die quick), they can’t afford to be in niches that won’t enable hyper growth. Flyte seems to be the first casualty of that.

And, oh, incidentally, if you think the Spotify clones are doing any better out here, you are mistaken. They have to pay per stream (at least the cases I know of), monetization is scant and some are already looking for more money to sustain themselves in the long run.


The Community Edition: LocalCircles, Tumblr, WordPress, Quora

Online communities predate the content publisher/consumer face of the internet by many years. The earliest communities were the famed bulletin board services and Usenet groups, and now they're making a comeback. This post will take a look at some of them.


Location-based social networks have a huge amount of unrealized potential, and LocalCircles (in invite-only beta) is an attempt to realize it in the Indian market. I am a big believer in this product segment and wanted to build a product in it myself, but wanting and building are two different things. In the U.S. market, a product called Nextdoor has been getting some good press of late, and LocalCircles is pretty similar to it.

The big idea is very simple. If you can cover around 500 localities in two years, and if each locality generates an average of 1,000 page views per day (not entirely impossible if the product picks up), that makes for a neat half a million pageviews a day. For a locality to be allowed to exist, it needs at least 20 members. You can easily average 200 members per locality once the engines get cranking, and in the same two-year period you can have 100,000 registered users who are validated by location, interests and fellow circle members.
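The arithmetic behind those figures can be sketched quickly. The per-locality pageview input below is back-derived from the half-a-million-a-day target (500,000 / 500 localities), so treat it as my assumption rather than a product number:

```python
# Back-of-envelope sketch of the LocalCircles projections discussed above.
# All inputs are the post's own estimates, not real product data.
localities = 500                 # localities covered over two years
pageviews_per_locality = 1_000   # implied daily pageviews per locality
members_per_locality = 200       # average members once a locality matures

daily_pageviews = localities * pageviews_per_locality   # half a million a day
registered_users = localities * members_per_locality    # validated user base

print(f"{daily_pageviews:,} pageviews/day, {registered_users:,} registered users")
```

The interesting part is how linear the model is: every new locality adds both inventory (pageviews) and a validated audience (members) at the same time, which is exactly what makes the monetization options listed below plausible.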

The monetization options are numerous: Classifieds, recommendations, deals — the list is endless. The true power in the product lies in the core of it — its exceptionally good quality of users.

Now that I have gushed enough about the positives, it is time to look at the problems. It all revolves around one little word: curation. Good communities, as a rule, need to be curated aggressively. There are exceptions, which we'll get to later, but a good community is like a good garden: everyone envies a good one, but few can manage the effort that goes into tending it.

The first step for a new product like LocalCircles is seeding/bootstrapping individual communities. While the power of network effects is well known, brand new networks have little of it in their early days. Bootstrapping 500 communities in this manner will take a lot of effort and money. And once you get a locality going, you have to curate for the quality of content and member behaviour (any kind of arbitration/resolution in online communities is tedious). If you don't set and enforce the rules early enough in the game, it can all unravel rather easily, and once it does, it is a genie that can't be put back in the bottle.

Which works almost completely against what gets the valuation bells ringing rather loudly these days — the big ramp-up. I can only hope both the company and its investors are aiming for the long, slow game rather than the short grow-like-mad-at-all-costs-and-be-valued-at-a-billion-USD game.


Speaking of curation and the impact it has on the quality/health of a community, I guess it is now OK to come out and speak about the acquisition of Tumblr by Yahoo!. Enough has been written about whether Marissa Mayer will do a Geocities redux (look up the Fred Wilson connection there; the Geocities deal kicked off his life in the big league), so I'll not go down that route. If Mayer had not snagged the deal (once the exclusivity window was over, the bidding war would have started in earnest), the press and punditry would have roasted her alive, and they are roasting her alive now that she has done the deal. Ergo, the roasting is a given. No point paying much attention to it.

Anyway, coming back to curation: Tumblr is one of the most un-curated networks out there. Yet the scale of the content flowing through it is so massive that someone had to step up and buy it (for the sake of convenience, I'll ignore the terrible monetization issues and how much it costs to keep it going). In the world before Tumblr, posting someone else's content on your page/site was a complete no-no. Tumblr popularized the reblogging concept, essentially making it easy to produce reams of good content on your page even if you didn't produce a single piece of your own.

Now, that sounds a lot like curation, does it not? Almost every active user on Tumblr acts as a curator of fine things in the topic of their liking. Which is why it is a massive hit with the adult content (*cough* porn) community. You have some of the best experts in any domain (there are a lot of them in the adult content niche, it would seem) picking out the best for you, in a format with nearly no ads, no crazy app requests and no 'real name' issues. There are only hours and hours of nearly endless goodness.

So, technically, there is curation on the Tumblr network. But it is a network of curators rather than of the curated, and therein lies a significant difference.


Speaking of being different, we come to WordPress, which recently celebrated its 10th birthday. WordPress is an interesting product (yes, it is a product; the company that manages it is called Automattic). The open source product (as seen at WordPress.org), which is the foundation of the various offerings, is available for everyone to install and use. The company also runs the biggest managed/hosted WordPress network at WordPress.com, along with a VIP program.

Automattic has been profitable for a while now, and they have been one of the quiet success stories of the content and community world. They are also quite a boring company (in a nice way), in that they publish a lot of their stats, saving you all the guesswork, instead of taking the time-tested route of hammering out the 'explosive-month-on-month-growth' riff on any stage available. To do the 10-year celebration in style, they even got some money for the early investors and founders (around $50 million, at a rumoured slightly-less-than-$1-billion valuation) in a secondary deal.

Over time, they have integrated various community aspects into the product (networks/BuddyPress) and have seeded a less visible but massive developer community and ecosystem around the main product. There have also been less-than-spectacular successes on the community front. Gravatar was an early starter in the identity provider space, but it never really grew into something big. The same goes for IntenseDebate, which has been completely eclipsed by Disqus (there goes the Fred Wilson portfolio alert again).

Fortunately, the core products like WordPress.com and the VIP program continue to do well and will stay strong for years to come, keeping the quiet success story going for both the community around it and the owners and investors.


To end, I'll take a quick look at the elephant in the room — Quora. I can't help but be intrigued by a product that is so polarizing on the internet. The legion of people who used to love Quora and now hate it must be as big as the entire user base of Medium, the new pretty one in town. That said, Quora still has a lot of active users (reportedly, a majority of them from the minority of Indians who can write well, seeking refuge from the onslaught of the masses elsewhere) and the vital signs seem to be good.

With $60 million raised in three years (including a good bit of coin from a co-founder's own pocket), the company has enough money to last at least another couple of years, in which it will probably find a good suitor. One of the reasons why a lot of people walked off in a huff (including the rumoured sidelining of one of the co-founders) was the aggressive push towards increasing the user base and page views on the site. The hype cycle will play out over the next few years, and if the money does not run out first, they will be acquired by one of the larger digital companies, who will suddenly discover that they always had a latent desire "to share and grow the world's knowledge".

Sounds much like a future quote from Marissa Mayer, does it not?