Mobile Data Tales From Rural India

One of the rather unfortunate aspects of most of us switching to air travel as a primary mode of getting to places in the country is that we miss out on a lot of what goes on in the vast regions of the country that don’t fall into the urban/metro bucket. It is important to know what goes on in these regions because unlocking the potential in our billion-plus market has a crucial dependency on producing products and services that make sense for this market. With travel being sparse this year, it was a pleasure to hit the highways once again a week ago, and we traversed some rarely-visited parts of Uttarakhand, Himachal Pradesh and Haryana. It was back-breaking in parts due to the road conditions, but it was, as always, extremely informative.

State of Rural Mobile Data

At least in the regions we covered, mobile data was fairly ubiquitous. The speeds were not much to write home about. Outside urban centers, 3G in India is a joke, so I’ll not even try to address it. Airtel offers up EDGE wherever it can get a signal to you (which is, commendably, almost everywhere), but possible peak throughput on the network is often thwarted by abysmal backhaul. At one place (in Chakrata, Uttarakhand), I was getting 30000 ms pings to Google’s servers. Even 1997-era wired internet in India didn’t have to put up with something as terrible as that.

BSNL provides a service that is mostly there to make up numbers. If you put up a tower on a hill that can cover four villages, you’d technically be saying you cover four villages, but if within those villages your reception is limited to certain areas, what good is that service anyway? Similarly, they do offer data in these remote places, but these are often only GPRS services and not even EDGE. Then there are the longstanding complaints about the generators that power the towers running out of diesel and going offline for big chunks of time.

Low Data Usage On Low-End Smartphones

Between Airtel, Idea and Tata Docomo, you can now travel the country and stay connected with a barely-acceptable level of service. The modern, full-experience web is unusable under 20 Kbps, and that is exactly the kind of data quality/speed on offer in a big chunk of the country. If you keep this in mind, you can imagine why data services have not taken off at the lower end of the market. The first problem is cost, which I’ll tackle later. The second is that the speed and quality are so awful that it won’t be possible to even download apps that are just 5-10 MB in size.

The small towns work around this problem, to an extent, with “mobile downloading” shops that sideload apps. This will work for the popular apps, but for ones that don’t fall into the popular bucket, you’ll be out of luck. And then there is the case of app discovery, which will be non-existent in an environment like that. The next time you feel like making fun of people using nearly zero data on lower-end smartphones, do keep in mind that this is not just because they may be stingy (compared to the high-end smartphone owners); there are also other factors involved, which you won’t be aware of unless you travel regularly out of well-connected cities.

Form Factor, UI/UX Comfort

What was heartening to see, though, was that the newer devices have catapulted over a lot of the interface-related issues that have hampered PC penetration in these areas. Even now, many decades after its introduction, the PC is still not an easy device to master or interact with. Even experienced users have a degree of discomfort in using a PC, which is hard to explain. Mobile devices, somehow, have seemingly decimated this problem. Even the lousiest tablet interface is picked up in no time by a user who otherwise has a lot of trouble using a PC.

Is this because there are fewer things — a couple of buttons and a touch screen v/s mouse, keyboard, screen — to coordinate? I’ll leave that to the interface and UX experts to determine, but the change is palpable. At the small tea shop we stopped at, it was interesting to see a group of soldiers, one of whom was immersed in his 7″ tablet. Compared to previous years, the mobile phone shops also seemed to market and stock a lot more of the 4-5 inch screen phones. Three years ago, such a thing was a rarity.

Where Are The Products?

The adoption gap, in terms of time, between rural and urban areas for smartphones may be reducing, but we still don’t have products that mean much to these users. Most of the existing products address either extreme — the really upscale, urban audience on one end, or alerts about agriculture and similar things on the other — and there is really no product available that makes sense for the consumer in those markets. So, we wind up again with a situation where the content served up on these phones is basically pushed in through the aforementioned “mobile downloading” centers, which have been the perennial revenue leakage fountain in India.

Cost Of Data

Every year we have big numbers put out by different agencies about mobile data usage in India, but I have a hard time buying them. A remotely usable data plan in India will cost at least Rs 200 a month, even now. And that’s on EDGE and not even 3G. Considering that there’s still major resistance to a wired data connection at the Rs 500-Rs 600 price point, the Rs 200 per month cost is a non-starter. A wired connection can potentially be reused by members of the family, while mobile data is rarely shared.

Even if you consider two as the average number of devices in a household, that is a Rs 400 outlay on a fixed cost basis for a family. In a market that is as price sensitive as ours, this is not a good thing. Mind you, even at Rs 400 you are not going to get the full speed experience, which means that the user is more likely to try it and not continue because of the poor experience.

Adventures In Entrepreneurship Of An Asocial Person

For someone who would rather not interact with anyone at all, if I could help it, trying my hand at entrepreneurship was probably the most illogical thing to do. See, the thing with taking risks, for smart people, is to take manageable risks. And for an asocial person to attempt being an entrepreneur is akin to trusting an addict with the keys to the contraband store. The end results are rarely pretty and quite predictable. And that is as bad as it gets when it comes to taking manageable risks and getting it miserably wrong.

It, honestly, is not fun to face the end of the year (well, yet another year) with the same conclusion — that you need to try harder and that, for all the effort, the results are just not there — at what is effectively the fifth year of trying out entrepreneurship as a means to make a living. With the opportunity costs mounting, derision abundant and faith — your own and that of those close to you — non-existent, the logical choice is to be logical (and smart) and bail while you still can.

I have read probably hundreds of accounts of where I find myself right now. Yet, living it feels like nothing I have read before. Yes, I know the drill: you should reach out and find support in the people around you, and it will all become bearable, if not a whole lot better. That is the theory; real life is quite a bit different. Since the choices that led to now have been only mine, the failure belongs to me and me alone. Success — well, that finds many owners.

The dull, dark and dreary stuff aside, what I can tell you is that entrepreneurship is the best test of who you really are. And by entrepreneurship I don’t mean freelance gigs. Actual entrepreneurship tests you like nothing else. Everything begins and ends with you. And, should you fail, it will all exclusively belong to you. Considered as a series of coin tosses, this game has the odds loaded against you. Lose — it is all yours. Win — it is a collective effort.

In short, it is not for the faint of heart. Those who have succeeded, you have my greatest respect. Those who have failed, you have even more of my respect. Those who have failed and outlived the failure, I respect you the most.

At an earlier phase in my working life, I really held it against the talkers in the business. I mean, how could you, without anything to back it up, talk your way into a fine deal? It looked unjust, much along the lines of a lot of successful managers being good only at managing people, with no other core skill of their own.

And, as always, life becomes the greatest teacher. Business is the best social activity out there. It is all about people. You need to be able to read people well — to understand what they are looking for and appeal to the part of them that needs appealing to. That is the primary enabler of a deal, not any other ability of yours. Thus exists the baffling bias towards whoever can paint a pretty picture better, without having a clue about how to bring that picture to life.

Business is all about the persona, it is all about the narrative, it is all about being able to trust, just on a hunch. The longer you survive trying to do this, the more you realize that the overbearing bias towards a positive narrative and the persona is not an aberration, but a reasonably reliable social signal that is valid within a particular context. The popular exposition of this extends from never being fired for having Microsoft or Oracle as a vendor to understanding why that four-figure (in dollars) tux is well worth the price you have paid for it.

If you are smart enough, you will realize that value is not what you perceive as value, but value is what the key players in the game perceive as value. There are probable instances of overlap there — between what you think as value and what the crucial players think of as value — but it is a rare overlap to find, should it be the case that you are asocial by nature.

For me, it is a journey that has taken five years too long. I wish I could have seen and known all that I know right now when I started out in 2008. But, as life will make amply clear to you, there are things you learn by thinking and imagining, and there are things you learn only from experience. This is something that took me five long years to come to understand. It is frustrating as hell, and I won’t lie about it. But there is no other way I could have learnt all of this.

But, for all the losses, I am not giving this up. This is what I will sink or swim with.

Onward and upward!

About The Imminent Online Future Of Indian Media

NYT’s India Ink takes a swipe at that contentious topic of the future of media in India, seen through the eyes of the country’s emerging online media scene. The post covers interesting aspects of the problem and is well worth a read, but it also misses a few key points.

For one, niche, experimental new media websites are hardly a new thing in India. In some ways, we have been ahead of even the western markets on that front. There used to be this fantastic (but way too costly to run) product called The Newspaper Today from the India Today Group, and the first incarnation of Tehelka was another of these experiments. Considering that both were products from the 2000-2003 period, you will realize that our experiments in the space go that far back.

I was involved with both products for very short periods of time early in my career, and I went on to work at the digital operations of many other media companies after that. The idea that good content, somehow, will change the game was a popularly held misconception then, and it remains one even now; someone is bound to revisit that theme every couple of years, only to go home pretty singed by the whole experience.

Secondly, it is not the quality, but the cost that makes the proposition rather untenable in India. It costs way too much to create even less-than-average content here (points tackled in a bit more detail in an earlier post here); creating good quality content, along the lines of a daily, is even harder and costlier. The concept has been a first love of sorts for me, since content and journalism is where I started my career, and every now and then I wonder if I should try doing a venture there. By the time I am done with even the most basic financial models on it, the stark reality always holds me back.

Thirdly, the myth of the booming class of nouveau-riche Indians who are dying for quality English content is something that is created by people like me who want to read more of this type of content and imagine ourselves to be a growing tribe. Let me break it to everyone: we are not a growing tribe. We are a vocal, somewhat visible group given to group-think and internal amplification like any other group. Unfortunately, the group is so tiny that most niche online publications in India consider even half-a-million page views in a month an excellent month.

Lastly, it is not impossible to have a growing, scalable online content business in India. It will be in a non-English language, with content that probably won’t appeal to the upper class, and it will need the backing of some really good investors who are patient enough to put money into a team and a business that will take 3-5 years to bootstrap properly.

P.S.: Ironically, one of the people interviewed in the post, P V Sahad of VCCircle, was a colleague at The Newspaper Today. He’s one of the smarter guys in the business, who realized early enough in the game that there is no money in doing content if you want to do a lot of it.

How Not To Build Software For SMBs: SAP’s 3 Billion Euro Story

Tucked away in the story about SAP closing down its SMB suite is one significant detail: It cost the company about 3 billion euros to develop.

SAP, one of the world’s biggest makers of business management software, originally projected that Business by Design – which was launched in 2010 – would reach 10,000 customers and generate $1 billion of revenue.

The magazine reported, however, that the product, which cost roughly 3 billion euros to develop, currently has only 785 customers and is expected to generate no more than 23 million euros in sales this year.

By comparison, in the second quarter, SAP’s software and software-related service revenue stood at 3.35 billion euros.

I am astonished by how on earth you can spend 3 billion to develop almost any software, let alone one that is aimed at small and medium-sized businesses. The ERP/CRM/PLM landscape these days is an ocean of riches for companies looking for an implementation, be it customization of a generic product or niche, extremely vertical ones. When anything costs that much to develop, it is hamstrung from the word go. I am just surprised that they have managed to keep it going for ten years. For some perspective, three billion is still considerably north of what most successful software companies with revenue in the hundreds of millions are valued at.

Looking at the pricing for the product (link), it makes no sense at all. If you have to price a product/service like that, then spending even a billion euros on developing it is nothing short of suicidal. They were hopeful of doing a billion dollars in revenue (annual, I assume), which is astounding considering that their main offerings brought in under 4 billion euros in the second quarter. Normal SAP implementations are long-winded, expensive affairs, which is a key factor in how the company makes most of its money. When big companies lose their way, they tend to do it in a spectacular manner like this.

The SMB marketplace is extremely price sensitive, and resistance to any change is fairly commonplace. The newer crop of companies who provide similar services also operate on lower pricing, have no contracts and don’t have many of the other nice bits companies like SAP are used to. Not surprisingly, the revenue is not expected to top 23 million euros this year, which puts it only in the league of a successful newer SaaS company.

The upside to all of this is that it makes acquisitions a better option for companies like SAP. Three billion euros could easily have been spent as 300 million every year on acquisitions, and it would not be a stupid bet to assume that they could have had at least a couple of winners in that pick. That said, the conflict between the business models of the newer crop of software companies and the older mammoth-sized companies is far from a resolution. Companies that are acquired this way often have to force-fit themselves into the larger picture, which can be a huge drag. But that’s a different story altogether, for some other time.

Is this the best we can do?

This is a post that has been in the works for a while, even though I have not been able to find a way to frame it in a manner that is to my liking. It does not cover the usual tech/early stage/digital topics that I prefer to write about here. In a sense, it actually does cover all of them, but it goes a little bit further than that.

The time since June this year has really been quite an intense one, mostly thanks to a complete switch to focusing only on executing plans made over the past year. Somewhere in that time period I stopped logging into Twitter (I don’t have Facebook/G+ accounts other than ones I keep for work) to intensify the effort to get more done as I’m really prone to getting distracted easily. The idea also was to hit a few targets before I allowed myself to be active on Twitter. Some of those targets have been met, while others have not (that’s a post for later) and even though I have started lurking on-and-off on Twitter these days, something about the state of affairs in this very (digitally) social age bothers me.

The question that keeps coming back to me is: “Is this the best we can do?”

If you look at the history of mankind, this very moment that you are reading these words is the most enabled entire societies have ever been to do good. A vast chunk of humanity carries in their pockets more computing power than was available to any individual, irrespective of money, even as recently as 50 years ago. We have access to information at practically zero cost, on our fingertips — information that tens of thousands, in earlier times, fought and died to create and to access. We can connect and communicate with others sitting halfway across the globe, at the speed of light, while 50 years ago two-way communication was still a marvel of technology that was accessible to a handful of people.

All of this should have made better people of us. We should have become more open, considerate and warmer towards our fellow beings. Yet, for all that this should have enabled in us, we only seem to have grown a stronger sense of entitlement. As people, we communicate more (actively and passively); yet, we are more isolated from each other than ever before. All this technology should enable governments to serve those they truly serve — the people — a lot better; yet, the same technology is being used to shackle people rather than to free them.

This, I must stress, is not a holier-than-thou exposition on my part. In the past months, I have had fleeting episodes where I could set aside my own limitations, prejudices and conditioning to reflect on the life that I have lived and the values that I have lived by, and it is not a pretty picture. I have often reveled in being sarcastic, dismissive and not doing even 1/10th of what I could really do. I am as much a part of the problem as anyone else, and my disappointment is with myself as much as it is with anyone else.

We think of legacies as what we leave behind at a particular point in time. We are wrong in thinking that. Our legacy is what we create over a lifetime of individual moments. If we are not living the best lives we can live, and being the best that we can be, through most of our lives, chances are that our legacies are not what we would ideally have liked them to be. We also leave the fate of our legacies to circumstances, bosses, political leadership and a million other factors, while the truth is that we are the only people who really control it; anything else is just an excuse to shy away from doing what you say needs to be done.

What is also lost in all the noise is that most of my generation is slowly progressing towards middle age. We are the age group that will determine where things go from here. Most of us are no longer twenty-year-old youngsters who really don’t wield much influence. A lot of us are in places and positions of influence and if we truly desire a world that is better, we should use that influence in a better manner than just sit on the sidelines lamenting how wrong things are.

And it need not even be about going out there and starting a revolution. It is about stepping up and taking responsibility for your immediate environment. Be nicer to people, be more helpful. Help others succeed while you chart your own course for success. Be less negative and snarky. You have far more with you than what most others have, and to get more you need to first learn to give more; not just that which can be touched, but also that which cannot.

At least, that is what I feel. That it is not enough to just want better things for myself, but also for the world around me, and to back it up with action. A first small step towards that, for me, is to stop being negative, cranky and proud of being an ass. In the end, for me, it is about using these great tools I have been provided with in a better manner. Yes, the world usually uses these same tools in a negative manner, but I can choose to use the same things in a different way, and that is my first small step.

Scaling Notifications On Elgg To Support Rich, Context-Aware Emails

One of the core aspects of a social networking site is its ability to notify its users through different channels. Social networks that have complex access restrictions are entirely different beasts to build and scale compared to sites that are either mostly open, or where content generation can only be done by a handful of users.

I have been running an Elgg site — a private, gated network — for an old client since 2009. At an early stage itself, we ran into problems with the newsletter that had to go out to the entire user base. This was from a time when products like MailChimp were not an option, and we were also working with a fairly limited budget. At the first stage, we mitigated the problem by using a job queue built on MySQL.

As any engineer will tell you, a job queue based on an RDBMS that can run only one worker — or, even worse, depends heavily on locking to run multiple workers — is not a job queue. Eventually, it will cause more trouble than it is worth, and that is exactly what we ran into. Besides, as an Elgg site grows and you introduce more features to it, something that can farm out jobs and handle them async is worth its weight in gold.

Eventually, I wound up creating a simple set-up using Beanstalkd. The notification handler and the generic mail handlers are overridden to add jobs to the Beanstalkd queue, and a PHP worker (managed by Supervisord) processes the jobs in the background. I could go a level deeper and hand over even the individual job creation to the queue itself, but the current approach seems to be holding up well for the moment, so that next step can easily wait a while longer.
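For what it’s worth, the Supervisord side of this is tiny. A sketch along these lines — the program name, paths and log file here are placeholders, not our actual configuration — keeps the PHP worker alive and restarts it if it dies:

```ini
[program:elgg-notification-worker]
; Placeholder paths -- adjust to your own install
command=/usr/bin/php /var/www/elgg/workers/notification_worker.php
directory=/var/www/elgg
autostart=true
autorestart=true            ; bring the worker back up if it exits or crashes
stopwaitsecs=30             ; give an in-flight job a chance to finish on stop
stdout_logfile=/var/log/elgg-worker.log
redirect_stderr=true
```

The autorestart bit is the main reason to use Supervisord at all here; a worker that dies on a bad job comes straight back without anyone having to notice.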

A couple of pitfalls you need to watch out for, should you attempt to do the same thing:

1. Content encoding: This will drive you nuts if your scripts, DB tables and the CLI environment differ in how their locales are set up. Do not assume that everything that works in the browser will work the same in the CLI. It won’t.

2. Access: The CLI script loads the Elgg environment and has no user. So, be aware of any functions that use sessions to return results.

3. Valid entities: PHP will error out when faced with an attempt to call a method on a non-object. If you don’t kick or bury the job that is causing the error (which is not possible when the script exits with an invalid object error), the worker will endlessly crash and restart. You have to obsessively check every object for validity before you attempt to do anything with it.

4. Use MailCatcher on your development setup. It will save you a ton of time, even though it does make the server itself a bit sluggish.
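The worker in our set-up is PHP talking to Beanstalkd, but the guard logic behind pitfalls 2 and 3 is language-agnostic. Here is a minimal sketch of it in Python, with a plain in-memory queue standing in for Beanstalkd; the job shape and the `is_valid_entity` check are illustrative, not Elgg’s actual API:

```python
from collections import deque

def is_valid_entity(entity):
    # Stand-in for Elgg's entity validity check: reject anything that
    # is not a dict carrying the fields the mail job needs.
    return isinstance(entity, dict) and "email" in entity

def process_jobs(jobs):
    """Drain the queue; bury invalid jobs instead of crashing on them."""
    queue = deque(jobs)
    sent, buried = [], []
    while queue:
        job = queue.popleft()          # "reserve" the next job
        entity = job.get("entity")
        if not is_valid_entity(entity):
            buried.append(job)         # "bury": keep it out of the loop
            continue                   # never call methods on a bad object
        sent.append(entity["email"])   # stand-in for actually sending mail
    return sent, buried

sent, buried = process_jobs([
    {"entity": {"email": "a@example.org"}},
    {"entity": None},                  # deleted entity: must be buried
    {"entity": {"email": "b@example.org"}},
])
```

The point is simply that an invalid entity moves its job to the buried pile instead of taking the worker down, so one deleted user can never wedge the whole queue.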

There are a few other options available in the Elgg ecosystem to do the same thing, like Jettmail and the upcoming async notifications feature in Elgg 1.9. But both have their own complexities and issues; I could not wait till 1.9, and I needed something that didn’t require as much fiddling as Jettmail.

It is also possible to extend this kind of development further, leveraging some of the transactional email services out there and using their inbound email feature to post to Elgg with webhooks. There are, though, no plans to roll that out right now, and I will update this post if we ever get around to doing it.

Running 3.8.0-29 Kernel On ElementaryOS Luna

After a bit of tweaking and fiddling, I have managed to get the 3.8.x kernel running on the Acer Aspire V5-431. Unlike the previous time, when I tried and failed to get bcmwl-kernel-source to compile from the package manager, this time it worked with a different approach. Thanks to this post on AskUbuntu, I picked up the latest bcmwl-kernel-source (6.30.223.30) and installed it.

The package installs without any issues and enables WiFi for the machine. If you hit the problem where the driver is shown as installed and activated, yet you can’t seem to get the WiFi going, just make sure the other WiFi modules are blacklisted and disabled.

My blacklist looks something like this:

blacklist b44
blacklist b43legacy
blacklist b43
blacklist brcm80211
blacklist brcmsmac
blacklist ssb

You also have to make sure that ‘b43’ is commented out in /etc/modules if it is present there.
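If you are putting the blacklist in place yourself, something along these lines should do it. The file name is arbitrary (any `.conf` file under /etc/modprobe.d/ is read), and rebuilding the initramfs makes sure the blacklist also applies at boot:

```shell
# Write the conflicting modules into a modprobe blacklist file
sudo tee /etc/modprobe.d/blacklist-broadcom.conf > /dev/null <<'EOF'
blacklist b44
blacklist b43legacy
blacklist b43
blacklist brcm80211
blacklist brcmsmac
blacklist ssb
EOF

# Unload them if any are currently loaded (errors here are harmless)
for m in b43 b43legacy b44 brcmsmac ssb; do sudo modprobe -r "$m" 2>/dev/null; done

# Rebuild the initramfs so the blacklist takes effect on the next boot
sudo update-initramfs -u
```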

I have also been able to make the Huawei EC1260 Wireless Data Modem (Tata Photon+ being my provider) work with this kernel. You will need to configure usb_modeswitch for that, after which the device will show up with the 12d1:140b profile.

The profile data looks like this:

DefaultVendor= 0x12d1
DefaultProduct=0x140b
#HuaweiMode=1
MessageEndpoint=0x08
MessageContent=”55534243123456780000000000000011062000000100000000000000000000″
NeedResponse=1
CheckSuccess=10
DisableSwitching=0
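To check that the profile actually works before leaving it to udev, you can trigger the switch by hand. usb_modeswitch reads the vendor/product IDs from the config file itself, so pointing it at the file is enough; the path below is just where I happen to keep the profile, and your packaging may differ:

```shell
# See what the stick currently enumerates as
lsusb | grep 12d1

# Run the switch using the saved profile
sudo usb_modeswitch -c /etc/usb_modeswitch.d/12d1_140b

# After a successful switch, lsusb should show the modem profile
lsusb | grep 12d1
```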

The 3.8.x kernel seems to be pretty good. The machine runs a lot cooler than it did with the 3.2.x kernel, and I am yet to run into any issues. The older kernel seemed to have the odd lock-up now and then; I have not experienced that in a day or two now. It has been a worthwhile upgrade for me.

Moving Away From OS X, Switching Over Fully To Linux

Most of the reasons for the move have already been documented in a previous post, so I’ll skip the immediate compulsions that pushed me in this direction. Even while I was writing that post, I was not very sure if it would all come together well in the end. After much experimentation (and some really frustrating times), I’m glad to say that the transition is complete and I won’t be going back to an Apple laptop for a while.

The overall Linux on desktop experience is a marked improvement over the last time I attempted it. That was during a time when I was only too glad to tinker around endlessly, and when it was more than OK for me to insert a module into the kernel to get the sound card to work. That time, though, is long gone, and I prefer having systems that just stay out of the way — which is why OS X and the Apple laptops were wonderful for me.

That said, I have recently been feeling that the premium you pay for that experience with Apple is a bit over the top. But replicating the experience on another platform (Windows does not cut it for me simply because I am way too used to working in a *nix environment, not for any other reason) has been a more than painful experience every time I have tried it.

In a lot of ways, the Linux on desktop story right now resembles what the Android story was like around the time of Froyo. That comparison is meant to cover only the technical aspects; you can safely ignore the market share part of the story. Even with this marked improvement, it will be a long, long time before Linux becomes a serious player in the desktop/laptop market.

Coming back to the comparison, I find that the quality of apps on Linux has improved significantly. They are still not as pretty or as consistent as OS X apps, but the story is a drastic improvement from earlier times. Then there are projects like elementaryOS, where the teams have made a concerted effort to make everything a lot more consistent and well thought out.

In the overall picture, none of that will matter. Most of the big companies that sell desktops and laptops are primarily tied to Microsoft and the ecosystem around it. There have been efforts like Dell’s Developer Edition, but those are hardly mainline efforts, and since we are living in an age where a platform is no longer simply about the hardware and the OS, without major muscle behind it the desktop Linux story will always be a minor one.

For me, the Linux story has so far been extremely positive. Save for the exception of not being able to run iTunes without virtualization or emulation (one of the sad outcomes of the demise of Flipkart’s digital music business), there is nothing that I have been unable to do on Linux that I was able to do on OS X. The UI/UX aspect is no longer an issue with eOS, which, surprisingly, feels a lot less like OS X once you start using it a lot more.

There are some terrors that remind me of the good old days of desktop Linux, when everything was a lottery, but once you get a stable system in place the beast just keeps chugging on and stays out of your way. I do foresee a long and fruitful association for us this time around.

Do Not Upgrade The Kernel While Using elementaryOS On Acer Aspire V5-431

Edit: Figured out a way to run the 3.8.x series kernel here. I am running 3.8.0-31 at the moment, without any issues. This, though, is not recommended by the eOS team and should something go wrong, you will be on your own.

One of the best post-installation resources on elementaryOS is the elementaryupdate.com site. They conclude their post on what more you can do to customize and update the OS after installing the current version (Luna) with a recommendation to upgrade the kernel to raring-lts. If you do this on the Acer Aspire V5-431, you will break your Broadcom BCM43228 (14e4:4359) driver, as the bcmwl-kernel-source module will not build on the 3.8.0-29-generic kernel, and many hours of frustration will follow.

In short, stick to the 3.2.x series kernels till the eOS team suggests otherwise — they do recommend sticking to the 3.2.x series in this post. There are good reasons to move to the latest kernel, as a lot of things seem to work better with it — auto-dimming of the display, for one — but this kind of breakage is severe, and it is a good idea to stay away from any kernel upgrades that don’t get pushed through the software update process.
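One way to make sure a routine update does not drag a newer kernel in behind your back is to put the kernel packages on hold. The package names below are the usual Ubuntu meta-packages, not something I have verified against this exact install — check `dpkg -l 'linux-*'` for what is actually on your machine:

```shell
# Put the kernel meta-packages on hold so upgrades skip them
sudo apt-mark hold linux-image-generic linux-headers-generic

# Equivalent on older apt versions that lack apt-mark hold
echo "linux-image-generic hold" | sudo dpkg --set-selections

# When you are ready to take a kernel upgrade again
sudo apt-mark unhold linux-image-generic linux-headers-generic
```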

This is really one of the annoying things about using Linux on the desktop: you would expect something that worked out of the box in an older version of the kernel to do the same in a much newer version. I fully understand the reasons why things work this way, but it makes for extremely poor user experience, and even for someone like me, who is a bit better than the average user at figuring out these things, it is frustrating and a waste of time.

Revisiting Linux With elementaryOS, Acer Aspire V5

With the old MacBook getting on in age (it is an early 2008 MacBook4,1), the move to find a replacement for it was always on the cards. The machine had served me well, travelling with me to different parts of India, including high-altitude passes in the Himalayas. Of late, even after a complete reinstall, the machine has been showing its age, and with persistent heating problems and lock-ups, the writing was quite clearly on the wall. I could get it repaired, which I eventually will, but the board only supports DDR2 and the memory is maxed out as it is at 4GB. The only other option is to upgrade to an SSD, fix the problems and hope for the best after that.

The primary candidate for the replacement was the 13″ MacBook Air. After the millionth (failed) attempt to find a reasonably priced Linux laptop that just stayed out of the way, I was pretty sure I would have to stick to OS X and Apple, and have no choice but to gulp down the high premium that Apple charges for the fire-and-forget experience it is more than justifiably famous for. In the midst of all this, I ran into an interesting so-called Linux laptop from Acer, the Aspire V5-431, and found a pretty decent price for it on Flipkart.

At this point, I must digress a bit about the non-Apple laptops. Dear god, some of them, especially the Lenovo ultrabooks, are such a slavish ripoff of the Apple laptop line-up. I can understand smartphones looking much like each other these days, as there are not too many different ways in which you can design a phone, but that is not the case with laptops, and the extent of the copying here is really shameful. I guess none of these copies are much of a threat to Apple in the market, so it is probably not worth suing the manufacturers over, but it still is not a great thing to see. The V5-431 also suffers from a bit of this ‘inspiration’ problem, but it is hard to mistake it for an Apple unit.

The laptop comes pre-installed with Linpus Linux, which most users instantly discard. But having a Linux laptop meant I could have some degree of certainty that most of the bits and pieces would work well should I run some other Linux distro on it. It has been a while since I used a Linux desktop as my main platform, and it seems that while the underlying platform has changed a lot (and for the better), the user experience is still ghastly and inconsistent, featuring interfaces and UX that could only be created and loved by engineers.

That was when I came upon a project called elementaryOS. It is based on Ubuntu (the current version is built on Precise, 12.04.2), but an awful lot of work has gone into making the front-end user experience clean, consistent and fast. It is hard to miss the very obvious OS X inspiration in a lot of the visual elements, but once you start using it a bit more, the differences start to show up, and in a nice way. Linux on the desktop/laptop has been begging for something like this for years and I am really thrilled to see someone finally do it right. If you care to take apart the bits on top, you will find a familiar Ubuntu installation underneath, but you really should not bother doing that.

I have gone through some three re-installs of the OS so far, for various reasons. One thing you need to watch out for while sorting out eOS on the V5-431 is to stick to the 32-bit OS, as things get quite crazy should you attempt mixing i686 and x86_64 platforms while using virtualization. The eOS 32-bit kernel is PAE-enabled, so you can use more than 4GB of RAM on the machine, but I would highly recommend sticking to 32-bit for everything (OS, VirtualBox, any guest OS) and you will not have a reason to complain. I discovered all of this the hard way, as my primary requirement is a working Vagrant installation on the laptop, and I eventually had to redo the base box in 32-bit (the original from the MacBook was 64-bit CentOS 6.4).
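Before installing VirtualBox and Vagrant, it is worth confirming that the kernel and the userland really are both 32-bit, since a mixed setup is exactly what bites here. A quick sanity check (the expected outputs in the comments assume a 32-bit eOS install):

```shell
# Architecture reported by the running kernel;
# expect i686 on the 32-bit PAE kernel
uname -m

# Architecture of the installed userland packages;
# expect i386 on a 32-bit Ubuntu/eOS install
dpkg --print-architecture
```

If either of these reports x86_64/amd64, stick to 64-bit guest boxes instead; it is the mismatch, not either architecture by itself, that causes the grief.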

The experience with the laptop has been pleasant so far. I have ordered more memory (8GB, to be precise) and even at 2GB the machine feels a lot faster and more stable than the ailing MacBook. I will hold off on getting an SSD for now, as the machine feels quick enough for me at the moment and the extra memory will only make things better. After many attempts at customizing the interface, what I have realized is that it is best left alone. The developers have done a great job of selecting the defaults, and nine times out of ten the modifications you make are not going to improve on them. The only thing you will need is to install the non-free TTF fonts, enable them in your browser’s font selection and get on with the rest of it.
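For the non-free fonts, the usual route on Ubuntu-based systems (and so, presumably, on eOS Luna) is the Microsoft core fonts installer package from multiverse:

```shell
# Pull in the Microsoft core TTF fonts (Arial, Verdana, etc.);
# the package prompts you to accept the EULA during install
sudo apt-get install ttf-mscorefonts-installer

# Rebuild the font cache so browsers pick the new fonts up
fc-cache -f -v
```

After that, the fonts show up in the font-selection dropdowns of Chrome and the rest.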

Other than that, the main issue is color calibration of the monitor. The default install has a blue-ish tint and the blacks do not render true, which is infuriating on a glossy screen. I finally fixed the problem by calibrating the display under a Windows installation and pulling out the resulting ICC profile. I will share a link to the profile at the end of this post; if you have the same machine and are running Linux on it, use it. It makes a world of difference. You will have to install GNOME Color Manager to view the profiles.
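If you prefer the command line to GNOME Color Manager, the `dispwin` tool from ArgyllCMS can load the profile directly. A sketch, where `V5-431.icc` is a placeholder name for whatever you call the profile pulled out of Windows:

```shell
# dispwin ships with the argyll package on Ubuntu-based systems
sudo apt-get install argyll

# Load the profile's calibration curves into the video card LUT
# for the current session
dispwin V5-431.icc

# Or install it as the default profile for this display, so it is
# reapplied on login
dispwin -I V5-431.icc
```

Note that loading the LUT fixes the tint system-wide, while color-managed applications additionally need the profile installed to do full ICC transforms.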

After all of that, the machine seems quite a good deal for me. It does not heat up too much, is extremely quiet and weighs a bit over 2 kilos. The 14″ screen is real estate I appreciate a lot, coming from the 13″ MacBook. The external display options are the standard VGA and HDMI. My primary 22″ monitor has only DVI-D and DVI-Sub inputs, so I am waiting for the delivery of a converter cable to hook the laptop up to it. The battery is not the best, though; Acer has cut some corners there, but you cannot have everything at such a low price. Even with the memory upgrade, the machine will still cost me less than a third of what a new MacBook Air (the base model, that is) does right now. I am getting around 2.5 hours under really heavy usage, which is not bad at all.

The stack is otherwise quite stable. It reads something like below:

  • Google Chrome
  • LibreOffice
  • Virtualbox
  • Vagrant
  • Sublime Text 2
  • Skype
  • Dropbox
  • VLC
  • Darktable

I’m not exactly a power user, and 90% of my work is done in a text editor, a web browser and VLC, but the combination of eOS and the Aspire V5-431 is something I can easily suggest to a lot of people looking to break away from regular Linux/Windows/OS X, and at a good price too. There is a newer version of the laptop out with the next generation of the chip, but I have not seen any great benefit from that upgrade, which also costs a bit more. You can spend that money on more RAM instead.

eOS is a nice surprise too, especially for such a young project. With time it will only get better and, hopefully, grow into something quite distinct from an OS that merely looks similar to OS X.