There And Back Again; To Windows

After three long years with the Acer Aspire V5-431, it was time for me to get a new laptop. The Acer was a wonderful companion; it travelled everywhere with me and survived everything from 50 degrees Celsius to -23 degrees Celsius, plenty of dust, moisture and whatnot.

It was also starting to show its age, and with a processor that was really under-powered, it was starting to get in the way of work. Additionally, it had the severe limitation of being restricted to 32-bit operation, even though the CPU was 64-bit capable. Acer, in all their wisdom, decided this was a perfectly brilliant thing to do and made life miserable for people like me.

The age of 32-bit computers is rapidly coming to an end. Google pulled support for Chrome on 32-bit Linux a while ago, and with the state of security online these days, running an unmaintained browser is begging for trouble.

Additionally, there is a whole host of software these days (Docker is a prime example) that has no officially supported 32-bit version. It is far easier to move to a proper 64-bit computing environment than to lose your hair trying to get oddball 32-bit ports of these things working.

The replacement laptop is an Acer again, this time the Aspire V13 (V3-372T-5051). It is a nice lightweight laptop with a great display, good battery life and a 256 GB SSD, and it can take a maximum of 16 GB of RAM. That last detail is very important for a couple of reasons, as we’ll get to see soon.

Back To Windows

The laptop came pre-installed with Windows 10. For the past three years, I have been using the really nice ElementaryOS and had been quite happy with it. My usage of a computer has also changed considerably in those three years: far less development work and a lot more writing and handling documents of various kinds. That meant Linux was no longer a must-have for development work, and the little I had seen of Windows 10, I quite liked. It has the right amount of bling without going overboard with it the way OS X has in recent years.

While the idea was always to dual boot and use Linux as my primary environment, having used only Windows for the past couple of days, I have come around to the idea of using only Windows. For most of my development needs, there are Vagrant and Docker (which is where the 16 GB of RAM will come in quite handy), and the other things I am tinkering with on the tech front (Rust and Go) have officially supported Windows builds.

Set A Different public_path For Laravel Applications On Webfaction

In a previous post, we covered how to get a Laravel application running on Webfaction.

While the approach worked well enough, the idea of symlinking to the ‘public’ directory was not a good one, as every newly created file would have to be symlinked to make it available in the ‘webapps/appname’ folder. An Envoy task could easily accomplish this, but it just won’t scale on any website that has user-uploaded files that have to be shown on the site.

Thankfully, Laravel provides an easy fix to this, which is to use the IoC container to bind the value of public_path() to a different location. Since we don’t have this problem on the local setup, it is also a good idea to throw environment detection into the mix.

All you have to do is open your bindings.php file (saved as app/bindings.php and included from app/start/global.php) and add the following lines:


if (\App::environment('production'))
{
    \App::bind('path.public', function()
    {
        return '/home/{{username}}/webapps/{{appname}}';
    });
}
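If you don’t already have the include in place, wiring bindings.php in is a one-liner; a minimal sketch, assuming the stock Laravel 4 layout described above:

// In app/start/global.php, alongside the framework's other requires
require app_path().'/bindings.php';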


Deploying A Laravel App to Webfaction Using BitBucket

It is not uncommon to run a Laravel application on a shared host, and I have been doing that for a while on my preferred choice, Webfaction. This post will help you do the same without having to modify much about your application.

The first problem you will face is that the default PHP on Webfaction is still 5.2.x. This is not a problem for the webapp itself, as the .htaccess file used by Webfaction will ensure that it runs PHP 5.5.x, provided you chose that as the app profile when you set it up.

The problem arises when you have to run artisan commands, either directly or as scripts from Composer. These will pick up the default PHP in /usr/local/bin/php rather than /usr/local/bin/php55, which is the correct one for our purposes.

To get around this you need to do a couple of things.

First is to symlink /usr/local/bin/php55 to /home/username/bin/php

Second is to download a copy of composer to /home/username/bin/composer

Once these two steps are done, add /home/username/bin to your PATH.
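A sketch of those steps from a shell on the server; username is a placeholder for your own account, and the Composer installer invocation is the stock one from getcomposer.org:

mkdir -p ~/bin

# Make php55 the php that your shell sessions pick up.
ln -s /usr/local/bin/php55 ~/bin/php

# Install Composer into ~/bin, using the 5.5 binary since the
# system default (5.2) is too old to run it.
curl -sS https://getcomposer.org/installer | /usr/local/bin/php55
mv composer.phar ~/bin/composer

# Put ~/bin ahead of the system paths.
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc
source ~/.bashrc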

Now we will install the Laravel app. I usually choose /home/username/apps/appname as the location for this, since /home/username/webapps is controlled by Webfaction’s setup.

This post won’t cover how to set up a Bitbucket Git repo. There are plenty of great resources that will help you do that, but I will make the following assumptions:

1. Git will be set up to be accessed using SSH on Bitbucket.

2. You have set up key-based SSH access from your Webfaction account to Bitbucket.

3. Your Laravel application has been properly pushed into the repo.

These are the last few steps to be followed:

1. Clone your Laravel app into /home/username/apps/appname.

git clone git@bitbucket.org:xxxx/xxxx.git /home/username/apps/appname

2. Run Composer update to pull in all the dependencies.

/home/username/bin/php /home/username/bin/composer update

3. Remove the .htaccess file in /home/username/webapps/webapp_name/

4. Symlink the public folder to the webapp folder.

ln -s /home/username/apps/appname/public/* /home/username/webapps/webapp_name/

5. Edit the .htaccess file in the public folder to add the following lines:

Action php55-cgi /php55.cgi
<FilesMatch \.php$>
SetHandler php55-cgi
</FilesMatch>

You are pretty much good to go after that. One thing you need to keep in mind is that if you check other folders or files into the public folder, you will have to symlink the new files.

For a much more convenient two-step deployment, you can use Envoy to manage everything on the server.

My workflow is mostly:

1. Push code into Bitbucket.

2. Run an Envoy task that pulls the changes into the server, runs composer dump-autoload and gets going from there.
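For illustration, a minimal sketch of what that Envoy task can look like; the server alias, host and paths are placeholders for your own values:

@servers(['web' => 'username@username.webfactional.com'])

@task('deploy', ['on' => 'web'])
    cd /home/username/apps/appname
    git pull origin master
    /home/username/bin/php /home/username/bin/composer dump-autoload
@endtask

With that in place, running envoy run deploy from the local machine is all that step 2 amounts to.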

UIDAI, NIC And India’s Data Security Nightmare

Should the worst happen to India’s official information technology infrastructure, AS4758 is a term that will feature prominently in the post-mortem. The term denotes an autonomous system number (ASN), a unique name/number for a network that is used for routing traffic over IP networks, and AS4758 is operated by the National Informatics Centre (NIC). This prefix represents a vast majority of the servers and sites (the 164.100.0.0 – 164.100.255.255 IP address range) operated by the NIC. Some of the key sites operating from this network include UIDAI, the website of the Chief Electoral Officer, Delhi, and the NIC Certifying Authority. These three are just a minor part of a vast array of sites and services that cover everything from the personal information of the citizens of the country to key information about the government itself.

This post is one that I have been putting off writing for a while. The main reason is that it is not right to identify weak points in our key IT infrastructure in such a public manner. But the speed with which we are going ahead to centralize all this information, without thinking through the requisite safeguards, is an issue that overrides that concern. Improperly secured, this information is a grave risk to everyone, including the government. And from the evidence seen in public, there is not adequate knowledge or expertise within the system to even take a call on what adequate security looks like for an undertaking this grave. The secondary reason is the inadequacy of the underlying technology used to mine this information. It is immature and not accurate enough, and it will lead to a flood of false positives in a system where the legal machinery itself is under-equipped to critically weigh the evidence that supports the case made by a false positive.

Another point to note is that I am hardly a security expert; the little that I know is what I need to know to keep my applications secure. Whatever I have seen is a tiny percentage of what is available for everyone to see. Information security has become such a complicated and specialized field that it is no longer enough to know some of the factors involved in keeping an application and its infrastructure secure from prying eyes. I would not dare to certify a client website or application as secure based on my own knowledge; I would rather get a specialized security firm to do that, even if they cost a lot of money. The important bit here is that if I can see these issues, someone with malicious intent can see a hundred other things that can be used to gain unauthorized access.

All Eggs In One Basket

Coming back to AS4758, it is a case of keeping too many eggs in one basket. From the outside, it looks like multiple vendors have access to the servers on that network. Forget forcing users to SSL-enabled versions of the sites; most of them don’t even offer that as an option. This is true of both the UIDAI website and the Delhi CEO’s website, where users have to enter personal information to retrieve more personal information. A compromised machine on the network can easily listen to all network traffic and silently harvest all this data without anyone knowing about it.

A year ago, NISG, which is one of the key service providers for the NATGRID and UIDAI projects, was running its website on an old Windows desktop (Windows XP or 98, if I remember correctly). Thankfully, NISG seems to have moved to a Linux machine recently. Also, the NISG set-up is not hosted within the NIC’s network, so the possibility of damage from that machine would have been comparatively lower. Though we will never know for sure.

That said, even being on different networks won’t provide iron-clad security if you don’t design networks, access protocols and authentication as the first order of business. Done as an afterthought, it will never be as effective as it needs to be. Agencies often require data from each other to be mashed up (example: overlaying UIDAI data over NATGRID data) and this is often managed at the protocol level by restricting access by IP. In the hypothetical case of the NISG server being allowed access to UIDAI data and the former being compromised, you have a scenario where even the most secure UIDAI data center will leak information due to a compromise in another network.

Cart Before Horse

A moot point here is the assumption that the UIDAI infrastructure is secure enough in the first place. An NISG requirement for a data center security and risk manager position does not give us confidence in that assumption one bit. As the saying goes, the chain is only as strong as its weakest link and in this case, it seems that security is an afterthought. Part of the problem is that there is not enough experience within the government machinery to even determine what is secure enough. A simple rule about getting work done by someone is that you need to know, better than the person you are engaging to get that work done, what you are looking to get done. We just don’t have that in place in India at the moment.

These systems need to be designed primarily with security in mind, and that does not seem to be the case. My fear with these systems is not so much that the government itself will misuse the data (which is a valid and important concern for me), but that it will be quietly pilfered away by foreign players and nobody will know about it. Having such information about all of the citizens of a country opens up millions of avenues for malicious players to recruit people to their cause, as all those people become potential targets for blackmail. Since we are going to collect information about everyone in the country, those who can be blackmailed range from the richest and most powerful to the poorest and the weakest. And the best part is that what exposes people to blackmail need not even be illegal behaviour; it can be perfectly legal behaviour that affects the social and professional standing of an important person.

We are going to present all of that information to interested parties with a nice bow on top.

Access, Identity, Authentication, Logging

  1. Any secure system will require you to control access to the resource as a whole and/or parts of the resource itself. This planning has to start from physical access to the core and nodes that access the core and it has to then take into account the applications that will provide access to the information and the applications that will access this information from the nodes.
  2. Any secure system will have a clear policy in assigning identities to people who can access those resources. This needs to be consistent across the core and the nodes. This makes the system rather inflexible and a pain to operate, but it is necessary to mitigate even the weakest of attacks.
  3. Any secure system will have a clear mechanism for authenticating the identity of a valid user in the system. There cannot be any backdoors built into such a system, as it has been proven time and again that backdoors become a point of major weakness over time.
  4. Any secure system will log all actions at all levels in the system and establish triggers for any out-of-band activity that covers even legitimate use.

The above four points are just an amateur attempt by me at defining the outlines of a reasonably secure system. A proper attempt at this by a real security professional will have a hell of a lot more points and will also go into much greater detail. But these points should give you a rough idea of the complexity involved in designing security for systems like these. You simply cannot slap security on top as an afterthought here.

Mining Nightmares

Which brings us to the issue of accuracy in data mining for initiatives like NATGRID.

Personally, I do believe that there is a valid case for governments to either collect or have access to information of any kind. What I do not like is unfettered collection, mining and access and zero oversight on any of those processes.

The reason why mining big data as a sort of Google search for suspicious activity is a terrible idea is simple: it does not work accurately enough to be of use in enforcement. The same technology that results in mis-targeted marketing phone calls and ads that are irrelevant to you is what is going to be used to determine whether a person or a group of people is likely to do bad things. Even in marketing and advertising it works with an appalling rate of failure; using it in intelligence, surveillance and enforcement will lead to an ocean of false positives and wind up putting a lot of innocent people behind bars for no good reason.

Even worse is the fact that the legal system itself has such a weak grasp of these matters that appeals are likely to fall on deaf ears, as the evidence is likely to be treated as gospel when there is no understanding available within the system to say otherwise. And then there is the potential for real abuse — not limited to planting evidence through spyware — that can ruin the lives of anyone and everyone.

Conclusion

Our approach to security and centralized information collection is terrible beyond what can be expressed in words. It needs to be stopped in its tracks, reviewed closely and redesigned from the ground up to keep security as the first objective and data collection as the final one. We need to codify access laws for data collected in this manner and ensure that all of it does not reside in a single place, with access to the complete picture available only in the rarest and most exceptional of circumstances. What is happening right now is none of that, and I am afraid we will find that out in the most painful manner in the coming years.

Building A Digital Product: Part I

There used to be a time when building and launching a digital product was a straightforward affair. The steps were something like this:

  1. Start with a rough idea of what you were looking for.
  2. Find someone who could design and build it for you.
  3. Find people who would help you run it and go live.
  4. Find ways to market the product.
  5. Find ways to sell the product.

Other than the really big players, most regular Joes would handle most of the steps on their own or, in some extreme cases, all of the steps on their own.

In the last five to eight years the steps have been shredded to bits, thrown into the dustbin and replaced with a set of steps that bear no resemblance to the earlier ones.

Most of this disruption can squarely be blamed on mobile computing. The revolution that started with telephonic devices being able to access bits of textual data (read SMS) was turned on its head when the same devices were transformed into data devices that could also do telephony, as just one of the many things they could do.

The other significant development that has caused the playbook to be thrown out is the commoditization of many of the moving parts used to build a digital product. Everything from databases to job queues to logging, and any other part of the technical stack, now has an array of plug-and-play software service platforms that a new product can leverage from day one.

In the early days, teams had to develop everything — from email subscription systems, to delivery systems, to logging infrastructure — to get going. With all these new services, product builders are less builders and more integrators these days.

While this has created an entirely new universe of opportunities and possibilities, it is also responsible for creating a lot of confusion for companies and individuals looking to build products.

What this series will attempt to do, in this first part, is bring some structure to the steps and elaborate on them a bit, with an aim to reducing the amount of confusion in the market.

I have no illusions that this will be a definitive list, as there are parts of the stack and ecosystem I am completely unaware of. My idea is to fill in the gaps that I can, and I’ll be more than happy to take suggestions about what else I can cover here.

I am going to tackle the more technical aspects in this post:

Design: Designs are best approached first with storyboards. The storyboards are used to create process flows. The process flows lead to wireframes. The wireframes lead to the final design.

You can skip all of the steps and go directly to a design, but the odds are that you will struggle at a later stage to force fit that disciplined a process into an existing system that has grown without it.

What is more important — short term gain or long term pain? Make your pick.

Development: The choice of framework/language to build on is made at this stage. Unless you are someone who knows technology very closely, avoid using the latest fancy framework in town.

You have to establish coding standards, documentation standards, bug tracking, version control systems and release management processes.

Testing: Set up both automated and manual tests to address both logic and real-world usage. Testing infrastructure built right will include a good set of unit and behavioural tests and a continuous integration framework that will catch most errors during the build phase itself.

Deployment: No (S)FTP. Simple. Deployment options available these days range from the simple to the ridiculously complicated. It gets harder when you have to update code on a pool of application servers that need a rolling update/restart cycle.

The more challenging part in this is to abstract away this part of the stack to a simple interface that the developers can use. You cannot and should not expect developers to debug problems in the deployment infrastructure.

Distribution: A local CDN or an international one — which is the right one to use? Should I use a CDN at all? Recently, a company that I spoke to had a response time to their origin server that was 1/5th of what they were getting from their CDN. This was done to leverage cheaper CDN bandwidth and is a classic case of cost optimization in the wrong place.

Is CloudFront the right solution? Can my preferred CDN provider handle wildcard SSL termination at a reasonable cost? How costly is it to do a cache purge across all geographies? Is it even possible? Is it important to purge CDN caches? Is a purge important to avoid compliance hurdles for some obscure requirement in my market of choice?

Mobile-specific Parts: Native, cross-platform or HTML5? Do I need a mobile application at all? Which platforms should I target? What is the minimum OS level that I should support on each of those platforms? How do I align those decisions with the target audience I am going to address?

Outbound, non-consumer-facing Services: Should I expose any of my internal data with a developer-facing API? What should I use to expose that API? Do I build it on my own or do I use a hosted platform like Apigee? What sort of authentication should I use? What sort of identity management should I use? Should I even try to split identity and authentication into two different services?

Inbound, non-consumer-facing Services: What do I use to handle data that I fetch from other sources? How do I ensure that I cache my requests to respect rate limits? What is a webhook? How do I go about implementing one?

Replication & Redundancy: What is the maximum acceptable downtime for my application? Is there a business case for a multi-DC deployment? How extensive does my disaster recovery plan have to be?

AWS, Rackspace, good old dedicated rack in a datacenter? Should I use Glacier? What should I use for DNS management?

Analytics & Instrumentation: DAU, MAU, WAU — what all do I have to track? Are bounces more important than acquisition? Is acquisition more important than repeat transactions? How do I bucket and segment my users?

How do I measure passive actions? Should I start tracking a minor version of an otherwise less-used browser as my javascript error tracking reports are showing that the current release is breaking critical parts for my most valuable demographic who use that exact obscure browser?

Wait, I can track client side javascript errors?

Conclusion

As you can see, the list raises more questions and provides no answers. This is intentional as there is no one-size-fits-all answer for these questions. Even within specific company lifecycle segments (early stage, stable start-up, established company), the internal circumstances vary from company to company.

This list is more a starting point than a destination in itself. Use it to build a better framework that is suited to your organization and your product. And if you need more help, just ask!

Scaling Notifications On Elgg To Support Rich, Context-Aware Emails

One of the core aspects of a social networking site is its ability to notify its users through different channels. Social networks that have complex access restrictions are entirely different beasts to build and scale compared to sites that are either mostly open, or where content generation can only be done by a handful of users.

I have been running an Elgg site for an old client since 2009; it is a private, gated network. At an early stage itself, we ran into problems with the newsletter that had to go out to the entire user base. This was from a time when products like MailChimp were not an option and we were also working with a fairly limited budget. As a first fix, we mitigated the problem by using a job queue built on MySQL.

As any engineer will tell you, a job queue based on an RDBMS that can only run one worker or, even worse, depends heavily on locking to run multiple workers, is not really a job queue. Eventually, it will cause more trouble than it is worth, and that is exactly where we ended up. Besides, as an Elgg site grows and you introduce more features to it, something that can farm out jobs and handle them async is worth its weight in gold.

Eventually, I wound up creating a simple set-up using Beanstalkd. The notification handler and the generic mail handlers are overridden to add jobs to the Beanstalkd queue, and a PHP worker (managed by Supervisord) processes the jobs in the background. I could go a level deeper and hand even the individual job creation over to the queue itself, but the current approach seems to be holding up well for the moment, so that next step can easily wait a while longer.
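A minimal sketch of that arrangement, assuming the pda/pheanstalk client (v3-era API) pulled in via Composer; the tube name, payload shape and exact handler wiring are illustrative rather than a copy of the production code:

// Producer side: a drop-in email notification handler that queues the
// message on Beanstalkd instead of sending it inline.
require_once __DIR__ . '/vendor/autoload.php';

use Pheanstalk\Pheanstalk;

function queued_email_notify_handler(ElggEntity $from, ElggUser $to,
                                     $subject, $message, array $params = null) {
    $queue = new Pheanstalk('127.0.0.1');
    $queue->useTube('notifications')->put(json_encode([
        'to_guid' => $to->guid,
        'subject' => $subject,
        'message' => $message,
    ]));
    return true;
}

// Registered at boot so it replaces the stock email handler.
register_notification_handler('email', 'queued_email_notify_handler');

// Worker side (kept alive by Supervisord): boot the Elgg engine once,
// then block on the tube and deliver jobs as they arrive.
$queue = new Pheanstalk('127.0.0.1');
$queue->watch('notifications');

while (true) {
    $job  = $queue->reserve();
    $data = json_decode($job->getData(), true);
    $to   = get_entity($data['to_guid']);

    $sent = elgg_send_email(elgg_get_site_entity()->email, $to->email,
                            $data['subject'], $data['message']);
    $sent ? $queue->delete($job) : $queue->bury($job);
}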

A couple of pitfalls you need to watch out for, should you attempt to do the same thing:

1. Content encoding. This will drive you nuts if your scripts, DB tables and the CLI environment are different in how their locales are set up. Do not assume that everything that works in the browser will work the same in CLI. It won’t.

2. Access: The CLI script loads the Elgg environment and has no user. So, be aware of any functions that use sessions to return results.

3. Valid entities: PHP will error out when it attempts to call a method on a non-object. If you don’t kick or bury the job that is causing the error (which is not possible when the script exits with an invalid object error), the worker will endlessly restart and crash again. You have to obsessively check every object for validity before you attempt to do anything with it; see the sketch after this list.

4. Use MailCatcher on your development setup. It will save you a ton of time, even though it does make the server itself a bit sluggish.
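To make pitfall 3 concrete, this is the sort of guard the worker loop above needs right after fetching the entity; a sketch, reusing the hypothetical $queue, $job and $data names from earlier:

// Never assume a GUID stored in a job still maps to a live entity;
// the object may have been deleted between enqueue and processing.
$to = get_entity($data['to_guid']);
if (!($to instanceof ElggEntity)) {
    // Bury instead of crashing: if the worker exits here, Supervisord
    // restarts it and it reserves the same bad job again, forever.
    $queue->bury($job);
    continue;
}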

There are a few other options in the Elgg ecosystem that do something similar, like Jettmail and the upcoming async notifications feature in Elgg 1.9. But both have their own complexities and issues; I could not wait till 1.9, and I needed something that didn’t require as much fiddling as Jettmail.

It is also possible to extend this kind of development further to leverage some of the transactional email services out there, using their inbound email features to post to Elgg with webhooks. There are, though, no plans to roll that out right now, and I will update this post if we ever get around to doing that.

Running 3.8.0-29 Kernel On ElementaryOS Luna

After a bit of tweaking and fiddling, I have managed to get the 3.8.x kernel running on the Acer Aspire V5-431. Unlike the previous time, when I tried and failed to get bcmwl-kernel-source to compile from the package manager, this time it worked with a different approach. Thanks to this post on AskUbuntu, I picked up the latest bcmwl-kernel-source (6.30.223.30) and installed it.

The package installs without any issues and it enables WiFi on the machine. If you hit the problem where the driver shows up as installed and activated, yet you can’t seem to get the WiFi going, just make sure the other WiFi modules are blacklisted and disabled.

My blacklist looks something like this:

blacklist b44
blacklist b43legacy
blacklist b43
blacklist brcm80211
blacklist brcmsmac
blacklist ssb

You also have to make sure that ‘b43’ is commented out in /etc/modules if it is present there.
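A quick sanity check after a reboot; wl is the module that the proprietary Broadcom STA (bcmwl) package loads:

# None of the blacklisted drivers should show up any more...
lsmod | grep -E 'b43|b44|brcm|ssb'

# ...while the STA driver should be loaded:
lsmod | grep '^wl'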

I have also been able to make the Huawei EC1260 wireless data modem (Tata Photon+ being my provider) work with this kernel. You will need to configure usb_modeswitch for that, after which the device will show up with the 12d1:140b profile.

The profile data looks like this:

DefaultVendor=0x12d1
DefaultProduct=0x140b
#HuaweiMode=1
MessageEndpoint=0x08
MessageContent="55534243123456780000000000000011062000000100000000000000000000"
NeedResponse=1
CheckSuccess=10
DisableSwitching=0
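Assuming the profile is saved as a file under /etc/usb_modeswitch.d/ (the exact file name below is my choice, not a requirement), you can trigger the switch by hand to test it; udev normally runs this automatically once the config is in place:

sudo usb_modeswitch -c /etc/usb_modeswitch.d/12d1_140b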

The 3.8.x kernel seems to be pretty good. The machine runs a lot cooler than it did with the 3.2.x kernel and I am yet to run into any issues. The older kernel seemed to have the odd lock-up now and then; I have not experienced that in a day or two now. It has been a worthwhile upgrade for me.

Moving Away From OS X, Switching Over Fully To Linux

Most of the reasons for the move have already been documented in a previous post, so I’ll skip the immediate compulsions that pushed me in this direction. Even while I was writing that post, I was not very sure if it would all come together well in the end. After much experimentation (and some really frustrating times) I’m glad to say that the transition is complete and I won’t be going back to an Apple laptop for a while.

The overall Linux on desktop experience is a marked improvement from the last time I attempted it. That was during a time when I was only too glad to tinker around endlessly, and when it was more than OK for me to insert a module into the kernel to get the sound card to work. That time, though, is long gone and I now prefer systems that just stay out of the way. Which was why OS X and the Apple laptops were wonderful for me.

That said, I have recently been feeling that the premium you pay Apple for that experience is a bit over the top. But replicating that experience on another platform (Windows does not cut it for me, simply because I am way too used to having a *nix environment to work in) has been a more than painful experience every time I have tried it.

In a lot of ways, the Linux on desktop story right now resembles what the Android story was like around the time of Froyo. That comparison is meant to cover only the technical aspects; you can safely ignore the market share part of the story. Even with this marked improvement, it will be a long, long time before Linux becomes a serious player in the desktop/laptop market.

Coming back to the comparison, I find that the quality of apps on Linux has improved significantly. They are still not as pretty or as consistent as OS X apps, but the story is a drastic improvement on earlier times. Then there are projects like elementaryOS, where the teams have made a concerted effort to make everything a lot more consistent and well thought out.

In the overall picture, none of that will matter. Most of the big companies that sell desktops and laptops are all primarily tied to Microsoft and the ecosystem around it. There have been efforts like Dell’s Developer Edition, but those are hardly mainline efforts and since we are living in an age where a platform is no longer simply about the hardware and the OS, without major muscle behind it, the Desktop Linux story will always be a minor one.

For me, the Linux story has been extremely positive so far. Save for the exception of not being able to run iTunes without virtualization or emulation (one of the sad outcomes of the demise of Flipkart’s digital music business), there is nothing I was able to do on OS X that I have been unable to do on Linux. The UI/UX aspect is no longer an issue with eOS, which, surprisingly, feels a lot less like OS X once you start using it a lot more.

There are some terrors that remind me of the good old days of Desktop Linux, when everything was a lottery, but once you get a stable system in place the beast just keeps chugging on and stays out of your way. I do foresee a long and fruitful association for us this time around.

Do Not Upgrade Kernel While Using elementaryOS On Acer Aspire V5-431

Edit: Figured out a way to run the 3.8.x series kernel here. I am running 3.8.0-31 at the moment, without any issues. This, though, is not recommended by the eOS team and should something go wrong, you will be on your own.

One of the best post-installation resources on elementaryOS is the elementaryupdate.com site. They conclude their post on what more you can do to customize and update the OS after installing the current version (Luna) with a recommendation to upgrade the kernel to raring-lts. If you do this on the Acer Aspire V5-431, you will break your Broadcom BCM43228 (14e4:4359) driver, as the bcmwl-kernel-source module will not build on the 3.8.0-29-generic kernel, and many hours of frustration will follow.

In short, stick to the 3.2.x series kernels till the eOS team suggests otherwise; they do recommend sticking to that series in this post. There are good reasons to move to the latest kernel, as a lot of things seem to work better — auto-dimming of the display, for one — but this kind of breakage is severe and it is a good idea to stay away from any kernel upgrades that don’t come through the software update process.

This is really one of the annoying things about using Linux on the desktop as you would expect something that worked out-of-the-box in an older version of the kernel to do the same in a much newer version. I fully understand the reasons why things work this way, but it is extremely poor user experience and even for someone like me, who is a bit better than the average user in figuring out these things, it is frustrating and a waste of time.

Revisiting Linux With elementaryOS, Acer Aspire V5

With the old Macbook getting on in age (it is an early 2008 model MacBook4,1), the move to find a replacement for it was always on the cards. The machine had served me well, travelling with me to different parts of India, including high-altitude passes in the Himalayas. Of late, even after a complete reinstall, the machine has been showing its age, and with persistent heating problems and lock-ups, the writing was quite clearly on the wall. I could get it repaired, which I eventually will, but the board only supports DDR2 and the memory is maxed out as it is at 4GB. The only other option is to upgrade to an SSD, fix the problems and hope for the best after that.

The primary candidate for the replacement was to go for the 13″ Macbook Air. After the millionth (failed) attempt to find a reasonably priced Linux laptop solution that just stayed out of the way, I was pretty sure that I’d have to stick to OS X and Apple, and have no choice but to gulp down the high premium that Apple charges for the fire-and-forget experience it is more than justifiably famous for. In the midst of all of this, I ran into this interesting so-called Linux laptop from Acer. It is called the Aspire V5-431 and I found a pretty decent price at Flipkart for it.

At this point, I must digress a bit about the non-Apple laptops. Dear god, some of them, especially the Lenovo ultrabooks, are such a ‘slavish’ ripoff of the Apple laptop line up. I can imagine smartphones looking much like each other these days. There are not too many different ways in which you can design a phone, but that’s not the case with laptops and it is really shameful the extent to which the copying happens here. I guess none of these copies are much of a threat to Apple in the market, so it is probably not worth suing the manufacturers for it, but it still is not a great thing to see. The V5-431 also suffers from a bit of this ‘inspiration’ problem, but it is hard to mistake it for an Apple unit.

The laptop comes pre-installed with Linpus Linux, which is instantly discarded by most users. But having a Linux laptop meant that I could have some degree of certainty that most of the bits and pieces would work well should I run some other Linux distro on it. It has been a while since I have used a Linux desktop as my main platform and it seems that while the underlying platform has changed a lot (and for the better), the user experience is still ghastly and inconsistent, featuring interfaces and UX that can only be created and loved by engineers.

That was when I came upon this project called elementaryOS. It is based on Ubuntu (the current version is built on Precise: 12.04.2), but an awful lot of work has gone into making the front end user experience clean, consistent and fast. It is hard to miss the very obvious OS X inspiration in a lot of the visual elements, but once you start using it a bit more, the differences start to show up, and they do that in a nice way. Linux on the desktop/laptop has been begging for something like this for years and I am really thrilled to see someone finally do it right. If you care to take apart the bits on top, you’ll find a familiar Ubuntu installation underneath, but you really should not bother doing that.

I have gone through some three re-installs of the OS so far, for various reasons. One thing you need to watch out for while sorting out eOS on the V5-431 is to stick to the 32-bit OS, as things get quite a bit crazy should you attempt to mix i686 and x86_64 platforms while using virtualization. The eOS 32-bit kernel is PAE-enabled, so you can use more than 4GB of RAM on the machine, but I would highly recommend sticking to 32-bit for everything (OS, Virtualbox, any guest OS) and you’ll not have a reason to complain. I discovered all of this the hard way, as my primary requirement is a working Vagrant installation on the laptop, and I eventually had to redo the base box in 32-bit (the original from the Macbook was 64-bit CentOS 6.4).

The experience with the laptop has been pleasant so far. I have ordered more memory (8GB, to be precise) and even at 2GB the machine feels a lot faster and more stable than the ailing Macbook. I will hold off on getting an SSD for now, as I feel the machine is quick enough for me at the moment and the extra memory will only make things better. After many attempts at customizing the interface, what I have realized is that it is best left alone. The developers have done a great job of selecting the defaults, and 9 times out of 10 the modifications you make are not going to improve on them. The only thing you’ll need is to install the non-free TTF fonts, enable them in your browser’s font selection and get on with the rest of it.

Other than that, the main issue is the color calibration of the monitor. The default install has a blue-ish tint and blacks don’t render true, which is infuriating on a glossy screen. I finally fixed the problem by calibrating the display on a Windows installation and pulling out the ICC profile from it. I’ll share the link to the profile at the end of this post; if you have the same machine and are running Linux on it, use it. It makes a world of a difference. You will have to install Gnome Color Manager to view the profiles.

After all of that, the machine seems quite a good deal for me. It does not heat up too much, is extremely quiet and weighs a bit over 2 kilos. The 14″ screen is real estate I appreciate a lot, coming from the 13″ Macbook. The external display options are standard VGA and HDMI. My primary 22″ monitor has only DVI-D and D-Sub inputs, so I’m waiting for the delivery of a converter cable to hook it up. The battery is not the best, though. Acer has cut some corners there, but you can’t have everything at such a low price. Even with the memory upgrade, the machine will still cost me less than a third of what a new Macbook Air (the base model, that is) does right now. I’m getting around 2.5 hours on real hardcore usage, which is not bad at all.

The stack is otherwise quite stable. It reads something like below:

  • Google Chrome
  • LibreOffice
  • Virtualbox
  • Vagrant
  • Sublime Text 2
  • Skype
  • Dropbox
  • VLC
  • Darktable

I’m not exactly a power user, and 90% of my work is done in a text editor, web browser and VLC, but the combination of eOS and the Aspire V5-431 is something I can easily suggest to a lot of people looking to break away from regular Linux/Windows/OS X, and that too at a good price. There is a new version of the laptop out with the next generation of the chip, but I have not seen any great benefits from that upgrade, which costs a bit more. You can spend that money on getting more RAM instead.

eOS is also a nice surprise, and it is a pretty young project. With time it will only get better and eventually become quite distinct from the OS X it currently resembles.