Deploying A Laravel App to Webfaction Using BitBucket

It is not uncommon to use a shared host to run a Laravel application and I have been doing that for a while on my preferred choice of shared host, Webfaction. This post will help you do the same without having to modify much about your application.

The first problem you will face is that the default PHP on Webfaction is still 5.2.x. This is not a problem when you are using the webapp itself, as the .htaccess file used by Webfaction will ensure that you get PHP 5.5.x, if you chose that as the app profile when you set it up.

The problem arises when you have to run artisan commands, either directly or as scripts from Composer. These will pick up the default PHP at /usr/local/bin/php rather than /usr/local/bin/php55, which is the correct one for our purposes.

To get around this you need to do a couple of things.

The first is to symlink /usr/local/bin/php55 to /home/username/bin/php.

The second is to download a copy of Composer to /home/username/bin/composer.

Once these two steps are done, add /home/username/bin to your PATH.
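Put together, and treating "username" as a placeholder for your own Webfaction account (adjust paths and your shell profile as needed), one way to do this from the shell looks roughly like this:

# create the bin directory and point "php" at PHP 5.5
mkdir -p /home/username/bin
ln -s /usr/local/bin/php55 /home/username/bin/php

# install Composer into ~/bin and rename composer.phar to composer
curl -sS https://getcomposer.org/installer | /home/username/bin/php -- --install-dir=/home/username/bin
mv /home/username/bin/composer.phar /home/username/bin/composer

# make sure ~/bin comes first on the PATH
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc
source ~/.bashrc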

Now we will install the Laravel app. I usually choose /home/username/apps as the location for this, since /home/username/webapps is controlled by Webfaction’s setup.

This post won’t cover how to set up a Bitbucket Git repo. There are plenty of great resources that will show you how to do that, but I will make the following assumptions:

1. Git will be set up to be accessed using SSH on Bitbucket.

2. You have set up key-based SSH access from your Webfaction account to Bitbucket.

3. Your Laravel application has been properly pushed into the repo.

These are the last few steps to be followed:

1. Clone your Laravel app into /home/username/apps/appname.

git clone git@bitbucket.org:xxxx/xxxx.git /home/username/apps/appname

2. Run a Composer update from inside the app directory to pull in all the dependencies.

cd /home/username/apps/appname
/home/username/bin/php /home/username/bin/composer update

3. Remove the .htaccess file in /home/username/webapps/webapp_name/

4. Symlink the contents of the public folder into the webapp folder.

ln -s /home/username/apps/appname/public/* /home/username/webapps/webapp_name/

5. Edit the .htaccess file in the public folder to add the following lines:

Action php55-cgi /php55.cgi
<FilesMatch "\.php$">
    SetHandler php55-cgi
</FilesMatch>

You are pretty much good to go after that. One thing to keep in mind is that if you add new files or folders to the public folder, you will have to symlink them into the webapp folder as well.

For a much more convenient two-step deployment, you can use Envoy to manage everything on the server.

My workflow is mostly:

1. Push code into Bitbucket.

2. Run an Envoy task that pulls the changes onto the server, runs composer dump-autoload and takes it from there. A rough sketch of such a task follows.
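For reference, a minimal Envoy task along those lines could look something like this (the server alias, host and paths here are placeholders, so adjust them to your own setup):

@servers(['web' => 'username@your-webfaction-server'])

@task('deploy', ['on' => 'web'])
    cd /home/username/apps/appname
    git pull origin master
    composer dump-autoload
@endtask

Running envoy run deploy from the local machine then takes care of the pull and the autoload refresh in one go.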

Big Move Of The Year: Migrating Team-BHP To E2E Networks

It is always a great feeling to bring two good organizations together and that has certainly been the case for me with Team-BHP and E2E Networks. Team-BHP is arguably one of the biggest automobile forums on the internet, run by a bunch of really passionate petrolheads, with an audience that, even with a restricted membership, is massive by any standard. E2E is the spiffy young hosting provider on the block, especially for sites that have a big chunk of their traffic originating from India, run by a bunch of geeks who make everyone’s life much easier by knowing, better than anyone else, what they are talking about when it comes to hosting infrastructure.

The Background

I had already done two projects with Team-BHP, both dealing with custom development of certain new features on the site. They have been one of the best clients I have worked with, being meticulous in what they do and, most importantly, knowing exactly what they want to get done, which makes a vendor/consultant’s job a breeze compared to the usual one-line brief that we often get to work with. The site had been hosted with WiredTree since 2009 and it had largely been a good experience, but things had slipped in recent times with the growing requirements, so they were on the lookout for a new home, preferably in India, as the majority of the traffic for the site originates from here.

Decisions about infrastructure at this scale are never straightforward. There are numerous factors to be taken into consideration, some of which are:

  1. Level of support (managed or unmanaged?)
  2. Cost of bandwidth
  3. Hardware SLAs
  4. Application-level support
  5. Traffic mix (is it evenly spread geographically or largely local?)
  6. Connectivity at the service provider’s end

Taking an informed decision about this is often not possible for most organizations, as it requires experience and knowledge that is usually quite specialized and not readily available in-house. This is the crucial gap bridged by a consultant like me. Moreover, a property the size of Team-BHP would normally have been in operation for at least 4-5 years, meaning the application stack often has a lot of legacy issues and complicated dependencies to handle. These have to be taken into account and best practices have to be rolled out where possible, without disrupting existing operations.

The Big Move

After evaluating various options, we decided to go with E2E Networks. They were already doing managed services for some substantial online properties from India and were also hosting properties of companies I knew personally, and the feedback had never been anything short of spectacular. I am also very partial towards top management in infrastructure providers who are reachable on the phone and know exactly what they are doing, and Tarun from E2E is a prime example of that. What also helped was the fact that E2E went out of their way, even before Team-BHP signed up with them, to fix a major problem that was bringing down the site while it was still on WiredTree. It is not often that you get to see something like that in the industry.

Finally, after much testing and some delays (caused by the rather ill-timed Heartbleed bug and a DNS reflection attack), we started moving the sites late this week and the final piece of the puzzle (the main forum) was moved to E2E today. As things stand, everything looks quite good and stable, and hopefully both Team-BHP and E2E Networks will have a long and fruitful association.

Why Not AWS?

An important question in this regard that I often face is, “why not AWS and a CDN?”. For one, AWS is not cheap, especially when you have to get decent, hands-off managed support for it. Insecure public-facing web servers are the bane of the cloud hosting world and unfortunately a staggering number of companies learn about this the wrong (and often costly) way in the end. A poorly secured box with multiple cores, 10+ GB of RAM and a 100 Mbit unrestricted port is any hacker’s wet dream. And there are just way too many of them out there.

As for a CDN, it is not a panacea for site delivery. The effectiveness of a CDN depends on a variety of factors. For one, other than Akamai and Bitgravity, most CDNs don’t have POPs in India, which means that your Indian traffic will be served by routing it out of the country. Secondly, they are quite expensive and don’t make sense till you push incredible amounts of traffic, which not many sites actually do. One of the reasons why we chose E2E was that they had decent peering (via Netmagic) to all the major networks in India, which made it a faster option compared to most CDNs.

Edit: As pointed out by Manu J on Twitter, both Cachefly and Cloudfront now have POPs in India.

The Ideal Infrastructure: Seamless Develop, Test & Deploy

In today’s world, most parts of developing, testing and deploying a website can be automated in a cost-effective manner. While the initial process of getting this in place can be complicated and time-consuming, it will save any organization time and money in the long run. Done right, it can also be aligned well with an organization’s business objectives. While it used to be really costly and difficult to accomplish this seamless process even five years ago, it is no longer that hard or expensive.

If you want to explore rolling this out in your organization, do get in touch and I’d be more than happy to help you out.

Google’s Mobile Woes: The Search Bar

With the exception of transactions, discovery is one of the best aspects of any business to get into. While both discovery and transactions are intermediary plays, the former tends to push much more volume than the latter. Perfect examples of such companies are Paypal, representing the transaction piece, and Google, representing the discovery piece when it comes to information.

It is always interesting to watch people use a computer, more for the manner in which even really tuned-in folks use Google as a quasi-DNS service. The vast majority of users I have observed refuse to use the address bar in the browser (leveraging the browser history and suggestions) and will instead open up a Google search page and enter the domain name there. This places a disproportionate level of power in the hands of Google and it is also one of the reasons why the company is so powerful in the information game.

The problem for Google is that they can’t replicate that one-stop-shop experience on mobile. Even on an Android phone, searching is not an activity that is done easily. On a computer, a good chunk of time is spent inside a browser. On a mobile phone, very little time is spent in a browser and vertical apps have little reason to include a generic search feature. In fact, it would be considered antithetical to how mobile apps are meant to be used.

It is not that Google has not tried. There is a persistent search bar on most Android devices and then there is Google Now, but neither lends itself to extensive searching compared to what users do on a desktop. There’s also the mobile version of Chrome, which is an excellent little browser, but it needs some serious firepower (processing & network speeds) to do its magic and the experience is awful on lower-end phones. Considering all of that, and looking at the manner in which mobile usage is skyrocketing and eclipsing laptop/desktop usage growth, mobile search volumes must be a considerable concern at Mountain View.

The problem is also that the mobile experience itself is not a singular one, but a group of silos, and it is heavily push-oriented. Computers tend to be devices where you have to provide the context for the usage; compared to mobiles, it is a very pull-driven experience. Mobiles, on the other hand, tend to push the context at you and the apps are increasingly becoming self-contained silos. It is almost like having a separate browser on the computer for each site you want to visit, with almost none of them providing an easy way to search using Google from within the app. And therein lies the problem for Google.

As if these issues were not enough of a problem already, the newer markets where mobile growth is most explosive are where Google has little influence as an intermediary. There are hordes of people in countries like India for whom the first experience of the internet is not on a computer, but on a mobile device. And as it happens, this first experience also tends to be WhatsApp or Facebook, cutting Google entirely out of the equation.

Thus, the problem for Google on mobile is not that vertical search will somehow eclipse horizontal search, but that access to horizontal search is a problem that is not going to go away. Mind you, this is the case when Google practically owns the mobile OS with the largest market share out there. What must be even more worrying for Google is that there is no easy way out of this problem. A user who spends 90% of her/his time on Line/Facebook/WhatsApp won’t start searching in a browser for something, as the context switch (apps rather than tabs) is inherently more expensive on mobile.

In a way, the tables are getting turned on Google on mobile, as the app that gets the user’s attention the majority of the time will hold all the cards in this game. Eventually, Google will have to wind up doing deals with the top 20 or more apps in each market to establish the distribution for search on mobile, as it has successfully done on the desktop. Which can only mean one thing for app makers — more money in the days to come.

Understanding Ramp-Up, Burn And Other Key Business Metrics

One of the common mistakes seen in business plans and projections is that entrepreneurs treat various key business metrics as big aggregate numbers. While this approach makes the plan easier to understand (example: addressable market of 3,000 units per year, convert 10% in year 1 at an average revenue of 100 per unit), it also glosses over significant complexities involved in acquiring customers, factoring in churn and other variables that play a key role in determining how far the business can go.

While it is true that no plan or projection can be 100% accurate, it is foolhardy not to make projections that can at least help organizations prepare for various scenarios, rather than be caught confused when faced with them. This post is based on a template that I normally use to model similar things. It is nowhere close to being detailed, nor is the scenario it portrays a realistic one, but it should give you a good idea of how to go about creating your own model. Consider it more a template than a finished model.

[Image: ramp_up_table_1]

Acme Corp Offerings

The table above describes the key offerings of our hypothetical company (Acme Corp). The company has five offerings, of which two are products and three are services. There is no particular reason why this mix is there other than that I wanted a decent spread of offerings. Of the lot, Service C is a big ticket item, which sells the least, while Service A, being the cheapest, sells the most. Again, for the sake of convenience, I’m not taking into account the addressable market for each offering, which is not a smart thing to do, but for now, we have to make do with it. We are also assuming that the company is being started with a 100,000 investment.

[Image: ramp_up_table_2]

Acme Corp Ramp-Up

The table above shows the ramp-up scenario we have in mind for the company. The cheaper offerings are predicted to grow in a somewhat linear manner, while the expensive ones are erratic in how they grow. We are taking major liberties with factoring in churn here, as we are working backward from the total unit sales for the year rather than considering how a customer’s actual lifecycle impacts the system. There are also no volume or pre-payment discounts taken into account, again for the sake of simplicity.

[Image: ramp_up_table_3]

Acme Corp Expenditure

The expenditure table is the one that sees the maximum liberties taken with the numbers. The dead giveaway is the ‘Average S’ (average salary) figure. In a realistic scenario, it never stays constant over a 12-month period as the headcount grows. The same is the case with rent. There is also a raft of other costs (connectivity, travel, legal and so on) that is not taken into account. Make sure you make those changes and represent them accurately if this exercise is to be of any real use.

[Image: ramp_up]

When you plot all those numbers on a graph, what shows up is that the most critical time period for the company is the 6-9 month period. Even though the organization has its first positive cash flow month in month four, it is only during month six that it starts a streak of positive cash flow months, and it is not until month nine that it actually turns in a profit, even though it is a tiny one. For the 12-month period the organization turns in a profit of 17,38,500. But this profit won’t be realized if the company cannot survive beyond the first six months.

These first six months are the period where angel/seed rounds are critical. The cash flow situation for the organization is negative through that time period and, even for the extremely cheerful model presented in this post, the company would go under in five months (or less) if it can’t raise anything above 310,000 during that time. The capital raised at this time only allows for basic validation that a market exists for the product/service at the price levels it is being sold at.
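To spell out the arithmetic behind that number (this is my generic framing of the cash flow chart, not a figure taken from the tables above), the cumulative cash position at the end of month m, and the minimum raise it implies, are roughly:

\[
\text{Cash}(m) = \text{Investment} + \sum_{k=1}^{m}\bigl(\text{Revenue}_k - \text{Expenditure}_k\bigr),
\qquad
\text{Minimum raise} \approx \Bigl|\min_m \text{Cash}(m)\Bigr|
\]

The deepest trough of that curve within the first six months is what sets the amount that has to be raised; in this model, that trough is what the 310,000 figure corresponds to.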

Breaking down the ramp-up to this level allows us to estimate which product or service is the one we should look to grow. A high ticket value service/product has a different sales cycle and support requirements compared to a low ticket value one. What complicates matters is the fact that, these days, disruption happens through pricing, which mandates larger scale and also considerably lengthens the road to profitability.

To conclude, I will stress again that what is presented in this post is an oversimplified picture, but it is a good starting point for doing projections and figuring out the kind of ramp-up that is required over time to make the organization a sustainable and profitable one.

Customer Acquisition In Online Media: The Newsletter

Over the past year or so I have switched to consuming a lot of content over email. Well, to be precise, email newsletters. The poor little newsletter has long been consigned to the status of a necessary relic, especially in news organizations and content publications. This started during the pre-post-PC era (I know it sounds funny and it is intentional), when mobiles were still primarily voice (rather than data) devices, RSS aggregators were for a niche audience and much of content consumption started at the primary gateway of a publication’s homepage.

Newsletters, at that point in time, added little value to the homepage-centric consumption pattern. Moreover, they were seen primarily as places to sell advertising inventory if you had huge subscription numbers, as an add-on to the primary ad slots on the website. Something like a buy-two-get-one-free kind of deal, a sweetener that cost the publisher nothing much and made the advertiser feel good. Since email-on-mobile was still not a widespread phenomenon, the majority of consumers accessed their email on laptops or desktops, limiting the visibility and utility of the newsletters.

Enter Data On The Move

The switch-over of handheld devices to being primarily data devices (that could also handle telephony) has been a game changer for every industry. I prefer to look at this change in the nature of the devices as a better distinction between the various eras in computing than the pre/post PC one. The mobile phone, for a large chunk of its life, was a device that handled telephony and telephony-related functions. The switch-over turned them into generic computing devices that could handle wireless data natively and efficiently, while relegating telephony to just one of the many applications the device could run.

Death Of Branding And Context

This development dovetailed nicely with the emergence of social networks, whereby content was suddenly stripped of the context and branding at the point of origin. In the pre-social/mobile world, a consumer’s path to a particular piece of content was clearly defined. For example, this would mean (more often than not) I would know that I am reading an opinion piece on a particular publication because I went seeking out something specific to read on that publication’s website.

The main contexts for me in that example are 1) a publication that I like to read 2) a section/topic that is of interest to me and 3) a visual representation (design etc.) that is familiar to me. Part of the reason why some content properties can command a premium in advertising rates is because of this degree of certainty that is provided about the context for their audience. The emergence of social and omnipresent data has decimated this certainty.

The growth curve of Facebook and Twitter (and other niche social properties) is captured best in the referral section of the audience numbers for content websites. Save for the gated and private networks, the top source of traffic for almost every site now is social, with organic search and direct traffic below it. Contrast this with the pre-social era, where direct was the primary driver of traffic, followed by organic search.

Even within social there is no predictable path. The publication’s own pages on the platforms may drive the traffic. The traffic may come from a much-followed curator’s page. It may come from a link going viral, which means tens of thousands of pages may be generating that traffic.

Why Email Newsletters?

The greatest downside of these developments in social and mobile for content websites is that they no longer have constant engagement with their audience, as represented by direct traffic. And it is only going to drop further as the volume of content and the ability to publish it ramp up, driving more people into the hands of social and content aggregators. The resulting loss or alteration of context (ranging from appreciation, to ridicule and a variety of other not-so-nice things) also impacts advertising options, which in turn negatively impacts the viability of the business itself in the long run.

This is where the humble newsletter becomes a key factor. One application that has weathered all this data and social onslaught is the old school thing called email. Strangely, email has wound up being an off-app notification aggregator of sorts, emerging as a high-engagement app of its own. And unlike earlier times, when email was accessed mostly through browsers on laptops and PCs, it is now heavily used on mobile devices. Some of the key numbers regarding the use of email on mobiles read like this:

  • We spend 9 minutes a day on email via a mobile device, which is 7.6% of the total 119 minutes we use our phones per day. O2 – “Mobile life report”, UK (2013)
  • Mobile email opens grew by 21% in 2013, from 43% in January to 51% in December. Litmus – “Email Analytics” (Jan 2014)
  • More email is now read on mobile than on a desktop email client, with 51% of email opened on a mobile device. Litmus – “Email Analytics” (Jan 2014)

You can read more of those stats in this excellent post on EmailMonday. And these are numbers that should make every content producer sit up and take notice.

It is not that nobody is taking email seriously. As pointed out by Nikhil in a recent offline conversation, it is a good source of revenue for some of the trade publications. Similarly, e-commerce sites make extensive use of email as a sales funnel. The former is more of a fire hose approach, while the latter — e-commerce — has many years of evolution in both methodology and technology that enables it to segment and target customers effectively for acquisition and retention. Nothing similar exists in the content domain.

What Should Publications Do?

Firstly, they should consider the audience as customers of a product they are selling. The product here is content, which has a tiny ticket size compared to other (especially transaction-oriented) businesses. The desired outcomes here are a) acquisition b) retention and longer-term engagement c) transaction. For content plays, the juicy bits are in (b), as (a) is too volatile a number to reliably build anything on. (c) is also a hard one for most, as the options are limited to subscriptions, affiliate models or events.

Secondly, they need to have clear-cut retention strategies for the different audience segments. Presenting the same recommended articles or email sign-up forms to all first-time users is not the smartest way to go about retaining a horde of new visitors from a link that has gone viral. I can bet my bottom dollar on the assertion that only a tiny percentage of content publishers anywhere have a handle on conversion percentages from the last viral spike they experienced. This is an unacceptable situation if survival is key for you.

This is also where email finds a lot of value in building an engaged audience, one where the publisher has at least some modicum of control over the context. But, to get started on that path, publishers have to both market their mailers and put them together better. While automated solutions like Feedblitz are easy to integrate, they also generate incredibly big blind spots. While email can work as a high-engagement platform, it can also quickly wind up in the death folder (spam) or remain unread if you don’t make the best of the tiny window of opportunity a consumer gives you.

It is vital to recognize that the email context is different from anything else. As a result, you have to re-purpose content for it. In the email app, you are not looking for a quick fix. Other than spam, every email in that inbox already has an established relationship with the reader. It is the publisher’s responsibility to leverage that relationship and trust to meet the aforementioned objectives.

Lastly, it is important to understand the numbers. What are the open rates and referrals from your email campaigns? What is the bounce rate from the email like? Which form factor represents the largest consumption percentage? Is your email layout responsive?

All these points only scratch the surface of a good email strategy for publications. While I hope that most publishers already have a strategy in place that covers all this and more, the reality is that most would struggle to answer even basic questions about their email strategy. Even so, right now is a good time to start work on it and leverage a tool that allows for persistent engagement, in a world where prolonged engagement is nearly impossible to find.

UIDAI, NIC And India’s Data Security Nightmare

Should the worst happen to India’s official information technology infrastructure, AS4758 is a term that will feature prominently in it. The term denotes an autonomous system number (ASN), a unique name/number for a network that is used for routing traffic over IP networks, and AS4758 is operated by the National Informatics Centre (NIC). This prefix represents a vast majority of the servers and sites (the 164.100.0.0 – 164.100.255.255 IP address range) operated by the NIC. Some of the key sites operating from this network include UIDAI, the website of the Chief Electoral Officer, Delhi, and the NIC Certifying Authority. These three are just a minor part of the vast array of sites and services that cover everything from the personal information of the citizens of the country to key information about the government itself.

This post is one that I have been putting off writing for a while. The main reason is that it is not right to identify weak points in our key IT infrastructure in such a public manner. But the speed with which we are going ahead to centralize a lot of this information, without thinking through the requisite safeguards, is an issue that overrides that concern. Improperly secured, this information is a grave risk to everyone, including the government. And from the evidence seen in public, there is not adequate knowledge or expertise within the system to even take a call on what constitutes adequate security for an undertaking this grave in nature. The secondary reason is the inadequacy of the underlying technology used to mine this information. It is immature and not accurate enough, and it will lead to a flood of false positives in a system where the legal machinery itself is under-equipped to make key distinctions when it comes to the evidence that supports the case made by a false positive.

Another point to note is that I am hardly a security expert; the little that I know is what I need to know to keep my applications secure. Whatever I have seen is a tiny percentage of what is available for everyone to see. Information security has become such a complicated and specialized field that it is no longer enough to know some of the factors involved in keeping an application and its infrastructure secure from prying eyes. I would not dare to certify a client website/application as secure based on my own knowledge. I would rather get a specialized security firm to do that, even if they cost a lot of money. The important bit here is that if I can see these issues, someone with malicious intent can see a hundred other things that can be used to gain unauthorized access.

All Eggs In One Basket

Coming back to AS4758, it is a case of keeping too many eggs in one basket. From the outside, it looks like multiple vendors have access to the servers on that network. Forget forcing users onto SSL-enabled versions of the sites, most of them don’t even offer that as an option. This is true of both the UIDAI website and the Delhi CEO’s website, where users have to enter personal information to retrieve more personal information. A compromised machine on the network can easily listen to all network traffic and silently harvest all this data without anyone knowing about it.

A year ago, NISG, which is one of the key service providers for the NATGRID and UIDAI projects, was running its website on an old Windows desktop (Windows XP or 97, if I remember correctly). Thankfully, NISG seems to have moved to a Linux machine recently. Also, the NISG set-up is not hosted within the NIC’s network, so the possibility of damage from that machine would have been comparatively lower. Though we will never know for sure.

That said, even being on different networks won’t provide iron-clad security if you don’t design networks, access protocols and authentication as the first order of business. Done as an afterthought, it will never be as effective as it needs to be. Agencies often require data from each other to be mashed up (example: overlaying UIDAI data over NATGRID data) and this is often managed at the protocol level by restricting access by IP. In the hypothetical case of the NISG server being allowed access to UIDAI data and the former being compromised, you have a scenario where even the most secure UIDAI data center will leak information due to a compromise in another network.

Cart Before Horse

A moot point here is the assumption that the UIDAI infrastructure is secure enough in the first place. An NISG requirement for a data center security and risk manager position does not give us confidence in that assumption one bit. As the saying goes, the chain is only as strong as its weakest link and in this case, it seems that security is an afterthought. Part of the problem is that there is not enough experience within the government machinery to even determine what is secure enough. A simple rule about getting work done by someone is that you need to know, better than the person you are engaging to get that work done, what you are looking to get done. We just don’t have that in place in India at the moment.

These systems need to be designed primarily with security in mind and that does not seem to be the case. My fear with these systems is not so much that the government itself will misuse the data (which is a valid and important concern for me), but that it will be quietly pilfered away by foreign players and nobody would know about it. Having such information about all of the citizens of a country opens up millions of avenues for malicious players to recruit people to their cause, as all those people become potential targets for blackmail. Since we are going to collect information about everyone in the country, those who can be blackmailed range from the richest and most powerful to the poorest and the weakest. And the best part is that what exposes people to blackmail need not even be illegal behaviour; it can be perfectly legal behaviour that affects the social and professional standing of an important person.

We are going to present all of that information to interested parties with a nice bow on top.

Access, Identity, Authentication, Logging

  1. Any secure system will require you to control access to the resource as a whole and/or parts of the resource itself. This planning has to start from physical access to the core and nodes that access the core and it has to then take into account the applications that will provide access to the information and the applications that will access this information from the nodes.
  2. Any secure system will have a clear policy in assigning identities to people who can access those resources. This needs to be consistent across the core and the nodes. This makes the system rather inflexible and a pain to operate, but it is necessary to mitigate even the weakest of attacks.
  3. Any secure system will have a clear mechanism for authenticating the identity of a valid user in the system. There cannot be any backdoors built into such a system, as it has been proven time and again that backdoors become a point of major weakness over time.
  4. Any secure system will log all actions at all levels in the system and establish triggers for any out-of-band activity that covers even legitimate use.

The above four points are just an amateur attempt by me at defining the outlines of a reasonably secure system. A proper attempt at this by a real security professional will have a lot more points and go into a great deal more detail. But these points should give you a rough idea about the complexity involved in designing security for systems like these. You simply cannot slap security on top as an afterthought here.

Mining Nightmares

Which brings us to the issue of accuracy in data mining for initiatives like NATGRID.

Personally, I do believe that there is a valid case for governments to either collect or have access to information of any kind. What I do not like is unfettered collection, mining and access and zero oversight on any of those processes.

The reason why mining big data as a sort of Google search for suspicious activity is a terrible idea is simple: it does not work accurately enough to be of use in enforcement. The same technology that results in mis-targeted marketing phone calls and ads that are irrelevant to you is what is going to be used to determine whether a person or a group of people is likely to do bad things. Even in marketing or advertising it works with an appalling rate of failure; using it in intelligence, surveillance and enforcement will lead to an ocean of false positives and wind up putting a lot of innocent people behind bars for no good reason.

Even worse is the fact that the legal system itself has such a weak grasp of these matters that appeals are likely to fall on deaf ears, as the evidence is likely to be considered gospel when there is no understanding available within the system that can say otherwise. And then there is the potential for real abuse — not limited to planting evidence through spyware — that can ruin the lives of anyone and everyone.

Conclusion

Our approach to security and centralized information collection is terrible beyond what can be expressed in words. It needs to be stopped in its tracks, reviewed closely and redesigned from the ground up, with security as the first objective and data collection as the final one. We need to codify access laws for data collected in this manner and ensure that all of it does not reside in a single place, with access to a complete picture available only in the rarest and most exceptional of circumstances. What is happening right now is none of that and I am afraid we will find that out in the most painful manner in the coming years.

Fix For Call Volume Bug In Moto G

The Moto G has been a fantastic phone so far, except for one small bug which becomes a big irritant. If the phone is used with a headset (wired or bluetooth) and you take it off, the call volume will drop to really low levels, making it hard to hear what is being said on the other side. A reboot usually fixes the problem, but that is not an ideal solution.

The other, simpler fix is to reduce the volume while in a call and then max it out. What usually happens in a situation like this is that the user will only try to push the volume to the maximum (which the phone usually already is at) rather than reduce it. The fact that you have to first reduce the volume and then increase it for the fix to work makes it very likely that this is a software issue rather than a hardware one.

Update: Motorola has released an OTA update (174.44.1) for the single-SIM GSM version that fixes this bug. There is no word on when it will be rolled out to the other models.

Lifestyle As A Luxury

As you can see, the title is an obvious play on ‘Luxury Lifestyle’. After five years of working on my own, what I have come to realize is that I have a lifestyle that is considered luxurious, mostly by people who work a regular job. Luxury, in this context, is not the ability to afford a fancy dinner every evening, but the ability to go for a walk or a run in the evening when the streets are starting to fill up with rush hour traffic. It is the ability to take half the day off on a weekday to have a nice lunch by yourself out in the winter sun, or to catch a movie all on your own.

Like other luxuries, this one does not come for free. I have had to give up what most would call a regular life — owning a home, a family with kids etc. — to support this lifestyle. To be fair, it is not an exact correlation or causation, as there are other factors that have played a part too. I did struggle through most of the five years on my own, trying to bridge the gap between what I don’t have and what everyone else seemed to have, only to realize recently that it is a gap that remained unfilled not for a lack of ability, but for a lack of willingness to fill it.

Like other luxuries, as long as it delivers contentment and feels right, the price is always right. I, for one, find it hard to own something extremely expensive. I’m one of those people who don’t merely own things; they are owned by the things they own. Thus, even the thought of owning a luxury car (not that I could probably ever afford one) is an unsettling one for me. I would probably love to rent one some day and experience it for a short period of time, but not own one. For someone who loves owning one, doing exactly that is the way to go forward. You don’t owe anyone an explanation or justification for what makes you truly happy.

That said, luck is a significant part of being able to live this lifestyle, unless you are someone who is extremely good with their financial planning. I am not one of those people. This is partly because of one of the bizarre outcomes of subsisting on very little money when I moved to Delhi. When the time came when I had more than what I needed, it really didn’t bring me much joy, especially as I tried to buy my way into respect, consideration and love. Money, for me, is something that is necessary to have at a basic level. It is nearly impossible to live without it, or without someone who has enough of it to take care of you.

But, I digress.

You need a good share of luck, as being unwell or getting into a serious accident can seriously dent even substantial bank accounts. No matter how careful you are, or how gifted you are, the fact is that you cannot control most of what happens to you. If things do go badly wrong, a lifestyle like mine won’t be possible. The corollary is that even the most accounted-for and provided-for existence cannot account for or provide for all eventualities. Should a massive global market crash happen, odds are that I, a newly minted millionaire and the beggar on the road will all be in the same boat, paddling up the river of survival.

Building A Digital Product: Part I

There used to be a time when building and launching a digital product was a straightforward affair. The steps were something like this:

  1. Start with a rough idea of what you were looking for.
  2. Find someone who could design and build it for you.
  3. Find people who would help you run it and go live.
  4. Find ways to market the product.
  5. Find ways to sell the product.

Other than the really big players, most regular Joes would handle most of the steps on their own or, in some extreme cases, all of them.

In the last five to eight years those steps have been shredded to bits, thrown into the dustbin and replaced with a set of steps that bear no resemblance to the earlier ones.

Most of this disruption can squarely be blamed on mobile computing. The revolution that started with telephonic devices being able to access bits of textual data (read SMS) was turned on its head when the same devices were transformed into data devices that could also do telephony as one of the many things they could do.

The other significant development that has caused the playbook to be thrown out is the commoditization of many of the moving parts used to build a digital product. Everything from databases, to job queues, to logging, to any other part of the technical stack now has an array of plug-and-play software service platforms that a new product can leverage from day one.

In the early days, teams had to develop everything — from email subscription systems, to delivery systems, to logging infrastructure — to get going. With all these new services, product builders are less builders and more integrators these days.

While this has created an entirely new universe of opportunities and possibilities, it is also responsible for creating a lot of confusion for companies and individuals looking to build products.

What this series will attempt to do, in this first part, is to bring some structure to the steps and elaborate on them a bit, with the aim of reducing the amount of confusion in the market around them.

I have no illusions that this will be a definitive list, as there are parts of the stack and ecosystem I am completely unaware of. My idea is to fill in the gaps that I can, and I’ll be more than happy to take in suggestions about what else I can cover here.

I am going to tackle the more technical aspects in this post:

Design: Designs are best approached first with storyboards. The storyboards are used to create process flows. The process flows lead to wireframes. The wireframes lead to the final design.

You can skip all of these steps and go directly to a design, but the odds are that you will struggle at a later stage to force-fit that disciplined a process into an existing system that has grown without it.

What is more important, short-term gain or long-term pain? Make your pick.

Development: The choice of framework/language to build on is made at this stage. Unless you are someone who knows technology very closely, avoid using the latest fancy framework in town.

You have to establish coding standards, documentation standards, bug tracking, version control systems and release management processes.

Testing: Set up both automated and manual tests to address both logic and real-world usage. Testing infrastructure built right will include a good set of unit and behavioural tests and a continuous integration framework that will catch most errors during the build phase itself.
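As a sketch only (the specific tools named here, Composer, PHPUnit and Behat, are assumptions for a PHP-style stack, not a prescription), the build step of a continuous integration setup can be as small as a script that fails the build on the first error:

#!/usr/bin/env bash
# Hypothetical CI build step: install dependencies, then run unit and behavioural tests.
# Any command exiting with a non-zero status fails the build.
set -euo pipefail

composer install --no-interaction --prefer-dist
vendor/bin/phpunit
vendor/bin/behat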

Deployment: No (S)FTP. Simple. Deployment options available these days range from the simple to the ridiculously complicated. It gets harder when you have to update code on a pool of application servers that need a rolling update/restart cycle.

The more challenging part here is to abstract away this part of the stack behind a simple interface that developers can use. You cannot and should not expect developers to debug problems in the deployment infrastructure.
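For illustration only, and not as a recommendation of any particular tool: a rolling update over a small pool of application servers can be as simple as a loop like the one below (the host names, paths and the php-fpm reload step are all assumptions that would need to be adapted to the actual stack).

#!/usr/bin/env bash
# Hypothetical rolling deploy: update and reload one app server at a time.
set -euo pipefail

SERVERS="app1.example.com app2.example.com"      # placeholder host names
APP_DIR="/home/username/apps/appname"            # placeholder application path

for host in $SERVERS; do
    echo "Deploying to $host"
    ssh "$host" "cd $APP_DIR && git pull origin master && composer install --no-dev"
    ssh "$host" "sudo service php-fpm reload"    # assumed app process reload step
done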

Distribution: A local CDN or an international one — which is the right one to use? Should I use a CDN at all? Recently, a company I spoke to had a response time to their origin server that was one-fifth of what they were getting from their CDN. They had gone with the CDN to leverage cheaper bandwidth, a classic case of cost optimization in the wrong place.

Is Cloudfront the right solution? Can my preferred CDN provider handle wildcard SSL termination at a reasonable cost? How costly is it to do a cache purge across all geographies? Is it even possible? Is it important to purge CDN caches? Is a purge important to avoid compliance hurdles for some obscure requirement in my market of choice?

Mobile-specific Parts: Native, cross-platform or HTML5? Do I need a mobile application at all? Which platforms should I target? What is the minimum OS level that I should support on each of those platforms? How do I align those decisions with the target audience I am going to address?

Outbound, non-consumer-facing Services: Should I expose any of my internal data through a developer-facing API? What should I use to expose that API? Do I build it on my own or do I use a hosted platform like Apigee? What sort of authentication should I use? What sort of identity management should I use? Should I even try to split identity and authentication into two different services?

Inbound, non-consumer-facing Services: What do I use to handle data that I fetch from other sources? How do I ensure that I cache my requests to respect rate limits? What is a webhook? How do I go about implementing one?

Replication & Redundancy: What is the maximum acceptable downtime for my application? Is there a business case for a multi-DC deployment? How extensive does my disaster recovery plan have to be?

AWS, Rackspace, or a good old dedicated rack in a datacenter? Should I use Glacier? What should I use for DNS management?

Analytics & Instrumentation: DAU, MAU, WAU — what all do I have to track? Are bounces more important than acquisition? Is acquisition more important than repeat transactions? How do I bucket and segment my users?

How do I measure passive actions? Should I start tracking a minor version of an otherwise less-used browser as my javascript error tracking reports are showing that the current release is breaking critical parts for my most valuable demographic who use that exact obscure browser?

Wait, I can track client side javascript errors?

Conclusion

As you can see, the list raises more questions and provides no answers. This is intentional as there is no one-size-fits-all answer for these questions. Even within specific company lifecycle segments (early stage, stable start-up, established company), the internal circumstances vary from company to company.

This list is more a starting point than a destination in itself. Use it to build a better framework that is suited to your organization and your product. And if you need more help, just ask!

Moto G Review

Having now spent close to a month using the Moto G, I can sum up the device in one word — fabulous. The device has not been officially launched in India. I was fortunate to have been gifted one by someone living in the US and it cost the regular $199 retail price there. If they manage to price it under Rs 12,000 (and closer to Rs 10,000) in India, Motorola could have a winner on its hands.

What I like about the device:

  1. Battery life: I use location services during a major part of the day, which, previously, was a huge drain on Android devices. I am easily getting 24+ hours on a full charge and have never used the battery saver feature.
  2. Android Kitkat: The slight lag (more like a stutter than lag) that used to be there on even the high-end Android devices is gone. Even with 1GB RAM, the transitions are buttery smooth and comparable to iOS.
  3. Nearly-Stock Android: There are a couple of Moto-specific apps in there, but nothing that gets in your way. Rest is pure Android all the way.
  4. Price: You can pick up at least three of these babies for less than the price of a Samsung S4 or an iPhone 5C.
  5. The main camera is pretty decent. Just remember not to shoot with it in low light.

What I don’t like:

  1. No external SD card support. I don’t store much media or click a zillion pics, but I’m already down to 9 GB left on the device.
  2. 1 GB of RAM.
  3. There’s a bug (not sure whether it is software or hardware) that can cause call volume to drop after using a wired or bluetooth headset. Can be fixed with a reboot, but annoying all the same.
  4. USB port is at the bottom of the phone. Never liked that positioning. It is a personal preference, though.

While I really like the device, you do need to keep in mind that I don’t fit the profile of the average smartphone user for the following reasons:

  1. Limited apps usage: I don’t use Facebook, Twitter, G+ and most other social networking apps. I do use Whatsapp and BBM, but they don’t seem to eat up as much battery and processing as the first three.
  2. I don’t game at all on the device. There’s a chess app that I keep for the odd rainy day, but have not used it more than twice or thrice in the past year.
  3. I don’t watch much video on the device other than the odd YouTube clip.
  4. Reading has moved completely to a 7-inch Lava tablet.
  5. My data connection is permanently set to EDGE. I don’t use 3G.

Before I picked up the Moto G, I was using the Micromax A116, which had been a pleasant experience. After using it for almost a year, I’d rooted it and switched to a ROM that had thrown away a lot of the unnecessary bits and made it nearly stock Android. Even that phone was giving me a good 24 hours of usage on a single charge. The reason I wanted to try something else was that the build quality is extremely poor and I doubt it would be able to take another year or two of abuse. There are also little niggles like the problematic GPS lock, the lack of a compass and occasional issues with the filesystem.

The Moto is my first Google Android phone, which is a route I have been looking to go down for a while now. The migration assistant provided by Motorola (it works over Bluetooth) is quite good and I could switch devices (with data and apps) in a couple of hours. The device does only MTP, so it cannot be mounted as a regular volume on computers. Since I’m on Linux, I use gMTP, which can misbehave a bit at times. The fallback is Bluetooth, which is a disagreeable option when it comes to speed.

Overall, Kitkat seems to have improved how Android handles the idle state. This has resulted in better battery life, for me at least. There are rooting guides and ROMs available for the interested parties, as usual, on XDA, but I’m pretty happy with the way the device is right now. So I don’t see rooting and custom ROMs happening anytime soon. I like my devices to function flawlessly and stay out of the way and the G increasingly looks like a good candidate for that. I’m well past my weekly flashing phase on my phones and a lack of excitement is a welcome change on that front.