Understanding Ramp-Up, Burn And Other Key Business Metrics

A common mistake in business plans and projections is that entrepreneurs treat key business metrics as big aggregate numbers. While this approach makes the plan easier to understand (for example: an addressable market of 3,000 units per year, converting 10% in year one at an average revenue of 100 per unit), it glosses over the significant complexities involved in acquiring customers, accounting for churn and handling the other factors that play a key role in determining how far the business can go.

While no plan or projection can ever be 100% accurate, it is foolhardy not to make projections that can at least help an organization prepare for various scenarios, rather than be caught confused when faced with them. This post is based on a template that I normally use to model such things. It is nowhere close to being detailed, nor is the scenario it portrays a realistic one, but it should give you a good idea of how to go about creating your own model. Consider it more a template than a finished model.

[Table 1: Acme Corp offerings]


Acme Corp Offerings

The table above describes the key offerings of our hypothetical company (Acme Corp). The company has five offerings, of which two are products and three are services. There is no particular reason for this mix other than that I wanted a decent spread of offerings. Of the lot, Service C is the big-ticket item and sells the least, while Service A, being the cheapest, sells the most. Again, for the sake of convenience, I am not taking into account the addressable market for each offering, which is not a smart thing to do, but for now we have to make do with it. We are also assuming that the company is being started with an investment of 100,000.

[Table 2: Acme Corp ramp-up]

Acme Corp Ramp-Up

The table above shows the ramp-up scenario we have in mind for the company. The cheaper offerings are predicted to grow in a somewhat linear manner, while the expensive ones grow erratically. We are taking major liberties with churn here, as we are working backward from the total unit sales for the year rather than considering how a customer's actual lifecycle affects the system. No volume or pre-payment discounts are taken into account either, again for the sake of simplicity.
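A ramp-up like this is straightforward to model in code: a unit price per offering and a per-month unit schedule, multiplied out into monthly revenue. A minimal sketch in Python follows; the prices and schedules below are made up for illustration and are not the figures from the tables.

```python
# Hypothetical prices and monthly unit schedules (the real figures
# are in the tables above; these are placeholders for illustration).
prices = {"Service A": 100, "Product A": 500, "Service C": 5000}

units_per_month = {
    "Service A": [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120],  # linear
    "Product A": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],              # linear
    "Service C": [0, 0, 1, 0, 0, 2, 0, 1, 0, 0, 3, 1],                 # erratic
}

# Revenue per month = sum over offerings of price x units sold that month
monthly_revenue = [
    sum(prices[o] * units_per_month[o][m] for o in prices)
    for m in range(12)
]
```

Once the model is laid out like this, changing an assumption (a price cut, a slower ramp) is a one-line edit rather than a rebuild of the whole projection.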

[Table 3: Acme Corp expenditure]

Acme Corp Expenditure

The expenditure table is the one that takes the most liberties with the numbers. The dead giveaway is the 'Average S' (average salary) figure. In a realistic scenario, it never stays constant over a 12-month period as the headcount grows. The same is true of rent. There is also a raft of other costs, like connectivity, travel and legal fees, that is not taken into account here. Make sure you make those changes and represent them accurately if this exercise is to be of any real use.

[Chart: Acme Corp 12-month cash flow]

When you plot all those numbers on a graph, what shows up is that the most critical time for the company is the six-to-nine-month period. Even though the organization has its first positive cash flow month in month four, it is only in month six that it starts a streak of positive cash flow months, and it is not until month nine that it actually turns a profit, albeit a tiny one. Over the 12-month period the organization turns in a profit of 17,38,500. But this profit won't be realized if the company cannot survive beyond the first six months.
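The two milestones discussed here, the first sustained streak of positive cash-flow months and the month where cumulative cash turns positive, can be computed mechanically once you have monthly net cash flows. A sketch, using illustrative numbers rather than Acme's:

```python
def first_positive_streak(cash_flows):
    """1-based month where an unbroken run of positive-cash-flow
    months begins (running through to the end of the period)."""
    for i in range(len(cash_flows)):
        if cash_flows[i:] and all(c > 0 for c in cash_flows[i:]):
            return i + 1
    return None

def breakeven_month(cash_flows, opening_balance=0):
    """First 1-based month where cumulative cash turns positive."""
    total = opening_balance
    for i, cf in enumerate(cash_flows):
        total += cf
        if total > 0:
            return i + 1
    return None

# Illustrative monthly net cash flows, in thousands (not Acme's figures)
flows = [-60, -50, -40, 10, -20, 15, 25, 40, 60, 90, 120, 150]
```

With these numbers, the positive streak starts in month six even though month four was briefly positive, which is exactly the distinction the graph makes visible.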

These first six months are the period where angel/seed rounds are critical. The cash flow situation for the organization is negative throughout that time, and even in the extremely cheerful model presented in this post, the company would go under in five months (or less) if it cannot raise at least 310,000 during that time. The capital raised at this stage only allows for basic validation that a market exists for the product/service at the price levels at which it is being sold.

Breaking down the ramp-up to this level allows us to estimate which product or service we should look to grow. A high-ticket service or product has a different sales cycle and different support requirements compared to a low-ticket one. What complicates matters further is that these days disruption happens through pricing, which mandates larger scale and considerably lengthens the road to profitability.

To conclude, I will stress again that what is presented in this post is an oversimplified picture, but it is a good starting point for making projections and figuring out the kind of ramp-up required over time to make the organization sustainable and profitable.


Customer Acquisition In Online Media: The Newsletter

Over the past year or so I have switched to consuming a lot of content over email. Well, to be precise, email newsletters. The poor little newsletter has long been consigned to the status of a necessary relic, especially in news organizations and content publications. This started during the pre-post-PC era (I know it sounds funny, and it is intentional), when mobiles were still primarily voice (rather than data) devices, RSS aggregators were for a niche audience and much of content consumption started at the primary gateway of a publication's homepage.

Newsletters, at that point in time, added little value to the homepage-centric consumption pattern. Moreover, they were seen primarily as places to sell advertising inventory if you had large subscriber numbers, as an add-on to the primary ad slots on the website. Something like a buy-two-get-one-free deal, a sweetener that cost the publisher next to nothing and made the advertiser feel good. Since email-on-mobile was still not widespread, the majority of consumers accessed their email on laptops or desktops, limiting the visibility and utility of newsletters.

Enter Data On The Move

The switch-over of handheld devices to being primarily data devices (that could also handle telephony) has been a game changer for every industry. I prefer to look at this change in the nature of the devices as a better distinction between the various eras in computing than the pre/post-PC framing. The mobile phone, for a large chunk of its life, was a device that handled telephony and telephony-related functions. The switch-over turned phones into generic computing devices that handle wireless data natively and efficiently, relegating telephony to just one of the many applications the device can run.

Death Of Branding And Context

This development dovetailed nicely with the emergence of social networks, whereby content was suddenly stripped of the context and branding of its point of origin. In the pre-social/mobile world, a consumer's path to a particular piece of content was clearly defined. More often than not, I would know I was reading an opinion piece in a particular publication because I had gone to that publication's website seeking something specific to read.

The main contexts for me in that example are 1) a publication that I like to read, 2) a section/topic that is of interest to me and 3) a visual presentation (design etc.) that is familiar to me. Part of the reason some content properties can command a premium in advertising rates is this degree of certainty about the context in which their audience encounters them. The emergence of social and omnipresent data has decimated that certainty.

The growth curve of Facebook and Twitter (and other niche social properties) is captured best in the referral section of the audience numbers for content websites. Except for gated and private networks, social is now the top source of traffic for almost every site, with organic search and direct traffic below it. Contrast this with the pre-social era, when direct was the primary driver of traffic, followed by organic search.

Even within social there is no predictable path. The publication's own pages on the platforms may drive the traffic. The traffic may come from a much-followed curator's page. It may come from a link going viral, which means tens of thousands of pages may be generating that traffic.

Why Email Newsletters?

The greatest downside of these developments in social and mobile for content websites is that they no longer have constant engagement with their audience, as represented by direct traffic. And it is only going to drop further as the volume of published content ramps up, driving more people into the hands of social networks and content aggregators. The resulting loss or alteration of context (ranging from appreciation to ridicule and a variety of other not-so-nice things) also limits advertising options, which in turn hurts the viability of the business itself in the long run.

This is where the humble newsletter becomes a key factor. One application that has weathered the entire data and social onslaught is that old-school thing called email. Strangely, email has wound up being an off-app notification aggregator of sorts, emerging as a high-engagement app in its own right. And unlike earlier times, when email was mostly accessed through browsers on laptops and PCs, it is now heavily used on mobile devices. Some of the key numbers regarding the use of email on mobiles read like this:

  • We spend 9 minutes daily on email via a mobile device, which is 7.6% of the total 119 minutes a day we use our phones. (O2, "Mobile Life Report", UK, 2013)
  • Mobile email opens grew 21% in 2013, from 43% in January to 51% in December. (Litmus, "Email Analytics", Jan 2014)
  • More email is now read on mobile than on a desktop email client; 51% of email is opened on a mobile device. (Litmus, "Email Analytics", Jan 2014)

You can read more of those stats in this excellent post on EmailMonday. And these are numbers that should make every content producer sit up and take notice.

It is not that nobody is taking email seriously. As Nikhil pointed out in a recent offline conversation, it is a good source of revenue for some trade publications. Similarly, e-commerce sites make extensive use of email as a sales funnel. The former is more of a fire-hose approach, while the latter has many years of evolution in both methodology and technology behind it, enabling e-commerce players to segment and target customers effectively for acquisition and retention. Nothing comparable exists in the content domain.

What Should Publications Do?

Firstly, they should consider the audience as customers of a product they are selling. The product here is content, which has a tiny ticket size compared to other (especially transaction-oriented) businesses. The desired outcomes are a) acquisition, b) retention and longer-term engagement and c) transaction. For content plays, the juicy bits are in (b), as (a) is too volatile a number to reliably build anything on. (c) is also a hard one for most, as the options are limited to subscriptions, affiliate models or events.

Secondly, they need clear-cut retention strategies for different audience segments. Presenting the same recommended articles or email sign-up forms to all first-time users is not the smartest way to retain a horde of new visitors arriving from a link that has gone viral. I can bet my bottom dollar that only a tiny percentage of content publishers anywhere have a handle on the conversion percentages from the last viral spike they experienced. That is an unacceptable situation if survival is key for you.

This is also where email finds a lot of value in building an engaged audience over which the publisher has at least some modicum of control of the context. But to get started on that path, publishers have to both market their mailers and put them together better. While automated solutions like Feedblitz are easy to integrate, they also create incredibly big blind spots. While email can work as a high-engagement platform, it can also quickly wind up in the death folder (spam) or remain unread if you don't make the best of the tiny window of opportunity a consumer gives you.

It is vital to recognize that the email context is different from anything else, and as a result you have to re-purpose content for it. In the email app, readers are not looking for a quick fix. Other than spam, every email in the inbox represents an established relationship with the reader. It is the publisher's responsibility to leverage that relationship and trust to meet the aforementioned objectives.

Lastly, it is important to understand the numbers. What are the open rates and referrals from your email campaigns? What is the bounce rate from the email like? Which form factor represents the largest consumption percentage? Is your email layout responsive?
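For those questions, the arithmetic itself is simple; the hard part is collecting the numbers. A small sketch of the basic campaign metrics follows. Note that definitions vary between email service providers, so treat these formulas as one common convention, not a standard.

```python
def campaign_metrics(sent, bounced, opened, clicked):
    """Core email campaign numbers. This convention uses delivered
    (sent minus bounced) as the denominator for opens and clicks."""
    delivered = sent - bounced
    return {
        "bounce_rate": bounced / sent,
        "open_rate": opened / delivered,
        "click_through_rate": clicked / delivered,
        "click_to_open_rate": clicked / opened if opened else 0.0,
    }

# Illustrative campaign: 10,000 sent, 200 bounced, 2,450 opens, 490 clicks
m = campaign_metrics(sent=10_000, bounced=200, opened=2_450, clicked=490)
```

Tracking these per segment, rather than for the list as a whole, is what lets you answer the form-factor and layout questions above.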

All these points only scratch the surface of a good email strategy for publications. While I hope that most publishers already have a strategy in place that covers all this and more, the reality is that most would struggle to answer even basic questions about their email strategy. Even so, right now is a good time to start working on it and leverage a tool that allows for persistent engagement, in a world where prolonged engagement is nearly impossible to find.


UIDAI, NIC And India’s Data Security Nightmare

Should the worst happen to India's official information technology infrastructure, AS4758 is a term that will feature prominently in the story. The term denotes the unique number (ASN) of a network used for routing traffic over IP networks, and AS4758 is operated by the National Informatics Centre (NIC). This prefix represents the vast majority of the servers and sites (the 164.100.0.0 – 164.100.255.255 IP address range) operated by the NIC. Some of the key sites operating from this network include UIDAI, the website of the Chief Electoral Officer, Delhi, and the NIC Certifying Authority. These three are just a minor part of a vast array of sites and services that cover everything from the personal information of the country's citizens to key information about the government itself.

This post is one that I have been putting off writing for a while. The main reason is that it does not feel right to identify weak points in our key IT infrastructure in such a public manner. But the speed with which we are centralizing all this information, without thinking through the requisite safeguards, is an issue that overrides that concern. Improperly secured, this information is a grave risk to everyone, including the government. And from the evidence available in public, there is not adequate knowledge or expertise within the system to even take a call on what constitutes adequate security for an undertaking this grave in nature. The secondary reason is the inadequacy of the underlying technology for mining this information. The tools are immature and not accurate enough, and they will produce a flood of false positives in a system where the legal machinery itself is under-equipped to make the key distinctions needed to evaluate the evidence behind a false positive.

Another point to note is that I am hardly a security expert; the little I know is what I need to know to keep my own applications secure, and whatever I have seen is a tiny percentage of what is visible to everyone. Information security has become such a complicated and specialized field that it is no longer enough to know some of the factors involved in keeping an application and its infrastructure secure from prying eyes. I would not dare certify a client website or application as secure based on my own knowledge; I would rather get a specialized security firm to do that, even if it costs a lot of money. The important bit is that if I can see these issues, someone with malicious intent can see a hundred other things that can be used to gain unauthorized access.

All Eggs In One Basket

Coming back to AS4758, it is a case of keeping too many eggs in one basket. From the outside, it looks like multiple vendors have access to the servers on that network. Forget forcing users onto SSL-enabled versions of the sites; most of them don't even offer that as an option. This is true of both the UIDAI website and the Delhi CEO's website, where users have to enter personal information to retrieve more personal information. A compromised machine on the network could easily listen to all network traffic and silently harvest this data without anyone knowing about it.

A year ago NISG, one of the key service providers for the NATGRID and UIDAI projects, was running its website on an old Windows desktop (Windows XP or 97, if I remember correctly). Thankfully, NISG seems to have moved to a Linux machine recently. Also, the NISG set-up is not hosted within the NIC's network, so the possible damage from that machine would have been comparatively lower. Though we will never know for sure.

That said, even being on different networks won't provide iron-clad security if you don't design networks, access protocols and authentication as the first order of business. Done as an afterthought, it will never be as effective as it needs to be. Agencies often require data from each other to be mashed up (for example, overlaying UIDAI data on NATGRID data), and this is often managed at the protocol level by restricting access by IP. In the hypothetical case where the NISG server is allowed access to UIDAI data and is then compromised, even the most secure UIDAI data center will leak information because of a compromise in another network.

Cart Before Horse

A larger question here is the assumption that the UIDAI infrastructure is secure enough in the first place. An NISG job requirement for a data center security and risk manager position does not inspire confidence in that assumption one bit. As the saying goes, a chain is only as strong as its weakest link, and in this case it seems that security is an afterthought. Part of the problem is that there is not enough experience within the government machinery to even determine what is secure enough. A simple rule about getting work done is that you need to understand what you want done better than the person you engage to do it. We just don't have that in place in India at the moment.

These systems need to be designed with security as the primary consideration, and that does not seem to be the case. My fear is not so much that the government itself will misuse the data (which is a valid and important concern for me), but that it will be quietly pilfered away by foreign players and nobody will know about it. Having such information about all the citizens of a country opens up millions of avenues for malicious players to recruit people to their cause, as all those people become potential targets for blackmail. Since we are going to collect information about everyone in the country, those who can be blackmailed range from the richest and most powerful to the poorest and weakest. And the kicker is that what exposes people to blackmail need not even be illegal behaviour; it can be perfectly legal behaviour that affects the social and professional standing of an important person.

We are going to present all of that information to interested parties with a nice bow on top.

Access, Identity, Authentication, Logging

  1. Any secure system will require you to control access to the resource as a whole and/or parts of the resource itself. This planning has to start from physical access to the core and nodes that access the core and it has to then take into account the applications that will provide access to the information and the applications that will access this information from the nodes.
  2. Any secure system will have a clear policy in assigning identities to people who can access those resources. This needs to be consistent across the core and the nodes. This makes the system rather inflexible and a pain to operate, but it is necessary to mitigate even the weakest of attacks.
  3. Any secure system will have a clear mechanism for authenticating the identity of a valid user. There cannot be any backdoors built into such a system, as it has been proven time and again that backdoors become a point of major weakness over time.
  4. Any secure system will log all actions at all levels in the system and establish triggers for any out-of-band activity that covers even legitimate use.
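As a toy illustration of points 1 and 4, checking access against centrally assigned identities and logging every attempt whether granted or denied, consider this sketch. The identities, resource names and permission table are invented purely for illustration.

```python
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("audit")

# Point 2: identities and what each may access, assigned centrally.
PERMISSIONS = {
    "officer-17": {"uid-records"},
    "analyst-02": {"aggregate-stats"},
}

def read_resource(identity, resource):
    allowed = resource in PERMISSIONS.get(identity, set())
    # Point 4: every attempt is logged, granted or denied, with a timestamp.
    audit_log.info("ts=%s identity=%s resource=%s access=%s",
                   datetime.now(timezone.utc).isoformat(),
                   identity, resource,
                   "granted" if allowed else "DENIED")
    if not allowed:
        raise PermissionError(f"{identity} may not read {resource}")
    return f"<contents of {resource}>"
```

The point of logging the denied attempts too is that they are exactly the out-of-band activity that should fire the triggers mentioned in point 4.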

The above four points are just an amateur attempt on my part at outlining a reasonably secure system. A proper attempt by a real security professional would have a lot more points and go into far greater detail. But these points should give you a rough idea of the complexity involved in designing security for systems like these. You simply cannot slap security on top as an afterthought.

Mining Nightmares

Which brings us to the issue of accuracy in data mining for initiatives like NATGRID.

Personally, I do believe that there is a valid case for governments to either collect or have access to information of any kind. What I do not like is unfettered collection, mining and access and zero oversight on any of those processes.

The reason why mining big data as a sort of Google search for suspicious activity is a terrible idea is simple: it does not work accurately enough to be of use in enforcement. The same technology behind mis-targeted marketing phone calls and irrelevant ads is what will be used to determine whether a person or a group of people is likely to do bad things. Even in marketing and advertising it works with an appalling rate of failure; using it in intelligence, surveillance and enforcement will lead to an ocean of false positives and wind up putting a lot of innocent people behind bars for no good reason.

Even worse is the fact that the legal system itself has such a weak grasp of these matters that appeals are likely to fall on deaf ears, as the evidence is likely to be treated as gospel when there is no expertise within the system to say otherwise. And then there is the potential for real abuse, not limited to planting evidence through spyware, that can ruin the life of anyone and everyone.

Conclusion

Our approach to security and centralized information collection is terrible beyond what can be expressed in words. It needs to be stopped in its tracks and reviewed closely and should be redesigned from the ground-up to keep security as the first objective and data collection as a final objective. We need to codify access laws to data collected in this manner and ensure that all of it does not reside in a single place and access to a complete picture is available only in the rarest and most exceptional of circumstances. What is happening right now is none of that and I am afraid we will find that out in the most painful manner in the coming years.


Fix For Call Volume Bug In Moto G

The Moto G has been a fantastic phone so far, except for one small bug that becomes a big irritant. If the phone is used with a headset (wired or Bluetooth) and you then take it off, the call volume drops to really low levels, making it hard to hear what is being said on the other side. A reboot usually fixes the problem, but that is not an ideal solution.

The other, simpler fix is to reduce the volume while in a call and then push it back to the maximum. What usually happens in this situation is that the user only tries to push the volume up to the maximum (where the phone usually already is) rather than reduce it first. That the fix requires first reducing the volume and then increasing it makes it very likely that this is a software issue rather than a hardware one.

Update: Motorola has released an OTA update (174.44.1) for the single-SIM GSM version that fixes this bug. There is no word on when it will be rolled out to the other models.


Lifestyle As A Luxury

As you can see, the title is an obvious play on 'luxury lifestyle'. After five years of working on my own, what I have come to realize is that I have a lifestyle that is considered luxurious, mostly by people who work a regular job. Luxury, in this context, is not the ability to afford a fancy dinner every evening, but the ability to go for a walk or a run just as the streets are starting to fill with the evening rush-hour traffic. It is the ability to take half a day off, on a weekday, to have a nice lunch by yourself out in the winter sun, or to catch a movie all on your own.

Like other luxuries, this one does not come for free. I have had to give up what most would call a regular life, owning a home, a family with kids and so on, to support this lifestyle. To be fair, it is not an exact correlation or causation, as other factors have played a part too. I struggled through most of those five years on my own, trying to bridge the gap between what I didn't have and what everyone else seemed to have, only to realize recently that the gap remained unfilled not for a lack of ability, but for a lack of willingness to fill it.

Like other luxuries, as long as it delivers contentment and feels right, the price is always right. I, for one, find it hard to own something extremely expensive. I am one of those people who don't merely own things; they are owned by the things they own. Thus, even the thought of owning a luxury car (not that I could probably ever afford one) is an unsettling one for me. I would probably love to rent one some day and experience it for a short period, but not own one. For someone who loves owning one, doing exactly that is the way to go. You don't owe anyone an explanation or justification for what makes you truly happy.

That said, luck is a significant part of being able to live this lifestyle, unless you are someone who is extremely good with financial planning. I am not one of those people. This is partly because of one of the bizarre outcomes of subsisting on very little money when I moved to Delhi. When a time came when I had more than I needed, it really didn't bring me much joy, especially as I tried to buy my way into respect, consideration and love. Money, for me, is something that is necessary at a basic level. It is nearly impossible to live without it, or without someone who has enough of it to take care of you.

But, I digress.

You need a good share of luck, as being unwell or getting into a serious accident can seriously dent even substantial bank accounts. No matter how careful or gifted you are, the fact is that you cannot control most of what happens to you. If things go badly wrong, a lifestyle like mine won't be possible. The corollary is that even the most accounted-for and provided-for existence cannot account or provide for all eventualities. Should a massive global market crash happen, odds are that I, a newly minted millionaire and the beggar on the road will all be in the same boat, paddling up the river of survival.


Building A Digital Product: Part I

There used to be a time when building and launching a digital product was a straightforward affair. The steps were something like this:

  1. Start with a rough idea of what you were looking for
  2. Find someone who could design and build it for you
  3. Find people who would help you run it and go live.
  4. Find ways to market the product.
  5. Find ways to sell the product.

Other than the really big players, most of the regular Joes would handle most of the steps on their own or, in some extreme cases, handle all the steps on their own.

In the last five to eight years those steps have been shredded to bits, thrown into the dustbin and replaced with a set of steps that bear no resemblance to the earlier ones.

Most of this disruption can squarely be blamed on mobile computing. The revolution that started with telephonic devices being able to access bits of textual data (read: SMS) was turned on its head when the same devices were transformed into data devices that could also do telephony as just one of the many things they could do.

The other significant development that has caused the playbook to be thrown out is the commoditization of many of the moving parts used to build a digital product. Everything from databases to job queues to logging, and any other part of the technical stack, now has an array of plug-and-play software service platforms that a new product can leverage from day one.

In the early days, teams had to develop everything, from email subscription systems to delivery systems to logging infrastructure, to get going. With all these new services, product builders are less builders and more integrators these days.

While this has created an entirely new universe of opportunities and possibilities, it is also responsible for creating a lot of confusion for companies and individuals looking to build products.

What this series will attempt, in this first part, is to bring some structure to the steps and elaborate on them a bit, with the aim of reducing the confusion in the market.

I have no illusions that this will be a definitive list, as there are parts of the stack and ecosystem I am completely unaware of. My idea is to fill in the gaps that I can, and I will be more than happy to take suggestions about what else I can cover here.

I am going to tackle the more technical aspects in this post:

Design: Designs are best approached by starting with storyboards. The storyboards are used to create process flows, the process flows lead to wireframes, and the wireframes lead to the final design.

You can skip all of the steps and go directly to a design, but the odds are that you will struggle at a later stage to force fit that disciplined a process into an existing system that has grown without it.

What is more important — short term gain or long term pain — make your pick.

Development: The choice of framework/language to build on is made at this stage. Unless you are someone who knows technology very closely, avoid using the latest fancy framework in town.

You have to establish coding standards, documentation standards, bug tracking, version control systems and release management processes.

Testing: Set up both automated and manual tests to address both logic and real-world usage. Testing infrastructure built right will include a good set of unit and behavioural tests and a continuous integration framework that will catch most errors during the build phase itself.
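As a minimal illustration of the unit-test half of that setup, here is what a small test case looks like with Python's built-in unittest. The function under test is invented for the example; a real suite would also include behavioural tests and run automatically in the continuous integration pipeline.

```python
import unittest

def apply_discount(price, percent):
    """Function under test (invented for the example)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_rejects_out_of_range(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run with: python -m unittest <module>
```

The CI framework's job is simply to run suites like this on every commit, so that broken logic fails the build instead of reaching users.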

Deployment: No (S)FTP. Simple. Deployment options available these days range from the simple to the ridiculously complicated. It gets harder when you have to update code on a pool of application servers that need a rolling update/restart cycle.

The more challenging part in this is to abstract away this part of the stack to a simple interface that the developers can use. You cannot and should not expect developers to debug problems in the deployment infrastructure.
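To make the rolling update/restart idea concrete, here is a sketch that builds the per-server command sequence. The hostnames and remote tooling (`deploy-tool`, the `app` service name) are placeholders, and a real setup would add health checks between servers before moving on:

```python
def rolling_deploy_commands(hosts, release):
    """Build the command sequence for a rolling deploy: each server is
    updated and restarted in turn, so the rest of the pool keeps serving."""
    commands = []
    for host in hosts:
        commands.append(["ssh", host, f"deploy-tool install {release}"])
        commands.append(["ssh", host, "sudo systemctl restart app"])
    return commands

# Hypothetical pool of application servers
APP_SERVERS = ["app1.example.com", "app2.example.com", "app3.example.com"]
plan = rolling_deploy_commands(APP_SERVERS, "v1.4.2")
```

Wrapping a plan like this behind a one-command interface is exactly the abstraction developers should get, so they never have to debug the deployment plumbing itself.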

Distribution: A local CDN or an international one: which is the right one to use? Should I use a CDN at all? Recently, a company I spoke to had a response time from their origin server that was one-fifth of what they were getting from their CDN. The set-up had been chosen to leverage cheaper CDN bandwidth and is a classic case of cost optimization in the wrong place.

Is CloudFront the right solution? Can my preferred CDN provider handle wildcard SSL termination at a reasonable cost? How costly is it to do a cache purge across all geographies? Is it even possible? Is it important to purge CDN caches? Is a purge important to avoid compliance hurdles for some obscure requirement in my market of choice?

Mobile-specific Parts: Native, cross-platform or HTML5? Do I need a mobile application at all? Which platforms should I target? What is the minimum OS level that I should support on each of those platforms? How do I align those decisions with the target audience I am going to address?

Outbound, non-consumer-facing Services: Should I expose any of my internal data with a developer-facing API? What should I use to expose that API? Do I build it on my own or do I use a hosted platform like Apigee? What sort of authentication should I use? What sort of identity management should I use? Should I even try to split identity and authentication into two different services?

Inbound, non-consumer-facing Services: What do I use to handle data that I fetch from other sources? How do I cache my requests to respect rate limits? What is a webhook? How do I go about implementing one?
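To answer those last two questions briefly: a webhook is just an HTTP callback. Another service POSTs a payload to a URL you expose whenever something happens on their side. The sketch below shows the receiving half, with the shared secret being a placeholder value: verify the sender's HMAC signature before trusting the payload, a pattern used by services such as GitHub and Stripe.

```python
import hashlib
import hmac
import json

# Shared secret agreed with the sender; placeholder value for illustration.
WEBHOOK_SECRET = b"change-me"

def verify_and_parse(body, signature_header):
    """Handle an incoming webhook request body.

    Recompute the HMAC-SHA256 signature of the raw bytes and compare it
    in constant time against the header the sender supplied. Only parse
    the JSON payload once the signature checks out.
    """
    expected = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature_header):
        raise ValueError("signature mismatch: reject the request")
    return json.loads(body)
```

The constant-time comparison matters: a naive `==` on signatures can leak timing information to an attacker probing your endpoint.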

Replication & Redundancy: What is the maximum acceptable downtime for my application? Is there a business case for a multi-DC deployment? How extensive does my disaster recovery plan have to be?

AWS, Rackspace or a good old dedicated rack in a datacenter? Should I use Glacier? What should I use for DNS management?

Analytics & Instrumentation: DAU, MAU, WAU — what all do I have to track? Are bounces more important than acquisition? Is acquisition more important than repeat transactions? How do I bucket and segment my users?
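DAU, WAU and MAU all reduce to the same computation: distinct users active in a trailing window of 1, 7 or 30 days. A minimal sketch, assuming a hypothetical event log of (user, date) pairs standing in for whatever your analytics pipeline actually produces:

```python
from datetime import date, timedelta

def active_users(events, as_of, window_days):
    """Count distinct users seen in the trailing window ending at `as_of`.

    `events` is an iterable of (user_id, date) pairs. DAU is a 1-day
    window, WAU a 7-day window and MAU a 30-day window over the same log.
    """
    start = as_of - timedelta(days=window_days - 1)
    # A set keeps each user counted once no matter how many events they fire.
    return len({user for user, day in events if start <= day <= as_of})
```

The interesting product questions start once you divide these: a DAU/MAU ratio is a crude but common proxy for how habitual your product is.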

How do I measure passive actions? Should I start tracking a minor version of an otherwise less-used browser because my JavaScript error tracking reports show that the current release is breaking critical parts for my most valuable demographic, who use that exact obscure browser?

Wait, I can track client-side JavaScript errors?

Conclusion

As you can see, the list raises more questions than it provides answers. This is intentional, as there is no one-size-fits-all answer to these questions. Even within specific company lifecycle segments (early stage, stable start-up, established company), the internal circumstances vary from company to company.

This list is more a starting point than a destination in itself. Use it to build a better framework that is suited to your organization and your product. And if you need more help, just ask!

Posted in Start-ups, Technology

Moto G Review

Having now spent close to a month using the Moto G, I can sum up the device in one word — fabulous. The device has not been officially launched in India. I was fortunate to have been gifted one by someone living in the US, where it cost the regular $199 retail. If they manage to price it under Rs 12,000 (and closer to Rs 10,000) in India, Motorola could have a winner on its hands.

What I like about the device:

  1. Battery life: I use location services during a major part of the day, which, previously, was a huge drain on Android devices. I am easily getting 24+ hours on a full charge and have never used the battery saver feature.
  2. Android Kitkat: The slight lag (more like a stutter than lag) that used to be there on even the high-end Android devices is gone. Even with 1GB RAM, the transitions are buttery smooth and comparable to iOS.
  3. Nearly-Stock Android: There are a couple of Moto-specific apps in there, but nothing that gets in your way. Rest is pure Android all the way.
  4. Price: You can pick up at least three of these babies for less than the price of a Samsung S4 or an iPhone 5C.
  5. Main camera is pretty decent. Just remember not to shoot with it in low light.

What I don’t like:

  1. No external SD card support. I don’t store much media or click a zillion pics, but I’m already down to 9 GB left on the device.
  2. 1 GB of RAM.
  3. There’s a bug (not sure whether it is software or hardware) that can cause call volume to drop after using a wired or bluetooth headset. Can be fixed with a reboot, but annoying all the same.
  4. USB port is at the bottom of the phone. Never liked that positioning. It is a personal preference, though.

While I really like the device, you do need to keep in mind that I don’t fit the profile of the average smartphone user for the following reasons:

  1. Limited apps usage: I don’t use Facebook, Twitter, G+ and most other social networking apps. I do use Whatsapp and BBM, but they don’t seem to eat up as much battery and processing as the first three.
  2. I don’t game at all on the device. There’s a chess app that I keep for the odd rainy day, but have not used it more than twice or thrice in the past year.
  3. I don’t watch much video on the device other than the odd YouTube clip.
  4. Reading has moved completely to a 7-inch Lava tablet.
  5. My data connection is permanently set to EDGE. I don’t use 3G.

Before I picked up the Moto G, I was using the Micromax A116, which was a pleasant experience. After using it for almost a year, I’d rooted it and switched to a ROM that had thrown away a lot of the unnecessary bits and made it nearly stock Android. Even that phone was giving me a good 24 hours of usage on a single charge. The reason why I wanted to try something else was that the build quality is extremely poor and I doubt it would be able to take another year or two of abuse. There are also little niggles like the problematic GPS lock, the lack of a compass and occasional issues with the filesystem.

The Moto is my first Google Android phone, which is a route that I have been looking to go down for a while now. The migration assistant provided by Motorola (works over Bluetooth) is quite good and I could switch devices (with data and apps) in a couple of hours. The device does only MTP, so it cannot be mounted as a regular volume on computers. Since I’m on Linux, I use gMTP, which can misbehave a bit at times. The fallback is Bluetooth, which is the disagreeable option when it comes to speed.

Overall, Kitkat seems to have improved how Android handles the idle state. This has resulted in better battery life, for me at least. There are rooting guides and ROMs available for the interested parties, as usual, on XDA, but I’m pretty happy with the way the device is right now. So I don’t see rooting and custom ROMs happening anytime soon. I like my devices to function flawlessly and stay out of the way and the G increasingly looks like a good candidate for that. I’m well past my weekly flashing phase on my phones and a lack of excitement is a welcome change on that front.

Posted in Android, Mobile

Hybrid Entertainment: Free-roam Network Games

I have been a great fan of the GTA series of games from Rockstar for a long time. The political incorrectness and violence in the game is a different topic that I won’t deal with in this post, as I will be focusing on other aspects of the game that could open up a whole new type of entertainment.

Gameplay screencasts are probably almost as old as YouTube itself, but they have always been constrained by the fact that they tend to be all about completing missions, with little exploration built into the game itself. The games are also pretty linear in nature and rely more on damage/health variables to determine difficulty than on a large number of changeable variables.

Free-roaming games are extremely complicated and costly affairs. It is one thing to design a game world that has 5% accessible spaces and it is another to bump it up to even 20%. The inaccessible parts are just images plastered on regular shapes that look reasonably OK from a distance.

Designing a highly-accessible game world is hard. You have to define behaviours for every object in the accessible areas, irrespective of whether a player will interact with those objects or not. Add other factors like weather and time of day to the mix and the complexity grows beyond what most production houses can afford or handle.

A Free-Roam Real World

What GTA V has done differently is to mimic the real world to a great extent. Between the nearly-unrestricted world, the game AI that controls the non-playing characters and other gamers on the network, the outcomes and possibilities are incredible. Which brings us to the most significant aspect: you cannot script much, and even the best-laid plans have little certainty about the outcome.

This aspect of GTA V is so incredible that the story mode (where you play offline and complete the story) is pretty tame and boring and you will finish it pretty easily. That, though, is just the gateway to the real deal, which is the online mode. It is a teeming world that is constantly evolving. The list of possibilities for Rockstar with GTA V is endless. There is already a fully functional stock market in the game, where you can trade with other players and you can affect the market’s movements with actions in the game world.

The Live Stream

Games have been live streamed for a while now. Most of the MMORPGs have been doing this for years now. But I have always found them to be tedious and often downright boring unless you have been part of those gaming communities for a long time. For an outsider, it is often hard to make out what is going on in the game and the game worlds themselves are not of the free-roam types, which restricts the ability of a player to do something totally out of the ordinary.

Compared to that, a GTA V live stream provides a level of entertainment that is incredible. Players often ask the audience or fellow players what needs to be done and plans are arbitrarily made and executed. Nobody really knows what is going to happen. Obviously, this requires an extremely well-done game in the first place as there has to be some degree of predictability in the system, but make it too predictable and it will become boring quite easily.

Since the online world in GTA V is constantly evolving, these episodes are highly entertaining even if you have no idea about the game at all. One of the most popular things to do in the game is to steal a fighter jet from the military airbase and escape with it. Even though it is attempted regularly, no two attempts look the same. Rockstar also makes changes that add to the unpredictability.

Why Is This Different?

Admittedly, a lot of the underlying themes are hardly new. Second Life has had many of these concepts — a free-roaming world, its own currency, property ownership — for years now. But it always felt like a lot of work (since the players have to build the worlds) and it did not feel fun at all.

The semi-structured world of GTA V online actually makes it really fun to watch these videos. From what I could make out, the viewership for the live stream itself is not all that great, mostly maxing out in the thousands, but the recordings seem to do really well. A search for long videos on the game throws up twenty videos, all of which have at least a million views, and some of them have cleared a year or two of viewing hours. That is a lot of hours and a lot of content that has been consumed.

In my opinion, we are on to something here. That said, it is also not free of problems. The language, violence and correctness issues aside, with that kind of consumption, it was only natural that game companies and other IP owners have started pushing YouTube to crack down on these videos. It may be true that this is a new form of entertainment that is taking root, but it is also taking root in channels that are not owned by game companies, cutting them out of the advertising revenue generated by these channels.

Also, should the game companies somehow figure out a way to make this happen in a PG-13 manner on platforms they own, the scope for experimentation is huge and the videos could become really big as a sort of reality television. They would not want anyone else to get a piece of that pie.

Posted in Gaming

On Media And Hypocrisy

This post is not going to be about the business or technology aspects that I normally tend to cover here, and it is quite a rant, so feel free to skip.

The past couple of days have been really hard, watching the terrible events at Tehelka unfold. Even though I worked for a very short period at Tehelka during the really early days, I can’t claim to personally know either Tarun or Shoma, and I have no clue about the identity of the victim either. The industry, though, is one where I have worked for a good decade and more, and my relationship with it has been a troubled one. I still have very good friends in the industry and I do consulting work on and off in it.

Sadly, the incident and how everyone is reacting to it sums up my major problem with Indian media. We behave like entitled holier-than-thou cretins in the best of times and in instances like these it gets worse. Even senior journalists shed any modicum of responsibility they have towards bringing out the facts and everyone essentially turns into armchair inquisitors. Combine this with the readily-available lynch mobs that form only too easily on social media and reasonableness is nowhere to be seen.

Before I continue, I’d like to make one thing clear. What seems to have happened (‘seems’ because we need a proper investigation to look into it, and I fear what comes out will be worse than what we know right now) is terrible. While I do enjoy the odd Tehelka story forwarded by friends, I am not a regular reader of the magazine, and even though I am aware of its thinly-disguised bias or agenda, I find it has a significant role to play in providing a counterbalance to the fringe at the other end of the spectrum.

That said, it is also sad to see an organization slowly being bled to death like this. I have no idea why Shoma is doing what she is doing, but whatever she is doing is destroying the organization one statement at a time. It may well be the case that she is trying to save a dear friend or the company from a potentially devastating legal scenario, but saving it like this won’t leave an ounce of credibility left on the table when it is all over and done with. For the subjects that the magazine tends to cover, credibility is everything.

When you ask for exceptional things from the people you cover (honesty, courage, moral standards, fairness), it is only natural that people would expect at least a similar standard when the story is you. The internal emails that addressed the incident were so Clintonian in nature that I almost expected a diatribe on the meaning of “is” somewhere along the way. Not only was the choice of words extremely poor, but it also displayed a sense of denial about the gravity of the accusation.

Even so, the victim has every right to choose the manner she feels is the right way ahead. Unless you have been in a similar situation, which (fortunately) I have not been in, you cannot imagine even the smallest thing about what she is going through. So, pretending to understand and know what is the right thing to do for her is nonsensical. Similarly, it is for the law to determine what course it should take and pursue matters to the logical conclusion; it does not matter whether Tarun has recused himself or not.

Which brings us back to my main problem with the industry — which is that we are a bunch of self-righteous hypocrites. It is not uncommon for a senior journalist to be on a prime time show criticizing the hell out of a celebrity caught in a DUI/hit-and-run case and then go straight to the Press Club for more than a drink or two, often followed by driving home drunk. Should a cop pull you over in such a state, the ‘press’ privilege is flashed and you go scot-free.

I can bet that pretty much every single senior journalist raging at Tehelka has, at some stage of their career, known about some instance or the other of harassment or abuse in their organization that was hushed up. Media organizations, especially the news desks, are high stress, hostile environments to work in, especially for women. If we exercise the same degree of fairness and action that we are clamoring for from a Shoma, in organizations that we work in, a lot of these problems would not exist in the first place.

The fact is that most of us don’t, and when it comes to our own responsibility, all kinds of excuses start showing up.

It is hypocritical to be shocked by how Tehelka is handling this, as this is how almost every media organization handles incidents of a similar nature. This is certainly not the first case of “drunken banter” the industry has seen. I am more shocked by how everyone is pretending that it is. It would be enlightening to assess the level of support women get on the issue of sexual harassment in media organizations in India. I will not be surprised if the results are shocking. Yet, every journalist out there is pretending that Tehelka is somehow a unique story.

It is not.

The sad reality is that this incident is yet another instance of everyone washing their hands of the problem. When the December 16 gang rape happened, the undertone was of a problem precipitated by poor, unwashed migrants who can do nothing about their raging hormones. For the educated, cultured and privileged, such problems are always nicely compartmentalized away. It is something that happens to “those people”.

The sad reality is that in almost every family there is an uncle who is fond of pinching/fondling young kids a bit too much. The number of friends I know who have been sexually abused as kids is just way too high. These stories are all from so-called cultured, well-to-do, educated families where the solution is to hush things up. The unwashed get the blame because there is nobody influential enough in their lives to ensure that there is no coverage, but they are by no means the only ones who are raping, molesting and harassing both the young and the old.

That said, lynch mobs or not, I consider it a good thing that these horrific stories are starting to come out. The first step towards solving any problem is to acknowledge that we have a problem in the first place. On that front, at least the well-to-do are in so much denial that only the shocking truth in more such revelations will show us how rampant abuse of every kind is in our society. But it will get a whole lot worse first, when we face up to our true selves, before it gets any better.

Posted in Misc

Quitters, Speculators And Spectators

If you, like me, consume news and information mostly fed by the usual tech/digital sources, odds are that you would not have missed the outrage du jour — Google’s cumbersome attempts at streamlining the identities of their user base across their product lines. The right or wrong of it aside, I think we need something new to get worked up about every couple of weeks. Consequently, the highly-connected community goes through phases of quitting Facebook (because of privacy and UI/UX concerns), Twitter (because of how they treat the developer ecosystem) and Google (too big a list to be listed here).

At the end of each of these episodes, a bunch of geeks will go and attempt to build products/platforms that aim to provide a viable alternative. A handful will actually quit and a fraction of those will write blog posts about the whole experience, while most users stick on, as the benefits of using these products outweigh the negatives. In a sense, quitting digital products is like weight loss these days, only that the former is at best a niche hobby, while the latter is a multi-billion dollar industry.

Through all of this, normal users (including my parents) seem to not care much. More of them are now exposed to the same products and engage with them without the worries that seem to affect us to no end. The number of people who agree with Zuck’s assertions on privacy, or with Google’s assumptions on why real names and a singular identity should be enforced, is much bigger than the vocal minority that objects. But, unfortunately, our proclivity for pushing the ‘one right way’ to use a product or a platform blinds us to all of this.

That said, I deleted my personal Facebook account in 2011. I have a friendless, hidden account that work compels me to keep (same as a Google+ account I have) on the side to admin pages, and I don’t miss it much. My concerns were really not related to privacy. I’d be very surprised if there is any piece of information the state cannot get its hands on, should it want to. On that front, technology has always provided only reasonable safeguards, not absolute ones. Privacy is a social expectation that technology can help deliver. Most of us seem to not comprehend that minor, yet significant, distinction.

Anyhow, coming back to Facebook, I quit it because it was absolutely the best time sink I could ever find. I would waste hours mindlessly clicking through pictures and profiles and, even after trying hard, it was impossible for me to use it in a productive manner. The problem was with me and not with Facebook. It was as simple as that. Which, once again, brings us back to the same point, that technology and platforms are only amplifiers and enablers. They cannot provide motive by themselves. We, the people, provide that motive. So, instead of trying to fix technology, a lot of problems in the world can be solved if we try to fix our own (often) not-so-good motives.

Speculators

Speaking of motive, nobody seems to agree anymore on why Bitcoin moves in any direction at all. It is, frankly, amusing to see the volley of “it is so dead” stories pop up every time it drops in value and the corresponding “second gold” stories pop up every time it goes up in value. For me, Bitcoin is just a different form of derivative. In fact, it is exactly what a derivative would look like if it did not have its origins in the financial industry. But that does not make it any less a derivative (which is mostly glorified legitimacy-clad speculation).

In a world that is increasingly depending on speculation as the key driver of growth (a $4 billion valuation for a 3-year-old company should be enough proof of that), Bitcoin is a natural fit. The greatest attraction is that there is no regulatory authority for it. But, as with any fringe phenomenon that goes mainstream, workarounds are already being put into place. We have started to see legitimate speculators move into the domain, and it won’t be long (especially if it sustains its current levels of volatility) before cartels form around market movers. And we’ll be back to square one then.

Spectators

It has now been nearly four months since I decided to quit Twitter for a week. The reason why I quit, I’ll save for another post. It really has nothing to do with platform issues, tech or time-wasting. I do swing by regularly, read a bit of my timeline and go away. On most days it is a painful experience. The amount of snark and vitriol on display is amazing; so is the lack of consideration towards both individuals and organizations. It is almost like we are constantly on the lookout for a mistake or an error that we can put on display as someone’s stupidity.

Once I accomplish what I need to, I will re-engage on Twitter, but this part of it troubles me a lot. Yes, it is wrong to say that everything is negative; there is a decent share of positive, which is what brings me back regularly to read the timeline. But I honestly believe that most of us are far better people than we allow ourselves to be seen as.

Posted in Misc, Social
