A Quick Micromax A116 Review, Some Things Android, Is DEN Going DTH?

On Android

In his post, Android Is Fading Into The Background—And That’s A Good Thing, Dan Rowinski says that Android has become mature enough to slowly fade into the background, letting the devices themselves take the limelight rather than the limitations of the operating system. Mobile operating systems and platforms attract some of the most opinionated reporting and fan bases on the internet, so a claim like Rowinski’s always trips my sensationalist filter and this one was no different.

The extremely polarized opinion on the mobile front always makes me wary of getting into any discussion regarding the domain. Every day, a large chunk of the world’s population uses handheld devices of varying sizes, shapes, brands and prices, in spite of whatever positives or limitations we may find in the platforms out there. The reality is that the data-on-by-default mode in handheld computing is a revolution that has only just begun, and only the foolish would claim any sort of victory — be it for iOS or for Android.

Coming back to how much Android has matured, I find it hardly surprising. An ecosystem that tends to a large variety of devices (unlike iOS) will always evolve more slowly and in a clumsier manner. If you look at how Android has developed since the days of the G1, it was always a matter of ‘when’ rather than ‘if’ Android would eventually mature and stabilize. And when it reaches that stage, the impact will be far wider and deeper than what Apple accomplished in opening the floodgates for the ‘smarter’ devices. We are starting to see that ‘when’ materialize now.

For iOS, as Android matures, its own consistency and polish, which acted as a fair trade-off for its major downsides (lack of intents, proper file system access), will increasingly lose the initiative. It is one thing to set the agenda when you are practically the only game in town and a totally different thing to both defend and attack when other big players have arrived. Digital companies, products and platforms have limited windows to react and adapt in the 21st century. If you remain largely unchanged over anything more than a two-year window, you are going to be in a lot of trouble.

Micromax A116

Those thoughts about Android bring us to the Micromax A116, otherwise known in the market as the Canvas HD. I happened to pick up this phone because my trusted road warrior — the Samsung Galaxy S — finally decided it had had enough abuse at my hands, letting its screen crack badly when I dropped it, as I seem to unfailingly do every few days. The other option was to go the import route and get a Google Nexus 4, but the warranty situation on that made me pick this one as the temporary choice while I wait for Google to officially release the device in India.

I already own a low-end Micromax Android phone (the A72) as my backup device, so I’m not a complete stranger to the mysterious ways of Micromax. That phone ran a heavily modified version of Android (Gingerbread) and was never updated. While a lot of the changes were quite nice (compared to the travesty that is called TouchWiz on Samsung), it was still not very desirable compared to the stock Android experience. With that in mind, I expected the A116 to be not much different, but I was pleasantly surprised by the outcome.

First of all, the phone is incredibly fast. It has a quad-core processor and 1 GB of RAM, but we already knew that about the phone. What really surprised me was: 1) The OS. It is quite close to stock Android. Yes, there is a bit of modification here and there, but it is nothing compared to what it was on the A72, and coming from an AOSP build of Jelly Bean on the SGS, I felt instantly at home. 2) The battery life is spectacular. Even with both SIMs on (one is a 3G SIM) and GPS on all the time, it easily lasted a day for me on a single charge. The downside is that the battery also takes a long time to reach a full charge, but I can live with that a lot better than with a battery that charges and drains quickly.

The significant downer is the camera. It is 8MP, as advertised, but the sensor is pretty mediocre and it produces mediocre photos. Thankfully, I don’t click a lot with my phone these days, so it is not a big problem for me.

Which brings us to the interesting angle on the price point of Android devices. When I picked up the A72, it cost me a good Rs 7,000. Phones with similar specs used to cost close to Rs 14,000 a year before that. The A72 now retails close to Rs 5,000. The A116 cost me less than Rs 15,000, while most quad-core phones with that much RAM still cost well in excess of Rs 25,000 even now. The point being, the quality of the Android experience is improving systematically across the price range, especially towards the lower end.

In two or three years, we won’t be able to make much of a distinction between smartphones and feature phones, as feature phones will be lower-spec smartphones rather than the dumb phones we are used to these days.

DEN Networks

Why would a cable network company need over Rs 870 crore in investments in one go? That’s the question that comes to mind if you look at the $110 million raised from Goldman Sachs and the $50 million raised through a QIP by Den Networks Ltd. The obvious answer would seem to be that the company wants to get into the DTH game. The DTH license is not cheap by any means, and transponder costs, the cost of the technology platform and the expense of setting up a proper sales/distribution/service network all add up to a lot of money. From that perspective, the money makes sense, especially if the company is still structured in a manner that keeps the Indian promoters above the 51% holding limit.

From the business side, DTH makes more sense than cable. Negotiating right of way in each state in India is one of the most horrible things you can get yourself into. Every big cable network has a local fixer who exists to handle this part alone, and god help you if you don’t have the blessings of the local powers-that-be. Moreover, in rural India, DTH is the undisputed king. In 2012, at a place called Pang (on the Manali – Leh highway), which had little power or life beyond the camps for tourists and the army TCP, we still found a small shop with a television set hooked up to a DTH setup. Rural India is now dotted with the little mushroom-like dishes, and while mobile has been our loud revolution, DTH has been the quiet one.

It is no wonder that DEN would want a bite of that, especially as the competition is loaded with debt and still bleeding money. Of the others, both Tata Sky and Videocon D2H are said to be looking at an IPO in India this year. That, along with the government’s push towards digitization, means that DEN, as an already listed entity, stands to reap a good harvest both in the DTH marketplace and in the markets.

 


Why Do Start-ups Need Investment?

Continuing from the previous post on market opportunities for start-ups, this post will focus on the funding aspect. There are various schools of thought in the start-up world when it comes to funding: some believe that investment only ruins companies, while others think of it as an enabler and a necessary evil. The key to understanding your need for funding is often found in the opportunity you’re trying to target, so let us figure it out from that point of view.

Before I get going on the four opportunities, the one thing I’ll be very clear about is that there is no golden rule to any of this. Business and funding environments change regularly, and every major player in the ecosystem (companies, funds, public markets) responds to changes in the larger economic climate. So, if anyone shows you the rulebook, pointing out the One True Way™ to grow your company (with or without raising money), feel free to throw that book away.

Why Do You Need Investment?

To De-risk: You can always build flying cars using your own money, if you have enough in the first place, or you can attempt to get that money from someone who is better positioned to absorb the losses should flying cars fail.

To Build: You have to put together the first version of the flying car. This involves buying tools, fabricating parts and a thousand other things. Sometimes you may not have the money yourself to do this without financial help from the outside.

To Validate: A car, once built, has to find buyers in the market. Even the best-built car, kept as a secret in your garage, won’t sell. If it won’t sell, you don’t have a business.

To Grow: The cars are selling well, but you can’t meet the rising demand with your existing infrastructure. You also want to expand into a different geography because you’ve saturated the market for flying cars in your current geography. This involves hopping up through the stages I had mentioned in a previous post.

To Diversify/Consolidate: The flying cars are flying off the shelves, you’re thinking big now and you have hit the limits of efficiency with the current setup. Growth has to come from elsewhere, and M&A becomes a viable option within and outside the car industry.

You also realize that you are spending a lot of money on marketing and discounting prices (thus adversely affecting the margins) to compete with New Flying Car Inc. A quick look at the balance sheets says that, as a combined entity, you can be more efficient and improve the margins by an extent that justifies the risks involved in the merger.

The case for not needing investment!

You can build companies through all four stages listed above without taking on any investment. There are a lot of profitable businesses that were built and continue to run successfully in this manner. These vary from companies of a substantial size to mom-and-pop stores.

But these are also companies that tend to grow slowly, and the odds are you won’t find the much sought-after hockey stick growth in companies that did not take investment. There are exceptions to this, but the norm is that to get into the hyper-growth stage, companies need some form of force multiplier in place, and the most obvious and organized one available is capital.

Risks in taking investment & hyper growth

Investors rarely put money into companies for charitable or altruistic reasons. It is important to understand that they’re also running a type of business that has a substantially high rate of failure. Funds are often raised for a 5-10 year window and, to be successful, they need to handsomely beat every other investment class out there.

Trouble is that organic growth companies rarely get massive in a 5-10 year window. This is why the hockey stick growth curve is much sought-after by investors.

The risk with hockey stick growth, though, is that it compresses a lot of events and factors into a very short window. This is similar to a human being going from infant to full-grown adult in an extremely short period of time. Even among companies, you have both Macaulay Culkins and Dakota Fannings.


On Market Opportunities For Start-ups

‘Disruptive’ is a much-abused word in the start-up world, and it is a flawed measure for determining whether an opportunity is worth chasing. Disruption is an outcome and not a starting point; thus it is best left to glowing testimonials in history books rather than business plans.

A better way to ascertain a product’s viability, longer-term capital requirements and other key metrics in a business plan is to look at things through the prism of market opportunity. Opportunities can be of the following types:

Greenfield Opportunities: These are products and services that break new ground, building and doing things that have never been done before at any reasonable scale. Example: A car that flies.

Innovation Opportunities: These are products and services that take an existing product or service and approach it from an innovative new angle. Example: A car that costs the same as a regular car, but runs on water.

Execution Opportunities: These are products and services that don’t do anything new, but do what is already being done in a much better manner. Example: A car that does nothing really new, but is well put-together and everything about it feels just right.

Pricing Opportunities: These are products and services that are offered at a price point less than what the customer is used to paying for a similar service. Example: A car that costs half the price of a similarly specified model in the market.

Any new start-up or product has to be very clear about which of the four opportunities it addresses before it hits the market. It also makes things easier for investors to understand your product if you are clear about the opportunity you are after.


What Am I Building?

When 2013 rolled into view, I had already completed four years of working on my own. In shifting to a line of work that is more research- and strategy-oriented, I figured out that there was tremendous duplication of work and numerous context switches in collecting, organizing and leveraging information.

By then I had tried various approaches — using a variety of tools — to address this problem, but each attempt at it only frustrated me more. To explain the problem, think of your brain as a machine with limited volatile memory and processing power. All the tools only act as physical storage. The pitfalls are rather obvious with this approach.

What I’m building is a framework that approaches this problem from a different angle. What is the approach? I will write more on it as I build more of it. As of now, it is just a set of tacky-looking pages and interfaces for entering and managing data. The code has already grown into a few thousand lines, and I have only started to scratch the surface with it.

It is fascinating to build something for your own consumption. Most of my development work before this has focussed on getting things built for clients, and building something for myself feels so different. The key thing to watch out for is not to take any shortcuts and to build the system properly. The amount of technical debt that can be acquired at this stage is tremendous.

Focus

In a build of this kind, where the end result can often be a moving (almost unattainable) target, the ability to focus is key. The good part about the various tools for building things for the web is that there are endless options available to get the same thing done. If you don’t keep simple, bite-sized goals and validate them regularly, you can easily lose your way and give up.

Adaptability

When building against a moving target, assumptions, algorithms, logic and outcomes will change. If you don’t validate quickly and adapt to the changes the results deem necessary, the product will become less and less useful over time. At every stage, what the product does has to match the desired outcome to a great degree.

Dual Vision

In the early stages it is very hard to see how the gap between what-is and what-it-has-to-be can be bridged. There will be days when you’ll crank out a complicated feature in a better-than-expected manner in the first go. There will be days when a small simple bug will keep everything held up for a day or days.

Building a product on your own can be both gut-wrenching and unbelievably exhilarating at different times. The key thing is to quickly overcome setbacks and triumphs and keep the longer term goal clearly in mind.


Data On The Move: Lava W150 + Tata Indicom 32GB Plan

If you move around a lot, being able to access data on your phone alone won’t cut it for long. There is the option of tethering your phone for that, but it eats up your phone battery pretty quickly if you are dependent on the phone for things other than using it as an access point.

After trying out various approaches — dongles, tethering — I have figured out that the optimal solution is to carry one of those pocket wifi routers. They cost little, tend to be more stable than tethering your phone, and save you the trouble of installing the drivers and horrible dialer software that data dongles usually require.

I had picked up the Lava W150 in November 2012 along with a Tata Indicom (Docomo for the rest of India) dongle. The device is Huawei-made and branded as Lava (as is the case with most cheap Indian phone devices these days) and runs embedded Linux.

The web admin UI is powered by the GoAhead web server and provides a fair set of advanced options. It is not the most user-friendly experience you can have, but it does its job quite well, even if it has a bad habit of restarting everything for major configuration changes.

The device is only one part of the data-on-the-move equation; the other (and more important) part is finding a data plan that won’t ruin you. I have a preference for pre-paid plans here, as my usage is erratic and I don’t want to pay a fixed high amount for capacity that I’ll rarely use.

The golden rule with pre-paid data plan pricing is that you have to hit the road and find out from the vendors what the best available plan is. The ones companies advertise online are often not the best ones out there; I went looking for a 30 GB for Rs 5,000 plan and found one that gave me 32 GB instead.

The other issue with picking a provider is knowing your travel pattern well. The overall coverage and quality of coverage differ from state to state and provider to provider. My strategy is to use Airtel on the phone (a 2G plan with a quota of 2 GB of transfer every month at Rs 149), Indicom on the pocket router, and a backup on the Micromax A72 with a 1.1 GB 3G plan on MTNL.

It has been a good experience overall; with controlled usage, I have finished only 8 GB of the 32 GB that I am allowed. The good thing about the Indicom plan is that it is valid for a year, so I can probably use it all year at my current burn rate.
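A quick back-of-the-envelope check of that burn rate (a minimal sketch; the 32 GB quota and 8 GB used are the figures from this post, while the roughly three months elapsed since the November 2012 purchase is my assumption):

```python
# Rough burn-rate projection for the Tata Indicom 32 GB annual plan.
# Figures from the post: 32 GB quota, ~8 GB used so far.
# Assumption: roughly 3 months of usage since November 2012.
quota_gb = 32
used_gb = 8
months_elapsed = 3        # assumed
validity_months = 12

monthly_burn = used_gb / months_elapsed         # ~2.67 GB per month
projected_gb = monthly_burn * validity_months   # projected usage over the year

print(f"{monthly_burn:.2f} GB/month, ~{projected_gb:.0f} GB over the year")
```

At that rate the quota and the one-year validity run out together, which is what makes the plan such a comfortable fit.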


The Next Big Mobile Wave

The popular history of the evolution of mobile phones is something along the lines of pre-iPhone and post-iPhone, which is, admittedly, quite a convenient way to look at things. The actual history, though, is a far more nuanced (or complicated, should you prefer that) affair. The evolution of mobile phones has gone through various phases: full QWERTY keyboards, colour screens, touch screens, WAP browsers, the ability to record and handle videos — the list is endless. Reducing that to a pre- and post-iPhone world does a lot of injustice to pretty much everyone but Apple.

While Apple deservedly gets a lot of credit for changing our idea of what a smartphone is and how we interact with it, what they don’t get enough credit for is how they also changed the way we think about using data on a smartphone. If you used a smartphone in the pre-iPhone era, the one thing that stood out was that packet data was a second-class citizen on the phone. The devices were phones first and data access devices second, or later. More than the iOS interface or the physical experience of using an iPhone, what is seldom spoken about is this drastic change Apple brought to the market: the iPhone was a data access device first, while the phone functionality was secondary.

Always On Data

Data being always-on was a game changer. Phones prior to that would ask you which connection you wanted to use to access packet data, and whether they should ask you again should the need arise. Taking data connectivity for granted has changed the way we use these devices. More than faster processors or wider and longer screens, always-on data is the critical path that has led to the current state of affairs in the mobile domain. Pretty much all of our interactions on a smartphone now take it for granted that the device can access packet data. If it were not for that, we’d still be largely relying on text messaging and closed access methods like BlackBerry Internet Service.

Momentum, Implications

If you look at the winners and losers in the smartphone game, you will see a clear pattern. The players, like iOS and Android, that adapted quickly to the always-on-data paradigm have moved rapidly ahead of the competition. The ones that failed to adapt as quickly, like Nokia, Palm and RIM, have struggled and continue to struggle. Rapidly growing and evolving markets like smartphones place a premium on momentum, and you’ll always find momentum on the winner’s side. Without it, the best of platforms will struggle. And smartphones, being one of those rare objects that can potentially belong to every human being, are a ruthless market where you cannot blink for even a second.

Cycling is a brilliant analogy in this case, especially some of the stages of the Tour de France. The best riders always look to stay at the front of the main pack — the peloton — at all times. This is due to two factors. 1) Momentum: the riders at the front carry a much better average momentum through any stage than the rest. 2) Safety: in the case of unforeseen eventualities like crashes and crazy headwinds, being at the front gives the riders a better chance of working around problems. Always-on data was a headwind the industry did not foresee.

The Next Big Wave

Changes in these domains can easily make or break companies, depending on whether they ride or miss the important waves. Even established big companies can die or go through near-death experiences if they can’t ride these waves quickly. If the last big wave was the switch from seldom-on data to always-on data (the one that made Apple), the next big wave in mobile could be anything from a multitude of devices running the same OS to devices that are embedded within or on us rather than being actual handhelds.

Crystal ball gazing, though, is not an easy task here, as products in the domain are not often ruled by simple value choices, either for the consumer or for the companies involved in the game. There is considerable regulatory interference that stands in the way of services, and there is considerable commoditization at the hardware end. For the skeptics, this is why Apple is very touchy about keeping a cash hoard whose size confounds everyone. You don’t take anything in this market for granted and ease off; Nokia is a classic example of that.

 


New Gmail Ad Unit

This morning, I noticed a new ad unit within Gmail. The usual practice for Google, and almost everyone else, is to make the click open a new window and show the target URL in that. In this case, the ad unit opens a first level of content within Gmail, just like an email, and then there is a further click-through that takes the user to the final page.

Gmail now has four ad units on the mail content page, and Google has been fairly innovative with the formats. One of the units — the top one — starts as a band on top, and when you scroll down it collapses into a smaller unit on the right panel, above the usual tower of four link units.

As is always possible, Google could just be bucket-testing this unit.


Grokking Growth: Part I

The topic is a vast one in itself, so I’ll address only a small part of it, and that too from a business-to-consumer perspective.

Growth is one of the key drivers of a business, and it is not as uni-dimensional as it is often made out to be. For example, you keep hearing about Facebook hitting a billion users, and then some more after that, and so on. Does it mean that once a vast majority of the world’s population is on Facebook, the company will stop growing? Obviously not. Market saturation is a reality for every business; it marks an inflection point that often leads to diversification or a change in strategy. Smart companies pre-empt this, change course and pursue another kind of growth, while the not-so-smart ones stagnate and expose themselves to significant risk of disruption.

 

Chasing growth, though, is quite simple for most companies (keeping aside M&A options, which don’t happen for most). You can either:

a) Keep getting more users (usually known as the hyper-growth stage), or b) Get more from the same users (happens after hyper growth, when market saturation has kicked in).

It is harder for companies to segment growth strategies to address both (a) and (b), as you need to deploy 2x of everything (strategy, resources, measurement) to make the hybrid approach happen, while keeping even a single strategy going is tough enough for most companies. Companies that accomplish it, though, tend to be significantly agile. A good example of a company that didn’t manage a dual strategy would be Nokia.

This dual approach can be applied to specific components of a company’s operations:

Revenue: How do you increase revenue by getting more users? How do you get more revenue from existing users?

Profitability: How do you increase profits by acquiring new users (factoring in customer acquisition cost)? How do you increase profits from existing users?

Putting a growth strategy in place also requires a good understanding of which stage your company is at.

Stage-I: Companies that have low to moderate turnover and a revenue growth rate that at least outpaces inflation (and ideally outpaces other obvious investment segments). The healthy ones tend to be debt- and financing-free and privately held. They also have low risk appetite and low profit margins.

Stage-II: Companies that have medium turnover and aim for explosive revenue through hyper growth. These companies carry extremely high risk and limited runways to make the strategy work. They usually involve significant external investment and/or debt and tend to be privately held. They have a high risk appetite, and profit margins are nearly non-existent. Companies in this stage either die or make it to Stage-III.

Stage-III: Companies that have massive turnovers. Their revenue growth rate is low but predictable, and they use debt as a routine path to fund growth. These tend to be public companies with a good ability to absorb risk. Their profit margins tend to be steady, and companies like this die a slow, agonizing death when they do die.

S-I companies typically grow organically. S-II companies typically grow by spending vast amounts on customer acquisition. S-III companies accelerate growth usually through M&A or by diversification.

I’ll examine each theme in detail in a later post.


Facebook Graph Search: Nobody’s Killing Anyone Yet

The launch of Facebook’s Graph Search has led to the expected feeding frenzy in the media. The product had long been anticipated, but what was delivered differed from the expectations. The hope was that the company would launch a product to take Google head-on; what was delivered instead was a query parser that uses a restricted dialect close to natural language, along with considerably improved result display and filtering options.

The buildup to this event was visible in the company’s stock price over the past week, with it rising to its highest in a while. Going by the reaction to the launch, the product has not been received well: the stock is down, though still holding the $30 mark it had risen to. While the stock market is hardly a good indicator of the health of a company (ask Apple), it would seem that all is not well.

The fundamental problem with the Facebook v/s Google narrative is that, till Facebook starts crawling the open web, it does not represent a threat to Google as far as search goes. The same holds true in the opposite direction: till Google explicitly starts building a social network to pull people off Facebook, it does not represent a threat to Facebook. In short, Facebook Graph Search is not search as you know it, and Google+ is not the social network you use in Facebook.

Most of the divergence in the search approaches of Facebook and Google boils down to two things:

1. Intent & Context: Most of the content posted on Facebook is posted with an active intent to be consumed within Facebook. This context is vastly different from content on the open web where the context is determined by Google using their secret sauce.

2. Universe: In Facebook, the universe of data is what is created and shared within the network of users. If something is not shared or liked by someone on the network, it does not exist in Facebook. In Google, the universe is every page that can be crawled out there.

For both companies, battling the other is not the most significant challenge they face. Google needs a framework that will, over time, reduce its dependence on open crawling (pull) and move in a direction where content publishers intentionally push data into its index. This has the additional benefit of allowing it to fend off lawsuits regarding sourcing (crawling) and preferential display (using Google+ pages for local data).

For Facebook, user retention and overcoming Facebook fatigue is the big challenge. It can build many wonderful things on mobile and elsewhere, but it will all come to naught if a good chunk of its users start finding it no longer fun to be on the service or active on it. The company has a long way to go to de-risk the core part of its business.

Coming back to the product specifics, it will be interesting to see user reactions once the feature is rolled out across the entire user base. Natural language querying has been an interesting niche for a while. It was once considered a panacea for all search ailments, but we discovered that query parsing is only one half of a brilliant search experience; the other — and most critical — part is result quality and relevance.

On Facebook, quality is going to be awesome where results are available, but I’ll wait and see how availability works out across a large spectrum of queries. The trouble with socially networked data is that the results I see may not be the results you get to see. Having designed and run a private network for a while, I can tell you that this is a pretty significant challenge that few understand clearly.


India Telco Scenario: Re-living 2005 in 2013

Looks like Airtel’s move to hike the price of its 1 GB 2G plan by 25% will soon be aped by the other operators. And thus continues the decimation of mobile data (and, to a lesser extent, voice) in India, thanks to policies put in place by various governments of various formations. We are now in the unique situation where 3G is too expensive to be used in any reasonable quantity by the masses, so we are marketing 2G in 2013, which should have been the case in 2005. What a mess.

Due to the crazy amounts of money spent on 3G licenses, the telcos have no choice but to eke out every possible paisa from subscribers by hook or by crook. Even at the really expensive prices for 3G data, the telcos won’t recover the money they plonked into the licenses, so reducing prices is no longer a feasible option for them. Many attempt it anyway by putting in place new plans with minuscule transfer limits, after which the subscriber is billed at a per-KB or per-10-KB rate, which can easily throw up bills that run into the thousands.
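To see how per-KB fallback billing throws up bills in the thousands, here is a rough sketch; the Rs 0.10 per 10 KB rate is an assumption for illustration, since actual rates varied by operator and plan:

```python
# Back-of-the-envelope: what 1 GB costs when billed at a per-10-KB rate.
# Assumed rate for illustration: Rs 0.10 per 10 KB (actual rates varied).
rate_per_unit = 0.10       # Rs per 10 KB billing unit (assumed)
kb_per_gb = 1024 * 1024    # 1 GB expressed in KB
units = kb_per_gb / 10     # number of 10 KB billing units in 1 GB

cost = units * rate_per_unit
print(f"Rs {cost:,.0f} for a single GB")  # well over Rs 10,000
```

Even a fraction of a GB consumed on such a fallback connection is enough to produce the kind of bill shock described above.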

The net result of such adventures is that subscribers get sucked into the trap one large group at a time. Such a scalding puts subscribers off these services, while the telco balance sheets brighten up a bit for as long as they can keep finding more subscribers to die on the 3G data sword. The data billing processes and plans are so convoluted now that we are seeing a new level of innovation in both pricing and products (even though many may alternatively call it dishonest business practice).

Some of the innovations:

  1. Capped high-speed 3G connections with an ‘unlimited’ slower connection after the FUP, which, for some reason, also puts a billing limit in place.
  2. Cheap volume plans with extremely low validity. Basically, you can’t use all of the data you have paid for within the given time. You’re actually paying more for less, even though it looks otherwise.
  3. 3G connections that bill fallback 2G connections on a per-KB or per-10-KB basis. There’s no way to track this during normal usage, as the 3G tower footprint is dicey even within metros.
  4. Airtel also sells 2G plans that reduce in transfer speed after the plan’s limit is reached, with ambiguous terms on how ‘unlimited’ is defined after that.
  5. It is really hard to use even 1 GB of data in a month on a 2G connection. Most customers I know underutilize that data allocation every month. They were already paying more for using less; now they’ll pay even more.

What is even more troubling is that 2G cannot support bulk usage due to limited spectrum; you can already experience this in places with a high concentration of people using mobile data over 2G under the same towers. This is one of those spectacular cases that has failure built into it as a fact.

It is only the government that is capable of altering this terrible state of affairs, but since it is the party that came out smelling the sweetest of all the parties involved (the other two being the telcos and the customers), it would be foolhardy to expect it to alter course on this front. Which is a real pity, as affordable 3G data had the potential to transform our internet penetration scenario. That said, we are in good company in the 3G mess: over in China, the story is no different, with 3G eroding margins for operators, thanks to lousy government policy.

The story for me as a consumer is different. I spent the last three months of 2012 streamlining my connectivity scenario. I was spending an average of Rs 4,000 per month on data and voice till then, and a bit of moving things around has almost halved that amount. I switched to a Rs 149 per month 2G plan on Airtel (2 GB transfer and ‘unlimited’ slow transfer after the cap), changed my voice plan to one with a higher upfront payment but a lot of free minutes and messages, picked up a Tata Photon data card with a Rs 5,000 for 32 GB transfer plan (valid for a year), and stuck to a Rs 1,200 plan for home broadband.

