Category: Technology

Building A Digital Product: Part I

There used to be a time when building and launching a digital product was a straightforward affair. The steps were something like this:

  1. Start with a rough idea of what you were looking for.
  2. Find someone who could design and build it for you.
  3. Find people who would help you run it and go live.
  4. Find ways to market the product.
  5. Find ways to sell the product.

Other than the really big players, most regular Joes would handle most of these steps on their own or, in some extreme cases, all of them.
In the last five to eight years, those steps have been shredded to bits, thrown into the dustbin and replaced with a set of steps that bear no resemblance to the earlier ones.
Most of this disruption can squarely be blamed on mobile computing. The revolution that started with telephonic devices being able to access bits of textual data (read SMS) was turned on its head when those same devices were transformed into data devices for which telephony is just one of many functions.
The other significant development that has caused the playbook to be thrown out is the commoditization of many of the moving parts used to build a digital product. Every part of the technical stack, from databases to job queues to logging, now has an array of plug-and-play software service platforms that a new product can leverage from day one.
In the early days, teams had to develop everything — from email subscription systems, to delivery systems, to logging infrastructure — to get going. With all these new services, product builders are less builders and more integrators these days.
While this has created an entirely new universe of opportunities and possibilities, it is also responsible for creating a lot of confusion for companies and individuals looking to build products.
What this series will attempt to do, in this first part, is to bring some structure to the steps and elaborate on them a bit, with an aim to reducing the amount of confusion in the market.
I have no illusions that this will be a definitive list, as there are parts of the stack and ecosystem I am completely unaware of. My idea is to fill in the gaps that I can, and I’ll be more than happy to take in suggestions about what else I can cover here.
I am going to tackle the more technical aspects in this post:
Design: Designs are best approached first with storyboards. The storyboards are used to create process flows. The process flows lead to wireframes. The wireframes lead to the final design.
You can skip all of these steps and go directly to a design, but the odds are that you will struggle at a later stage to force-fit that disciplined a process into an existing system that has grown without it.
What is more important, short-term gain or long-term pain? Make your pick.
Development: The choice of framework/language to build on is made at this stage. Unless you are someone who knows technology very well, avoid using the latest fancy framework in town.
You have to establish coding standards, documentation standards, bug tracking, version control systems and release management processes.
Testing: Set up both automated and manual tests to address both logic and real-world usage. Testing infrastructure built right will include a good set of unit and behavioural tests and a continuous integration framework that will catch most errors during the build phase itself.
Deployment: No (S)FTP. Simple. Deployment options are available these days from the simple to the ridiculously complicated. It gets harder when you have to update code on a pool of application servers that need a rolling update/restart cycle.
The more challenging task is to abstract this layer of the stack away behind a simple interface that developers can use. You cannot and should not expect developers to debug problems in the deployment infrastructure.
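To make the rolling-update problem concrete, below is a deliberately naive sketch; the host names, paths and commands are all illustrative, and everything a real deployment tool adds (health checks, connection draining, rollbacks) exists because this loop alone is not enough:

# naive rolling deploy across a pool (illustrative only)
for host in app01 app02 app03; do
  ssh "$host" "cd /srv/app && git pull --ff-only && sudo service httpd graceful"
  sleep 10  # crude warm-up window before touching the next node
done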
Distribution: A local CDN or an international one — which is the right one to use? Should I use a CDN at all? Recently, a company I spoke to had a response time to their origin server that was 1/5th of what they were getting from their CDN. This was done to leverage cheaper CDN bandwidth and is a classic case of cost optimization in the wrong place.
Is CloudFront the right solution? Can my preferred CDN provider handle wildcard SSL termination at a reasonable cost? How costly is it to do a cache purge across all geographies? Is it even possible? Is it important to purge CDN caches? Is a purge important to avoid compliance hurdles for some obscure requirement in my market of choice?
Mobile-specific Parts: Native, cross-platform or HTML5? Do I need a mobile application at all? Which platforms should I target? What is the minimum OS level that I should support on each of those platforms? How do I align those decisions with the target audience I am going to address?
Outbound, non-consumer-facing Services: Should I expose any of my internal data with a developer-facing API? What should I use to expose that API? Do I build it on my own or do I use a hosted platform like Apigee? What sort of authentication should I use? What sort of identity management should I use? Should I even try to split identity and authentication into two different services?
Inbound, non-consumer-facing Services: What do I use to handle data that I fetch from other sources? How do I ensure that I cache my requests to respect rate limits? What is a webhook? How do I go about implementing one?
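For the webhook question, at least the shape of the answer is simple enough to sketch. A webhook is just a URL on your side that another service POSTs to when something happens. Below is a minimal, hypothetical PHP receiver; the header name and the shared secret are made up for the example:

<?php
// receiver.php: a minimal, hypothetical webhook endpoint, not production code
$raw = file_get_contents('php://input');

// Verify a shared-secret signature before trusting the payload.
// The header name and secret are illustrative; use whatever the sender documents.
$sent = isset($_SERVER['HTTP_X_SIGNATURE']) ? $_SERVER['HTTP_X_SIGNATURE'] : '';
if ($sent !== hash_hmac('sha256', $raw, 'shared-secret-from-config')) {
    http_response_code(403);
    exit;
}

$event = json_decode($raw, true);
// Queue the event for async processing; doing heavy work inline will
// make the sender time out and retry, delivering the event twice.
error_log('webhook event received');
http_response_code(204); // accepted, nothing more to say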
Replication & Redundancy: What is the maximum acceptable downtime for my application? Is there a business case for a multi-DC deployment? How extensive does my disaster recovery plan have to be?
AWS, Rackspace, or a good old dedicated rack in a datacenter? Should I use Glacier? What should I use for DNS management?
Analytics & Instrumentation: DAU, MAU, WAU — which of these do I have to track? Are bounces more important than acquisition? Is acquisition more important than repeat transactions? How do I bucket and segment my users?
How do I measure passive actions? Should I start tracking a minor version of an otherwise little-used browser because my JavaScript error tracking reports show that the current release is breaking critical parts for my most valuable demographic, who use that exact obscure browser?
Wait, I can track client-side JavaScript errors?
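Yes, you can. A minimal sketch: hook window.onerror and beacon the details back to an endpoint of your own (the /log/js-error URL here is made up):

// report client-side errors to a hypothetical logging endpoint
window.onerror = function (message, source, line, column) {
  new Image().src = '/log/js-error?d=' + encodeURIComponent(
    message + ' | ' + source + ':' + line + ':' + column +
    ' | ' + navigator.userAgent
  );
  return false; // let the browser's default handling run as well
};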
Conclusion
As you can see, the list raises more questions and provides no answers. This is intentional, as there is no one-size-fits-all answer to these questions. Even within specific company lifecycle segments (early stage, stable start-up, established company), the internal circumstances vary from company to company.
This list is more a starting point than a destination in itself. Use it to build a better framework that is suited to your organization and your product. And if you need more help, just ask!

Filed under: Start-ups, Technology

Scaling Notifications On Elgg To Support Rich, Context-Aware Emails

One of the core aspects of a social networking site is its ability to notify its users across different channels. Social networks that have complex access restrictions are entirely different beasts to build and scale compared to sites that are either mostly open, or where content generation can only be done by a handful of users.
I have been running an Elgg site for an old client since 2009; it is a private, gated network. We ran into problems early on with the newsletter that had to go out to the entire user base. This was at a time when products like MailChimp were not an option and we were also working with a fairly limited budget. At the first stage, we mitigated the problem by using a job queue built on MySQL.
As any engineer will tell you, a job queue based on an RDBMS that can run only one worker or, even worse, depends heavily on locking to run multiple workers, is not a job queue. Eventually, it will cause more trouble than it is worth, and that is exactly where we ended up. Besides, as an Elgg site grows and you introduce more features to it, something that can farm out jobs and handle them asynchronously is worth its weight in gold.
Eventually, I wound up creating a simple set-up using Beanstalkd. The notification handler and the generic mail handlers are overridden to add jobs to the Beanstalk queue, and a PHP worker (managed by Supervisord) processes the jobs in the background. I could go a level deeper and hand even the individual job creation over to the queue, but the current approach seems to be holding up well for the moment, so that next step can easily wait a while longer.
A couple of pitfalls you need to watch out for, should you attempt to do the same thing (a sketch of the worker loop follows the list):
1. Content encoding: This will drive you nuts if your scripts, DB tables and CLI environment differ in how their locales are set up. Do not assume that everything that works in the browser will work the same in the CLI. It won’t.
2. Access: The CLI script loads the Elgg environment with no logged-in user. So be aware of any functions that use sessions to return results.
3. Valid entities: PHP will error out when faced with an attempt to call a method on a non-object. If a job that causes such an error is never kicked or buried (which cannot happen when the script itself exits on the invalid object), the worker will endlessly die and restart. You have to obsessively check every object for validity before you attempt to do anything with it.
4. Use MailCatcher on your development set-up. It will save you a ton of time, even though it does make the server itself a bit sluggish.
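For the curious, the worker loop is small enough to sketch in full. This is a simplified version, assuming the Pheanstalk client library and an Elgg 1.8-style engine bootstrap; the paths, tube name and addresses are illustrative:

<?php
// worker.php: simplified background mail worker, kept alive by Supervisord
require_once '/var/www/elgg/engine/start.php'; // boots Elgg; the CLI has no session/user

$queue = new Pheanstalk_Pheanstalk('127.0.0.1'); // local Beanstalkd
$queue->watch('elgg-mail');

while (true) {
    $job  = $queue->reserve();            // blocks until a job arrives
    $data = json_decode($job->getData(), true);

    // Pitfall 3: obsessively validate entities before touching them.
    $user = get_entity($data['guid']);
    if (!($user instanceof ElggUser)) {
        $queue->bury($job);               // park bad jobs instead of crash-looping
        continue;
    }

    // Pitfall 1: force a known encoding; CLI locales differ from the browser's.
    $body = mb_convert_encoding($data['body'], 'UTF-8', 'auto');

    elgg_send_email('noreply@example.com', $user->email, $data['subject'], $body);
    $queue->delete($job);                 // acknowledge only after a successful send
}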
There are a few other options in the Elgg ecosystem that do similar things, like Jettmail and the upcoming async notifications feature in Elgg 1.9. But both have their own complexities and issues; I could not wait till 1.9, and I needed something that didn’t require as much fiddling as Jettmail.
It is also possible to extend this kind of development further and leverage one of the transactional email services out there, using their inbound email features and webhooks to post to Elgg over email. There are, though, no plans to roll that out right now, and I will update this post if we ever get around to doing that.

Filed under: Technology

Running 3.8.0-29 Kernel On ElementaryOS Luna

After a bit of tweaking and fiddling, I have managed to get the 3.8.x kernel running on the Acer Aspire V5-431. Unlike the previous time, when I tried and failed to get bcmwl-kernel-source to compile from the package manager, this time a different approach worked. Thanks to this post on AskUbuntu, I picked up the latest bcmwl-kernel-source (6.30.223.30) and installed it.
The package installs without any issues and enables WiFi on the machine. If you hit the problem where the driver shows up as installed and activated, yet you can’t seem to get the WiFi going, just make sure the other WiFi modules are blacklisted and disabled.
My blacklist looks something like this:

blacklist b44
blacklist b43legacy
blacklist b43
blacklist brcm80211
blacklist brcmsmac
blacklist ssb

You also have to make sure that ‘b43’ is commented out in /etc/modules if it is present there.
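After a reboot, a quick check like the one below should show Broadcom’s wl module loaded and none of the blacklisted ones:

lsmod | egrep 'wl|b43|b44|brcm|ssb'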
I have also been able to make the Huawei EC1260 Wireless Data Modem (Tata Photon+ being my provider) work with this kernel. You will need to configure usb_modeswitch for that, after which the device will show up with the 12d1:140b profile.
The profile data looks like this:

DefaultVendor=0x12d1
DefaultProduct=0x140b
#HuaweiMode=1
MessageEndpoint=0x08
MessageContent="55534243123456780000000000000011062000000100000000000000000000"
NeedResponse=1
CheckSuccess=10
DisableSwitching=0

The 3.8.x kernel seems to be pretty good. The machine runs a lot cooler than it did with the 3.2.x kernel, and I have yet to run into any issues. The older kernel had the odd lock-up now and then; I have not experienced that in the day or two since. It has been a worthwhile upgrade for me.

Filed under: Technology

Moving Away From OS X, Switching Over Fully To Linux

Most of the reasons for the move have already been documented in a previous post, so I’ll skip the immediate compulsions that pushed me in this direction. Even while I was writing that post, I was not very sure if it would all come together well in the end. After much experimentation (and some really frustrating times), I’m glad to say that the transition is complete and I won’t be going back to an Apple laptop for a while.
The overall Linux on desktop experience is a marked improvement on the last time I attempted it. That was during a time when I was only too glad to tinker around endlessly, and when it was more than OK for me to insert a module into the kernel to get the sound card to work. That time, though, is long gone and I now prefer systems that just stay out of the way, which is why OS X and the Apple laptops were wonderful for me.
That said, I have recently been feeling that the premium you pay for that experience is a bit over the top with Apple. But replicating that experience on another platform (Windows does not cut it for me simply because I am way too used to working in a *nix environment) has been a more than painful experience every time I have tried it.
In a lot of ways, the Linux on desktop story right now resembles the Android story around the time of Froyo. That comparison is meant to cover only the technical aspects; you can safely ignore the market share part of the story. Even with this marked improvement, it will be a long, long time before Linux becomes a serious player in the desktop/laptop market.
Coming back to the comparison, I find the quality of apps on Linux has improved significantly. They are still not as pretty or as consistent as OS X apps, but the story is a drastic improvement on earlier times. Then there are projects like elementaryOS, where the teams have made a concerted effort to make everything a lot more consistent and well thought out.
In the overall picture, none of that will matter. Most of the big companies that sell desktops and laptops are primarily tied to Microsoft and the ecosystem around it. There have been efforts like Dell’s Developer Edition, but those are hardly mainline efforts, and since we are living in an age where a platform is no longer simply about the hardware and the OS, without major muscle behind it the Desktop Linux story will always be a minor one.
For me, the Linux story has been extremely positive so far. Save for not being able to run iTunes without virtualization or emulation (one of the sad outcomes of the demise of Flipkart’s digital music business), there is nothing I have been unable to do on Linux that I was able to do on OS X. The UI/UX aspect is no longer an issue with eOS, which, surprisingly, feels a lot less like OS X once you start using it more.
There are some terrors that remind me of the good old days of Desktop Linux, when everything was a lottery. But once you get a stable system in place, the beast just keeps chugging on and stays out of your way, and I do foresee a long and fruitful association for us this time around.

Filed under: Technology

Do Not Upgrade Kernel While Using elementaryOS On Acer Aspire V5-431

Edit: Figured out a way to run the 3.8.x series kernel here. I am running 3.8.0-31 at the moment, without any issues. This, though, is not recommended by the eOS team, and should something go wrong, you will be on your own.
One of the best post-installation resources for elementaryOS is the elementaryupdate.com site. They conclude their post on what more you can do to customize and update the OS after installing the current version (Luna) with a recommendation to upgrade the kernel to raring-lts. If you do this on the Acer Aspire V5-431, you will break your Broadcom BCM43228 (14e4:4359) driver, as the bcmwl-kernel-source module will not build on the 3.8.0-29-generic kernel, and many hours of frustration will follow.
In short, stick to the 3.2.x series kernels till the eOS team suggests otherwise, as they do recommend in this post. There are good reasons to move to the latest kernel, as a lot of things seem to work better — auto-dimming of the display, for one — with the new kernel, but this kind of breakage is severe, and it is a good idea to stay away from any kernel upgrades that don’t come through the software update process.
This is really one of the annoying things about using Linux on the desktop: you would expect something that worked out of the box in an older version of the kernel to do the same in a much newer one. I fully understand the reasons why things work this way, but it makes for extremely poor user experience, and even for someone like me, who is a bit better than the average user at figuring out these things, it is frustrating and a waste of time.

Filed under: Technology

Revisiting Linux With elementaryOS, Acer Aspire V5

With the old MacBook getting on in age (it is an early 2008 MacBook4,1), the move to find a replacement was always on the cards. The machine has served me well, travelling with me to different parts of India, including high-altitude passes in the Himalayas. Of late, even after a complete reinstall, the machine has been showing its age, and with persistent heating problems and lock-ups, the writing was quite clearly on the wall. I could get it repaired, which I eventually will, but the board only supports DDR2 and the memory is maxed out as it is at 4GB. The only other option is to upgrade to an SSD, fix the problems and hope for the best after that.
The primary candidate for the replacement was the 13″ MacBook Air. After the millionth (failed) attempt to find a reasonably priced Linux laptop solution that just stayed out of the way, I was pretty sure that I’d have to stick to OS X and Apple, and have no choice but to gulp down the high premium that Apple charges for the fire-and-forget experience it is more than justifiably famous for. In the midst of all this, I ran into an interesting so-called Linux laptop from Acer. It is called the Aspire V5-431 and I found a pretty decent price for it at Flipkart.
At this point, I must digress a bit about the non-Apple laptops. Dear god, some of them, especially the Lenovo ultrabooks, are such a slavish ripoff of the Apple laptop line-up. I can understand smartphones looking much like each other these days; there are not too many different ways in which you can design a phone. But that is not the case with laptops, and it is really shameful the extent to which the copying happens here. I guess none of these copies are much of a threat to Apple in the market, so it is probably not worth suing the manufacturers over, but it still is not a great thing to see. The V5-431 also suffers from a bit of this ‘inspiration’ problem, but it is hard to mistake it for an Apple unit.
The laptop comes pre-installed with Linpus Linux, which is instantly discarded by most users. But having a Linux laptop meant that I could have some degree of certainty that most of the bits and pieces would work well should I run some other Linux distro on it. It has been a while since I used a Linux desktop as my main platform, and it seems that while the underlying platform has changed a lot (and for the better), the user experience is still ghastly and inconsistent, featuring interfaces and UX that could only be created and loved by engineers.
That was when I came upon a project called elementaryOS. It is based on Ubuntu (the current version is built on Precise: 12.04.2), but an awful lot of work has gone into making the front-end user experience clean, consistent and fast. It is hard to miss the very obvious OS X inspiration in a lot of the visual elements, but once you start using it a bit more, the differences start to show up, and they do so in a nice way. Linux on the desktop/laptop has been begging for something like this for years, and I am really thrilled to see someone finally do it right. If you care to take apart the bits on top, you’ll find a familiar Ubuntu installation underneath, but you really should not bother doing that.
I have gone through some three re-installs of the OS so far, for various reasons. One thing you need to watch out for while sorting out eOS on the V5-431 is to stick to the 32-bit OS, as things get quite crazy should you attempt to mix i686 and x86_64 platforms while using virtualization. The eOS 32-bit kernel is PAE-enabled, so you can use more than 4GB of RAM on the machine, but I would highly recommend sticking to 32-bit on everything (OS, VirtualBox, any guest OS) and you’ll have no reason to complain. I discovered all of this the hard way, as my primary requirement is a working Vagrant installation on the laptop, and I eventually had to redo the base box in 32-bit (the original from the MacBook was 64-bit CentOS 6.4).
The experience with the laptop has been pleasant so far. I have ordered more memory (8GB, to be precise), and even at 2GB the machine feels a lot faster and more stable than the ailing MacBook. I will hold off on getting an SSD for now, as the machine feels quick enough for me at the moment and the extra memory will only make things better. After many attempts at customizing the interface, what I have realized is that it is best left alone. The developers have done a great job of selecting the defaults, and 9 times out of 10 the modifications you make are not going to improve on them. The only thing you’ll need is to install the non-free TTF fonts, enable them in your browser’s font selection and get on with the rest of it.
Other than that, the main issue is the color calibration of the monitor. The default install has a blue-ish tint and the blacks don’t render true, which is infuriating on a glossy screen. I finally fixed the problem by calibrating the display on a Windows installation and pulling out the ICC profile from it. I’ll share the link to the profile at the end of this post; if you have the same machine and are running Linux on it, use it. It makes a world of difference. You will have to install Gnome Color Manager to view the profiles.
After all of that, the machine seems quite a good deal for me. It does not heat up too much, is extremely quiet and weighs a bit over 2 kilos. The 14″ screen is real estate I appreciate a lot, coming from the 13″ MacBook. The external display options are standard VGA and HDMI. My primary 22″ monitor has only DVI-D and DVI-Sub inputs, so I’m waiting for the delivery of a converter cable to hook it up to that one. The battery is not the best, though; Acer has cut some corners there, but you can’t have everything at such a low price. Even with the memory upgrade, the machine will still cost me less than a third of what a new MacBook Air (the base model, that is) would right now. I’m getting around 2.5 hours on real hard-core usage, which is not bad at all.
The stack is otherwise quite stable. It reads something like the list below:

  • Google Chrome
  • LibreOffice
  • Virtualbox
  • Vagrant
  • Sublime Text 2
  • Skype
  • Dropbox
  • VLC
  • Darktable

I’m not exactly a power user, with 90% of my work done in a text editor, web browser and VLC, but the combination of eOS and the Aspire V5-431 is something I can easily suggest to a lot of people looking to break away from regular Linux/Windows/OS X, and that too at a good price. There is a new version of the laptop out with the next generation of the chip, but I have not seen any great benefit from that upgrade, which also costs a bit more. You can spend that money on getting more RAM instead.
eOS is also a nice surprise, and it is a pretty young project. With time it will only get better and eventually grow quite distinct from the OS X look it currently resembles.

Filed under: Technology

Javascript Corruption In Vagrant Shared Folders

If you are serving JavaScript files from a typical LAMP stack in Vagrant using shared folders, you will hit a problem where the JS files are served truncated at arbitrary lengths. Curiously, this does not seem to affect other static text file types, and it could be a combination of headers and caching that is responsible for it.
By the looks of it, the problem is not new. This thread from the VirtualBox forums addresses the same issue, and it goes back all the way to 2009. The last post in the thread provides the right solution, which is to turn off sendfile in the httpd config.
Curiously, EnableSendfile defaults to ‘off’ in the stock installation, but disabling it explicitly gets rid of the problem. This should be fun to dig into and unravel, but I will leave that for another day.
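For reference, the fix is a single directive in the httpd configuration; where it lives varies by distribution (on CentOS it would typically be /etc/httpd/conf/httpd.conf):

# stop Apache from using sendfile() so shared-folder reads are not truncated
EnableSendfile Off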

Filed under: Technology

Quick Tip On Shared Folders And Logging In Vagrant

Continuing with the recent posts on Vagrant, today we’ll look at the tricky issue of shared folders and using them as locations to store logs.
My idea in using Vagrant was to keep all development-related files, including logs, on the host machine, with the guest accessing them through shared folders. This gives you the best of both worlds: you can use your editor of choice on the host, while the files are executed on the guest. This works fine on a set-up that has only a few shares and not more than a port or two forwarded.
For a bit of background, this is how Vagrant goes through its start-up cycle.
The first cycle is all network-related. Vagrant detects any forwarding conflicts, cleans up old forwarding settings and, once the coast looks clear, sets up all the forwards specified in the Vagrantfile.
The next cycle is the actual VM boot, where a headless instance of the VM is kicked into life.
Lastly, Vagrant loads all the shared folders.
The problem starts when the guest machine begins processing its init.d directives during the second cycle. The shared folders often take a good chunk of time to load, and depending on the level of panic triggered in the software started by init.d when it encounters files that are missing (because, well, the shared folder that has them has not been shared yet), life may move on peacefully, with adequate warnings, or the software may just error out and die.
One such piece of software is the Apache HTTPD daemon. It can start up without issues if it can’t find the documents it has to serve, but it simply throws up its hands and quits if it can’t find the log files that it is supposed to write to. And a good developer always logs everything (as she/he should).
The solution, in the case of HTTPD, is to ensure that you log to a volume that is on the guest machine and not on the host. This does mean that you can’t tail the log file from the host to watch errors and requests stream by, but that is a small price compared to figuring out the mysterious deaths of an HTTPD daemon that starts up fine after you do a ‘restart’ once the VM is fully up and running.
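In vhost terms, that just means pointing the log directives at a path on the guest’s own filesystem; the paths below are the usual CentOS ones and purely illustrative:

# log to the guest's own disk, not to a Vagrant shared folder
ErrorLog /var/log/httpd/dev-error_log
CustomLog /var/log/httpd/dev-access_log combined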

Filed under: Technology

Port Forwarding Small Port Numbers With Vagrant On OS X

While working with a Vagrant set-up, it is easy to forward ports with the forwarded_port directive.
This is accomplished by making entries in the format below in your Vagrantfile:
config.vm.network :forwarded_port, guest: _guest_port_number, host: _host_port_number
The catch here is that Vagrant won’t forward small (privileged, below 1024) port numbers on the host machine. This means that you will have to access the service on a higher port number, which is a bit of a downer considering that we are going through all of this pain to have a development environment that is nearly an exact clone of production.
The solution is to use ipfw (the humble IP firewall on BSD-derived systems, which includes OS X) to forward the low port to a higher port, and then forward that higher port to the corresponding low port on the VM.
Let us assume that you want to forward both HTTP (Port 80) and HTTPS (Port 443) to the Vagrant VM.
First, use ipfw to forward the ports on the host:
sudo ipfw add 100 fwd 127.0.0.1,8080 tcp from any to me 80
sudo ipfw add 101 fwd 127.0.0.1,8443 tcp from any to me 443
Then map those higher host ports to the corresponding low guest ports in the Vagrantfile:
#forward httpd
config.vm.network :forwarded_port, guest: 80, host: 8080


#forward https
config.vm.network :forwarded_port, guest: 443, host: 8443

I do realize that this is a bit of a loopy way to go about it, but when you have to juggle port numbers in a complex deployment environment, the overhead of keeping the differences in mind (and the set-up/code changes to handle them) and the propensity to make mistakes only keep increasing over time.
As far as I know, you can do the same with iptables on Linux, if ipfw is not your poison of choice, but I have not tested it.
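For the record, the equivalent on Linux should look something like the rules below, with the same caveat that I have not tested them:

# untested sketch: redirect privileged ports to the Vagrant-forwarded high ports
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8443
# PREROUTING does not see locally generated traffic; add OUTPUT rules for that
sudo iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-port 8080
sudo iptables -t nat -A OUTPUT -o lo -p tcp --dport 443 -j REDIRECT --to-port 8443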

Filed under: Technology

Vagrant, FreeNAS And An Automated Life

Somewhere during the week, my OS X Leopard installation became so broken that it made more sense to go for a clean install than to attempt another workaround. The installation was well over two years old, and for the amount of fiddling and development work that gets done on the machine, it had held up quite well. But after a point, the debt accrued from numerous hacks and fixes started eating up more time and thought, and I felt it was better to just dedicate a day or two and start from scratch with OS X.
Having chosen that path, I decided to change the manner in which I do my development. Having heard a lot of good things about Vagrant, I wanted to give it a shot. One of the broken things on the old installation was Qt, which meant that I could not install VirtualBox, as it depends on Qt; moving to a Vagrant-based development set-up would have necessitated a reinstall in any case. The other change I made was to convert my old netbook into a FreeNAS machine, which means it now handles DLNA, storage and file transfers on its own, and also consumes very little power as it is an Atom-based machine.
Instead of using one of the many available boxes for Vagrant, I built my own base box with the following set-up:
  • CentOS 6.4 (64-bit)
  • Apache
  • PHP
  • MySQL
  • Beanstalkd
  • memcached
  • RabbitMQ
  • Supervisord
  • PostgreSQL
  • MailCatcher
  • Chef
  • Puppet
Once I can find a bit of time, I will upload the base box and related Vagrant files for anyone who would like to use the same.
The current set-up forwards everything on port 80 to the VM’s port 80 using ipfw, and, using Vagrant’s file sync, all my web directories remain on my main hard disk. This has wound up giving me a very consistent development environment, much like my production one, and it also means saying goodbye to the confusion one has to go through every time to compile and get things running on OS X, even though that has gotten so much easier compared to 6-7 years ago.
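The relevant Vagrantfile lines for that arrangement look something like this; the paths are illustrative:

# forward HTTP to the VM (ipfw bridges host port 80 to 8080, as per the earlier post)
config.vm.network :forwarded_port, guest: 80, host: 8080

# keep the web roots on the host's disk, shared into the guest
config.vm.synced_folder "~/work/www", "/var/www/html"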
This also has the benefit of making the primary OS a much simpler affair to handle. It now has only a code editor, Git, Subversion, Vagrant, a couple of browsers, Dropbox and LibreOffice as its main components. My development stack is almost at a level now where I have nearly complete device independence and portability, as I can easily move my development Vagrant box to another machine and get started there in less than 5 minutes.
The timing for doing this, though, was not that great. I lost a good two days to it, but the longer it was put off, the more it would have cost in time and money along the way, and this is one of the many investments that need to be made towards running a smart and lean operation. The next steps will be to fully automate provisioning, testing, logging and deployment, which will let me focus more on the key aspects of running the business than spend many hours on repetitive tasks while waiting to find talented people who can take these over.

Filed under: Technology