Ronnie05's Blog

The cost of cloud computing

Posted in The cloud and the open source by Manas Ganguly on August 31, 2009

 

Featuring an analysis by Dion Hinchcliffe of the top 3 cloud computing companies in terms of current pricing and feature sets. This is probably one of the first times a cost and feature-benefit comparison of cloud computing has been examined, and from the looks of it this space is going to get red hot in the future.

Lessons from today’s cloud computing value propositions

Taking a look at all this, I’ve come away with five conclusions about the top providers of cloud computing today given their current pricing and feature sets:

  1. Amazon is currently the lowest cost cloud computing option overall, at least for production applications that need more than 6.5 hours of CPU per day; below that usage level, GAE (Google App Engine) is technically cheaper because it’s free (the sketch after this list works through the break-even). Amazon’s current pricing advantage is entirely due to its reserved instances model. It’s also the provider with the most experience right now, and this makes it the one to beat with low prices + maturity. However, expect subscriptions from Azure to give it a run for its money when Microsoft’s cloud platform formally launches in a few months (probably November).
  2.  Windows costs at least 20% more to run in the cloud. Both Microsoft and Amazon offer almost identical pricing for Windows instances while Google App Engine is not even a player in Windows compute clouds. There are undoubtedly cheaper offerings from smaller clouds but they are less likely to be suitable for enterprise use, though certainly there are exceptions.
  3.  Subscriptions will be one of the lock-in models for cloud computing. Pre-pay for your cloud and you’ll get the best prices. But you may be committed to a provider for years, with no way to leave without stranding your investment.
  4.  Better elasticity does not confer major price advantages. GAE is one of the most granular of the cloud computing services, requiring you to pay only for what you actually use (by contrast, Amazon bills compute time in increments of at least an hour), but this granularity does not provide a major cost advantage for large applications.
  5.  You can’t pay more for better uptime, and existing SLAs are not sufficient for important business systems. It’s unclear why, given open questions about cloud reliability, no vendor will offer differentiated service where enterprises can pay more for a better SLA. The best you can get right now is also the worst: 99.95% uptime, which works out to roughly 4.4 hours of expected but unscheduled downtime a year (see the sketch after this list). For business-critical applications, this is still too much. This will end up being an opportunity for other vendors entering the space, though I expect the Big 3 listed here will improve their SLAs over time as they mature.
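
To make the arithmetic behind points 1 and 5 concrete, here is a minimal Python sketch. The 99.95% uptime figure and the 6.5 CPU-hours/day free quota come from the analysis above; the hourly rate and the 30-day billing month are hypothetical placeholders, not quoted Amazon or Google prices.

    HOURS_PER_YEAR = 365 * 24  # 8760

    def annual_downtime_hours(sla):
        """Expected unscheduled downtime per year for a given uptime SLA."""
        return HOURS_PER_YEAR * (1 - sla)

    def monthly_cost(cpu_hours_per_day, hourly_rate, free_hours_per_day=6.5):
        """Monthly cost under a GAE-style daily free quota. hourly_rate is
        a hypothetical placeholder, not a published vendor price."""
        billable = max(0.0, cpu_hours_per_day - free_hours_per_day)
        return billable * hourly_rate * 30

    print(annual_downtime_hours(0.9995))  # ~4.38 hours/year at 99.95% uptime
    print(monthly_cost(5.0, 0.10))        # 0.0 -- still inside the free quota
    print(monthly_cost(24.0, 0.10))       # 52.5 -- pays only above the quota

Under the free quota the metered model wins outright; past it, the comparison comes down to effective hourly rates, which is why Amazon’s reserved-instance discounting matters so much in point 1.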

Wikipedia goes with “Flagged Revisions”: Emphasizing the importance of discipline in crowd-sourced data

Posted in The cloud and the open source by Manas Ganguly on August 31, 2009

Crowd-sourcing to create an online repository of data/information has been a masterstroke from Jimmy Wales, the founder of Wikipedia! However, monitoring content as it flows in and validating that data is “clean” is key to building credibility. A little bit of censorship/discipline of data may actually favor crowd-sourcing and content democratization!

Wikipedia, the online encyclopedia launched by American entrepreneur Jimmy Wales in 2001 with the idealistic intention of being an online repository of all human knowledge, announced this week that it would have to abandon one of its founding principles. To combat a growing amount of vandalism on the website, entries would be reviewed before changes go up on the site. Wiki announced this on August 31st and will conduct a pilot run over the next fortnight to assess data validity and cleanliness along these lines.

Previously, any user was allowed to make almost any change to any entry: this was hailed as part of the democratizing power of the internet. But a sharp increase in false information – particularly in relation to people still alive – has forced a rethink.

 


How did Wikipedia work before?

Wales has been feted as a brilliant business mind and social innovator for tapping into a popular impulse to add to public knowledge that few people knew existed, and even fewer publicly predicted.

Wikipedia still works largely by allowing anybody to log in as a user and click on an “Edit this page” tab at the top of an entry. From there it’s simply a case of making changes and saving them, albeit according to a policy on “biographies of living persons”.

Any changes are then filed under the “Edit history” of the page, and the IP address – a numbered identity that shows where the change has been made from – is also kept on record. Pages that contain unverified information are highlighted.

Wiki introduces “Flagged Revisions”

The new policy is referred to as “flagged revisions”. It allows editors to adjudicate (mainly through reference to other news sources) on changes made to the pages of a living person. The flagged revisions will be rolled out by September 15th, 2009, and Wikimedia, the non-profit organisation that runs the website, will monitor users’ responses over the trial period.

A team of “experienced volunteer editors” will oversee amendments to such pages. “We are no longer at the point where it is acceptable to throw things at the wall and see what sticks”, said Michael Snow, chairman of the Wikimedia board.

And Mike Peel, its UK spokesman, clarified the intention: “Anyone can continue to edit these articles, but the work of inexperienced editors with less than three days’ experience will be subject to review by more experienced editors”, he said. “This is our attempt to create a buffer to ensure that editors do not commit acts of vandalism.”
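
As an illustration only (a hypothetical sketch, not Wikimedia’s actual implementation), the gating rule Peel describes could be modelled in Python as follows; the three-day threshold comes from his quote above.

    from datetime import datetime, timedelta

    REVIEW_THRESHOLD = timedelta(days=3)  # from Peel's "three days' experience"

    def edit_goes_live(account_created, now):
        """True if an edit publishes immediately; False if it is flagged
        for review by an experienced volunteer editor first."""
        return (now - account_created) >= REVIEW_THRESHOLD

    now = datetime(2009, 9, 15)
    print(edit_goes_live(datetime(2009, 9, 14), now))  # False: held for review
    print(edit_goes_live(datetime(2009, 9, 1), now))   # True: publishes directly

The point of the buffer is exactly this asymmetry: established accounts keep the instant “Edit this page” experience, while brand-new accounts write to a review queue instead of the live page.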
