
Comments

R.I.Pienaar

Most of this is true, but the one thing that sets a cloud apart from just a big datacenter is that all common operations are API-controlled.

Instead of logging into hundreds of switches and making painstaking changes, you make a simple API call.

Instead of racking load balancers and editing some vendor-specific CLI, you make an API call.

These APIs let you do things a bit differently: you do not build long-running servers that stick around for three years and demand careful, loving care. You simply build an entirely new copy of the infrastructure that runs the next version of your app.
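The pattern described above is often called immutable infrastructure. A minimal sketch of the idea, using a toy in-memory provisioner rather than any real cloud API (the stack layout and function names are made up for illustration):

```python
# Toy model of immutable infrastructure: instead of patching a long-lived
# server, build a fresh stack for each release and repoint traffic to it.
# provision_stack and the dns dict are illustrative, not a real API.

def provision_stack(app_version):
    """Build a brand-new copy of the infrastructure for one app version."""
    return {
        "app_version": app_version,
        "servers": [f"web-{app_version}-{i}" for i in range(3)],
    }

def deploy(dns, app_version):
    """Blue-green style cutover: build the new stack first, then flip DNS."""
    new_stack = provision_stack(app_version)
    old_stack = dns.get("live")
    dns["live"] = new_stack          # single pointer swap, no in-place edits
    return old_stack                 # caller tears this down at leisure

dns = {}
deploy(dns, "v1")
retired = deploy(dns, "v2")          # v2 now serves; v1 is untouched
```

The old stack is never modified, only retired, which is what makes rollback a pointer flip rather than a repair job.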

This means continuous integration becomes possible that concerns itself not just with code but with the infrastructure as a whole. It's not uncommon to see a team using something like Jenkins to build and destroy complete end-to-end infrastructures, complete with load balancers, DNS, databases, SAN storage, etc., only to run a set of tests and then tear the whole thing down, having paid for just an hour's worth of usage.

Imagine the old days, when developers' machines simply never matched production. Now you can literally give each developer an entire production-like environment, built from scratch this morning. No longer are those environments out of date or unmaintained; they are built automatically from the same configuration management rules as the production systems.
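One way to read "same configuration management rules" is a single recipe parameterized by environment, so only capacity differs between a developer box and production. A toy sketch (package names and sizes are invented for illustration):

```python
# One set of configuration-management rules, parameterized by environment,
# so a developer box is built from the same recipe as production.

BASE_RULES = {
    "packages": ["nginx", "postgresql"],
    "app_user": "webapp",
}

SIZING = {"prod": {"web_nodes": 8}, "dev": {"web_nodes": 1}}

def render_config(environment):
    """Apply the shared rules, varying only capacity per environment."""
    config = dict(BASE_RULES)
    config.update(SIZING[environment])
    return config

prod = render_config("prod")
dev = render_config("dev")
# Same software and users everywhere; only scale differs.
```

Because drift can only enter through the one `SIZING` table, a developer environment cannot quietly fall behind production the way a hand-built box does.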

It's this agility, among many other things, that sets a cloud apart from just a bunch of machines in an old-school DC.

David B. Black

Your point is well taken, and I agree with it. However, I'd just like to point out that it's quite possible for companies to move to cloud infrastructures without the wonderful automation you describe, and it's also the case that this kind of automation was possible and was in fact implemented (albeit not often) in pre-cloud days.
I love your specific example: an automatically built developer environment that exactly matches production. Although not often done, this has always been a great way to go, and I'm delighted to see its use rise in "cloud" environments.
I am no supporter of heavily customized and manual DCs, no matter what buzzwords are used.

R.I.Pienaar

Yeah, you get a large number of people who approach the cloud like they would a normal data center deployment, and this is a mistake.

The availability pattern of, say, the Amazon cloud is such that you _have_ to build with rapid failure and recovery in mind, and this forces you down the path of automation.
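A minimal sketch of that "design for failure" posture: assume any node can die, detect it, and replace it automatically rather than nursing it back to health. The health check and provisioner below are toy stand-ins, not a real cloud API:

```python
# Treat instances as disposable: drop the dead ones and provision
# replacements until the fleet is back at the desired size.

def healthy(instance):
    return instance["ok"]

def ensure_capacity(fleet, want, provision):
    """Drop dead instances and provision replacements up to `want`."""
    alive = [i for i in fleet if healthy(i)]
    while len(alive) < want:
        alive.append(provision())
    return alive

fleet = [{"id": "i-1", "ok": True}, {"id": "i-2", "ok": False}]
fresh = iter([{"id": "i-3", "ok": True}])
fleet = ensure_capacity(fleet, want=2, provision=lambda: next(fresh))
print([i["id"] for i in fleet])      # ['i-1', 'i-3']
```

Run in a loop, this is the automation that rapid failure forces on you: recovery becomes replacement, not repair.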

The performance pattern of Amazon is such that you _have_ to use newer types of databases rather than build on relational databases; hence the Cambrian explosion of NoSQL databases these days, all trying to find the right balance in which of the pillars of CAP they can ignore and still be suitable for at least some narrow set of use cases. This forces you toward an extremely metrics-driven production setup, for which today's tools in the monitoring space just don't cut it, so all the big shops literally build their own monitoring.

Those who embrace the model outlined above find the cloud a good place to live, albeit not a cheap one, as that's a whole lot of engineering investment; those who don't invest in the agility find that when, not if, it fails, they are left with downtime and frustration. Those who do invest find that, since the only way Amazon suggests you get reliability is to deploy in multiple zones, their application is inherently more robust, resistant to failure, accepting of change, etc. But this all comes at a big development cost.

Most people who jump to the cloud in a mad, buzzword-induced rush would probably be better off on some VM provider like Linode than in the very dynamic and unpredictable world of Amazon.

VCs love it, though, because they can shoot some struggling startup in the head in a day; no longer do they need to worry about long-running multi-year contracts, and I think this is a huge driver of the current perceived success of the cloud era.

Leonardo

I too agree that "cloud" computing as a name will be around for a while. The problem is that from a technical standpoint we all know, as IBMers or techies, what it is and that it encompasses many things, from the architecture to off-site storage, on-demand web apps, Web 2.0, et al. But the main issue is that the public don't understand the concepts, the processes, or the uses; they only understand off-site storage and, more recently (possibly due to Apple), the sync possibilities.

Looking at it from that point of view, the naming should maybe refer to what people already understand. Hardware is the touchy-feely stuff, physical and tactile. Software is the thing that makes the touchy-feely stuff come to life and do things. Middleware, although more advanced, is the stuff that makes the software do even more with the hardware. So why not keep to the same lines? "Air-ware."

If it's marketed and described properly, and well implemented and integrated, then there is no reason why a naming convention along the lines of "air-ware" could not be adopted. It has the implication of being something that's there that we can't see but CAN use. Just my two-penny worth.

Lhyn

Nargess, it is ironic, indeed! Our service can to some extent be considered a cloud service; however, it does not have some of the characteristics of an advanced cloud service. I considered moving our service to the cloud, but unfortunately it would be too expensive for us at this point. Moreover, our podcast is not a critical service needing high reliability, and I am sure our audience would not mind if we had outages like this every once in a while.

The comments to this entry are closed.