Cloud X.0 (and Then Some?)

HP’s latest announcement that it was stepping away from an Intel-based chip model and planning a whole new GENUS (not generation) of servers raises the bar in the server race. And every bar-raising event comes with questions. Some are pure curiosity, and some more probing.

In its press release, HP noted: “With nearly 10 billion devices connected to the internet and predictions for exponential growth, we’ve reached a point where the space, power and cost demands of traditional technology are no longer sustainable,” said Meg Whitman, president and chief executive officer, HP. “HP Moonshot marks the beginning of a new style of IT that will change the infrastructure economics and lay the foundation for the next 20 billion devices.” (Emphasis is mine.)

I’ve been waiting for something like this, and since long before the announcement of Moonshot 18 months ago. At Sperry, nearly 40 years ago, we were actually developing a mainframe that was dynamically software-configurable … but it got shelved after $50 million, based on two factors. One was that Fairchild, then leading the fab world, said the design was too advanced to be realizable in chips (THEN). The other was customer-base resistance to ANY manner or scale of actual conversion that might be required (and yes, some would be required for the very oldest of Sperry systems). This next generation of HP servers (and its forthcoming competition) soared over the first hurdle and made the second one moot.

Back in the late ’70s, Fairchild offered to help Sperry build the facility to research, develop, and produce the chips on its (Sperry’s) own. But that project got shelved, 800 hardworking people went on to do something else, and the existing Sperry (later Unisys) product family evolved instead. I was part of the future product planning group that periodically reviewed the product family and its intent, status, etc., and we collectively gave it a (giddy-with-anticipation) thumbs-up. Sigh.

The impact of this new form of server can and should be seen as positive. Having energy-efficient, multi-module, functionally specific servers is, or should be, a customer’s dream. Have racks of them, and cabinets of them. This means customers can align the specifics of their data center to the actual nature of the systems they run, rather than making one “standard” server architecture fit all, and with greater cost efficiency. AND if the workload changes and you have to adapt, there likely is (or will be) one or more modules out there you can plug in to support it.
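To make that tuning idea concrete, here is a minimal sketch in Python of what matching a module mix to a workload might look like. The module names and capacity numbers are entirely my own invention for illustration, not anything from HP’s catalog:

# Hypothetical sketch: pick a plug-in module mix for a rack based on
# the workload an organization actually runs. Module types and their
# capacities below are invented for illustration only.
MODULE_CAPACITY = {
    "web-frontend": 400,  # concurrent sessions per module (assumed)
    "analytics": 50,      # concurrent jobs per module (assumed)
    "graphics": 20,       # rendering streams per module (assumed)
}

def modules_needed(workload):
    """Translate a demand profile into a per-module-type count."""
    plan = {}
    for kind, demand in workload.items():
        capacity = MODULE_CAPACITY[kind]
        plan[kind] = -(-demand // capacity)  # round up; a partial module still uses a slot
    return plan

# If the workload shifts next quarter, re-run the plan and swap modules.
print(modules_needed({"web-frontend": 3000, "analytics": 120, "graphics": 35}))
# {'web-frontend': 8, 'analytics': 3, 'graphics': 2}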

But did HP just adopt the “cellular phone market and product” model, preparing to offer new versions of the servers every three to six months or so? “It’s a software-defined server, with the software you’re running determining your configuration,” said David Donatelli, the general manager of HP’s server, storage and networking business. “The ability to have multiple chips means that innovations will happen faster.” That’s probably quite the understatement. NO slight intended, but it sounds like the cell/smartphone model to me.

The possibilities are endless, but the top two to me are: (1) those holding onto their own data centers will go berserk, and the cloud data center guys are going to be having heart attacks; or (2) maybe everyone will step back and carefully plan their futures. Servers used to be just servers, more or less. Now servers would or could seem to be anything you want them to be. That hits a lot of critical issues, from processing efficiency to surge-processing accommodation to … (feel free to keep adding).

We have reached [yet] another branch in the “Cloud Journey,” as I call it. We can’t call it a revolution or an evolution. We are on a fast-track journey that keeps picking up speed. We’re not even noticing the speed bumps anymore.

You can have ONE data center with a (potentially) constantly changing mix of servers to meet specific organizational systems and user-community needs. Or you can move to an SOA-like CLOUD services provider model, in which specific cloud suppliers provide best-in-class services for specific application types, e.g., graphics, scientific computing, data analytics, etc.
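A minimal sketch of that second option, assuming (hypothetically) a registry of niche providers with a generalist fallback; the provider endpoints here are placeholders I made up, not real services:

# Hypothetical sketch of the SOA-like federation model: route each
# application type to a best-in-class niche provider. Endpoints are
# placeholders, not real services.
BEST_IN_CLASS = {
    "graphics": "gpu-cloud.example.com",
    "scientific": "hpc-cloud.example.com",
    "data-analytics": "analytics-cloud.example.com",
}

def route(job_type, default="general-cloud.example.com"):
    """Return the provider endpoint for a job type, with a generalist fallback."""
    return BEST_IN_CLASS.get(job_type, default)

print(route("scientific"))  # hpc-cloud.example.com
print(route("payroll"))     # general-cloud.example.com (no niche provider yet)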

Now, what this means to the software vendors (SaaS and others) is still to be fully defined, and I personally imagine a lot of head scratching and perhaps some serious angst over how to cope with it. Or some vendors might just go about their business. But a shift to a consistent per-user pricing model versus a per-instance model has some interesting implications, for example. Or it might create a whole new metric.
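To show why the pricing question matters, here is a back-of-the-envelope Python sketch; every price and count in it is an assumption of mine, purely for illustration:

# Hypothetical comparison of per-user vs. per-instance licensing.
# All prices and counts are invented for illustration.
def per_user_cost(users, price_per_user=8.00):
    return users * price_per_user  # monthly

def per_instance_cost(instances, price_per_instance=500.00):
    return instances * price_per_instance  # monthly

users, instances = 1200, 15  # assumed deployment
print(per_user_cost(users))          # 9600.0
print(per_instance_cost(instances))  # 7500.0

With plug-in modules, the “instance” count becomes a moving target from month to month, which is exactly what could push vendors toward a stable per-user metric, or toward inventing a new one altogether.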

Personally, I see this as really re-dimensioning the evolution of cloud computing: possibly rethinking what goes into the cloud and what stays in house (if anything), or even pulling some items back in house. It also leads to the potential for more comprehensive federations of linked special-purpose cloud providers, each focusing on a niche market as firms try to slice and dice. SOA in the Cloud carried to the extreme, or maybe an evolution of the CLOUD AS SOA. IF you can get the energy and operational economics back in tow, and the management of all of this is NOT back-breaking, AND you have software economics benefits, you might even see the migration to the cloud slow, or morph into something different.

This could really change the forecasts for the share of the cloud held by the various players in the SaaS, PaaS and IaaS space, such as those shown in the Telco2Research graphics and other prognostications/forecasts. It will open and close some doors, for sure.

In my recent book Technology’s Domino Effect – How A Simple Thing Like Cloud Computing and Teleworking Can De-urbanize the World, I offered a simplified management overview of the implications of JUST cloud computing and teleworking on American society (which can be scaled to a global appraisal as well). My last article here on Enterprise Features, “Farewell Bricks and Mortar,” addressed the implications of mass (or even just materially increased) telecommuting/telework.

This server announcement re-dimensions the scope of that impact. And with that re-dimensioning comes increased concern for many things. Number 1 on that list is likely security; number 2 will likely be technology-proficient skill sets for each generation of the servers as they come to market.

But a solid #3 is the need for companies to assess just what this does to THEM … how they will operate, what they will do with this new capability, how best to apply it … This technological baseline shift is a reason for firms to become even better at ongoing self-appraisal and dynamic planning.

The new servers reflect pricing pressures and, certainly, the need to tune or enhance typical server performance. The new capabilities need to be understood and effectively adopted by organizations. I can easily see vector and scalar computing plug-in modules; graphics; GIS; … in fact, a single cabinet could house up to hundreds of modules, with a mix suited to many types of users in vertical areas. It opens up massively parallel computing and federated assets. It does many things.

But it points to something else.

Mr. Donatelli put it this way: “It’s the Web changing the way things get done.” Other firms will be launching their own versions of that type of server. With the plug-in module approach HP is using, the server platform will remain static, but the modules can be changed. The mental analogy is replacing a PC’s chip with a new one with ease, almost as if you were plugging in a USB device.

Eric Schmidt’s observation on the web ought to be stenciled onto every cabinet enclosure. (I was going to say engraved, but stenciling is cheaper.) And I’m starting to think that Schmidt, like Murphy, was an optimist. I have to wonder when that type of dynamic software definability will hit PCs, laptops, tablets, smartphones … and even our TVs and other devices. We get firmware/software updates all the time. This is the next step, more or less.

I’m not sure if the dog is biting the man or the man is biting the dog in this case.  But I do have a paw print on my chest and I am none too sure I am happy about that.  The paw print is the size of a stegosaurus’ footprint.

I asked a handful of colleagues what they would do with this. Some had no immediate response; some said they needed to assess the impact and sort out new planning models. Several others just grunted. One often-beleaguered soul smiled and asked me when someone was going to announce SkyNet, so he could retire and not have to worry about it. (Really.) Let the machine run the machine.

And the Cloud marches on. But now, it seems to have its own cadence.