The elements of the data center of the future are mostly available today. Everyone is pretty used to the data centers they've been using for a while, which are thoroughly grounded in the past. So they keep building new copies of hoary architectures. But the parts and ideas are available to anyone who chooses to avail themselves of them...
Ancient Computing History
Back during the first internet bubble, say around the year 2000, data centers would likely have lots of Intel Pentium Pro microprocessor chips in their servers. It was an amazing device at the time. It had over 5 million transistors on the chip. It was so powerful that it was used in the first supercomputer to reach the teraFLOPS performance mark. Pretty amazing.
But building applications to support internet-scale workloads was still hard. The clever software engineers of the time worked out ways to distribute the work among a collection of computers to get the job done, quickly and reliably. It was called, appropriately, "distributed computing."
Ancient Storage History
Back in the halcyon days before the internet, disks were just hooked up to the computers whose storage they maintained.
In internet-scale data centers, that wasn't good enough. There was always too much storage where it wasn't needed, and not enough where it was. Those bright computer guys got another good idea: we already have a local area network for connecting servers to each other. How about a storage area network for connecting computers to storage?
Ka-ching! Problem solved!
Today
Things have moved along. The latest microprocessor chips from Intel, for example the Xeon Processor E7 v2 family, have grown from millions of transistors to ... Billions of transistors. That's Billion, with a "B," as in roughly 1,000 times more transistors than the Pentium Pro had. Instead of a single, single-threaded core, there are 15 dual-threaded cores, a total of 30 effective processors, each awesomely faster than the single core in the Pentium Pro. And it supports about 25 times more main memory.
Each server with a Xeon E7 can handle at least 30 times the workload of the Pentium Pro -- 30 hardware threads instead of one -- and since each of those threads is itself several times faster, maybe 100 times. Here's a picture of the evolution:
The average data center architecture? Unchanged. Still emphasizing distributed computing, LAN and SAN, all the stuff invented to solve a problem of limited computing power that has long since disappeared.
The Future (for most people) Data Center
This data center can be yours in 2015 -- if you choose to build it.
The core principle is simple: use the cores! And everything else that's there! Get rid of obsolete architectures, chief among them that of "distributed computing" (with just a couple of exceptions), and drastically reduce the parts count. Here's what it might look like:
Note all the cores in the big (in power, but physically small) server boxes -- plenty of room for "distributed computing" inside a single chip. There are loads of cores; devoting a couple to handling storage functions eliminates boatloads of parts and connections -- the whole SAN -- without loss. You can easily arrange that, most of the time, the storage is attached to the box where the apps that use it run. If not? No problem -- just a single hop to the storage software on the right box, and you've got yourself a virtual SAN, faster and for less money. A rough sketch of that placement logic follows.
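Here's a minimal Python sketch of the idea -- purely illustrative, with made-up host and volume names rather than any particular product's API: each box owns the volumes whose disks it houses, a read is served locally when the app happens to be on the owning box, and otherwise it takes exactly one hop to the box that has the storage.

# Hypothetical sketch: class, host, and volume names are invented for
# illustration; this is not any vendor's actual API.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    local_volumes: set = field(default_factory=set)  # volumes whose disks live in this box

@dataclass
class Cluster:
    hosts: dict  # host name -> Host

    def owner_of(self, volume: str) -> Host:
        # Find the one box that physically holds this volume's disks.
        for host in self.hosts.values():
            if volume in host.local_volumes:
                return host
        raise KeyError(f"no host owns volume {volume!r}")

    def read(self, requester: str, volume: str) -> str:
        # Serve the read from local disks when possible; otherwise it is
        # exactly one network hop to the owning box's storage software.
        host = self.hosts[requester]
        if volume in host.local_volumes:
            return f"{requester}: read {volume} from local disks (no SAN, no hop)"
        owner = self.owner_of(volume)
        return f"{requester}: read {volume} via one hop to {owner.name} (virtual SAN)"

# Two big boxes, each owning the storage for the apps it normally runs.
cluster = Cluster(hosts={
    "box-a": Host("box-a", {"vol-app1", "vol-app2"}),
    "box-b": Host("box-b", {"vol-app3"}),
})

print(cluster.read("box-a", "vol-app1"))  # local: storage sits where the app runs
print(cluster.read("box-a", "vol-app3"))  # remote: single hop to box-b

The design choice is the same one the paragraph above describes: keep data next to the compute that uses it, and let the rare remote access cost one hop instead of a whole dedicated storage network.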
Conclusion
The data center of the future is still in the future for most people. The parts are there. The concepts are there. But old habits appear to be hard to break...