The data center of the future is easy to visualize if you think back 25–35 years to the mainframe era. The mainframes of old consisted of multiple processor complexes with local memory, interconnected via multiple I/O channels to high-speed (for the time) storage and communications gateways. They were managed from a centralized console and executed one or more control programs on each node (processor, I/O controller, etc.).
Modern data centers are rapidly evolving toward this architecture, probably because it works best.
- Storage is specializing into network-attached mass-storage solutions accessed over a high-speed I/O channel (GigE, iSCSI, FC, etc.).
- I/O channels increasingly look like high-speed switched networks, with tag-switched Gigabit Ethernet as the inevitable winner.
- Servers are specializing into virtual machine hypervisors, with gobs of memory and multiple CPU cores per blade as the most cost-effective specialization.
- Since CPUs are so fast, on-blade memory is the only viable answer. Maximizing memory on each blade is also the most cost-effective way of provisioning general purpose hypervisors.
- Control programs are tucked into VMs loaded by a hypervisor, just like the old IBM VM mainframe O/S that could run multiple operating systems, including MVS (now known as MS-Windows) and Unix (now known as Linux). VMs are necessary because most software development predates the Pentium 4 and its hyperthreading capabilities (i.e. you can only go so far with multitasking and threading before it is simpler to run a dedicated O/S per application and not have to worry about concurrency).
- Workload management is a necessary new discipline for maximizing the value of distributed servers, but in the old days it was simply an amazingly rich job scheduler. Enter SOA and workload orchestration solutions with uncanny scheduling similarities to those old job schedulers.
- Systems management must be centralized to a console, but the tired old agent-based server management solutions of the 20th century are not going to scale – principally because they are locked into the very VMs that they need to manage. Enter an emerging class of agentless management solutions.
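The job-scheduler lineage mentioned above is easy to make concrete. A modern orchestrator, like an old mainframe scheduler, is at its core a priority queue of pending work. Here is a minimal, purely illustrative sketch (the `JobScheduler` class and job names are hypothetical, not any real orchestration product):

```python
import heapq
import itertools

class JobScheduler:
    """Toy priority-based job scheduler, in the spirit of old mainframe
    job schedulers: higher-priority jobs run first, and ties are broken
    by submission order (FIFO)."""

    def __init__(self):
        self._queue = []               # min-heap of (-priority, seq, name)
        self._seq = itertools.count()  # tie-breaker preserving FIFO order

    def submit(self, name, priority=0):
        # Negate priority so the highest-priority job pops first.
        heapq.heappush(self._queue, (-priority, next(self._seq), name))

    def run_all(self):
        """Drain the queue, returning jobs in execution order."""
        order = []
        while self._queue:
            _, _, name = heapq.heappop(self._queue)
            order.append(name)
        return order

sched = JobScheduler()
sched.submit("nightly-batch", priority=1)
sched.submit("payroll", priority=9)
sched.submit("report", priority=5)
print(sched.run_all())  # ['payroll', 'report', 'nightly-batch']
```

A real workload manager layers resource constraints, dependencies, and SOA service dispatch on top, but the scheduling core is recognizably the same.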
So if you were planning to win the war for the data center, it seems obvious that you would build a product that:
- runs a high-speed, bus-based network with the ability to tag and stream traffic flows by priority (aka tag-switched Ethernet with NetFlow and other QoS)
- supports a wide variety of densely packed mass storage (because nobody other than fruit flies wants to actually be in the disk drive business)
- features a multi-core blade architecture with as much memory as possible (maximizing VM capacity)
- packs lots of blades into a small physical space with a low environmental footprint (green is not only good for the planet; it's also much cheaper to operate and your components live longer)
- avoids razor-thin margins on the blades by using proprietary network QoS to deliver superior I/O performance at the same unit cost (since everyone has an Intel CPU, you cannot gain strategic advantage from superior CPU performance)
- supports a data-center-wide control program consisting of distributed hypervisors running Windows and Linux guest O/Ses (aka VMware, Hyper-V, or Citrix). This gives you a data-center-wide, network-centric solution that interconnects all storage to all blade complexes and all blades within those complexes. (You want to source your hypervisor from others because it too will become an undifferentiated commodity; that's why most are already free.)
- includes an advanced distributed workload management solution that can handle both SOA services and batch jobs (yes, these still exist in real life and still do real work). You want to own this piece because this is where a huge part of the value and differentiation is.
- includes a centralized agentless manager to help administer all these moving parts. Being agentless, it must rely on standards to actually perform the administration, but the level of intelligence built into it will directly lower the operational cost of management (i.e. wetware is expensive). Hence it is key to own this part too.
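The "tag and stream traffic flows by priority" capability in the first bullet is, at the wire level, what the IEEE 802.1Q header provides: a 3-bit PCP (Priority Code Point) field packed alongside the 12-bit VLAN ID. A minimal sketch of packing and unpacking that tag, assuming the standard 802.1Q layout (a toy illustration, not Cisco's implementation):

```python
import struct

TPID = 0x8100  # EtherType value identifying an 802.1Q-tagged frame

def pack_vlan_tag(priority, vlan_id, dei=0):
    """Pack an 802.1Q tag: 16-bit TPID followed by a 16-bit TCI
    (3-bit PCP priority | 1-bit DEI | 12-bit VLAN ID)."""
    assert 0 <= priority <= 7 and 0 <= vlan_id <= 0xFFF
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", TPID, tci)  # network byte order

def unpack_vlan_tag(tag):
    """Inverse of pack_vlan_tag; rejects non-802.1Q tags."""
    tpid, tci = struct.unpack("!HH", tag)
    assert tpid == TPID, "not an 802.1Q tag"
    return {"priority": tci >> 13,
            "dei": (tci >> 12) & 1,
            "vlan_id": tci & 0xFFF}

tag = pack_vlan_tag(priority=5, vlan_id=100)  # e.g. voice-class traffic
print(unpack_vlan_tag(tag))  # {'priority': 5, 'dei': 0, 'vlan_id': 100}
```

Switches that honor PCP can queue and stream flows by that 3-bit priority, which is the mechanism behind the "superior I/O performance for the same unit cost" argument above.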
With the exception of the last 2 points, we have just described Cisco’s UCS product plan.