Browsing the blog archives for March, 2009.

Winning the Land War in the Data Center

Virtual Reality

Modern Data Centers

The data center of the future is easy to visualize if you think back 25 – 35 years to the mainframe era.  The mainframes of old consisted of multiple processor complexes with local memory interconnected via multiple I/O channels to high speed (for then) storage and communications gateways.  They were managed by a centralized console and executed one or more control programs on each node (processor, I/O controller, etc.).

Modern data centers are rapidly evolving towards this architecture, probably because it works best.

  • Storage is specializing into network-attached mass storage solutions accessed over a high speed I/O channel (GigE, iSCSI, FC, etc.).
  • I/O channels increasingly look like high speed switched networks, with tag-switched Gigabit Ethernet as the inevitable winner.
  • Servers are specializing into virtual machine hypervisors, with gobs of memory and multiple CPU cores per blade as the most cost-effective specialization.
  • Since CPUs are so fast, on-blade memory is the only viable answer.  Maximizing memory on each blade is also the most cost-effective way of provisioning general purpose hypervisors.
  • Control programs are tucked into VMs loaded by a hypervisor – just like the old IBM VM mainframe O/S that could run multiple operating systems, including MVS (now known as MS-Windows) and Unix (now known as Linux).  VMs are necessary because most software development predates the Pentium IV and its hyperthreading capabilities (i.e. you can only go so far with multitasking and threading before it is just simpler to run a dedicated O/S per application and not have to worry about concurrency).
  • Workload management is a necessary new discipline for maximizing the value of distributed servers, but in the old days it was simply an amazingly rich job scheduler.  Enter SOA and workload orchestration solutions with uncanny scheduling similarities to those old job schedulers.
  • Systems management must be centralized to a console, but the tired old agent-based server management solutions of the 20th century are not going to scale – principally because they are locked into the very VMs that they need to manage.  Enter an emerging class of agentless management solutions.
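The kinship between modern workload orchestration and those old job schedulers can be sketched as a priority queue of jobs dispatched to whichever node has free capacity. This is an illustrative toy, not any particular product's algorithm; all names and the scheduling policy are hypothetical.

```python
import heapq

# Toy workload scheduler: jobs carry a priority and a resource demand;
# the scheduler places the highest-priority job that fits on some node.
class Node:
    def __init__(self, name, capacity):
        self.name = name
        self.free = capacity

class Scheduler:
    def __init__(self, nodes):
        self.nodes = nodes
        self.queue = []       # min-heap: lower number = higher priority
        self.counter = 0      # tie-breaker to keep FIFO order per priority

    def submit(self, job_name, priority, demand):
        heapq.heappush(self.queue, (priority, self.counter, job_name, demand))
        self.counter += 1

    def dispatch(self):
        """Place queued jobs onto nodes with enough free capacity."""
        placements, deferred = [], []
        while self.queue:
            priority, seq, job, demand = heapq.heappop(self.queue)
            node = next((n for n in self.nodes if n.free >= demand), None)
            if node is None:
                deferred.append((priority, seq, job, demand))  # wait for capacity
            else:
                node.free -= demand
                placements.append((job, node.name))
        for item in deferred:
            heapq.heappush(self.queue, item)
        return placements

sched = Scheduler([Node("blade-1", 8), Node("blade-2", 4)])
sched.submit("nightly-batch", priority=2, demand=6)
sched.submit("soa-service", priority=1, demand=4)
print(sched.dispatch())  # → [('soa-service', 'blade-1')]
```

The batch job stays queued until a blade frees up 6 units of capacity – exactly the behaviour a mainframe-era job scheduler would exhibit.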

Strategic Advantage

So if you were planning to win the war for the data center, it seems obvious that you would build a product that has:

  • a high speed, bus-based network with the ability to tag and stream traffic flows by priority (aka tag-switched Ethernet with NetFlow and other QoS)
  • support for a wide variety of densely packed mass storage (because nobody other than fruit flies wants to actually be in the disk drive business)
  • a multi-core blade architecture featuring as much memory as possible (maximizing VM capacity)
  • lots of blades packed into a small physical space with a low environmental footprint (green is not only good for the planet – it’s much cheaper to operate and your components live longer)
  • better-than-minimal margins on the blades, achieved by using proprietary network QoS to assure superior I/O performance at the same unit cost (since everyone has an Intel CPU, you cannot gain strategic advantage from superior CPU performance)
  • a data-center-wide control program consisting of distributed hypervisors running Windows and Linux guest O/Ss (aka VMware, Hyper-V, or Citrix).  This gives you a network-centric solution that interconnects all storage to all blade complexes and all blades within those complexes.  (You want to source your hypervisor from others because it too will become an undifferentiated commodity – that’s why most are already free.)
  • an advanced distributed workload management solution that can handle both SOA services and batch jobs (yes, these still exist in real life and still do real work).  You want to own this because it is where a huge part of the value and differentiation lies.
  • a centralized, agentless manager to administer all these moving parts.  Being agentless, it must rely on standards to actually perform the administration, but the intelligence built into it directly lowers the operational cost of management (i.e. wetware is expensive).  Hence it is key to own this part too.
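The agentless pattern in that last point boils down to a central console that queries each managed host over a standard remote interface (SNMP, WMI, IPMI, SSH) instead of installing resident software on it. A minimal sketch, with a stand-in callable where a real protocol client would go; all names here are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Host:
    name: str
    # In a real deployment this would be an SNMP/WMI/SSH query; here it is
    # an injected callable so the sketch stays self-contained.
    query: Callable[[str], float]

class AgentlessConsole:
    def __init__(self):
        self.hosts: Dict[str, Host] = {}

    def register(self, host: Host):
        # Registration is just bookkeeping - no software pushed onto the host.
        self.hosts[host.name] = host

    def poll(self, metric: str) -> Dict[str, float]:
        """Pull a metric from every host over its standard interface."""
        return {name: h.query(metric) for name, h in self.hosts.items()}

console = AgentlessConsole()
console.register(Host("blade-1", lambda m: {"cpu": 0.42}[m]))
console.register(Host("blade-2", lambda m: {"cpu": 0.77}[m]))
print(console.poll("cpu"))  # → {'blade-1': 0.42, 'blade-2': 0.77}
```

The key property is that all state lives in the console and the standard interface, not in per-VM agents – which is why this approach scales where agent-based management does not.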

With the exception of the last 2 points, we have just described Cisco’s UCS product plan.


Cisco’s Unified Computing System

Virtual Reality

This week Cisco announced an adaptive infrastructure vision that unifies virtual networking, virtual storage, and virtual computing. In doing so, Cisco demonstrated that they understand the essential problem of cloud/grid resource provisioning in a way that should put the computing vendors to shame.

It is early days for an ambitious product suite that spans blade computing, virtual LAN switching, and 10 GigE storage networking. So we shouldn’t be too surprised to see a few holes surface – such as Cisco’s use of BMC for virtual machine management. Cisco stumbled by endorsing a device-centric, agent-based management architecture in what would otherwise be a very network-centric, agentless suite. Nonetheless Cisco needed some kind of management story for version 1, so why not use a tried and true solution from the last century? No doubt we can look forward to a 21st century, agentless solution in version 2.

The other notable hole is the lack of an advanced workload orchestration solution. VMotion is relatively primitive and very immature compared to more established products such as DataSynapse, etc. Fortunately for Cisco, most enterprise IT is far behind the curve in exploiting the opportunity of marrying an adaptive service oriented architecture to an adaptive infrastructure, so they will be unlikely to notice the gap. However, by not moving quickly to round up one or more leading solutions in this space, Cisco is now exposed to a competitive response from IBM or HP that could potentially one-up them.

Then again, that would imply that HP and IBM really understand the opportunity here. IBM has demonstrated that they “get it” – or more accurately, “some of it” – but HP is currently far behind them. Cisco may indeed be onto version 2 before these competitors react.


Grim Outlook for US Banks in 09

Financial Crisis

According to RBC Capital Markets, more than 1,000 US banks may fail over the next 3 – 5 years as commercial loan losses pile up.  This would be on the same level as the great savings & loan collapse back in 1988 – 1990 when 1,386 lending institutions failed.

To put that into perspective: according to the US Federal Deposit Insurance Corp (FDIC), there are 8,309 lending institutions in the USA, and only 25 failed in 2008.  Yet 9 have already failed in just one month of 2009.

The Royal Bank’s recently published Q109 financials also bear witness to the sorry state of US banking.  The Royal’s Provision for Credit Losses (PCL) in US banking soared from $10M in Q107 to $71M in Q108 to $200M in Q109 – 75% of the Royal Bank’s total PCL.

Royal Bank Gross Impaired Loans

The Royal’s US Gross Impaired Loans (GIL), illustrated above, are loans that are highly likely to become credit losses.  They also soared from $0.1B in Q107 to $0.6B in Q108 to a staggering $2.2B in Q109 – 63% of the total GIL.
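As a sanity check on the quoted shares, the implied bank-wide totals can be backed out from the figures above (the inputs are the numbers quoted in this post; the totals are derived, not reported):

```python
# Back out implied bank-wide totals from the US figures and their quoted
# shares of the total.
us_pcl_q109 = 200e6      # US PCL, Q109, in dollars
us_pcl_share = 0.75      # US PCL as a share of total PCL
total_pcl = us_pcl_q109 / us_pcl_share

us_gil_q109 = 2.2e9      # US Gross Impaired Loans, Q109
us_gil_share = 0.63      # US GIL as a share of total GIL
total_gil = us_gil_q109 / us_gil_share

print(round(total_pcl / 1e6))     # ~267 (million dollars total PCL)
print(round(total_gil / 1e9, 1))  # ~3.5 (billion dollars total GIL)
```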

