Browsing the archives for the Virtual Reality category.

Has the Ottawa Citizen Become a Blogspaper?

Economic Reality, Financial Crisis, Political Reality, South March Highlands, Virtual Reality

Today, Jan 12, 2013, there is no news article to be found anywhere on the front page of the Ottawa Citizen’s print edition.  The only article is a columnist’s opinion piece.

The Ottawa Citizen, which has recently been steadily displacing news with opinion on its front page, appears to have taken another step in a transition from being a reputable newspaper to being primarily a compendium of opinion articles – in effect a blogspaper.  Actual reporting of news appears to have become a scarce commodity on the front page, where opinion-based articles written by columnists are increasingly crowding out fact-based news.

The reason for this is probably economic, as more and more people rely on Internet news sources rather than print sources.  I’ve been told by former Citizen reporters that fewer than half the reporters who worked at the Citizen in 2005 remain, due to rounds of budget cutbacks.  Many of the columnists employed by the Citizen are syndicated across more than one newspaper to reduce costs.

The need to protect non-subscription revenue – i.e. advertising – appears to explain why news reporting over the past few years at the Citizen seemed to become skewed, by what appears to be selective editing, in favour of the interests of its largest sources of ad revenue: new home sales, real estate, car sales, and city notices.

Selective editing is invisible to those not intimately familiar with an issue being “reported”.  It wasn’t until I participated in the Coalition to Protect the South March Highlands that I personally realized the extent of news that simply was not being reported in the Citizen.

  • For example, on more than one occasion I or someone else in the Coalition would be interviewed by a reporter, only to see the Coalition’s perspective omitted or under-represented in the subsequent article.
  • Other media (TV, radio) would report our perspective in a more balanced way, but compared to the print space allocated to support a developer’s or the City of Ottawa’s perspective, it appeared that an editorial slant was silently at work.
  • From discussions with spokespeople for other environmental groups in Ottawa, it appears that selective editing is widespread.  One can only wonder if it will naturally lead to selective reporting by reporters who will see the futility in reporting more than will ever be printed.

I also see the same signs of a lack of depth and balance in the reporting of the Idle No More movement, of which I have first-hand knowledge.  For example, prior to running sensational headlines about the audit at Attawapiskat, did the Citizen bother to investigate the other side of the story?

  • How many qualified accountants even exist within a 1,000-mile radius of a tiny, isolated, northern community in which few have any opportunity for post-secondary education?  Attawapiskat has an on-reserve population of fewer than 1,600 people, and 1/3 of them are under the age of 19.  Most of its 1,000 adults are unemployed, living in crowded, substandard housing with no running water.
  • As for education, the state of deteriorating buildings caused the elementary school to be closed in 2000 and replaced by crowded portables which hardly promote a positive educational experience in the average -30 C weather during the school year. The space in those portables is only 50% of the standard that is supposed to be funded by the Federal Government.
  • So is it surprising that record-keeping is not to the standard expected by Certified Public Accountants?  There isn’t even a doctor in Attawapiskat, so why would anyone expect to find a professional accountant in a warm and comfy office diligently recording receipts?  The real story is that the Chief’s husband upgraded his accounting skills in a best effort to improve financial accountability and, according to the audit, this resulted in fewer audit concerns.  Much has been made of the daily rate charged for this service, but has anyone inquired into how many days he billed?
  • More to the point, is there actually any evidence of misappropriation of funds?  Or is it possible that it was more expedient for the Citizen to run a story that required less investigative journalism?

The Federal government, which does not advertise much in the Citizen, appears to be the main target for investigative news, and this provides many with the illusion of continued balanced reporting.  But with fewer reporters on payroll, how long will even this continue?

Today may be remembered as a day of infamy for journalism, as no news content at all was reported on the front page.  Headlines and a columnist’s article do not make much of a newspaper – especially for the advertising-enriched weekend edition.

There once was a time when the Ottawa Citizen won awards for the high quality of its investigative journalism.  Sadly, those days appear to be gone, and so now I personally rely on the Globe and Mail for old-fashioned, real “news”.  Most bloggers like me are not trained journalists.  Some of us, like some of the columnists in the Citizen, try to present facts along with opinion, but our primary service is to share our fair comment on the news – not report the news.

As the Internet inevitably eviscerates the Fourth Estate and replaces it with the Fifth Estate, I for one will miss its professionalism.  Meanwhile I still subscribe to the Citizen because my wife enjoys its extensive funny papers.


Virtual Fixed Assets

Economic Reality, Virtual Reality
According to the US Bureau of Economic Analysis, the real US economy (i.e. the non-public sector) spends just over $1 Trillion / year on non-structural fixed assets.
This number excludes the cost of buildings, warehouses and factories but includes all household, farm, business, and non-profit organization spending on fixed assets.  A precise definition is found here.
Roughly half of that amount ($537 Billion in 2011) is spent on information processing equipment, and slightly over half of that amount ($279 B) is software.
Spending on transportation equipment (trucks, cars, ships) was $232 B and industrial equipment (engines, lathes, robots, …) $178 B. Furniture and other types of equipment (e.g. agricultural, mining, oil rigs, …) was $194 B.
Within the $537 B spent on information processing equipment is spending on computers ($79 B) and network equipment ($77 B).  The 3rd largest sub-category after software is medical equipment at $72 B.
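As a sanity check, the category shares quoted above can be verified with a few lines of arithmetic (the dollar amounts are simply the 2011 figures cited in this post, in billions):

```python
# 2011 BEA fixed-asset figures quoted above, in billions of US$.
info_processing = 537   # information processing equipment
software = 279
computers = 79
network = 77
medical = 72
transportation = 232
industrial = 178
furniture_other = 194

# Software is slightly over half of information processing spending...
assert software / info_processing > 0.5

# ...and the largest single sub-category of non-structural fixed assets.
assert software > max(computers, network, medical,
                      transportation, industrial, furniture_other)
```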
So the largest single spending area for fixed assets is for software which is a virtual asset! Henry Ford must be spinning in his grave!

Dark Clouds

Climate Change, Virtual Reality

Cloud Computing

Cloud Computing helps companies (and individuals) off-load their computing needs onto a network-based facility.

At a personal level, the advent of mobile Internet devices such as

  • 3G/4G Broadband Roaming cards for laptops,
  • iPhone/iTouch,
  • Kindle / eBook Readers,
  • BlackBerrys /Smartphones,
  • etc.

has fueled a need for network-based applications, storage, backup, social networking, and a variety of other services.  These needs are typically met by a data center somewhere “off in the cloud” that is managed by someone else.

Similarly at a corporate level, clouds have enabled Software-as-a-Service (SaaS), Infrastructure-as-a-Service (IaaS), and Platform-as-a-Service (PaaS) opportunities that basically outsource management of the IT infrastructure to the Cloud provider on a pay-per-use basis.

In both cases, the cloud user’s carbon footprint is reduced since less infrastructure is needed on-site.

Or is it?

Coal & Steam

Cloud computing simply transfers the burden of IT service delivery onto the cloud service provider.  Not surprisingly most of these providers are currently in the USA – with data centers in the USA.

Sadly most regions in the USA depend on dirty carbon-fired generating stations (oil, gas, coal) to provide electrical power, so it comes as no surprise that power-hungry data centers are dependent on greenhouse gas (GHG) emitting coal-burning power plants.

The most popular form of “clean” energy generation in the USA is to use nuclear power to heat water to drive steam turbines.

Although the power-generation part of the nuclear power story is arguably clean, there is still that pesky detail of how to dispose of the radioactive waste that results from the process.  Since that problem has not been solved and is literally “buried”, nuclear power is actually dirty.

Isn’t it strange that in the 21st century, IT is largely dependent on coal & steam?

Greenpeace Study

Greenpeace recently did a survey of some of the largest and better-known Internet sites to raise an alarm about the dirty side of cloud computing:

  • Apple’s largest data center is in Lenoir, NC (500,000 Sq Ft) with a dependence on 96% dirty power.  Apple is building an even larger facility nearby that will have the same dependency on dirty power.
  • Yahoo’s 190,000 Sq Ft data center in Lockport, NY is 72% dependent on dirty power, while its largest facility in La Vista, NE (350,000 Sq Ft) is 93% dependent on dirty power.  Yahoo’s dirty power index is 86%, based on the weighted average of these two facilities.
  • Google’s two largest data centers are in Lenoir, NC (476,000 Sq Ft) and The Dalles, OR (206,000 Sq Ft).  The Lenoir facility depends on 96% dirty power and The Dalles facility on 49% dirty power.  The weighted dirty power consumption index for Google is 82%.
  • Microsoft’s 700,000 Sq Ft data center in Chicago is 99% dependent on dirty power, while its 470,000 Sq Ft data center in San Antonio, TX is 89% dependent on dirty power.  Microsoft’s 470,000 Sq Ft Quincy, WA data center is 100% clean-energy powered (hydro).  The weighted average dirty power consumption index for Microsoft is 68%.
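The per-company indices above are floor-space-weighted averages of the per-site dirty power percentages. A short sketch reproduces them from the figures quoted in the Greenpeace summary:

```python
# Weighted dirty-power index: per-site dirty-power percentages,
# weighted by each data center's floor space in square feet.
def dirty_index(sites):
    total_sqft = sum(sqft for sqft, _ in sites)
    return sum(sqft * pct for sqft, pct in sites) / total_sqft

yahoo = dirty_index([(190_000, 72), (350_000, 93)])
google = dirty_index([(476_000, 96), (206_000, 49)])
microsoft = dirty_index([(700_000, 99), (470_000, 89), (470_000, 0)])

print(round(yahoo), round(google), round(microsoft))  # 86 82 68
```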

Silver Lining

What Greenpeace doesn’t tell you is that these industry giants are all trying to reduce their GHG emissions.

Although Apple has been visibly reducing the carbon footprint of its products and has taken the high ground in responsibly accounting for its total product life cycle impact, it appears that its IT department has not yet focused on this problem.

Overall, Apple reports that its facilities, including data centers, account for 3% of its total life cycle GHG emissions.  In other words, Apple has a massive reduction challenge to solve in the manufacturing and use of its power-hungry products before it shifts its focus to internal IT impact.

Apple’s focus on lifecycle impact will have a larger collateral benefit on reducing GHG emissions globally.

Yahoo is building a large facility in Buffalo, NY that is expected to be hydro-powered, so we can expect that its dirty power footprint will fall somewhat in the near future.

Google is moving aggressively by limiting its power waste and investing heavily in renewable energy sources through its RE&lt;C initiative.

Meanwhile, Microsoft appears to have the leadership position with 25% of its total energy consumption coming from renewable sources.

All of this leaves considerable room for improvement and Greenpeace is rightfully keeping the heat on cloud computing.


Strip-Searching The Charter of Rights

Civil Rights, Virtual Reality

Airport Strip-Searches
One of the goals of this blog is to comment on the duality between our actual and virtual realities. Most of the time our collective virtual society mirrors our real-world beliefs and values.

On today’s Internet we find the full range of human behaviour (including virtualized dating, sex, marriage, and funerals) mirrored from our real world and for the most part our response to it is the same as what we wished we could do in the real world. It is cause for alarm, however, whenever our virtual response differs from our real-world response.

Would you comply if you were asked, prior to boarding an aircraft, to step into a room and remove all your clothes so that a security officer could visually confirm that you had nothing under your clothes but your naked body?

Yet that is exactly what happens during a 1 mm virtual scan of your body.  Having the security officer in a different room is basically the same as using closed-circuit TV to visually inspect your nakedness. The level of detail in the virtual scan is about as good as your eyesight, comparable to an air-brushed image that removes pimples and other blemishes smaller than 1 mm.

Charter of Rights
The Canadian Charter of Rights and Freedoms clearly states, in section 8, that “Everyone has the right to be secure against unreasonable search or seizure.”

The definition of “unreasonable” under traditional legal interpretation means that it is unreasonable for you to be searched without probable cause. Under the Charter of Rights, a police officer who had reasonable cause to suspect that you were going to blow up an airplane would be justified to search you via a pat-down or, after arresting you, via a strip search.

The Charter protects us from being searched without any reason to do so. Boarding an aircraft is not a valid reason, since virtually all travellers have no intention of blowing up the aircraft.

In fact, given that attempts to blow up an airplane occur less than once a year, all travellers are innocent virtually all of the time. This is hardly probable cause for strip-searching all passengers.

Privacy Commissioner
Why should airport security be given more latitude under the law than a police officer?

Jennifer Stoddart, the Privacy Commissioner of Canada, believes that the ends justify the means. In a recent letter to the Ottawa Citizen, also posted to her website, she outlined the 4-point test that she applied to this question.

The 4-point test applied by Ms. Stoddart starts with (1) “Is the measure necessary to address a specific risk?”  In other words, are the means necessary to achieve the ends?

If so, the ends justify the means as long as (2) they work, (3) the loss of privacy is proportional to the identified need (i.e. the loss of privacy caused by the means is proportionate to the ends to be achieved), and (4) there is no less privacy-invasive way of achieving the same end. It is all about the ends justifying the means.

Perhaps the reason why the Privacy Commissioner of Canada does not defend our privacy rights under the Charter of Rights and Freedoms is because she has no mandate to do so.

According to the Privacy Act that defines her office and duties, the Privacy Commissioner is limited to reviewing situations pertaining only to the privacy of information about an individual, not the individual’s inalienable rights and freedoms. The letter on her website confirms that “… it is neither our duty nor expertise to assess the aviation threat and risk assessments…”

In other words her office has no business making a decision on CATSA’s request to strip search Canadians – whether it is done virtually or otherwise.

Just because the Privacy Commissioner says it’s OK doesn’t change the fact that full body scanning and pat-downs without probable cause are a violation of our Charter rights.


Lien on Nortel Patents

Political Reality, Virtual Reality

Just as a plumber can place a construction lien on a house when they are not fully paid for their labour, Nortel’s current and past employees should have the right to place a lien on the intellectual property they created but are not being fully paid for via pension and severance obligations.

Under common law, a lien is a form of security interest granted over an item of property to secure the payment of a debt or performance of some other obligation.

At the time of employment, Nortel created a contractual obligation to pay its employees salary and pension in return for the transfer of ownership of all intellectual property, including patents, created by those employees.

Nortel, in bankruptcy, has breached that contract and Nortel’s past and current employees should be entitled to collectively place an equitable lien on the patent portfolio to assure payment.

If our current federal and provincial governments were not asleep at the switch concerning Nortel’s demise, they would be enshrining this protective right for white collar workers into statute in the same way that the Construction Lien Act protects blue collar workers.

Instead our elected representatives have their heads stuck in dark places while US and European jurisdictions pick apart all of Nortel’s assets to protect their native workers.


The Internet’s Y2K Crisis

Virtual Reality

IPv4 Exhaustion

The Internet as we know it is forecast to end in 2012, when the last of the IPv4 addresses are handed out.  The Internet Assigned Numbers Authority (IANA), which is responsible for managing IP address assignments, has been concerned about the rate at which IPv4 addresses are being consumed ever since the world started using email in the 1980s.

Since an Internet address is currently just a 32-bit number, a maximum of 4,294,967,296 addresses can be used before the addresses simply run out.  In reality, however, the total number of addresses available for Internet use is much lower due to the practice of handing out address ranges.  Originally, Internet addresses were class-based (A, B, C) with differing sizes of ranges for each class (24, 16, or 8-bit sized chunks), meaning that the actual number of usable addresses was considerably less than the theoretical 2^32.
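The arithmetic is easy to check. A quick sketch of the 32-bit address space and the old classful chunk sizes:

```python
# Total 32-bit IPv4 address space.
total_ipv4 = 2 ** 32
assert total_ipv4 == 4_294_967_296

# Classful allocation handed out fixed-size blocks (host-bit chunks):
class_a = 2 ** 24   # 16,777,216 addresses per Class A network
class_b = 2 ** 16   # 65,536 addresses per Class B network
class_c = 2 ** 8    # 256 addresses per Class C network
```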

The introduction of NAT (Network Address Translation) and CIDR (Classless Inter-Domain Routing) bought another 10 years of address life (which has since been consumed by the growth of the Web), and recently IANA and the global Regional Internet Registries (RIRs) have aggressively pursued a policy of address reclamation and re-use to further extend the life of IPv4. As a result, the most accurate projection of IPv4 address extinction is shown below.

 IPv4 Address Extinction In Regional Registries

When this date is reached, the Internet will no longer be able to grow.
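The NAT and CIDR mechanisms described above can be illustrated with Python’s standard ipaddress module; the ranges below are the RFC 5737 documentation range and RFC 1918 private space, used purely for illustration:

```python
import ipaddress

# CIDR: registries can hand out arbitrary power-of-two block sizes
# instead of fixed Class A/B/C chunks. Here, a /24 of 256 addresses.
block = ipaddress.ip_network("198.51.100.0/24")
assert block.num_addresses == 256

# NAT: an entire site can hide behind a single public address,
# using private (RFC 1918) space internally.
private = ipaddress.ip_network("10.0.0.0/8")
assert ipaddress.ip_address("10.1.2.3") in private
```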


The technical solution to this problem has been around for years in the form of IPv6 which uses a 128-bit address space. The size of this number is difficult to comprehend. For example, there are 80,000,000,000,000,000,000,000,000,000 IPv6 addresses for every single IPv4 address.

Or, looking at it another way you could assign 70,000,000,000,000,000,000,000 IPv6 addresses to every star in the known universe. Clearly address exhaustion with IPv6 is not a problem.
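The magnitudes quoted here follow directly from the address widths:

```python
# IPv6 addresses are 128 bits wide; IPv4 addresses are 32 bits.
ipv6_space = 2 ** 128
ipv4_space = 2 ** 32

# IPv6 addresses per IPv4 address: 2**96, roughly 8 x 10**28 --
# the ~80,000,000,000,000,000,000,000,000,000 figure quoted above.
ratio = ipv6_space // ipv4_space
assert ratio == 2 ** 96
assert 7.9e28 < ratio < 8.0e28
```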

However, for a variety of technical reasons, the interoperability of IPv4 and IPv6 is complex and in some cases very difficult. And despite almost two decades of research and experimentation, the best minds in the world have failed to figure out a seamless transition path. At best, it seems that every system connected to an Internet backbone or access network will have to switch over to IPv6 at some point and, since this is infeasible to do overnight (it will more likely take many years), this requires simultaneous mapping of IPv4 addresses that will still remain in use behind firewalls.

Some solutions require the equivalent of having two addresses for everything (also infeasible).  For example, your favourite website would need a dual address to serve both IPv6-equipped and IPv4-equipped clients.  Imagine the overhead of assigning a second address to the 50 M+ Internet-accessible websites!  The DNS (Domain Name System) is of no help here because compatible mechanisms are not widely implemented in the thousands of currently deployed DNS servers.
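One of the mapping mechanisms alluded to above embeds an IPv4 address directly inside the IPv6 address space, which Python’s standard ipaddress module can demonstrate:

```python
import ipaddress

# An IPv4-mapped IPv6 address (::ffff:a.b.c.d) lets a dual-stack host
# represent an IPv4-only peer within the IPv6 address space.
mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")
assert mapped.ipv4_mapped == ipaddress.IPv4Address("192.0.2.1")
```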

Local networks will also have to be upgraded to IPv6 to assure a smoother transition.  That means that the cut-over to IPv6 will require a massive network investment over a small period of time (1 – 3 years).

Sounds like Y2K all over again doesn’t it?

Canada’s Lack of Vision

Many countries (China, Japan, India, USA, France, Germany, UK, Norway, Netherlands, Russia, Ukraine, Australia) already have public IPv6 inter-networks in-service on either an experimental or production basis and are gearing up to interconnect them into a new IPv6 Internet.

Canada is notably absent in preparing for IPv6.  There is no official government policy on IPv6 adoption.  Worse yet, Canada’s leading communication research group, CANARIE (Canadian Network for the Advancement of Research, Industry and Education), is actually dragging its feet on pursuing IPv6.  CANARIE is a nonprofit corporation funded by IT and telecom vendors, research organizations, and the federal government.  The void created by the deadly combination of a lack of public science policy and the collapse of Nortel is telling.

Our official national stance is to wait and see what the USA does!  This seems to be our national policy on just about everything these days.

In view of this, the recent proposal by a former Nortel executive to urge the Harper government to actually do something by investing in creating digital infrastructure jobs takes on new meaning. 

Why not invest the billions that we are about to otherwise spend on so-called “shovel-ready” projects to actually prepare Canada for the new Internet reality?


Winning the Land War in the Data Center

Virtual Reality

Modern Data Centers

The data center of the future is easy to visualize if you think back 25 – 35 years to the mainframe era.  The mainframes of old consisted of multiple processor complexes with local memory interconnected via multiple I/O channels to high speed (for then) storage and communications gateways.  They were managed by a centralized console and executed one or more control programs on each node (processor, I/O controller, etc.).

Modern data centers are rapidly evolving towards this architecture – probably because it works best.

  • Storage is specialized into network-attached mass storage solutions accessed via a high speed I/O channel (GigE, iSCSI, FC, etc.)
  • I/O channels are increasingly looking like high speed switched networks with tag-switched Gigabit Ethernet as the inevitable winner.
  • Servers are specializing into virtual machine hypervisors, with gobs of memory and multiple CPU cores per blade as the most cost-effective specialization.
  • Since CPUs are so fast, on-blade memory is the only viable answer.  Maximizing memory on each blade is also the most cost-effective way of provisioning general purpose hypervisors.
  • Control programs are tucked into VMs loaded by a hypervisor – just like the old IBM VM mainframe O/S that could run multiple operating systems, including MVS (now known as MS-Windows) and Unix (now known as Linux).  VMs are necessary because most software development predates the Pentium IV and its hyper-threading capabilities (i.e. you can only go so far with multitasking & threading before it is just simpler to run a dedicated O/S per application and not have to worry about concurrency).
  • Workload management is a necessary new thing for maximizing the value of distributed servers, but in the old days it was simply an amazingly rich job scheduler.  Enter SOA and workload orchestration solutions with uncanny scheduling similarities to those old job schedulers.
  • Systems management must be centralized to a console, but the tired old agent-based server management solutions of the 20th century are not going to scale – principally because they are locked into the very VMs that they need to manage.  Enter an emerging class of agentless management solutions.

Strategic Advantage

So if you were planning to win the war for the data center it seems obvious that you would have a product that has:

  • a high speed bus-based network with an ability to tag and stream traffic flows by priority (aka tag-switched Ethernet with Netflow and other QoS)
  • the ability to support a wide variety of densely packed mass storage (because nobody other than fruit flies wants to actually be in the disk drive business)
  • a multi-core blade architecture featuring as much memory as possible (maximizing VM capacity)
  • the ability to pack lots of blades into a small physical space with an environmentally low footprint (green is not only good for the planet – it’s much cheaper to operate and your components live longer)
  • a way to avoid razor-thin margins on the blades by using proprietary network QoS to assure superior I/O performance for the same unit cost (since everyone has an Intel CPU, you cannot gain strategic advantage from superior CPU performance)
  • support for a data-center-wide control program consisting of distributed hypervisors running Windows and Linux guest O/S (aka VMware, Hyper-V, or Citrix).  This allows you to have a data-center-wide network-centric solution that interconnects all storage to all blade complexes and all blades within those complexes. (You want to source your hypervisor from others because it too will become an undifferentiated commodity – that’s why most are already free.)
  • an advanced distributed workload management solution that can deal with both SOA services and batch jobs (yes, these still exist in real life and still do real work).  You want to own this because this is where a huge part of the value and differentiation is.
  • a centralized agentless manager to help you administer all these moving parts.  Being agentless, it must rely on standards to actually perform the administration, but the level of intelligence built into it will directly lower the operational cost of management (i.e. wetware is expensive).  Hence it is key to own this part too.

With the exception of the last 2 points, we have just described Cisco’s UCS product plan.


Cisco’s Unified Computing System

Virtual Reality

This week Cisco announced an adaptive infrastructure vision that unifies virtual networking, virtual storage, and virtual computing. In doing so, Cisco demonstrated that they understand the essential problem of cloud/grid resource provisioning in a way that should put the computing vendors to shame.

It is early days for an ambitious product suite that spans blade computing, virtual LAN switching, and 10 GigE storage networking. So we shouldn’t be too surprised to see a few holes surface – such as Cisco’s use of BMC for virtual machine management. Cisco stumbled by endorsing a device-centric, agent-based management architecture in what would otherwise be a very network-centric, agentless suite. Nonetheless Cisco needed some kind of management story for version 1, so why not use a tried and true solution from the last century? No doubt we can look forward to a 21st century, agentless solution in version 2.

The other notable hole is the lack of an advanced workload orchestration solution. VMotion is relatively primitive and very immature compared to more established products such as DataSynapse. Fortunately for Cisco, most enterprise IT is far behind the curve in exploiting the opportunity of marrying an adaptive service-oriented architecture to an adaptive infrastructure, so they will be unlikely to notice the gap. However, by not moving quickly to round up one or more leading solutions in this space, Cisco is now exposed to a competitive response from IBM or HP that could potentially one-up them.

Then again that would imply that HP and IBM really understand the opportunity here. IBM has demonstrated that they “get it”, or more accurately “some of it”, but HP is currently far behind them. Cisco may indeed be onto version 2 before these competitors react.


The day the music on hold died

Canadian Politics, Economic Reality, Virtual Reality

Helping Nortel

Today Nortel became another casualty of the deepening financial crisis by filing for creditor protection.  Amazingly the Canadian federal government, fresh from extending billions of dollars of credit to the auto industry of the past, managed to scrounge up all of $30 M in credit financing for the digital industry.

What a joke.  $250 Million for GM vs $30 M for Nortel.  GM, with all of 19,000 employees in Canada, is smaller than today’s Nortel, which weighs in at 26,000 employees (mostly in Canada) – let alone the Nortel of yesteryear that once employed 95,000, with over 20,000 in Ottawa alone.

Perhaps the fact that our federal finance minister is the member of parliament representing the GM employees in Oshawa has something to do with the smell of conflict of interest in this.

Meanwhile, McGuinty’s Ontario government is actually bragging about how they turned down Nortel’s application for financing under the NGOF pork barrel.  Yet McGuinty can easily find $8 M to create 133 jobs at some outfit called Cyclone Manufacturing – is this a way to ensure that Ontario is a global leader in anything?

When Nortel, one of the largest and oldest companies in Canada, is in trouble, our politicians don’t give a shite.  As recently as 2001, Nortel alone was 1/3 of the entire value of the TSX.  If job creation were actually important to our provincial government, a reasonable person might expect them to consider helping companies that have actually proven that they can employ Canadians in high tax-paying jobs.

Nortel’s Legacy

The impact of Nortel on the global economy across the company’s 115-year history is impossible to measure.

Every time you pick up a touch tone phone, use digital communications of any kind, experience broadband Internet access enabled by optical technology, or DSL, or high speed wireless – you are using technology invented by Nortel.

Every time you access your bank or brokerage account online, or use your mobile phone, you are riding on one or more protocols designed by Nortel. 

The first corporate email system in the world was built by Bell Northern Research.  So was the first use of digital packet communications, high-speed fibre optic rings, etc.  These are the very foundations of the Internet.

Nortel’s impact on the tech sector extends far beyond communications.  Engineers at Bell Northern Research contributed enabling technology to the electronic design community, distributed computing, advanced man-machine interfaces such as speech recognition, visualization graphics, digital signal processing, etc.

Nortel’s patent portfolio extends across Wireline, Wireless, Datacom, Enterprise and Optical technologies and services.  As of December 31, 2007, Nortel had approximately 3,650 US patents and approximately 1,650 patents in other countries. In fact Nortel has consistently ranked in the top 70 in terms of number of granted U.S. patents since 1998. 

Nortel has received patents covering standards-essential, standards-related and other fundamental and core solutions, including patents directed to CDMA, UMTS, 3GPP, 3GPP2, GSM, OFDM/MIMO, LTE, ATM, MPLS, GMPLS, Ethernet, IEEE 802.3, NAT, VoIP, SONET, RPR, GFP, DOCSIS, IMS, Call-Waiting Caller ID and many other areas.  The term “standards-essential” means that the standard cannot be implemented without the patented technology contributed by Nortel’s engineers.

My own career at Nortel was relatively brief, but in the less than 10 years that I was there, I personally witnessed meetings where Nortel’s engineers educated IBM, HP, Intel, Cadence, Mentor Graphics, Microsoft, and a hundred other companies on advanced technology.  The spin-off impact of those meetings alone on the tech industry was incalculable.  Intel actually modified silicon designs, HP introduced new products, and Cadence & Mentor acquired new technology to rev up their revenues.  These were non-patent-related discussions.

Nortel was the largest spender on R&D in Canada, through both direct investment in its own labs and leveraged investment in university interaction.  Literally thousands of doctoral degrees in Canada were made possible through collaborative research with Nortel over the years.  Even the scaled-back Nortel of today spends more than 1/3 of its salaries on R&D jobs for Canadians.

Yet McGuinty is proud of denying Nortel’s call of distress?  Shame on him.

Broken Backs

We get what we vote for.  Our politicians, both federally and provincially, have demonstrated that they would rather prop up the resource-sucking industries of the past than enable a modern Canadian economy of the future.

The fact that the digital economy can create more numerous, more interesting, and higher paying jobs for Canadians compared to the back-breaking and mind-numbing jobs of the resource and manufacturing sectors is completely lost on our politicians. 

Perhaps it is because we elect lawyers and not engineers to parliament?

Is the real problem with Canadian voters who sleepwalk their way to the polls if they bother to vote at all? Do Canadian parents not care about the quality of jobs that will be available for their children? 

Why do we tolerate this ineptitude from our politicians?

Yes, Nortel’s management sowed the seeds of its own destruction.  John Roth in particular is to blame, as is his successor Frank Dunn, who is now facing charges for misleading shareholders and gross stupidity.

Nonetheless, allowing Nortel to die is the wrong policy decision for both the Canadian economy and the high technology sector of Canadian industry.  Write your MPP and MP and give them a shake!


Using Clouds for Real Work

Virtual Reality

Primitive Resources

The sad reality of cloud computing is that the concept of resource virtualization remains very primitive.  Almost everyone on the cloud computing bandwagon seems to be fixated on the merits of aggregating the supply of primitive resources (memory, byte buckets, CPU).

Amazon’s Elastic Compute Cloud is a case in point that is vaguely reminiscent of Beowulf clustering on steroids without all the scheduling features that never really worked well in Beowulf anyway.

In a step backwards to use these primitive resources, the early cloud computing groupthink revolves around rewriting the I/O layer so you can keep your data in virtual buckets.   It’s astonishing that people actually believe that the Simple Storage Service is an innovation just because Amazon has made byte buckets available on demand.

Or that SimpleDB is a breakthrough because they slapped an ISAM-style index on buckets and added a non-standard SQL-like interpreter (with little query optimization) to make it easier to use.
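To make the “byte buckets plus a crude index” point concrete, here is a toy in-memory sketch.  Every name in it is invented for illustration; it bears no relation to Amazon’s actual API or implementation, but it captures roughly the level of abstraction being sold.

```python
# Toy sketch of a "byte bucket" store with a crude prefix index.
# All names are hypothetical -- this is NOT Amazon's API.

class ByteBucketStore:
    """Stores opaque byte blobs under string keys, like a primitive S3."""

    def __init__(self):
        self._buckets = {}  # bucket name -> {key: bytes}

    def create_bucket(self, name):
        self._buckets.setdefault(name, {})

    def put(self, bucket, key, data):
        self._buckets[bucket][key] = bytes(data)

    def get(self, bucket, key):
        return self._buckets[bucket][key]

    def query_prefix(self, bucket, prefix):
        # A crude "index": linear scan over keys, roughly the level of
        # query sophistication the post attributes to SimpleDB.
        return sorted(k for k in self._buckets[bucket] if k.startswith(prefix))


store = ByteBucketStore()
store.create_bucket("photos")
store.put("photos", "2008/holiday.jpg", b"\xff\xd8")
store.put("photos", "2008/office.jpg", b"\xff\xd8")
print(store.query_prefix("photos", "2008/"))
# ['2008/holiday.jpg', '2008/office.jpg']
```

Twenty lines of Python circa any decade; the only genuine novelty is that the buckets are rented by the gigabyte.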

Or that everyone seems to think that Cloudfront is going to redefine content distribution networks because it offers content distribution without cache coherency.

Yawn.  Welcome back to the early 1980s.


Clearly the industry can do better and Microsoft’s recent Azure announcement certainly raises the bar in a way that might wake up some other interesting players. 

For example, Oracle appears to have been on the Cloud Computing (CC) sidelines because of their belief that CC is fashionable gibberish.  Looking at Amazon and others it is easy to understand this point of view.  However Azure changes the playing field substantially.

In true MSFT fashion, Azure consists of many things – a clear sign that MSFT is serious about winning market share in the cloud.  Some of the more interesting elements include:

  • the use of SQL Server databases (yes, Virginia, a real database with a real query optimizer and a real I/O optimizer),
  • redirection of SOAP and REST interprocess communication via or into the cloud,
  • a general purpose distributed cache manager (with cache coherence strategies built-in) and
  • deep integration with a development toolset that in fact is already used by over 100,000 developers.

While Amazon relies on revolution and using EC2 necessitates a re-write of the I/O layer in most applications, Microsoft is promising smooth transition and evolution of many existing applications into the cloud. 

It’s not hard to imagine which will be more appealing to all those IT departments with more work than people!  If MSFT can actually make Azure work as advertised sooner rather than later, Amazon will rapidly resemble the former Netscape (anyone remember them?).

However the battle is far from over.

Higher Level Resources

Although there are indeed benefits from aggregating primitive resources such as CPU and storage, the economics of this are entirely based on economies of scale.  Cloud computing solutions that offer undifferentiated access to commodity resources will compete solely on cost – not on value.

Small wonder that the early players are those with excess Internet-accessible CPU and storage.  Amazon has already invested in large computer farms to run its online retail business, so why not sell its excess capacity at marginal cost via EC2?

Doing real work on the cloud, however, requires using higher level resources such as websites, databases, and other middleware.  Social networking sites such as Facebook and various blogsites already offer higher level resources (such as websites and media storage management) on a cloud basis.

Offering the necessary APIs and tools to provide access to higher level resources, so that custom applications can be migrated to run on the cloud, is a natural step forward – as demonstrated by Azure’s easy support for migrating entire SQL Server databases into the cloud and by Oracle’s support for live migration into Oracle Virtual Server.

Although there is a greater variety of higher level resources, ultimately this approach is still a supply-side resource aggregation play where the service is sold at its marginal cost.  Small wonder that Oracle offers its enabling virtual server for free.

Dynamic Resources

Inevitably Oracle (or another established player) will enter the game with a solution as competitive as the one MSFT is promising. For example, Citrix is already offering a cloud product suite that is far more compelling than Amazon’s.

The only difference right now is that Oracle and Citrix are deciding to offer cloud-enabling software products and not cloud services, while Microsoft and Amazon are offering their cloud-enabled products to users of their proprietary cloud service.

The real end game will be fought over how well these competing cloud solutions can dynamically match the massive supply of resources to real-time workload demand.

From prior experience in creating and working with massively scalable grid computing products (such as Platform Computing, DataSynapse, Globus, etc.), dynamic resource management is far from trivial.  It involves managing both the demand-side as well as the supply-side of the virtualized environment. 

On the demand-side it requires rich scheduling of millions of requests for complex resources in seconds.  Workload is specified based on the resources required to complete it (e.g. this version of O/S, that version of database, etc.) in the form of Jobs or Transactions (depending on the type of workload).  Scheduling is based on matching resource requirements to resource availability in real time.
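As a rough illustration of that matchmaking step, here is a minimal greedy scheduler.  The Job and Machine structures and the attribute names are invented for this sketch; a real grid scheduler also handles priorities, preemption, and millions of requests per second, none of which appear here.

```python
# Minimal sketch of demand-side matchmaking: jobs declare required
# resource attributes, and the scheduler matches them to idle machines.
# All names here are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    attrs: dict          # e.g. {"os": "linux-2.6", "db": "oracle-10g"}
    busy: bool = False

@dataclass
class Job:
    name: str
    requires: dict       # required resource attributes

def schedule(jobs, machines):
    """Greedy real-time matching of resource requirements to availability."""
    placements = {}
    for job in jobs:
        for m in machines:
            if not m.busy and all(m.attrs.get(k) == v
                                  for k, v in job.requires.items()):
                m.busy = True
                placements[job.name] = m.name
                break
    return placements

machines = [Machine("m1", {"os": "linux-2.6", "db": "oracle-10g"}),
            Machine("m2", {"os": "win2003", "db": "sqlserver-2005"})]
jobs = [Job("report", {"db": "sqlserver-2005"}),
        Job("batch", {"os": "linux-2.6"})]
placements = schedule(jobs, machines)
print(placements)  # {'report': 'm2', 'batch': 'm1'}
```

The hard part that this sketch omits is doing the matching at scale and in real time, which is exactly where products like Platform Computing and DataSynapse earned their keep.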

On the supply-side, virtualization is used to segregate resource consumers, and it requires the dynamic re-provisioning of supply in response to shifts in workload demand for different resource types.  For example, if the workload needs more Apache and less IIS at any instant in time, the virtual pool of resources is re-provisioned accordingly by loading the necessary virtual machines on the fly.
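A hedged sketch of that re-provisioning step, using the Apache/IIS example: given a demand target per image type, flip surplus hosts over to the undersupplied image.  The function and data shapes are invented for illustration; a real system swaps virtual machine images, not strings, and has to drain live traffic first.

```python
# Toy supply-side re-provisioning: pool maps host -> current image,
# demand maps image -> number of hosts wanted.  Hypothetical names only.

from collections import Counter

def reprovision(pool, demand):
    """Flip surplus hosts to undersupplied images; return the new pool."""
    supply = Counter(pool.values())
    # List each image once per host it is short by.
    deficits = [img for img, want in demand.items()
                for _ in range(max(0, want - supply.get(img, 0)))]
    for host, img in pool.items():
        if not deficits:
            break
        if supply[img] > demand.get(img, 0):   # surplus of this image
            supply[img] -= 1
            new_img = deficits.pop()
            pool[host] = new_img               # "load" the new VM image
            supply[new_img] += 1
    return pool

pool = {"h1": "iis", "h2": "iis", "h3": "apache"}
print(reprovision(pool, {"apache": 2, "iis": 1}))
# {'h1': 'apache', 'h2': 'iis', 'h3': 'apache'}
```

Even in this toy form the coupling is visible: the supply-side loop is only useful if the demand-side scheduler is feeding it accurate, up-to-the-second requirements.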

By offering dynamic distributed resource management, cloud computing will move from supply-side cost economics to demand-side value economics.


Federated Identity Management

Virtual Reality

One of the last barriers to usability on the web has been the insane proliferation of accounts and passwords.  Virtually every website has some kind of account and password scheme, which by itself is not so bad, but in aggregate this results in an explosion of identities and passwords to keep track of.

Most users try to contain the problem by using the same account ID and credentials on as many sites as possible, but differing password rules and account-name structures often make this impossible.  More recently, the convention of using an email address as the account name (an old trick borrowed from FTP) has helped, but it is not universally available.

Federated identity protocols such as OpenID have been around for some time but are only recently gaining critical mass.  Most recently both Google and Yahoo have conducted their own usability research and discovered the obvious – federated identity is sorely needed.

More importantly both of these large Internet properties have agreed to support OpenID.  This should result in enough critical mass to finally drive an industry standard.  Increasingly, leading applications will start to adopt this emerging standard and the Web will be a better place for it.
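The core idea behind federated identity can be shown in a few lines.  This is emphatically not the OpenID wire protocol (which involves discovery, redirects, and association handshakes); it is only a toy illustration of the central concept, with an invented shared key: the relying party never sees a password, it just verifies a signed assertion from the identity provider.

```python
# Toy illustration of federated identity: a relying party verifies a
# signed assertion from an identity provider instead of checking a
# password itself.  NOT the OpenID protocol; names are hypothetical.

import hashlib
import hmac

SHARED_SECRET = b"established-out-of-band"   # hypothetical association key

def provider_issue_assertion(user_id):
    """Identity provider signs a claim that user_id has authenticated."""
    sig = hmac.new(SHARED_SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return {"user": user_id, "sig": sig}

def relying_party_verify(assertion):
    """Any federated site can verify the claim without ever handling a password."""
    expected = hmac.new(SHARED_SECRET, assertion["user"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion["sig"])

token = provider_issue_assertion("alice@example.com")
print(relying_party_verify(token))  # True
```

One authentication at the provider, and every participating site can trust the result – which is precisely why critical mass from players like Google and Yahoo matters so much.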

