Rather than basing this post on the buzzwords we’ve been hearing about lately, like Grid and Cloud computing, I’m going to attempt to explain the concepts behind this new evolution of computing: how we’re moving away from dependence on a single server and toward an infrastructure so robust that capabilities like High Availability and Disaster Recovery will be as built into the solution as the BIOS is to our PCs.

At first we had the mainframe, a central place to do all our computing. We then realised we could put the CPU inside a much smaller computer and place those machines on desks and in server rooms. Next we discovered we could alleviate problems and increase performance by grouping servers together in clusters, which gave us redundancy and some primitive load-management technology. Now, for the first time, we are learning to transcend the traditional ‘central’ processing concept and move to distributed load: servers spread across wide-area grids, so to speak, where all the hardware makes up one entity and no single server claims ownership of any data. This technology will pool servers together and put the resources they have to offer to work. In the future, grid units will appear, provisioning core resources spread across multiple physical locations, all offering up their CPU, RAM and I/O.

Datacenter in a box

Let’s take the internet, for example. Using routing technology, we’re able to reach a destination address by jumping across computers all over the world. Let’s say it took 16 hops to reach this website from your location: your browser found its way here through a series of jumps across routers, each holding information on where to go next to reach the destination. Even the most basic of our routing technology will bypass jump points that are experiencing problems, re-routing your packets as necessary to ensure safe delivery. Distributed computing delivers all your requests on the first hop; there is no need to jump through 16 routers, because the first hop you reach is a wide-area distributed grid of servers that all have access to the same data. Sound far-fetched? Let me give you an example.
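First, though, to make the hop-by-hop routing idea concrete, here’s a toy sketch in Python. Everything in it, the router names, the topology, the routes, is made up for illustration; it isn’t a real routing protocol, just the concept of each router knowing only its next hop and routing around a failed neighbour.

```python
# Toy hop-by-hop routing over a hypothetical four-router topology.
# ROUTES[router] = (primary_next_hop, backup_next_hop) toward "D".
ROUTES = {
    "A": ("B", "C"),
    "B": ("D", "C"),
    "C": ("D", None),
}

def deliver(start, dest, down=frozenset()):
    """Forward a packet one hop at a time, routing around failed routers."""
    hops = [start]
    node = start
    while node != dest:
        primary, backup = ROUTES[node]
        # The resilience described above: bypass a troubled jump point.
        next_hop = backup if primary in down else primary
        if next_hop is None or next_hop in down:
            raise RuntimeError(f"no route to {dest} from {node}")
        hops.append(next_hop)
        node = next_hop
    return hops

print(deliver("A", "D"))              # ['A', 'B', 'D']
print(deliver("A", "D", down={"B"}))  # ['A', 'C', 'D'] -- B is bypassed
```

With that picture in mind, on to the real example.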

I recently designed and tested a virtual environment where the virtual machines actually booted from storage located over 3,000 km away, with almost no latency, over a dark-fibre optical network. This particular solution used asynchronous SAN replication through an SVC controller, coupled with MPLS network technology, enabling storage fail-over across multiple physical locations. Essentially, we made every server in every datacenter think it was in the same physical location.
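As a footnote on why asynchronous replication keeps that distance off the write path, here’s another toy Python sketch. It is emphatically not the actual SVC or MPLS configuration, just the basic idea: the primary site acknowledges a write as soon as it lands locally and ships it to the remote site in the background.

```python
import queue
import threading
import time

primary = {}              # storage at the local site
replica = {}              # storage at the remote site, far away
wan_link = queue.Queue()  # stands in for the long-haul fibre link

def remote_site():
    """Apply replicated writes as they trickle in over the 'WAN'."""
    while True:
        key, value = wan_link.get()
        time.sleep(0.05)  # simulated long-distance propagation delay
        replica[key] = value
        wan_link.task_done()

threading.Thread(target=remote_site, daemon=True).start()

def write(key, value):
    """Commit locally, queue for replication, acknowledge immediately."""
    primary[key] = value
    wan_link.put((key, value))
    return "ack"          # the caller never waits on the WAN round trip

write("boot_volume", "block-0")  # returns at local-disk speed
wan_link.join()                  # wait for the replica to catch up
print(replica == primary)        # True once replication has drained
```

In a real solution the storage controller, not a Python queue, keeps track of which writes the remote site has yet to apply, but the latency story is the same: the 3,000 km round trip never sits between a server and its write acknowledgement.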