Ronnie05's Blog

How dynamic distributed computing and resource allocation are pushing the boundaries of modern computing

Posted in Big Data by Manas Ganguly on March 15, 2013

Globally, petabytes and zettabytes are the new everyday normal for data and data operators. In an era of massive, consumer-driven media generation, web giants such as Google and Twitter are honing their skills at dynamic, distributed computing to serve global demands for high-speed data delivery. Here's how.

A Google data centre

The raw computing power for processing and responding to billions of online requests comes from data centres: clusters and arrays of servers handling queries and searches. Google, for instance, works on petabytes of data generated daily. The management and economics of data centres are key technology and business underpinnings of the internet. Hence, processes and techniques that improve data centre efficiency are key to a great internet experience and fast data delivery.

Towards this, Google and Twitter have independently been working on dynamic distributed computing resource-allocation systems. The term implies the efficient parcelling of work and applications across a fleet of data centres and armies of computing servers. Google calls its system Borg, and Twitter's is Mesos. Google also has a next-generation system in the works, called Omega.

These systems provide a central brain for controlling tasks across the company's data centers. Rather than building a separate cluster of servers for each software system — one for Google Search, one for Gmail, one for Google Maps, etc. — Google can erect a cluster that does several different types of work at the same time. All this work is divided into tiny tasks, and the system dynamically assigns these tasks wherever it can find free computing resources: processing power, memory, or storage space.
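The idea of carving work into tiny tasks and placing each one wherever free resources exist can be sketched as a simple first-fit placement loop. This is an illustrative toy, not Borg's or Mesos' actual algorithm; the `Machine` and `Task` classes, their fields, and the task names are assumptions made for the sketch:

```python
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    free_cpus: float
    free_ram_gb: float

@dataclass
class Task:
    name: str
    cpus: float
    ram_gb: float

def schedule(tasks, machines):
    """Greedy first-fit: place each task on the first machine
    that still has enough free CPU and memory for it."""
    placements = {}
    for task in tasks:
        for m in machines:
            if m.free_cpus >= task.cpus and m.free_ram_gb >= task.ram_gb:
                m.free_cpus -= task.cpus      # reserve the resources
                m.free_ram_gb -= task.ram_gb
                placements[task.name] = m.name
                break
        else:
            placements[task.name] = None      # no machine had capacity
    return placements

# Tasks from different applications share the same two machines.
machines = [Machine("rack1-a", free_cpus=4, free_ram_gb=8),
            Machine("rack1-b", free_cpus=2, free_ram_gb=4)]
tasks = [Task("search-shard", 2, 4),
         Task("mail-index", 2, 2),
         Task("map-tiles", 2, 4)]
placements = schedule(tasks, machines)
```

Real cluster schedulers add priorities, preemption, and constraints on top of this, but the core decision is still "find a machine whose free resources cover the task's request".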

Underneath the concept of dynamically distributing application and computing load across sets of servers lies the core enabler: the microprocessor. Traditionally, the computer processor, the brain at the center of a machine, ran one task at a time. But a multi-core processor lets the programmer run many tasks in parallel: it is a single chip that includes many processors, or processor cores. The count can run as high as 64 or 128 cores on the same processor, multiplying the processing capability accordingly.

Thus Omega and Mesos let you run multiple distributed systems atop the same cluster of servers. Instead of dedicating one server to Application 1 and a second server to Application 2, the same server can now run both applications at the same time. Complex computational processes and data-hungry activities are automatically allotted computing resources and (server) cores, allowing the data centre to do many times the work.
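Mesos' way of sharing one cluster between applications is a two-level "resource offer" model: the master offers each node's spare capacity to the application frameworks, which accept what they need. The sketch below is a loose, hypothetical miniature of that idea; the class names, the round-robin offer order, and the CPU-only accounting are assumptions for the example, not Mesos' real API:

```python
class Framework:
    """One application (e.g. a web service or a batch job) that
    launches fixed-size tasks from the resource offers it receives."""
    def __init__(self, name, task_cpus):
        self.name = name
        self.task_cpus = task_cpus
        self.launched = []   # (node, task index) pairs

    def on_offer(self, node, free_cpus):
        """Accept as many of this framework's tasks as the offer holds;
        return the CPUs consumed from the offer."""
        accepted = int(free_cpus // self.task_cpus)
        self.launched += [(node, i) for i in range(accepted)]
        return accepted * self.task_cpus

def run_offers(nodes, frameworks):
    """The 'master' offers each node's spare capacity to the
    frameworks in turn; whatever one declines goes to the next."""
    for node, free_cpus in nodes.items():
        for fw in frameworks:
            used = fw.on_offer(node, free_cpus)
            free_cpus -= used

web = Framework("web-app", task_cpus=2)
batch = Framework("batch-app", task_cpus=1)
run_offers({"node-1": 4, "node-2": 3}, [web, batch])
```

After the offers round, both applications end up with tasks on the same node, which is exactly the sharing the paragraph above describes.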

Yes, there are other ways of efficiently spreading workloads across a cluster of servers. One could use virtualization, running virtual servers atop physical machines and loading them with the relevant software and applications. But Borg and Mesos eliminate the human element in juggling all those virtual machines, making the process fully automated.
