Monday, February 21, 2011

Google Page Rank Sparks New Computing Paradigm

In "Map-Reduce and its Children" today at NICTA in Canberra, Professor Jeffrey Ullman discussed wider uses of the approach Google uses to calculate "PageRank" for web sites:
Abstract: Since its publication by Google researchers in 2004, Map-reduce has proven to be a significant advance in programming methodology that offers resilient, easy-to-code parallel computation on a modern computing cluster or "cloud." It has led to a variety of systems that improve it in different ways. One direction is raising the level of abstraction, especially to support relational-database operations. A second direction is increasing the generality, while maintaining the programmability and resiliency in the face of partial failures. We shall review the environment of a map-reduce system, give some examples of how it works, and discuss the various extensions and the technical problems they posed.
From: Map-Reduce and its Children, NICTA, 2011
Professor Ullman started by introducing Distributed File Systems, as popularised by Google. He described how the files are large (many terabytes) and divided into chunks (typically 64 MB). These files are usually appended to, with few random updates. Chunks are replicated at compute nodes in racks connected by switches. Apart from the Google File System (GFS), examples include HDFS and CloudStore.
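As a rough illustration of how the chunking and replication fit together, here is a toy sketch in Python (my own, not code from the talk; real systems such as GFS also take rack locality into account when placing replicas):

    CHUNK_SIZE = 64 * 1024 * 1024   # 64 MB chunks, as described in the talk
    REPLICAS = 3                    # each chunk is stored on several nodes

    def place_chunks(file_size, nodes):
        """Assign each chunk of a file to REPLICAS distinct compute nodes."""
        n_chunks = -(-file_size // CHUNK_SIZE)   # ceiling division
        placement = {}
        for chunk in range(n_chunks):
            # Simple round-robin placement; real systems also weigh rack locality.
            placement[chunk] = [nodes[(chunk + r) % len(nodes)]
                                for r in range(REPLICAS)]
        return placement

    print(place_chunks(200 * 1024 * 1024, ["node1", "node2", "node3", "node4"]))
    # a 200 MB file becomes 4 chunks, each placed on 3 of the 4 nodes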

On top of the distributed file system are built map-reduce, key-value stores and SQL-like layers. Map-reduce implementations (such as Hadoop) sit alongside the object store (key-value pairs).

Map-reduce allows for large numbers of highly parallel processing tasks. It must allow for failure of nodes without having to restart the entire job. Previously failure could be thought of as a rare event, but with tens of thousands of nodes it is common.

Key-value pair databases are, perhaps, a throwback to pre-SQL databases. SQL-like implementations include Pig, Hive, Sawzall and SCOPE.

Map-reduce has two functions: map and reduce. Large numbers of tasks are created for each, to run in parallel, and the results combined. Each map task operates on a distinct portion of the input file and produces key-value pairs. These pairs are then passed to the reduce tasks, with all pairs sharing the same "key" going to the same reduce task, where they are combined. The results of the reduce tasks are then placed in the file store.
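To make this concrete, here is a minimal sketch in Python (my own illustration, not code from the talk) of the classic word-count job, with the shuffle that routes each key to a single reduce task simulated by a sort and group:

    from itertools import groupby
    from operator import itemgetter

    def map_task(document):
        """Map: emit a (key, value) pair for each word in a chunk of input."""
        for word in document.split():
            yield (word, 1)

    def reduce_task(key, values):
        """Reduce: combine all the values that share a key."""
        yield (key, sum(values))

    def run_map_reduce(documents):
        """Simulate the framework: run maps, shuffle by key, run reduces."""
        pairs = [pair for doc in documents for pair in map_task(doc)]
        # The shuffle: sort and group so each key reaches one reduce task.
        pairs.sort(key=itemgetter(0))
        results = []
        for key, group in groupby(pairs, key=itemgetter(0)):
            results.extend(reduce_task(key, (v for _, v in group)))
        return results

    print(run_map_reduce(["the cat sat", "the cat ran"]))
    # [('cat', 2), ('ran', 1), ('sat', 1), ('the', 2)]

On a real cluster each map task would run on the node holding its chunk of the file, and the framework, not a sort, would do the shuffling.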

Clearly this design is intended for tasks such as indexing web pages for searching. It is also good for matrix multiplication and relational algebra, such as a natural join. It assumes there are not many updates of data in place, just data added (an assumption which works well for many web applications).
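The natural join is a nice example: the map tasks tag each tuple with its relation and key it by the join attribute, and each reduce task pairs up the matching tuples. A toy sketch (again my own, for invented relations R(a, b) and S(b, c)):

    from collections import defaultdict
    from itertools import product

    def map_join(relation_name, tup):
        """Map: key each tuple by the join attribute, tagged with its relation."""
        if relation_name == "R":        # R(a, b): join attribute is b
            a, b = tup
            yield (b, ("R", a))
        else:                           # S(b, c): join attribute is b
            b, c = tup
            yield (b, ("S", c))

    def reduce_join(key, tagged):
        """Reduce: pair every R-tuple with every S-tuple sharing the key."""
        r_side = [v for tag, v in tagged if tag == "R"]
        s_side = [v for tag, v in tagged if tag == "S"]
        for a, c in product(r_side, s_side):
            yield (a, key, c)

    # Simulate the shuffle: group all mapped pairs by key, then reduce each group.
    groups = defaultdict(list)
    for rel, tuples in [("R", [(1, "x"), (2, "y")]), ("S", [("x", 9)])]:
        for t in tuples:
            for k, v in map_join(rel, t):
                groups[k].append(v)
    for k, vals in groups.items():
        print(list(reduce_join(k, vals)))
    # [(1, 'x', 9)] for key 'x'; the group for 'y' joins to nothing, so []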

The process for dealing with a failed task is simple: restart the task. Professor Ullman commented that on real systems about 5% of failures are due to software problems (such as the version of Java not having been upgraded).
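In code the recovery strategy is about as simple as it sounds. A sketch (my own, with a deliberately flaky task standing in for a lost node; map tasks have no side effects, so re-running one is always safe):

    import random

    def flaky_task():
        """Stands in for a map task on a node that sometimes fails."""
        if random.random() < 0.5:
            raise OSError("node lost")
        return 42

    def run_with_retries(task, max_attempts=4):
        """The recovery strategy described above: just restart the task."""
        for attempt in range(1, max_attempts + 1):
            try:
                return task()
            except OSError as err:
                print(f"attempt {attempt} failed ({err}); restarting")
        raise RuntimeError("task failed on every attempt")

    print(run_with_retries(flaky_task))
    # usually prints 42 after zero or more restarts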

Map-reduce can be generalised to allow any number of tasks to be strung together. Despite the complexity, such a system can be designed to be fault tolerant: at any point the processing is either still underway or has finished successfully. This sounds good, but still seems to be at the experimental stage. The simpler map-reduce is easier to get working, as there are just two kinds of task before the results are safely stored in the file system.
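To see what stringing tasks together means, here is a sketch (again my own) of a two-stage workflow: a second map-reduce job takes the word counts from the earlier sketch and inverts them, grouping words by how often they occur. The generalised systems the Professor described allow arbitrary acyclic networks of such tasks.

    from collections import defaultdict

    def map_invert(word, count):
        """Second-stage map: re-key each (word, count) pair by its count."""
        yield (count, word)

    def reduce_invert(count, words):
        """Second-stage reduce: collect all the words sharing a count."""
        yield (count, sorted(words))

    # Output of a first-stage word-count job (as in the earlier sketch).
    stage1 = [("cat", 2), ("ran", 1), ("sat", 1), ("the", 2)]

    # Second stage: shuffle the first job's output by its new key, then reduce.
    groups = defaultdict(list)
    for word, count in stage1:
        for k, v in map_invert(word, count):
            groups[k].append(v)
    for count, words in sorted(groups.items()):
        print(next(reduce_invert(count, words)))
    # (1, ['ran', 'sat'])
    # (2, ['cat', 'the'])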

It would be interesting to look at how generally applicable these techniques are. Normally you would think they would work for web applications, such as indexing web pages, but not for traditional databases, such as bank records. Financial records in a bank require frequent updates of data in place (such as the current balance). However, banks also have to keep transaction records. Normally these transaction records are seen as less important than the current updates and as something needed only occasionally. But if clients have access to their bank accounts on-line, they are more likely to be looking at the history, which makes the transaction records important. Also, "cloud" based applications may result in larger aggregations of data which suit this approach.

The Professor argued that recursion is now key to web-based applications, such as Page-Rank and analysing web structure. However, this appears to be an area for research, not yet practical implementation.
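Page-Rank is the canonical case: the rank of a page depends recursively on the ranks of the pages linking to it, so the computation is iterated until it settles. A toy sketch in plain Python (my own, ignoring how the iteration would be distributed across map-reduce tasks, and using an invented three-page web):

    def pagerank(links, beta=0.85, iterations=50):
        """Iterate PageRank: each page splits its rank among the pages it
        links to, damped by beta, with (1 - beta) shared equally among all
        pages so that the recursion converges."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - beta) / n for p in pages}
            for page, outlinks in links.items():
                share = beta * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            rank = new_rank
        return rank

    # A toy three-page web: A links to B and C, B to C, C back to A.
    web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
    print(pagerank(web))
    # roughly {'A': 0.39, 'B': 0.21, 'C': 0.40}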
