The last few weeks I have been looking at the map-reduce space; specifically Hadoop and IBM's BigInsights. That led me to wonder how the data-parallel parts of such jobs could be sped up, and I found references to using GPUs for exactly these data-parallel tasks. A quick Yahoo search then sent me hopping through a chain of sites on the topic.
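To make the data-parallel nature of map-reduce concrete, here is a minimal single-machine sketch of the two phases, with hypothetical function names of my own choosing: the map phase emits (word, 1) pairs independently per input line (which is exactly the part a GPU or a Hadoop cluster can parallelize), and the reduce phase sums the counts per word.

```python
from collections import defaultdict

def map_phase(lines):
    # Emit a (key, value) pair for every word in every line.
    # Each line is processed independently, so this step is
    # trivially parallelizable across cores, nodes, or a GPU.
    return [(word, 1) for line in lines for word in line.split()]

def reduce_phase(pairs):
    # Group by key and sum the values for each key.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["the quick brown fox", "the lazy dog"]
print(reduce_phase(map_phase(lines)))
```

This is only an illustration of the programming model, not how Hadoop or a GPU runtime actually schedules work; the point is that the per-element independence of the map step is what makes GPU offload attractive.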

The promise of a GPU-powered speed-up in processing is a rather seductive one. There are, of course, a number of disadvantages to going the GPU route (refer to one of the Khronos or MacResearch links for details).

This is an area I will be looking at in some detail going forward.

