October 1, 2013

GPU acceleration gives a significant performance boost to Java applications

At the JavaOne conference in San Francisco, IBM demonstrated how GPU-powered hardware can accelerate Java applications. And according to Sumit Gupta, general manager of Tesla GPU Accelerated Computing at Nvidia, there are other general-purpose GPU applications beyond IBM's ground-breaking demo.

In his keynote presentation at JavaOne, IBM’s chief technology officer of Java, John Duimovich, explained that IBM will be enabling its runtimes for server-based GPU accelerators and is also set to investigate the use of GPU acceleration in standard workloads under existing application programming interfaces.

GPU acceleration is nothing new. The graphics processing power developed to meet the demands of the latest generation of 2D and 3D computer games has been deployed in scientific computing and for rendering animated films like Avatar, thanks to Nvidia's Cuda libraries, which make the GPU programmable.

Where IBM has differed is in its use of the GPU for more general-purpose computing. Although IBM also used the Cuda libraries to attain its 48x Java performance breakthrough, Gupta says the company is also investigating how to incorporate the libraries into the next version of the Java Development Kit (JDK).

In doing so, he believes IBM's research will enable GPU acceleration to be used to speed up Java without the need to use Cuda directly. It could be a significant milestone, given that Java powers many of the application servers behind e-commerce systems. “The type of applications that can be accelerated are either compute or data intensive,” Gupta says.

But he says applications will need to be modified to use GPU acceleration. “In a standard Java program you need to add a few keywords to the programming language to tell the Java compiler to map to the GPU.” Nvidia has already added seven new keywords to the C++ programming language that let programmers tell the C++ compiler that a section of code needs to run on a GPU.
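
No such keywords exist in Java today, so the sketch below is purely hypothetical: the @OnGPU annotation is an invention for illustration, not part of any announced IBM or Oracle API. It only shows the shape of the idea Gupta describes, marking a data-parallel loop so that a GPU-aware compiler could offload it.

```java
import java.lang.annotation.*;

// HYPOTHETICAL: no GPU annotation exists in any shipping JDK. This sketch
// only illustrates the "few keywords" idea: flag a data-parallel method so
// a GPU-aware compiler could generate GPU code for it instead of bytecode.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface OnGPU {}

public class VectorAdd {
    @OnGPU // a GPU-aware compiler would map this loop onto GPU threads
    static void add(float[] a, float[] b, float[] out) {
        for (int i = 0; i < out.length; i++) {
            out[i] = a[i] + b[i]; // independent iterations: ideal for a GPU
        }
    }
}
```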

Seven or so extra keywords may not seem like a lot, but the real difficulty lies elsewhere. Gupta says: “The challenge is not the programming language. The challenge is thinking about the way to solve a problem in a parallel [computing] way.”

For instance, in a very basic sorting algorithm to order a list of numbers, a program could run down the list comparing each number to find the largest one, then run through the same procedure again for the next largest number, and so on until the whole list had been sorted. This is a linear sort: it starts at the top of the list and processes one number at a time until it reaches the bottom.
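
As a concrete illustration (mine, not Gupta's), the linear procedure just described is a selection sort in Java:

```java
// A minimal selection sort, matching the linear procedure described above:
// scan for the largest remaining value, fix it in place, then repeat.
static void selectionSortDescending(int[] list) {
    for (int i = 0; i < list.length - 1; i++) {
        int largest = i;
        for (int j = i + 1; j < list.length; j++) {
            if (list[j] > list[largest]) {
                largest = j; // remember the biggest number seen so far
            }
        }
        int tmp = list[i];       // swap it into position i
        list[i] = list[largest];
        list[largest] = tmp;
    }
}
```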

“You could also sort in a parallel way. Lots of algorithms already use a parallel programming model.” Running in parallel means that each GPU could handle a bit of the task independently.

A program to sort a list of 20 numbers could divide the list into four groups of five and split the processing so that each GPU handles one of the four groups, i.e. GPU 1 processes the first five numbers, GPU 2 handles the second group of five, and so on.

Gupta says the big issue for software developers is that they need to be aware of data bottlenecks. A data bottleneck would occur if more than one GPU simultaneously accessed the same data, such as if GPU 1 and GPU 2 both tried to look at the first group of five numbers in the sorting example. He says this problem would not normally occur if an algorithm is coded in the usual linear way, since only one processor is being used, and it can only process one piece of information at a time.
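
A minimal sketch of that partitioning scheme follows, with ordinary Java threads standing in for the GPUs Gupta describes (the class name and the four-way split are illustrative assumptions). Because each worker sorts only its own disjoint slice, no two workers contend for the same data:

```java
import java.util.Arrays;
import java.util.concurrent.*;

// CPU-thread sketch of the four-way split described above; each worker
// sorts only its own disjoint slice, so the data bottleneck Gupta warns
// about is avoided by construction. A merge pass is still needed at the end.
public class ParallelChunkSort {
    public static void main(String[] args) throws Exception {
        int[] list = {12, 3, 45, 7, 19, 2, 31, 8, 24, 16,
                      40, 5, 28, 11, 36, 9, 22, 14, 33, 6};
        int groups = 4, size = list.length / groups; // four groups of five

        ExecutorService pool = Executors.newFixedThreadPool(groups);
        CountDownLatch done = new CountDownLatch(groups);
        for (int g = 0; g < groups; g++) {
            final int from = g * size, to = from + size;
            pool.submit(() -> {
                Arrays.sort(list, from, to); // each "GPU" sorts its own slice
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();

        Arrays.sort(list); // placeholder for a proper k-way merge of the slices
        System.out.println(Arrays.toString(list));
    }
}
```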

Beyond the advantages of parallel programming, Gupta says: “The biggest benefit of the GPU is it has memory that is six times faster than CPU memory, thanks to GDDR5 SDRAM.” He says GPU-powered graphics cards offer high operational throughput and very high memory bandwidth, which makes GPUs suitable as data analytics accelerators.

He says Salesforce.com has a group that does sentiment analysis on Twitter feeds using GPUs for data analytics. The Shazam cloud-based song-matching service also uses GPU-based data analysis to help users identify songs, by analysing a sample against a music fingerprint. “Jedox and Fuzzy Logix are among a few companies who are building businesses using GPU accelerators for databases,” Gupta adds.

But given that a high-end graphics card can only have up to 6 GB of SDRAM, there are unlikely to be any in-memory GPU-based database accelerators coming to market in the near future, since the RAM requirements of in-memory databases are thousands of times greater.

However, Gupta believes GPUs will have a place in database acceleration. He says: “Within the core database, the GPU is very fast at sorting [data].”

A GPU-powered database system would need some way to transfer data to and from the GPU. Once in the GPU's memory, data processing should be extremely fast. Such technology is theoretically possible but it is a long way off, according to Gupta. “It may take some time for someone to invent.”
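
The article doesn't name any particular technology, but the transfer step it alludes to can already be sketched from Java today using the third-party JCuda bindings (my choice of library, purely for illustration):

```java
import jcuda.Pointer;
import jcuda.Sizeof;
import jcuda.runtime.JCuda;
import jcuda.runtime.cudaMemcpyKind;

// Sketch of the transfer step only: copy a buffer into GPU memory and back.
// Actual processing on the device (a kernel launch) is omitted; this just
// shows the data movement a GPU-accelerated database would depend on.
public class GpuTransferSketch {
    public static void main(String[] args) {
        float[] hostData = new float[1024];  // data in ordinary JVM memory
        long bytes = (long) hostData.length * Sizeof.FLOAT;

        Pointer deviceData = new Pointer();
        JCuda.cudaMalloc(deviceData, bytes); // allocate GPU memory

        // Host -> device: once here, the GPU can work at full memory bandwidth
        JCuda.cudaMemcpy(deviceData, Pointer.to(hostData), bytes,
                cudaMemcpyKind.cudaMemcpyHostToDevice);

        // ... kernel launches would go here ...

        // Device -> host: bring results back over the much slower PCIe bus
        JCuda.cudaMemcpy(Pointer.to(hostData), deviceData, bytes,
                cudaMemcpyKind.cudaMemcpyDeviceToHost);

        JCuda.cudaFree(deviceData);
    }
}
```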
