Narendra Konda, Director of Hardware Engineering at NVIDIA, spoke on stage at the April 27th Cadence EDA360 Event at the San Jose Tech Museum. Afterwards, Konda was available for questions. It was great to get a direct response from NVIDIA for #4 in The Mocha Mystery Series.
Please note: Konda's answers below are a brief and interesting addendum to the GPU v. CPU cook-off that NVIDIA showcased at their own special event in January at the Clift Hotel in San Francisco. You can read about that event here.
Meanwhile, understanding both Narendra Konda's comments below and the January NVIDIA event requires a grasp of CUDA. For that, Wikipedia was called in for additional clarification.
Q -- Is a GPU just a souped-up CPU?
Narendra Konda -- A GPU is not a general purpose CPU. It is something which inherently has a tremendous amount of compute power. Internally at NVIDIA, externally at our customers, and in a universal context, people want to be able to harness the power of GPUs.
A GPU is highly programmable, and requires specific compute knowledge to be designed into the silicon. CPUs today are generally quad-core or 6-core devices, but GPUs have up to 512 cores. Once people start to use the power of GPUs, they will see up to a 10x improvement in compute power.
For instance, in oil exploration, where evaluating wave data is very compute intensive, people are starting to do that work on GPUs and seeing calculations that used to require up to 5 days now run in just a few hours.
GPUs are not CPUs, but they are definitely stepping into the CPU space.
Q -- So, what is a GPU?
Narendra Konda -- A GPU is a complex device that handles traditional graphics processing, plus compute processing.
A GPU is something in a PC that does graphics processing, and some of the compute processing that was traditionally handled by the CPU.
Q -- Accepting that a GPU may have 512 homogeneous cores, who's going to parse the software to run on the GPU?
Narendra Konda -- NVIDIA is offering our CUDA architecture to the industry to help solve the problem.
Q -- What is CUDA?
Wikipedia -- CUDA stands for Compute Unified Device Architecture, and is a parallel computing architecture developed by NVIDIA.
Q -- Come again?
Wikipedia -- CUDA is the computing engine in NVIDIA GPUs. It's accessible to software developers through industry standard programming languages.
Programmers use 'C for CUDA', compiled through a PathScale Open64 compiler, to code algorithms for execution on the GPU.
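To give a flavor of what that 'C for CUDA' code looks like, here is a minimal vector-addition sketch (the kernel name, array sizes, and launch dimensions are illustrative, not from the article). Each GPU thread computes one element, which is how a single piece of work gets spread across hundreds of cores:

```cuda
#include <cuda_runtime.h>

// Kernel: each GPU thread handles one element of the output vector.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;               // one million elements
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMalloc(&a, bytes);
    cudaMalloc(&b, bytes);
    cudaMalloc(&c, bytes);
    // ... copy input data to the device with cudaMemcpy ...
    int threads = 256;                    // threads per block
    int blocks = (n + threads - 1) / threads;  // one thread per element
    vecAdd<<<blocks, threads>>>(a, b, c, n);   // launch across the GPU's cores
    cudaDeviceSynchronize();
    cudaFree(a);
    cudaFree(b);
    cudaFree(c);
    return 0;
}
```

The loop a CPU would run over n elements disappears; instead the hardware schedules n threads across however many cores the GPU has, which is the parallelism Konda describes above.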
The CUDA architecture shares a range of computational interfaces with two competitors -- the Khronos Group's Open Computing Language, and Microsoft's DirectCompute. Third-party wrappers are also available for Python, Fortran, Java, and MATLAB.