GPGPU Accelerated Massive Parallel Design of Long Wave Radiation Process in GRAPES-Global Model
Abstract
In recent years, with the rapid advance of GPGPU (General Purpose Graphic Processing Unit) technology, leveraging the massive parallel processing power of GPGPUs to provide super-computing capacity has become a new trend. GPGPUs have already been applied to scientific computation in many fields. GRAPES (Global/Regional Assimilation and PrEdiction System) is a new-generation multi-scale numerical model developed by the Chinese Academy of Meteorological Sciences, and it plays an important role in weather forecasting and research. The long wave radiation process is one of the most important physical processes in the GRAPES_Global model and consumes a large share of the processing time, limiting the computing efficiency of the whole model. Since this process can be partitioned into separate tiles within the horizontal plane, it lends itself naturally to a parallel scheme. A GPU contains hundreds of stream processors on a single chip, enabling it to run thousands of hardware threads simultaneously and providing much higher theoretical throughput: over 1 TFLOPS per chip. GPUs also come with a complete set of supporting tools, from compilers to libraries, which facilitates development. Considering the characteristics of the long wave radiation computation, and keeping the high-level MPI communication unchanged, a low-level fine-grained parallel architecture is designed to harness the computing power of the new hardware. This massively parallel implementation is based on NVIDIA GPGPU and CUDA technology. Rather than looping over a large portion of the atmosphere columns, as conventional CPU-based systems do, the new GPU-based implementation assigns a single column to each lightweight core. This scheme has three major advantages: much higher thread concurrency, better utilization of GPU memory bandwidth, and denser computation with better efficiency. Experiments with a real dataset validate the correctness of the new design and show that a Tesla C1060 achieves an 11x speedup over a high-end x86 CPU, greatly improving execution speed and forecast efficiency. Timings of individual subroutines and of data transfers are also recorded and compared. Different partition configurations are tested to find the best combination. In addition, kernel execution is overlapped with data transfer to hide latency. The experiments show that GPGPU has good potential to improve numerical weather forecasting models. As more routines are ported to the GPU, a much better speedup could be achieved for the whole model.
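
The core of the design described above is the one-thread-per-column mapping combined with overlapping kernel execution and host-device transfers. The following CUDA sketch illustrates how such a mapping might look; all names (lw_column_kernel, run_tile, nlev, flux_up, and so on) are illustrative assumptions rather than identifiers from the GRAPES code, and the radiative transfer itself is left as a placeholder.

#include <cuda_runtime.h>

// Hypothetical sketch: one GPU thread handles one atmosphere column.
__global__ void lw_column_kernel(const float *temp, const float *pres,
                                 float *flux_up, float *flux_down,
                                 int ncols, int nlev)
{
    int col = blockIdx.x * blockDim.x + threadIdx.x;  // one thread = one column
    if (col >= ncols) return;

    // Walk the vertical levels of this column; neighbouring threads read
    // neighbouring columns, so accesses coalesce when the horizontal index
    // is the fastest-varying dimension in memory.
    for (int k = 0; k < nlev; ++k) {
        int idx = k * ncols + col;
        // ... long wave radiative transfer for level k of this column ...
        flux_up[idx]   = 0.0f;   // placeholder for the real physics
        flux_down[idx] = 0.0f;
    }
}

// Host side: issue copies and the kernel on a CUDA stream.
void run_tile(const float *h_temp, const float *h_pres,
              float *h_up, float *h_down, int ncols, int nlev)
{
    size_t bytes = (size_t)ncols * nlev * sizeof(float);
    float *d_temp, *d_pres, *d_up, *d_down;
    cudaMalloc(&d_temp, bytes);  cudaMalloc(&d_pres, bytes);
    cudaMalloc(&d_up, bytes);    cudaMalloc(&d_down, bytes);

    cudaStream_t s;
    cudaStreamCreate(&s);
    cudaMemcpyAsync(d_temp, h_temp, bytes, cudaMemcpyHostToDevice, s);
    cudaMemcpyAsync(d_pres, h_pres, bytes, cudaMemcpyHostToDevice, s);

    int threads = 128;
    int blocks  = (ncols + threads - 1) / threads;
    lw_column_kernel<<<blocks, threads, 0, s>>>(d_temp, d_pres,
                                                d_up, d_down, ncols, nlev);

    cudaMemcpyAsync(h_up, d_up, bytes, cudaMemcpyDeviceToHost, s);
    cudaMemcpyAsync(h_down, d_down, bytes, cudaMemcpyDeviceToHost, s);
    cudaStreamSynchronize(s);
    cudaStreamDestroy(s);
    cudaFree(d_temp); cudaFree(d_pres); cudaFree(d_up); cudaFree(d_down);
}

With pinned (page-locked) host buffers and two or more streams, the copies for one tile can overlap the kernel of another tile, which is the latency-hiding pattern referred to in the abstract.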