Vol.25, No.5, 2014

Spectral Parameters and Signal-to-noise Ratio Requirement for CO2 Hyper Spectral Remote Sensor
Wang Qian, Yang Zhongdong, Bi Yanmeng
2014, 25(5): 600-609.
Abstract:
With the steady increase of carbon dioxide (CO2) concentrations, space-based measurement of CO2 in the lower atmosphere using reflected sunlight in the near-infrared band has become a hot research topic. Instruments sensitive to the near-surface total CO2 column have recently become available through SCIAMACHY on ENVISAT and TANSO-FTS on GOSAT. The hyperspectral CO2 detector under development in China, carried by TANSAT, is scheduled for launch in 2015. It is designed to provide global measurements of CO2 in the lower troposphere, employing high-resolution spectra of reflected sunlight taken simultaneously in the near-infrared CO2 (1.61 μm and 2.06 μm) and O2 (0.76 μm) bands.

In view of climate change and the observation requirements for carbon sources and sinks, the feasibility of measuring the CO2 column concentration with high resolution and high precision is studied with a high-resolution atmospheric radiative transfer model. Considering the application requirements, effects of key specifications of the detector, such as spectral resolution, sampling ratio and signal-to-noise ratio (SNR), on CO2 detection are analyzed. The detector on TANSAT is characterized by a grating spectrometer with an array detector. To achieve the required precision of 1×10⁻⁶-4×10⁻⁶ for the column-averaged atmospheric CO2 dry-air mole fraction (XCO2), the detector must first provide resolution high enough to resolve CO2 absorption lines in the continuous spectrum of reflected sunlight. Comparison of a variety of simulated spectral resolutions shows that the spectral resolution of the detector on TANSAT can resolve CO2 spectral features while maintaining moderate radiance sensitivity.

Since instruments based on small array detectors may suffer from spectral undersampling, its influence on CO2 absorption spectra is studied; results indicate that the sampling ratio should exceed 2 pixels/FWHM to ensure the accuracy of the CO2 spectrum. SNR is one of the most important parameters for ensuring reliable retrievals, so SNR requirements for different detection precisions are explored based on radiance sensitivity factors. Results show that, limited by present instrument development capabilities, it is difficult to achieve an SNR sufficient to detect a 1×10⁻⁶-4×10⁻⁶ CO2 concentration change in the boundary layer by passive shortwave-infrared remote sensing. However, the SNR needed to detect a 1% change in the CO2 column concentration is attainable. These results are not only conducive to applications and to developing grating spectrometers, but also helpful for understanding the complexity of CO2 retrieval.
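As an editorial illustration (not taken from the paper), the 2 pixels/FWHM sampling criterion mentioned above can be checked with a trivial sketch; the FWHM and pixel-spacing values used below are hypothetical:

```python
def sampling_ratio(fwhm_nm, pixel_spacing_nm):
    # Pixels per FWHM: how many detector pixels sample one resolution element
    return fwhm_nm / pixel_spacing_nm

def is_adequately_sampled(fwhm_nm, pixel_spacing_nm, threshold=2.0):
    # The abstract's criterion: the sampling ratio should reach 2 pixels/FWHM
    # to avoid undersampling the CO2 absorption spectrum
    return sampling_ratio(fwhm_nm, pixel_spacing_nm) >= threshold

# Hypothetical numbers: 0.07 nm FWHM sampled every 0.03 nm -> ~2.3 pixels/FWHM
print(is_adequately_sampled(0.07, 0.03))
```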
REVIEWS
Review on Inverted Charge Structure of Severe Storms
Zhang Yijun, Xu Liangtao, Zheng Dong, Wang Fei
2014, 25(5): 513-526.
Abstract:
The charge structure in storms is regarded as a bridge linking lightning activity with dynamic and microphysical conditions. An inverted charge structure is often seen in severe storms and has attracted much attention in recent years.

Although the charge structure in a thundercloud is complicated, a tripole model can describe the main discharging region. In general, the tripole charge structure consists of a negative charge region between the -10 ℃ and -25 ℃ levels, with a positive charge region below and another above it. In 2000, lightning mapping array and electric field sounding observations were carried out in the Severe Thunderstorm Electrification and Precipitation Study (STEPS) organized in the United States. Most case studies from this experiment indicate that the charge structure in severe storms is opposite to the normal one, with the original positive charge regions replaced by negative charge and vice versa. This new structure is called the inverted charge structure.

Related studies on the inverted charge structure are reviewed, focusing on its discovery, formation, numerical simulation and detection methods. The inverted charge structure appears in severe storms and results in substantial positive cloud-to-ground lightning; moreover, it is often associated with disastrous weather. It does not appear at the beginning of a storm, but during a particular developing stage. Its formation is associated with strong ascending motion in severe storms, which changes the liquid water content and influences the electrification process during collisions among different kinds of particles in the main electrification region, leaving graupel charged positively and ice crystals charged negatively and thereby producing the inverted charge structure. One view focuses on the microphysical conditions that influence charge separation during particle collisions; this process is defined as microphysically inverted. Another view holds that the inverted charge structure can be formed through dynamic transport and wind shear in severe storms while graupel is still charged negatively in the main electrification region; this is defined as dynamically inverted. Research on the latter is relatively scarce compared to the former.
ARTICLES
Changes of the Boundary Between the South Asian and East Asian Tropical Summer Monsoon Subsystems
Guo Pinchao, Song Chaohui
2014, 25(5): 527-537.
Abstract:
NCEP/NCAR 65-year daily reanalysis wind data are used to delineate the boundary between the two tropical Asian summer monsoon subsystems (the South Asian summer monsoon and the East Asian tropical summer monsoon). The boundary derived from the wind data shows that each of the two monsoons has its own domain of control. A deviation index is defined to describe changes in the boundary: A positive value means the boundary lies farther east, while a negative value means it lies farther west. A comparative study is then carried out to identify where the boundary varies most, and the area with intense changes is further investigated to study the variation pattern of the two subsystems. Finally, a monsoon strength index is designed to describe strength changes of the tropical Asian summer monsoon.

According to the latitude of the vorticity minimum and interdecadal changes of the boundary deviation index, the boundary falls into eastern, central and western patterns. When the central pattern shifts to the western pattern, an abrupt change occurs that passes the significance test at the 0.01 level. The low, middle and high layers of the three boundary categories also differ considerably. The 10°-17.5°N belt is then chosen as the main study area according to the oscillation amplitude. Using the boundary deviation to define two tropical Asian summer monsoon indexes, it is found that in-phase strength changes of the two monsoons occur more often than out-of-phase changes. The meridional wind anomaly fields of the four strength types from the Bay of Bengal to the South China Sea reflect four anomaly situations: '+ -', '- +', '- -', and '+ +'.
Classification and Satellite Nephogram Features of Hail Weather in North China
Lan Yu, Zheng Yongguang, Mao Dongyan, Lin Yinjing, Zhu Wenjian, Fang Chong
2014, 25(5): 538-549.
Abstract:
Based on conventional observations, automatic weather station data, geostationary satellite data and NCEP FNL data, meso-scale features of 27 hail processes that occurred over North China during 2010-2012 are analyzed. According to synoptic circulation and cloud characteristics, these hail processes are divided into three types.

The first type of hail convective storm is often embedded in the westerly trough of a cold vortex system. Severe convective storms of this type are frequently initiated at the rear of the cloud band associated with the synoptic system. The cold front provides strong lifting for convective initiation, while anticyclonic dry air intrusion triggers the intensive development of the hail storm. When water vapor is plentiful, heavy rainfall can also occur.

The hail zone of the second type lies ahead of the cold vortex, and the extent of the affected area is closely related to the southward movement of the cold vortex system. The frontal system often presents a forward-tilting structure, which is the main characteristic of this type. Mid-level cold air superimposed above the 850 hPa warm ridge causes widespread potential instability and continuous hail fall, accompanied by heavy rainfall in North China. The life span of the convective system can reach 10 to 16 hours.

The third type generally occurs under a stable synoptic background, different from the other two. The hail storm initiates within the cold air mass, with northerly flow dominating the upper layer. Due to poor moisture conditions, the main hazards are hail and damaging winds rather than short-duration heavy rainfall. A short-wave trough at 500 hPa and weak local convective instability in the afternoon may be the cause of this kind of convective storm, and it remains difficult to forecast.

On satellite (infrared and water vapor) images, in over 90% of the events hail falls while the convective storm is growing rapidly. The main hail zone lies near the leading edge of the propagating storm, corresponding to a large TBB gradient in the infrared image. The combination of low TBB (≤-40 ℃) and a large TBB gradient (≥8 ℃/0.05°) appears to be an important threshold for short-range forecasting of larger hailstones.
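The dual TBB threshold described above lends itself to a simple pixel mask. The sketch below is illustrative only (the paper's actual detection procedure is not given); the test field and grid spacing are hypothetical:

```python
import numpy as np

def hail_candidate_mask(tbb, grid_step_deg=0.05, tbb_thresh=-40.0, grad_thresh=8.0):
    """Flag pixels satisfying both criteria from the abstract:
    cold cloud top (TBB <= -40 degC) and a large local TBB gradient
    (>= 8 degC per 0.05 degrees of latitude/longitude)."""
    gy, gx = np.gradient(tbb, grid_step_deg)   # degC per degree along each axis
    grad = np.hypot(gx, gy) * grid_step_deg    # degC per one 0.05-degree step
    return (tbb <= tbb_thresh) & (grad >= grad_thresh)

# Hypothetical 3x3 TBB field (degC): a cold storm edge next to warm background
tbb = np.full((3, 3), -50.0)
tbb[:, 2] = -10.0
print(hail_candidate_mask(tbb))
```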
Drought Changes in Southwest China and Its Impacts on Rice Yield of Guizhou Province
Song Yanling, Cai Wenyue, Liu Yanju, Zhang Cunjie
2014, 25(5): 550-558.
Abstract:
Extreme weather and climate events have become more frequent and severe in recent years in the context of global warming, bringing serious impacts on human survival and the sustainable development of society. Drought is one of the major types of extreme climatic events in China, seriously affecting agriculture and socio-economic development. Recent extreme drought events in China have mostly occurred in Southwest China, severely influencing agriculture. Using high-quality observations from 348 weather stations and county-level agricultural data in Southwest China, the complicated relationship among droughts, water supply and rice yields is investigated.

Precipitation in Southwest China decreases from 1951 to 2012, by an average of 16.9 mm per decade. In particular, precipitation decreases distinctly from August to October, mainly due to a weak South Asian monsoon. Drought days are counted using the drought index ISWAP, and the number of drought days generally increases by 3.3 days per decade over the past several decades, especially since 2001 owing to reduced precipitation. Since rice cultivation is irrigated agriculture, drought does not directly affect rice growth. To further understand this complexity, impacts of various drought events on rice yields are investigated using high-quality rice yield data collected in 70 counties of Guizhou Province. Results indicate that drought has little adverse and even favorable impacts on rice yields when the annual accumulated number of drought days is less than 40: Such droughts do not affect the irrigation water supply, and the abnormally high temperatures and more sunshine during drought periods are actually favorable for rice growth. However, when the number of drought days exceeds 86, rice yields are reduced by 20%-73% due to drought and insufficient irrigation. When the number of drought days is between 49 and 86, rice yields usually drop by less than 20%, but with large regional differences: Such droughts have little impact on yields in regions of robust drought tolerance but greatly affect yields where drought tolerance is weak.
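The drought-day thresholds reported above can be summarized as a small classifier. This is an editorial paraphrase of the abstract's findings, not the paper's method; the abstract does not quantify the 40-48 day range, so it is labeled transitional here:

```python
def yield_impact(drought_days):
    """Classify rice-yield impact by annual accumulated drought days,
    following the thresholds reported in the abstract."""
    if drought_days < 40:
        return "little adverse, possibly favorable"
    elif drought_days <= 48:
        return "transitional (not quantified in the abstract)"
    elif drought_days <= 86:
        return "yield reduction under 20%, region-dependent"
    else:
        return "yield reduction of 20%-73%"
```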
Fine Spatial and Temporal Characteristics of Humidity and Wind in Beijing Urban Area
Dou Jingjing, Wang Yingchun, Miao Shiguang
2014, 25(5): 559-569.
Abstract:
The temporal and spatial characteristics of specific humidity, wind speed and wind direction in the Beijing urban area, and the associated urban effects, are investigated using hourly automatic weather station data during 2008-2012.

Results show that specific humidity (q) in urban areas is lower than in rural areas during summer daytime and early night in Beijing, a phenomenon known as the urban dry island (UDI). Values show a multi-center distribution owing to the non-uniform distribution of non-evaporating urban impervious surfaces, which decrease evapotranspiration, increase run-off, and thus lower urban specific humidity. In winter, urban q exceeds rural values at most hours, reflecting anthropogenic emissions. The 10-m wind directions are affected by seasonal prevailing winds, topography and urban effects. During summer valley-breeze periods, southerlies bypass the Beijing urban area because of buildings, while air flows converge into the city during summer mountain-breeze periods under the combined effects of topography, urban effects and the seasonal prevailing wind. During winter breeze periods, a convergence line forms in the northwest-southeast direction over the urban area. Wind speeds are reduced by the large surface roughness of Beijing; a low-speed region is observed in the more urbanized area between the Second Ring Road and Third Ring Road due to its high surface roughness. These results show that humidity and air flow are strongly affected by urban effects, beyond the much-studied urban heat island in Beijing. Local urban effects have to be taken into account in fine-scale weather forecasting. In addition, the results will contribute to discussions of urban atmospheric environmental governance and of city planning and construction.
Raindrop Size Distribution Retrieval from Wind Profiler Radar Based on Double-Gaussian Fitting
He Yue, He Ping, Lin Xiaomeng
2014, 25(5): 570-580.
Abstract:
The raindrop size distribution is extremely important for understanding the physical processes of cloud and fog formation and the generation of natural rainfall. It is a major tool for assessing cloud conditions for weather modification and for verifying the associated results, as well as important scientific evidence for numerical modeling.

Weather radars usually process signals with pulse pair processing (PPP) and therefore cannot obtain raindrop size data directly. Wind profiler radar, however, was designed to detect clear-air turbulence and can obtain the Doppler velocity distribution of precipitation particles, so its data can be used to retrieve raindrop spectra effectively. During precipitation, the return signal of a wind profiler radar is a superposition of the turbulence signal and the precipitation signal, and the power spectrum often exhibits an obvious bimodal structure. Representative precipitation data from Yanqing, Beijing in 2006 and 2012 are analyzed; after noise removal and calibration, a more accurate signal power spectrum of the antenna array is obtained. Double-Gaussian fitting is then used to separate the power spectrum of the atmospheric turbulence signal from that of the precipitation signal, and the precipitation signal, with turbulence effects removed, is used to estimate a better raindrop size distribution. From the relation between precipitation particle fall speeds and diameters, the raindrop spectrum can then be obtained. Analyses and comparisons of retrieved raindrop size distributions for different precipitation intensities and types show that double-Gaussian fitting separates the two peaks effectively, the retrieval is more accurate, and the resulting distribution basically follows an exponential form.

The results show that using double-Gaussian fitting to separate the bimodal structure of the power spectral data is feasible and reliable, and that it can achieve better quality control of wind profiler radar data. The method also provides a reference for applying wind profiler radars under more complex weather conditions.
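A minimal sketch of the double-Gaussian fitting idea, assuming SciPy is available: two Gaussian peaks (a clear-air turbulence peak near 0 m/s and a precipitation peak at higher fall speed) are fitted jointly to a synthetic, noiseless spectrum. All velocities, widths and amplitudes below are hypothetical, and the paper's actual spectral processing is certainly more involved:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_gaussian(v, a1, mu1, s1, a2, mu2, s2):
    # Sum of two Gaussian peaks: turbulence signal + precipitation signal
    return (a1 * np.exp(-0.5 * ((v - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((v - mu2) / s2) ** 2))

# Synthetic Doppler power spectrum: turbulence peak near 0.5 m/s,
# precipitation peak near 6 m/s (illustrative values only)
v = np.linspace(-2.0, 12.0, 200)
spec = double_gaussian(v, 1.0, 0.5, 0.8, 2.0, 6.0, 1.2)

p0 = [1.0, 0.0, 1.0, 2.0, 5.0, 1.0]      # rough initial guesses
popt, _ = curve_fit(double_gaussian, v, spec, p0=p0)
print(popt[1], popt[4])                   # recovered peak velocities
```

Once the peaks are separated, the precipitation-only component can be mapped to drop diameters via a fall-speed relation.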
The Hybrid MPI and OpenMP Parallel Scheme of GRAPES_Global Model
Jiang Qingu, Jin Zhiyan
2014, 25(5): 581-591.
Abstract:
Clustered SMP systems are gradually becoming more prominent as advances in multi-core technology allow larger numbers of CPUs to access a single memory space. To exploit this hardware architecture, which combines distributed and shared memory, a hybrid MPI and OpenMP parallel programming model is a good approach. This hierarchical model achieves both inter-node and intra-node parallelization by combining message passing with thread-based shared-memory parallelization within the same application: MPI handles coarse-grained communication between SMP nodes, and OpenMP threads handle fine-grained computation within an SMP node.

As a typical large-scale, computing- and storage-intensive numerical weather forecasting application, GRAPES (Global/Regional Assimilation and PrEdiction System) has been developed into an MPI version and put into operational use. To adapt to SMP cluster systems and achieve higher scalability, a hybrid MPI and OpenMP parallel model suitable for the GRAPES_Global model is developed, introducing a horizontal domain decomposition method and loop-level parallelization. In the horizontal domain decomposition method, the whole forecasting domain is divided into patches, and each patch is uniformly divided into several tiles. Performing parallel operations on tiles has two main advantages. First, tile-level parallelization applies OpenMP at a high level and is, to some extent, coarse-grained parallelism; compared to the computing work associated with each tile, OpenMP thread overhead is negligible. Second, its implementation is relatively simple: Subroutine thread safety is the only thing to ensure. Loop-level parallelization, which can mitigate load imbalance by adopting different thread scheduling policies, is fine-grained parallelism, with OpenMP parallel directives applied to the main computational loops.

Horizontal domain decomposition is preferred for uniform grid computation, while loop-level parallelization is preferred for non-uniform grid computation and thread-unsafe procedures. Experiments with a 1°×1° dataset are performed and the timings of the main subroutines of the integration are compared. Hybrid parallel performance is superior to the pure MPI scheme for the long-wave radiation, microphysics and land surface processes, while the generalized conjugate residual (GCR) solver of the Helmholtz equation has difficulty with thread parallelism in its incomplete LU (ILU) factorization preconditioner; applying tile-level parallelization to the ILU part improves the GCR's hybrid parallelization. The hybrid performance of the short-wave process is close to the pure MPI scheme with the same number of computing cores. For a fixed number of MPI processes, elapsed time decreases as the number of threads increases, and the hybrid scheme with up to four threads is superior to the pure MPI scheme in large-scale experiments. The hybrid scheme also achieves better scalability than pure MPI. The experiments show that the hybrid MPI and OpenMP parallel scheme is suitable for the GRAPES_Global model.
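The patch/tile decomposition described above can be sketched as a simple index-partitioning routine. This is an illustrative sketch, not GRAPES code; the grid dimensions and patch/tile counts below are hypothetical:

```python
def decompose(n, parts):
    """Split n grid points into `parts` nearly equal contiguous chunks,
    returning (start, end) index pairs with the end exclusive.
    Patches would map to MPI ranks, tiles within a patch to OpenMP threads."""
    base, rem = divmod(n, parts)
    bounds, start = [], 0
    for p in range(parts):
        size = base + (1 if p < rem else 0)   # spread the remainder evenly
        bounds.append((start, start + size))
        start += size
    return bounds

# Hypothetical 1-degree global grid (360 x 181 points):
# 4 MPI patches in longitude, each cut into 3 OpenMP tiles in latitude
patches = decompose(360, 4)
tiles = decompose(181, 3)
print(patches, tiles)
```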
FY-2 Meteorological Satellite Attitude Solving Method Under Area Scan Mode
Wei Caiying, Zhang Xiaohu, Zhao Xiangang, Han Qi, Lin Weixia
2014, 25(5): 592-599.
Abstract:
Area observation has been realized on the geostationary meteorological satellites of the United States and Europe; it greatly improves the timeliness of satellite observation, as a single area scan can be completed within several minutes. Geostationary meteorological satellites of the FY-2 series are capable of observing a specific area, but due to the strip image navigation problem, area observation of FY-2 satellites has not been put into operational use. Therefore, attitude determination for FY-2 area observation is studied and three solving methods are proposed, based respectively on area images, on attitude prediction, and on a relation model between crude and precise attitude.

The method based on area images uses the area images to calculate the satellite attitude directly, and is applicable when area images can be obtained continuously. When area images are accumulated for 24 hours, navigating the area image with the attitude obtained by this method yields a navigation deviation within 2.5 infrared pixels. FY-2 satellites acquire full disk images under the normal observation mode and high-frequency area images under the emergency observation mode when severe weather occurs. At the beginning of area observation, the image navigation accuracy may not be ideal because not enough area images have accumulated; in this case, the attitude solving method based on attitude prediction can be used, which reduces the average navigation deviation to 1 pixel. Based on the precise attitude obtained from full disk images, this method predicts the future satellite attitude according to its pattern of variation; once precise orbit and attitude parameters are obtained, a mathematical model can predict the navigation parameters for the next 24 hours.

The method based on the relation model of crude and precise attitude uses the crude attitude calculated from telemetry data, together with the latest crude-precise attitude relation, to estimate the precise attitude in real time. Navigating images with the attitude from this method, the maximum navigation deviation of area images is 4.9 infrared pixels, and the average deviation over the first 24 hours is 3.6 infrared pixels. When the precise attitude is not available, the attitude obtained by this method can be used to navigate area images during emergencies.
Dual Optical Path Visibility System Measuring Method and Experiment
Du Chuanyao, Ma Shuqing, Yang Ling, Zhang Chunbo
2014, 25(5): 610-617.
Abstract:
The dual optical path visibility system measures visibility with a charge-coupled device (CCD) digital camera based on the theory of light attenuation in the atmosphere. Photoelectric conversion is realized by using the CCD to measure light attenuation. Two target reflector and background devices are installed at different fixed distances; they have identical characteristics except for distance. During measurement, a light source and the CCD are arranged at the same place: Light emitted by the source travels to the target reflectors and is reflected back, the two returned beams are received by the CCD and converted into corresponding spot (facula) images, completing the photoelectric conversion. Compared with the traditional digital camera method, in which the CCD and target are separated by a single path, the optical path of the dual optical path system is doubled by the reflection. The facula images captured by the CCD are transmitted to a computer, where digital image processing extracts the attenuation information and the background gray-level information. The center-of-gravity method is used to dynamically extract the attenuation information of the target facula images, random noise is suppressed by averaging multiple extracted images, and the attenuation information is used for visibility back-calculation. A back-calculation formula is derived from classical optical attenuation theory and refined for the actual experimental platform, and finally visibility is calculated.

Contrast experiments and correlation coefficients show that the basic trend of visibility data from the dual optical path system is consistent with that of the traditional transmission visibility meter and the forward scatter visibility meter, especially when visibility is low; as visibility increases, the consistency declines to some extent. In terms of mean deviation and mean relative deviation, data from the dual optical path system are closer to the transmission visibility meter because of the similar working principle: Both measure attenuation over a whole light path, while the forward scatter meter only measures atmospheric scattering. With increasing visibility, the deviation between the dual optical path system and the transmission visibility meter becomes larger, mainly because of greater fluctuation. In addition, the transmission visibility meter requires optical axis alignment and its camera lens is sensitive to contamination. Comparison of day and night visibility data shows that the influence of sunlight on the measurements is basically eliminated.
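The back-calculation can be sketched from classical attenuation theory alone. The paper's refined formula is not given in the abstract, so the version below simply combines the Beer-Lambert law over the folded (doubled) path with the Koschmieder relation at the conventional 5% contrast threshold; the baseline and transmittance values are hypothetical:

```python
import math

def visibility_from_transmittance(t, baseline_m, contrast=0.05):
    """Back-calculate meteorological optical range from transmittance t
    measured over a folded optical path (source and CCD co-located,
    so light traverses the baseline twice)."""
    path = 2.0 * baseline_m              # out-and-back path length (m)
    sigma = -math.log(t) / path          # extinction coefficient (1/m)
    return -math.log(contrast) / sigma   # Koschmieder relation (m)

# Hypothetical case: 50 m baseline, 74% measured transmittance
print(visibility_from_transmittance(0.74, 50.0))
```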
OPERATIONAL SYSTEMS
A Set of MapReduce Tuning Experiments Based on Meteorological Operations
Yang Runzhi, Shen Wenhai, Xiao Weiqing, Hu Kaixi, Yang Xin, Wang Ying, Tian Wei
2014, 25(5): 618-628.
Abstract:
Cloud computing, which addresses the low computing power of a standalone server, uses distributed computing technology to achieve the power and efficiency of parallel computing. It is a new application model for decentralized computing that can provide reliable, customized services to the maximum number of users with minimum resources, and combining it with other theories and techniques is an important way to advance both cloud computing research and practical applications. Cloud computing is being applied in a growing range of industries and fields, and its flexibility, ease of use and stability are gradually being affirmed. In the meteorological department, the development of cloud-based platforms for scientific computing is still very limited, but some attempts have been made as cloud computing matures.

In meteorological operations, large-scale scientific computing and other general computing models run on high-performance server clusters. Due to limitations on resources and the number of HPC nodes, scientific computing still relies on the traditional standalone or clustered mode. Therefore, exploring general-purpose computing on a cloud computing platform is very meaningful for the meteorological department. Sixty years of valuable long-sequence historical data are stored at National Meteorological Information Center for real-time and near-real-time operations and research. Processing these historical data is time-consuming, so new methods are implemented. Based on the Hadoop cloud computing platform, a cluster is built and a variety of statistical methods are implemented using the MapReduce computation model. The storage format of the source data is adjusted to SequenceFile, which consists of serialized <Key, Value> pairs; in this way, multiple files of Format-A are merged into one large SequenceFile to test changes in computational efficiency. Meanwhile, many small files are merged into one larger file. Configurations of the Hadoop cluster environment are modified experimentally, and different numbers of task nodes are used to record the resulting computational efficiency.
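The MapReduce pattern used for such statistics can be sketched in pure Python: A map phase emits key-value pairs and a reduce phase aggregates values per key. This is an illustrative sketch of the model only (not Hadoop code); the per-station temperature averaging task is a hypothetical example:

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit (station_id, temperature) pairs from raw records
    for station, temp in records:
        yield station, temp

def reduce_phase(pairs):
    # Shuffle + reduce: group values by key, then compute per-station means
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return {k: sum(v) / len(v) for k, v in groups.items()}

# Hypothetical station records
records = [("A", 10.0), ("A", 20.0), ("B", 5.0)]
print(reduce_phase(map_phase(records)))
```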
Adaptive Optimization in Small Size File Transmission of Massive Meteorological Data
Lu Yinghua, Ma Tinghuai, Cao Hao, Li Dequan
2014, 25(5): 629-637.
Abstract:
The data transfer and service architecture constructed by National Meteorological Information Center is the foundation of most meteorological data transmission, and improving the timeliness of data transmission is a hot topic in enhancing meteorological service capabilities.

According to the transmission performance requirements for massive numbers of small files, transmission parameters are optimized, and a self-adapting data transmission method based on real-time network status is proposed, emphasizing the network transmission protocol and file compression; compression parameters and network transmission parameters are adjusted during real-time operation.

Meteorological data include a great number of heterogeneous small files, so compressing small files into one big file before transfer effectively reduces I/O accesses. First, 50 KB is defined through experiments as the threshold for small meteorological data files. Then, by analyzing the file transfer time, the appropriate number of files per compressed package is calculated to achieve the best transmission efficiency. Finally, considering the variability of network conditions, a self-adapting compression method based on the real network is designed that adjusts the compression level in real time. The entire compression process is controlled by setting parameters of the lzop command, on the basis of the lzop library and the LZO algorithm. To adjust compression levels according to real-time network conditions, RTT (round trip time) is used to judge the current state of network congestion: By comparing the current RTT with the previous RTT, it is decided whether to change the compression level.

In network transmission optimization, experiments on the Globus platform show that TCP buffers and parallel transmission consume memory resources, and that more parallel streams and larger TCP buffers can cause network congestion. Self-adapting algorithms for the TCP buffer size and for the number of concurrent TCP connections, based on real network parameters, are then designed. Finally, the entire transmission framework for massive small files is designed by combining the self-adapting compression method with transmission parameter optimization. Complete experiments integrating the self-adapting algorithms show that the proposed optimization methods can improve transmission performance sharply.
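The RTT-driven adjustment can be sketched as follows. The adaptation rule (compress harder when RTT rises, i.e. under congestion, and relax when it falls) and the 10% tolerance are assumptions for illustration; the paper's actual decision logic is not detailed in the abstract:

```python
def next_compression_level(level, rtt_now, rtt_prev, lo=1, hi=9, tolerance=0.1):
    """Adapt an lzop-style compression level (1-9) to network congestion:
    a significantly rising RTT suggests congestion, so compress harder to
    send fewer bytes; a falling RTT permits a cheaper (lower) level."""
    if rtt_now > rtt_prev * (1 + tolerance):
        return min(hi, level + 1)
    if rtt_now < rtt_prev * (1 - tolerance):
        return max(lo, level - 1)
    return level                          # within tolerance: keep the level

print(next_compression_level(5, rtt_now=120, rtt_prev=100))
```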