Abstract:
With the rapid development of numerical weather prediction models, various kinds of observations play an increasingly important role, among which shipborne observations are of particular value. To ensure the quality of shipborne observations and their positive contribution to numerical models, a quality control scheme for sea level pressure data is established according to the temporal and spatial distribution characteristics of these observations. The scheme consists of an extreme range check of the observed elements, elimination of missing and redundant data, a background field consistency check, determination of a blacklist of observation stations, and a quality control method for blacklisted data. The scheme is developed from a comparison between the observations and the T639 analysis field (0.28125°×0.28125°) for January and July 2011, and it is also applied to the data of February and June 2011. Shipborne observations comprise data from oceanographic research vessels and unmanned automatic buoy stations; the highest data density is found over the mid- and low-latitude oceans of the Northern Hemisphere, and the number of observation reports fluctuates considerably with time. Missing observations and redundant data are common, which limits the effectiveness of quality control methods such as the time consistency check and the spatial consistency check, whereas the background field consistency check avoids these drawbacks. Sea level pressure is the most abundant of all observed elements, but its missing-data ratio and redundant-data ratio both reach about 50%, so preprocessing is required. The blacklist quality control scheme includes the elimination of data from blacklisted stations and the quality control of the remaining blacklist data; it identifies and eliminates blacklist data accurately and establishes a blacklist of observation stations, which facilitates lookup and maintenance. Because of the altitude difference between the observation terrain and the model terrain in the Great Lakes and Great Slave Lake areas, the background field data must be corrected in the background consistency check, and the double weighted average correction method effectively removes the systematic deviation between observations and model outputs, thereby avoiding errors in quality control. The quality control results are shown to be correct and reasonable by case analysis and by the data rejection percentage of each quality control step, and the scheme has promising prospects for providing reliable initial fields for data assimilation.
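To make the background field consistency check and the double weighted average correction more concrete, the following is a minimal illustrative sketch, assuming that the "double weighted average" corresponds to the Tukey biweight estimates of location and scale commonly used in observation quality control, and that rejection is based on a normalized departure from the background. The function names, the biweight constant c = 7.5, the rejection threshold z_max and the sample numbers are illustrative assumptions, not values taken from the paper.

# Hypothetical sketch of a background-field consistency check for shipborne
# sea level pressure reports (Python, NumPy). All parameter values are assumptions.
import numpy as np

def biweight_mean_std(x, c=7.5):
    """Tukey biweight estimates of location and scale for a 1-D sample."""
    x = np.asarray(x, dtype=float)
    m = np.median(x)
    mad = np.median(np.abs(x - m))
    if mad == 0.0:                      # degenerate sample: fall back to plain statistics
        return m, x.std()
    w = (x - m) / (c * mad)
    mask = np.abs(w) < 1.0              # points beyond c*MAD receive zero weight
    xm, wm = x[mask] - m, w[mask]
    mean = m + np.sum(xm * (1 - wm**2) ** 2) / np.sum((1 - wm**2) ** 2)
    std = np.sqrt(x.size * np.sum(xm**2 * (1 - wm**2) ** 4)) / abs(
        np.sum((1 - wm**2) * (1 - 5 * wm**2)))
    return mean, std

def background_consistency_check(obs_slp, bg_slp, z_max=3.0):
    """Flag reports whose departure from the background exceeds z_max
    biweight standard deviations after removing the systematic bias."""
    departure = np.asarray(obs_slp) - np.asarray(bg_slp)   # O - B innovations (hPa)
    bias, spread = biweight_mean_std(departure)
    z = (departure - bias) / spread
    return np.abs(z) <= z_max           # True = keep, False = reject

# Illustrative use: the report with a gross pressure error is rejected, the rest are kept.
obs = np.array([1012.5, 1009.0, 1014.1, 950.0, 1010.8, 1010.4, 1012.7, 1009.8])
bg  = np.array([1011.9, 1010.2, 1012.6, 1012.0, 1011.1, 1009.5, 1013.4, 1008.0])
print(background_consistency_check(obs, bg))

In this sketch the biweight bias estimate plays the role of the systematic observation-minus-background deviation (for example, the terrain-induced offset over the Great Lakes and Great Slave Lake areas), while the biweight spread sets the rejection threshold; how the paper actually applies the correction may differ.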