117 results for "Climatology -- Models"
Search Results
2. Overlap of solar and infrared spectra and the shortwave radiative effect of methane
- Author
-
Li, J., Curry, C.L., Sun, Z., and Zhang, F.
- Subjects
Solar radiation -- Research, Methane -- Chemical properties, Methane -- Environmental aspects, Infrared radiation -- Research, Climatology -- Models, Earth sciences, Science and technology - Abstract
This paper focuses on two shortcomings of radiative transfer codes commonly used in climate models. The first concerns the partitioning of solar versus infrared spectral energy. In most climate models, the solar spectrum comprises wavelengths less than 4 µm, with all incoming solar energy deposited in that range. In reality, however, the solar spectrum extends into the infrared, with about 12 W m⁻² in the 4-1000 µm range. In this paper a simple method is proposed wherein the longwave radiative transfer equation is solved with solar energy input. In comparison with the traditional method, the new solution results in more solar energy absorbed in the atmosphere and less at the surface. As noted in a recent intercomparison of the Intergovernmental Panel on Climate Change Fourth Assessment Report (IPCC AR4) and line-by-line (LBL) radiation models, most climate model radiation schemes neglect shortwave absorption by methane. However, the shortwave radiative forcing at the surface due to CH₄ since the preindustrial period is estimated to exceed that due to CO₂. The authors show that the CH₄ shortwave effect can be included in a correlated k-distribution model, with the additional flux accurately simulated in comparison with LBL models. Ten-year GCM simulations are presented, showing the detailed climatic effect of these changes in radiation treatment. It is demonstrated that the inclusion of solar flux in the infrared range produces a significant amount of extra warming in the atmosphere, specifically (i) in the tropical stratosphere, where the warming can exceed 1 K day⁻¹, and (ii) near the tropical tropopause layer. Additional GCM simulations show that inclusion of CH₄ in the shortwave calculations also produces a warming of the atmosphere and a consequent reduction of the upward flux at the top of the atmosphere. DOI: 10.1175/2010JAS3282.1
- Published
- 2010
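The ~12 W m⁻² figure in the abstract above can be sanity-checked by integrating the Planck function beyond 4 µm and scaling to the solar constant. This is only a blackbody approximation of the real solar spectrum, and the effective temperature, irradiance value, and integration grid below are illustrative assumptions, so the result is expected to agree only in order of magnitude.

```python
import numpy as np

# Blackbody sanity check of the solar energy in the 4-1000 µm range.
h, c, kB, sigma = 6.626e-34, 2.998e8, 1.381e-23, 5.670e-8
T_sun = 5772.0       # effective solar temperature, K (assumed)
S0 = 1361.0          # total solar irradiance at top of atmosphere, W m^-2 (assumed)

lam = np.logspace(np.log10(4e-6), np.log10(1e-3), 20000)   # 4-1000 µm, in metres
planck = 2 * np.pi * h * c**2 / lam**5 / np.expm1(h * c / (lam * kB * T_sun))

# trapezoidal integration of spectral emissive power over the band
band = np.sum(0.5 * (planck[1:] + planck[:-1]) * np.diff(lam))
frac_ir = band / (sigma * T_sun**4)        # fraction of solar output beyond 4 µm
flux_ir = frac_ir * S0                     # of order 10 W m^-2, consistent with ~12
print(f"{flux_ir:.1f} W m^-2 of solar irradiance lies in the 4-1000 µm range")
```

About 1% of the blackbody output falls beyond 4 µm, which is the energy that a 4 µm shortwave cutoff silently drops.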
3. Reanalysis of 44 yr of climate in the French Alps (1958-2002): methodology, model validation, climatology, and trends for air temperature and precipitation
- Author
-
Durand, Yves, Laternser, Martin, Giraud, Gerald, Etchevers, Pierre, Lesaffre, Bernard, and Merindol, Laurent
- Subjects
Rain and rainfall -- Models, Rain and rainfall -- Methods, Rain and rainfall -- Analysis, Meteorological research -- Methods, Meteorological research -- Analysis, Meteorological research -- Models, Climate -- Methods, Climate -- Analysis, Climate -- Models, Climatology -- Methods, Climatology -- Analysis, Climatology -- Models, Earth sciences - Abstract
Since the early 1990s, Météo-France has used an automatic system combining three numerical models to simulate meteorological parameters, snow cover stratification, and avalanche risk at various altitudes, aspects, and slopes for a number of mountainous regions in France. Given the lack of sufficient directly observed long-term snow data, this SAFRAN-Crocus-MEPRA (SCM) model chain, usually applied to operational avalanche forecasting, has been used to carry out and validate retrospective snow and weather climate analyses for the 1958-2002 period. The SAFRAN 2-m air temperature and precipitation climatology shows that the climate of the French Alps is temperate and is mainly determined by atmospheric westerly flow conditions. Vertical profiles of temperature and precipitation averaged over the whole period for altitudes up to 3000 m MSL show a relatively linear variation with altitude for different mountain areas, with no constraint of that kind imposed by the analysis scheme itself. Over the observation period 1958-2002, the overall trend corresponds to an increase in the annual near-surface air temperature of about 1°C. However, variations are large at different altitudes and for different seasons and regions. This significantly positive trend is most obvious in the 1500-2000-m MSL altitude range, especially in the northwest regions, and exhibits a significant relationship with the North Atlantic Oscillation index over long periods. Precipitation data are more diverse, making it hard to identify clear trends within the high year-to-year variability.
- Published
- 2009
4. High-resolution large-eddy simulations of scalar transport in atmospheric boundary layer flow over complex terrain
- Author
-
Michioka, Takenobu and Chow, Fotini Katopodes
- Subjects
Topographical drawing -- Models, Topographical drawing -- Analysis, Turbulence -- Models, Turbulence -- Analysis, Meteorological research -- Models, Meteorological research -- Analysis, Dynamic meteorology -- Models, Dynamic meteorology -- Analysis, Climatology -- Models, Climatology -- Analysis, Planetary boundary layer -- Models, Planetary boundary layer -- Analysis, Earth sciences - Abstract
This paper presents high-resolution numerical simulations of the atmospheric flow and concentration fields accompanying scalar transport and diffusion from a point source in complex terrain. Scalar dispersion is affected not only by the mean flow but also by turbulent fluxes that affect scalar mixing, meaning that predictions of scalar transport require greater attention to the choice of numerical simulation parameters than is typically needed for mean wind field predictions. Large-eddy simulation is used in a mesoscale setting, providing modeling advantages through the use of robust turbulence models combined with the influence of synoptic flow forcing and heterogeneous land surface forcing. An Eulerian model for scalar transport and diffusion is implemented in the Advanced Regional Prediction System mesoscale code to compare scalar concentrations with data collected during field experiments conducted at Mount Tsukuba, Japan, in 1989. The simulations use horizontal grid resolution as fine as 25 m with up to eight grid nesting levels to incorporate time-dependent meteorological forcing. The results show that simulated ground concentration values contain significant errors relative to measured values because the mesoscale wind typically contains a wind direction bias of a few tens of degrees. Comparisons of simulation results with observations of arc maximum concentrations, however, lie within acceptable error bounds. In addition, this paper investigates the effects on scalar dispersion of computational mixing and lateral boundary conditions, which have received little attention in the literature, in particular for high-resolution applications. The choice of lateral boundary condition update interval is found not to affect time-averaged quantities but to affect the scalar transport strongly. More frequent updates improve the simulated ground concentration values.
In addition, results show that the computational mixing coefficient must be set to as small a value as possible to improve scalar dispersion predictions. The predicted concentration fields are compared as the horizontal grid resolution is increased from 190 m to as fine as 25 m. The difference observed in the results at these levels of grid refinement is found to be small relative to the effects of computational mixing and lateral boundary updates.
- Published
- 2008
5. Deep-water formation in the Adriatic Sea: Interannual simulations for the years 1979-1999
- Author
-
Mantziafou, A. and Lascaratos, A.
- Subjects
Climatology -- Analysis, Climatology -- Models, Earth sciences - Abstract
Full text available at: http://dx.doi.org/10.1016/j.dsr.2008.06.005 Keywords: Adriatic Sea; Numerical modeling; Deep-water formation; Interannual variability Abstract: Simulations of the interannual variability of the deep-water formation processes in the Adriatic basin for the years 1979-1999 are performed using the Princeton Ocean Model (POM) with a ≈10 km grid and 6-h atmospheric forcing provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). Focus is given to the pattern and amplitude of the interannual variability of the water mass formation processes in terms of deep-water formation sites, rates, and characteristics. The connection of this variability with the interannual variability of (a) the atmospheric forcing and (b) the open boundary characteristics is investigated. The model performance is tested against the few available observations of deep-water formation processes inside the basin and generally shows good agreement with the main characteristics of the mixed layer and the deep-water formation rates. A strong interannual variability is found in the calculated deep-water formation rate of the basin, which is highly dependent on the interannual variability of the atmospheric forcing. This rate becomes three times larger than climatology during the biennium 1992-1993, and in all years it is associated mostly with events of enhanced buoyancy loss rather than with the mean winter buoyancy fields. Advection through the open boundary plays an important role in determining the characteristics and volume of deep water formed inside the Adriatic basin, but it is the high-frequency atmospheric forcing that determines the amplitude of the interannual variability of deep-water formation rates.
Author Affiliation: Department of Applied Physics, University of Athens, Athens, Greece Article History: Received 24 August 2006; Revised 11 June 2008; Accepted 15 June 2008
- Published
- 2008
6. Simulated water table and soil moisture climatology over North America
- Author
-
Miguez-Macho, Gonzalo, Li, Haibin, and Fan, Ying
- Subjects
Climatology -- Models, Water, Underground -- Properties, Soil moisture -- Influence, Computer-generated environments -- Methods, Computer simulation -- Methods, Business, Earth sciences - Abstract
We demonstrate the link between two terrestrial water reservoirs, root-zone soil moisture and groundwater, and contribute our simulated climatological water table depth and soil moisture fields over North America to the community. Because soil moisture strongly influences land-atmosphere fluxes, its link to the groundwater may affect the spatiotemporal variability of these fluxes. Here we simulate the climatological water table depth at 30-arc-s resolution as constrained by U.S. Geological Survey site observations. Then, we use this water table climatology as the lower boundary for the soil, and the variable infiltration capacity (VIC)-simulated land surface flux climatology as the upper boundary, to calculate the soil moisture climatology (SMC) at 14 depths (down to 4 m). Comparisons with VIC, the North American Regional Reanalysis (NARR), and observations suggest the following: first, SMC is wetter than VIC, despite their having identical land surface fluxes; second, while climate is the dominant signature in NARR and VIC, the water table manifests itself in SMC, with wet soil over the shallow water table; third, while soils in VIC and NARR get drier with depth, soils in SMC get wetter in regions of a shallow water table; and last, SMC has the highest root-zone (top 2 m) total soil water storage. These differences may have implications for climate modeling. We make our simulation results available to any interested researcher, for applications such as model initialization and intercomparison.
- Published
- 2008
7. The WCRP CMIP3 multimodel dataset: a new era in climate change research
- Author
-
Meehl, Gerald A., Covey, Curt, Delworth, Thomas, Latif, Mojib, McAvaney, Bryant, Mitchell, John F.B., Stouffer, Ronald J., and Taylor, Karl E.
- Subjects
Climatic changes -- Research, Ocean-atmosphere interaction -- Models, Atmospheric circulation -- Models, Climatology -- International aspects, Climatology -- Models, Business, Earth sciences, World Climate Research Programme -- Powers and duties - Abstract
A coordinated set of global coupled climate model [atmosphere-ocean general circulation model (AOGCM)] experiments for twentieth- and twenty-first-century climate, as well as several climate change commitment and other experiments, was run by 16 modeling groups from 11 countries with 23 models for assessment in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4). Since the assessment was completed, output from another model has been added to the dataset, so the participation is now 17 groups from 12 countries with 24 models. This effort, as well as the subsequent analysis phase, was organized by the World Climate Research Programme (WCRP) Climate Variability and Predictability (CLIVAR) Working Group on Coupled Models (WGCM) Climate Simulation Panel, and constitutes the third phase of the Coupled Model Intercomparison Project (CMIP3). The dataset is called the WCRP CMIP3 multimodel dataset, and represents the largest and most comprehensive international global coupled climate model experiment and multimodel analysis effort ever attempted. As of March 2007, the Program for Climate Model Diagnostics and Intercomparison (PCMDI) has collected, archived, and served roughly 32 TB of model data. With oversight from the panel, the multimodel data were made openly available from PCMDI for analysis and academic applications. Over 171 TB of data have been downloaded by the more than 1000 registered users to date. Over 200 journal articles, based in part on the dataset, have been published so far. Though initially aimed at the IPCC AR4, this unique and valuable resource will continue to be maintained for at least the next several years. Never before has such an extensive set of climate model simulations been made available to the international climate science community for study.
The ready access to the multimodel dataset opens up these types of model analyses to researchers, including students, who previously could not obtain state-of-the-art climate model output, and thus represents a new era in climate change research. As a direct consequence, these ongoing studies are increasing the body of knowledge regarding our understanding of how the climate system currently works, and how it may change in the future.
- Published
- 2007
8. Angular momentum conservation and gravity wave drag parameterization: implications for climate models
- Author
-
Shaw, Tiffany A. and Shepherd, Theodore G.
- Subjects
Gravity waves -- Research, Climatology -- Models, Climatology -- Research, Earth sciences, Science and technology - Abstract
The robustness of the parameterized gravity wave response to an imposed radiative perturbation in the middle atmosphere is examined. When momentum is conserved and for reasonable gravity wave drag parameters, the response to a polar cooling induces polar downwelling above the region of the imposed cooling, with consequent adiabatic warming. This response is robust to changes in the gravity wave source spectrum, background flow, gravity wave breaking criterion, and model lid height. When momentum is not conserved, either in the formulation or in the implementation of the gravity wave drag parameterization, the response becomes sensitive to the above-mentioned factors, in particular to the model lid height. The spurious response resulting from nonconservation is found to be nonnegligible in terms of the total gravity wave drag-induced downwelling.
- Published
- 2007
9. Nonlinearities, feedbacks and critical thresholds within the earth's climate system
- Author
-
Rial, Jose A., Pielke, Roger A., Sr., Beniston, Martin, Clausen, Martin, Canadell, Josep, Cox, Peter, Held, Herman, De Noblet-Ducoudre, Nathalie, Prinn, Ronald, Reynolds, James F., and Salas, Jose D.
- Subjects
Climatology -- Models, Climatology -- Forecasts and trends, Climate -- Models, Climate -- Forecasts and trends, Market trend/market analysis, Earth sciences - Abstract
The Earth's climate system is highly nonlinear: inputs and outputs are not proportional, change is often episodic and abrupt, rather than slow and gradual, and multiple equilibria are the norm. While this is widely accepted, there is a relatively poor understanding of the different types of nonlinearities, how they manifest under various conditions, and whether they reflect a climate system driven by astronomical forcings, by internal feedbacks, or by a combination of both. In this paper, after a brief tutorial on the basics of climate nonlinearity, we provide a number of illustrative examples and highlight key mechanisms that give rise to nonlinear behavior, address scale and methodological issues, suggest a robust alternative to prediction that is based on using integrated assessments within the framework of vulnerability studies and, lastly, recommend a number of research priorities and the establishment of education programs in Earth Systems Science. It is imperative that the Earth's climate system research community embraces this nonlinear paradigm if we are to move forward in the assessment of the human influence on climate.
- Published
- 2004
10. Improvement of microphysical parameterization through observational verification experiment
- Author
-
Stoelinga, Mark T., Hobbs, Peter V., Mass, Clifford F., Locatelli, John D., Colle, Brian A., Houze, Robert A., Jr., Rangno, Arthur L., Bond, Nicholas A., Smull, Bradley F., Rasmussen, Roy M., Thompson, Gregory, and Colman, Bradley R.
- Subjects
Climatology -- Models, Mathematical optimization -- Usage, Numerical weather forecasting -- Methods, Business, Earth sciences - Abstract
Despite continual increases in numerical model resolution and significant improvements in the forecasting of many meteorological parameters, progress in quantitative precipitation forecasting (QPF) has been slow. This is attributable in part to deficiencies in the bulk microphysical parameterization (BMP) schemes used in mesoscale models to simulate cloud and precipitation processes. These deficiencies have become more apparent as model resolution has increased. To address these problems requires comprehensive data that can be used to isolate errors in QPF due to BMP schemes from those due to other sources. These same data can then be used to evaluate and improve the microphysical processes and hydrometeor fields simulated by BMP schemes. In response to the need for such data, a group of researchers is collaborating on a study titled the Improvement of Microphysical Parameterization through Observational Verification Experiment (IMPROVE). IMPROVE has included two field campaigns carried out in the Pacific Northwest: an offshore frontal precipitation study off the Washington coast in January-February 2001, and an orographic precipitation study in the Oregon Cascade Mountains in November-December 2001. Twenty-eight intensive observation periods yielded a uniquely comprehensive dataset that includes in situ airborne observations of cloud and precipitation microphysical parameters; remotely sensed reflectivity, dual-Doppler, and polarimetric quantities; upper-air wind, temperature, and humidity data; and a wide variety of surface-based meteorological, precipitation, and microphysical data. These data are being used to test mesoscale model simulations of the observed storm systems and, in particular, to evaluate and improve the BMP schemes used in such models. These studies should lead to improved QPF in operational forecast models.
- Published
- 2003
11. Multimodel ensembling in seasonal climate forecasting at IRI
- Author
-
Barnston, Anthony G., Mason, Simon J., Goddard, Lisa, DeWitt, David G., and Zebiak, Stephen E.
- Subjects
Atmospheric circulation -- Models, Climatology -- Models, Numerical weather forecasting -- Methods, Business, Earth sciences - Abstract
The International Research Institute (IRI) for Climate Prediction seasonal forecast system is based largely on the predictions of ensembles of several atmospheric general circulation models (AGCMs) forced by two versions of an SST prediction: one consisting of persisted SST anomalies from the current observations, and one of evolving SST anomalies as predicted by a set of dynamical and statistical SST prediction models. Recently, an objective multimodel ensembling procedure has replaced a more laborious and subjective weighting of the predictions of the several AGCMs. Here the skills of the multimodel predictions produced retrospectively over the first 4 years of IRI forecasts are examined and compared with the skills of the more subjectively derived forecasts actually issued. The multimodel ensemble predictions are generally found to be an acceptable replacement, although the precipitation forecasts do benefit from the inclusion of empirical forecast tools. Planned pattern-level model output statistics (MOS) corrections for systematic biases in the AGCM forecasts may make them sufficient in their own right.
- Published
- 2003
12. Systematic strategies for stochastic mode reduction in climate
- Author
-
Majda, Andrew J., Timofeyev, Ilya, and Vanden-Eijnden, Eric
- Subjects
Climatology -- Models, Dynamic meteorology -- Models, Earth sciences, Science and technology - Abstract
A systematic strategy for stochastic mode reduction is applied here to three prototype 'toy' models with nonlinear behavior mimicking several features of low-frequency variability in the extratropical atmosphere. Two of the models involve explicit stable periodic orbits and multiple equilibria in the projected nonlinear climate dynamics. The systematic strategy has two steps: stochastic consistency and stochastic mode elimination. Both aspects of the mode reduction strategy are tested in an a priori fashion in the paper. In all three models the stochastic mode elimination procedure applies in a quantitative fashion for moderately large values of ε ≈ 0.5 or even ε ≈ 1, where the parameter ε roughly measures the ratio of correlation times of unresolved variables to resolved climate variables, even though the procedure is only justified mathematically for ε ≪ 1. The results developed here provide some new perspectives on both the role of stable nonlinear structures in projected nonlinear climate dynamics and the regression fitting strategies for stochastic climate modeling. In one example, a deterministic system with 102 degrees of freedom has an explicit stable periodic orbit for the projected climate dynamics in two variables; however, the complete deterministic system has instead a probability density function with two large isolated peaks on the 'ghost' of this periodic orbit, and correlation functions that only weakly 'shadow' this periodic orbit. Furthermore, all of these features are predicted in a quantitative fashion by the reduced stochastic model in two variables derived from the systematic theory; this reduced model has multiplicative noise and augmented nonlinearity.
In a second deterministic model with 101 degrees of freedom, it is established that stable multiple equilibria in the projected climate dynamics can be either relevant or completely irrelevant in the actual dynamics for the climate variable depending on the strength of nonlinearity and the coupling to the unresolved variables. Furthermore, all this behavior is predicted in a quantitative fashion by a reduced nonlinear stochastic model for a single climate variable with additive noise, which is derived from the systematic mode reduction procedure. Finally, the systematic mode reduction strategy is applied in an idealized context to the stochastic modeling of the effect of mountain torque on the angular momentum budget. Surprisingly, the strategy yields a nonlinear stochastic equation for the large-scale fluctuations, and numerical simulations confirm significantly improved predicted correlation functions from this model compared with a standard linear model with damping and white noise forcing.
- Published
- 2003
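The parameter ε in the abstract above is the ratio of correlation times of unresolved to resolved variables. A toy illustration of estimating such a ratio, using two Ornstein-Uhlenbeck processes with prescribed damping rates and a simple e-folding estimator (both of which are assumptions for illustration, not the paper's models), might look like:

```python
import numpy as np

# Estimate eps = (correlation time of fast variable) / (correlation time of slow
# variable) from simulated series. Damping rates gamma give true times 1/gamma.
rng = np.random.default_rng(42)
dt, n = 0.01, 200_000
gamma_slow, gamma_fast = 1.0, 10.0      # true correlation times: 1.0 and 0.1

def ou_path(gamma):
    """Euler-Maruyama simulation of dx = -gamma*x dt + dW."""
    x = np.empty(n)
    x[0] = 0.0
    noise = rng.standard_normal(n - 1) * np.sqrt(dt)
    for i in range(n - 1):
        x[i + 1] = x[i] - gamma * x[i] * dt + noise[i]
    return x

def efold_time(x):
    """First lag at which the sample autocorrelation drops below 1/e."""
    for lag in range(1, 2000):
        if np.corrcoef(x[:-lag], x[lag:])[0, 1] < np.exp(-1):
            return lag * dt
    return np.nan

eps = efold_time(ou_path(gamma_fast)) / efold_time(ou_path(gamma_slow))
print(f"estimated eps ≈ {eps:.2f}")     # should come out near 0.1
```

Here ε ≈ 0.1 is comfortably in the regime where mode elimination is mathematically justified; the abstract's point is that the procedure keeps working quantitatively even at ε ≈ 0.5-1.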
13. Unified treatment of thermodynamic and optical variability in a simple model of unresolved low clouds
- Author
-
Jeffery, Christopher A. and Austin, Philip H.
- Subjects
Atmospheric thermodynamics -- Research, Cloud physics -- Research, Climatology -- Models, Earth sciences, Science and technology - Abstract
Comparative studies of global climate models have long shown a marked sensitivity to the parameterization of cloud properties. Early attempts to quantify this sensitivity were hampered by diagnostic schemes that were inherently biased toward the contemporary climate. Recently, prognostic cloud schemes based on an assumed statistical distribution of subgrid variability replaced the older diagnostic schemes in some models. Although the relationship between unresolved variability and mean cloud amount is known in principle, a corresponding relationship between ice-free low cloud thermodynamic and optical properties is lacking. The authors present a simple, analytically tractable statistical optical depth parameterization for boundary layer clouds that links mean reflectivity and emissivity to the underlying distribution of unresolved fluctuations in model thermodynamic variables. To characterize possible impacts of this parameterization on the radiative budget of a large-scale model, they apply it to a zonally averaged climatology, illustrating the importance of a coupled treatment of subgrid-scale condensation and optical variability. They derive analytic expressions for two response functions that characterize two potential low cloud feedback scenarios in a warming climate.
- Published
- 2003
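The "assumed statistical distribution of subgrid variability" approach in the abstract above has a well-known minimal form: with a Gaussian distribution of the subgrid saturation deficit (a Sommeria-Deardorff/Mellor-type closure, not necessarily the authors' exact scheme), the cloud fraction is the probability that the deficit is positive. A sketch:

```python
import math

def cloud_fraction(qt_mean, qs, sigma_s):
    """Cloud fraction for a Gaussian subgrid distribution of total water.

    qt_mean : grid-mean total water (kg/kg)
    qs      : saturation value (kg/kg)
    sigma_s : subgrid std. dev. of the saturation deficit (kg/kg, assumed known)
    """
    Q1 = (qt_mean - qs) / sigma_s            # normalized saturation deficit
    # P(subgrid total water > saturation) for a Gaussian distribution
    return 0.5 * math.erfc(-Q1 / math.sqrt(2.0))

# Grid box exactly saturated on the mean -> half the box is cloudy
print(cloud_fraction(8.0e-3, 8.0e-3, 5.0e-4))   # 0.5
```

The paper's contribution is to extend this thermodynamic closure so that the same subgrid distribution also yields mean optical depth, reflectivity, and emissivity.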
14. Climatological effects of orography and land--sea heating contrasts on the gravity wave--driven circulation of the mesosphere
- Author
-
Becker, Erich and Schmitz, Gerhard
- Subjects
Mesosphere -- Environmental aspects, Climatology -- Models, Atmospheric circulation -- Models, Earth sciences, Science and technology - Abstract
On the basis of permanent January simulations performed with an idealized general circulation model for the troposphere and middle atmosphere, the sensitivity of the general circulation to orographic and thermal forcing of large-scale stationary waves is assessed. Gravity waves are parameterized following Lindzen's saturation theory. Up to the stratopause, present model results coincide with earlier estimates, confirming that the boreal winter zonal-mean climate depends crucially on the combined action of orography and land-sea heating contrasts. Since, in turn, the propagation and breakdown of internal gravity waves is strongly modulated by the background horizontal winds, the mesospheric response to stationary wave forcing turns out to be substantial as well. It is found that in the climatological zonal mean, a warmer polar night stratosphere is accompanied by lower temperatures in the mesosphere up to about 80 km. The temperature signal induced by stationary wave forcing changes sign again in the upper mesosphere/lower thermosphere, which, except for the polar night region, is globally heated by 10-20 K. This heating is weaker if the assumed Prandtl number for gravity wave-induced vertical diffusion is raised from 3 to 6. The thermal effects in the mesosphere are interpreted in terms of a global weakening of the summer-to-winter-pole residual circulation that occurs along with strongly diminished gravity wave drag, turbulent diffusion, and energy deposition in the northern winter mesosphere. The weakening of gravity wave effects in the presence of quasi-stationary planetary waves is dominated by reduced efficiency of gravity wave saturation in the mesosphere. That is, due to the more variable and, on average, reduced planetary-scale horizontal winds, gravity wave saturation is distributed over a greater depth and drops in altitude. On the other hand, enhanced critical-level absorption of gravity waves in the lower stratosphere plays at most a secondary role.
Furthermore, present model results suggest that the winter-summer asymmetry in gravity wave breakdown, which is well known from the northern mesosphere, may be absent or even reversed in the southern mesosphere.
- Published
- 2003
15. Mapping and pseudoinverse algorithms for ocean data assimilation
- Author
-
Fieguth, Paul W., Menemenlis, Dimitris, and Fukumori, Ichiro
- Subjects
Climatology -- Models, Remote sensing -- Usage, Remote sensing -- Technology application, Algorithms -- Usage, Oceanographic research -- Technology application, Algorithm, Technology application, Business, Earth sciences, Electronics and electrical industries - Abstract
Among existing ocean data assimilation methodologies, reduced-state Kalman filters are a widely studied compromise between resolution, optimality, error specification, and computational feasibility. In such reduced-state filters, the measurement update takes place on a coarser grid than that of the general circulation model (GCM); therefore, these filters require mapping operators from the GCM grid to the reduced state and vice versa. The general requirements are that the state-reduction and interpolation operators be pseudoinverses of each other, that the coarse state define a closed dynamical system, that the mapping operations be insensitive to noise, and that they be appropriate for regions with irregular coastlines and bathymetry. In this paper, we describe three efficient algorithms for computing the pseudoinverse: a fast Fourier transform algorithm that serves for illustration purposes, an exact implicit method that is recommended for most applications, and an efficient iterative algorithm that can be used for the largest problems. The mapping performance of 11 interpolation kernels is evaluated. Surprisingly, common kernels such as bilinear, exponential, Gaussian, and sinc perform only moderately well. We recommend instead three kernels, smooth, thin-plate, and optimal interpolation, which have superior properties. This study removes the computational bottleneck of mapping and pseudoinverse algorithms and makes possible the application of reduced-state filters to global problems at state-of-the-art resolutions. Index Terms: Climatological models, data assimilation, interscale transforms, pseudoinverse methods, remote sensing, sparse pseudoinverses.
- Published
- 2003
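The requirement in the abstract above, that the state-reduction and interpolation operators be pseudoinverses of each other, can be illustrated on a toy 1D grid pair. The grid sizes and the choice of linear (bilinear-style) interpolation below are arbitrary assumptions; the paper's dense-matrix `pinv` is also only practical at toy sizes, which is exactly why it proposes the implicit and iterative algorithms.

```python
import numpy as np

# Coarse and fine 1D grids; L interpolates coarse -> fine, R reduces fine -> coarse.
n_coarse, refine = 4, 3
n_fine = (n_coarse - 1) * refine + 1
xc = np.linspace(0.0, 1.0, n_coarse)
xf = np.linspace(0.0, 1.0, n_fine)

# Interpolation operator: column j is the fine-grid hat function of coarse node j.
L = np.column_stack([np.interp(xf, xc, np.eye(n_coarse)[j])
                     for j in range(n_coarse)])

# State reduction as the (least-squares) pseudoinverse of interpolation.
R = np.linalg.pinv(L)

# Round trip coarse -> fine -> coarse is exact: R is a left inverse of L.
print(np.allclose(R @ L, np.eye(n_coarse)))   # True
```

Because `L` has full column rank, `pinv` yields an exact left inverse, so any state that originates on the coarse grid survives the fine-grid detour unchanged, which is what lets the coarse state define a closed dynamical system.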
16. How can we advance our weather and climate models as a community?
- Subjects
Weather -- Models, Climatology -- Models, Software -- Evaluation, Business, Earth sciences - Published
- 2002
17. Natural variability of atmospheric temperatures and geomagnetic intensity over a wide range of time scales
- Author
-
Pelletier, Jon D.
- Subjects
Geomagnetism -- Models, Climatology -- Models, Statistical physics -- Analysis, Science and technology - Abstract
The majority of numerical models in climatology and geomagnetism rely on deterministic finite-difference techniques and attempt to include as many empirical constraints as possible on the many processes and boundary conditions applicable to these very complex systems. Despite their sophistication, many of these models are unable to reproduce basic aspects of climatic or geomagnetic dynamics. We show that a simple stochastic model, which treats the flux of heat energy in the atmosphere by convective instabilities with random advection and diffusive mixing, does a remarkable job of matching the observed power spectrum of historical and proxy records for atmospheric temperatures on time scales from one day to one million years (Myr). With this approach, distinct changes in the power-spectral form can be associated with characteristic time scales of ocean mixing and radiative damping. Similarly, a simple model of the diffusion of magnetic intensity in Earth's core, coupled with amplification and destruction of the local intensity, can reproduce the observed 1/f noise behavior of Earth's geomagnetic intensity on time scales from 1 Myr down to 100 yr. In addition, the statistics of the fluctuations in the polarity reversal rate on time scales from 1 Myr to 100 Myr are consistent with the hypothesis that reversals result from variations of the 1/f noise geomagnetic intensity above a certain threshold, suggesting that reversals may be associated with internal fluctuations rather than changes in mantle thermal or magnetic boundary conditions.
- Published
- 2002
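The random-advection limit of the stochastic model above can be checked numerically: integrated white noise is the simplest process with a power-law spectrum, S(f) ∝ 1/f², and a periodogram fit should recover that slope. A minimal Python sketch (the sample sizes and the restriction of the fit to low frequencies are illustrative choices, not taken from the paper):

```python
import numpy as np

def spectral_slope(n=2**14, n_real=50, seed=0):
    """Estimate the log-log power-spectral slope of a random walk.

    A random walk (integrated white noise) has S(f) ~ 1/f**2 at low
    frequencies; averaging periodograms over realizations and fitting
    only the low-frequency band recovers the asymptotic slope of -2.
    """
    rng = np.random.default_rng(seed)
    n_fit = n // 32                        # low-frequency band only
    mean_power = np.zeros(n_fit)
    for _ in range(n_real):
        walk = np.cumsum(rng.standard_normal(n))   # integrated white noise
        power = np.abs(np.fft.rfft(walk)) ** 2
        mean_power += power[1:n_fit + 1]           # skip the f = 0 bin
    mean_power /= n_real
    freqs = np.fft.rfftfreq(n)[1:n_fit + 1]
    slope, _ = np.polyfit(np.log(freqs), np.log(mean_power), 1)
    return slope
```

Fitting only the lowest 1/32 of the frequency band keeps the estimate close to the asymptotic slope of -2; the full model in the paper adds diffusive mixing, which bends the spectrum at the ocean-mixing and radiative-damping time scales.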
18. CLIMATE IMPACT RESPONSE FUNCTIONS: AN INTRODUCTION
- Author
-
Toth, Ferenc L., Cramer, Wolfgang, and Hizsnyik, Eva
- Subjects
Climatic changes -- Models ,Climatic factors -- Models ,Climatology -- Models ,Human beings -- Influence on nature ,Earth sciences - Abstract
The concept of climate impact response function is introduced and placed into the context of integrated assessment models to analyze policy options under climate change constraints. An example of developing such response functions is presented that entails a global model of potential natural vegetation driven by a climate change pattern derived from a general circulation model. A large array of challenging issues is introduced that will be addressed by the set of papers included in this Special Issue.
- Published
- 2000
19. A GCM SIMULATION OF HEAT WAVES, DRY SPELLS, AND THEIR RELATIONSHIPS TO CIRCULATION
- Author
-
Huth, Radan, Kysely, Jan, and Pokorna, Lucie
- Subjects
Moravia -- Natural history ,Heat waves (Meteorology) -- Research ,Droughts -- Research ,Atmospheric temperature -- Research ,Climatology -- Models ,Earth sciences - Abstract
Heat waves and dry spells are analyzed (i) at eight stations in south Moravia (Czech Republic), (ii) in the control ECHAM3 GCM run at the gridpoint closest to the study area, and (iii) in the ECHAM3 GCM run for doubled CO2 concentrations (scenario A) at the same gridpoint (heat waves only). The GCM outputs are validated both against individual station data and areally representative values. In the control run, the heat waves are too long, appear later in the year, peak at higher temperatures, and their numbers are under- (over-)estimated in June and July (in August). The simulated dry spells are too long, and the annual cycle of their occurrence is distorted. Mid-tropospheric circulation and heat waves and dry spells are linked much less tightly in the control climate than in the observed one. Since mid-tropospheric circulation is simulated fairly successfully, we suggest the hypothesis that either the air-mass transformation and local processes are too strong in the model or the simulated advection is too weak. In the scenario A climate, heat waves become a common phenomenon: warming of 4.5 degrees C in summer (the difference between the scenario A and control climates) induces a five-fold increase in the frequency of tropical days and an immense enhancement of the extremity of heat waves. The results of the study underline the need for (i) a proper validation of the GCM output before a climate impact study is conducted and (ii) translation of large-scale information from GCMs into local scales using downscaling and stochastic modelling techniques in order to reduce GCMs' biases.
- Published
- 2000
20. Process-based impact models: now and in the future
- Author
-
Darwin, Roy
- Subjects
Climatology -- Models ,Climatic changes -- Models ,Global warming -- Models ,Earth sciences - Abstract
An evaluation is presented on several climatic change prediction models. These include the Global Impact Model, which fails to identify economic relationships, and the Future Agricultural Resources Model, which provides more information about economic as well as environmental impacts.
- Published
- 2000
21. COUNTRY-SPECIFIC MARKET IMPACTS OF CLIMATE CHANGE
- Author
-
Mendelsohn, Robert, Morrison, Wendy, Schlesinger, Michael E., and Andronova, Natalia G.
- Subjects
Climatic changes -- Economic aspects ,Global warming -- Economic aspects ,Climatology -- Models ,Earth sciences - Abstract
We develop a new climate-impact model, the Global Impact Model (GIM), which combines future scenarios, detailed spatial simulations by general circulation models (GCMs), sectoral features, climate-response functions, and adaptation to generate country-specific impacts by market sector. Estimates are made for three future scenarios, two GCMs, and two climate-response functions - a reduced-form model and a cross-sectional model. Combining empirically based response functions, sectoral data by country, and careful climate forecasts gives analysts a more powerful tool for estimating market impacts. GIM predicts that country-specific results vary, implying that research in this area is likely to be policy-relevant.
- Published
- 2000
22. IDENTIFYING KEY SOURCES OF UNCERTAINTY IN CLIMATE CHANGE PROJECTIONS
- Author
-
Visser, H., Folkert, R.J.M., Hoekstra, J., and De Wolff, J.J.
- Subjects
Climatic changes -- Models ,Global warming -- Models ,Climatology -- Models ,Earth sciences - Abstract
What sources of uncertainty should be included in climate change projections, and what gains can be made if specific sources of uncertainty are reduced through improved research? DIALOGUE, an integrated assessment model, has been used to answer these questions. Central to the approach of DIALOGUE is the concept of parallel modeling, i.e., for each step in the chain from emissions to climate change a number of equivalent models are implemented. The following conclusions are drawn: The key source of uncertainty in global temperature projections appears to be the uncertainty in radiative forcing models. Within this group of models, the uncertainty in the aerosol forcing models is about equal to that of the greenhouse gas forcing models combined; in the latter group CO2 is dominant. The least important source of uncertainty appears to be the gas cycle models; within this group, the role of carbon cycle models is dominant. Uncertainty in global temperature projections has not been treated consistently in the literature. First, uncertainty should be calculated as a product of all uncertainty sources. Second, a particular choice of a base year for global warming calculations influences the ranking of uncertainty. Because of this, a comparison of ranking results across different studies is hampered. We argue that 'pre-industrial' is the best choice for studies on uncertainty. There is a linear relationship between maximum uncertainty in the year 2100 and cumulative emissions of CO2 over the period 1990-2100: higher emissions lead to more uncertainty.
- Published
- 2000
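The claim above that uncertainty should be calculated as a product of all uncertainty sources can be illustrated with a small Monte Carlo sketch. The central warming and the spreads of the two multiplicative factors below are hypothetical numbers chosen for illustration, not values from DIALOGUE:

```python
import numpy as np

def combined_spread(n=100_000, seed=1):
    """Monte Carlo combination of independent multiplicative uncertainties.

    Hypothetical illustration: a central warming estimate is scaled by
    independent factors for radiative forcing (dominant source) and for
    the gas-cycle models (minor source).  The joint spread exceeds that
    of any single source, which is why the sources are combined
    multiplicatively rather than examined one at a time.
    """
    rng = np.random.default_rng(seed)
    central = 3.0                              # K, hypothetical central warming
    forcing = rng.lognormal(0.0, 0.25, n)      # dominant uncertainty source
    gas_cycle = rng.lognormal(0.0, 0.10, n)    # minor uncertainty source
    only_forcing = central * forcing
    combined = central * forcing * gas_cycle
    return np.std(only_forcing), np.std(combined)
```

The spread of the combined distribution is larger than that of the dominant source alone, and the gap grows with the number of sources included.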
23. Data signatures and visualization of scientific data sets
- Author
-
Wong, Pak Chung, Foote, Harlan, Leung, Ruby, Adams, Dan, and Thomas, Jim
- Subjects
United States. Department of Energy. Pacific Northwest National Laboratory -- Research ,Data structures -- Design and construction ,Science ,Computer graphics -- Usage ,Climatology -- Models ,Mathematical models -- Usage ,Floods -- Models ,Winds -- Models ,Rain and rainfall -- Models ,Computer simulation -- Methods - Published
- 2000
24. SCALING AND DEMOGRAPHIC ISSUES IN GLOBAL CHANGE RESEARCH: THE GREAT PLAINS, 1880-1990
- Author
-
Gutmann, Myron P.
- Subjects
Great Plains -- Environmental aspects ,Demography -- Research ,Population density -- Environmental aspects ,Human beings -- Influence of climate ,Climatology -- Models ,Earth sciences - Abstract
This paper is about the scales at which demographic data are available, and demographic research is conducted, and their implications for understanding the relationship between population and environment. It describes a multi-disciplinary project designed to study the long-term relationship between population, land use, and environment in the U.S. Great Plains. The paper begins with a discussion of the scales at which data are readily available for demographic, agricultural land use, and environmental data for the United States. Some of these data can be obtained at relatively high resolutions, but the lowest common denominator for many of the long term data is the county, a fairly large unit. I then discuss the advantages and disadvantages of the different scales available. The third section of the paper uses county net migration as an example of research that can be done, and the scale at which it is effective. The example shows that the county is an effective unit for the study of migration, and that the research results are significant. The conclusion suggests that the study of population processes in an environmental and economic context is appropriate at the county level for some questions, but that scaling the results to larger units may be difficult because of the need to be certain about the contexts in which those processes take place. We probably should not study net migration at the national or continental scale, but aggregating county-level or regional studies to a larger scale may be successful.
- Published
- 2000
25. MEASURING ENVIRONMENTAL VALUES AND ENVIRONMENTAL IMPACTS: GOING FROM THE LOCAL TO THE GLOBAL
- Author
-
Rothman, Dale S.
- Subjects
Environmental impact analysis -- Methods ,Environmental monitoring -- Methods ,Climatology -- Models ,Environmental sciences -- Ethical aspects ,Scaling laws (Statistical physics) -- Methods ,Earth sciences - Abstract
Measuring the impact of global change depends critically upon our ability to gauge the impacts of changes at the local and individual level. There are strong philosophical and practical concerns in measuring environmental, health, social, and economic impacts even at this level. Aggregating these across commodities, individuals, sectors, regions, and time presents further difficulties. Overcoming these obstacles oftentimes runs afoul of our inability to make interpersonal and intergenerational comparisons. This paper walks through the process of economic valuation from the very local scale of individual choices up to global aggregations across goods and services, individuals, space, and time. Along the way, important assumptions, particularly those related to analyses across different scales and aggregation from lower to higher scales will be emphasized. The effects of making different assumptions will be noted. Fundamental questions about the 'scientific' versus 'political/moral/ethical' nature of valuation will also be highlighted.
- Published
- 2000
26. SCALING ISSUES IN FOREST SUCCESSION MODELLING
- Author
-
Bugmann, Harald, Lindner, Marcus, Lasch, Petra, Flechsig, Michael, Ebert, Beatrix, and Cramer, Wolfgang
- Subjects
Forest dynamics -- Models ,Forest ecology -- Research ,Climatology -- Models ,Forest microclimatology -- Models ,Scaling laws (Statistical physics) -- Methods ,Earth sciences - Abstract
This paper reviews scaling issues in forest succession modelling, focusing on forest gap models. Two modes of scaling are distinguished: (1) implicit scaling, i.e. taking scale-dependent features into account while developing model equations, and (2) explicit scaling, i.e. using procedures that typically involve numerical simulation to scale up the response of a local model in space and/or time. Special attention is paid to spatial upscaling methods, and downscaling is covered with respect to deriving scenarios of climatic change to drive gap models in impact assessments. When examining the equations used to represent ecological processes in forest gap models, it becomes evident that implicit scaling is relevant, but has not always been fully taken into consideration. A categorization from the literature is used to distinguish four methods for explicit upscaling of ecological models in space: (1) Lumping, (2) Direct extrapolation, (3) Extrapolation by expected value, and (4) Explicit integration. Examples from gap model studies are used to elaborate the potential and limitations of these methods, showing that upscaling to areas as large as 30,000 km² is possible, given that there are no significant disturbances such as fires or insect outbreaks at the landscape scale. Regarding temporal upscaling, we find that it is important to consider migrational lags, i.e. limited availability of propagules, if one wants to assess the transient behaviour of forests in a changing climate, specifically with respect to carbon storage and the associated feedbacks to the atmospheric CO2 content. Regarding downscaling, the ecological effects of different climate scenarios for the year 2100 were compared at a range of sites in central Europe.
The derivation of the scenarios is based on (1) imposing GCM grid-cell average changes of temperature and precipitation on the local weather records; (2) a qualitative downscaling technique applied by the IPCC for central and southern Europe; and (3) statistical downscaling relating large-scale circulation patterns to local weather records. Widely different forest compositions may be obtained depending on the local climate scenario, suggesting that the downscaling issue is quite important for assessments of the ecological impacts of climatic change on forests.
- Published
- 2000
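Method (3) above, statistical downscaling, amounts to fitting a transfer function from a large-scale predictor to a local weather record. A least-squares sketch on synthetic data (the gain, offset, and noise level are hypothetical, chosen only to show the fit recovering them):

```python
import numpy as np

def fit_downscaling(seed=2, n=200):
    """Least-squares transfer function from a large-scale predictor
    (e.g. a GCM grid-cell temperature anomaly) to a local record.

    Synthetic data: the local anomaly follows the large-scale anomaly
    with gain 1.5 and offset 0.4 plus local noise; the regression
    recovers these coefficients, which can then be applied to scenario
    output to produce local climate scenarios.
    """
    rng = np.random.default_rng(seed)
    large_scale = rng.standard_normal(n)                 # grid-cell anomaly
    local = 0.4 + 1.5 * large_scale + 0.2 * rng.standard_normal(n)
    X = np.column_stack([np.ones(n), large_scale])
    coef, *_ = np.linalg.lstsq(X, local, rcond=None)
    return coef                                          # [offset, gain]
```

Real applications regress on circulation patterns (many predictors) rather than a single grid-cell series, but the least-squares structure is the same.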
27. POTENTIAL SCALE-RELATED PROBLEMS IN ESTIMATING THE COSTS OF CO2 MITIGATION POLICIES
- Author
-
Green, Christopher
- Subjects
Scaling laws (Statistical physics) -- Methods ,Carbon dioxide -- Environmental aspects ,Fossil fuels -- Environmental aspects ,Climatology -- Models ,Alternative energy sources -- Research ,Earth sciences - Abstract
The scale-related problem addressed here relates to a difficulty in substituting away from fossil fuels as part of a policy designed to mitigate climate change. The replacement of fossil fuels by renewable forms of energy is a widely advocated means of reducing the build-up of greenhouse gases in the atmosphere. However, the substitution, on a large scale, of renewable, non-fossil fuel energy sources for fossil fuels requires using vast amounts of land to produce energy. It is shown that, with the exception of nuclear energy, almost all non-fossil fuel energy sources are highly land-using, or land-intensive. In particular, the widespread substitution of renewables such as biomass, wind, solar, and hydro for fossil fuels would require adapting large amounts of land to energy production, land which may have good alternative uses. Thus, the economic feasibility of producing, globally, relatively small amounts of renewable energies is not a good indicator of the feasibility of producing them on a large scale. This implies that substantial reduction in the use of fossil fuels requires the discovery and development of new non-land-intensive energy technologies.
- Published
- 2000
28. UPSCALING TROPICAL DEFORESTATION: IMPLICATIONS FOR CLIMATE CHANGE
- Author
-
O'Brien, Karen L.
- Subjects
Scaling laws (Statistical physics) -- Methods ,Deforestation -- Research ,Rain forest ecology -- Research ,Climatology -- Models ,Earth sciences - Abstract
This article examines the implications of upscaling tropical deforestation for climate change. In this case, upscaling refers to the extrapolation and aggregation of deforestation to the grid scale that is used in global climate models (GCMs). The upscaling of deforestation emphasizes the extent of forest loss, and assumes that deforestation is a homogeneous and instantaneous process. The structure of deforested landscapes is usually disregarded in 'upscaled' experiments, and the intensity of deforestation is seldom considered. Consequently, the atmospheric response to a heterogeneous surface is not addressed. Furthermore, climatically significant soil and vegetation parameters associated with complex and dynamic deforested landscapes are ignored. These factors underscore the need for more realistic representations of tropical deforestation in modeling studies. Several recent attempts to address the issue of scale in deforestation studies are described in the article.
- Published
- 2000
29. SCALING ECOLOGICAL DYNAMICS: SELF-ORGANIZATION, HIERARCHICAL STRUCTURE, AND ECOLOGICAL RESILIENCE
- Author
-
Peterson, Garry D.
- Subjects
Climatology -- Models ,Ecological research -- Models ,Biotic communities -- Research ,Scaling laws (Statistical physics) -- Methods ,Earth sciences - Abstract
Assessing impacts of global change is complicated by the problems associated with translating models and data across spatial and temporal scales. One of the major problems of ecological scaling is the dynamic, self-organized nature of ecosystems. Ecological organization emerges from the interaction of structures and processes operating at different scales. The resilience of ecological organization to changes in key cross-scale processes can be used to assess the contexts within which scaling methods function well, need adjustment, and break down.
- Published
- 2000
30. UPSCALING IN GLOBAL CHANGE RESEARCH
- Author
-
Harvey, L.D. Danny
- Subjects
Climatology -- Models ,Climatic factors -- Evaluation ,Climatic changes -- Models ,Scaling laws (Statistical physics) -- Methods ,Earth sciences - Abstract
This paper reviews the problems of upscaling that arise, in the context of global change research, in a wide variety of disciplines in the physical and social sciences. Upscaling is taken to mean the process of extrapolating from the site-specific scale at which observations are usually made or at which theoretical relationships apply, to the smallest scale that is resolved in global-scale models. Upscaling is pervasive in global change research, although in some cases it is done implicitly. A number of conceptually distinct, fundamental causes of upscaling problems are identified and are used to classify the upscaling problems that have been encountered in different disciplines. A variety of solutions to the upscaling problems have been developed in different disciplines, and these are compared here. Improper upscaling can dramatically alter model simulation results in some cases. A consideration of scaling problems across diverse disciplines reveals a number of interesting conceptual similarities among disciplines whose practitioners might otherwise not communicate with each other. Upscaling raises a number of important questions concerning predictability and reliability in global change research, which are discussed here. There is a clear need for more research into the circumstances in which simple upscaling is not appropriate, and to develop or refine techniques for upscaling.
- Published
- 2000
31. LIMITATIONS OF USING A COARSE RESOLUTION MODEL TO ASSESS THE IMPACT OF CLIMATE CHANGE ON SEA ICE IN HUDSON BAY
- Author
-
Gough, William A. and Allakhverdova, Tatiana
- Subjects
Hudson Bay -- Natural history ,Sea ice -- Environmental aspects ,Climatic changes -- Environmental aspects ,Climatic factors -- Models ,Ocean-atmosphere interaction -- Models ,Climatology -- Models ,Geography - Abstract
The simulation of sea ice in a coarse-resolution ocean general circulation model is examined in Hudson Bay and surrounding waters. Sea-ice distribution and duration compared well to climatological values, although ice thickness is undersimulated, as it is in other modelling work. In Hudson Bay, ice thickness variation was dominated by the atmospheric forcing, as shown by the symmetric response of ice thickness to warming and cooling scenarios. Below-ice heat fluxes play a more significant role in Foxe Basin and Baffin Bay, where they mitigate air-ice heat loss by as much as 40 percent, thus limiting ice thickness and duration. The below-ice heat flux decreases by 23 percent for the region of study (Hudson Bay, Foxe Basin, Baffin Bay, and Labrador Sea) for a global 3 degrees C cooling and increases by 9 percent for a 3 degrees C global warming. This asymmetric response is attributed to the ocean's asymmetric response to warming and cooling scenarios. Inasmuch as Hudson Bay is dominated by atmospheric forcing rather than under-ice heat, as these results indicate, coarse-resolution models may be useful in assessing the impact of change. However, the necessary reconfiguration of the model grid renders results from Foxe Basin and Hudson Strait less credible. Key Words: Hudson Bay, sea ice, climate change, climate modelling, ocean modelling
- Published
- 1999
32. The Role of Carbonates in the Evolution of Early Martian Oceans
- Author
-
Morse, John W. and Marion, Giles M.
- Subjects
Mars (Planet) -- Natural history ,Climatology -- Models ,Chalk -- Analysis ,Alkalic igneous rocks -- Analysis ,Earth sciences - Abstract
The authors examine the evidence for the presence of liquid water on the surface of Mars. Topics include atmosphere, weather, and surface temperatures.
- Published
- 1999
33. Modified delta-Eddington approximation for solar reflection, transmission, and absorption calculations
- Author
-
Qiu, Jinhuan
- Subjects
Scattering (Physics) -- Research ,Atmospheric circulation -- Research ,Climatology -- Models ,Earth sciences ,Science and technology - Abstract
The fractional factor f of δ-function scaling in the δ-Eddington approximation modifies the fractional scattering into the forward peak. As shown in this paper, reasonably choosing the factor f can yield a great improvement of transmission, reflection, and absorption calculations in the condition of optical depth τ ≤ 1. Based on this fact, a modified δ-Eddington approximation is empirically and mathematically developed using a parameterization model of the factor f that mainly depends on the asymmetry factor g0, total optical depth τ, single-scattering albedo ω, (ground) surface reflectance A, and cosine of the solar zenith angle μ0. There are 69,120 sets of comparative numerical tests, covering seven aerosol and two cloud size distributions, as well as three Henyey-Greenstein phase functions. Among the existing two-stream approximations, δ-Eddington generally has better transmission, reflection, and absorption accuracy for τ ≤ 1. In an average sense, under the conditions A ≤ 0.6, τ ≤ 1, 0.1 ≤ μ0 ≤ 1.0, and [Mathematical Expression Omitted], the modified δ-Eddington approximation can reduce transmission, reflection, and absorption errors by a factor of about 2 compared with the results of the δ-Eddington. For the conservative atmosphere, a much greater improvement of transmission and reflection accuracy is obtained.
- Published
- 1999
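For reference, the underlying δ-scaling transformation that the factor f enters (Joseph, Wiscombe, and Weinman 1976) can be written in a few lines; the classic choice f = g² is what the parameterized f in this paper replaces:

```python
def delta_scale(tau, omega, g, f=None):
    """Delta-scaling of optical properties.

    A fraction f of the scattered energy is treated as unscattered
    forward transmission, and the optical depth tau, single-scattering
    albedo omega, and asymmetry factor g are rescaled accordingly.
    The default f = g**2 is the classic delta-Eddington choice; the
    modified scheme instead parameterizes f(g0, tau, omega, A, mu0).
    """
    if f is None:
        f = g * g                                  # classic delta-Eddington
    tau_s = (1.0 - omega * f) * tau
    omega_s = (1.0 - f) * omega / (1.0 - omega * f)
    g_s = (g - f) / (1.0 - f)
    return tau_s, omega_s, g_s
```

With (τ, ω, g) = (1.0, 0.9, 0.85) the scaled values are roughly (0.35, 0.71, 0.46): the forward peak is folded into the direct beam, leaving a thinner, less asymmetric effective layer for the two-stream solver.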
34. Testing climate models: an approach
- Author
-
Goody, Richard, Anderson, James, and North, Gerald
- Subjects
Climatology -- Models ,Business ,Earth sciences - Abstract
The scientific merit of decadal climate projections can only be established by means of comparisons with observations. Testing of models that are used to predict climate change is of such importance that no single approach will provide the necessary basis to analyze systematic errors and to withstand critical analysis. Appropriate observing systems must be relevant, global, precise, and calibratable against absolute standards. This paper describes two systems that satisfy these criteria: spectrometers that can measure thermal brightness temperatures with an absolute accuracy of 0.1 K and a spectral resolution of 1 cm⁻¹, and radio occultation measurements of refractivity using satellites of the GPS positioning system, which give data of similar accuracy. Comparison between observations and model predictions requires an array of carefully posed tests. There are at least two ways in which either of these data systems can be used to provide strict, objective tests of climate models. The first looks for the emergence from the natural variability of a predicted climate 'fingerprint' in data taken on different occasions. The second involves the use of high-order statistics to test those interactions that drive the climate system toward a steady state. A correct representation of these interactions is essential for a credible climate model. A set of climate model tests is presented based upon these observational and theoretical ideas. It is an approach that emphasizes accuracy, exposes systematic errors, and is focused and of low cost. It offers a realistic hope for resolving some of the contentious arguments about global change.
- Published
- 1998
35. Middle atmosphere climatologies from the troposphere-stratosphere configuration of the UKMO's unified model
- Author
-
Butchart, Neal and Austin, John
- Subjects
United Kingdom. Met Office -- Models ,Climatology -- Models ,Troposphere -- Environmental aspects ,Stratosphere -- Environmental aspects ,Earth sciences ,Science and technology - Abstract
A climatology of the middle atmosphere is determined from 11-yr integrations of the U.K. Meteorological Office Unified Model and compared with 18 years of satellite observations and 5 years of data assimilation fields. The model has an upper boundary at 0.1 mb, and above 20 mb uses Rayleigh friction as a substitute for gravity wave drag. Many of the results are, however, found to be relatively insensitive to enhancing the damping above 0.3 mb. As with most general circulation models, the polar night jet in both hemispheres is too strong and does not have the observed equatorward slope with height. The model suffers from the common 'cold pole' problem and, apart from a local warm pool centered just below 100 mb in northern high latitudes in January, and another at about 30 mb at 70°S in July, has a cold bias throughout the stratosphere. At the level where polar stratospheric clouds occur, the temperature bias is about -4 K in the Northern Hemisphere and up to +6 K in the Southern Hemisphere. For the majority of the southern winters, local minimum temperatures in the lower stratosphere agree well with observations but in some years the behavior is more like the Northern Hemisphere with values rising rapidly in late winter. This feature of the simulation is also seen in the South Pole temperatures at 10 mb with midwinter warmings occurring in two of the years. At 10 mb, midwinter warming behavior at the North Pole is quite well reproduced, as is the annual cycle in extratropical circulation. In the Tropics, there is no quasi-biennial oscillation, and the semiannual oscillation in the upper stratosphere has a poorly simulated westerly phase, while the easterly phase lacks the observed seasonal asymmetry. Simulated stationary wave amplitudes in the upper stratosphere lack a strong hemispheric asymmetry and are overpredicted in both hemispheres despite having roughly the correct amplitudes at 100 mb. 
Interannual variability in the winter stratosphere is underestimated, and again there is evidence that the model does not produce the proper hemispheric asymmetries.
- Published
- 1998
36. Monthly simulation of surface layer fluxes and soil properties during FIFE
- Author
-
Bosilovich, Michael G. and Sun, Wen-Yih
- Subjects
Climatology -- Models ,Atmospheric circulation -- Models ,Earth sciences ,Science and technology - Abstract
Global change and regional climate experiments with atmospheric numerical models rely on the parameterization of the surface boundary in order to evaluate impacts on society and agriculture. In this paper, several surface modeling strategies are examined in order to test their ability to simulate a period of one month, and hence their suitability for short-term and regional climate modeling. The interaction between vegetation and soil models is also discussed. The resolution of a multiple-level soil model, the method of computing moisture availability, the Force-Restore Method, and vegetation parameterization were studied by comparing model-simulated soil temperature, soil moisture, and surface energy budget with observations, and by intercomparison of the simulations. Increasing the soil model resolution improved the simulation of both daytime ground heat flux and latent heat. Evaporation from the soil surface with the coarser-resolution soil was larger than in the higher-resolution simulation, but transpiration and the simulation of soil water were similar in each case. The Alpha method of moisture availability allowed less soil evaporation under stressed conditions than the Beta method; the soil water became larger than the observations, and more transpiration occurred. The Force-Restore Method simulations produced reasonable results when coupled with the vegetation model. Eliminating the vegetation model from several of the previous cases, however, produced significant variability between different soil models. It is possible that this variability could affect long-term GCM sensitivity simulations.
- Published
- 1998
37. Modelling weather and climate
- Author
-
Atkinson, B.W.
- Subjects
Weather -- Models ,Climatology -- Models ,Geography - Abstract
Weather and climate result from both complex mechanisms in the atmosphere itself, and from interactions between the atmosphere and land and water surfaces. The atmosphere can be described as a stratified fluid envelope on a rough, unevenly heated, rotating planet. Approaches to understanding its behaviour, including numerical modelling techniques, are presented here, as well as examples of the application of these techniques to the study of past, present and projected climates.
- Published
- 1998
38. Natural climatic variability as an explanation for historical climatic fluctuations
- Author
-
Hunt, B.G.
- Subjects
Climatic changes -- Analysis ,Climatology -- Models ,Earth sciences - Abstract
The question as to whether the climatic anomalies associated with the Medieval Warm Period and the Little Ice Age can be attributed to natural climatic variability is explored in this paper. The output from a 500-year run with a global climatic model is used for this purpose. The model exhibits multi-decadal variability in its climatic outputs, which appears to have many of the characteristics of observed climatic data over the last millennium. Global distributions of surface temperature associated with peak warming and cooling phases of the model run highlight the spatial variability which occurs, and the lack of synchroneity in the response from region to region. Considerable year-to-year variability occurs in temperature anomaly patterns during the warming and cooling phases, indicating the complexity of the responses. The model results suggest that such climatic phases should not be considered as lengthy periods of universal warming or cooling. Comparison of observed time series of land surface temperature for the northern hemisphere for the last 500 years with model output indicates that most of the observed features in this climatic record can be reproduced by processes associated with internal mechanisms of the climatic system as reproduced in the model. While the model results do not exclude the possible contribution of external forcing agents as a contributing factor to these climatic episodes, the perception is that such agents would enhance existing naturally-induced climatic features rather than initiate them, at least for this time frame. Given the omnipresent nature of natural climatic variability, it is assumed that such variability rather than external forcing agents has primacy in generating and maintaining the underlying observed climatic variability. 
An understanding of the mechanisms and behaviour of such climatic features is becoming of increasing importance, in view of their possible role in modulating future climatic trends given the expected influence of the greenhouse effect.
- Published
- 1998
39. Multiple scattering parameterization in thermal infrared radiative transfer
- Author
-
Fu, Qiang, Liou, K.N., Cribb, M.C., Charlock, T.P., and Grossman, A.
- Subjects
Radiative transfer -- Analysis ,Atmospheric research -- Models ,Climatology -- Models ,Earth sciences ,Science and technology - Abstract
A systematic formulation of various radiative transfer parameterizations is presented, including the absorption approximation (AA), δ-two-stream approximation (D2S), δ-four-stream approximation (D4S), and δ-two- and four-stream combination approximation (D2/4S), in a consistent manner for thermal infrared flux calculations. The D2/4S scheme uses a source function from the δ-two-stream approximation and evaluates intensities in the four-stream directions. A wide range of accuracy checks for monochromatic emissivity of a homogeneous layer and broadband heating rates and fluxes in nonhomogeneous atmospheres is performed with respect to the 'exact' results computed from the δ-128-stream scheme for radiative transfer. The computer time required for the calculations using different radiative transfer parameterizations is compared. The results pertaining to the accuracy and efficiency of various radiative transfer approximations can be utilized to decide which approximate method is most appropriate for a particular application. In view of its overall high accuracy and computational economy, the D2/4S scheme is recommended as well suited for GCM and climate modeling applications.
- Published
- 1997
40. Habitable planets with high obliquities
- Author
-
Williams, Darren M. and Kasting, James F.
- Subjects
Climatology -- Models ,Earth -- Natural history ,Astronomy ,Earth sciences - Abstract
Earth's obliquity would vary chaotically from 0 [degrees] to 85 [degrees] were it not for the presence of the Moon (J. Laskar, F. Joutel, and P. Robutel, 1993, Nature 361, 615-617). The Moon itself is thought to be an accident of accretion, formed by a glancing blow from a Mars-sized planetesimal. Hence, planets with similar moons and stable obliquities may be extremely rare. This has led Laskar and colleagues to suggest that the number of Earth-like planets with high obliquities and temperate, life-supporting climates may be small. To test this proposition, we have used an energy-balance climate model to simulate Earth's climate at obliquities up to 90 [degrees]. We show that Earth's climate would become regionally severe in such circumstances, with large seasonal cycles and accompanying temperature extremes on middle- and high-latitude continents which might be damaging to many forms of life. The response of other, hypothetical, Earth-like planets to large obliquity fluctuations depends on their land-sea distribution and on their position within the habitable zone (HZ) around their star. Planets with several modest-sized continents or equatorial supercontinents are more climatically stable than those with polar supercontinents. Planets farther out in the HZ are less affected by high obliquities because their atmospheres should accumulate C[O.sub.2] in response to the carbonate-silicate cycle. Dense, C[O.sub.2]-rich atmospheres transport heat very effectively and therefore limit the magnitude of both seasonal cycles and latitudinal temperature gradients. We conclude that a significant fraction of extrasolar Earth-like planets may still be habitable, even if they are subject to large obliquity fluctuations.
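The energy-balance approach used in the study can be sketched with a toy one-dimensional diffusive model: heat capacity times dT/dt equals absorbed solar flux minus linearized outgoing longwave radiation plus meridional diffusion. Everything below, the grid, the parameter values, the sinusoidal solar declination standing in for a real orbit, and the fixed albedo, is an illustrative assumption, not the authors' model; it only demonstrates the mechanism of obliquity-driven seasonal extremes.

```python
import numpy as np

def insolation(lat, decl, S0=1365.0):
    """Daily-mean top-of-atmosphere insolation (W m^-2) at latitude lat (rad)
    for solar declination decl (rad); handles polar day/night via clipping."""
    H = np.arccos(np.clip(-np.tan(lat) * np.tan(decl), -1.0, 1.0))
    return (S0 / np.pi) * (H * np.sin(lat) * np.sin(decl)
                           + np.cos(lat) * np.cos(decl) * np.sin(H))

def run_ebm(obliquity_deg, years=10, n=36, steps_per_year=360):
    """Toy 1-D diffusive energy-balance model with seasonal forcing."""
    dx = 2.0 / n
    x = -1.0 + dx * (np.arange(n) + 0.5)       # sine of latitude, cell centers
    lat = np.arcsin(x)
    xe = -1.0 + dx * np.arange(n + 1)          # cell edges; (1 - x^2) = 0 at poles
    w = 1.0 - xe ** 2
    A, B = 203.3, 2.09                          # OLR = A + B*T (W m^-2, T in deg C)
    D, cap, albedo = 0.55, 2.0, 0.30            # diffusion, heat capacity, fixed albedo
    eps = np.radians(obliquity_deg)
    dt = 1.0 / steps_per_year                   # time step in years
    T = np.full(n, 10.0)
    last_year = []
    for step in range(years * steps_per_year):
        decl = eps * np.sin(2.0 * np.pi * step * dt)   # crude sinusoidal declination
        S = insolation(lat, decl)
        F = np.zeros(n + 1)
        F[1:-1] = w[1:-1] * np.diff(T) / dx            # diffusive flux at interior edges
        T = T + dt / cap * (S * (1.0 - albedo) - (A + B * T) + D * np.diff(F) / dx)
        if step >= (years - 1) * steps_per_year:
            last_year.append(T.copy())
    return lat, np.array(last_year)

lat, T23 = run_ebm(23.5)
_, T85 = run_ebm(85.0)
i60 = int(np.argmin(np.abs(lat - np.radians(60.0))))
# seasonal temperature range at ~60 deg latitude, final model year
amp23 = float(T23[:, i60].max() - T23[:, i60].min())
amp85 = float(T85[:, i60].max() - T85[:, i60].min())
```

Even this crude model reproduces the qualitative result: at 85 [degrees] obliquity the mid-latitude seasonal cycle is several times larger than at Earth's present 23.5 [degrees].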
- Published
- 1997
41. Scale and modeling issues in water resources planning
- Author
-
Lins, Harry F., Wolock, David M., and McCabe, Gregory J.
- Subjects
Water resources development -- Models ,Climatology -- Models ,Earth sciences - Abstract
Resource planners and managers interested in utilizing climate model output as part of their operational activities immediately confront the dilemma of scale discordance. Their functional responsibilities cover relatively small geographical areas and necessarily require data of relatively high spatial resolution. Climate models cover a large geographical, i.e. global, domain and produce data at comparatively low spatial resolution. Although the scale differences between model output and planning input are large, several techniques have been developed for disaggregating climate model output to a scale appropriate for use in water resource planning and management applications. With techniques in hand to reduce the limitations imposed by scale discordance, water resource professionals must now confront a more fundamental constraint on the use of climate models - the inability to produce accurate representations and forecasts of regional climate. Given the current capabilities of climate models, and the likelihood that the uncertainty associated with long-term climate model forecasts will remain high for some years to come, the water resources planning community may find it impractical to utilize such forecasts operationally.
- Published
- 1997
42. A fuzzy logic technique for correcting climatological ionospheric models
- Author
-
Giannini, J.A. and Kilgus, C.C.
- Subjects
Fuzzy systems -- Usage ,Ionosphere -- Models ,Climatology -- Models ,Business ,Earth sciences ,Electronics and electrical industries - Abstract
This paper reports on a fuzzy logic correction technique for the IRI90 climatological ionospheric model that uses a sparse set of GPS total electron content (TEC) measurements to provide a significant model correction over the entire sub-solar equatorial bulge. Crisp inputs, represented by a sparse set of GPS measurements of ionospheric TEC, are ingested into the fuzzy correction model which is composed of a set of fuzzy membership functions and a knowledge base (fuzzy rules). The fuzzy logic estimation is an iterative procedure that begins with the uncorrected model as the zero order prediction of the shape of the sub-solar equatorial bulge. The measured data (inputs) are fuzzified to account for errors in the GPS measurements of TEC, and then are mapped onto fuzzy input-membership functions. The knowledge base is then accessed, firing the appropriate rules, to produce a fuzzy output estimate of the correction. This fuzzy estimate is then defuzzified to provide a crisp output correction that modifies the shape of the zero order prediction bulge to better fit the GPS data and to produce a first order prediction. The procedure is repeated until termination criteria are satisfied. The goal of the process is to accurately reproduce the characteristic signature of the ionospheric TEC along a satellite subtrack across the sub-solar equatorial bulge. The fuzzy logic model can make large scale alterations to the model prediction without requiring an extensive measurement data set and without inducing spikes in the local vicinity of the ingested data points. In particular, the IRI90 climatological model estimates of the ionospheric TEC were adjusted using two concurrent TEC measurements at locations approximately 800 km apart along the satellite ground track. For this first test, two simulated GPS measurements were derived from TOPEX dual-frequency TEC data. 
The results were compared with the TOPEX TEC measurements for four ground tracks in the Pacific across the sub-solar equatorial bulge. Initial results showed a model improvement to within -0.32 TECU averaged over four entire passes ([+ or -]66 [degrees] latitude) when compared with the TOPEX 'ground truth' measured TEC along track profiles. (1 TECU equals [10.sup.16] electrons/[m.sup.2].) The mean error over the equatorial portion of the passes ([+ or -]20 [degrees] latitude) was -4.65 TECU. The fuzzy correction model was run for 18 iterations, reaching approximately full convergence. The averaging over sunlit passes provides an upper bound on the error since the spatial and temporal sampling allowed by the equatorial application includes nighttime passes with low TEC. The residual error is dominated by the failure to match the structure of the TEC peaks north and south of the geomagnetic equator. Incorporating a measured globally averaged ionospheric geomagnetic index and a local solar zenith angle as inputs to the fuzzy logic may allow this error to be reduced. Fine tuning of the fuzzy logic model rules and full development of the multi-GPS station ingestion scheme can now proceed, given that this first test shows the fuzzy logic approach can potentially produce a correction satisfying the equatorial basin-scale measurement needs for GFO.
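The fuzzify / fire-rules / defuzzify loop described above can be sketched in miniature. The membership-function breakpoints, the three rules, and the singleton output corrections below are all invented for illustration; they are not the actual rule base or membership functions of the paper's model.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_correction(residual):
    """One Mamdani-style inference step: map a measurement-minus-model TEC
    residual (TECU) to a crisp multiplicative correction for the bulge shape.
    Breakpoints, rules, and output singletons are invented for illustration."""
    # fuzzify: degree of membership in 'negative', 'near zero', 'positive'
    mu_neg = tri(residual, -40.0, -20.0, 0.0)
    mu_zero = tri(residual, -10.0, 0.0, 10.0)
    mu_pos = tri(residual, 0.0, 20.0, 40.0)
    # rule base: IF residual is X THEN scale the bulge by the paired factor
    rules = [(mu_neg, 0.8), (mu_zero, 1.0), (mu_pos, 1.2)]
    # defuzzify: weighted (centroid-of-singletons) average of the fired rules
    total = sum(mu for mu, _ in rules)
    return sum(mu * out for mu, out in rules) / total if total > 0 else 1.0
```

In an iterative scheme like the one described, the crisp correction factor would rescale the predicted bulge, the residual against the GPS measurements would be recomputed, and the step would repeat until the termination criteria are satisfied.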
- Published
- 1997
43. Estimating climate-change impacts on Colorado Plateau snowpack using downscaling methods
- Author
-
McGinnis, David L.
- Subjects
Colorado -- Environmental aspects ,Climatic changes -- Analysis ,Snow -- Measurement ,Atmospheric circulation -- Models ,Climatology -- Models ,Geography - Published
- 1997
44. A model for the boreal summer intraseasonal oscillation
- Author
-
Wang, Bin and Xie, Xiaosu
- Subjects
Climatology -- Models ,Weather -- Models ,Climatic changes -- Analysis ,Earth sciences ,Science and technology - Abstract
The tropical intraseasonal oscillation (ISO) exhibits pronounced seasonality. The boreal summer ISO is more complex than its winter counterpart due to the coexistence of equatorial eastward, off-equatorial westward, and northward propagating, low-frequency modes and their interactions. Based on observational evidence and results obtained from numerical experiments, a mechanism is proposed for the boreal summer ISO in which the Northern Hemisphere summer monsoon (NHSM) circulation and moist static energy distribution play essential roles. With a climatological July mean basic state, the life cycle of model low-frequency waves consists of four processes: an equatorial eastward propagation of a coupled Kelvin-Rossby wave packet, an emanation of moist Rossby waves in the western Pacific, a westward propagation and amplification of the Rossby waves in South Asian monsoon regions, and a reinitiation of the equatorial disturbances over the central Indian Ocean. The life cycle spans about one month and provides a mechanism for self-sustained boreal summer ISO. Analyses of the model experiments reveal that the monsoon mean flows and spatial variation of moist static energy trap equatorial disturbances in the NHSM domain. The reduction of moist static energy over the eastern central Pacific suppresses equatorial convection, leading to disintegration of the equatorial Kelvin-Rossby wave packet and the emanation of Rossby waves in the western North Pacific. Strong easterly vertical shears and seasonally enhanced boundary layer humidity in the NHSM further amplify the Rossby waves (of the gravest meridional mode), making their structures highly asymmetric about the equator. The intensified Rossby waves start to stall and decay when approaching the Arabian Sea due to the 'blocking' of the sinking dry air mass over North Africa, meanwhile triggering equatorial convection. 
The mean Hadley circulation plays a critical role in reinitiation of the equatorial Kelvin-Rossby wave packet over the equatorial Indian Ocean.
- Published
- 1997
45. Predictive oscillation patterns: a synthesis of methods for spatial-temporal decomposition of random fields
- Author
-
Kooperberg, Charles and O'Sullivan, Finbarr
- Subjects
Climatology -- Models ,Oscillation -- Models ,Spectral energy distribution -- Models ,Mathematics - Abstract
Spatial-temporal decompositions of climatologic fields were accomplished by applying various methods, including principal component analysis (PCA) and principal oscillation patterns (POPs). A hybrid of these techniques that tries to retain the positive aspects of both PCA and POPs is presented. The technique projects the field onto a lower-dimensional subspace chosen so that the average error in predicting a future state of the field from the history contained in the projection is minimized. The technique is employed on a 47-year climatological record of the 5-day average 500-millibar-height anomaly field, sampled on a 445 grid over the Northern Hemisphere extra-tropics.
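The two ingredients being hybridized, a PCA projection and a POP-style linear propagator, can be sketched on synthetic data. The field below (a noisy propagating wave) and the truncation k = 4 are illustrative assumptions; the paper's hybrid chooses the projection to minimize prediction error directly, whereas this sketch simply runs PCA first and fits the propagator afterward.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic space-time field: a propagating wave plus noise, standing in
# for the 500-mb height anomaly record (purely illustrative)
t = np.arange(200)[:, None]
s = np.arange(50)[None, :]
field = np.cos(2 * np.pi * (0.02 * t - 0.05 * s)) + 0.3 * rng.standard_normal((200, 50))
field = field - field.mean(axis=0)

# step 1: PCA/EOF truncation via the singular value decomposition
U, sv, Vt = np.linalg.svd(field, full_matrices=False)
k = 4
pcs = U[:, :k] * sv[:k]                        # leading principal-component series

# step 2: POP analysis -- fit a lag-1 linear propagator in the PC subspace
X0, X1 = pcs[:-1], pcs[1:]
A = np.linalg.lstsq(X0, X1, rcond=None)[0].T   # x_{t+1} ~= A @ x_t
eigvals = np.linalg.eigvals(A)                 # complex pairs = oscillatory POPs

# one-step prediction error of the fitted propagator
err = np.linalg.norm(X1 - X0 @ A.T)
```

The eigenmodes of A are the POPs: complex-conjugate eigenvalue pairs describe damped oscillations whose phase propagation captures the traveling wave, which is the dynamical information a pure variance-maximizing PCA discards.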
- Published
- 1996
46. A university perspective on global climate modeling
- Author
-
Randall, David Anton
- Subjects
Climatology -- Models ,Atmosphere -- Models ,Universities and colleges -- Research ,Business ,Earth sciences - Abstract
Global atmospheric models are proliferating, in part because of the widespread availability of powerful computers. There are about two dozen global modeling groups at work in the United States today. These groups fall into four categories, defined by setting (laboratory or university) and by emphasis (model development or applications). Community models are a special subgroup and in principle are both developed and applied by the community. Most U.S. global modeling groups are focusing on applications rather than on development. This is especially true in the university community, although over the years university groups have made important contributions in the model-development arena. A key role of university groups is to train new model developers at a rate matched to the community's demand for such scientists. A simple but functional conceptual organization of the U.S. global modeling community is suggested.
- Published
- 1996
49. A semiempirical cloudiness parameterization for use in climate models
- Author
-
Xu, Kuan-Man and Randall, David Anton
- Subjects
Climatology -- Models ,Clouds -- Models ,Earth sciences ,Science and technology - Abstract
Data produced from explicit simulations of observed tropical cloud systems and subtropical stratocumuli are used to develop a 'semiempirical' cloudiness parameterization for use in climate models. The semiempirical cloudiness parameterization uses the large-scale average condensate (cloud water and cloud ice) mixing ratio as the primary predictor. The large-scale relative humidity and cumulus mass flux are also used in the parameterization as secondary predictors. The cloud amount is assumed to vary exponentially with the large-scale average condensate mixing ratio. The rate of variation is, however, a function of large-scale relative humidity and the intensity of convective circulations. The validity of such a semiempirical approach and its dependency on cloud regime and horizontal-averaging distance are explored with the simulated datasets.
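A commonly cited form of this scheme makes cloud fraction grow exponentially with the grid-mean condensate, modulated by relative humidity: C = RH^p [1 - exp(-alpha0 q_c / ((1 - RH) q_s)^gamma)]. The sketch below uses the frequently quoted parameter values p = 0.25, alpha0 = 100, gamma = 0.49, but treat both the functional form and the constants as assumptions rather than a verbatim transcription of the paper; the abstract's third predictor, cumulus mass flux, is omitted here.

```python
import math

def cloud_fraction(q_c, q_s, rh, p=0.25, alpha0=100.0, gamma=0.49):
    """Semiempirical cloud fraction of the Xu-Randall type:
        C = RH**p * (1 - exp(-alpha0 * q_c / ((1 - RH) * q_s)**gamma))
    q_c: grid-mean condensate mixing ratio (kg/kg)
    q_s: saturation mixing ratio (kg/kg)
    rh : relative humidity in [0, 1]
    Form and constants are the commonly quoted ones, used as assumptions."""
    if rh >= 1.0:
        return 1.0          # saturated grid box: overcast
    if q_c <= 0.0:
        return 0.0          # no condensate: clear
    arg = alpha0 * q_c / ((1.0 - rh) * q_s) ** gamma
    return min(1.0, rh ** p * (1.0 - math.exp(-arg)))
```

The rate at which cloud amount rises with condensate steepens as RH approaches 1, mirroring the abstract's statement that the primary predictor is condensate and that relative humidity controls the rate of variation.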
- Published
- 1996
48. Evaluation of statistically based cloudiness parameterizations used in climate models
- Author
-
Randall, David Anton and Xu, Kuan-Man
- Subjects
Climatology -- Models ,Clouds -- Models ,Earth sciences ,Science and technology - Abstract
Existing cloudiness parameterizations based on specified probability distribution functions (PDFs) and large-scale relative humidity (RH) in climate models are evaluated with data produced from explicit simulations of observed tropical cloud systems and subtropical stratocumuli. PDF-based parameterizations were originally intended for use in cloud-resolving models, where fractional cloudiness is only associated with turbulence-scale motion. It is demonstrated with simulated data that most PDF-based parameterizations are not adequate for predicting fractional cloudiness in climate models because their performance depends on the cloud regime. Modifications to some PDF-based formulations are suggested, especially with regard to the inclusion of skewness of conservative variables. The skewness factors are found to be highly dependent upon which scales of motion coexist within a grid cell. RH-based parameterizations are not readily supported because clear-region averaged RHs vary widely both with height and with the grid size of climate models, and also vary widely at any given height.
- Published
- 1996
49. Model impacts of entrainment and detrainment rates in shallow cumulus convection
- Author
-
Siebesma, A.P. and Holtslag, A.A.M.
- Subjects
Cloud physics -- Models ,Convection (Meteorology) -- Models ,Atmospheric circulation -- Models ,Climatology -- Models ,Earth sciences ,Science and technology - Abstract
A mass flux parameterization scheme for shallow cumulus convection is evaluated for a case based on observations and large eddy simulation (LES) results for the Barbados Oceanographic and Meteorological Experiment (BOMEX). The mass flux scheme is embedded in a one-column model with prescribed large-scale forcings. When the findings of the latter are compared with the LES results, the mass flux scheme is found to be too active. As a result, the scheme is mixing too much heat and moisture between the cloud layer and the inversion layer, giving rise to erroneous moisture and temperature profiles for the trade wind region. This is due to an underestimation of the lateral exchange rates. LES results show that for shallow cumulus cloud ensembles (lateral) entrainment and detrainment rates are typically one order of magnitude larger than values used in most operational parameterization schemes and that the detrainment rate is systematically larger than the entrainment rate. When adopting these enhanced rates, the mass flux scheme produces realistic mass fluxes and cloud excess values for moisture and heat and is therefore capable of maintaining the stationary state as observed during BOMEX.
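The quantitative point, lateral rates an order of magnitude larger with detrainment exceeding entrainment, can be illustrated with the bulk mass-flux equation dM/dz = (epsilon - delta) M. The numerical values below (2e-4 versus 2e-3 per metre, with delta/epsilon = 1.35, over a 500-1500 m cloud layer) are illustrative stand-ins, not figures taken from the paper.

```python
import numpy as np

def mass_flux_profile(z, eps, delta, m_base=1.0):
    """Integrate the bulk mass-flux equation dM/dz = (eps - delta) * M upward
    from cloud base; exact for piecewise-constant rates on each layer."""
    m = np.empty_like(z)
    m[0] = m_base
    for i in range(1, len(z)):
        m[i] = m[i - 1] * np.exp((eps - delta) * (z[i] - z[i - 1]))
    return m

z = np.arange(500.0, 1550.0, 50.0)                  # cloud-layer heights (m)
# fractional rates in m^-1: small values typical of earlier operational
# schemes versus order-of-magnitude larger LES-informed values (illustrative)
m_old = mass_flux_profile(z, eps=2.0e-4, delta=2.7e-4)
m_les = mass_flux_profile(z, eps=2.0e-3, delta=2.7e-3)
```

With delta > eps the mass flux decays with height, and the tenfold-larger LES-informed rates make it decay much faster, so far less mass flux reaches the inversion; this is the mechanism by which the enhanced rates tame a scheme that was mixing too much heat and moisture across the cloud-top region.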
- Published
- 1996
50. The quasi-linear equilibration of a thermally maintained, stochastically excited jet in a quasigeostrophic model
- Author
-
DelSole, Timothy and Farrell, Brian F.
- Subjects
Climatology -- Models ,Jets -- Research ,Stochastic analysis -- Usage ,Climatic changes -- Analysis ,Earth sciences ,Science and technology - Abstract
A theory for quasigeostrophic turbulence in baroclinic jets is examined in which interaction between the mean flow and the perturbations is explicitly modeled by the nonnormal operator obtained by linearization about the mean flow, while the eddy-eddy interactions are parameterized by a combination of stochastic excitation and effective dissipation. The quasi-linear equilibrium is the stationary state in dynamical balance between the mean flow forcing and eddy forcing produced by the linear stochastic model. The turbulence model depends on two parameters that specify the magnitude of the effective dissipation and stochastic excitation. The quasi-linear model produces heat fluxes (upgradient), momentum fluxes, and mean zonal winds, which are remarkably consistent with those produced by the nonlinear model over a wide range of parameter values despite energy and enstrophy imbalances associated with the parameterization for eddy-eddy interactions. The quasi-linear equilibrium also appears consistent with most aspects of the energy cycle, with baroclinic adjustment (though the adjustment is accomplished in a fundamentally different manner), and with the negative correlation between transient eddy transport and other transports observed in the atmosphere. The model overestimates the equilibrium eddy kinetic energy in cases in which it achieves correct eddy fluxes and energy balance. Understanding the role of forcing orthogonal functions rationalizes this behavior and provides the basis for addressing the role of transient eddies in climate.
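The 'stationary state in dynamical balance' of such a linearized stochastic model is characterized, in the generic formulation of stochastically forced linear systems, by a Lyapunov equation: for dx = Ax dt + noise with covariance Q, the stationary covariance C solves AC + CA^T + Q = 0, and eddy fluxes follow from C. The helper below is a textbook construction under that generic framing, not the authors' specific operator or parameterization; the toy A and Q are invented.

```python
import numpy as np

def stationary_covariance(A, Q):
    """Solve the Lyapunov equation A C + C A^T + Q = 0 for the stationary
    covariance C of dx = A x dt + noise (A stable, Q the noise covariance),
    via the Kronecker identity vec(A C) = (I kron A) vec(C)."""
    n = A.shape[0]
    I = np.eye(n)
    L = np.kron(I, A) + np.kron(A, I)
    vecC = np.linalg.solve(L, -Q.reshape(-1, order="F"))  # column-stacked vec
    return vecC.reshape((n, n), order="F")

# toy stable 'mean-flow' operator and stochastic-excitation covariance
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
Q = np.eye(2)
C = stationary_covariance(A, Q)
```

In this framing, the two free parameters the abstract mentions enter through the effective dissipation folded into A and the amplitude of the excitation Q; second-moment quantities such as heat and momentum fluxes of the equilibrated state are then read off from C.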
- Published
- 1996