API Reference¶
Here we provide the reference documentation for aospy’s public API. If you are new to the package and/or just trying to get a feel for the overall workflow, you are better off starting in the Overview, Using aospy, or Examples sections of this documentation.
Warning
aospy is under active development. While we strive to maintain backwards compatibility, it is likely that some breaking changes to the codebase will occur in the future as aospy is improved.
Core Hierarchy for Input Data¶
aospy provides three classes for specifying the location and characteristics of data saved on disk as netCDF files that the user wishes to use as input data for aospy calculations: Proj, Model, and Run.
Proj¶
class aospy.proj.Proj(name, description=None, models=None, default_models=None, regions=None, direc_out='', tar_direc_out='')[source]¶

An object that describes a single project that will use aospy.
This is the top-level class in the aospy hierarchy of data representations. It is meant to contain all of the Model, Run, and Region objects that are of relevance to a particular research project. (Any of these may be used by multiple Proj objects.)
The Proj class itself provides little functionality, but it is an important means of organizing a user’s work across different projects. In particular, the output of all calculations using aospy.Calc are saved in a directory structure whose root is that of the Proj object specified for that calculation.
Attributes: - name : str
The project’s name
- description : str
A description of the project
- direc_out, tar_direc_out : str
The paths to the root directories of, respectively, the standard and .tar versions of the output of aospy calculations saved to disk.
- models : dict
A dictionary with entries of the form {model_obj.name: model_obj}, for each of this Proj’s child Model objects
- default_models : dict
The default Model objects on which to perform calculations via aospy.Calc if not otherwise specified
- regions : dict
A dictionary with entries of the form {region_obj.name: region_obj}, for each of this Proj’s child Region objects
__init__(name, description=None, models=None, default_models=None, regions=None, direc_out='', tar_direc_out='')[source]¶

Parameters: - name : str
The project’s name. This should be unique from that of any other Proj objects being used.
- description : str, optional
A description of the project. This is not used internally by aospy; it is solely for the user’s information.
- regions : {None, sequence of aospy.Region objects}, optional
The desired regions over which to perform region-average calculations.
- models : {None, sequence of aospy.Model objects}, optional
The child Model objects of this project.
- default_models : {None, sequence of aospy.Model objects}, optional
The subset of this Proj’s models on which to perform calculations by default.
- direc_out, tar_direc_out : str
Path to the root directories of where, respectively, regular output and a .tar-version of the output will be saved to disk.
See also
aospy.Model, aospy.Region, aospy.Run
Notes
Instantiating a Proj object has the side effect of setting the proj attribute of each of its child Model objects to itself.
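The containment pattern described above — children stored in `{obj.name: obj}` dicts and the parent’s `proj` attribute set as a side effect — can be sketched in plain Python. This is a minimal stand-in, not aospy’s actual implementation; the project and model names are hypothetical.

```python
# Minimal stand-ins illustrating the Proj/Model containment pattern.
# These are sketches of the documented behavior, not aospy's code.

class Model:
    def __init__(self, name):
        self.name = name
        self.proj = None  # set later by the parent Proj, mirroring aospy's side effect

class Proj:
    def __init__(self, name, models=()):
        self.name = name
        # aospy stores children as {model_obj.name: model_obj} dicts
        self.models = {m.name: m for m in models}
        for m in models:
            m.proj = self  # instantiating a Proj sets each child's proj attribute

am2 = Model('am2')
proj = Proj('obs_study', models=[am2])
print(proj.models['am2'].proj.name)  # → obs_study
```

The name-keyed dict is what lets later machinery (e.g. the automate module) look children up by string name.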
Model¶
class aospy.model.Model(name=None, description=None, proj=None, grid_file_paths=None, default_start_date=None, default_end_date=None, runs=None, default_runs=None, load_grid_data=False, grid_attrs=None)[source]¶

An object that describes a single climate or weather model.
Each Model object is associated with a parent Proj object and also with one or more child Run objects.
If aospy is being used to work with non climate- or weather-model data, the Model object can be used e.g. to represent a gridded observational product, with its child Run objects representing different released versions of that dataset.
Attributes: - name : str
The model’s name
- description : str
A description of the model
- proj : {None, aospy.Proj}
The model’s parent aospy.Proj object
- runs : list
A list of this model’s child Run objects
- default_runs : list
The default subset of child run objects on which to perform calculations via aospy.Calc with this model if not otherwise specified
- grid_file_paths : list
The paths to netCDF files stored on disk from which the model’s coordinate data can be taken.
- default_start_date, default_end_date : datetime.datetime
The default start and end dates of any calculations using this Model
__init__(name=None, description=None, proj=None, grid_file_paths=None, default_start_date=None, default_end_date=None, runs=None, default_runs=None, load_grid_data=False, grid_attrs=None)[source]¶

Parameters: - name : str
The model’s name. This must be unique from that of any other Model objects being used by the parent Proj.
- description : str, optional
A description of the model. This is not used internally by aospy; it is solely for the user’s information.
- proj : {None, aospy.Proj}, optional
The parent Proj object. When the parent Proj object is instantiated with this Model included in its models attribute, this will be overwritten with that Proj object.
- grid_file_paths : {None, sequence of strings}, optional
The paths to netCDF files stored on disk from which the model’s coordinate data can be taken.
- default_start_date : {None, datetime.datetime}, optional
Default start date of calculations to be performed using this Model.
- default_end_date : {None, datetime.datetime}, optional
Default end date of calculations to be performed using this Model.
- runs : {None, sequence of aospy.Run objects}, optional
The child run objects of this Model
- default_runs : {None, sequence of aospy.Run objects}, optional
The subset of this Model’s runs over which to perform calculations by default.
- load_grid_data : bool, optional (default False)
Whether or not to load the grid data specified by grid_file_paths upon initialization
- grid_attrs : dict, optional (default None)
Dictionary mapping aospy internal names of grid attributes to their corresponding names used in a particular model, e.g. {TIME_STR: 'T'}. While aospy checks for a number of alternative names for grid attributes used by various models, it is not possible to anticipate all possible names. This option allows the user to explicitly tell aospy which variables correspond to which internal names (internal names not provided in this dictionary will be looked up in the usual way). For a list of built-in alternative names, see the table here.
See also
aospy.DataLoader, aospy.Proj, aospy.Run
Notes
A side-effect of instantiating a Model object is that the parent attribute of all of the model’s Run objects is set to that model.
Run¶
class aospy.run.Run(name=None, description=None, proj=None, default_start_date=None, default_end_date=None, data_loader=None)[source]¶

An object that describes a single model ‘run’ (i.e. simulation).
Each Run object is associated with a parent Model object. This parent attribute is not set by Run itself, however; it is set during the instantiation of the parent Model object.
If aospy is being used to work with non climate-model data, the Run object can be used e.g. to represent different versions of a gridded observational data product, with the parent Model representing that data product more generally.
Attributes: - name : str
The run’s name
- description : str
A description of the run
- proj : {None, aospy.Proj}
The run’s parent aospy.Proj object
- default_start_date, default_end_date : datetime.datetime
The default start and end dates of any calculations using this Run
- data_loader : aospy.DataLoader
The aospy.DataLoader object used to find data on disk corresponding to this Run object
__init__(name=None, description=None, proj=None, default_start_date=None, default_end_date=None, data_loader=None)[source]¶

Instantiate a Run object.
Parameters: - name : str
The run’s name. This must be unique from that of any other Run objects being used by the parent Model.
- description : str, optional
A description of the run. This is not used internally by aospy; it is solely for the user’s information.
- proj : {None, aospy.Proj}, optional
The parent Proj object.
- data_loader : aospy.DataLoader
The DataLoader object used to find the data on disk to be used as inputs for aospy calculations for this run.
- default_start_date, default_end_date : datetime.datetime, optional
Default start and end dates of calculations to be performed using this Run.
See also
aospy.DataLoader, aospy.Model
DataLoaders¶
Run objects rely on a helper “data loader” to specify how to find their underlying data that is saved on disk. This mapping of variables, time ranges, and potentially other parameters to the location of the corresponding data on disk can differ among modeling centers or even between different models at the same center.
Currently supported data loader types are DictDataLoader, NestedDictDataLoader, and GFDLDataLoader. Each of these inherits from the abstract base DataLoader class.
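To make the mapping idea concrete, here is a stdlib-only sketch of the simplest case: a DictDataLoader-style dict from an interval tag to a shell-style filename pattern, resolved against a directory listing. The filenames and patterns are hypothetical, and this merely mimics the documented idea rather than using aospy itself.

```python
# Sketch of the DictDataLoader idea: map a time-interval tag ("intvl_in")
# to a shell-style filename pattern, then resolve the matching files.
import fnmatch

file_map = {'monthly': '000[4-6]0101.atmos_month.nc',
            '3hr': '000[4-6]0101.atmos_8xday.nc'}

# Stand-in for a directory listing on disk
available = ['00040101.atmos_month.nc', '00050101.atmos_month.nc',
             '00040101.atmos_8xday.nc', '00070101.atmos_month.nc']

def files_for(intvl_in):
    """Return the files whose names match the pattern for this interval."""
    pattern = file_map[intvl_in]
    return sorted(f for f in available if fnmatch.fnmatch(f, pattern))

print(files_for('monthly'))  # → ['00040101.atmos_month.nc', '00050101.atmos_month.nc']
```

Note how the `[4-6]` character class excludes the year-0007 file; the real loaders pass such patterns to xarray’s file-opening machinery.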
Note
An important consideration can be the datatype used to store values in your datasets. In particular, if the float32 datatype is used in storage, it can lead to undesired inaccuracies in the computation of reduction operations (like means) due to upstream issues (see pydata/xarray#1346 for more information). To address this, it is recommended to always upcast float32 data to float64. This behavior is turned on by default. If you would like to disable it, set the upcast_float32 argument in your DataLoader constructors to False.
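Why upcasting matters can be seen without any data files: most decimal values cannot be represented exactly in 4-byte IEEE 754 storage, and reductions over many such values compound the error. A stdlib-only sketch of the storage round-trip error:

```python
# Demonstrate float32 storage error using only the stdlib: round-trip a
# Python float (which is 64-bit) through 4-byte IEEE 754 representation.
import struct

def to_float32(x):
    """Round-trip a Python float through 4-byte IEEE 754 storage."""
    return struct.unpack('f', struct.pack('f', x))[0]

stored = to_float32(0.1)
print(stored == 0.1)          # False: 0.1 is not exactly representable in float32
print(abs(stored - 0.1) > 0)  # True: the nonzero representation error
```

Upcasting to float64 before reductions does not remove the storage error already incurred, but it prevents it from growing further during accumulation.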
class aospy.data_loader.DataLoader[source]¶

A fundamental DataLoader object.
__init__(self, /, *args, **kwargs)¶

Initialize self. See help(type(self)) for accurate signature.
load_variable(var=None, start_date=None, end_date=None, time_offset=None, grid_attrs=None, **DataAttrs)[source]¶

Load a DataArray for requested variable and time range.
Automatically renames all grid attributes to match aospy conventions.
Parameters: - var : Var
aospy Var object
- start_date : datetime.datetime
start date for interval
- end_date : datetime.datetime
end date for interval
- time_offset : dict
Option to add a time offset to the time coordinate to correct for incorrect metadata.
- grid_attrs : dict (optional)
Overriding dictionary of grid attributes mapping aospy internal names to names of grid attributes used in a particular model.
- **DataAttrs
Attributes needed to identify a unique set of files to load from
Returns: - da : DataArray
DataArray for the specified variable, date range, and interval.
recursively_compute_variable(var, start_date=None, end_date=None, time_offset=None, model=None, **DataAttrs)[source]¶

Compute a variable recursively, loading data where needed.
An obvious requirement is that the variable must eventually be expressible in terms of model-native quantities; otherwise the recursion will never terminate.
Parameters: - var : Var
aospy Var object
- start_date : datetime.datetime
start date for interval
- end_date : datetime.datetime
end date for interval
- time_offset : dict
Option to add a time offset to the time coordinate to correct for incorrect metadata.
- model : Model
aospy Model object (optional)
- **DataAttrs
Attributes needed to identify a unique set of files to load from
Returns: - da : DataArray
DataArray for the specified variable, date range, and interval.
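The recursion described above can be sketched in plain Python: a variable either is model-native (loadable directly) or is computed by applying its func to its recursively computed sub-variables. The variable names and values here are hypothetical, and this is an illustration of the idea rather than aospy’s implementation.

```python
# Sketch of recursively_compute_variable's logic, without aospy.

# "Model-native" data that can be loaded directly from disk
native_data = {'precl': 2.0, 'precc': 1.0}

class Var:
    def __init__(self, name, func=None, variables=None):
        self.name = name
        self.func = func            # function of the sub-variables' data
        self.variables = variables  # sub-variables, in the order func expects

def recursively_compute(var):
    if var.func is None:
        # Base case: a model-native quantity; just load it.
        return native_data[var.name]
    # Recursive case: compute each input first, then apply func in order.
    return var.func(*(recursively_compute(v) for v in var.variables))

precl = Var('precl')
precc = Var('precc')
precip = Var('precip', func=lambda l, c: l + c, variables=[precl, precc])
print(recursively_compute(precip))  # → 3.0
```

If some leaf variable were neither native nor defined via func over native quantities, the recursion would fail — the “model-native” requirement stated above.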
class aospy.data_loader.DictDataLoader(file_map=None, upcast_float32=True, data_vars='minimal', coords='minimal', preprocess_func=<function DictDataLoader.<lambda>>)[source]¶

A DataLoader that uses a dict mapping lists of files to string tags.
This is the simplest DataLoader; it is useful for instance if one is dealing with raw model history files, which tend to group all variables of a single output interval into single filesets. The intvl_in parameter is a string description of the time frequency of the data one is referencing (e.g. ‘monthly’, ‘daily’, ‘3-hourly’). In principle, one can give it any string value.
Parameters: - file_map : dict
A dict mapping an input interval to a list of files
- upcast_float32 : bool (default True)
Whether to cast loaded DataArrays with the float32 datatype to float64 before doing calculations
- data_vars : str (default ‘minimal’)
Mode for concatenating data variables in call to xr.open_mfdataset.
- coords : str (default ‘minimal’)
Mode for concatenating coordinate variables in call to xr.open_mfdataset.
- preprocess_func : function (optional)
A function to apply to every Dataset before processing in aospy. Must take a Dataset and **kwargs as its two arguments.
Examples
Case of two sets of files, one with monthly average output, and one with 3-hourly output.
>>> file_map = {'monthly': '000[4-6]0101.atmos_month.nc',
...             '3hr': '000[4-6]0101.atmos_8xday.nc'}
>>> data_loader = DictDataLoader(file_map)
If one wanted to correct a CF-incompliant units attribute on each Dataset read in, which depended on the intvl_in of the fileset, one could define a preprocess_func which took into account the intvl_in keyword argument.

>>> def preprocess(ds, **kwargs):
...     if kwargs['intvl_in'] == 'monthly':
...         ds['time'].attrs['units'] = 'days since 0001-01-0000'
...     if kwargs['intvl_in'] == '3hr':
...         ds['time'].attrs['units'] = 'hours since 0001-01-0000'
...     return ds
>>> data_loader = DictDataLoader(file_map, preprocess_func=preprocess)
class aospy.data_loader.NestedDictDataLoader(file_map=None, upcast_float32=True, data_vars='minimal', coords='minimal', preprocess_func=<function NestedDictDataLoader.<lambda>>)[source]¶

DataLoader that uses a nested dictionary mapping to load files.
This is the most flexible existing type of DataLoader; it allows for the specification of different sets of files for different variables. The intvl_in parameter is a string description of the time frequency of the data one is referencing (e.g. ‘monthly’, ‘daily’, ‘3-hourly’). In principle, one can give it any string value. The variable name can be any variable name in your aospy object library (including alternative names).
Parameters: - file_map : dict
A dict mapping intvl_in to dictionaries mapping Var objects to lists of files
- upcast_float32 : bool (default True)
Whether to cast loaded DataArrays with the float32 datatype to float64 before doing calculations
- data_vars : str (default ‘minimal’)
Mode for concatenating data variables in call to xr.open_mfdataset.
- coords : str (default ‘minimal’)
Mode for concatenating coordinate variables in call to xr.open_mfdataset.
- preprocess_func : function (optional)
A function to apply to every Dataset before processing in aospy. Must take a Dataset and **kwargs as its two arguments.
Examples
Case of a set of monthly average files for large scale precipitation, and another monthly average set of files for convective precipitation.
>>> file_map = {'monthly': {'precl': '000[4-6]0101.precl.nc',
...                         'precc': '000[4-6]0101.precc.nc'}}
>>> data_loader = NestedDictDataLoader(file_map)
See aospy.data_loader.DictDataLoader for an example of a possible function to pass as a preprocess_func.
class aospy.data_loader.GFDLDataLoader(template=None, data_direc=None, data_dur=None, data_start_date=None, data_end_date=None, upcast_float32=None, data_vars=None, coords=None, preprocess_func=None)[source]¶

DataLoader for NOAA GFDL model output.
This is an example of a domain-specific custom DataLoader, designed specifically for finding files output by the Geophysical Fluid Dynamics Laboratory’s model history file post-processing tools.
Parameters: - template : GFDLDataLoader
Optional argument to specify a base GFDLDataLoader to inherit parameters from
- data_direc : str
Root directory of data files
- data_dur : int
Number of years included per post-processed file
- data_start_date : datetime.datetime
Start date of data files
- data_end_date : datetime.datetime
End date of data files
- upcast_float32 : bool (default True)
Whether to cast loaded DataArrays with the float32 datatype to float64 before doing calculations
- data_vars : str (default ‘minimal’)
Mode for concatenating data variables in call to xr.open_mfdataset.
- coords : str (default ‘minimal’)
Mode for concatenating coordinate variables in call to xr.open_mfdataset.
- preprocess_func : function (optional)
A function to apply to every Dataset before processing in aospy. Must take a Dataset and **kwargs as its two arguments.
Examples
Case without a template to start from.
>>> base = GFDLDataLoader(data_direc='/archive/control/pp', data_dur=5,
...                       data_start_date=datetime(2000, 1, 1),
...                       data_end_date=datetime(2010, 12, 31))
Case with a starting template.
>>> data_loader = GFDLDataLoader(base, data_direc='/archive/2xCO2/pp')
See aospy.data_loader.DictDataLoader for an example of a possible function to pass as a preprocess_func.
Variables and Regions¶
The Var and Region classes are used to represent, respectively, physical quantities the user wishes to be able to compute and geographical regions over which the user wishes to aggregate their calculations.
Whereas the Proj - Model - Run hierarchy is used to describe the data resulting from particular model simulations, Var and Region represent the properties of generic physical entities that do not depend on the underlying data.
Var¶
class aospy.var.Var(name, alt_names=None, func=None, variables=None, units='', plot_units='', plot_units_conv=1, domain='atmos', description='', def_time=False, def_vert=False, def_lat=False, def_lon=False, math_str=False, colormap='RdBu_r', valid_range=None)[source]¶

An object representing a physical quantity to be computed.
Attributes: - name : str
The variable’s name
- alt_names : tuple of strings
All other names that the variable may be referred to in the input data
- names : tuple of strings
The combination of name and alt_names
- description : str
A description of the variable
- func : function
The function with which to compute the variable
- variables : sequence of aospy.Var objects
The variables passed to func to compute it
- units : str
The variable’s physical units
- domain : str
The physical domain of the variable, e.g. ‘atmos’, ‘ocean’, or ‘land’
- def_time, def_vert, def_lat, def_lon : bool
Whether the variable is defined, respectively, in time, vertically, in latitude, and in longitude
- math_str : str
The mathematical representation of the variable
- colormap : str
The name of the default colormap to be used in plots of this variable
- valid_range : length-2 tuple
The range of values outside which to flag as unphysical/erroneous
__init__(name, alt_names=None, func=None, variables=None, units='', plot_units='', plot_units_conv=1, domain='atmos', description='', def_time=False, def_vert=False, def_lat=False, def_lon=False, math_str=False, colormap='RdBu_r', valid_range=None)[source]¶

Instantiate a Var object.
Parameters: - name : str
The variable’s name
- alt_names : tuple of strings
All other names that the variable might be referred to in any input data. Each of these should be unique to this variable in order to avoid loading the wrong quantity.
- description : str
A description of the variable
- func : function
The function with which to compute the variable
- variables : sequence of aospy.Var objects
The variables passed to func to compute it. Order matters: whenever calculations are performed to generate data corresponding to this Var, the data corresponding to the elements of variables will be passed to self.function in the same order.
- units : str
The variable’s physical units
- domain : str
The physical domain of the variable, e.g. ‘atmos’, ‘ocean’, or ‘land’. This is only used by aospy by some types of DataLoader, including GFDLDataLoader.
- def_time, def_vert, def_lat, def_lon : bool
Whether the variable is defined, respectively, in time, vertically, in latitude, and in longitude
- math_str : str
The mathematical representation of the variable. This is typically a raw string of LaTeX math-mode, e.g. r'$T_\mathrm{sfc}$' for surface temperature.
- colormap : str
(Currently not used by aospy) The name of the default colormap to be used in plots of this variable.
- valid_range : length-2 tuple
The range of values outside which to flag as unphysical/erroneous
Note
While for the sake of tracking metadata we encourage users to add a units attribute to each of their Var objects, these units attributes provide nothing more than descriptive value. One day we hope DataArrays produced by loading or computing variables will be truly units-aware (e.g. adding or subtracting two DataArrays with different units will lead to an error, or multiplying two DataArrays will result in a DataArray with new units), but we will leave that to upstream libraries to implement (see pydata/xarray#525 for more discussion).
Region¶
class aospy.region.Region(name='', description='', west_bound=None, east_bound=None, south_bound=None, north_bound=None, mask_bounds=None, do_land_mask=False)[source]¶

Geographical region over which to perform averages and other reductions.
Each Proj object includes a list of Region objects, which Calc uses to determine the regions over which to perform time reductions of region-average quantities.
Region boundaries are specified as either a single “rectangle” in latitude and longitude or the union of multiple such rectangles. In addition, a land or ocean mask can be applied.
See also
aospy.Calc.region_calcs
Attributes: - name : str
The region’s name
- description : str
A description of the region
- mask_bounds : tuple
The coordinates defining the lat-lon rectangle(s) that define(s) the region’s boundaries
- do_land_mask
Whether values occurring over land, ocean, or neither are excluded from the region, and whether the included points must be strictly land or ocean or if fractional land/ocean values are included.
__init__(name='', description='', west_bound=None, east_bound=None, south_bound=None, north_bound=None, mask_bounds=None, do_land_mask=False)[source]¶

Instantiate a Region object.
Note that longitudes spanning (-180, 180), (0, 360), or any other range are all supported: -180 to 0, 180 to 360, etc. are interpreted as the western hemisphere, and 0-180, 360-540, etc. are interpreted as the eastern hemisphere. This is true both of the region definition and of any data upon which the region mask is applied.
E.g. suppose some of your data is defined on a -180 to 180 longitude grid, some of it is defined on a 0 to 360 grid, and some of it is defined on a -70 to 290 grid. A single Region object will work with all three of these.
Latitudes, by contrast, are always interpreted with -90 as the South Pole, 0 as the Equator, and 90 as the North Pole. Latitudes larger in magnitude than 90 are not physically meaningful.
Parameters: - name : str
The region’s name. This must be unique from that of any other Region objects being used by the overlying Proj.
- description : str, optional
A description of the region. This is not used internally by aospy; it is solely for the user’s information.
- west_bound, east_bound : {scalar, aospy.Longitude}, optional
The western and eastern boundaries of the region. All input longitudes are cast to aospy.Longitude objects, which essentially maps them to a 180W to 180E grid. The region’s longitudes always start at west_bound and move toward the east until reaching east_bound. This means that there are two distinct cases:
- If, after this translation, west_bound is less than east_bound, the region includes the points east of west_bound and west of east_bound.
- If west_bound is greater than east_bound, the region is treated as wrapping around the dateline: starting from west_bound, it includes all points moving east from there, across the dateline, until reaching east_bound.
If the region boundaries are more complicated than a single lat-lon rectangle, use mask_bounds instead.
- south_bound, north_bound : scalar, optional
The southern, and northern boundaries, respectively, of the region. If the region boundaries are more complicated than a single lat-lon rectangle, use mask_bounds instead.
- mask_bounds : sequence, optional
Each element is a length-4 sequence of the format (west_bound, east_bound, south_bound, north_bound), where each of these _bound arguments is of the form described above.
- do_land_mask : bool or str, optional
Determines what, if any, land mask is applied in addition to the mask defining the region’s boundaries. Default False. Must be one of False, True, ‘ocean’, ‘strict_land’, or ‘strict_ocean’:
- True : apply the data’s full land mask
- False : apply no mask
- ‘ocean’ : mask out land rather than ocean
- ‘strict_land’ : mask out all points that are not 100% land
- ‘strict_ocean’ : mask out all points that are not 100% ocean
Examples
Define a region spanning the entire globe:
>>> globe = Region(name='globe', west_bound=0, east_bound=360,
...                south_bound=-90, north_bound=90, do_land_mask=False)
Longitudes are handled as cyclic, so this definition could have equivalently used west_bound=-180, east_bound=180 or west_bound=200, east_bound=560, or anything else that spanned 360 degrees total.
Define a region corresponding to land in the mid-latitudes, which we’ll define as land points within 30-60 degrees latitude in both hemispheres. Because this region is the union of multiple lat-lon rectangles, it has to be defined using mask_bounds:
>>> land_mid_lats = Region(name='land_midlat', do_land_mask=True,
...                        mask_bounds=[(-180, 180, 30, 60),
...                                     (-180, 180, -60, -30)])
Define a region spanning the southern Tropical Atlantic ocean, which we’ll take to be all ocean points between 60W and 30E and between the Equator and 30S:
>>> atl_south_trop = Region(name='atl_sh_trop', west_bound=-60,
...                         east_bound=30, south_bound=-30,
...                         north_bound=0, do_land_mask='ocean')
Define the “opposite” region, i.e. all ocean points in the southern Tropics outside of the Atlantic. We simply swap west_bound and east_bound of the previous example:

>>> non_atl_south_trop = Region(name='non_atl_sh_trop', west_bound=30,
...                             east_bound=-60, south_bound=-30,
...                             north_bound=0, do_land_mask='ocean')
av(data, lon_str='lon', lat_str='lat', land_mask_str='land_mask', sfc_area_str='sfc_area')[source]¶

Time-average of region-averaged data.
Parameters: - data : xarray.DataArray
The array to compute the regional time-average of
- lat_str, lon_str, land_mask_str, sfc_area_str : str, optional
The names of the latitude, longitude, land mask, and surface area coordinates, respectively, in data. Defaults are the corresponding values in aospy.internal_names.
Returns: - xarray.DataArray
The region-averaged and time-averaged data.
mask_var(data, lon_cyclic=True, lon_str='lon', lat_str='lat')[source]¶

Mask the given data outside this region.
Parameters: - data : xarray.DataArray
The array to be regionally masked.
- lon_cyclic : bool, optional (default True)
Whether or not the longitudes of data span the whole globe, meaning that they should be wrapped around as necessary to cover the Region’s full width.
- lon_str, lat_str : str, optional
The names of the longitude and latitude dimensions, respectively, in the data to be masked. Defaults are aospy.internal_names.LON_STR and aospy.internal_names.LAT_STR, respectively.
Returns: - xarray.DataArray
The original array with points outside of the region masked.
std(data, lon_str='lon', lat_str='lat', land_mask_str='land_mask', sfc_area_str='sfc_area')[source]¶

Temporal standard deviation of region-averaged data.
Parameters: - data : xarray.DataArray
The array to compute the regional temporal standard deviation of
- lat_str, lon_str, land_mask_str, sfc_area_str : str, optional
The names of the latitude, longitude, land mask, and surface area coordinates, respectively, in data. Defaults are the corresponding values in aospy.internal_names.
Returns: - xarray.DataArray
The temporal standard deviation of the region-averaged data
ts(data, lon_cyclic=True, lon_str='lon', lat_str='lat', land_mask_str='land_mask', sfc_area_str='sfc_area')[source]¶

Create yearly time-series of region-averaged data.
Parameters: - data : xarray.DataArray
The array to create the regional timeseries of
- lon_cyclic : {None, True, False}, optional (default True)
Whether or not the longitudes of data span the whole globe, meaning that they should be wrapped around as necessary to cover the Region’s full width.
- lat_str, lon_str, land_mask_str, sfc_area_str : str, optional
The names of the latitude, longitude, land mask, and surface area coordinates, respectively, in data. Defaults are the corresponding values in aospy.internal_names.
Returns: - xarray.DataArray
The timeseries of values averaged within the region and within each year, one value per year.
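All of the regional reductions above (av, std, ts) start from a surface-area-weighted average over the in-region points. A stdlib-only sketch of that weighting step, with hypothetical values standing in for a DataArray and its sfc_area and mask coordinates:

```python
# Sketch of the area-weighted regional average underlying av/std/ts.

def region_average(values, sfc_area, in_region):
    """Surface-area-weighted mean over the points inside the region."""
    num = sum(v * a for v, a, m in zip(values, sfc_area, in_region) if m)
    den = sum(a for a, m in zip(sfc_area, in_region) if m)
    return num / den

values = [300.0, 290.0, 280.0]   # e.g. surface temperature per gridpoint
sfc_area = [1.0, 2.0, 1.0]       # gridcell surface areas (arbitrary units)
in_region = [True, True, False]  # the region's mask

print(region_average(values, sfc_area, in_region))  # (300*1 + 290*2) / 3
```

Weighting by cell area rather than averaging raw gridpoints is what keeps high-latitude cells (which are physically smaller) from being over-counted.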
Calculations¶
Calc is the engine that combines the user’s specifications of (1) the data on disk via Proj, Model, and Run, (2) the physical quantity to compute and regions to aggregate over via Var and Region, and (3) the desired date range, time reduction method, and other characteristics, to actually perform the calculation.
Whereas Proj, Model, Run, Var, and Region are all intended to be saved in .py files for reuse, Calc objects are intended to be generated dynamically by a main script and then not retained after they have written their outputs to disk following the user’s specifications.
Moreover, if the main.py script is used to execute calculations, no direct interfacing with Calc is required by the user, in which case this section can be skipped entirely.
Also included is the automate module, which enables aospy (e.g. in the main script) to find objects in the user’s object library via their string names, rather than requiring the user to import the objects themselves.
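The string-name lookup idea can be sketched with a plain namespace standing in for a user’s object library module; the object names here are hypothetical, and this is not the automate module’s actual code.

```python
# Sketch of looking up objects in an "object library" by string name.
from types import SimpleNamespace

# Stand-in for a user's object-library module
obj_lib = SimpleNamespace(precip='precip_var', globe='globe_region')

def find_obj(library, name):
    """Resolve a string name to the object it names in the library."""
    try:
        return getattr(library, name)
    except AttributeError:
        raise KeyError(f'{name!r} not found in object library')

print(find_obj(obj_lib, 'precip'))  # → precip_var
```

A real library would be a module or package, but `getattr` works the same way on both.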
Calc¶
class aospy.calc.Calc(proj=None, model=None, run=None, var=None, date_range=None, region=None, intvl_in=None, intvl_out=None, dtype_in_time=None, dtype_in_vert=None, dtype_out_time=None, dtype_out_vert=None, level=None, time_offset=None)[source]¶

Class for executing, saving, and loading a single computation.
__init__(proj=None, model=None, run=None, var=None, date_range=None, region=None, intvl_in=None, intvl_out=None, dtype_in_time=None, dtype_in_vert=None, dtype_out_time=None, dtype_out_vert=None, level=None, time_offset=None)[source]¶

Instantiate a Calc object.
Parameters: - proj : aospy.Proj object
The project for this calculation.
- model : aospy.Model object
The model for this calculation.
- run : aospy.Run object
The run for this calculation.
- var : aospy.Var object
The variable for this calculation.
- region : sequence of aospy.Region objects
The region(s) over which any regional reductions will be performed.
- date_range : tuple of datetime.datetime objects
The range of dates over which to perform calculations.
- intvl_in : {None, ‘annual’, ‘monthly’, ‘daily’, ‘6hr’, ‘3hr’}, optional
The time resolution of the input data.
- dtype_in_time : {None, ‘inst’, ‘ts’, ‘av’, ‘av_ts’}, optional
What the time axis of the input data represents:
- ‘inst’ : Timeseries of instantaneous values
- ‘ts’ : Timeseries of averages over the period of each time-index
- ‘av’ : A single value averaged over a date range
- dtype_in_vert : {None, ‘pressure’, ‘sigma’}, optional
The vertical coordinate system used by the input data:
- None : not defined vertically
- ‘pressure’ : pressure coordinates
- ‘sigma’ : hybrid sigma-pressure coordinates
- intvl_out : {‘ann’, season-string, month-integer}
The sub-annual time interval over which to compute:
- ‘ann’ : Annual mean
- season-string : E.g. ‘JJA’ for June-July-August
- month-integer : 1 for January, 2 for February, etc.
- dtype_out_time : tuple with elements being one or more of:
- Gridpoint-by-gridpoint output:
- ‘av’ : Gridpoint-by-gridpoint time-average
- ‘std’ : Gridpoint-by-gridpoint temporal standard deviation
- ‘ts’ : Gridpoint-by-gridpoint time-series
- Averages over each region specified via region:
- ‘reg.av’, ‘reg.std’, ‘reg.ts’ : analogous to ‘av’, ‘std’, ‘ts’
- dtype_out_vert : {None, ‘vert_av’, ‘vert_int’}, optional
How to reduce the data vertically:
- None : no vertical reduction (i.e. output is defined vertically)
- ‘vert_av’ : mass-weighted vertical average
- ‘vert_int’ : mass-weighted vertical integral
- time_offset : {None, dict}, optional
How to offset input data in time to correct for metadata errors:
- None : no time offset applied
- dict : e.g. {'hours': -3} to offset times by -3 hours. See aospy.utils.times.apply_time_offset().
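To illustrate what such a time_offset dict does, here is a minimal re-implementation sketch using only the standard library; aospy's actual apply_time_offset also supports years and months and operates on xarray/pandas time arrays:

```python
import datetime

def apply_offset(times, days=0, hours=0):
    """Shift each datetime by the given offset.

    Illustrative sketch only: covers plain datetime objects and
    days/hours offsets, unlike aospy's full implementation.
    """
    delta = datetime.timedelta(days=days, hours=hours)
    return [t + delta for t in times]

# 3-hourly data whose first stamp is 03:00 can be shifted back 3 hours
# so the timeseries starts at 00:00, as described above.
times = [datetime.datetime(2000, 1, 1, 3), datetime.datetime(2000, 1, 1, 6)]
shifted = apply_offset(times, hours=-3)
```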
- ARR_XARRAY_NAME = 'aospy_result'¶
- compute(write_to_tar=True)[source]¶ Perform all desired calculations on the data and save externally.
automate¶
Functionality for specifying and cycling through multiple calculations.
- class aospy.automate.CalcSuite(calc_suite_specs)[source]¶ Suite of Calc objects generated from provided specifications.
- aospy.automate.submit_mult_calcs(calc_suite_specs, exec_options=None)[source]¶ Generate and execute all specified computations.
Once the calculations are prepped and submitted for execution, any calculation that triggers any exception or error is skipped, and the rest of the calculations proceed unaffected. This prevents an error in a single calculation from crashing a large suite of calculations.
Parameters: - calc_suite_specs : dict
The specifications describing the full set of calculations to be generated and potentially executed. Accepted keys and their values:
- library : module or package comprising an aospy object library
The aospy object library for these calculations.
- projects : list of aospy.Proj objects
The projects to permute over.
- models : ‘all’, ‘default’, or list of aospy.Model objects
The models to permute over. If ‘all’, use all models in the models attribute of each Proj. If ‘default’, use all models in the default_models attribute of each Proj.
- runs : ‘all’, ‘default’, or list of aospy.Run objects
The runs to permute over. If ‘all’, use all runs in the runs attribute of each Model. If ‘default’, use all runs in the default_runs attribute of each Model.
- variables : list of aospy.Var objects
The variables to be calculated.
- regions : ‘all’ or list of aospy.Region objects
The region(s) over which any regional reductions will be performed. If ‘all’, use all regions in the regions attribute of each Proj.
- date_ranges : ‘default’ or a list of tuples
The range of dates (inclusive) over which to perform calculations. If ‘default’, use the default_start_date and default_end_date attributes of each Run. Otherwise, provide a list of tuples, each containing a pair of start and end dates, such as date_ranges=[(start, end)], where start and end are each datetime.datetime objects, partial datetime strings (e.g. ‘0001’), np.datetime64 objects, or cftime.datetime objects.
- output_time_intervals : {‘ann’, season-string, month-integer}
The sub-annual time interval over which to aggregate.
- ‘ann’ : Annual mean
- season-string : E.g. ‘JJA’ for June-July-August
- month-integer : 1 for January, 2 for February, etc. Each one is a separate reduction; e.g. [1, 2] would produce averages (or other specified time reduction) over all Januaries, and separately over all Februaries.
- output_time_regional_reductions : list of reduction string identifiers
Unlike most other keys, these are not permuted over when creating the aospy.Calc objects that execute the calculations; each aospy.Calc performs all of the specified reductions. Accepted string identifiers are:
- Gridpoint-by-gridpoint output:
- ‘av’ : Gridpoint-by-gridpoint time-average
- ‘std’ : Gridpoint-by-gridpoint temporal standard deviation
- ‘ts’ : Gridpoint-by-gridpoint time-series
- Averages over each region specified via region:
- ‘reg.av’, ‘reg.std’, ‘reg.ts’ : analogous to ‘av’, ‘std’, ‘ts’
- output_vertical_reductions : {None, ‘vert_av’, ‘vert_int’}, optional
How to reduce the data vertically:
- None : no vertical reduction
- ‘vert_av’ : mass-weighted vertical average
- ‘vert_int’ : mass-weighted vertical integral
- input_time_intervals : {‘annual’, ‘monthly’, ‘daily’, ‘#hr’}
A string specifying the time resolution of the input data. In ‘#hr’ above, the ‘#’ stands for a number, e.g. 3hr or 6hr, for sub-daily output. These are the suggested specifiers, but others may be used if they are also used by the DataLoaders for the given Runs.
- input_time_datatypes : {‘inst’, ‘ts’, ‘av’}
What the time axis of the input data represents:
- ‘inst’ : Timeseries of instantaneous values
- ‘ts’ : Timeseries of averages over the period of each time-index
- ‘av’ : A single value averaged over a date range
- input_vertical_datatypes : {False, ‘pressure’, ‘sigma’}, optional
The vertical coordinate system used by the input data:
- False : not defined vertically
- ‘pressure’ : pressure coordinates
- ‘sigma’ : hybrid sigma-pressure coordinates
- input_time_offsets : {None, dict}, optional
How to offset input data in time to correct for metadata errors:
- None : no time offset applied
- dict : e.g. {'hours': -3} to offset times by -3 hours. See aospy.utils.times.apply_time_offset().
- exec_options : dict or None (default None)
Options regarding how the calculations are reported, submitted, and saved. If None, default settings are used for all options. Currently supported options (each should be either True or False):
- prompt_verify : (default False) If True, print a summary of the calculations to be performed and prompt the user to confirm before submitting for execution.
- parallelize : (default False) If True, submit calculations in parallel.
- client : distributed.Client or None (default None) The dask.distributed Client used to schedule computations. If None and parallelize is True, a LocalCluster will be started.
- write_to_tar : (default True) If True, write results of calculations to .tar files, one for each aospy.Run object. These tar files have an identical directory structure to the standard output, relative to their root directory, which is specified via the tar_direc_out argument of each Proj object’s instantiation.
Returns: - A list of the return values from each aospy.Calc.compute call
If a calculation ran without error, this value is the aospy.Calc object itself, with the results of its calculations saved in its data_out attribute. data_out is a dictionary, with the keys being the temporal-regional reduction identifiers (e.g. ‘reg.av’), and the values being the corresponding result.
- If any error occurred during a calculation, the return value is None.
Raises: - AospyException
If the prompt_verify option is set to True and the user does not respond affirmatively to the prompt.
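As a hedged sketch of the shape of a calc_suite_specs dict, with placeholder values (my_obj_lib, my_proj, my_var are hypothetical names standing in for a module and objects imported from a user's own aospy object library):

```python
# Placeholders: in real use these come from the user's object library.
my_obj_lib = None
my_proj = None
my_var = None

calc_suite_specs = dict(
    library=my_obj_lib,
    projects=[my_proj],
    models='default',            # all models in each Proj's default_models
    runs='default',              # all runs in each Model's default_runs
    variables=[my_var],
    regions='all',               # all regions in each Proj's regions attr
    date_ranges='default',
    output_time_intervals=['ann'],
    output_time_regional_reductions=['av', 'reg.av'],
    output_vertical_reductions=[None],
    input_time_intervals=['monthly'],
    input_time_datatypes=['ts'],
    input_vertical_datatypes=[False],
)

# Then, with a populated object library:
# aospy.automate.submit_mult_calcs(calc_suite_specs,
#                                  exec_options=dict(parallelize=True))
```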
Utilities¶
aospy includes a number of utility functions that are used internally and may also be useful to users for their own purposes. These include functions pertaining to input/output (IO), longitudes, time arrays, and vertical coordinates.
utils.io¶
Utility functions for data input and output.
- aospy.utils.io.data_in_label(intvl_in, dtype_in_time, dtype_in_vert=False)[source]¶ Create string label specifying the input data of a calculation.
- aospy.utils.io.data_name_gfdl(name, domain, data_type, intvl_type, data_yr, intvl, data_in_start_yr, data_in_dur)[source]¶ Determine the filename of GFDL model data output.
utils.longitude¶
Functionality relating to parsing and comparing longitudes.
- class aospy.utils.longitude.Longitude(value)[source]¶ Geographic longitude.
Enables unambiguous comparison of longitudes using the standard comparison operators, regardless of whether they were initially represented with a 0 to 360 convention, a -180 to 180 convention, or anything else, and even if the original convention differs between them.
Specifically, the < operator assesses whether the first object is to the west of the second object, with the standard convention that longitudes in the Western Hemisphere are always to the west of longitudes in the Eastern Hemisphere. The > operator is defined analogously. ==, >=, and <= are also all defined.
In addition to other Longitude objects, the operators can be used to compare a Longitude object to anything that can be cast to a Longitude object, or to any sequence (e.g. a list or xarray.DataArray) whose elements can be cast to Longitude objects.
- hemisphere¶ {‘W’, ‘E’} The longitude’s hemisphere, either western or eastern.
- longitude¶ (scalar) The unsigned numerical value of the longitude.
Always in the range 0 to 180. Must be combined with the hemisphere attribute to specify the exact longitude.
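The comparison logic can be sketched as follows. This is an illustrative re-implementation, not aospy's actual class: it handles only numeric input, and internally sorts by the -180 to 180 convention so that the Western Hemisphere always compares as west of (less than) the Eastern:

```python
import functools

@functools.total_ordering
class SimpleLongitude:
    """Minimal Longitude-like class (illustrative sketch only)."""

    def __init__(self, value):
        # Normalize to [-180, 180) so comparisons are unambiguous.
        signed = ((float(value) + 180.0) % 360.0) - 180.0
        self.hemisphere = 'W' if signed < 0 else 'E'
        self.longitude = abs(signed)  # unsigned value, 0 to 180
        self._key = signed

    def __eq__(self, other):
        return self._key == other._key

    def __lt__(self, other):
        return self._key < other._key

# 200E and 160W are the same longitude; 10W is west of 10E.
assert SimpleLongitude(200) == SimpleLongitude(-160)
assert SimpleLongitude(-10) < SimpleLongitude(10)
```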
- aospy.utils.longitude.lon_to_0360(lon)[source]¶ Convert longitude(s) to be within [0, 360).
The Eastern hemisphere corresponds to 0 <= lon + (n*360) < 180, and the Western Hemisphere corresponds to 180 <= lon + (n*360) < 360, where ‘n’ is any integer (positive, negative, or zero).
Parameters: - lon : scalar or sequence of scalars
One or more longitude values to be converted to lie in the [0, 360) range
Returns: - If ``lon`` is a scalar, then a scalar of the same type in the range [0, 360). If ``lon`` is array-like, then an array-like of the same type with each element a scalar in the range [0, 360).
- aospy.utils.longitude.lon_to_pm180(lon)[source]¶ Convert longitude(s) to be within [-180, 180).
The Eastern hemisphere corresponds to 0 <= lon + (n*360) < 180, and the Western Hemisphere corresponds to 180 <= lon + (n*360) < 360, where ‘n’ is any integer (positive, negative, or zero).
Parameters: - lon : scalar or sequence of scalars
One or more longitude values to be converted to lie in the [-180, 180) range
Returns: - If ``lon`` is a scalar, then a scalar of the same type in the range [-180, 180). If ``lon`` is array-like, then an array-like of the same type with each element a scalar in the range [-180, 180).
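Both conversions reduce to modular arithmetic. An equivalent pure-Python sketch for scalar input (aospy's versions also accept array-like input and preserve the input type):

```python
def lon_to_0360_scalar(lon):
    """Map a longitude into [0, 360) (scalar sketch)."""
    return lon % 360

def lon_to_pm180_scalar(lon):
    """Map a longitude into [-180, 180) (scalar sketch)."""
    return ((lon + 180) % 360) - 180

assert lon_to_0360_scalar(-90) == 270    # 90W in the 0-360 convention
assert lon_to_pm180_scalar(270) == -90   # and back again
```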
utils.times¶
Utility functions for handling times, dates, etc.
- aospy.utils.times.add_uniform_time_weights(ds)[source]¶ Append uniform time weights to a Dataset.
All DataArrays with a time coordinate require a time weights coordinate. For Datasets read in without a time bounds coordinate or explicit time weights built in, aospy adds uniform time weights at each point in the time coordinate.
Parameters: - ds : Dataset
Input data
Returns: - Dataset
- aospy.utils.times.apply_time_offset(time, years=0, months=0, days=0, hours=0)[source]¶ Apply a specified offset to the given time array.
This is useful for GFDL model output of instantaneous values. For example, 3 hourly data postprocessed to netCDF files spanning 1 year each will actually have time values that are offset by 3 hours, such that the first value is for 1 Jan 03:00 and the last value is 1 Jan 00:00 of the subsequent year. This causes problems in xarray, e.g. when trying to group by month. It is resolved by manually subtracting off those three hours, such that the dates span from 1 Jan 00:00 to 31 Dec 21:00 as desired.
Parameters: - time : xarray.DataArray representing a timeseries
- years, months, days, hours : int, optional
The number of years, months, days, and hours, respectively, to offset the time array by. Positive values move the times later.
Returns: - pandas.DatetimeIndex
Examples
Case of a length-1 input time array:
>>> times = xr.DataArray(datetime.datetime(1899, 12, 31, 21))
>>> apply_time_offset(times)
Timestamp('1900-01-01 00:00:00')
Case of input time array with length greater than one:
>>> times = xr.DataArray([datetime.datetime(1899, 12, 31, 21),
...                       datetime.datetime(1899, 1, 31, 21)])
>>> apply_time_offset(times)  # doctest: +NORMALIZE_WHITESPACE
DatetimeIndex(['1900-01-01', '1899-02-01'], dtype='datetime64[ns]', freq=None)
- aospy.utils.times.assert_matching_time_coord(arr1, arr2)[source]¶ Check to see if two DataArrays have the same time coordinate.
Parameters: - arr1 : DataArray or Dataset
First DataArray or Dataset
- arr2 : DataArray or Dataset
Second DataArray or Dataset
Raises: - ValueError
If the time coordinates are not identical between the two Datasets
- aospy.utils.times.average_time_bounds(ds)[source]¶ Return the average of each set of time bounds in the Dataset.
Useful for creating a new time array to replace the Dataset’s native time array, in the case that the latter matches either the start or end bounds. This can cause errors in grouping (akin to an off-by-one error) if the timesteps span e.g. one full month each. Note that the Dataset’s times must not have already undergone “CF decoding”, wherein they are converted from floats using the ‘units’ attribute into datetime objects.
Parameters: - ds : xarray.Dataset
A Dataset containing a time bounds array with name matching internal_names.TIME_BOUNDS_STR. This time bounds array must have two dimensions, one of whose coordinates is the Dataset’s time array, and the other of which is length-2.
Returns: - xarray.DataArray
The mean of the start and end times of each timestep in the original Dataset.
Raises: - ValueError
If the time bounds array doesn’t match the shape specified above.
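The core operation is just the midpoint of each (start, end) pair of raw, not-yet-CF-decoded time values. A sketch with plain floats, assuming bounds in units like "days since" some reference date:

```python
def average_time_bounds_sketch(time_bounds):
    """Return the midpoint of each (start, end) time-bound pair.

    Illustrative sketch: time_bounds is a sequence of length-2
    sequences of raw float time values, as stored on disk before
    CF decoding converts them to datetime objects.
    """
    return [(start + end) / 2.0 for start, end in time_bounds]

# Monthly data for Jan and Feb, bounds in days since the year's start:
midpoints = average_time_bounds_sketch([(0.0, 31.0), (31.0, 59.0)])
assert midpoints == [15.5, 45.0]
```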
- aospy.utils.times.datetime_or_default(date, default)[source]¶ Return a datetime-like object or a default.
Parameters: - date : None or datetime-like object or str
- default : The value to return if date is None
Returns: - `default` if `date` is `None`; otherwise, the result of `utils.times.ensure_datetime(date)`
- aospy.utils.times.ensure_datetime(obj)[source]¶ Return the object if it is a datetime-like object.
Parameters: - obj : Object to be tested.
Returns: - The original object if it is a datetime-like object
Raises: - TypeError if `obj` is not datetime-like
- aospy.utils.times.ensure_time_as_index(ds)[source]¶ Ensures that time is an indexed coordinate on relevant quantities.
Sometimes when the data we load from disk has only one timestep, the indexing of time-defined quantities in the resulting xarray.Dataset gets messed up, in that the time bounds array and data variables don’t get indexed by time, even though they should. Therefore, we need this helper function to (possibly) correct this.
Note that this must be applied before CF-conventions are decoded; otherwise it casts np.datetime64[ns] as int values.
Parameters: - ds : Dataset
Dataset with a time coordinate
Returns: - Dataset
- aospy.utils.times.ensure_time_avg_has_cf_metadata(ds)[source]¶ Add time interval length and bounds coordinates for time avg data.
If the Dataset or DataArray contains time average data, enforce that there are coordinates that track the lower and upper bounds of the time intervals, and that there is a coordinate that tracks the amount of time per time average interval.
CF conventions require that a quantity stored as time averages over time intervals must have time and time_bounds coordinates [1]. aospy further requires AVERAGE_DT for time average data, for accurate time-weighted averages, which can be inferred from the CF-required time_bounds coordinate if needed. This step should be done prior to decoding CF metadata with xarray to ensure proper computed timedeltas for different calendar types.
[1] http://cfconventions.org/cf-conventions/v1.6.0/cf-conventions.html#_data_representative_of_cells
Parameters: - ds : Dataset or DataArray
Input data
Returns: - Dataset or DataArray
Time average metadata attributes added if needed.
- aospy.utils.times.extract_months(time, months)[source]¶ Extract times within specified months of the year.
Parameters: - time : xarray.DataArray
Array of times that can be represented by numpy.datetime64 objects (i.e. the year is between 1678 and 2262).
- months : Desired months of the year to include
Returns: - xarray.DataArray of the desired times
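A sketch of the equivalent operation on plain datetime objects (aospy's version operates on an xarray.DataArray of times):

```python
import datetime

def extract_months_sketch(times, months):
    """Keep only the times whose month is in `months` (illustrative)."""
    return [t for t in times if t.month in months]

# One time per month of the year; keep only June-July-August.
times = [datetime.datetime(2000, m, 15) for m in range(1, 13)]
jja = extract_months_sketch(times, months=[6, 7, 8])
```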
- aospy.utils.times.infer_year(date)[source]¶ Given a datetime-like object or string, infer the year.
Parameters: - date : datetime-like object or str
Input date
Returns: - int
Examples
>>> infer_year('2000')
2000
>>> infer_year('2000-01')
2000
>>> infer_year('2000-01-31')
2000
>>> infer_year(datetime.datetime(2000, 1, 1))
2000
>>> infer_year(np.datetime64('2000-01-01'))
2000
>>> infer_year(DatetimeNoLeap(2000, 1, 1))
2000
- aospy.utils.times.maybe_convert_to_index_date_type(index, date)[source]¶ Convert a datetime-like object to the index’s date type.
Datetime indexing in xarray can be done using either a pandas DatetimeIndex or a CFTimeIndex. Both support partial-datetime string indexing regardless of the calendar type of the underlying data; therefore if a string is passed as a date, we return it unchanged. If a datetime-like object is provided, it will be converted to the underlying date type of the index. For a DatetimeIndex that is np.datetime64; for a CFTimeIndex that is an object of type cftime.datetime specific to the calendar used.
Parameters: - index : pd.Index
Input time index
- date : datetime-like object or str
Input datetime
Returns: - date of the type appropriate for the time index of the Dataset
- aospy.utils.times.month_indices(months)[source]¶ Convert string labels for months to integer indices.
Parameters: - months : str, int
If int, number of the desired month, where January=1, February=2, etc. If str, must match either ‘ann’ or some subset of ‘jfmamjjasond’. If ‘ann’, use all months. Otherwise, use the specified months.
Returns: - np.ndarray of integers corresponding to desired month indices
Raises: - TypeError : If months is not an int or str
See also
_month_conditional
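A sketch of how such labels can map to indices. This is an illustrative re-implementation; in particular, matching a season string as a contiguous run of 'jfmamjjasond' (doubled, to allow wraparound seasons like 'djf') is an assumption about the matching rule, which may differ from aospy's internal _month_conditional:

```python
_MONTH_LETTERS = 'jfmamjjasond'

def month_indices_sketch(months):
    """Convert month labels to integer indices (illustrative sketch)."""
    if isinstance(months, int):
        return [months]
    if months == 'ann':
        return list(range(1, 13))
    # Double the string so seasons spanning Dec-Jan (e.g. 'djf') match.
    start = (_MONTH_LETTERS * 2).find(months)
    if start < 0:
        raise ValueError('unrecognized months: %r' % (months,))
    return [(start + k) % 12 + 1 for k in range(len(months))]

assert month_indices_sketch('jja') == [6, 7, 8]
assert month_indices_sketch('djf') == [12, 1, 2]
```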
- aospy.utils.times.monthly_mean_at_each_ind(monthly_means, sub_monthly_timeseries)[source]¶ Copy monthly mean over each time index in that month.
Parameters: - monthly_means : xarray.DataArray
array of monthly means
- sub_monthly_timeseries : xarray.DataArray
array of a timeseries at sub-monthly time resolution
Returns: - xarray.DataArray with each monthly mean value from `monthly_means` repeated at each time within that month from `sub_monthly_timeseries`
See also
monthly_mean_ts
- Create timeseries of monthly mean values
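The operation amounts to looking up each sub-monthly timestamp's (year, month) in the monthly means. A plain-Python sketch (aospy's version works on aligned xarray.DataArrays rather than a dict):

```python
import datetime

def monthly_mean_at_each_ind_sketch(monthly_means, times):
    """Repeat each (year, month) mean at every sub-monthly time.

    Illustrative sketch: monthly_means maps (year, month) -> mean
    value; times is a sequence of sub-monthly datetime objects.
    """
    return [monthly_means[(t.year, t.month)] for t in times]

means = {(2000, 1): 1.5, (2000, 2): 2.5}
times = [datetime.datetime(2000, 1, 10), datetime.datetime(2000, 1, 20),
         datetime.datetime(2000, 2, 5)]
broadcast = monthly_mean_at_each_ind_sketch(means, times)
```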
- aospy.utils.times.monthly_mean_ts(arr)[source]¶ Convert a sub-monthly time-series into one of monthly means.
Also drops any months with no data in the original DataArray.
Parameters: - arr : xarray.DataArray
Timeseries of sub-monthly temporal resolution data
Returns: - xarray.DataArray
Array resampled to comprise monthly means
See also
monthly_mean_at_each_ind
- Copy monthly means to each submonthly time
- aospy.utils.times.sel_time(da, start_date, end_date)[source]¶ Subset a DataArray or Dataset for a given date range.
Ensures that data are present for full extent of requested range. Appends start and end date of the subset to the DataArray.
Parameters: - da : DataArray or Dataset
data to subset
- start_date : np.datetime64
start of date interval
- end_date : np.datetime64
end of date interval
Returns: - da : DataArray or Dataset
subsetted data
Raises: - AssertionError
If data do not exist for part or all of the requested range.
- aospy.utils.times.yearly_average(arr, dt)[source]¶ Average a sub-yearly time-series over each year.
Resulting timeseries comprises one value for each year in which the original array had valid data. Accounts for (i.e. ignores) masked values in original data when computing the annual averages.
Parameters: - arr : xarray.DataArray
The array to be averaged
- dt : xarray.DataArray
Array of the duration of each timestep
Returns: - xarray.DataArray
Has the same shape and mask as the original `arr`, except for the time dimension, which is truncated to one value for each year that `arr` spanned.
utils.vertcoord¶
Utility functions for dealing with vertical coordinates.
- aospy.utils.vertcoord.d_deta_from_pfull(arr)[source]¶ Compute ∂/∂η of the array on full hybrid levels.
η is the model vertical coordinate, and its value is assumed to simply increment by 1 from 0 at the surface upwards. The data to be differenced is assumed to be defined at full pressure levels.
Parameters: - arr : xarray.DataArray containing the ‘pfull’ dim
Returns: - deriv : xarray.DataArray with the derivative along ‘pfull’ computed via 2nd order centered differencing.
- aospy.utils.vertcoord.d_deta_from_phalf(arr, pfull_coord)[source]¶ Compute pressure level thickness from half level pressures.
- aospy.utils.vertcoord.does_coord_increase_w_index(arr)[source]¶ Determine if the array values increase with the index.
Useful, e.g., for pressure, which sometimes is indexed surface to TOA and sometimes the opposite.
- aospy.utils.vertcoord.dp_from_p(p, ps, p_top=0.0, p_bot=110000.0)[source]¶ Get level thickness of pressure data, incorporating surface pressure.
Level edges are defined as halfway between the levels, as well as the user-specified uppermost and lowermost values. The dp of levels whose bottom pressure is less than the surface pressure is not changed by ps, since they don’t intersect the surface. If ps is in between a level’s top and bottom pressures, then its dp becomes the pressure difference between its top and ps. If ps is less than a level’s top and bottom pressures, then that level is underground and its values are masked.
Note that postprocessing routines (e.g. at GFDL) typically mask out data wherever the surface pressure is less than the level’s given value, not the level’s upper edge. This masks out more levels than the method implemented here.
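The halfway-edge construction (before any surface-pressure adjustment) can be sketched as follows; this illustrative version assumes level-center pressures in Pa ordered top to bottom, and omits the masking and ps handling that dp_from_p performs:

```python
def dp_from_edges_sketch(p, p_top=0.0, p_bot=101325.0):
    """Thickness of each level, with edges halfway between centers.

    Illustrative sketch only: no surface-pressure adjustment.
    p: pressures at level centers, in Pa, ordered top to bottom.
    """
    edges = [p_top]
    edges += [(p[i] + p[i + 1]) / 2.0 for i in range(len(p) - 1)]
    edges.append(p_bot)
    return [edges[i + 1] - edges[i] for i in range(len(p))]

dp = dp_from_edges_sketch([20000.0, 50000.0, 85000.0])
# By construction the thicknesses sum to p_bot - p_top.
assert abs(sum(dp) - 101325.0) < 1e-6
```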
- aospy.utils.vertcoord.dp_from_ps(bk, pk, ps, pfull_coord)[source]¶ Compute pressure level thickness from surface pressure.
- aospy.utils.vertcoord.get_dim_name(arr, names)[source]¶ Determine if an object has an attribute name matching a given list.
- aospy.utils.vertcoord.integrate(arr, ddim, dim=False, is_pressure=False)[source]¶ Integrate along the given dimension.
- aospy.utils.vertcoord.level_thickness(p, p_top=0.0, p_bot=101325.0)[source]¶ Calculate the thickness, in Pa, of each pressure level.
Assumes that the pressure values given are at the center of that model level, except for the lowest value (typically 1000 hPa), which is the bottom boundary. The uppermost level extends to 0 hPa.
Unlike dp_from_p, this does not incorporate the surface pressure.
- aospy.utils.vertcoord.pfull_from_ps(bk, pk, ps, pfull_coord)[source]¶ Compute pressure at full levels from surface pressure.
- aospy.utils.vertcoord.phalf_from_ps(bk, pk, ps)[source]¶ Compute pressure of half levels of hybrid sigma-pressure coordinates.
- aospy.utils.vertcoord.replace_coord(arr, old_dim, new_dim, new_coord)[source]¶ Replace a coordinate with a new one; the new and old coordinates must have the same shape.
- aospy.utils.vertcoord.to_pascal(arr, is_dp=False)[source]¶ Force data with units either hPa or Pa to be in Pa.
- aospy.utils.vertcoord.to_pfull_from_phalf(arr, pfull_coord)[source]¶ Compute data at full pressure levels from values at half levels.
- aospy.utils.vertcoord.to_phalf_from_pfull(arr, val_toa=0, val_sfc=0)[source]¶ Compute data at half pressure levels from values at full levels.
Could be the pressure array itself, but it could also be any other data defined at pressure levels. Requires specification of values at surface and top of atmosphere.