pygmt.blockmedian

pygmt.blockmedian(data=None, x=None, y=None, z=None, output_type='pandas', outfile=None, **kwargs)

Block average (x, y, z) data tables by median estimation.

Reads arbitrarily located (x, y, z) triplets [or optionally weighted quadruplets (x, y, z, w)] and writes to the output a median position and value for every non-empty block in a grid region defined by the region and spacing parameters.

Takes a matrix, (x, y, z) triplets, or a file name as input.

Must provide either data or x, y, and z.
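
For illustration, a minimal sketch of the two input forms (the coordinate values and the 1-degree region below are made up purely for this example):

>>> import numpy as np
>>> import pandas as pd
>>> import pygmt
>>> x = np.array([245.1, 245.2, 245.3])
>>> y = np.array([20.1, 20.2, 20.3])
>>> z = np.array([-1500.0, -1520.0, -1480.0])
>>> # Pass separate 1-D arrays ...
>>> out_xyz = pygmt.blockmedian(
...     x=x, y=y, z=z, region=[245, 246, 20, 21], spacing="5m"
... )
>>> # ... or pass a single table-like object such as a pandas.DataFrame
>>> df = pd.DataFrame({"x": x, "y": y, "z": z})
>>> out_data = pygmt.blockmedian(
...     data=df, region=[245, 246, 20, 21], spacing="5m"
... )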

Full option list at https://docs.generic-mapping-tools.org/6.5/blockmedian.html

Aliases:

  • I = spacing

  • R = region

  • V = verbose

  • a = aspatial

  • b = binary

  • d = nodata

  • e = find

  • f = coltypes

  • h = header

  • i = incols

  • o = outcols

  • r = registration

  • w = wrap

Parameters:
  • data (str, numpy.ndarray, pandas.DataFrame, xarray.Dataset, or geopandas.GeoDataFrame) – Pass in (x, y, z) or (longitude, latitude, elevation) values by providing a file name to an ASCII data table, a 2-D numpy.ndarray, a pandas.DataFrame, an xarray.Dataset made up of 1-D xarray.DataArray data variables, or a geopandas.GeoDataFrame containing the tabular data.

  • x/y/z (1-D arrays) – Arrays of x and y coordinates and values z of the data points.

  • output_type (Literal['pandas', 'numpy', 'file'], default: 'pandas') –

    Desired output type of the result data.

    • pandas will return a pandas.DataFrame object.

    • numpy will return a numpy.ndarray object.

    • file will save the result to the file specified by the outfile parameter.

  • outfile (str | None, default: None) – File name for saving the result data. Required if output_type="file". If specified, output_type will be forced to be "file".
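
    As an illustration (a sketch re-using the sample bathymetry table from the example at the bottom of this page; the output file name is hypothetical):

    >>> import pygmt
    >>> data = pygmt.datasets.load_sample_data(name="bathymetry")
    >>> # Request a numpy.ndarray instead of the default pandas.DataFrame
    >>> arr = pygmt.blockmedian(
    ...     data=data, region=[245, 255, 20, 30], spacing="5m", output_type="numpy"
    ... )
    >>> # Write directly to a (hypothetical) file; output_type is forced to "file"
    >>> pygmt.blockmedian(
    ...     data=data,
    ...     region=[245, 255, 20, 30],
    ...     spacing="5m",
    ...     outfile="bathymetry_blockmedian.txt",
    ... )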

  • spacing (float, str, or list) –

    x_inc[+e|n][/y_inc[+e|n]]. x_inc [and optionally y_inc] is the grid spacing.

    • Geographical (degrees) coordinates: Optionally, append an increment unit. Choose among m to indicate arc-minutes or s to indicate arc-seconds. If one of the units e, f, k, M, n or u is appended instead, the increment is assumed to be given in meter, foot, km, mile, nautical mile or US survey foot, respectively, and will be converted to the equivalent degrees longitude at the middle latitude of the region (the conversion depends on PROJ_ELLIPSOID). If y_inc is given but set to 0 it will be reset equal to x_inc; otherwise it will be converted to degrees latitude.

    • All coordinates: If +e is appended then the corresponding max x (east) or y (north) may be slightly adjusted to fit exactly the given increment [by default the increment may be adjusted slightly to fit the given domain]. Finally, instead of giving an increment you may specify the number of nodes desired by appending +n to the supplied integer argument; the increment is then recalculated from the number of nodes, the registration, and the domain. The resulting increment value depends on whether you have selected a gridline-registered or pixel-registered grid; see GMT File Formats for details.

    Note: If region=grdfile is used then the grid spacing and the registration have already been initialized; use spacing and registration to override these values.

  • region (str or list) – xmin/xmax/ymin/ymax[+r][+uunit]. Specify the region of interest.
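
    For example (a sketch with the same sample bathymetry table; the string form of region below is equivalent to the list form [245, 255, 20, 30], and spacing="10m/5m" requests 10 arc-minute bins in x and 5 arc-minute bins in y):

    >>> import pygmt
    >>> data = pygmt.datasets.load_sample_data(name="bathymetry")
    >>> out = pygmt.blockmedian(data=data, region="245/255/20/30", spacing="10m/5m")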

  • verbose (bool or str) –

    Select verbosity level [Default is w], which modulates the messages written to stderr. Choose among 7 levels of verbosity:

    • q - Quiet, not even fatal error messages are produced

    • e - Error messages only

    • w - Warnings [Default]

    • t - Timings (report runtimes for time-intensive algorithms)

    • i - Informational messages (same as verbose=True)

    • c - Compatibility warnings

    • d - Debugging messages

  • aspatial (bool or str) – [col=]name[,…]. Control how aspatial data are handled during input and output. Full documentation is at https://docs.generic-mapping-tools.org/6.5/gmt.html#aspatial-full.

  • binary (bool or str) –

    i|o[ncols][type][w][+l|b]. Select native binary input (using binary="i") or output (using binary="o"), where ncols is the number of data columns of type, which must be one of:

    • c - int8_t (1-byte signed char)

    • u - uint8_t (1-byte unsigned char)

    • h - int16_t (2-byte signed int)

    • H - uint16_t (2-byte unsigned int)

    • i - int32_t (4-byte signed int)

    • I - uint32_t (4-byte unsigned int)

    • l - int64_t (8-byte signed int)

    • L - uint64_t (8-byte unsigned int)

    • f - 4-byte single-precision float

    • d - 8-byte double-precision float

    • x - use to skip ncols anywhere in the record

    For records with mixed types, append additional comma-separated combinations of ncols type (no space). The following modifiers are supported:

    • w after any item to force byte-swapping.

    • +l|b to indicate that the entire data file should be read as little- or big-endian, respectively.

    Full documentation is at https://docs.generic-mapping-tools.org/6.5/gmt.html#bi-full.

  • nodata (str) – i|onodata. Substitute specific values with NaN (for tabular data). For example, nodata="-9999" will replace all values equal to -9999 with NaN during input and all NaN values with -9999 during output. Prepend i to the nodata value for input columns only. Prepend o to the nodata value for output columns only.

  • find (str) – [~]"pattern" | [~]/regexp/[i]. Only pass records that match the given pattern or regular expressions [Default processes all records]. Prepend ~ to the pattern or regexp to instead only pass data expressions that do not match the pattern. Append i for case insensitive matching. This does not apply to headers or segment headers.

  • coltypes (str) – [i|o]colinfo. Specify data types of input and/or output columns (time or geographical data). Full documentation is at https://docs.generic-mapping-tools.org/6.5/gmt.html#f-full.

  • header (str) –

    [i|o][n][+c][+d][+msegheader][+rremark][+ttitle]. Specify that input and/or output file(s) have n header records [Default is 0]. Prepend i if only the primary input should have header records. Prepend o to control the writing of header records, with the following modifiers supported:

    • +d to remove existing header records.

    • +c to add a header comment with column names to the output [Default is no column names].

    • +m to add a segment header segheader to the output after the header block [Default is no segment header].

    • +r to add a remark comment to the output [Default is no comment]. The remark string may contain \n to indicate line-breaks.

    • +t to add a title comment to the output [Default is no title]. The title string may contain \n to indicate line-breaks.

    Blank lines and lines starting with # are always skipped.

  • incols (str or 1-D array) –

    Specify data columns for primary input in arbitrary order. Columns can be repeated and columns not listed will be skipped [Default reads all columns in order, starting with the first (i.e., column 0)].

    • For 1-D array: specify individual columns in input order (e.g., incols=[1,0] for the 2nd column followed by the 1st column).

    • For str: specify individual columns or column ranges in the format start[:inc]:stop, where inc defaults to 1 if not specified, with columns and/or column ranges separated by commas (e.g., incols="0:2,4+l" to input the first three columns followed by the log-transformed 5th column). To read from a given column until the end of the record, leave off stop when specifying the column range. To read trailing text, add the column t. Append the word number to t to ingest only a single word from the trailing text. Instead of specifying columns, use incols="n" to simply read numerical input and skip trailing text. Optionally, append one of the following modifiers to any column or column range to transform the input columns:

      • +l to take the log10 of the input values.

      • +d to divide the input values by the factor divisor [Default is 1].

      • +s to multiply the input values by the factor scale [Default is 1].

      • +o to add the given offset to the input values [Default is 0].
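
    As a brief sketch (stations.txt is a hypothetical ASCII table whose columns are ordered latitude, longitude, value, so they are reordered to x, y, z on input):

    >>> import pygmt
    >>> # Columns 1 (longitude), 0 (latitude), and 2 (value) are read, in that order
    >>> out = pygmt.blockmedian(
    ...     data="stations.txt",
    ...     region=[245, 255, 20, 30],
    ...     spacing="5m",
    ...     incols=[1, 0, 2],
    ... )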

  • outcols (str or 1-D array) –

    cols[,…][,t[word]]. Specify data columns for primary output in arbitrary order. Columns can be repeated and columns not listed will be skipped [Default writes all columns in order, starting with the first (i.e., column 0)].

    • For 1-D array: specify individual columns in output order (e.g., outcols=[1,0] for the 2nd column followed by the 1st column).

    • For str: specify individual columns or column ranges in the format start[:inc]:stop, where inc defaults to 1 if not specified, with columns and/or column ranges separated by commas (e.g., outcols="0:2,4" to output the first three columns followed by the 5th column). To write from a given column until the end of the record, leave off stop when specifying the column range. To write trailing text, add the column t. Append the word number to t to write only a single word from the trailing text. Instead of specifying columns, use outcols="n" to simply write numerical output and skip trailing text. Note: If incols is also used then the columns given to outcols correspond to the order after the incols selection has taken place.

  • registration (str) – g|p. Force gridline (g) or pixel (p) node registration [Default is gridline].

  • wrap (str) –

    y|a|w|d|h|m|s|cperiod[/phase][+ccol]. Convert the input x-coordinate to a cyclical coordinate, or a different column if selected via +ccol. The following cyclical coordinate transformations are supported:

    • y - yearly cycle (normalized)

    • a - annual cycle (monthly)

    • w - weekly cycle (day)

    • d - daily cycle (hour)

    • h - hourly cycle (minute)

    • m - minute cycle (second)

    • s - second cycle (second)

    • c - custom cycle (normalized)

    Full documentation is at https://docs.generic-mapping-tools.org/6.5/gmt.html#w-full.

Return type:

DataFrame | ndarray | None

Returns:

ret – Return type depends on outfile and output_type:

  • None if outfile is set (output will be stored in the file set by outfile)

  • pandas.DataFrame or numpy.ndarray if outfile is not set (depends on output_type)

Example

>>> import pygmt
>>> # Load a table of ship observations of bathymetry off Baja California
>>> data = pygmt.datasets.load_sample_data(name="bathymetry")
>>> # Calculate block median values within 5 by 5 arc-minute bins
>>> data_bmedian = pygmt.blockmedian(
...     data=data, region=[245, 255, 20, 30], spacing="5m"
... )