pygmt.triangulate.delaunay_triples

static triangulate.delaunay_triples(data=None, x=None, y=None, z=None, *, output_type='pandas', outfile=None, **kwargs)

Delaunay triangle-based gridding of Cartesian data.

Reads in x,y[,z] data and performs Delaunay triangulation, i.e., it finds how the points should be connected to give the most equilateral triangulation possible. If a map projection is chosen (by passing region and projection), it is applied before the triangulation is calculated. The actual algorithm used in the triangulations is either that of Watson [1982] or Shewchuk [1996] [Default if installed; type gmt get GMT_TRIANGULATE on the command line to see which method is selected].

Must provide either data or x, y, and z.

Full option list at https://docs.generic-mapping-tools.org/6.5/triangulate.html
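
For illustration, a minimal sketch using made-up Cartesian points (the coordinate values below are arbitrary); each returned row is expected to hold the indices of the three vertices of one Delaunay triangle:

  import numpy as np
  import pygmt

  # Five scattered Cartesian points (arbitrary example values)
  x = np.array([2.0, 4.0, 1.5, 2.5, 3.0])
  y = np.array([1.0, 2.0, 3.0, 4.0, 2.5])
  z = np.array([5.0, 7.5, 6.0, 8.0, 7.0])

  # One row per Delaunay triangle, giving the indices of its vertices
  triangles = pygmt.triangulate.delaunay_triples(x=x, y=y, z=z)
  print(triangles)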

Aliases:

  • I = spacing

  • J = projection

  • R = region

  • V = verbose

  • b = binary

  • d = nodata

  • e = find

  • f = coltypes

  • h = header

  • i = incols

  • r = registration

  • s = skiprows

  • w = wrap

Parameters:
  • x/y/z (np.ndarray) – Arrays of x and y coordinates and values z of the data points.

  • data (str, numpy.ndarray, pandas.DataFrame, xarray.Dataset, or geopandas.GeoDataFrame) – Pass in (x, y, z) or (longitude, latitude, elevation) values by providing a file name to an ASCII data table, a 2-D numpy.ndarray, a pandas.DataFrame, an xarray.Dataset made up of 1-D xarray.DataArray data variables, or a geopandas.GeoDataFrame containing the tabular data.
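
    For illustration, a sketch of the two equivalent input forms, using a small made-up table:

      import pandas as pd
      import pygmt

      # Made-up tabular data
      df = pd.DataFrame(
          {
              "x": [0.0, 1.0, 0.5, 1.5],
              "y": [0.0, 0.0, 1.0, 1.0],
              "z": [1.0, 2.0, 3.0, 4.0],
          }
      )

      # Pass the whole table at once ...
      tri_table = pygmt.triangulate.delaunay_triples(data=df)
      # ... or pass the columns individually
      tri_arrays = pygmt.triangulate.delaunay_triples(
          x=df["x"].to_numpy(), y=df["y"].to_numpy(), z=df["z"].to_numpy()
      )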

  • projection (str) – projcode[projparams/]width|scale. Select map projection.

  • region (str or list) – xmin/xmax/ymin/ymax[+r][+uunit]. Specify the region of interest.
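
    For illustration, a sketch that projects geographic points before triangulating; the Mercator projection and region below are arbitrary choices:

      import numpy as np
      import pygmt

      lon = np.array([-4.5, -2.0, 0.5, 3.0, 4.5])
      lat = np.array([-3.0, 2.5, -1.0, 4.0, 0.0])
      elev = np.array([10.0, 12.0, 9.5, 14.0, 11.0])

      # Apply a 10 cm wide Mercator projection over the given region
      # before the triangulation is computed
      triangles = pygmt.triangulate.delaunay_triples(
          x=lon, y=lat, z=elev, region=[-5, 5, -5, 5], projection="M10c"
      )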

  • output_type (Literal['pandas', 'numpy', 'file'], default: 'pandas') –

    Desired output type of the result data.

    • pandas will return a pandas.DataFrame object.

    • numpy will return a numpy.ndarray object.

    • file will save the result to the file specified by the outfile parameter.

  • outfile (str | None, default: None) – File name for saving the result data. Required if output_type="file". If specified, output_type will be forced to be "file".
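
    For illustration, a sketch that writes the result to a file instead of returning it ("triangles.txt" is a hypothetical output path):

      import numpy as np
      import pygmt

      rng = np.random.default_rng()
      x, y, z = rng.random((3, 20))

      # Returns None; the result is written to the hypothetical file
      pygmt.triangulate.delaunay_triples(
          x=x, y=y, z=z, output_type="file", outfile="triangles.txt"
      )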

  • verbose (bool or str) –

    Select verbosity level [Default is w], which modulates the messages written to stderr. Choose among 7 levels of verbosity:

    • q - Quiet, not even fatal error messages are produced

    • e - Error messages only

    • w - Warnings [Default]

    • t - Timings (report runtimes for time-intensive algorithms)

    • i - Informational messages (same as verbose=True)

    • c - Compatibility warnings

    • d - Debugging messages
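
    For illustration, a short sketch requesting debugging output:

      import numpy as np
      import pygmt

      x, y, z = np.random.default_rng().random((3, 10))
      # Emit debugging messages to stderr while triangulating
      triangles = pygmt.triangulate.delaunay_triples(x=x, y=y, z=z, verbose="d")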

  • binary (bool or str) –

    i|o[ncols][type][w][+l|b]. Select native binary input (using binary="i") or output (using binary="o"), where ncols is the number of data columns of type, which must be one of:

    • c - int8_t (1-byte signed char)

    • u - uint8_t (1-byte unsigned char)

    • h - int16_t (2-byte signed int)

    • H - uint16_t (2-byte unsigned int)

    • i - int32_t (4-byte signed int)

    • I - uint32_t (4-byte unsigned int)

    • l - int64_t (8-byte signed int)

    • L - uint64_t (8-byte unsigned int)

    • f - 4-byte single-precision float

    • d - 8-byte double-precision float

    • x - use to skip ncols anywhere in the record

    For records with mixed types, append additional comma-separated combinations of ncols type (no space). The following modifiers are supported:

    • w after any item to force byte-swapping.

    • +l|b to indicate that the entire data file should be read as little- or big-endian, respectively.

    Full documentation is at https://docs.generic-mapping-tools.org/6.5/gmt.html#bi-full.
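
    For illustration, a sketch reading native binary input ("points.b" is a hypothetical file assumed to hold three double-precision columns per record):

      import pygmt

      # Binary input, 3 columns, 8-byte doubles
      triangles = pygmt.triangulate.delaunay_triples(data="points.b", binary="i3d")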

  • nodata (str) – i|onodata. Substitute specific values with NaN (for tabular data). For example, nodata="-9999" will replace all values equal to -9999 with NaN during input and all NaN values with -9999 during output. Prepend i to the nodata value for input columns only. Prepend o to the nodata value for output columns only.
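
    For illustration, a sketch that maps a sentinel value to NaN on input ("samples.txt" is a hypothetical file):

      import pygmt

      # Replace -9999 with NaN in the input columns only
      triangles = pygmt.triangulate.delaunay_triples(data="samples.txt", nodata="i-9999")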

  • find (str) – [~]“pattern” | [~]/regexp/[i]. Only pass records that match the given pattern or regular expressions [Default processes all records]. Prepend ~ to the pattern or regexp to instead only pass data expressions that do not match the pattern. Append i for case insensitive matching. This does not apply to headers or segment headers.
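
    For illustration, a sketch that keeps only matching records ("stations.txt" and the pattern are hypothetical):

      import pygmt

      # Only pass records whose text contains the word "GPS"
      triangles = pygmt.triangulate.delaunay_triples(data="stations.txt", find="GPS")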

  • coltypes (str) – [i|o]colinfo. Specify data types of input and/or output columns (time or geographical data). Full documentation is at https://docs.generic-mapping-tools.org/6.5/gmt.html#f-full.
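
    For illustration, a sketch declaring geographic input columns ("points.txt" is a hypothetical file):

      import pygmt

      # Treat the first two input columns as longitude and latitude
      triangles = pygmt.triangulate.delaunay_triples(data="points.txt", coltypes="g")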

  • header (str) –

    [i|o][n][+c][+d][+msegheader][+rremark][+ttitle]. Specify that input and/or output file(s) have n header records [Default is 0]. Prepend i if only the primary input should have header records. Prepend o to control the writing of header records, with the following modifiers supported:

    • +d to remove existing header records.

    • +c to add a header comment with column names to the output [Default is no column names].

    • +m to add a segment header segheader to the output after the header block [Default is no segment header].

    • +r to add a remark comment to the output [Default is no comment]. The remark string may contain \n to indicate line-breaks.

    • +t to add a title comment to the output [Default is no title]. The title string may contain \n to indicate line-breaks.

    Blank lines and lines starting with # are always skipped.
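
    For illustration, a sketch skipping header records on input ("table.txt" is a hypothetical file with two header lines):

      import pygmt

      # Skip the first two header records of the primary input
      triangles = pygmt.triangulate.delaunay_triples(data="table.txt", header="i2")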

  • incols (str or 1-D array) –

    Specify data columns for primary input in arbitrary order. Columns can be repeated and columns not listed will be skipped [Default reads all columns in order, starting with the first (i.e., column 0)].

    • For 1-D array: specify individual columns in input order (e.g., incols=[1,0] for the 2nd column followed by the 1st column).

    • For str: specify individual columns or column ranges in the format start[:inc]:stop, where inc defaults to 1 if not specified, with columns and/or column ranges separated by commas (e.g., incols="0:2,4+l" to input the first three columns followed by the log-transformed 5th column). To read from a given column until the end of the record, leave off stop when specifying the column range. To read trailing text, add the column t. Append the word number to t to ingest only a single word from the trailing text. Instead of specifying columns, use incols="n" to simply read numerical input and skip trailing text. Optionally, append one of the following modifiers to any column or column range to transform the input columns:

      • +l to take the log10 of the input values.

      • +d to divide the input values by the factor divisor [Default is 1].

      • +s to multiply the input values by the factor scale [Default is 1].

      • +o to add the given offset to the input values [Default is 0].
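
    For illustration, a sketch reordering and scaling input columns ("swapped.txt" is a hypothetical file whose columns are ordered y, x, z, with z in meters):

      import pygmt

      # Read column 1 as x, column 0 as y, and column 2 scaled from m to km
      triangles = pygmt.triangulate.delaunay_triples(
          data="swapped.txt", incols="1,0,2+s0.001"
      )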

  • skiprows (bool or str) –

    [cols][+a][+r]. Suppress output for records whose z-value equals NaN [Default outputs all records]. Optionally, supply a comma-separated list of all columns or column ranges to consider for this NaN test [Default only considers the third data column (i.e., cols = 2)]. Column ranges must be given in the format start[:inc]:stop, where inc defaults to 1 if not specified. The following modifiers are supported:

    • +r to reverse the suppression, i.e., only output the records whose z-value equals NaN.

    • +a to suppress the output of the record if just one or more of the columns equal NaN [Default skips record only if values in all specified cols equal NaN].
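
    For illustration, a sketch dropping records with NaN z-values:

      import numpy as np
      import pygmt

      x = np.array([0.0, 1.0, 0.5, 1.5, 2.0])
      y = np.array([0.0, 0.0, 1.0, 1.0, 0.5])
      z = np.array([1.0, np.nan, 3.0, 4.0, 2.0])

      # Suppress the record whose z-value (column 2) is NaN
      triangles = pygmt.triangulate.delaunay_triples(x=x, y=y, z=z, skiprows=True)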

  • wrap (str) –

    y|a|w|d|h|m|s|cperiod[/phase][+ccol]. Convert the input x-coordinate to a cyclical coordinate, or a different column if selected via +ccol. The following cyclical coordinate transformations are supported:

    • y - yearly cycle (normalized)

    • a - annual cycle (monthly)

    • w - weekly cycle (day)

    • d - daily cycle (hour)

    • h - hourly cycle (minute)

    • m - minute cycle (second)

    • s - second cycle (second)

    • c - custom cycle (normalized)

    Full documentation is at https://docs.generic-mapping-tools.org/6.5/gmt.html#w-full.
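
    For illustration, a sketch folding a time coordinate onto a daily cycle ("obs.txt" is a hypothetical file whose first column holds absolute timestamps):

      import pygmt

      # Interpret column 0 as absolute time and convert it to hour-of-day
      triangles = pygmt.triangulate.delaunay_triples(
          data="obs.txt", coltypes="0T", wrap="d"
      )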

Return type:

DataFrame | ndarray | None

Returns:

ret – Return type depends on outfile and output_type:

  • None if outfile is set (output will be stored in file set by outfile)

  • pandas.DataFrame or numpy.ndarray if outfile is not set (depends on output_type)

Note

For geographic data with global or very large extent you should consider sphtriangulate instead since triangulate is a Cartesian or small-geographic area operator and is unaware of periodic or polar boundary conditions.