The H5D_CREATE function creates a dataset at the specified location.

Example


See the example under H5F_CREATE.

Syntax


Result = H5D_CREATE(Loc_id, Name, Datatype_id, Dataspace_id [, CHUNK_DIMENSIONS=vector [, GZIP=value [, /SHUFFLE]]])

Return Value


The Result is the dataset identifier. Release this identifier with the H5D_CLOSE procedure when it is no longer needed.

Arguments


Loc_id

An integer giving the identifier of the file or group within which to create the dataset.

Name

A string giving the name of the dataset to create.

Datatype_id

An integer giving the datatype identifier to use when creating the dataset.

Dataspace_id

An integer giving the dataspace identifier to use when creating the dataset.

Keywords


CHUNK_DIMENSIONS

A vector containing the chunk dimensions for the dataset. CHUNK_DIMENSIONS must have the same number of elements as the number of dimensions in the dataspace specified in Dataspace_id. This keyword must be set if the dataspace specified in Dataspace_id has unlimited or extendable dimensions.

Note: Choosing appropriate values for CHUNK_DIMENSIONS is not always straightforward; the best choice depends on the size of the dataspace, the size of the data, how the data will be read, the operating system, and many other factors. Improper chunk sizes can drastically inflate the size of the resulting file or greatly slow reading of the data. For a fixed (non-extendable) dimension, a good starting point is a chunk size that divides evenly into the dimension size. Chunk values of less than 100 for dataspaces with dimensions greater than 1000 can result in bloated file sizes.
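As an illustrative sketch of these guidelines (the file and dataset names are hypothetical), the following code creates a dataspace with one fixed and one unlimited dimension, which makes CHUNK_DIMENSIONS mandatory; the chunk size along the fixed dimension divides evenly into that dimension's size:

```idl
; Dataspace with one fixed and one unlimited dimension
; (-1 in MAX_DIMENSIONS marks a dimension as unlimited).
space_id = H5S_CREATE_SIMPLE([1000, 100], MAX_DIMENSIONS=[1000, -1])
type_id = H5T_IDL_CREATE(0.0)        ; floating-point datatype
fid = H5F_CREATE('extend_demo.h5')   ; hypothetical file name

; CHUNK_DIMENSIONS is required here because the dataspace has an
; unlimited dimension. A chunk size of 100 divides evenly into the
; fixed dimension of 1000, avoiding partially filled edge chunks.
dset_id = H5D_CREATE(fid, 'grow', type_id, space_id, $
   CHUNK_DIMENSIONS=[100, 100])

; Release all identifiers.
H5D_CLOSE, dset_id
H5S_CLOSE, space_id
H5T_CLOSE, type_id
H5F_CLOSE, fid
```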

GZIP

An integer from 0 to 9, inclusive, giving the level of gzip compression applied to the dataset. Lower compression levels are faster but compress less. If CHUNK_DIMENSIONS is not specified, this keyword is ignored.

SHUFFLE

If set, the shuffle filter is applied to the dataset. If GZIP is not specified, this keyword is ignored.

The shuffle filter de-interlaces a block of data by reordering its bytes: all bytes from the first byte position of each data element are placed together in one block; all bytes from the second byte position of each data element are placed together in a second block; and so on. For example, given three data elements of a 4-byte datatype stored as 012301230123, shuffling reorders the data as 000111222333. This can be a valuable step in an effective compression algorithm, because the bytes in each byte position are often closely related to each other and grouping them can increase the compression ratio. When the shuffle filter is applied to a dataset, the compression ratio achieved is often superior to that achieved without it.
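A minimal sketch combining the three keywords, assuming a hypothetical file name; GZIP and SHUFFLE take effect only because CHUNK_DIMENSIONS is also set:

```idl
; Create a file and a 500 x 500 integer dataset (hypothetical names).
fid = H5F_CREATE('compress_demo.h5')
data = INTARR(500, 500)
type_id = H5T_IDL_CREATE(data)
space_id = H5S_CREATE_SIMPLE([500, 500])

; Shuffle the bytes, then gzip each chunk at level 6.
; Without CHUNK_DIMENSIONS, GZIP and SHUFFLE would be ignored.
dset_id = H5D_CREATE(fid, 'compressed', type_id, space_id, $
   CHUNK_DIMENSIONS=[100, 100], GZIP=6, /SHUFFLE)

; Write the data and release all identifiers.
H5D_WRITE, dset_id, data
H5D_CLOSE, dset_id
H5S_CLOSE, space_id
H5T_CLOSE, type_id
H5F_CLOSE, fid
```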

Version History


6.2

Introduced

See Also


H5D_CLOSE, H5S_CREATE_SIMPLE, H5T_IDL_CREATE