NetCDF-4 C Library Loading

To write NetCDF-4 files, you must have the NetCDF-4 C library (libnetcdf), version 4.3.1 or above, available on your system, along with all supporting libraries (libhdf5, libz, etc.). The details differ for each operating system, and our experiences (so far) are documented below.


For all platforms, we strongly recommend 64-bit Java, if you can run it. Also, be sure to use the latest version, as security improvements are constantly being made.



The easiest way to get libnetcdf is through a package management program, such as rpm, yum, apt, and others. Details will vary with each program, but "netcdf" is usually the package name you want.
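For example, on common Linux distributions the commands look like the following (package names vary by distribution; the ones below are typical):

```shell
# Debian/Ubuntu: library plus development headers
sudo apt-get install libnetcdf-dev

# RedHat/Fedora/CentOS (the EPEL repository may be required)
sudo yum install netcdf-devel

# Verify the installed version; nc-config ships with libnetcdf
nc-config --version
```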

Build from source

Instructions for building libnetcdf from source can be found here.
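The typical autotools build sequence looks like this (a sketch, assuming libhdf5 and libz are already installed under /usr/local):

```shell
# Tell configure where to find the HDF5 headers and libraries
CPPFLAGS=-I/usr/local/include LDFLAGS=-L/usr/local/lib \
  ./configure --prefix=/usr/local

make check         # optional but recommended
sudo make install  # installs libnetcdf into /usr/local/lib
```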



As with Linux, a package manager is usually the easiest option. libnetcdf is known to be available from both Homebrew and MacPorts; "netcdf" is usually the package name you want. Here is a support question that may be useful.
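With either package manager, a single command suffices (note the different install prefixes, which matter later when setting the library path):

```shell
# Homebrew: installs under /usr/local
brew install netcdf

# MacPorts: installs under /opt/local
sudo port install netcdf
```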



For Windows, pre-built binaries are available here.


In order to use libnetcdf, the CDM must know its location, as well as the location(s) of its dependencies. These binaries will have different extensions depending on your platform: .so on Linux, .dylib on Mac, and .dll on Windows.

There are several ways to specify their location(s).

Preferred method (requires NetCDF-Java 4.5.4 or later)

Set the system library path. This is the path that the operating system will search whenever it needs to find a shared library that it doesn't already know the location of. It is not Java-, NetCDF-, or CDM-specific. As usual, details will vary with each platform.


On Linux, the system library path maps to the LD_LIBRARY_PATH environment variable. If you built from source and used the default installation directory, libnetcdf and its dependencies will all be in /usr/local/lib. If you got libnetcdf from a package manager, it may have been installed elsewhere.

Note that /usr/local/lib is often included in the default shared library search path of many flavors of Linux. Therefore, it may not be necessary to set LD_LIBRARY_PATH at all. Notable exceptions include many RedHat-derived distributions. Read this for more info.
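If you do need to set it, a line like the following in your shell startup file works (assuming the default /usr/local/lib install location):

```shell
# Prepend the libnetcdf install directory to the dynamic linker's
# search path, preserving any existing value
export LD_LIBRARY_PATH="/usr/local/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```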


On Mac, the system library path maps to the DYLD_LIBRARY_PATH environment variable. If you built from source and used the default installation directory, libnetcdf and its dependencies will all be in /usr/local/lib. They will also be installed there if you obtained them using Homebrew. MacPorts, on the other hand, installs binaries to /opt/local/lib.

Note that /usr/local/lib is part of the default library search path on Mac. Therefore, it may not be necessary to set DYLD_LIBRARY_PATH at all.
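If your libraries live somewhere else, such as MacPorts' /opt/local/lib, set the variable explicitly:

```shell
# Prepend the MacPorts library directory, preserving any existing value
export DYLD_LIBRARY_PATH="/opt/local/lib${DYLD_LIBRARY_PATH:+:$DYLD_LIBRARY_PATH}"
echo "$DYLD_LIBRARY_PATH"
```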


On Windows, the system library path maps to the PATH environment variable. To find libnetcdf and its dependencies, you'll want to add $NC4_INSTALL_DIR/bin, $NC4_INSTALL_DIR/deps/$ARCH/bin, and $NC4_INSTALL_DIR/deps/$ARCH/lib to the PATH variable. NC4_INSTALL_DIR is the location where you installed libnetcdf, and ARCH is its architecture (either "w32" or "x64").
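For illustration, here is that PATH construction in Unix-style shell syntax (on Windows itself you would edit PATH through System Properties or with set in cmd.exe); the install directory below is a made-up example:

```shell
NC4_INSTALL_DIR="/c/netcdf"   # hypothetical install location
ARCH="x64"                    # or "w32" for 32-bit builds

# All three directories must be searchable for the DLLs to be found
PATH="$NC4_INSTALL_DIR/bin:$NC4_INSTALL_DIR/deps/$ARCH/bin:$NC4_INSTALL_DIR/deps/$ARCH/lib:$PATH"
echo "$PATH"
```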

Alternate methods

The following alternatives are Java- and/or CDM-specific. To use them, libnetcdf and all of its dependencies must live in the same directory; if that is not the case in your current configuration, you must copy them all to the same place manually. This is a particular issue on Windows, because the libraries are installed in separate locations by default.

In addition to the library path, the CDM also needs to know the library name. This is almost always "netcdf", unless you've renamed it.
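One way to supply the library path from Java is the jna.library.path system property, which JNA (and therefore the CDM) consults when loading native libraries; it must be set before the first NetCDF-4 read or write. The path below is an example, not a required value:

```java
public class SetNativeLibraryPath {
  public static void main(String[] args) {
    // Point JNA at the directory containing libnetcdf and all of its
    // dependencies. Must run before the CDM first touches NetCDF-4.
    System.setProperty("jna.library.path", "/usr/local/lib");

    System.out.println(System.getProperty("jna.library.path"));
  }
}
```

The same property can be set on the command line with -Djna.library.path=/usr/local/lib.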


If you get a message like this:

 Warning! ***HDF5 library version mismatched error***
 The HDF5 header files used to compile this application do not match
 the version used by the HDF5 library to which this application is linked.
 Data corruption or segmentation faults may occur if the application continues.
 This can happen when an application was compiled by one version of HDF5 but
 linked with a different version of static or shared HDF5 library.
 You should recompile the application or check your shared library related
 settings such as 'LD_LIBRARY_PATH'.
 You can, at your own risk, disable this warning by setting the environment
 variable 'HDF5_DISABLE_VERSION_CHECK' to a value of '1'.
 Setting it to 2 or higher will suppress the warning messages totally.
 Headers are 1.8.10, library is 1.8.5

Make sure that you don't have an old version of libhdf5 in your system library path.
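On Linux, these commands can help track down stray copies of libhdf5 (a diagnostic sketch; the grep may legitimately find nothing):

```shell
# Directories the dynamic linker is currently told to search
echo "$LD_LIBRARY_PATH"

# All copies of libhdf5 the linker cache knows about
ldconfig -p | grep libhdf5 || echo "no libhdf5 in linker cache"
```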

Writing NetCDF-4 files

Chunking Strategy (version 4.5)

When writing NetCDF-4 files, you must decide how the variables are to be chunked. In the NetCDF-Java library this is done through the use of an Nc4Chunking strategy. The possibilities currently are standard, grib, and none.

Both the standard and grib strategies allow you to override chunking on individual variables by setting the variable's _ChunkSizes attribute.

By default, the Java library will write chunked and compressed NetCDF-4 files, using the default chunking algorithm. You may pass in a null for the chunking parameter to use the default.

Default chunking strategy

For each Variable:

  1. Look for a variable attribute named "_ChunkSizes", whose value is a vector of integer chunk sizes, one for each dimension. If it exists, use it.
  2. If the variable does not have an unlimited dimension:
    • it will be chunked if the total size in bytes > Nc4ChunkingDefault.minVariableSize
    • chunk size will be fillFastest( variable.shape, Nc4ChunkingDefault.defaultChunkSize)
  3. If the variable has one or more unlimited dimensions, it will be chunked, and the chunk size will be calculated as:
    1. set unlimited dimensions to length one, then compute fillFastest( variable.shape, Nc4ChunkingDefault.defaultChunkSize)
    2. if the resulting chunk size is greater than Nc4ChunkingDefault.minChunksize, use it
    3. if not, set the unlimited dimension chunk sizes so that the resulting chunk size is close to Nc4ChunkingDefault.minChunksize. If there are N unlimited dimensions, take the Nth root, i.e., evenly divide the chunk size among the unlimited dimensions.

The fillFastest(int[] shape, int maxSize) algorithm fills the fastest-varying (rightmost) dimensions first, until the chunk size is as close to maxSize as possible without exceeding it. The net effect is that chunk sizes will be close to Nc4ChunkingDefault.defaultChunkSize, with a minimum of Nc4ChunkingDefault.minChunksize, favoring read access along the fast dimensions. Any variable with an unlimited dimension will use at least Nc4ChunkingDefault.minChunksize bytes (approximately; if compressing, unused space should be mostly eliminated).
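To make the idea concrete, here is a simplified re-implementation of the fill-fastest heuristic (an illustrative sketch, not the actual Nc4ChunkingDefault source; for simplicity it works in element counts rather than bytes):

```java
import java.util.Arrays;

public class FillFastestSketch {

  // Fill chunk sizes starting from the rightmost (fastest-varying)
  // dimension, growing the chunk until it would exceed maxSize elements.
  static int[] fillFastest(int[] shape, int maxSize) {
    int[] chunk = new int[shape.length];
    Arrays.fill(chunk, 1);
    long used = 1; // elements covered by the chunk so far
    for (int i = shape.length - 1; i >= 0; i--) {
      long take = Math.min(shape[i], maxSize / used); // stay under maxSize
      if (take < 1) take = 1;
      chunk[i] = (int) take;
      used *= take;
    }
    return chunk;
  }

  public static void main(String[] args) {
    // A daily 720x1440 grid for one year: both spatial (rightmost)
    // dimensions are taken whole, then time is partially filled.
    int[] shape = {365, 720, 1440};
    System.out.println(Arrays.toString(fillFastest(shape, 4_000_000)));
    // prints [3, 720, 1440]
  }
}
```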

Current default values (these can be overridden by the user):

By default, compression (deflate level = 5) and the shuffle filter will be used. The user can override these by:

// set deflate > 0 to compress
// set shuffle to true for the shuffle filter
Nc4Chunking chunker = Nc4ChunkingStrategy.factory(Nc4Chunking.Strategy.standard, deflateLevel, shuffle);

This document was last updated December 2014.