This project is a pure Java implementation for accessing HDF5 files. It is written from the file format specification and does not use any HDF Group code; it is *not* a wrapper around the C libraries. The file format specification is available from the HDF Group [here](https://support.hdfgroup.org/HDF5/doc/H5.format.html). More information on the format is available on [Wikipedia](https://en.wikipedia.org/wiki/Hierarchical_Data_Format).

The intention is to provide a clean Java API for accessing HDF5 data. Currently the project is targeting HDF5 read-only compatibility. For progress see the [change log](CHANGES.md).

For an example of traversing the tree inside an HDF5 file see [PrintTree.java](jhdf/src/main/java/io/jhdf/examples/PrintTree.java). For accessing attributes see [ReadAttribute.java](jhdf/src/main/java/io/jhdf/examples/ReadAttribute.java).
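
Below is a minimal sketch of what this usage can look like, based on the linked examples; the file name `example.hdf5`, the dataset path, and the `units` attribute name are placeholders rather than anything in this repository:

```java
import io.jhdf.HdfFile;
import io.jhdf.api.Attribute;
import io.jhdf.api.Dataset;
import io.jhdf.api.Group;
import io.jhdf.api.Node;

import java.nio.file.Paths;

public class Example {
	public static void main(String[] args) {
		// try-with-resources closes the file, and its memory mapping, when done
		try (HdfFile hdfFile = new HdfFile(Paths.get("example.hdf5"))) { // placeholder file name
			// The HdfFile is itself the root group, so the whole tree can be walked from it
			printGroup(hdfFile, 0);

			// Read a whole dataset; the returned Object is a Java array whose
			// element type and dimensions match the HDF5 dataset
			Dataset dataset = hdfFile.getDatasetByPath("/path/to/dataset"); // placeholder path
			Object data = dataset.getData();
			System.out.println(data);

			// Attributes are read by name from any node
			Attribute attribute = dataset.getAttribute("units"); // placeholder attribute name
			System.out.println(attribute.getData());
		}
	}

	// Recursively print the tree of nodes, in the style of PrintTree.java
	private static void printGroup(Group group, int depth) {
		for (Node node : group) {
			System.out.println("  ".repeat(depth) + node.getName());
			if (node instanceof Group) {
				printGroup((Group) node, depth + 1);
			}
		}
	}
}
```

Dataset and attribute values come back as plain Java objects (arrays for non-scalar data), so no special container types should be needed to consume them.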
## Why did I start jHDF?
Mostly it's a challenge: HDF5 is a fairly complex file format with lots of flexibility, and writing a library to access it is interesting. Also, as HDF5 is a widely used format for storing scientific, engineering, and commercial data, it seems like a good idea to be able to read HDF5 files with more than one library. In particular, JVM languages are among the most widely used, so having a native HDF5 implementation seems useful.
## Why should I use jHDF?
- Easy integration with JVM-based projects. The library is available on Maven Central and JCenter, so using it should be as easy as adding any other dependency. To use the libraries supplied by the HDF Group you need to load native code, which means you need to handle this in your build, and it complicates distributing your software on multiple platforms.
- The API is designed to be familiar to Java programmers, so hopefully it works as you might expect. (If this is not the case, open an issue with suggestions for improvement.)
- No use of JNI, so you avoid all the issues associated with calling native code from the JVM.
- Performance? Maybe. The library uses Java NIO `MappedByteBuffer`s, which should provide fast file access. In addition, when accessing chunked datasets the library is parallelized to take advantage of modern CPUs. I have seen cases where `jHDF` is significantly faster than the C libraries, but as with all performance questions it is case specific, so you will need to do your own tests on the cases you care about. If you do run tests, please post the results so everyone can benefit.
## Why should I not use jHDF?
- If you want to write HDF5 files. This is not currently supported; writing will be added in the future, but full read-only compatibility is the current goal.
- If `jHDF` does not yet support a feature you need. If this is the case you should receive an `UnsupportedHdfException` (see the sketch after this list); open an issue and support can be added. For scheduling, the features which will allow the most files to be read are prioritized. If you really want a new feature, feel free to work on it and open a PR; any help is much appreciated.
- If you want to read slices of datasets. This is a really good feature of HDF5, and one reason why it's suited to large datasets. Support will be added in the future, but currently it's not possible.
- If you want to read datasets larger than can fit in a Java array (i.e. `Integer.MAX_VALUE` elements). This issue would also be addressed by slicing.
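
As a minimal sketch of detecting an unsupported feature (assuming an unchecked `io.jhdf.exceptions.UnsupportedHdfException`, and using placeholder file and dataset paths):

```java
import io.jhdf.HdfFile;
import io.jhdf.api.Dataset;
import io.jhdf.exceptions.UnsupportedHdfException;

import java.nio.file.Paths;

public class FeatureCheck {
	public static void main(String[] args) {
		try (HdfFile hdfFile = new HdfFile(Paths.get("example.hdf5"))) { // placeholder file name
			Dataset dataset = hdfFile.getDatasetByPath("/some/dataset"); // placeholder path
			System.out.println(dataset.getData());
		} catch (UnsupportedHdfException e) {
			// The file uses an HDF5 feature jHDF cannot read yet: a good point
			// to log the message and open an issue against the project
			System.err.println("Unsupported HDF5 feature: " + e.getMessage());
		}
	}
}
```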