Merge pull request #259 from yuzawa-san/nogpu
Stop publishing GPU artifacts
yuzawa-san authored Nov 25, 2024
2 parents 82d3a3a + 583f0d5 commit c2ad8b9
Showing 5 changed files with 6 additions and 14 deletions.
9 changes: 3 additions & 6 deletions README.md
@@ -43,9 +43,7 @@ A collection of native libraries with CPU support for several common OS/architecture combinations

#### onnxruntime-gpu

[![maven](https://img.shields.io/maven-central/v/com.jyuzawa/onnxruntime-gpu)](https://search.maven.org/artifact/com.jyuzawa/onnxruntime-gpu)

A collection of native libraries with GPU support for several common OS/architecture combinations. For use as an optional runtime dependency. Include one of the OS/Architecture classifiers like `osx-x86_64` to provide specific support.
See https://github.com/yuzawa-san/onnxruntime-java/issues/258

### In your library

@@ -58,7 +56,7 @@ This puts the burden of providing a native library on your end user.
There is an example application in the `onnxruntime-sample-application` directory.
The library should use `onnxruntime` as an implementation dependency.
The application needs to have access to the native library.
You have the option of providing it via a runtime dependency using either a classifier variant from `onnxruntime-cpu` or `onnxruntime-gpu`
You have the option of providing it via a runtime dependency using a classifier variant from `onnxruntime-cpu`.
Otherwise, the Java library path will be used to load the native library.
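As a sketch, the runtime-dependency option above might look like this in a consuming application's `build.gradle` (hypothetical fragment; `1.X.0` is a version placeholder as in the sample application, and the `osx-x86_64` classifier is just one example, pick the one matching your deployment target):

```groovy
dependencies {
    // Compile against the bindings only; no native libraries are pulled in.
    implementation "com.jyuzawa:onnxruntime:1.X.0"
    // Optionally provide the CPU native library for one OS/architecture
    // combination via a classifier (macOS x86_64 shown as an example).
    runtimeOnly "com.jyuzawa:onnxruntime-cpu:1.X.0:osx-x86_64"
}
```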


@@ -74,7 +72,6 @@ Since this uses a native library, this will require the runtime to have the `--enable-native-access`
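One hedged way to pass that JVM flag from a Gradle build (assuming the consuming application uses the `application` plugin; `ALL-UNNAMED` covers classpath deployments, and you would use your module name instead when running on the module path):

```groovy
// Hypothetical application build.gradle fragment: grant native access to
// classpath code for `gradle run` and the generated start scripts.
application {
    applicationDefaultJvmArgs = ['--enable-native-access=ALL-UNNAMED']
}
```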
### Execution Providers

Only those which are exposed in the C API are supported.
The `onnxruntime-gpu` artifact supports CUDA and TensorRT, since those are built off of the GPU artifacts from the upstream project.
If you wish to use another execution provider which is present in the C API, but not in any of the artifacts from the upstream project, you can choose to bring your own onnxruntime shared library to link against.
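Bringing your own shared library typically means putting its directory on the Java library path; a minimal sketch for a Gradle-run application (the `/opt/onnxruntime/lib` path is an assumption, adjust to wherever your custom build is installed):

```groovy
// Hypothetical fragment: point the JVM at a custom-built onnxruntime
// shared library directory for local `gradle run` invocations.
tasks.named('run') {
    systemProperty 'java.library.path', '/opt/onnxruntime/lib'
}
```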

## Versioning
@@ -86,4 +83,4 @@ Upstream major version changes will typically be major version changes here.
Minor version will be bumped for smaller, but compatible changes.
Upstream minor version changes will typically be minor version changes here.

The `onnxruntime-cpu` and `onnxruntime-gpu` artifacts are versioned to match the upstream versions and depend on a minimum compatible `onnxruntime` version.
The `onnxruntime-cpu` artifacts are versioned to match the upstream versions and depend on a minimum compatible `onnxruntime` version.
2 changes: 2 additions & 0 deletions build.gradle
@@ -282,6 +282,7 @@ publishing {
artifact tasks.named("osArchJar${it}")
}
}
/*
onnxruntimeGpu(MavenPublication) {
version = ORT_JAR_VERSION
artifactId = "${rootProject.name}-gpu"
@@ -293,6 +294,7 @@ publishing {
artifact tasks.named("osArchJar${it}")
}
}
*/
onnxruntime(MavenPublication) {
from components.java
pom {
2 changes: 0 additions & 2 deletions onnxruntime-sample-application/build.gradle
@@ -9,8 +9,6 @@ dependencies {
// For the application to work, you will need to provide the native libraries.
// Optionally, provide the CPU libraries (for various OS/Architecture combinations)
// runtimeOnly "com.jyuzawa:onnxruntime-cpu:1.X.0:osx-x86_64"
// Optionally, provide the GPU libraries (for various OS/Architecture combinations)
// runtimeOnly "com.jyuzawa:onnxruntime-gpu:1.X.0:osx-x86_64"
// Alternatively, do nothing and the Java library path will be used
}

2 changes: 0 additions & 2 deletions src/main/java/com/jyuzawa/onnxruntime/OnnxRuntimeImpl.java
@@ -9,7 +9,6 @@

import com.jyuzawa.onnxruntime_extern.OrtApiBase;
import java.lang.System.Logger.Level;
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;

// NOTE: this class actually is more like OrtApiBase
@@ -22,7 +21,6 @@ enum OnnxRuntimeImpl implements OnnxRuntime {

private OnnxRuntimeImpl() {
Loader.load();
Arena scope = Arena.global();
MemorySegment segment = OrtGetApiBase();
this.ortApiVersion = ORT_API_VERSION();
MemorySegment apiAddress = OrtApiBase.GetApiFunction(segment).apply(ortApiVersion);
5 changes: 1 addition & 4 deletions src/main/java/module-info.java
@@ -11,12 +11,9 @@
* <li>The {@code onnxruntime-cpu} artifact provides support for several common operating systems / CPU architecture
* combinations. For use as an optional runtime dependency. Include one of the OS/Architecture classifiers like
* {@code osx-x86_64} to provide specific support.
* <li>The {@code onnxruntime-gpu} artifact provides GPU (CUDA) support for several common operating systems / CPU
* architecture combinations. For use as an optional runtime dependency. Include one of the OS/Architecture classifiers
* like {@code osx-x86_64} to provide specific support.
 * <li>The {@code onnxruntime} artifact contains only bindings and no libraries. This means the native library will need
 * to be provided. Use this artifact as a compile dependency if you want to allow your project's users to bring
* {@code onnxruntime-cpu}, {@code onnxruntime-gpu}, or their own native library as dependencies provided at runtime.
* {@code onnxruntime-cpu} or their own native library as dependencies provided at runtime.
* </ul>
*
* @since 1.0.0
