SolidML

SolidML is a comprehensive framework for building the next generation of AI-enabled on-chain applications on the OpenGradient Network. It lets developers securely execute ML and LLM models through simple function calls directly from smart contracts, all within a single atomic transaction.

What You Can Build

SolidML enables a wide range of AI-powered on-chain applications:

  • AMMs with dynamic fee models
  • Lending pools using models for risk calculation
  • On-chain agents with decision-making capabilities
  • And much more!

Key Benefits

  • Atomic Execution: Inferences are executed atomically as part of the EVM transaction that triggers them, ensuring state consistency
  • Simple Interface: Run inferences through a simple function call without callbacks or handlers
  • Composability: Chain multiple models together with complex logic to support advanced use cases
  • Native Verification: Inference validity proofs (ZKML and TEE) are natively validated by the OpenGradient network protocol
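
Composability means the output tensor of one inference can feed directly into the next, all within the same transaction. A minimal sketch using the same interface as the examples below (the model CIDs are placeholders, and the two-stage pipeline is illustrative):

```solidity
function runChainedInference() public {
    // Stage 1: run a first (hypothetical) model -- CID is a placeholder
    ModelInput memory firstInput = ModelInput(
        new TensorLib.MultiDimensionalNumberTensor[](1),
        new TensorLib.StringTensor[](0));
    TensorLib.Number[] memory numbers = new TensorLib.Number[](1);
    numbers[0] = TensorLib.Number(15, 1); // 1.5
    firstInput.numbers[0] = TensorLib.numberTensor1D("input", numbers);

    ModelOutput memory firstOutput = OG_INFERENCE_CONTRACT.runModelInference(
        ModelInferenceRequest(ModelInferenceMode.VANILLA, "<first-model-cid>", firstInput));

    // Stage 2: pass the first model's prediction straight into a second model
    ModelInput memory secondInput = ModelInput(
        new TensorLib.MultiDimensionalNumberTensor[](1),
        new TensorLib.StringTensor[](0));
    TensorLib.Number[] memory chained = new TensorLib.Number[](1);
    chained[0] = firstOutput.numbers[0].values[0];
    secondInput.numbers[0] = TensorLib.numberTensor1D("input", chained);

    ModelOutput memory finalOutput = OG_INFERENCE_CONTRACT.runModelInference(
        ModelInferenceRequest(ModelInferenceMode.VANILLA, "<second-model-cid>", secondInput));
    // Use finalOutput in your business logic...
}
```

Because both inferences run inside one transaction, either the whole pipeline commits or none of it does.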

Installation

npm i opengradient-solidml

Core Components

ML and LLM Inference

The heart of SolidML is on-chain model inference. Any model uploaded to the Model Hub is available for use through a unique CID.

Basic ML Model Inference Example

function runZKInference() public {
    // Define model input
    ModelInput memory modelInput = ModelInput(
        new TensorLib.MultiDimensionalNumberTensor[](1),
        new TensorLib.StringTensor[](0));
    
    // Set up numerical input values.
    // Number(value, decimals) encodes the fixed-point value * 10^(-decimals)
    TensorLib.Number[] memory numbers = new TensorLib.Number[](2);
    numbers[0] = TensorLib.Number(7286679744720459, 17); // 0.07286679744720459
    numbers[1] = TensorLib.Number(4486280083656311, 16); // 0.4486280083656311
    modelInput.numbers[0] = TensorLib.numberTensor1D("input", numbers);
    
    // Running on-chain inference with ZK verification
    ModelOutput memory output = OG_INFERENCE_CONTRACT.runModelInference(
        ModelInferenceRequest(
            // Use ZKML for verifiable inference
            ModelInferenceMode.ZK,
            // Model CID from Model Hub
            "QmbbzDwqSxZSgkz1EbsNHp2mb67rYeUYHYWJ4wECE24S7A",
            // Requested model input
            modelInput
    ));

    // Extract and use model output
    if (output.is_simulation_result == false) {
        TensorLib.Number prediction = output.numbers[0].values[0];
        // Use prediction in your business logic...
    }
}

LLM Completion Example

function runLLMCompletion() public {
    string[] memory stopSequence = new string[](1);
    stopSequence[0] = "<end>";
    
    LLMCompletionResponse memory llmResult = OG_INFERENCE_CONTRACT.runLLMCompletion(
        LLMCompletionRequest(
            // Standard (non-verified) inference
            LLMInferenceMode.VANILLA,
            // Model identifier
            "meta-llama/Meta-Llama-3-8B-Instruct",
            // Prompt
            "Hello ser, who are you?\n<start>",
            // Maximum number of tokens to generate
            1000,
            // Stop sequences
            stopSequence,
            // Temperature (0 = deterministic)
            0
    ));
    
    // Use the LLM response
    string memory aiResponse = llmResult.answer;
}

LLM Chat Example

function runLLMChat() public {
    string[] memory stopSequence = new string[](1);
    stopSequence[0] = "<end>";
    
    // A single user message; the trailing fields are unused tool-call metadata
    ChatMessage[] memory messages = new ChatMessage[](1);
    messages[0] = ChatMessage("user", "who are you?", "", "", new ToolCall[](0));
    
    // No tool definitions for this example
    ToolDefinition[] memory tools = new ToolDefinition[](0);
    
    LLMChatResponse memory response = OG_INFERENCE_CONTRACT.runLLMChat(
        LLMChatRequest(
            // Standard (non-verified) inference
            LLMInferenceMode.VANILLA,
            // Model identifier
            "mistralai/Mistral-7B-Instruct-v0.3",
            messages,
            tools,
            // Tool choice strategy
            "auto",
            // Maximum number of tokens to generate
            50,
            // Stop sequences
            stopSequence,
            // Temperature (0 = deterministic)
            0
    ));
    
    // Use the chat response
    string memory aiResponse = response.message.content;
}

Historical Data & Price Feeds

SolidML provides interfaces for accessing historical price data for use in financial models.

function getPrediction(string memory _base, string memory _quote, uint32 _total_candles) public {
    // Define candle types to retrieve
    CandleType[] memory candles = new CandleType[](4);
    candles[0] = CandleType.Open;
    candles[1] = CandleType.High;
    candles[2] = CandleType.Close;
    candles[3] = CandleType.Low;
    
    // Set up historical data query
    HistoricalInputQuery memory input = HistoricalInputQuery({
        base: _base,
        quote: _quote,
        total_candles: _total_candles,
        order: CandleOrder.Ascending,
        candle_types: candles
    });
    
    // Execute inference using historical data
    ModelOutput memory output = OG_HISTORICAL_CONTRACT.runHistoricalInference(
        "QmcLzTJ6yWF5wgW2CAkhNyJ5Tj2sb7YxkeXVVxo3WNW315",
        "open_high_low_close",
        input);
    
    // Handle result
    if (output.is_simulation_result == false) {
        TensorLib.Number prediction = output.numbers[0].values[0];
        // Use prediction...
    }
}

Inference Modes

SolidML supports multiple inference modes:

  • VANILLA: Standard model inference
  • ZK: Zero-Knowledge proof verified inference for higher security guarantees
  • TEE: Trusted Execution Environment for secure processing
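
Switching modes only changes the enum value passed in the request; the call shape stays the same. A sketch under that assumption (the helper name and model CID are illustrative, not part of the SolidML API):

```solidity
// Illustrative helper: run a (placeholder) model under TEE verification.
// Swap in ModelInferenceMode.VANILLA or ModelInferenceMode.ZK to change modes.
function runTEEInference(ModelInput memory modelInput)
    internal
    returns (ModelOutput memory)
{
    return OG_INFERENCE_CONTRACT.runModelInference(
        ModelInferenceRequest(
            ModelInferenceMode.TEE,
            "<model-cid>", // placeholder CID from the Model Hub
            modelInput
    ));
}
```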

OpenGradient Network

The OpenGradient Network is an EVM chain compatible with most existing EVM frameworks and tools. In addition to standard EVM capabilities, it supports native AI inference directly from smart contracts.

Getting Started

  1. Install the package: npm i opengradient-solidml
  2. Import the required interfaces
  3. Use the Model Hub to find model CIDs for your use case
  4. Implement your smart contract logic using SolidML functions
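
Putting the steps together, a minimal contract might look like the following sketch. The import path, the provenance of OG_INFERENCE_CONTRACT, and the model CID are assumptions; check the package layout and the Model Hub for the real ones:

```solidity
// Hypothetical import path -- consult the package for the actual file name
import "opengradient-solidml/OGInference.sol";

contract PricePredictor {
    function predict() external returns (TensorLib.Number memory) {
        // Step 2/4: build the model input using the imported interfaces
        ModelInput memory input = ModelInput(
            new TensorLib.MultiDimensionalNumberTensor[](1),
            new TensorLib.StringTensor[](0));

        TensorLib.Number[] memory numbers = new TensorLib.Number[](1);
        numbers[0] = TensorLib.Number(42, 1); // 4.2
        input.numbers[0] = TensorLib.numberTensor1D("input", numbers);

        // Step 3: the CID below is a placeholder -- find real CIDs on the Model Hub
        ModelOutput memory output = OG_INFERENCE_CONTRACT.runModelInference(
            ModelInferenceRequest(ModelInferenceMode.TEE, "<model-cid>", input));

        return output.numbers[0].values[0];
    }
}
```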

Contributing

We welcome contributions! Please see our contribution guidelines for details.

License

SolidML is released under the MIT License.
