SolidML is a comprehensive framework for building the next generation of AI-enabled on-chain applications on the OpenGradient Network. It allows developers to securely execute ML and LLM models through simple function calls directly from smart contracts - all executed as part of an atomic transaction.
SolidML enables a wide range of AI-powered on-chain applications:
- AMMs with dynamic fee models
- Lending pools using models for risk calculation
- On-chain agents with decision-making capabilities
- And much more!
- Atomic Execution: Inferences run atomically as part of the EVM transaction that triggers them, ensuring state consistency
- Simple Interface: Run inferences through a simple function call without callbacks or handlers
- Composability: Chain multiple models together with complex logic to support advanced use cases
- Native Verification: Inference validity proofs (ZKML and TEE) are natively validated by the OpenGradient network protocol
```shell
npm i opengradient-solidml
```

The heart of SolidML is on-chain model inference. Any model uploaded to the Model Hub is available for use through its unique CID.
```solidity
function runZKInference() public {
    // Define the model input: one number tensor, no string tensors
    ModelInput memory modelInput = ModelInput(
        new TensorLib.MultiDimensionalNumberTensor[](1),
        new TensorLib.StringTensor[](0));

    // Set up numerical input values
    TensorLib.Number[] memory numbers = new TensorLib.Number[](2);
    numbers[0] = TensorLib.Number(7286679744720459, 17); // 0.07286679744720459
    numbers[1] = TensorLib.Number(4486280083656311, 16); // 0.4486280083656311
    modelInput.numbers[0] = TensorLib.numberTensor1D("input", numbers);

    // Run on-chain inference with ZK verification
    ModelOutput memory output = OG_INFERENCE_CONTRACT.runModelInference(
        ModelInferenceRequest(
            // Use ZKML for verifiable inference
            ModelInferenceMode.ZK,
            // Model CID from the Model Hub
            "QmbbzDwqSxZSgkz1EbsNHp2mb67rYeUYHYWJ4wECE24S7A",
            // Requested model input
            modelInput
        ));

    // Extract and use the model output
    if (!output.is_simulation_result) {
        TensorLib.Number memory prediction = output.numbers[0].values[0];
        // Use prediction in your business logic...
    }
}
```

```solidity
function runLLMCompletion() public {
    // Stop generation when the model emits this sequence
    string[] memory stopSequence = new string[](1);
    stopSequence[0] = "<end>";

    LLMCompletionResponse memory llmResult = OG_INFERENCE_CONTRACT.runLLMCompletion(
        LLMCompletionRequest(
            LLMInferenceMode.VANILLA,
            "meta-llama/Meta-Llama-3-8B-Instruct",
            "Hello ser, who are you?\n<start>",
            1000,
            stopSequence,
            0
        ));

    // Use the LLM response
    string memory aiResponse = llmResult.answer;
}
```

```solidity
function runLLMChat() public {
    // Stop generation when the model emits this sequence
    string[] memory stopSequence = new string[](1);
    stopSequence[0] = "<end>";

    // Build the chat history: a single user message with no tool calls
    ChatMessage[] memory messages = new ChatMessage[](1);
    messages[0] = ChatMessage("user", "who are you?", "", "", new ToolCall[](0));

    // No tools are exposed to the model in this example
    ToolDefinition[] memory tools = new ToolDefinition[](0);

    LLMChatResponse memory response = OG_INFERENCE_CONTRACT.runLLMChat(
        LLMChatRequest(
            LLMInferenceMode.VANILLA,
            "mistralai/Mistral-7B-Instruct-v0.3",
            messages,
            tools,
            "auto",
            50,
            stopSequence,
            0
        ));

    // Use the chat response
    string memory aiResponse = response.message.content;
}
```

SolidML provides interfaces for accessing historical price data for use in financial models.
```solidity
function getPrediction(string memory _base, string memory _quote, uint32 _total_candles) public {
    // Define the candle types to retrieve
    CandleType[] memory candles = new CandleType[](4);
    candles[0] = CandleType.Open;
    candles[1] = CandleType.High;
    candles[2] = CandleType.Close;
    candles[3] = CandleType.Low;

    // Set up the historical data query
    HistoricalInputQuery memory input = HistoricalInputQuery({
        base: _base,
        quote: _quote,
        total_candles: _total_candles,
        order: CandleOrder.Ascending,
        candle_types: candles
    });

    // Execute inference using the historical data
    ModelOutput memory output = OG_HISTORICAL_CONTRACT.runHistoricalInference(
        "QmcLzTJ6yWF5wgW2CAkhNyJ5Tj2sb7YxkeXVVxo3WNW315",
        "open_high_low_close",
        input);

    // Handle the result
    if (!output.is_simulation_result) {
        TensorLib.Number memory prediction = output.numbers[0].values[0];
        // Use prediction...
    }
}
```

SolidML supports multiple inference modes:
- VANILLA: Standard model inference
- ZK: Zero-Knowledge proof verified inference for higher security guarantees
- TEE: Trusted Execution Environment for secure processing
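The mode is chosen per request via the `ModelInferenceMode` (or `LLMInferenceMode`) enum. As a sketch, and assuming the enum exposes members matching the list above, switching the earlier ZK example to TEE-verified inference only changes the mode argument:

```solidity
// Sketch: same request shape as the ZK example above, but asking the
// network to run the model inside a Trusted Execution Environment.
// `modelInput` is assumed to be built exactly as in that example.
ModelOutput memory output = OG_INFERENCE_CONTRACT.runModelInference(
    ModelInferenceRequest(
        ModelInferenceMode.TEE, // TEE instead of ZK or VANILLA
        "QmbbzDwqSxZSgkz1EbsNHp2mb67rYeUYHYWJ4wECE24S7A",
        modelInput
    ));
```

The trade-off is the usual one: ZK proofs give the strongest verifiability, TEE attestations are typically cheaper, and VANILLA offers no verification.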
The OpenGradient Network is an EVM chain compatible with most existing EVM frameworks and tools. In addition to standard EVM capabilities, it supports native AI inference directly from smart contracts.
- Install the package: `npm i opengradient-solidml`
- Import the required interfaces
- Use the Model Hub to find model CIDs for your use case
- Implement your smart contract logic using SolidML functions
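Putting the steps together, a minimal contract might look like the following sketch. The import path, the `PricePredictor` name, and the placeholder CID are illustrative assumptions; check the `opengradient-solidml` package for the actual interface locations.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// NOTE: this import path is an assumption; consult the package
// for the real location of the inference interfaces.
import "opengradient-solidml/src/OGInference.sol";

contract PricePredictor {
    function predict() external returns (TensorLib.Number memory) {
        // Build a one-element numeric input tensor
        TensorLib.Number[] memory numbers = new TensorLib.Number[](1);
        numbers[0] = TensorLib.Number(5, 1); // 0.5

        ModelInput memory input = ModelInput(
            new TensorLib.MultiDimensionalNumberTensor[](1),
            new TensorLib.StringTensor[](0));
        input.numbers[0] = TensorLib.numberTensor1D("input", numbers);

        // Replace the placeholder CID with a model from the Model Hub
        ModelOutput memory output = OG_INFERENCE_CONTRACT.runModelInference(
            ModelInferenceRequest(
                ModelInferenceMode.VANILLA,
                "<model-cid>", // hypothetical placeholder
                input));

        return output.numbers[0].values[0];
    }
}
```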
We welcome contributions! Please see our contribution guidelines for details.
SolidML is released under the MIT License.