// All three inputs share the shape [1, seq_len]; input_ids, attention_mask and
// token_type_ids hold the int64_t tokenizer output (e.g. std::vector<int64_t>).
std::array<int64_t, 2> input_shape_{1, static_cast<int64_t>(input_ids.size())};

std::vector<Ort::Value> input_tensors;
input_tensors.push_back(Ort::Value::CreateTensor<int64_t>(
    memoryInfo, input_ids.data(), input_ids.size(),
    input_shape_.data(), input_shape_.size()));
input_tensors.push_back(Ort::Value::CreateTensor<int64_t>(
    memoryInfo, attention_mask.data(), attention_mask.size(),
    input_shape_.data(), input_shape_.size()));
input_tensors.push_back(Ort::Value::CreateTensor<int64_t>(
    memoryInfo, token_type_ids.data(), token_type_ids.size(),
    input_shape_.data(), input_shape_.size()));
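For the session.Run call itself (the part asked about below), something along the following lines should work once these three tensors are built. The input names match the model's declared inputs; the output name last_hidden_state is an assumption based on the usual Hugging Face ONNX export of this model, so verify it with GetOutputNameAllocated first.

// Sketch: running the model with the three tensors built above.
// Assumes `session` is an Ort::Session for the exported model and that
// `memoryInfo` came from Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault).
const char* input_names[] = {"input_ids", "attention_mask", "token_type_ids"};
const char* output_names[] = {"last_hidden_state"};  // assumed output name

std::vector<Ort::Value> output_tensors = session.Run(
    Ort::RunOptions{nullptr},
    input_names, input_tensors.data(), input_tensors.size(),
    output_names, 1);

// The first output should be a float tensor of shape [1, seq_len, hidden_size].
std::vector<int64_t> out_shape =
    output_tensors.front().GetTensorTypeAndShapeInfo().GetShape();
const float* hidden_states = output_tensors.front().GetTensorData<float>();

Note that the token buffers must hold int64_t values (as in the CreateTensor<int64_t> calls above), since BERT-style ONNX exports typically declare their inputs as int64; passing 32-bit data is a common cause of runtime type-mismatch errors.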
Answer selected by Shyuna
I am trying to deploy the text2vec model on Windows using the ONNX Runtime C++ API. The ONNX model I am using was downloaded from https://hf-mirror.com/shibing624/text2vec-base-chinese/tree/main/onnx
I used the following code to get the number of inputs and outputs of the model; it shows three inputs: input_ids, attention_mask, and token_type_ids. How should I set the parameters of session.Run, especially the input tensors?
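(For reference, such introspection code typically looks roughly like the sketch below; this is not the exact code from the post, and the model path is a placeholder.)

#include <onnxruntime_cxx_api.h>
#include <iostream>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "text2vec");
    Ort::SessionOptions session_options;
    // Placeholder path; on Windows the path argument is a wide string.
    Ort::Session session(env, L"text2vec-base-chinese/model.onnx", session_options);

    Ort::AllocatorWithDefaultOptions allocator;
    // GetInputNameAllocated / GetOutputNameAllocated are the current API
    // (older onnxruntime releases used GetInputName / GetOutputName).
    for (size_t i = 0; i < session.GetInputCount(); ++i) {
        auto name = session.GetInputNameAllocated(i, allocator);
        std::cout << "input " << i << ": " << name.get() << "\n";
    }
    for (size_t i = 0; i < session.GetOutputCount(); ++i) {
        auto name = session.GetOutputNameAllocated(i, allocator);
        std::cout << "output " << i << ": " << name.get() << "\n";
    }
    return 0;
}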
Passing a sentence through the text2vec tokenizer produces the input_ids, attention_mask, and token_type_ids.
I tried the following code to set up the input tensors; it compiled, but the call failed at runtime with an error. I would like to know how to solve this problem. Thank you very much!