In case you run into DML issues with ONNX Runtime GenAI v0.5.1: there is also a known ONNX Runtime GenAI regression specific to DML. The fix has already been merged. You can downgrade to ONNX Runtime GenAI v0.5.0, build from source, or wait for the patch release after MS Ignite.
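For the downgrade route, one way is to pin the NuGet package in the sample's project file. This is a sketch; the package ID below assumes you are using the DirectML variant of the GenAI package (swap in the CPU or CUDA package ID if that is what your project references):

```xml
<!-- In HelloPhi3V.csproj: pin ONNX Runtime GenAI to v0.5.0 to sidestep the v0.5.1 DML regression -->
<PackageReference Include="Microsoft.ML.OnnxRuntimeGenAI.DirectML" Version="0.5.0" />
```

After editing the project file, run `dotnet restore` so the pinned version is pulled.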
Describe the bug
Running https://github.com/microsoft/onnxruntime-genai/tree/main/examples/csharp/HelloPhi3V sample
with https://huggingface.co/microsoft/Phi-3.5-vision-instruct-onnx/tree/main/gpu/gpu-int4-rtn-block-32
throws Microsoft.ML.OnnxRuntimeGenAI.OnnxRuntimeGenAIException: 'Non-zero status code returned while running SkipSimplifiedLayerNormalization node. Name:'/model/layers.0/post_attention_layernorm/SkipLayerNorm' Status Message: D:\a_work\1\s\include\onnxruntime\core/framework/op_kernel_context.h:42 onnxruntime::OpKernelContext::Input Missing Input: model.layers.0.post_attention_layernorm.weight'
on generator.ComputeLogits();
Is the above model not compatible with Microsoft.ML.OnnxRuntimeGenAI?
The CPU model seems to work fine.
https://huggingface.co/microsoft/Phi-3.5-vision-instruct-onnx/tree/main/cpu_and_mobile/cpu-int4-rtn-block-32-acc-level-4
I am looking to use the DML model for Phi-3.5 vision. I assumed this was it. https://huggingface.co/microsoft/Phi-3.5-vision-instruct-onnx/tree/main/gpu/gpu-int4-rtn-block-32
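For context, the failing code path is roughly the generation loop the HelloPhi3V sample uses. The sketch below is a simplified, hypothetical reconstruction (model path, image path, and prompt are placeholders; exact API shapes may differ between GenAI releases), showing where the exception surfaces:

```csharp
// Simplified sketch of the HelloPhi3V generation loop (not the verbatim sample).
using Microsoft.ML.OnnxRuntimeGenAI;

using var model = new Model(@"path\to\gpu-int4-rtn-block-32");   // placeholder model dir
using var processor = new MultiModalProcessor(model);
using var tokenizerStream = processor.CreateStream();

var prompt = "<|user|>\n<|image_1|>\nDescribe this image.<|end|>\n<|assistant|>\n";
using var images = Images.Load("image.png");                      // placeholder image
using var inputs = processor.ProcessImages(prompt, images);

using var generatorParams = new GeneratorParams(model);
generatorParams.SetSearchOption("max_length", 3072);
generatorParams.SetInputs(inputs);

using var generator = new Generator(model, generatorParams);
while (!generator.IsDone())
{
    // With the gpu-int4-rtn-block-32 model, the OnnxRuntimeGenAIException
    // ("Missing Input: model.layers.0.post_attention_layernorm.weight") is
    // thrown on this call; the CPU model runs the same loop without error.
    generator.ComputeLogits();
    generator.GenerateNextToken();
}
```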
To Reproduce
Steps to reproduce the behavior:
1. Build the HelloPhi3V sample from https://github.com/microsoft/onnxruntime-genai/tree/main/examples/csharp/HelloPhi3V
2. Point it at the model from https://huggingface.co/microsoft/Phi-3.5-vision-instruct-onnx/tree/main/gpu/gpu-int4-rtn-block-32
3. Run the sample; the exception is thrown on generator.ComputeLogits().