
Hi, your tutorial uses the v2.0 model. Can v3 and v4 be used as well, and do the preprocessing and post-processing differ? #3

Closed
ccl-private opened this issue Apr 30, 2024 · 8 comments
Labels
good first issue Good for newcomers

Comments

@ccl-private

As per the title.

@jingsongliujing
Owner

v3 and v4 require changes to the pre- and post-processing. I'll add compatibility for them when I find time.

@ccl-private
Author

Great, starred and watching~

@ccl-private
Author

I got V3 running successfully. The v3 rec model's input height is 48, whereas v2's is 32.
First, change the argument in ./onnx/util.py to parser.add_argument("--rec_image_shape", type=str, default="3, 48, 320").
Then comment out this line in ./onnx/onnx_paddleocr.py: params.rec_image_shape = "3, 32, 320". (See the sketch below.)
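A minimal sketch of those two changes, assuming the argparse setup in ./onnx/util.py looks roughly like this (the surrounding code is illustrative, not the repo's exact source):

```python
import argparse

# ./onnx/util.py: v3/v4 rec models expect an input height of 48 instead of v2's 32
parser = argparse.ArgumentParser()
parser.add_argument("--rec_image_shape", type=str, default="3, 48, 320")

# ./onnx/onnx_paddleocr.py: the hard-coded v2 override is commented out so the
# default above takes effect
# params.rec_image_shape = "3, 32, 320"

args = parser.parse_args([])
print(args.rec_image_shape)  # -> "3, 48, 320"
```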

@jingsongliujing jingsongliujing pinned this issue May 6, 2024
@jingsongliujing
Owner

OK, thanks.

@jingsongliujing jingsongliujing added the good first issue Good for newcomers label May 6, 2024
@jingsongliujing
Owner

That said, we have also tried v3: its Chinese recognition accuracy is lower than the v2 server version's. v4 is more accurate, but its inference is slower. By comparison, the v2 server version strikes a good balance between speed and accuracy.

@jingsongliujing
Owner

I've updated the project to PP-OCRv4 inference; it's about five times faster than inferring directly with the PaddlePaddle framework.
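For reference, here is a minimal sketch of running a rec model exported to ONNX with onnxruntime. The model path is hypothetical, and the preprocessing (resize to the fixed height, scale to [-1, 1], pad the width to 320) follows standard PaddleOCR rec preprocessing; verify both against this repo's own code before relying on it.

```python
import cv2
import numpy as np
import onnxruntime as ort

def preprocess_rec(img, rec_image_shape=(3, 48, 320)):  # use (3, 32, 320) for v2
    c, h, w = rec_image_shape
    scale = h / img.shape[0]
    resized_w = min(int(img.shape[1] * scale), w)
    resized = cv2.resize(img, (resized_w, h)).astype("float32")
    resized = (resized / 255.0 - 0.5) / 0.5           # normalize to [-1, 1]
    resized = resized.transpose(2, 0, 1)              # HWC -> CHW
    padded = np.zeros((c, h, w), dtype="float32")     # pad width to 320
    padded[:, :, :resized_w] = resized
    return padded[None]                               # add batch dimension

sess = ort.InferenceSession("rec_v4.onnx")            # hypothetical model path
img = cv2.imread("word_crop.jpg")                     # a cropped text-line image
inp = preprocess_rec(img)
outputs = sess.run(None, {sess.get_inputs()[0].name: inp})
print(outputs[0].shape)                               # CTC logits, e.g. (1, T, num_classes)
```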

@ccl-private
Author

Impressive!

@awaqq520 commented Jul 1, 2024

I've updated the project to PP-OCRv4 inference; it's about five times faster than inferring directly with the PaddlePaddle framework.

Can this be deployed in a C++ environment? I've set up the onnxruntime and OpenCV static libraries, and I'd like to build a standalone executable that doesn't depend on the environment (prediction would only require the ONNX models and the exe).
