📥 Model Download | 📄 Paper Link | 📄 Arxiv Paper Link
Explore the boundaries of visual-text compression.
- [2025/10/20]🚀🚀🚀 We release DeepSeek-OCR, a model to investigate the role of vision encoders from an LLM-centric viewpoint.
Our environment is CUDA 11.8 + torch 2.6.0.
- Clone this repository and navigate to the DeepSeek-OCR folder
```shell
git clone https://github.com/deepseek-ai/DeepSeek-OCR.git
```
- Conda
```shell
conda create -n deepseek-ocr python=3.12.9 -y
conda activate deepseek-ocr
```
- Packages
- Download the vllm-0.8.5 wheel
```shell
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu118
pip install vllm-0.8.5+cu118-cp38-abi3-manylinux1_x86_64.whl
pip install -r requirements.txt
pip install flash-attn==2.7.3 --no-build-isolation
```
Note: if you want to run the vLLM and Transformers code in the same environment, you can safely ignore an installation error like `vllm 0.8.5+cu118 requires transformers>=4.51.1`.
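Before moving on, a quick sanity check can confirm the environment matches the versions installed above. This snippet is not part of the repository's scripts, just a minimal sketch:

```python
# Minimal environment check (not from the DeepSeek-OCR repo, just a convenience).
import torch
import flash_attn

print("torch:", torch.__version__)            # expect 2.6.0+cu118
print("CUDA runtime:", torch.version.cuda)    # expect 11.8
print("CUDA available:", torch.cuda.is_available())
print("flash-attn:", flash_attn.__version__)  # expect 2.7.3
```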
- vLLM:
Note: change the INPUT_PATH/OUTPUT_PATH and other settings in `DeepSeek-OCR-master/DeepSeek-OCR-vllm/config.py`.
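For orientation, here is a minimal sketch of the kind of settings such a config exposes. Only INPUT_PATH and OUTPUT_PATH are named above; every other field here is an assumption, so check the shipped config.py for the real layout:

```python
# Hypothetical illustration of DeepSeek-OCR-vllm/config.py settings.
# Only INPUT_PATH and OUTPUT_PATH are mentioned in this README;
# MODEL_PATH and PROMPT are assumed placeholders.
INPUT_PATH = "/data/docs/in"                  # images/PDFs to OCR
OUTPUT_PATH = "/data/docs/out"                # where results are written
MODEL_PATH = "deepseek-ai/DeepSeek-OCR"       # assumed: model checkpoint location
PROMPT = "<image>\n<|grounding|>Convert the document to markdown. "  # assumed
```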
```shell
cd DeepSeek-OCR-master/DeepSeek-OCR-vllm
```
- image: streaming output
```shell
python run_dpsk_ocr_image.py
```
- pdf: concurrent inference, ~2500 tokens/s (on an A100-40G)
```shell
python run_dpsk_ocr_pdf.py
```
- batch eval for benchmarks
```shell
python run_dpsk_ocr_eval_batch.py
```
- Transformers
```python
from transformers import AutoModel, AutoTokenizer
import torch
import os
os.environ["CUDA_VISIBLE_DEVICES"] = '0'
model_name = 'deepseek-ai/DeepSeek-OCR'
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, _attn_implementation='flash_attention_2', trust_remote_code=True, use_safetensors=True)
model = model.eval().cuda().to(torch.bfloat16)
# prompt = "<image>\nFree OCR. "
prompt = "<image>\n<|grounding|>Convert the document to markdown. "
image_file = 'your_image.jpg'
output_path = 'your/output/dir'
res = model.infer(tokenizer, prompt=prompt, image_file=image_file, output_path=output_path,
                  base_size=1024, image_size=640, crop_mode=True, save_results=True, test_compress=True)
```
or you can
```shell
cd DeepSeek-OCR-master/DeepSeek-OCR-hf
python run_dpsk_ocr.py
```
The current open-source model supports the following modes:
- Native resolution:
  - Tiny: 512×512 (64 vision tokens) ✅
  - Small: 640×640 (100 vision tokens) ✅
  - Base: 1024×1024 (256 vision tokens) ✅
  - Large: 1280×1280 (400 vision tokens) ✅
- Dynamic resolution:
  - Gundam: n×640×640 + 1×1024×1024 ✅
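In the Transformers API above, the mode is selected through the `base_size`, `image_size`, and `crop_mode` arguments of `model.infer`. The call shown earlier (base_size=1024, image_size=640, crop_mode=True) presumably corresponds to the Gundam-style dynamic mode; the rest of the mapping below is an assumption inferred from the resolution table, not something this README spells out:

```python
# Assumed mapping from mode name to model.infer() arguments.
# Only the Gundam-style call (base_size=1024, image_size=640, crop_mode=True)
# appears verbatim above; the other rows follow the resolution list and are assumptions.
MODES = {
    "Tiny":   dict(base_size=512,  image_size=512,  crop_mode=False),
    "Small":  dict(base_size=640,  image_size=640,  crop_mode=False),
    "Base":   dict(base_size=1024, image_size=1024, crop_mode=False),
    "Large":  dict(base_size=1280, image_size=1280, crop_mode=False),
    "Gundam": dict(base_size=1024, image_size=640,  crop_mode=True),
}

# Hypothetical usage with the same infer() call as above:
# res = model.infer(tokenizer, prompt=prompt, image_file=image_file,
#                   output_path=output_path, save_results=True, **MODES["Small"])
```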
Prompt examples:
```python
# document: <image>\n<|grounding|>Convert the document to markdown.
# other image: <image>\n<|grounding|>OCR this image.
# without layouts: <image>\nFree OCR.
# figures in document: <image>\nParse the figure.
# general: <image>\nDescribe this image in detail.
# rec: <image>\nLocate <|ref|>xxxx<|/ref|> in the image.
# '先天下之忧而忧'
```
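As a concrete (hypothetical) example, any of these prompts can be dropped into the same `model.infer` call from the Transformers snippet above; the target phrase, image path, and output directory here are placeholders:

```python
# Hypothetical: reuse the model and tokenizer loaded in the Transformers snippet above
# with the "rec" (locate) prompt. Phrase and paths are placeholders.
prompt = "<image>\nLocate <|ref|>先天下之忧而忧<|/ref|> in the image. "
res = model.infer(tokenizer, prompt=prompt, image_file='your_image.jpg',
                  output_path='your/output/dir', base_size=1024, image_size=640,
                  crop_mode=True, save_results=True)
```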
We would like to thank Vary, GOT-OCR2.0, MinerU, PaddleOCR, OneChart, and Slow Perception for their valuable models and ideas.
We also appreciate the benchmarks Fox and OmniDocBench.
```bibtex
@article{wei2024deepseek-ocr,
  title={DeepSeek-OCR: Contexts Optical Compression},
  author={Wei, Haoran and Sun, Yaofeng and Li, Yukun},
  journal={arXiv preprint arXiv:2510.18234},
  year={2025}
}
```



