PaddleOCR-VL 1.5 Docker Deployment and Parameter Tuning

This tutorial uses NVIDIA GPUs as an example; for deployment instructions for other GPUs, refer to the official documentation.

Download Docker-related configuration files

The following links are valid as of February 25, 2026. If they are no longer accessible, consult the GitHub repository for the latest links.
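A download sketch for the two files the rest of this tutorial assumes (compose.yaml and .env). The `<RAW_URL>` below is a placeholder, not a real address; substitute the current raw-file links from the PaddleOCR GitHub repository before running:

```shell
# Create a working directory and fetch the Compose files.
# <RAW_URL> is a placeholder -- replace it with the current
# links from the PaddleOCR GitHub repository.
mkdir -p paddleocr-vl && cd paddleocr-vl
wget <RAW_URL>/compose.yaml
wget <RAW_URL>/.env
```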

Adjust Docker configuration

Common adjustments include specifying ports and selecting GPUs.

Adjust the port in compose.yaml:

  paddleocr-vl-api:  
    ...  
    ports:  
# Change port 8080 to expose port 8111  
-     - 8080:8080  
+     - 8111:8080  
    ...  
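Compose port mappings use the `HOST:CONTAINER` form, so `8111:8080` publishes the container's port 8080 on host port 8111. Once the stack is running, you can confirm the published address (the service name `paddleocr-vl-api` comes from the snippet above):

```shell
# Print the host address published for the container's port 8080;
# after the change above this should show host port 8111, e.g. 0.0.0.0:8111
docker compose port paddleocr-vl-api 8080
```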

Adjust the GPU setting in compose.yaml:

  paddleocr-vl-api:  
    ...  
    deploy:  
      resources:  
        reservations:  
          devices:  
            - driver: nvidia  
# Change GPU ID from "0" to "1"  
-             device_ids: ["0"]  
+             device_ids: ["1"]  
              capabilities: [gpu]  
    ...  
  paddleocr-vlm-server:  
    ...  
    deploy:  
      resources:  
        reservations:  
          devices:  
            - driver: nvidia  
# Also update GPU ID here  
-             device_ids: ["0"]  
+             device_ids: ["1"]  
              capabilities: [gpu]  
    ...  
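The `device_ids` values are the GPU indices reported by the NVIDIA driver. Before editing, you can list the available indices on the host, and after startup verify that only the selected GPU is visible inside a container:

```shell
# List the host's GPUs with their indices; the index is what
# goes into device_ids in compose.yaml.
nvidia-smi -L

# After the stack is up, confirm which GPU the container sees.
docker compose exec paddleocr-vl-api nvidia-smi -L
```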

For additional configuration, such as the .env setup, vLLM options, model path specification, and batch size, refer to the official documentation. This tutorial keeps all settings at their defaults.

Start the containers

After completing the adjustments, bring up the stack:

# Must be executed from the directory containing both compose.yaml and .env  
docker compose up  
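A few related Compose commands are often useful at this point (a sketch; the service name is taken from the snippets above):

```shell
# Run the stack in the background instead of attaching to the logs
docker compose up -d

# Follow the logs of a single service
docker compose logs -f paddleocr-vl-api

# Stop and remove the containers when done
docker compose down
```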

References

Official documentation: Usage Tutorial - PaddleOCR Documentation