DeepPavlov is designed for development of production-ready chatbots and complex conversational systems, as well as research in the area of NLP and dialog systems.
Please leave us your feedback on how we can improve the DeepPavlov framework.
This repository contains pre-built DeepPavlov images. The images allow you to run DeepPavlov models and communicate with them via a REST-like HTTP API (see the riseapi DeepPavlov docs for more details). Images from this repository are built to run on GPU and require NVIDIA Container Toolkit to be installed. The Dockerfile can be found here.
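If you are unsure whether the toolkit is set up correctly, a quick sanity check is to run nvidia-smi inside a container (the CUDA image tag below is only an example, not something this repository requires):

```shell
# Verify that Docker can see the GPU (assumes NVIDIA Container Toolkit
# is installed; the CUDA image tag here is illustrative).
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

If the GPU table is printed, the DeepPavlov images should be able to use the GPU as well.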
Run the following command to start a DeepPavlov model:
```shell
nvidia-docker run -e CONFIG=deeppavlov_config \
    -p host_port:5000 \
    -v dp_components_volume:/root/.deeppavlov \
    nvcr.io/partners/deeppavlov:latest
```
host_port - host port on which the DeepPavlov model API will be served.
dp_components_volume - directory on the host to mount as the DeepPavlov downloaded components dir.
Most DeepPavlov models use downloadable components (pretrained model pickles, embeddings, etc.) which are fetched from DeepPavlov servers. To avoid downloading components (some of them are quite heavy) each time you run the Docker image for a specific DeepPavlov config, you can mount a volume. If you do, DeepPavlov will store the components downloaded during the first launch of any DeepPavlov config in this volume, so on subsequent launches DeepPavlov won't re-download them. We recommend using one dp_components_volume for all models, because some of them share components. DeepPavlov automatically manages the downloaded components for all configs in this volume.
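For instance, two models started one after another can reuse the same named volume, so components they have in common are downloaded only once (a sketch; the volume name, ports, and the second config name are illustrative):

```shell
# Create a named volume once; both runs below mount it at /root/.deeppavlov,
# so components downloaded by the first model are reused by the second.
docker volume create dp_components

nvidia-docker run -e CONFIG=ner_ontonotes \
    -p 5555:5000 \
    -v dp_components:/root/.deeppavlov \
    nvcr.io/partners/deeppavlov:latest

# A second config mounted on the same volume skips already-downloaded components.
nvidia-docker run -e CONFIG=squad_bert \
    -p 5556:5000 \
    -v dp_components:/root/.deeppavlov \
    nvcr.io/partners/deeppavlov:latest
```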
After the model initializes, open http://127.0.0.1:host_port in your browser to get a Swagger page with the model API and endpoint reference.
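Once the model is up, you can also query it over the REST API from the command line. A sketch (the /model path and the "x" payload key follow the usual riseapi layout, but check the Swagger page for your specific config):

```shell
# POST a batch of texts to the model endpoint; the JSON key must match
# the model's input parameter name shown in Swagger (commonly "x").
curl -X POST http://127.0.0.1:host_port/model \
    -H "Content-Type: application/json" \
    -d '{"x": ["DeepPavlov is a conversational AI framework"]}'
```

The response is a JSON array with one prediction per input text in the batch.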
For example, to run the ner_ontonotes model:

```shell
nvidia-docker run -e CONFIG=ner_ontonotes \
    -p 5555:5000 \
    -v ~/my_dp_components:/root/.deeppavlov \
    nvcr.io/partners/deeppavlov:latest
```
In this example:

open the http://127.0.0.1:5555 URL in your browser to get Swagger with model API info;

downloadable components are located in ~/my_dp_components (the contents of this dir are managed by DeepPavlov).
By pulling and using this container, you accept the terms and conditions of the Apache 2.0 license.