PyTorch Lightning is a powerful yet lightweight PyTorch wrapper, designed to make high-performance AI research simple, allowing you to focus on the science, not the engineering. PyTorch Lightning is just organized PyTorch, but it lets you train your models on CPUs, GPUs, or multiple nodes without changing your code. Lightning makes state-of-the-art training features trivial to use with the flip of a flag, such as 16-bit precision, model sharding, pruning, and many more.
Lightning ensures that when your network becomes complex your code doesn’t.
Refactoring your models to Lightning is simple: it lets you get rid of a ton of boilerplate, reduces cognitive load, and gives you the flexibility to iterate on research ideas faster with the latest deep learning best practices built in.
Lightning structures PyTorch code with these principles:
Lightning enforces the following structure on your code, which makes it reusable and shareable:
Once you do this, you can train on multiple GPUs, on CPUs, and even in 16-bit precision without changing your code!
Get started with our 2-step guide.
```bash
docker pull nvcr.io/partners/gridai/pytorch-lightning:v1.3.7

# for single GPU
docker run --rm -it nvcr.io/partners/gridai/pytorch-lightning:v1.3.7 \
  bash home/pl_examples/run_examples-args.sh --gpus 1 --max_epochs 5 --batch_size 1024

# for 4 GPUs
docker run --rm -it nvcr.io/partners/gridai/pytorch-lightning:v1.3.7 \
  bash home/pl_examples/run_examples-args.sh --gpus 4 --max_epochs 5 --batch_size 1024
```
If you have any questions, please:
Please observe the Apache 2.0 license that is listed in this repository. In addition, the Lightning framework is patent pending.