New in Composer 0.9: Export for inference APIs, ALiBi for efficient BERT training, TPU beta support, and more!
We are excited to announce the release of Composer 0.9 (release notes)! We've packed a lot of useful features and capabilities into this release - kudos to the Composer community for your engagement, feedback, and contributions.
What's in this release
- Export for inference APIs, supporting ONNX and TorchScript
- ALiBi (Attention with Linear Biases) support for BERT training, improving training speed and accuracy
- An entry point for pre-training and fine-tuning on GLUE (General Language Understanding Evaluation) tasks
- TPU (Tensor Processing Unit) support, available in beta
- Apple M1 support, available in beta
- Composer contrib repository, enabling exploration and experimentation with new efficiency algorithms
Composer is a library for training PyTorch neural networks faster, at lower cost, and to higher accuracy. Composer includes:
- 20+ methods for speeding up the training of networks for computer vision and language modeling.
- An easy-to-use trainer that is designed for performance and integrates best practices for efficient training.
- Functional forms of all speedup methods that allow you to integrate them into your existing training loop.
- Strong and reproducible training baselines to get you started as quickly as possible.
Export for inference
Train with Composer and deploy anywhere: we have added a dedicated export API as well as an export training callback that let you export Composer-trained models for inference, supporting popular formats such as TorchScript and ONNX.
Here’s an example of the dedicated export API storing a model in TorchScript format:
And here’s an example of using the training callback to export for inference at the end of training, this time in ONNX format:
Once a model is trained, it is often exported for deployment. These newly added export APIs let you store the exported model either locally or in an object store, and also let you export from any checkpoint saved during training. Once exported, the model can be used with your preferred deployment tooling or platform.
The export for inference APIs make it simple to get your model out of Composer for deployment. Please check out our notebook demonstrating the usage of these new APIs.
ALiBi support for BERT training
You can now use ALiBi (Attention with Linear Biases; Press et al., 2021) when training BERT models with Composer, delivering faster training and higher accuracy by leveraging shorter sequence lengths.
ALiBi improves the quality of BERT pre-training, especially when pre-training uses shorter sequence lengths than the downstream (fine-tuning) task. This allows models with ALiBi to reach higher downstream accuracy with less pre-training time.
Applying ALiBi to BERT models is simple when using Composer. It can be included as an algorithm when using the Composer Trainer:
Alternatively, you can also apply ALiBi directly using the Composer Functional API:
Composer’s ALiBi implementation provides out-of-the-box support for any HuggingFace BERT, RoBERTa, or GPT-2 model. In addition, it offers a convenient way to change the maximum sequence length the model can handle, which comes in handy when evaluating or fine-tuning on longer sequences.
Check out our documentation for more tips on getting started.
Entry point for GLUE tasks pre-training and fine-tuning
You can now easily pre-train and fine-tune NLP models across all GLUE (General Language Understanding Evaluation) tasks through one simple entry point!
The entry point handles model saving and loading, spawns GLUE tasks in parallel across all available GPUs, and delivers highly efficient evaluation of model performance.
To launch the entry point, call it from the command line with a configuration file containing hyperparameters and other settings (example). The default example can be launched as follows:
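A sketch of launching the entry point from the Composer repository; the script and config paths below follow the 0.9 examples layout and may differ in your checkout:

```shell
# Clone the Composer repository to get the GLUE entry point and example config.
git clone https://github.com/mosaicml/composer.git
cd composer

# Launch pre-training and fine-tuning across all GLUE tasks.
python examples/glue/run_glue_trainer.py -f examples/glue/glue_example.yaml
```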
To get started with Composer’s GLUE entry point, check out the docs.
TPU support (in beta)
You can now use Composer to train your models on TPUs! Support is available in beta and is currently limited to single-core TPU training. Try it out, explore optimizations, and share your feedback and feature requests with us so we can make it better for you and for the community.
TPU benchmarks demonstrated top line performance on the latest MLPerf submissions, achieving up to 1.7x speedup on TPUv4 compared to last year. Making TPU devices available in Composer opens up opportunities for even more efficient ML and new algorithmic optimizations.
Using TPUs within Composer is as simple as specifying the 'tpu' device:
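A sketch, assuming the torch_xla package is installed and that `model` and `train_dataloader` are placeholders for your own ComposerModel and dataloader:

```python
from composer import Trainer

# Single-core TPU training (beta); requires the torch_xla package.
trainer = Trainer(
    model=model,                        # your ComposerModel (placeholder)
    train_dataloader=train_dataloader,  # your dataloader (placeholder)
    max_duration='1ep',
    device='tpu',
)
trainer.fit()
```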
To get started training on TPUs, check out Composer’s TPU colab notebook.
Apple M1 support (beta)
Leverage Apple M-series chips to train your models with Composer by providing the device='mps' argument:
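A sketch, where `model` and `train_dataloader` are placeholders for your own ComposerModel and dataloader:

```python
from composer import Trainer

# Train on an Apple M-series GPU via the PyTorch MPS backend
# (requires torch >= 1.12 on macOS 12.3+).
trainer = Trainer(
    model=model,                        # your ComposerModel (placeholder)
    train_dataloader=train_dataloader,  # your dataloader (placeholder)
    max_duration='1ep',
    device='mps',
)
trainer.fit()
```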
We use the latest PyTorch MPS backend to execute training. This requires torch ≥1.12 and macOS 12.3+. For more information on training with Apple M-series chips, see the PyTorch 1.12 blog.
Composer contrib repository
Got a new method idea, or a published paper whose methods you want to make easily accessible? We’ve created the mcontrib repository (https://github.com/mosaicml/mcontrib), with a lightweight process to contribute new algorithms. We’re happy to work directly with you to benchmark these methods and eventually “promote” them to Composer for use by end customers.
To contribute a new algorithm, simply make a pull request that creates a folder mcontrib/algorithms/your_algo_name, and adds a few files:
- __init__.py that imports your algorithm class
- metadata.json with some algorithm metadata
- *.py with your code!
- [Optional] a README file
For more details on how to write speed-up methods, see our notebook on custom speed-up methods.
Thanks for reading! If you'd like to learn more about Composer and be part of the community, you're welcome to download Composer and try it out for your training tasks. As you do, come be part of our community by engaging with us on Twitter, joining our Slack channel, or just giving us a star on GitHub.