New in Composer 0.10: CometML integration, Auto evaluation batch size selection, Streaming dataset preview, and API improvements!
We are excited to announce the release of Composer 0.10 (release notes)! This release packs a lot of useful features and capabilities - kudos to the Composer community for your engagement, feedback, and contributions to this release. For those who want to join the Composer community: learn more about contributing to Composer here, and message us on Slack if you have any questions or suggestions!
What's in this release
- CometML integration for experiment tracking
- Automated evaluation batch size selection to maximize throughput and avoid CUDA OOMs
- API improvements across Metrics, Logging and Evaluation APIs
- New streaming data loading repository in preview!
Composer is a library for training PyTorch neural networks faster, at lower cost, and to higher accuracy. Composer includes:
- 20+ methods for speeding up training networks for computer vision and language modeling.
- An easy-to-use trainer that is designed for performance and integrates best practices for efficient training.
- Functional forms of all speedup methods that allow you to integrate them into your existing training loop.
- Strong and reproducible training baselines to get you started as quickly as possible.
CometML integration for experiment tracking
Composer now supports the popular Comet ML platform for experiment tracking! To log Composer training runs to Comet, simply create a CometMLLogger instance and pass it to the Trainer at initialization, et voilà! Your training metrics will show up in the Comet user interface.
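A minimal sketch of the setup (the model, dataloader, and max_duration values here are placeholders, and a Comet API key is assumed to be configured in your environment):

```python
# Hypothetical sketch: the Trainer construction is wrapped in a function so
# the composer imports happen only when it is called. The model and
# dataloader arguments are placeholders you would build yourself.
def build_trainer(model, train_dataloader):
    from composer import Trainer
    from composer.loggers import CometMLLogger

    # CometMLLogger reads your Comet API key from the environment;
    # attach it to the Trainer via the `loggers` argument.
    return Trainer(
        model=model,
        train_dataloader=train_dataloader,
        max_duration="1ep",
        loggers=[CometMLLogger()],
    )
```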
Automated evaluation batch size selection
Composer supports eval_batch_size='auto'! Now, in conjunction with grad_accum='auto', you can run the same code on any hardware with no changes necessary. This makes it super easy to add evaluation to a training script without having to pick and choose the right batch size to avoid CUDA OOM (Out Of Memory) errors.
Previously, model evaluation required manually tuning the eval_batch_size argument in order to avoid a CUDA OOM error.
Now, eval_batch_size requires no manual tuning; instead, tuning happens under the hood automagically 🪄. Since the eval_batch_size value doesn't affect results, Composer can abstract it away and automatically pick the value that minimizes evaluation time on the underlying hardware.
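A sketch of what this looks like. Note that the exact home of eval_batch_size (a direct Trainer argument vs. a TrainerHparams field) is assumed here; the TrainerHparams docstring is the authoritative reference:

```python
# Hypothetical sketch: both 'auto' values let Composer pick sizes that fit
# in GPU memory on whatever hardware the script runs on. The model and
# dataloaders are placeholders.
def build_auto_trainer(model, train_dataloader, eval_dataloader):
    from composer import Trainer

    return Trainer(
        model=model,
        train_dataloader=train_dataloader,
        eval_dataloader=eval_dataloader,
        max_duration="1ep",
        grad_accum="auto",       # auto-tunes gradient accumulation for training
        eval_batch_size="auto",  # auto-tunes the evaluation batch size
    )
```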
Check out the TrainerHparams docstring for more information.
Improvements to the Evaluation, Logging and Metrics APIs
The Evaluation API has been updated to be consistent with the Trainer API. If the eval_dataloader was provided to the Trainer during initialization, eval can be invoked without needing to provide anything additional:
Alternatively, the eval_dataloader can be passed directly to the eval() method:
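Both invocation styles can be sketched as follows (the trainer and dataloader arguments are placeholders built elsewhere):

```python
# Hypothetical sketch of the two ways to invoke evaluation.
def run_both_eval_styles(trainer, new_eval_dataloader):
    # Style 1: eval_dataloader was already given to the Trainer at
    # initialization, so eval() needs no arguments.
    trainer.eval()

    # Style 2: pass a dataloader directly to the eval() method.
    trainer.eval(eval_dataloader=new_eval_dataloader)
```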
The eval_dataloader can be a PyTorch DataLoader or, for multiple metrics, a list of Evaluator objects. For more details, see the Evaluation docs.
To further simplify the interface, the Evaluator class now stores evaluation metric names instead of metric instances. For example:
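A sketch of constructing an Evaluator with metric names rather than metric instances. The keyword names here (label, dataloader, metric_names) are assumptions; check the Evaluator docs for the exact signature:

```python
# Hypothetical sketch: the Evaluator is given metric *names* (strings),
# which Composer matches against the metrics returned by the ComposerModel.
def build_evaluator(dataloader):
    from composer.core import Evaluator

    return Evaluator(
        label="my_eval",            # placeholder label for this eval set
        dataloader=dataloader,
        metric_names=["Accuracy"],  # names, not metric instances
    )
```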
These metric names are matched against the metrics returned by the ComposerModel. The metric instances themselves are now stored as deep copies in state.train_metrics and state.eval_metrics.
We've significantly simplified our internal logging interface:
- Removed the use of LogLevel throughout the logging interface; it was a mostly unused feature, and filtering logs is now the responsibility of each logger.
- For better compatibility with external logging interfaces such as CometML or Weights & Biases, loggers now support the following methods: log_metrics, log_hyperparameters, and log_artifacts. Previous calls to data_fit, data_epoch, etc. have been removed.
Previously, ComposerModel implemented a validate(batch: Any) -> Tuple[Any, Any] method that returned an (input, target) tuple, and the Trainer handled updating the metrics. In v0.10, we return control of the metrics update to the user.
Now, models instead implement eval_forward(batch: Any), which returns the outputs of evaluation, and update_metric(batch, outputs, metric), which updates the metric.
An example implementation for classification can be found in our ComposerClassifier base class.
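To illustrate the pattern (not the actual ComposerClassifier source), here is a framework-free sketch: eval_forward produces the model's outputs for a batch, and update_metric feeds those outputs to a metric object. TinyAccuracy is a hypothetical stand-in for a torchmetrics metric:

```python
class TinyAccuracy:
    """Stand-in for a torchmetrics-style metric with update()/compute()."""
    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, preds, targets):
        self.correct += sum(p == t for p, t in zip(preds, targets))
        self.total += len(targets)

    def compute(self):
        return self.correct / self.total


class SketchClassifier:
    """Sketch of the new two-method evaluation contract."""
    def __init__(self, predict_fn):
        self.predict_fn = predict_fn  # placeholder for the real forward pass

    def eval_forward(self, batch):
        # Returns the outputs of evaluation for this batch.
        inputs, _ = batch
        return [self.predict_fn(x) for x in inputs]

    def update_metric(self, batch, outputs, metric):
        # The user, not the Trainer, decides how outputs update the metric.
        _, targets = batch
        metric.update(outputs, targets)


model = SketchClassifier(lambda x: x % 2)
metric = TinyAccuracy()
batch = ([1, 2, 3, 4], [1, 0, 1, 1])
outputs = model.eval_forward(batch)
model.update_metric(batch, outputs, metric)
print(metric.compute())  # 0.75
```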
Streaming dataset repository - in preview
We're in the process of splitting streaming datasets out of Composer and into their own repository!
Streaming dataset is a high-performance drop-in replacement for the PyTorch IterableDataset, and enables you to stream your training data from cloud-based object stores to your training node or cluster, with built-in support for popular open source datasets (ADE20K, C4, COCO, ImageNet, etc.).
The new streaming dataset repository adds new features:
- Dataset compression that reduces downloading time and cloud egress fees, supporting various compression formats (gzip, snappy, zstd, bz2, etc.)
- Dataset hashing that ensures data integrity through cryptographic and non-cryptographic hashing algorithms (SHA2, SHA3, MD5, xxHash, etc.)
You can use the streaming Dataset class with the PyTorch native DataLoader class as follows:
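A hypothetical usage sketch, assuming the preview constructor accepts remote, local, and shuffle arguments (the paths are placeholders; consult the Streaming repo for the exact signature):

```python
# Hypothetical sketch: stream shards from a remote object store to a local
# cache directory and feed them through a standard PyTorch DataLoader.
def build_streaming_loader(remote, local, batch_size=32):
    from torch.utils.data import DataLoader
    import streaming

    dataset = streaming.Dataset(remote=remote, local=local, shuffle=True)
    return DataLoader(dataset, batch_size=batch_size)
```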
You can also bring your own custom dataset by wrapping streaming.Dataset; the underlying Dataset class handles key concerns such as sharding across worker processes, de-duplicating training samples, data compression, and data integrity. Below is one such example:
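A hypothetical sketch of such a wrapper. The transform hook, constructor arguments, and the choice to override __iter__ (consistent with an IterableDataset replacement) are all assumptions; the actual base-class API lives in the Streaming repo:

```python
# Hypothetical sketch: a custom dataset that wraps streaming.Dataset and
# applies a user-supplied transform, while the base class handles sharding,
# de-duplication, compression, and integrity checks.
def make_custom_dataset_class():
    import streaming

    class MyDataset(streaming.Dataset):
        def __init__(self, remote, local, transform=None):
            super().__init__(remote=remote, local=local, shuffle=True)
            self.transform = transform  # placeholder per-sample transform

        def __iter__(self):
            for sample in super().__iter__():
                yield self.transform(sample) if self.transform else sample

    return MyDataset
```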
While it is still in preview, you can easily try out the new streaming repository by setting the flag version: 2 under the train_dataset and eval_dataset sections of your Composer YAML configuration. Check out the Streaming repo, and stay tuned for the documentation site, which is coming soon!
Thanks for reading! If you'd like to learn more about Composer and be part of the community, you're welcome to download Composer and try it out on your training tasks. As you do, come be a part of our community by engaging with us on Twitter, joining our Slack channel, or giving us a star on GitHub.