New in Composer 0.10: CometML integration, Auto evaluation batch size selection, Streaming dataset preview, and API improvements!

We are excited to announce the release of Composer 0.10 (release notes)! This release packs many useful features and capabilities. Kudos to the Composer community for your engagement, feedback, and contributions to this release. If you'd like to join the Composer community, learn more about contributing to Composer here, and message us on Slack with any questions or suggestions!

What's in this release

  • CometML integration for experiment tracking
  • Automated evaluation batch size selection to maximize throughput and avoid CUDA OOMs
  • Improvements across the Metrics, Logging, and Evaluation APIs
  • New streaming data loading repository in preview!

About Composer

Composer is a library for training PyTorch neural networks faster, at lower cost, and to higher accuracy. Composer includes:

  • 20+ methods for speeding up the training of networks for computer vision and language modeling.
  • An easy-to-use trainer that is designed for performance and integrates best practices for efficient training.
  • Functional forms of all speedup methods that let you integrate them into your existing training loop.
  • Strong and reproducible training baselines to get you started as quickly as possible.

CometML integration for experiment tracking

Composer now supports the popular Comet ML platform for experiment tracking! To log your Composer training runs to Comet, simply create a CometMLLogger instance, pass it to the Trainer object at initialization, et voila! Your training metrics will show up in the Comet user interface!

Check out the Logging and CometMLLogger docs for more details.

from composer import Trainer
from composer.loggers import CometMLLogger

cometml_logger = CometMLLogger()

trainer = Trainer(
    ...
    loggers=[cometml_logger],
)

Automated evaluation batch size selection

Composer supports eval_batch_size='auto'! Now, in conjunction with grad_accum='auto', you can run the same code on any hardware with no changes necessary. This makes it super easy to add evaluation to a training script without having to pick and choose the right batch size to avoid CUDA OOM (Out Of Memory) errors.

Previously, model evaluation required manually tuning the eval_batch_size argument in order to avoid a CUDA OOM error:

train_batch_size: 2048   # Works with any number of GPUs!
grad_accum: 'auto'
eval_batch_size: 2048    # Might break if it doesn't fit in GPU memory

Now, eval_batch_size requires no manual tuning; instead, tuning happens under the hood automagically 🪄. Since the eval_batch_size value doesn't affect results, we can abstract it away and let Composer automatically pick the value that minimizes evaluation time on the underlying hardware.

train_batch_size: 2048   # Works with any number of GPUs!
grad_accum: 'auto'
eval_batch_size: 'auto'  # Works with any number of GPUs!

Check out the TrainerHparams docstring for more information.

Improvements to the Evaluation, Logging and Metrics APIs

Evaluation API

The Evaluation API has been updated to be consistent with the Trainer API. If an eval_dataloader was provided to the Trainer at initialization, eval() can be invoked without any additional arguments:

trainer = Trainer(
    eval_dataloader=...
)
trainer.eval()

Alternatively, the eval_dataloader can be passed directly to the eval() method:

trainer = Trainer(
    ...
)
trainer.eval(
    eval_dataloader=...
)

The eval_dataloader can be a PyTorch DataLoader or, for multiple metrics, a list of Evaluator objects. For more details, see the Evaluation docs.

To further simplify the interface, the Evaluator class now stores evaluation metric names instead of metric instances.  For example:

glue_mrpc_task = Evaluator(
    label='glue_mrpc',
    dataloader=mrpc_dataloader,
    metric_names=['BinaryF1Score', 'Accuracy']
)

These metric names are matched against the metrics returned by the ComposerModel. The metric instances are now stored as deep copies in state.train_metrics and state.eval_metrics.
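
For example, after evaluation has run, the computed metric objects for the glue_mrpc_task evaluator above can be read back off the trainer state. This is a hedged sketch, not an official recipe: it assumes state.eval_metrics is keyed first by the evaluator label and then by the metric name.

# Hypothetical read-back; assumes evaluation has already run with the
# glue_mrpc_task Evaluator defined above.
f1 = trainer.state.eval_metrics['glue_mrpc']['BinaryF1Score'].compute()
print(f1)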

Logging API

We've significantly simplified our internal logging interface:

  • Removed the use of LogLevel throughout the logging system; it was a mostly unused feature, and filtering logs is now the responsibility of each logger.
  • For better compatibility with external logging interfaces such as CometML or Weights & Biases, loggers now support the following methods: log_metrics, log_hyperparameters, and log_artifacts. Previous calls to data_fit, data_epoch, etc. have been removed. A sketch of the new interface follows this list.
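
For illustration, here is a minimal custom logger destination written against the new interface. This is a hedged sketch: it assumes the LoggerDestination base class from composer.loggers, and the method signatures are simplified.

from typing import Any, Dict, Optional

from composer.loggers import LoggerDestination

class PrintLogger(LoggerDestination):
    """Toy destination that prints everything it is asked to log."""

    def log_metrics(self, metrics: Dict[str, Any], step: Optional[int] = None) -> None:
        # Called with scalar metrics, e.g. losses and torchmetrics results.
        print(f'step={step}: {metrics}')

    def log_hyperparameters(self, hyperparameters: Dict[str, Any]) -> None:
        # Called with the run's hyperparameters.
        print(f'hparams: {hyperparameters}')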

Metrics API

Previously, ComposerModel implemented the validate(batch: Any) -> Tuple[Any, Any] method, which returned an (input, target) tuple, and the Trainer handled updating the metrics. In v0.10, we return control of the metrics update to the user.

Now, models instead implement def eval_forward(batch: Any), which returns the evaluation outputs, and def update_metric(batch, outputs, metric), which updates the metric.

An example implementation for classification can be found in our ComposerClassifier base class:

def update_metric(self, batch: Any, outputs: Any, metric: Metric) -> None:
    # Unpack the targets from the batch and update the metric in place.
    _, targets = batch
    metric.update(outputs, targets)

def eval_forward(self, batch: Any, outputs: Optional[Any] = None) -> Any:
    # Reuse outputs already computed during training when available.
    return outputs if outputs is not None else self.forward(batch)
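
To adopt the new API in your own model, the end-to-end pattern looks roughly like this. This is a hedged sketch, not Composer's reference implementation: the MyClassifier class and its (inputs, targets) batch format are assumptions, and metric registration is omitted for brevity.

from typing import Any, Optional

import torch
import torch.nn.functional as F
from torchmetrics import Metric

from composer.models import ComposerModel

class MyClassifier(ComposerModel):
    def __init__(self, module: torch.nn.Module):
        super().__init__()
        self.module = module

    def forward(self, batch: Any) -> torch.Tensor:
        inputs, _ = batch  # assumes (inputs, targets) batches
        return self.module(inputs)

    def loss(self, outputs: Any, batch: Any) -> torch.Tensor:
        _, targets = batch
        return F.cross_entropy(outputs, targets)

    def eval_forward(self, batch: Any, outputs: Optional[Any] = None) -> Any:
        # Reuse training outputs if the trainer already computed them.
        return outputs if outputs is not None else self(batch)

    def update_metric(self, batch: Any, outputs: Any, metric: Metric) -> None:
        _, targets = batch
        metric.update(outputs, targets)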

Streaming dataset repository - in preview

We're in the process of splitting streaming datasets out of Composer and into their own repository!

Streaming dataset is a high-performance drop-in replacement for the Torch IterableDataset, and enables you to stream your training data from cloud-based object stores into your training node or cluster, with built-in support for popular open source datasets (ADE20K, C4, COCO, ImageNet, etc.).

The new streaming dataset repository adds new features:

  • Dataset compression that reduces downloading time and cloud egress fees, supporting various compression formats (gzip, snappy, zstd, bz2, etc.)
  • Dataset hashing that ensures data integrity through cryptographic and non-cryptographic hashing algorithms (SHA2, SHA3, MD5, xxHash, etc.)

You can use the streaming Dataset class with the PyTorch native DataLoader class as follows:

import torch
from streaming import Dataset

# Stream training samples from a remote object store (path elided)
dataloader = torch.utils.data.DataLoader(dataset=Dataset(remote='s3://...'))

You can also bring your own custom dataset by subclassing streaming.Dataset; the underlying Dataset class handles key concerns such as sharding across worker processes, de-duping training samples, data compression, and data integrity. Below is one such example:

from typing import Any

import torch
from streaming.base import Dataset

# Extending `streaming.Dataset` with custom get functionality
class CustomDataset(Dataset):
    def __init__(self, local, remote):
        super().__init__(local, remote)

    def __getitem__(self, idx: int) -> Any:
        obj = super().__getitem__(idx)
        return obj['x'], obj['y']

# Local caching directory
local = '/tmp/cache'

# Remote location to stream from
remote = 's3://mybucket/myfolder'

dataloader = torch.utils.data.DataLoader(dataset=CustomDataset(local=local, remote=remote))

While still in preview, you can easily try out the new streaming repository by setting the flag version: 2 under the train_dataset and eval_dataset sections of your Composer YAML configuration, as sketched below. Check out the Streaming repo, and stay tuned for the documentation site, which is coming soon!
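
Concretely, the opt-in might look like this in YAML (a sketch; the surrounding dataset fields are elided):

train_dataset:
  version: 2   # opt in to the new streaming repository (preview)
  ...
eval_dataset:
  version: 2
  ...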

Learn more!

Thanks for reading! If you'd like to learn more about Composer and be part of the community, you're welcome to download Composer and try it out on your training tasks. As you do, come be a part of our community by engaging with us on Twitter, joining our Slack channel, or just giving us a star on GitHub.

🤙🏽 Team Composer
