Train neural networks faster

Composer makes it easy to train models faster at the algorithmic level. Use our collection of speedup methods in your own training loop or, for the best experience, with our Composer trainer.
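To make the trainer pattern concrete, here is a minimal pure-Python sketch of the idea: speedup methods are pluggable objects handed to a trainer, which invokes them at fixed points in the loop. All names here (SpeedupMethod, HalveBatchEveryEpoch, Trainer.fit) are illustrative, not the real Composer API.

```python
# Illustrative sketch, not the real Composer API: a trainer that accepts
# "speedup methods" as pluggable objects and invokes each one at a fixed
# point (here, the start of every epoch) in the training loop.

class SpeedupMethod:
    def apply(self, state):
        raise NotImplementedError

class HalveBatchEveryEpoch(SpeedupMethod):
    # Hypothetical method: shrink the batch size each epoch.
    def apply(self, state):
        state["batch_size"] = max(1, state["batch_size"] // 2)

class Trainer:
    def __init__(self, algorithms):
        self.algorithms = algorithms

    def fit(self, epochs, batch_size):
        state = {"batch_size": batch_size, "epochs_run": 0}
        for _ in range(epochs):
            for method in self.algorithms:
                method.apply(state)  # hook: run each method once per epoch
            state["epochs_run"] += 1
        return state

state = Trainer(algorithms=[HalveBatchEveryEpoch()]).fit(epochs=2, batch_size=8)
print(state)  # {'batch_size': 2, 'epochs_run': 2}
```

The point of the design is that the training loop never hard-codes any individual speedup: methods are data, so swapping or stacking them changes one list rather than the loop body.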

The latest research at your fingertips

Reproducing science is hard. We draw from the latest research to create a collection of speedup methods that help you stay at the cutting edge. We compose these methods into recipes that make training as fast as possible.

Composer makes efficiency easy

We automatically handle the complexity of adding speedup methods to your workflow. Experiment with composing these methods by adding just a few lines of code.
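One way to picture "composing" methods is as ordinary function composition over a model configuration: each method transforms the config, and a recipe is just the methods applied in order. The sketch below uses hypothetical method names loosely inspired by common speedup techniques; it is not the real Composer API.

```python
# Illustrative composition of speedup methods (hypothetical names, not the
# real Composer API): each method maps a model config to a new config, and
# composing them means applying them in sequence.

def add_label_smoothing(config):
    # Hypothetical method: turn on label smoothing in the loss.
    return {**config, "label_smoothing": 0.1}

def use_blurpool(config):
    # Hypothetical method: swap the pooling layer for a blur-pool variant.
    return {**config, "pool": "blurpool"}

def compose(*methods):
    def apply_all(config):
        for method in methods:
            config = method(config)
        return config
    return apply_all

recipe = compose(add_label_smoothing, use_blurpool)
config = recipe({"pool": "maxpool"})
print(config)  # {'pool': 'blurpool', 'label_smoothing': 0.1}
```

Because each method is independent, adding one more speedup to the recipe is a one-line change to the `compose(...)` call.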

Batteries included

Composer comes with standard models and datasets; integrations with HuggingFace, Weights & Biases, DeepSpeed, and TIMM; industrial-strength distributed training; streaming dataloaders; and more.

Apply to your code

We provide a functional interface so you can enjoy our speedups in your existing training loop. Use Composer in your codebase to stay up to date as we continue to roll out more efficient implementations and kernels.
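The functional-interface idea is that each speedup is a standalone function performing "surgery" on a model you already have, so it drops into any training loop without adopting a new trainer. A toy sketch of the pattern, with a list of layer names standing in for a model and an illustrative function name, not the real Composer API:

```python
# Sketch of a functional speedup interface: the function rewrites an
# existing model in a single call, leaving the rest of your training
# loop untouched. Names are hypothetical, not the real Composer API.

def apply_blurpool_like(layers):
    # Replace each max-pool layer with a blur-then-pool variant.
    return ["blurpool" if layer == "maxpool" else layer for layer in layers]

model = ["conv", "maxpool", "conv", "maxpool", "fc"]
model = apply_blurpool_like(model)
print(model)  # ['conv', 'blurpool', 'conv', 'blurpool', 'fc']
```

In this style, each speedup is applied once, up front, and the existing optimizer and loop code never need to know it happened.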

Try it out today!

Try out the Composer speedup methods in our notebooks, also available on Colab, and get started in minutes.

Need help using Composer in your company?

Looking to speed up your training using our methods and system optimizations?