Train Faster & Cheaper on AWS with MosaicML Composer
Thanks to the AWS team (led by Olivier Cruchant) for breaking down the challenges of deep learning model training, and the benefits of our open-source Composer library, in their latest blog post. More efficient training is a superpower for enterprise data scientists: it lets them iterate faster, reduce cost, or build higher-quality models by training on more data in the same amount of time. Composer incorporates dozens of training optimization algorithms that speed up training and improve model quality. These algorithms are available through a Trainer API that makes it easy to compose (get it?) multiple algorithms into a single training run.
Already an AWS customer? We’ve got you covered with a tutorial. Check out a step-by-step demonstration of how you can use Composer on AWS to train ResNet-50 on the ImageNet dataset to the industry-standard 76.6% top-1 accuracy on an EC2 instance in just 27 minutes, for less than the cost of a large pizza! Once you’re caught up on the blog post from our friends at AWS, head over to our GitHub site for more info on Composer and join our community today.