Compute, Time and Energy Characterization of Encoder-Decoder Networks with Automatic Mixed Precision Training

08/18/2020
by Siddharth Samsi, et al.

Deep neural networks have shown great success in many diverse fields. Training these networks can take significant amounts of time, compute, and energy, and as datasets grow larger and models become more complex, exploring the space of model architectures becomes prohibitive. In this paper we examine the compute, energy, and time costs of training a UNet-based deep neural network for the problem of predicting short-term weather forecasts (known as precipitation nowcasting). By leveraging a combination of data-distributed and mixed-precision training, we explore the design space for this problem. We also show that larger models with better performance come at a potentially incremental cost if appropriate optimizations are used, and that a significant improvement in training time can be achieved with mixed-precision training without sacrificing model performance. Additionally, we find that a 1549x larger network comes at a relatively smaller 63.22x increase in cost when compared to a UNet with 4 encoding layers.
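The automatic mixed precision named in the title is, in PyTorch, typically the torch.cuda.amp autocast/GradScaler pattern (native AMP landed in PyTorch 1.6, contemporaneous with this paper). The sketch below illustrates that pattern in isolation; the two-layer convolutional stand-in model, the synthetic data, and the batch size and learning rate are illustrative assumptions, not the authors' nowcasting UNet or training configuration.

    # Minimal sketch of automatic mixed precision (AMP) training in PyTorch.
    # Model, data, and hyperparameters are placeholders, not the paper's setup.
    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"
    use_amp = device == "cuda"  # AMP here targets CUDA hardware

    # Toy stand-in for a UNet-style encoder-decoder (hypothetical).
    model = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, kernel_size=3, padding=1),
    ).to(device)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # GradScaler scales the loss so FP16 gradients do not underflow.
    scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

    for step in range(10):  # synthetic stand-in for a data loader
        x = torch.randn(8, 1, 64, 64, device=device)  # input frames
        y = torch.randn(8, 1, 64, 64, device=device)  # target frames

        optimizer.zero_grad(set_to_none=True)
        # autocast runs eligible ops in FP16, keeps sensitive ops in FP32.
        with torch.cuda.amp.autocast(enabled=use_amp):
            loss = loss_fn(model(x), y)

        scaler.scale(loss).backward()  # backward pass on the scaled loss
        scaler.step(optimizer)         # unscales gradients, then steps
        scaler.update()                # adapts the loss scale for next step

For the data-distributed half of the combination, the same loop composes unchanged with torch.nn.parallel.DistributedDataParallel, which is one standard way to realize the setup the abstract describes.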
