This chapter highlights the features of Apache MXNet and talks about the latest version of this deep learning software framework.
Apache MXNet is a powerful open-source deep learning framework that helps developers build, train, and deploy deep learning models. Over the past few years, the impact of deep learning has been widespread, from healthcare to transportation to manufacturing, and in nearly every aspect of daily life. Companies now turn to deep learning to solve hard problems such as face recognition, object detection, optical character recognition (OCR), speech recognition, and machine translation.
That’s the reason Apache MXNet is supported by:
Large technology companies such as Intel, Baidu, Microsoft, and Wolfram Research
Public cloud providers including Amazon Web Services (AWS) and Microsoft Azure
Leading research institutions such as Carnegie Mellon University, MIT, the University of Washington, and the Hong Kong University of Science & Technology.
With various deep learning platforms such as Torch7, Caffe, Theano, TensorFlow, Keras, and Microsoft Cognitive Toolkit already in existence, you might wonder why Apache MXNet. Let us check out some of the reasons behind it:
Apache MXNet solves one of the biggest issues of existing deep learning platforms: the need to learn yet another system for a different programming flavor.
With the help of Apache MXNet, developers can exploit the full capabilities of GPUs as well as cloud computing.
Apache MXNet can accelerate any numerical computation and places special emphasis on speeding up the development and deployment of large-scale deep neural networks (DNNs).
It provides users with the capabilities of both imperative and symbolic programming.
If you are looking for a flexible deep learning library to quickly develop cutting-edge deep learning research, or a robust platform to push production workloads, your search ends at Apache MXNet, thanks to the following features:
Whether it is multi-GPU or multi-host training with near-linear scaling efficiency, Apache MXNet allows developers to make the most of their hardware. MXNet also supports integration with Horovod, an open-source distributed deep learning framework created at Uber.
For this integration, Horovod defines several common distributed APIs that MXNet can use, such as hvd.init(), hvd.size(), hvd.rank(), hvd.local_rank(), hvd.DistributedOptimizer(), and hvd.broadcast_parameters().
In this regard, MXNet offers us the following capabilities:
Device Placement − With the help of MXNet, we can easily specify where each data structure should live, i.e., on which CPU or GPU device.
Automatic Differentiation − Apache MXNet automates differentiation, i.e., derivative calculations.
Multi-GPU training − MXNet allows us to scale training efficiency almost linearly with the number of available GPUs.
Optimized Predefined Layers − We can code our own layers in MXNet, and it also supplies predefined layers that are optimized for speed.
Apache MXNet provides its users a hybrid front-end. With the help of the Gluon Python API, it can bridge the gap between its imperative and symbolic capabilities by calling its hybridize functionality.
Linear operations such as tens or hundreds of matrix multiplications are the computational bottleneck for deep neural nets. To solve this bottleneck MXNet provides −
Optimized numerical computation for GPUs
Optimized numerical computation for distributed ecosystems
Automation of common workflows, with the help of which standard neural networks can be expressed briefly.
MXNet has deep integration into high-level languages like Python and R. It also provides support for other programming languages such as Scala, Julia, Clojure, Java, C++, and Perl.
We do not need to learn any new programming language; instead, MXNet, combined with its hybridization feature, allows an exceptionally smooth transition from Python to deployment in the programming language of our choice.
The Apache Software Foundation (ASF) released the stable version 1.6.0 of Apache MXNet on 21st February 2020 under Apache License 2.0. This is the last MXNet release to support Python 2, as the MXNet community voted to drop Python 2 support in subsequent releases. Let us check out some of the new features this release brings for its users.
Due to its flexibility and generality, NumPy has been widely used by machine learning practitioners, scientists, and students. But as hardware accelerators like graphics processing units (GPUs) have become increasingly integrated into machine learning (ML) toolkits, NumPy users have had to switch to new frameworks with different syntax in order to take advantage of the speed of GPUs.
With MXNet 1.6.0, Apache MXNet is moving toward a NumPy-compatible programming experience. The new interface provides usability and expressiveness equivalent to NumPy for practitioners familiar with its syntax. MXNet 1.6.0 also enables this NumPy-style code to utilize hardware accelerators like GPUs to speed up large-scale computations.
Apache TVM, an open-source end-to-end deep learning compiler stack for hardware backends such as CPUs, GPUs, and specialized accelerators, aims to fill the gap between productivity-focused deep learning frameworks and performance-oriented hardware backends. With the latest release, MXNet 1.6.0, users can leverage Apache TVM (incubating) to implement high-performance operator kernels in the Python programming language. Two main advantages of this new feature are the following −
Simplifies the former C++-based development process.
Enables sharing the same implementation across multiple hardware backends such as CPUs, GPUs, etc.
Apart from the above-listed features, MXNet 1.6.0 also provides some improvements over existing features. The improvements are as follows −
As we know, the performance of element-wise operations is memory-bandwidth bound, which is why chaining such operations may reduce overall performance. Apache MXNet 1.6.0 performs element-wise operation fusion, generating just-in-time fused operations when possible. Such element-wise operation fusion also reduces storage needs and improves overall performance.
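Pointwise fusion is enabled by default in MXNet 1.6.0 on supported GPUs; assuming the standard `MXNET_USE_FUSION` environment variable, it can be toggled before launching a training script (`train.py` below is a placeholder name):

```shell
# Disable (or re-enable) just-in-time element-wise operator fusion
export MXNET_USE_FUSION=0   # set to 1 (the default) to enable fusion
python train.py
```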
MXNet 1.6.0 eliminates redundant expressions and simplifies common expressions. This enhancement also improves memory usage and total execution time.
MXNet 1.6.0 also provides various optimizations to existing features and operators, which are as follows:
Automatic Mixed Precision
Gluon Fit API
Large Tensor Support
Higher-order gradient support
Operator performance profiler
Improvements to Gluon APIs
Improvements to Symbol APIs
More than 100 bug fixes