To get started with MXNet, the first thing we need to do is install it on our computer. Apache MXNet works on pretty much all the available platforms, including Windows, Mac, and Linux.
Linux OS
We can install MXNet on Linux OS in the following ways −
Graphical Processing Unit (GPU)
Here, we will use various methods, namely Pip, Docker, and Source, to install MXNet when we are using GPU for processing −
By using Pip method
You can use the following command to install MXNet on your Linux OS −
pip install mxnet
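Once the package is installed, a quick way to confirm that it works is to import MXNet and run a tiny NDArray computation. The snippet below is a minimal sketch and assumes a standard Python 3 environment −
import mxnet as mx

# Print the installed MXNet version
print(mx.__version__)

# Run a tiny computation on the CPU to confirm the native library loads
a = mx.nd.array([1, 2, 3])
print((a * 2).asnumpy())   # expected output: [2. 4. 6.]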
Apache MXNet also offers MKL pip packages, which are much faster when running on Intel hardware. For example, the package name mxnet-cu101mkl means that −
- The package is built with CUDA/cuDNN
- The package is MKL-DNN enabled
- The CUDA version is 10.1
For other options, refer to the full list of MXNet pip package variants.
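As a hedged check (it relies on mxnet.runtime.Features, which exists in MXNet 1.5 and later), you can ask the installed package which capabilities it was compiled with, for example CUDA, cuDNN, and MKL-DNN −
import mxnet as mx
from mxnet.runtime import Features   # available in MXNet >= 1.5

features = Features()
print("CUDA   :", features.is_enabled("CUDA"))
print("CUDNN  :", features.is_enabled("CUDNN"))
print("MKLDNN :", features.is_enabled("MKLDNN"))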
By using Docker
You can find Docker images with MXNet on DockerHub. Let us check out the steps below to install MXNet by using Docker with GPU −
Step 1− First, we need to install Docker on our machine by following the Docker installation instructions.
Step 2− To enable the usage of GPUs from Docker containers, we next need to install nvidia-docker-plugin by following its installation instructions.
Step 3− By using the following command, you can pull the MXNet docker image −
$ sudo docker pull mxnet/python:gpu
Now in order to see if mxnet/python docker image pull was successful, we can list docker images as follows −
$ sudo docker images
For the fastest inference speeds with MXNet, it is recommended to use the latest MXNet with Intel MKL-DNN. Check the commands below −
$ sudo docker pull mxnet/python:1.3.0_cpu_mkl
$ sudo docker images
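Once a GPU image has been pulled, you can run a small Python check inside the container to confirm that MXNet sees the GPUs. This is only a sketch; it assumes the container is started with GPU access (for example via nvidia-docker) and that the image ships a recent MXNet 1.x release where mx.context.num_gpus() is available −
import mxnet as mx

# Number of GPUs MXNet can see inside the container
print("GPUs visible:", mx.context.num_gpus())

# Allocate a small array on the first GPU and compute on it
x = mx.nd.ones((2, 2), ctx=mx.gpu(0))
print((x + 1).asnumpy())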
From source
To build the MXNet shared library from source with GPU, first we need to set up the environment for CUDA and cuDNN as follows−
- Download and install the CUDA toolkit; here, CUDA 9.2 is recommended.
- Next, download cuDNN 7.1.4.
- Now unzip the file, change to the cuDNN root directory, and move the headers and libraries into the local CUDA Toolkit folder as follows −
tar xvzf cudnn-9.2-linux-x64-v7.1
sudo cp -P cuda/include/cudnn.h /usr/local/cuda/include
sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
sudo ldconfig
After setting up the environment for CUDA and cuDNN, follow the steps below to build the MXNet shared library from source −
Step 1− First, we need to install the prerequisite packages. These dependencies are required on Ubuntu version 16.04 or later.
sudo apt-get update
sudo apt-get install -y build-essential git ninja-build ccache libopenblas-dev libopencv-dev cmake
Step 2− In this step, we will download the MXNet source and configure it. First, let us clone the repository by using the following command −
git clone --recursive https://github.com/apache/incubator-mxnet.git mxnet
cd mxnet
cp config/linux_gpu.cmake config.cmake  # for build with CUDA
Step 3− By using the following commands, you can build MXNet core shared library−
rm -rf build
mkdir -p build && cd build
cmake -GNinja ..
cmake --build .
Two important points regarding the above step are as follows −
If you want to build the Debug version, then specify the build type as follows −
cmake -DCMAKE_BUILD_TYPE=Debug -GNinja ..
In order to set the number of parallel compilation jobs, specify the following −
cmake --build . --parallel N
Once you successfully build the MXNet core shared library, you will find libmxnet.so in the build folder in your MXNet project root; it is required to install the (optional) language bindings.
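For example, the Python bindings live in the python/ sub-directory of the source tree and are typically installed with pip install -e . from there. The sketch below is a hedged way to confirm that the bindings picked up the libmxnet.so you just built; mxnet.libinfo.find_lib_path() is the helper the package itself uses to locate the shared library.
import mxnet as mx
from mxnet.libinfo import find_lib_path

print(mx.__version__)
# Should list the libmxnet.so from your build (e.g. .../mxnet/build/libmxnet.so)
print(find_lib_path())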
Central Processing Unit (CPU)
Here, we will use various methods, namely Pip, Docker, and Source, to install MXNet when we are using CPU for processing −
By using Pip method
You can use the following command to install MXNet on your Linux OS −
pip install mxnet
Apache MXNet also offers MKL-DNN enabled pip packages, which are much faster when running on Intel hardware.
pip install mxnet-mkl
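The sketch below is a rough, hedged way to see the effect of the MKL-DNN build: time a few large matrix multiplications once with the plain mxnet package and once with mxnet-mkl installed. The absolute numbers depend entirely on your hardware, so treat it as an illustration rather than a benchmark.
import time
import mxnet as mx

a = mx.nd.random.uniform(shape=(2048, 2048))
b = mx.nd.random.uniform(shape=(2048, 2048))
mx.nd.dot(a, b).wait_to_read()          # warm-up

start = time.time()
for _ in range(10):
    c = mx.nd.dot(a, b)
c.wait_to_read()                        # block until the asynchronous work finishes
print("10 matrix multiplications took %.3f s" % (time.time() - start))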
By using Docker
You can find Docker images with MXNet on DockerHub. Let us check out the steps below to install MXNet by using Docker with CPU −
Step 1− First, we need to install Docker on our machine by following the Docker installation instructions.
Step 2− By using the following command, you can pull the MXNet docker image:
$ sudo docker pull mxnet/python
Now, in order to see if mxnet/python docker image pull was successful, we can list docker images as follows −
$ sudo docker images
For the fastest inference speeds with MXNet, it is recommended to use the latest MXNet with Intel MKL-DNN.
Check the commands below −
$ sudo docker pull mxnet/python:1.3.0_cpu_mkl
$ sudo docker images
From source
To build the MXNet shared library from source with CPU, follow the steps below −
Step 1− First, we need to install the prerequisite packages. These dependencies are required on Ubuntu version 16.04 or later.
sudo apt-get update
sudo apt-get install -y build-essential git ninja-build ccache libopenblas-dev libopencv-dev cmake
Step 2− In this step, we will download the MXNet source and configure it. First, let us clone the repository by using the following command −
git clone --recursive https://github.com/apache/incubator-mxnet.git mxnet
cd mxnet
cp config/linux.cmake config.cmake
Step 3− By using the following commands, you can build MXNet core shared library:
rm -rf build
mkdir -p build && cd build
cmake -GNinja ..
cmake --build .
Two important points regarding the above step are as follows −
If you want to build the Debug version, then specify the build type as follows −
cmake -DCMAKE_BUILD_TYPE=Debug -GNinja ..
In order to set the number of parallel compilation jobs, specify the following−
cmake --build . --parallel N
Once you successfully build the MXNet core shared library, you will find libmxnet.so in the build folder in your MXNet project root; it is required to install the (optional) language bindings.
macOS
We can install MXNet on macOS in the following ways −
Graphical Processing Unit (GPU)
If you plan to build MXNet on macOS with GPU, then there is no Pip or Docker method available. The only method in this case is to build it from source.
From source
To build the MXNet shared library from source with GPU, first we need to set up the environment for CUDA and cuDNN. You need to follow the NVIDIA CUDA Installation Guide and the cuDNN Installation Guide for macOS.
Please note that NVIDIA stopped supporting CUDA on macOS in 2019, and future versions of CUDA may not support macOS either.
Once you set up the environment for CUDA and cuDNN, follow the steps given below to install MXNet from source on OS X (Mac)−
Step 1− First, we need to install the prerequisite packages, as we need some dependencies on OS X.
xcode-select --install   # Install OS X Developer Tools
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"   # Install Homebrew
brew install cmake ninja ccache opencv   # Install dependencies
We can also build MXNet without OpenCV, as OpenCV is an optional dependency.
Step 2− In this step, we will download the MXNet source and configure it. First, let us clone the repository by using the following command −
git clone --recursive https://github.com/apache/incubator-mxnet.git mxnet
cd mxnet
cp config/linux.cmake config.cmake
For a GPU-enabled build, it is necessary to install the CUDA dependencies first, because when you try to build a GPU-enabled build on a machine without a GPU, the MXNet build cannot autodetect your GPU architecture. In such cases, MXNet will target all available GPU architectures.
Step 3− By using the following commands, you can build MXNet core shared library−
rm -rf build
mkdir -p build && cd build
cmake -GNinja ..
cmake --build .
Two important points regarding the above step are as follows −
If you want to build the Debug version, then specify the build type as follows −
cmake -DCMAKE_BUILD_TYPE=Debug -GNinja ..
In order to set the number of parallel compilation jobs, specify the following:
cmake --build . --parallel N
Once you successfully build the MXNet core shared library, you will find libmxnet.dylib in the build folder in your MXNet project root; it is required to install the (optional) language bindings.
Central Processing Unit (CPU)
Here, we will use various methods, namely Pip, Docker, and Source, to install MXNet when we are using CPU for processing −
By using Pip method
You can use the following command to install MXNet on macOS −
pip install mxnet
By using Docker
You can find Docker images with MXNet on DockerHub. Let us check out the steps below to install MXNet by using Docker with CPU −
Step 1− First, we need to install Docker on our machine by following the Docker installation instructions.
Step 2− By using the following command, you can pull the MXNet docker image−
$ docker pull mxnet/python
Now in order to see if mxnet/python docker image pull was successful, we can list docker images as follows−
$ docker images
For the fastest inference speeds with MXNet, it is recommended to use the latest MXNet with Intel MKL-DNN. Check the commands below−
$ docker pull mxnet/python:1.3.0_cpu_mkl
$ docker images
From source
Follow the steps given below to install MXNet from source on OS X (Mac)−
Step 1− First, we need to install the prerequisite packages, as we need some dependencies on OS X.
xcode-select --install   # Install OS X Developer Tools
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"   # Install Homebrew
brew install cmake ninja ccache opencv   # Install dependencies
We can also build MXNet without OpenCV, as OpenCV is an optional dependency.
Step 2− In this step, we will download the MXNet source and configure it. First, let us clone the repository by using the following command −
git clone --recursive https://github.com/apache/incubator-mxnet.git mxnet
cd mxnet
cp config/linux.cmake config.cmake
Step 3− By using the following commands, you can build MXNet core shared library:
rm -rf build
mkdir -p build && cd build
cmake -GNinja ..
cmake --build .
Two important points regarding the above step are as follows −
If you want to build the Debug version, then specify the build type as follows −
cmake -DCMAKE_BUILD_TYPE=Debug -GNinja ..
In order to set the number of parallel compilation jobs, specify the following−
cmake --build . --parallel N
Once you successfully build the MXNet core shared library, you will find libmxnet.dylib in the build folder in your MXNet project root; it is required to install the (optional) language bindings.
Windows OS
To install MXNet on Windows, following are the prerequisites−
Minimum System Requirements
- Windows 7, 10, Server 2012 R2, or Server 2016
- Visual Studio 2015 or 2017 (any type)
- Python 2.7 or 3.6
- pip
Recommended System Requirements
- Windows 10, Server 2012 R2, or Server 2016
- Visual Studio 2017
- At least one NVIDIA CUDA-enabled GPU
- MKL-enabled CPU: Intel® Xeon® processor, Intel® Core™ processor family, Intel Atom® processor, or Intel® Xeon Phi™ processor
- Python 2.7 or 3.6
- pip
Graphical Processing Unit (GPU)
By using Pip method−
If you plan to build MXNet on Windows with NVIDIA GPUs, there are two options for installing MXNet with CUDA support as a Python package −
Install with CUDA Support
Below are the steps with the help of which we can set up MXNet with CUDA.
Step 1− First install Microsoft Visual Studio 2017 or Microsoft Visual Studio 2015.
Step 2− Next, download and install NVIDIA CUDA. It is recommended to use CUDA versions 9.2 or 9.0 because some issues with CUDA 9.1 have been identified in the past.
Step 3− Now, download and install NVIDIA cuDNN.
Step 4− Finally, by using the following pip command, install MXNet with CUDA −
pip install mxnet-cu92
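After the install, a short hedged check confirms that the CUDA build can actually use the GPU: allocating an NDArray on mx.gpu(0) will raise an MXNetError if CUDA is not usable −
import mxnet as mx

# Allocate on the first GPU and run a small computation there
x = mx.nd.ones((3, 3), ctx=mx.gpu(0))
print((x * 2).asnumpy())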
Install with CUDA and MKL Support
Below are the steps with the help of which we can set up MXNet with CUDA and MKL.
Step 1− First install Microsoft Visual Studio 2017 or Microsoft Visual Studio 2015.
Step 2− Next, download and install Intel MKL.
Step 3− Now, download and install NVIDIA CUDA.
Step 4− Now, download and install NVIDIA cuDNN.
Step 5− Finally, by using the following pip command, install MXNet with MKL −
pip install mxnet-cu92mkl
From source
To build the MXNet core library from source with GPU, we have the following two options−
Option 1− Build with Microsoft Visual Studio 2017
In order to build and install MXNet yourself by using Microsoft Visual Studio 2017, you need the following dependencies.
Install/update Microsoft Visual Studio.
- If Microsoft Visual Studio is not already installed on your machine, first download and install it.
- It will prompt about installing Git; install it as well.
- If Microsoft Visual Studio is already installed on your machine but you want to update it, then proceed to the next step to modify your installation. There you will be given the opportunity to update Microsoft Visual Studio as well.
Follow the instructions for opening the Visual Studio Installer to modify individual components.
In the Visual Studio Installer application, update as required. After that, look for and check the VC++ 2017 version 15.4 v14.11 toolset and click Modify.
Now by using the following command, change the version of the Microsoft VS2017 to v14.11−
"C:Program Files (x86)Microsoft Visual Studio2017CommunityVCAuxiliaryBuildvcvars64.bat" -vcvars_ver=14.11
Next, you need to download and install CMake. It is recommended to use CMake v3.12.2 because it has been tested with MXNet.
Now, download and run the OpenCV package, which will unzip several files. It is up to you whether you want to place them in another directory. Here, we will use the path C:\utils (mkdir C:\utils) as our default path.
Next, we need to set the environment variable OpenCV_DIR to point to the OpenCV build directory that we have just unzipped. For this, open a command prompt and type set OpenCV_DIR=C:\utils\opencv\build.
One important point is that if you do not have the Intel MKL (Math Kernel Library) installed, then you can install it.
Another open source package you can use is OpenBLAS. For the further instructions here, we are assuming that you are using OpenBLAS.
So, download the OpenBLAS package, unzip the file, rename the folder to OpenBLAS, and put it under C:\utils.
Next, we need to set the environment variable OpenBLAS_HOME to point to the OpenBLAS directory that contains the include and lib directories. For this, open a command prompt and type set OpenBLAS_HOME=C:\utils\OpenBLAS.
Now, download and install CUDA. Note that if CUDA was already installed before Microsoft VS2017, you need to reinstall CUDA now so that you get the CUDA Toolkit components for Microsoft VS2017 integration.
Next, you need to download and install cuDNN.
Next, you need to download and install Git as well.
Once you have installed all the required dependencies, follow the steps given below to build the MXNet source code−
Step 1− Open a command prompt in Windows.
Step 2− Now, by using the following command, download the MXNet source code from GitHub:
cd C:\
git clone https://github.com/apache/incubator-mxnet.git --recursive
Step 3− Next, verify the following−
- The DCUDNN_INCLUDE and DCUDNN_LIBRARY variables point to the include folder and cudnn.lib file of your cuDNN installation location.
- C:\incubator-mxnet is the location of the source code you just cloned in the previous step.
Step 4− Next, by using the following commands, create a build directory and go into it, for example −
mkdir C:\incubator-mxnet\build
cd C:\incubator-mxnet\build
Step 5− Now, by using CMake, configure the MXNet build as follows −
cmake -G "Visual Studio 15 2017 Win64" -T cuda=9.2,host=x64 -DUSE_CUDA=1 -DUSE_CUDNN=1 -DUSE_NVRTC=1 -DUSE_OPENCV=1 -DUSE_OPENMP=1 -DUSE_BLAS=open -DUSE_LAPACK=1 -DUSE_DIST_KVSTORE=0 -DCUDA_ARCH_LIST=Common -DCUDA_TOOLSET=9.2 -DCUDNN_INCLUDE=C:cudainclude -DCUDNN_LIBRARY=C:cudalibx64cudnn.lib "C:incubator-mxnet"
Step 6− Once CMake has completed successfully, use the following command to compile the MXNet source code −
msbuild mxnet.sln /p:Configuration=Release;Platform=x64 /maxcpucount
Option 2: Build with Microsoft Visual Studio 2015
In order to build and install MXNet yourself by using Microsoft Visual Studio 2015, you need the following dependencies.
Install/update Microsoft Visual Studio 2015. Update 3 of Microsoft Visual Studio 2015 is the minimum requirement to build MXNet from source. You can use the Tools -> Extensions and Updates… | Product Updates menu to upgrade it.
Next, you need to download and install CMake. It is recommended to use CMake v3.12.2 because it has been tested with MXNet.
Now, download and run the OpenCV package, which will unzip several files. It is up to you whether you want to place them in another directory.
Next, we need to set the environment variable OpenCV_DIR to point to the OpenCV build directory that we have just unzipped. For this, open a command prompt and type set OpenCV_DIR=C:\opencv\build\x64\vc14\bin.
One important point is that if you do not have the Intel MKL (Math Kernel Library) installed, then you can install it.
Another open source package you can use is OpenBLAS. For the further instructions here, we are assuming that you are using OpenBLAS.
So, download the OpenBLAS package, unzip the file, rename the folder to OpenBLAS, and put it under C:\utils.
Next, we need to set the environment variable OpenBLAS_HOME to point to the OpenBLAS directory that contains the include and lib directories. You can find the directory in C:\Program Files (x86)\OpenBLAS.
Note that if CUDA was already installed before Microsoft VS2015, you need to reinstall CUDA now so that you get the CUDA Toolkit components for Microsoft VS2015 integration.
Next, you need to download and install cuDNN.
Now, we need to set the environment variable CUDACXX to point to the CUDA compiler (for example, C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.1\bin\nvcc.exe).
Similarly, we also need to set the environment variable CUDNN_ROOT to point to the cuDNN directory that contains the include, lib, and bin directories (for example, C:\Downloads\cudnn-9.1-windows7-x64-v7\cuda).
Once you have installed all the required dependencies, follow the steps given below to build the MXNet source code−
Step 1− First, download the MXNet source code from GitHub−
cd C:\
git clone https://github.com/apache/incubator-mxnet.git --recursive
Step 2− Next, use CMake to create a Visual Studio solution in ./build.
Step 3− Now, in Visual Studio, we need to open the solution file (.sln) and compile it. These commands will produce a library called mxnet.dll in the ./build/Release/ or ./build/Debug folder.
Step 4− Once CMake has completed successfully, use the following command to compile the MXNet source code −
msbuild mxnet.sln /p:Configuration=Release;Platform=x64 /maxcpucount
Central Processing Unit (CPU)
Here, we will use various methods, namely Pip, Docker, and Source, to install MXNet when we are using CPU for processing −
By using Pip method
If you plan to build MXNet on Windows with CPUs, there are two options for installing MXNet using a Python package−
Install with CPUs
Use the following command to install MXNet for CPUs with Python −
pip install mxnet
Install with Intel CPUs
As discussed above, MXNet has experimental support for Intel MKL as well as MKL-DNN. Use the following command to install MXNet with Intel CPU with Python−
pip install mxnet-mkl
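To see whether the MKL-DNN backend is actually being used at runtime, one hedged option is to set the MKL-DNN verbose flag before any operator runs; the library then prints the primitives it executes to stderr. The environment variable name comes from MKL-DNN itself, so treat this as an assumption if your MXNet build bundles a different version of the library.
import os
os.environ["MKLDNN_VERBOSE"] = "1"      # must be set before the first operator runs

import mxnet as mx

# A small convolution; verbose lines appear on stderr if MKL-DNN handles it
data = mx.nd.random.uniform(shape=(1, 3, 224, 224))
weight = mx.nd.random.uniform(shape=(8, 3, 3, 3))
out = mx.nd.Convolution(data=data, weight=weight, no_bias=True,
                        kernel=(3, 3), num_filter=8)
out.wait_to_read()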
By using Docker
You can find Docker images with MXNet on DockerHub. Let us check out the steps below to install MXNet by using Docker with CPU −
Step 1− First, we need to install Docker on our machine by following the Docker installation instructions.
Step 2− By using the following command, you can pull the MXNet docker image−
$ docker pull mxnet/python
Now in order to see if mxnet/python docker image pull was successful, we can list docker images as follows−
$ docker images
For the fastest inference speeds with MXNet, it is recommended to use the latest MXNet with Intel MKL-DNN.
Check the commands below−
$ docker pull mxnet/python:1.3.0_cpu_mkl
$ docker images
Installing MXNet On Cloud and Devices
This section highlights how to install Apache MXNet on Cloud and on devices. Let us begin by learning about installing MXNet on cloud.
Installing MXNet On Cloud
You can also get Apache MXNet on several cloud providers with Graphical Processing Unit (GPU) support. Two other kinds of support you can find are as follows −
- GPU/CPU-hybrid support for use cases like scalable inference.
- Fractional GPU support with AWS Elastic Inference.
Following are the cloud providers that provide GPU support with different virtual machines for Apache MXNet −
The Alibaba Console
You can create an NVIDIA GPU Cloud Virtual Machine (VM) with the Alibaba Console and use Apache MXNet.
Amazon Web Services
Amazon Web Services also provides GPU support and offers the following services for Apache MXNet −
Amazon SageMaker
It manages training and deployment of Apache MXNet models.
AWS Deep Learning AMI
It provides preinstalled Conda environments for both Python 2 and Python 3 with Apache MXNet, CUDA, cuDNN, MKL-DNN, and AWS Elastic Inference.
Dynamic Training on AWS
It provides training for an experimental manual EC2 setup as well as for a semi-automated CloudFormation setup.
You can also use the NVIDIA VM with Amazon Web Services.
Google Cloud Platform
Google also provides an NVIDIA GPU cloud image to work with Apache MXNet.
Microsoft Azure
Microsoft Azure Marketplace also provides an NVIDIA GPU cloud image to work with Apache MXNet.
Oracle Cloud
Oracle also provides an NVIDIA GPU cloud image to work with Apache MXNet.
Central Processing Unit (CPU)
Apache MXNet works on every cloud provider's CPU-only instances. There are various installation methods, such as −
- Python pip install instructions.
- Docker instructions.
- Preinstalled options like Amazon Web Services, which provides the AWS Deep Learning AMI (with preinstalled Conda environments for both Python 2 and Python 3, MXNet, and MKL-DNN).
Installing MXNet on Devices
Let us learn how to install MXNet on devices.
Raspberry Pi
You can also run Apache MXNet on Raspberry Pi 3B devices, as MXNet also supports Raspbian, an ARM-based OS. In order to run MXNet smoothly on the Raspberry Pi 3, it is recommended to have a device with more than 1 GB of RAM and an SD card with at least 4 GB of free space.
Following are the ways with the help of which you can build MXNet for the Raspberry Pi and install the Python bindings for the library as well−
Quick installation
The pre-built Python wheel can be used on a Raspberry Pi 3B with Raspbian Stretch for quick installation. One important point about this method is that we still need to install several dependencies to get Apache MXNet to work.
Docker installation
You can follow the Docker installation instructions to install Docker on your machine. For this purpose, you can also install and use the Community Edition (CE).
Native Build (from source)
In order to install MXNet from source, we need to follow these two steps −
Step 1
Build the shared library from the Apache MXNet C++ source code
To build the shared library on Raspbian Wheezy or later, we need the following dependencies:
- Git − It is required to pull code from GitHub.
- Libblas − It is required for linear algebraic operations.
- Libopencv − It is required for computer vision related operations. However, it is optional if you would like to save your RAM and disk space.
- C++ Compiler − It is required to compile and build the MXNet source code. The following compilers, which support C++ 11, can be used −
  - G++ (4.8 or later)
  - Clang (3.9 - 6)
Use the following commands to install the above-mentioned dependencies −
sudo apt-get update
sudo apt-get -y install git cmake ninja-build build-essential g++-4.9 c++-4.9 liblapack* libblas* libopencv* libopenblas* python3-dev python-dev virtualenv
Next, we need to clone the MXNet source code repository. For this use the following git command in your home directory−
git clone https://github.com/apache/incubator-mxnet.git --recursive
cd incubator-mxnet
Now, with the help of the following commands, build the shared library:
mkdir -p build && cd build
cmake -DUSE_SSE=OFF -DUSE_CUDA=OFF -DUSE_OPENCV=ON -DUSE_OPENMP=ON -DUSE_MKL_IF_AVAILABLE=OFF -DUSE_SIGNAL_HANDLER=ON -DCMAKE_BUILD_TYPE=Release -GNinja ..
ninja -j$(nproc)
Once you execute the above commands, the build process starts; it will take a couple of hours to finish. You will get a file named libmxnet.so in the build directory.
Step 2
Install the supported language-specific packages for Apache MXNet
In this step, we will install the MXNet Python bindings. To do so, we need to run the following commands in the MXNet directory −
cd python
pip install --upgrade pip
pip install -e .
Alternatively, with the following command, you can also create a whl package installable with pip−
ci/docker/runtime_functions.sh build_wheel python/ $(realpath build)
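Whichever route you choose, a small hedged smoke test confirms that the bindings work on the Pi; keep the tensors tiny because of the limited RAM.
import mxnet as mx

print(mx.__version__)
a = mx.nd.ones((10, 10))
print(a.sum().asscalar())   # expected output: 100.0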
NVIDIA Jetson Devices
You can also run Apache MXNet on NVIDIA Jetson devices, such as the TX2 or Nano, as MXNet also supports the Ubuntu AArch64-based OS. In order to run MXNet smoothly on NVIDIA Jetson devices, it is necessary to have CUDA installed on your Jetson device.
Following are the ways with the help of which you can build MXNet for NVIDIA Jetson devices:
- By using a Jetson MXNet pip wheel for Python development
- From source
But before building MXNet in either of the above-mentioned ways, you need to install the following dependencies on your Jetson devices −
Python Dependencies
In order to use the Python API, we need the following dependencies−
sudo apt update
sudo apt -y install build-essential git graphviz libatlas-base-dev libopencv-dev python-pip
sudo pip install --upgrade pip setuptools
sudo pip install graphviz==0.8.4 jupyter numpy==1.15.2
Clone the MXNet source code repository
By using the following git command in your home directory, clone the MXNet source code repository−
git clone --recursive https://github.com/apache/incubator-mxnet.git mxnet
Setup environment variables
Add the following in your .profile file in your home directory−
export PATH=/usr/local/cuda/bin:$PATH
export MXNET_HOME=$HOME/mxnet/
export PYTHONPATH=$MXNET_HOME/python:$PYTHONPATH
Now, apply the change immediately with the following command−
source .profile
Configure CUDA
Before configuring CUDA, you need to check which version of CUDA is running by using nvcc −
nvcc --version
If more than one CUDA version is installed on your device and you want to switch CUDA versions, use the following commands and replace the symbolic link with the version you want −
sudo rm /usr/local/cuda
sudo ln -s /usr/local/cuda-10.0 /usr/local/cuda
The above commands switch to CUDA 10.0, which is preinstalled on the NVIDIA Jetson Nano.
Once you are done with the above-mentioned prerequisites, you can install MXNet on NVIDIA Jetson devices. So, let us understand the ways in which you can install MXNet −
By using a Jetson MXNet pip wheel for Python development − If you want to use a prepared Python wheel, then download one of the following to your Jetson and run it −
- MXNet 1.4.0 (for Python 3)
- MXNet 1.4.0 (for Python 2)
Native Build (from source)
In order to install MXNet from source, we need to follow these two steps −
Step 1
Build the shared library from the Apache MXNet C++ source code
To build the shared library from the Apache MXNet C++ source code, you can either use Docker method or do it manually−
Docker method
In this method, you first need to install Docker and be able to run it without sudo (which is also explained in the previous steps). Once done, run the following command to execute the cross-compilation via Docker −
$MXNET_HOME/ci/build.py -p jetson
Manual
In this method, you need to copy the cross-compilation Makefile (with the command below) in order to install MXNet with CUDA bindings and leverage the Graphical Processing Units (GPUs) on NVIDIA Jetson devices:
cp $MXNET_HOME/make/crosscompile.jetson.mk config.mk
After copying the Makefile, you need to edit the config.mk file to make some additional changes for the NVIDIA Jetson device.
For this, update the following settings−
- Update the CUDA path: USE_CUDA_PATH = /usr/local/cuda
- Add -gencode arch=compute_62,code=sm_62 to the CUDA_ARCH setting.
- Update the NVCC settings: NVCCFLAGS := -m64
- Turn on OpenCV: USE_OPENCV = 1
Now, to ensure that MXNet builds with the Pascal hardware-level low-precision acceleration, we need to edit the Mshadow Makefile as follows −
MSHADOW_CFLAGS += -DMSHADOW_USE_PASCAL=1
Finally, with the help of the following commands, you can build the complete Apache MXNet library −
cd $MXNET_HOME
make -j $(nproc)
Once you execute the above commands, the build process starts; it will take a couple of hours to finish. You will get a file named libmxnet.so in the mxnet/lib directory.
Step 2
Install the Apache MXNet Python Bindings
In this step, we will install the MXNet Python bindings. To do so, we need to run the following commands in the MXNet directory −
cd $MXNET_HOME/python
sudo pip install -e .
Once done with the above steps, you are ready to run MXNet on your NVIDIA Jetson TX2 or Nano. It can be verified with the following commands −
import mxnet
mxnet.__version__
It will return the version number if everything is working properly.
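As a further hedged check that the CUDA bindings work on the Jetson's integrated GPU, you can run a tiny computation on mx.gpu(0); this assumes the build above was configured with CUDA enabled.
import mxnet as mx

x = mx.nd.ones((2, 2), ctx=mx.gpu(0))
print((x + x).asnumpy())    # fails with an MXNetError if the GPU is not usable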