In June 2019, NVIDIA released the latest addition to its Jetson line: the Nano. The Nano is a single-board computer built around a Tegra X1 system on a chip (SoC). Essentially, it is a tiny computer with a tiny graphics card. The Nano is capable of running CUDA, NVIDIA’s platform for general-purpose computing on graphics processing units (GPUs). With CUDA, we can run a host of machine learning algorithms that have been optimized for GPUs.
Remember that the Jetson Nano is an embedded device, which means it will likely be slower than any modern desktop or laptop computer you might encounter. As a result, it’s not intended to be used as a development system for training new machine learning models. We recommend investing in a newer graphics card or renting time on a cloud-based GPU server if you wish to train deep models from scratch.
That being said, it can be useful to deploy a model to the Nano if you wish to predict or classify things like images, sounds, etc. As the Nano is an embedded device, it can be easily integrated into other devices, such as a robotic chassis.
Here is a video if you would like to watch the setup:
While the Jetson Nano packs some amazing hardware in a small package, it does not contain everything you need to get started. You will need a few accessories:
Download and Burn Image
Download the latest Jetson Nano Ubuntu image from here: https://developer.nvidia.com/embedded/learn/get-started-jetson-nano-devkit#write. At the time of this writing, the latest image was r32.2.3 (based on Ubuntu 18.04).
Use a program like balenaEtcher to write the image file to your microSD card.
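If you prefer the command line on Mac or Linux, you can write the image with dd instead. This is only a sketch: the zip filename and the /dev/sdX device name are assumptions, so check your card’s actual device with lsblk first (writing to the wrong device will destroy its data).

```shell
# Sketch: the zip filename and /dev/sdX are placeholders -- verify the card's
# device name with `lsblk` before running this.
# `unzip -p` streams the .img inside the zip straight into dd.
unzip -p jetson-nano-sd-card-image.zip | sudo dd of=/dev/sdX bs=1M status=progress
sync   # flush buffered writes before removing the card
```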
Insert the microSD card into the SD card slot found just underneath the Jetson Nano main card (not the motherboard).
If you are using a power supply with a barrel jack adapter, you will need to put a jumper on the J48 header.
Connect the rest of the accessories to your Jetson Nano:
A Note About NVIDIA’s Online Course
NVIDIA has a free online course that covers the basics of the Jetson Nano. You can sign up for it here: https://courses.nvidia.com/courses/course-v1:DLI+C-RX-02+V1/about. Note that it has you download a different Ubuntu image for the Nano, which comes pre-installed with a number of tools for using and training machine learning models.
Once the Nano boots, you should be presented with Ubuntu’s configuration program. Walk through the steps to set up your username, password, timezone, WiFi, etc.
Once you’ve logged in to Ubuntu, you need to perform a few more steps.
If you’re using an Edimax EW-7811Un like I am (or any other wireless adapter based on the Realtek 8192 chipset), you’ll need to add its driver to the blacklist file to work around a bug in the Ubuntu kernel. This should prevent dropped connections during a remote desktop or SSH session.
Open the blacklist.conf file with vi:
sudo vi /etc/modprobe.d/blacklist.conf
Scroll to the bottom of the file and press ‘o’ to insert a new line and edit. Add the following line (rtl8192cu is the driver used by Realtek 8192-based adapters):
blacklist rtl8192cu
Press ‘esc’ to exit insert mode, and type ‘:wq’ to write the file and exit. Sometimes, it can also help to disable power saving mode to the WiFi card:
sudo iw dev wlan0 set power_save off
You will want to get the IP address of your Nano, too (write this down, as you’ll need it for RDP and SSH):
ip addr show
Finally, install the Remote Desktop Protocol (RDP) server. I know that RDP is a proprietary protocol by Microsoft, but it seems to work better than VNC on the Nano.
sudo apt update
sudo apt install -y xrdp
Follow this with a reboot:
sudo reboot
Log in via RDP
If you are on Mac or Linux, you will need to download and install the RDP client of your choice.
On Windows, start the Remote Desktop application. Enter your username and the IP address of the Nano. In the Display tab, change the resolution to 1280x1024 and change the Color depth to 16-bit. These will help the RDP session run more smoothly.
In the Experience tab, change the Performance setting to Modem (56 kbps) and de-select the Persistent bitmap caching option. Supposedly, these help optimize the RDP connection for speed over nice visuals.
Connect, and you should be presented with the Nano’s login screen. Enter your password (maybe twice) to get access to the desktop. From there, click the Activities button in the top-left to get the launcher to appear. You should then be able to open a file browser and terminal to begin developing on your remote desktop.
By default, the Jetson Nano should be running an SSH server. Just use your favorite SSH client (e.g. PuTTY on Windows) to connect to the Jetson Nano to get a remote terminal.
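For example, from a Mac or Linux terminal (the username and IP address here are placeholders; use your own):

```shell
# Placeholders: replace "nano" and 192.168.1.42 with your username and the Nano's IP.
ssh nano@192.168.1.42                   # open a remote terminal on the Nano
scp my_script.py nano@192.168.1.42:~/   # copy a file to the Nano's home directory
```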
If you would like a graphical interface to copy files between your host computer and the Nano, you can use SSHFS. This will let you mount a network drive on your host computer.
For Mac and Linux, follow the instructions here to install an SSHFS client: https://www.digitalocean.com/community/tutorials/how-to-use-sshfs-to-mount-remote-file-systems-over-ssh
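Once the client is installed, mounting looks something like this (a sketch; the username, IP address, and mount point are placeholders):

```shell
# Placeholders: substitute your own username, IP address, and mount point.
mkdir -p ~/nano-home                    # local directory to use as the mount point
sshfs nano@192.168.1.42:/home/nano ~/nano-home
# ...work with the Nano's files as if they were local...
fusermount -u ~/nano-home               # unmount when finished (`umount` on macOS)
```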
On Windows, you’ll need to perform a few more steps:
Once you connect via SSHFS, you should be presented with the directory structure located at /home/<username>/ of your Nano. Now, you can enter commands via terminal and work with files using a graphical interface!
Alias Python 3
The Jetson Nano image comes with Python 2 and Python 3 installed. However, Python 2 is configured as the default version. If you’re like me and trying to move all your work to Python 3, I recommend setting an alias in your .bashrc file. Open it with vi:
vi ~/.bashrc
Scroll to the bottom of the file and press ‘o’ to insert a new line and edit. Enter the following line:
alias python=python3
Press ‘esc’ to exit insert mode, and type ‘:wq’ to write the file and exit. Then enter the following on the command line to reload your .bashrc file:
source ~/.bashrc
In preparation for the next tutorial, you will need to install some packages and source code from NVIDIA. This will allow us to play with prepared demos and train a model for classifying images. Start by installing the following packages:
sudo apt install -y git cmake libpython3-dev python3-numpy
When that’s done, clone the jetson-inference repository:
git clone --recursive https://github.com/dusty-nv/jetson-inference
Create a build folder in the jetson-inference directory and execute cmake:
cd jetson-inference
mkdir build
cd build
cmake ../
This will take some time. Throughout the build process, you will be asked for your password, and at some point, you will be asked to download pre-trained models. Leave the default GoogleNet and ResNet-18 models selected and press ‘enter’ to continue.
You’ll also be asked to install PyTorch. If you switched over to using Python 3 earlier, you will want to de-select PyTorch for Python 2 and select PyTorch for Python 3 using the spacebar.
Once that’s done, build and install the tools:
make -j$(nproc)
sudo make install
sudo ldconfig
In the next tutorial, I’ll show you how to run the demos in the jetson-inference repository and use transfer learning to train a model. Feel free to read through the READMEs in the jetson-inference repository to see the demos: https://github.com/dusty-nv/jetson-inference
In addition to the online course, NVIDIA also has a set of tutorials showing you how to train a deep neural network from scratch using the DIGITS system: https://github.com/dusty-nv/jetson-inference/blob/master/docs/digits-workflow.md. Note that you will need access to a powerful graphics card (on your personal computer or cloud-based server) to use DIGITS.