The Jetson Nano is Nvidia’s latest machine learning development platform. Previous iterations of the Jetson platform were aimed squarely at professional developers looking to build large-scale commercial products. They are powerful, yet expensive. With the Jetson Nano, Nvidia has lowered the price of entry and opened the way for a Raspberry Pi-like revolution, this time for machine learning.
The Jetson Nano is a $99 single board computer (SBC) that borrows from the design language of the Raspberry Pi with its small form factor, block of USB ports, microSD card slot, HDMI output, GPIO pins, camera connector (which is compatible with the Raspberry Pi camera), and Ethernet port. However, it isn’t a Raspberry Pi clone. The board is a different size, there is support for embedded DisplayPort, and there is a huge heat sink!
Under the heatsink is the production-ready Jetson Nano System on Module (SOM). The development kit is essentially a carrier board (with all the ports) for the module. In a commercial application, designers would build their products to accept the SOM, not the development board.
While Nvidia wants to sell lots of Jetson modules, it is also aiming to sell the board (with module) to enthusiasts and hobbyists who may never use the module version but are happy to create projects based around the development kit, much like they do with the Raspberry Pi.
When you think of Nvidia, you probably think of graphics cards and GPUs, and rightly so. While Graphics Processing Units are great for 3D gaming, it turns out they are also good at running machine learning algorithms.
The Jetson Nano has a GPU with 128 CUDA cores based on the Maxwell architecture. Each generation of GPU from Nvidia is based on a new microarchitecture design. This central design is then used to create different GPUs (with different core counts, and so on) for that generation. The Maxwell architecture was used first in the GeForce GTX 750 and the GeForce GTX 750 Ti. A second-generation Maxwell GPU was introduced with the GeForce GTX 970.
The original Jetson TX1 used a 1024-GFLOP Maxwell GPU with 256 CUDA cores. The Jetson Nano uses a cut-down version of the same processor. According to the boot logs, the Jetson Nano has the same second-generation GM20B variant of the Maxwell GPU, but with half the CUDA cores.
The Jetson Nano comes with a large collection of CUDA demos, from smoke particle simulations to Mandelbrot rendering, with a healthy dose of Gaussian blurs, JPEG encoding, and fog simulations along the way.
The potential for fast and smooth 3D games, like those based on the various 3D engines id Software has released as open source, is good. I couldn’t actually find any that work yet, but I am sure that will change.
Having a good GPU for CUDA based computations and for gaming is nice, but the real power of the Jetson Nano is when you start using it for machine learning (or AI as the marketing people like to call it).
Nvidia has an open source project called “Jetson Inference” which runs on all its Jetson platforms, including the Nano. It demonstrates various clever machine learning techniques, including object recognition and object detection. For developers, it is an excellent starting point for building real-world machine learning projects. For reviewers, it is a cool way to see what the hardware can do!
The object recognition neural network has about 1000 objects in its repertoire. It can work either from still images or live from the camera feed. Similarly, the object detection demo knows about dogs, faces, walking people, airplanes, bottles, and chairs.
When running live from a camera, the object recognition demo can process (and label) at about 17fps. The object detection demo, searching for faces, runs at about 10fps.
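These demos come from Nvidia’s jetson-inference project, which also exposes Python bindings. A minimal sketch of a live classification loop, modeled on the project’s imagenet-camera example, might look like the following. Note that the module names, the "googlenet" model choice, and the camera device are taken from one release of the project and may differ in yours; the `describe` helper is my own addition for formatting the overlay label.

```python
def describe(class_desc, confidence):
    """Format a classification result as an overlay label."""
    return "{} ({:.1f}%)".format(class_desc, confidence * 100)

if __name__ == "__main__":
    # These modules ship with Nvidia's jetson-inference project and are
    # only available on a Jetson board.
    import jetson.inference
    import jetson.utils

    net = jetson.inference.imageNet("googlenet")        # ~1000-class ImageNet model
    camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")
    display = jetson.utils.glDisplay()

    while display.IsOpen():
        img, width, height = camera.CaptureRGBA()        # grab a frame from the camera
        class_idx, confidence = net.Classify(img, width, height)
        label = describe(net.GetClassDesc(class_idx), confidence)
        display.RenderOnce(img, width, height)           # show the frame
        display.SetTitle("{} | {:.0f} FPS".format(label, display.GetFPS()))
```

The classification itself runs on the GPU, which is why the Nano can sustain camera-rate framerates on a 1000-class network.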
Visionworks is Nvidia’s SDK for computer vision. It implements and extends the Khronos OpenVX standard, and it is optimized for CUDA-capable GPUs and SOCs, including the Jetson Nano.
There are several different VisionWorks demos available for the Jetson Nano, including feature tracking, motion estimation, and video stabilization. These are common tasks in robotics, drones, autonomous driving, and intelligent video analytics.
Using a 720p HD video feed the feature tracking works at over 100fps, while the motion estimation demo can calculate the motion of around six or seven people (and animals) from a 480p feed at 40fps.
For videographers, the Jetson Nano can stabilize handheld (shaky) video at over 50fps from a 480p input. What these three demos show is real-time computer vision tasks running at high framerates, a solid foundation for creating apps in a wide range of areas that involve video input.
The killer demo that Nvidia provided with my review unit is “DeepStream.” Nvidia’s DeepStream SDK is a yet-to-be-released framework for high-performance streaming analytics applications that can be deployed on site in retail outlets, smart cities, industrial inspection areas, and more.
The DeepStream demo shows real-time video analytics on eight 1080p inputs. Each input is H.264 encoded and represents a typical stream coming from an IP camera. It is an impressive demo, showing real-time object tracking of people and cars at 30fps across eight video inputs. Remember, this is running on a $99 Jetson Nano!
Raspberry Pi Killer?
As well as a powerful GPU and some sophisticated AI tools, the Jetson Nano is also a fully working desktop computer running a variant of Ubuntu Linux. As a desktop environment it has several distinct advantages over the Raspberry Pi. First, it has 4GB of RAM. Second, it has a quad-core Cortex-A57 based CPU. Third, it has USB 3.0 (for faster external storage).
While running a full desktop on the Pi can be arduous, the desktop experience provided by the Jetson Nano is much more pleasant. I was able to easily run Chromium with five open tabs, LibreOffice Writer, the IDLE Python development environment, and a couple of terminal windows. This is mainly because of the 4GB of RAM, but startup time and application performance are also superior to the Raspberry Pi due to the use of Cortex-A57 cores rather than Cortex-A53 cores.
For those interested in actual performance numbers: using my threadtest tool (here on GitHub) with eight threads, each calculating the first 12,500,000 primes, the Jetson Nano completed the workload in 46 seconds. This compares to four minutes on a Raspberry Pi 3 and 21 seconds on my Ryzen 5 1600 desktop.
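The benchmark itself is simple: each worker counts primes by trial division. My tool is written in C++, but a rough Python sketch of the same idea looks like this (I use multiprocessing rather than threads here, since Python’s GIL would otherwise serialize CPU-bound threads; the limits are tiny compared to the real benchmark):

```python
import multiprocessing

def is_prime(n):
    """Trial division primality test."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def count_primes(limit):
    """Count the primes below `limit`."""
    return sum(1 for n in range(limit) if is_prime(n))

if __name__ == "__main__":
    # One worker per core; the Nano's 10W mode exposes four Cortex-A57 cores.
    with multiprocessing.Pool(4) as pool:
        print(pool.map(count_primes, [100_000] * 4))
```

Because the workload is embarrassingly parallel, it scales almost linearly with core count, which is exactly what makes it a useful quick comparison between boards.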
Using the OpenSSL “speed” test, which measures the performance of cryptographic algorithms, the Jetson Nano is at least 2.5 times faster than the Raspberry Pi 3, peaking at 10 times faster, depending on the exact test.
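OpenSSL’s speed test works by hashing (or encrypting) fixed-size blocks in a tight loop for a few seconds and reporting throughput. If you want a quick, portable approximation of one of its hash tests without OpenSSL, a Python sketch using the standard library’s hashlib might look like this (the block size and duration are my own choices, not OpenSSL’s defaults):

```python
import hashlib
import time

def hash_throughput(algorithm="sha256", block_size=16 * 1024, duration=1.0):
    """Hash fixed-size blocks for roughly `duration` seconds;
    return throughput in bytes per second."""
    data = b"\x00" * block_size
    h = hashlib.new(algorithm)
    processed = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        h.update(data)
        processed += block_size
    return processed / duration

if __name__ == "__main__":
    mbps = hash_throughput() / 1e6
    print("sha256 on 16KiB blocks: {:.0f} MB/s".format(mbps))
```

Numbers from a Python loop will be lower than OpenSSL’s hand-tuned assembly, but the relative gap between two boards tells a similar story.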
One of the key features of the Raspberry Pi is its set of General Purpose Input and Output (GPIO) pins. They allow you to connect the Pi to external hardware like LEDs, sensors, motors, displays, and more.
The Jetson Nano also has a set of GPIO pins and the good news is that they are Raspberry Pi compatible. Initial support is limited to the Adafruit Blinka library and to userland control of the pins. However, all of the plumbing is there to allow broad support for many of the Raspberry Pi HATs available.
To test it all out, I took a Pimoroni Rainbow HAT and connected it to the Jetson. The library for the Rainbow HAT (https://github.com/pimoroni/rainbow-hat) expects a Raspberry Pi along with some underlying libraries, so I didn’t try to install it. However, I did modify one of the example scripts that comes with the Jetson Nano so I could make one of the board’s LEDs blink on and off via Python.
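For those who want to try something similar, a minimal blink sketch using the Jetson.GPIO library (which ships on the Nano and deliberately mirrors the RPi.GPIO API) could look like the following. The choice of board pin 12 is an assumption for illustration; check which pin your LED is actually wired to before running it.

```python
import time

def blink(set_pin, times=10, interval=0.5):
    """Drive a pin high/low via the supplied setter function."""
    for _ in range(times):
        set_pin(True)
        time.sleep(interval)
        set_pin(False)
        time.sleep(interval)

if __name__ == "__main__":
    # Jetson.GPIO is preinstalled on the Nano and mirrors RPi.GPIO.
    # Board pin 12 is an assumption -- match it to your wiring.
    import Jetson.GPIO as GPIO

    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(12, GPIO.OUT)
    try:
        blink(lambda high: GPIO.output(12, GPIO.HIGH if high else GPIO.LOW))
    finally:
        GPIO.cleanup()
```

Passing a setter function into blink() keeps the timing logic separate from the hardware calls, which also makes it easy to swap in Blinka’s digitalio objects later.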
Because of the high-performance CPU and the desktop-like GPU, the Jetson Nano has a large heatsink, and you can also buy an optional fan. The board has different power modes, controlled via a program called nvpmodel. There are two main power modes: the 10W configuration, which uses all four CPU cores and lets the GPU run at maximum speed, and the 5W mode, which disables two of the cores and throttles the GPU.
If you are running apps which push the performance of the board you will need to ensure that you use a good power supply. For general usage, you can use USB for power, as long as the supply is rated for at least 2.5A. For high-performance tasks, you should use a 5V/4A power supply, which has a separate socket and is enabled via a jumper on the board.
If you look at the Jetson Nano as an affordable way onto the Jetson platform, it is brilliant. Rather than having to spend $600 or more to get a development kit which is compatible with Nvidia’s machine learning offerings and works with frameworks like VisionWorks, you just pay $99. What you get is still highly capable and able to perform lots of interesting machine learning tasks. Plus, it leaves the door open to upgrading to the bigger versions of Jetson if needed.
As a direct alternative to the Raspberry Pi, the value proposition is less appealing, as the Pi only costs $35 (less if you go with one of the Zero models). Price is key: Do I want a Jetson Nano or three Raspberry Pi boards?
If you want something like the Raspberry Pi, but with more processing power, more GPU grunt and quadruple the RAM, then the Jetson Nano is the answer. Sure, it costs more, but you get more.
Bottom line is this: if the Raspberry Pi is good enough for you, stick with it. If you want better performance, if you want hardware accelerated machine learning, if you want a way into the Jetson ecosystem, then get a Jetson Nano today!