AirStack Version 0.2 Released


06 December 2019

 

Major Upgrades to the AIR-T

The Deepwave Digital team is proud to announce that AirStack Version 0.2.0 is now available for download to AIR-T customers.

AirStack 0.2.0 includes a number of changes and additional features designed to better support the underlying hardware and to make future upgrades easier. Our end goal is to make the AIR-T not just a development board, but a system that can be readily deployed in the field. Without further ado, here is the list of changes in AirStack 0.2.

New Features

  • In Place Firmware Upgrades - Firmware can now be updated directly from the Tegra SOM itself. There is no longer a need for an external PC to be hooked up via JTAG. After upgrading to 0.2.0, all users will be able to upgrade using a simple command line tool.

  • Variable Sample Rates - Improved sample rate decimation logic to allow for various sample rates. Currently the decimation logic supports dividing the base sample rate of 125 MSPS by 1, 2, 4, 8, or 16. We have added the necessary software hooks to leverage the new firmware decimation logic. The supported sample rates can be obtained by calling listSampleRates().

  • Simplified Software Upgrading - Software libraries critical to the functionality of the AIR-T are now properly Debian packaged. This will allow us to deploy fixes to specific components without needing a whole new OS image.

  • External LO - The tuning frequency of the AIR-T can now be set by an external oscillator. This adds the capability to phase align multiple AIR-T units for MIMO and many other applications.

  • 10 MHz Phase Locking - The AIR-T can now be phase locked to an external frequency reference enabling coherent processing across multiple units.

  • Live Frequency Tuning - The tuning frequency of the AIR-T may now be changed in real-time.

  • TX2i Support - We have implemented and tested our support for the industrial grade NVIDIA Jetson TX2i, allowing for wider operating temperature ranges and improved tolerance to vibration and harsh environmental conditions.

  • JetPack 4.2.2 - The OS image is now based off of JetPack 4.2.2. This updates various GPU accelerated libraries as well as the kernel itself. Full highlights of these changes can be found here.

  • CUDA - NVIDIA's CUDA has been upgraded to 10.0.326.

  • TensorRT - NVIDIA's TensorRT has been upgraded to 5.1.6.1.[1]

  • Ubuntu 18.04.2 - The operating system has been upgraded from Ubuntu 16.04.

  • Python Support For DNN Optimization - With the upgrade to JetPack 4.2.2, trained deep neural networks (DNN) may now be optimized and deployed solely using Python.

  • Docker Support - Support has been added for building and running Docker containers on the AIR-T.

  • Open Source Upgrades - We have updated our open source libraries, including GR-Wavelearner and GR-CUDA so that they are fully supported by AirStack 0.2.0.
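To make the new variable sample rate support concrete, here is a small illustrative sketch of the decimation scheme described above: the 125 MSPS base rate divided by 1, 2, 4, 8, or 16. The function names mirror listSampleRates() but are hypothetical stand-ins, not the actual AIR-T driver API.

```python
# Illustrative sketch (not the real driver API) of the AIR-T sample-rate
# decimation scheme: base rate of 125 MSPS divided by 1, 2, 4, 8, or 16.

BASE_RATE_SPS = 125e6          # AIR-T base sample rate (125 MSPS)
DECIMATION_FACTORS = (1, 2, 4, 8, 16)

def list_sample_rates():
    """Return the sample rates reachable with the supported decimations."""
    return [BASE_RATE_SPS / d for d in DECIMATION_FACTORS]

def nearest_supported_rate(requested_sps):
    """Pick the supported rate closest to a requested rate, since arbitrary
    rates between the decimated values are not available."""
    return min(list_sample_rates(), key=lambda r: abs(r - requested_sps))

if __name__ == "__main__":
    print([r / 1e6 for r in list_sample_rates()])  # rates in MSPS
```

On the AIR-T itself you would query the supported rates through the radio API rather than computing them; this sketch just shows why a request for, say, 30 MSPS would land on the 31.25 MSPS setting.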

Bug Fixes

Firmware

  • Fixed JESD sync issue where occasionally the AIR-T would not synchronize.

Operating System

  • Fixed issues to properly enable the SPI bus on J21, allowing control of external devices.

Drivers

  • Various device driver fixes were implemented to ensure compatibility with newer Linux kernels.
  • Fixed various compatibility issues with GNU Radio, including adding the capability to dynamically change various RF settings.

Download

The AIR-T upgraded software and firmware are available for customers to download in the Developer Portal.

We are in the process of updating and improving our tutorials to provide example code on how to utilize the new software functionality.

We hope to publish a roadmap of what’s to come shortly. As a preview, we are currently working hard on enabling the TX chain and hope to release that capability in early 2020.


[1] Trained networks saved as .plan files on an AirStack 0.1 AIR-T will have to be re-optimized from the source UFF models to be compatible with AirStack 0.2.0 and later.

Deepwave’s AIR-T for CBRS Radar Sensor

Deepwave's AIR-T Shows Viability as CBRS Sensor

Deepwave Digital is proud to announce that their sensor has concluded certification testing to become a critical component in the 5G Citizens Broadband Radio Service (CBRS) network: the first commercial spectrum sharing network. The Deepwave team has implemented a deep neural network on their Artificial Intelligence Radio Transceiver (AIR-T) that is capable of detecting, classifying, and reporting the presence of naval radars with extreme accuracy.

Today, Deepwave Digital's partner Key Bridge Wireless announced the conclusion of their Environmental Sensing Capability for the Citizens Broadband Radio Service (CBRS) in a press release. “We have leveraged the latest methods in AI and deep learning to create a sensor that correctly identified every radar signal variant in the certification test suite with extremely high accuracy,” said John Ferguson, CEO of Deepwave Digital. “Our detection algorithm was trained on tens of thousands of radar variants spanning the entire parameter space. We have coupled this software with our embedded, NVIDIA GPU-based software defined radio. This allowed us to demonstrate that AI is a commercially viable solution to detect and discern current and future incumbent radar waveforms.”

CBRS Overview

Historically, spectral bands have been assigned for specific applications. The CBRS network changes this paradigm by allowing the 3.5 GHz band to be utilized for both naval radars and commercial services such as LTE. A critical component in the CBRS network is the Environmental Sensing Capability (ESC) sensor. This sensor provides the ability to detect and discern the Navy user. If it does not detect a Navy user, the downstream network will provide access to the 3.5 GHz band for commercial services such as LTE. If the ESC does detect a Navy user, the band will not be available to commercial services.
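The ESC decision rule described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the protocol's logic only; the function name, detector score, and threshold are invented for clarity and are not part of any certified ESC implementation.

```python
# Minimal, hypothetical sketch of the CBRS ESC decision rule: the 3.5 GHz
# band is opened to commercial services (e.g. LTE) only when no incumbent
# Navy radar is detected. The score/threshold interface is illustrative.

def esc_decision(radar_score, threshold=0.5):
    """Map a detector confidence score in [0, 1] to the ESC outcome.

    radar_score -- classifier confidence that an incumbent radar is present
    threshold   -- hypothetical decision threshold for declaring a detection
    """
    radar_detected = radar_score >= threshold
    return {
        "radar_detected": radar_detected,
        # Band access for commercial users is the complement of detection.
        "band_available": not radar_detected,
    }
```

In the deployed system, the detection itself comes from the deep neural network running on the AIR-T, and the availability decision is passed downstream to the spectrum access system.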

For more information on the AIR-T, signal processing neural networks, or Deepwave Digital, Inc. please contact us.

Deepwave’s Presentation at GTC DC 2019

End-to-End Signal Processing and Deep Learning Using Embedded GPUs

The following presentation was given at NVIDIA's GPU Technology Conference (GTC) in Washington, DC on November 5, 2019. It was a great event where technology was showcased from many different research areas.

Presenter

Daniel Bryant

Abstract

We’ll present the GPU-accelerated digital signal processing (DSP) applications enabled by Deepwave Digital’s AI Radio Transceiver (AIR-T). We’ll also discuss our open source development tools and performance benchmarks for this new type of software-defined radio (SDR). By coupling NVIDIA’s TensorRT toolkit with the AIR-T, clients can rapidly develop and deploy deep learning applications at the edge of wireless systems. We’ll walk through a workflow for deep learning in wireless applications, including the acquisition of training data with the AIR-T, model training and optimization, and live inference on the AIR-T. Our solution addresses the issue of SDR bottlenecks. Because many DSP algorithms are highly parallelizable, GPUs can increase throughput while maintaining simple programmability. With the new shared memory architecture of the NVIDIA Jetson products, GPUs are now a viable solution for achieving short development times and high data rates while minimizing latency.

Presentation (pdf download)

Intro Slide

Presentation (video stream)

Deepwave Featured as Top 5 Things to See at NVIDIA’s GTC DC

The Premier AI Event is back in DC

"The center of the AI ecosystem shifts to D.C. this fall, when the GPU Technology Conference arrives at the Reagan Center in Washington, from Nov. 4-6."
Check out the full NVIDIA Blog post here.

The GTC DC event is a top-tier AI conference directed towards government, defense, and the private sector. Some of the best AI technology will be on display. Make sure to stop by and say hello to us.

Deepwave Digital will be at GTC DC on November 5. Come see our presentation "End-to-End Signal Processing and Deep Learning Using Embedded GPUs" to learn what we have been up to. We will be showing you how to rapidly accelerate signal processing using embedded GPUs.

cuFFT on the AIR-T with GNU Radio

FFTs with CUDA on the AIR-T with GNU Radio

GPUs are extremely well suited for processes that are highly parallel. The Fast Fourier Transform (FFT) is one of the most common techniques in signal processing and happens to be a highly parallel algorithm. In this blog post the Deepwave team walks you through how to leverage the embedded GPU built into the AIR-T to perform high-speed FFTs without the computational bottleneck of a CPU and without the long development cycle associated with writing VHDL code for FPGAs. By leveraging the GPU on the AIR-T, you get the best of both worlds: fast development time and high-speed processing.

You may not be aware, but a while back we pushed a new block to our open source GR-Wavelearner software: a processing block that allows customers to leverage NVIDIA's extremely efficient cuFFT algorithm on the AIR-T, out of the box. Because the AIR-T is the only Software Defined Radio (SDR) with native GPU support, it may be leveraged to accelerate FFT processing with very little programming expertise. Here is the short, three-step process.

Step 1: Update GR-Wavelearner

The first step is to make sure that the version of GR-Wavelearner installed on your AIR-T is up to date. Instructions for upgrading GR-Wavelearner may be found in this tutorial.

Step 2: Launch the Example Code

GNU Radio Companion is located in the Launcher on the left side of the desktop as shown in the figure below. Launch GNU Radio Companion, choose File -> Open, and select the gpu_fft_demo.grc file located in

/usr/local/src/deepwave/gr-wavelearner/examples

Once complete, your desktop will resemble the image below.

AIR-T Desktop

Now simply click the green Play button at the top of the GNU Radio application. That is it! You are now receiving live RF signal data from the AIR-T, executing a cuFFT process in GNU Radio, and displaying the real-time frequency spectrum.

Step 3: Tailoring to Your Application

While the example distributed with GR-Wavelearner will work out of the box, we do provide you with the capability to modify the FFT batch size, FFT sample size, and the ability to do an inverse FFT (additional features coming!). If you are an advanced GNU Radio user, we also provide the source code on our GitHub for you to customize to your needs.
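To show what these settings control, here is a CPU sketch (using NumPy) of the batched transform the GR-Wavelearner cuFFT block performs on the GPU. The parameter names (fft_size, batch_size, inverse) mirror the block's configurable options, but the function itself is an illustrative stand-in; NumPy's FFT is used here as a substitute for cuFFT.

```python
# CPU analog (NumPy stand-in for cuFFT) of the GR-Wavelearner GPU FFT block:
# a 1-D complex sample stream is processed in batches of fixed-size FFTs,
# with an optional inverse transform.

import numpy as np

def batched_fft(samples, fft_size, batch_size, inverse=False):
    """Reshape a 1-D complex stream into batches of (batch_size, fft_size)
    rows and transform each row. Trailing samples that don't fill a whole
    batch are dropped here; a streaming block would buffer them instead."""
    n = fft_size * batch_size
    usable = (len(samples) // n) * n
    frames = np.asarray(samples[:usable]).reshape(-1, batch_size, fft_size)
    transform = np.fft.ifft if inverse else np.fft.fft
    return transform(frames, axis=-1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)
    out = batched_fft(x, fft_size=256, batch_size=4)
    print(out.shape)  # (4, 4, 256): 4 batches of 4 FFTs of 256 points
```

Batching many small FFTs per GPU call is the same design choice cuFFT makes: it amortizes launch overhead, which is why the block exposes batch size as a tunable parameter.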

Video Tutorial

We have also recorded the full procedure in a video to help you get started. Check it out below.

More Information on the AIR-T

If you do not yet own an AIR-T, please visit our webpage for more information or submit an inquiry to talk to our sales team.

Deepwave Digital Logo

New AIR-T Enclosures

AIR-T Enclosures Fresh off the Production Line

We have just received our first production versions of the new AIR-T software defined radio enclosure, and it is beautiful. If you already have an AIR-T, you can order a kit today to protect your SDR. If you are thinking about acquiring our GPU-enabled SDR, make sure to talk with us about the enclosure.

The enclosure is expertly constructed from aluminum to produce a polished, elegant, and sleek metallic silver finish. It measures 192 x 182 x 79 mm (7.5 x 7.2 x 3.1 inches), and the power button illuminates blue when the system is on. All RF ports are brought to the front of the enclosure for ease of use, and all computer peripheral connections are brought to the rear.

Submit a sales inquiry here

AIR-T Enclosure Front

AIR-T Enclosure Back

Simplifying AI for Communications & Radar

Title: Simplifying AI for Communications, Radar, and Wireless Systems

Presented by John Ferguson (CEO Deepwave Digital)

Abstract

Radio frequency (RF) systems have become increasingly complex, and the number of connected devices is expected to increase. We'll discuss how deep learning within RF shows promise for dealing with a congested spectrum by enhancing reliability and simplifying the task of building effective wireless systems. Deep learning algorithms within RF technology show superior results, classifying signals well below the noise floor when compared to traditional signal processing methods. We'll describe how we've worked with partners to design a software-configurable wide-band RF transceiver system that can perform real-time DSP and deep learning with an NVIDIA GPU. We'll discuss RF system performance, RF training data collection, and software used to create applications. Additionally, we will present data demonstrating applications of deep learning-enabled RF technology.

 

Welcome to Deepwave Digital

Deepwave Digital directly enables the incorporation of artificial intelligence (AI) in radio frequency (RF) and wireless systems by supplying customers an integrated hardware and software solution. Our technology moves the AI computation engine to the signal edge of the RF system to reduce network bandwidth, latency, and human-driven analysis requirements.