AirStack Version 0.3.0 Released

 

We are excited to announce that AirStack Version 0.3.0 is now available for download in the Developer Portal for the AIR-T. The Deepwave team has worked hard to bring this update to the community and believes that these new features will allow customers to create new and exciting deep learning applications on the AIR-T. This software version enables dual-channel continuous transmit (2x2 MIMO), signal stream triggering, and a new Python FPGA register API.

 

New Features in AirStack 0.3.0

  • Enabled TX functionality - You can now continuously transmit waveforms using the AIR-T. We currently support all the same features on the transmit channels as we do on the receive channels. For an example of how to transmit a continuous signal with the AIR-T, see the tutorial Transmitting on the AIR-T; a minimal sketch also follows this list.

  • Hardware Triggering - We have added support to the RX channels to allow for synchronization of signal reception. Multiple AIR-Ts can now begin receiving signals at the exact same time and also be synchronized with external equipment. To learn how to use this feature, take a look at the tutorial Triggering a Recording.

  • New FPGA Register API - We now allow direct user access to the FPGA registers via the SoapySDR Register API. Most users won’t need this functionality, but advanced users working with their own custom firmware now have a simplified way to control the FPGA from C++ or Python (see the sketch after this list).

  • Customized Memory Management - We have completely reworked memory management in the DMA driver. You can now custom tailor DMA transfers to/from the FPGA based on your requirements for latency and throughput.

    • The overall size of the per-channel receive buffer is now tunable via a kernel module parameter in Deepwave's AXI-ST DMA driver. The default size is 256 MB per channel, which is unchanged from prior versions and buffers a half second of data at 125 MSPS. This can be decreased to reduce memory overhead or increased if required by the user application.
    • The size of each unit of data transfer, called a descriptor, is now tunable via a kernel module parameter. This is intended for advanced users. Larger descriptors allow for improved performance, especially when using multiple radio channels at the same time. Smaller descriptors can help to optimize for latency, particularly at lower sample rates. These effects are small but may be useful for some specific applications.

  • Device Details Query - The SoapySDR API and command-line utilities now report the versions of various software and hardware components of the AIR-T. This will make it easy to ensure the various components of AirStack are up to date.
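
Below is a minimal sketch of continuous transmission through SoapySDR's Python bindings. It is an illustration rather than the official tutorial code: the driver key, channel index, sample rate, center frequency, and tone parameters are all assumptions.

import numpy as np
import SoapySDR
from SoapySDR import SOAPY_SDR_TX, SOAPY_SDR_CF32

sdr = SoapySDR.Device(dict(driver="SoapyAIRT"))  # assumed driver key
sdr.setSampleRate(SOAPY_SDR_TX, 0, 125e6)  # assumed rate, TX channel 0
sdr.setFrequency(SOAPY_SDR_TX, 0, 2.4e9)   # assumed center frequency

# Build one buffer of a complex tone and loop it indefinitely
n = 16384
tone = np.exp(2j * np.pi * 1e6 * np.arange(n) / 125e6).astype(np.complex64)

tx_stream = sdr.setupStream(SOAPY_SDR_TX, SOAPY_SDR_CF32, [0])
sdr.activateStream(tx_stream)
try:
    while True:
        sdr.writeStream(tx_stream, [tone], tone.size)  # re-queue the buffer
except KeyboardInterrupt:
    pass
finally:
    sdr.deactivateStream(tx_stream)
    sdr.closeStream(tx_stream)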
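
Similarly, the sketch below shows how a custom firmware register might be accessed through the SoapySDR Register API. The interface name and register address are placeholders, not documented AIR-T values.

import SoapySDR

sdr = SoapySDR.Device(dict(driver="SoapyAIRT"))  # assumed driver key
print(sdr.listRegisterInterfaces())  # discover the available register banks

ADDR = 0x0  # placeholder address for a custom firmware register
sdr.writeRegister("FPGA", ADDR, 0x1)  # interface name "FPGA" is an assumption
value = sdr.readRegister("FPGA", ADDR)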

 


For more information on Deepwave Digital, please contact us.

Arrow Electronics is now an AIR-T Distributor

Deepwave Digital is excited to announce that we have expanded our distributor network to include Arrow Electronics. Arrow is fully stocked and selling the Artificial Intelligence Radio Transceiver. The following models are available for sale on Arrow's website:

Artificial Intelligence Radio Transceiver (AIR-T)

AIR7101-A | Artificial Intelligence Radio Transceiver (AIR-T)

The AIR7101-A is a high-performance software-defined radio (SDR) seamlessly integrated with state-of-the-art processing and deep learning inference hardware. The incorporation of an embedded graphics processing unit (GPU) enables real-time wideband digital signal processing (DSP) algorithms to be executed in software, without requiring specialized field programmable gate array (FPGA) firmware development. The GPU is the most utilized processor for machine learning; therefore, the AIR-T significantly reduces the barrier for engineers to create autonomous signal identification, interference mitigation, and many other machine learning applications. By granting the deep learning algorithm full control over the transceiver system, the AIR-T allows for fully autonomous software-defined and cognitive radio.


Artificial Intelligence Radio Transceiver (AIR-T) with Enclosure

AIR7101-B | AIR-T with Enclosure

The AIR7101-B is a high-performance software-defined radio (SDR) seamlessly integrated with state-of-the-art processing and deep learning inference hardware. The incorporation of an embedded graphics processing unit (GPU) enables real-time wideband digital signal processing (DSP) algorithms to be executed in software, without requiring specialized field programmable gate array (FPGA) firmware development. The GPU is the most utilized processor for machine learning; therefore, the AIR-T significantly reduces the barrier for engineers to create autonomous signal identification, interference mitigation, and many other machine learning applications. By granting the deep learning algorithm full control over the transceiver system, the AIR-T allows for fully autonomous software-defined and cognitive radio, delivered fully installed in an enclosure.

The AIR-T enclosure is expertly constructed from aluminum to produce a polished, elegant, and sleek metallic silver finish. It measures 192 x 182 x 79 mm (7.5 x 7.2 x 3.1 inches), and the power button illuminates blue when the system is on. All RF ports are brought to the front of the enclosure for ease of use, and all computer peripheral connections are brought to the rear.

Webinar: Detecting and Labeling Training Data for Signal Classification (Updated)


Update

The video stream, slides, and source code are now available to the general public.

Slides

You may download the slides here.

 

Video Stream

 

Source Code

Available in the webinar section of our GitHub here.

 
 


Original Post

Deep Learning Series Part 1 of 2

Deepwave Digital will be hosting their next webinar on April 14, 2020 at 1pm EST. In this webinar, we will demonstrate how to detect and label training data for signal classification, using the cuSignal signal processing techniques learned in our previous webinar. Specifically, you will learn how to leverage GPU signal processing and the AIR-T to detect, label, and record training data from various key fobs over the air. cuSignal is part of the NVIDIA RAPIDS development environment and is an effort to GPU-accelerate all of the signal processing functions in the SciPy Signal library.

 
When: April 14th, 2020 at 1pm EST
 
Register Here

Space is limited, so make sure to register in advance. Read below for more information about the webinar; we hope you will join us!

Deepwave Digital, Inc.

 


Webinar Agenda

Introduction to Deepwave Digital

We will introduce you to the Deepwave Digital team and provide an overview of what our startup does. We will also discuss the way we see deep learning being applied to systems and signals.

AirStack Programming API for the AIR-T

We will provide a detailed review of AirStack, the application programming interface (API) for the AIR-T. The figure below outlines the CPU, GPU, and deep learning interfaces supported.

 

Demonstrations

Key Fob Signals for Labeling

We will create a training data set using an assortment of different key fobs. These data will be added to our AirPack software package in an upcoming release.

Designing a Real-time Power Detector with cuSignal for the AIR-T

In this section, we will walk attendees through the real-time Python code (a minimal sketch follows the list below) to:

  1. Compute the instantaneous power of the signal stream
  2. Filter and down-sample the power to a lower data rate
  3. Reshape the down-sampled data into detection segments
  4. Perform detection on each segment of the down-sampled data
  5. Display the detected data (if desired)
  6. Record the data to disk for deep learning training
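
A minimal sketch of this pipeline using CuPy and cuSignal is shown below; the decimation factor, segment length, and threshold are illustrative assumptions rather than values from the webinar.

import cupy as cp
import cusignal

decimation = 1000     # assumed down-sampling factor
seg_len = 128         # assumed detection segment length
threshold_db = -50.0  # assumed detection threshold

def detect(samples):
    # 1. Compute the instantaneous power of the complex signal stream
    power = cp.abs(samples) ** 2
    # 2. Filter (moving average) and down-sample the power to a lower rate
    kernel = cp.ones(decimation, dtype=cp.float32) / decimation
    power_lo = cusignal.fftconvolve(power, kernel, mode='same')[::decimation]
    # 3. Reshape the down-sampled data into detection segments
    n_seg = power_lo.size // seg_len
    segments = power_lo[:n_seg * seg_len].reshape(n_seg, seg_len)
    # 4. Compare each segment's mean power (in dB) to the threshold
    seg_db = 10.0 * cp.log10(cp.mean(segments, axis=1))
    # Steps 5 and 6 (display and disk recording) are omitted from this sketch
    return segments, seg_db > threshold_db

# Random complex noise standing in for AIR-T samples
x = (cp.random.randn(2_000_000) + 1j * cp.random.randn(2_000_000)).astype(cp.complex64)
segments, detections = detect(x)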

Real-time Signal Detection and Labeling

Here we will demonstrate how to execute the software to detect, label, and record the key fob signals captured over the air. An example of this is shown in the video below, with recorded sections in blue/aqua.

 

 


When: April 14th, 2020 at 1pm EST

Register Here for the Webinar

Deepwave and RAPIDS Team Collaborate on cuSignal 0.13


Over the past few months, NVIDIA has been working on a new version of cuSignal: version 0.13. As part of their RAPIDS environment, cuSignal GPU-accelerates all of the signal processing functions in the SciPy Signal library.

Read the full Medium article here.

Online Processing

Deepwave has been working with NVIDIA to make online (real-time) processing with cuSignal a reality. Check out how to perform signal processing in real time in this video.

More advances to come with cuSignal!

As Deepwave continues to help NVIDIA make GPU-based signal processing a reality, check back in with us to find out more.

To get started, see our tutorials here.

The cuSignal source code may be found here.

Deepwave Digital Webinar March 25 2020 (Updated)


Update

The webinar on March 25, 2020 was a great success and we thank you for attending! The video stream, slides, and source code are now available to the general public.

Slides

You may download the slides here.

 

Video Stream

 

Source Code

Available in the new webinar section of our GitHub here.

 
 


Original Post

Amid all of the uncertainty in our work schedules, we think now is a great time to host a webinar on signal processing and deep learning with GPUs and the AIR-T. The webinar will cover the items below, but more importantly, we will be demonstrating the use of cuSignal on the AIR-T!

When: March 25th 2020 at 1pm EST

Register Here

Space is limited, so make sure to register in advance. Read below for more information about the webinar; we hope you will join us!

Deepwave Digital, Inc.

 

Webinar Agenda

Introduction to Deepwave Digital

We will introduce you to the Deepwave Digital team and provide an overview of what our startup does. We will also discuss the way we see deep learning being applied to systems and signals.

AirStack Programming API for the AIR-T

We will provide a detailed discussion of AirStack, the application programming interface (API) for the AIR-T. The figure below outlines the CPU, GPU, and deep learning interfaces supported.

Demonstrations

Signal Processing Using the GPU on the AIR-T

Here we will discuss programming the embedded NVIDIA Jetson GPU that is part of the AIR-T using CUDA, PyCUDA, and GNU Radio.

cuSignal - NVIDIA's GPU Accelerated Signal Library

cuSignal is an open source, GPU-accelerated version of scipy.signal. The team at NVIDIA started the initiative a few months back, and Deepwave has decided to jump on board and contribute. If you are not familiar with cuSignal, it is part of the larger RAPIDS project at NVIDIA: the push to GPU-accelerate data science libraries.

Read more about cuSignal here

Applications of Deep Learning

Finally, we will close the webinar by discussing deep learning applications: how to leverage the AIR-T to acquire data and how to deploy trained neural networks on the AIR-T for inference.

When: March 25th 2020 at 1pm EST

Register Here for the Webinar

Simple Spectrum Data with the AIR-T

Power Spectrum Measurement

If you are looking for a very simple way to acquire the power spectral density of a received signal with the AIR-T, you may like the Soapy Power project. The resulting spectrum output may be used for monitoring interference, acquiring signals for deep learning, or examining a test signal. Soapy Power is part of the larger SoapySDR ecosystem, which has built-in support on the AIR-T. In this post, we will walk you through the installation of Soapy Power on the AIR-T and provide a brief demo to help get you started.

Requirements

Take a Spectrum Using the AIR-T

Using Soapy Power, it is very easy to acquire a spectrum snapshot and record it to a CSV file. Sample rate, center frequency, and processing parameters can all be controlled via command-line arguments, as you will see in the example below.

$ soapy_power -g 0 -r 125M -f 2.4G -b 8192 -n 100 -O data.csv

Let's walk through this command. The soapy_power command is the program being called. The -g 0 option sets the gain to 0 dB. The -r 125M option sets the receiver sample rate to 125 MSPS. The -f 2.4G option tunes the radio to a frequency of 2.4 GHz. We set the FFT size to 8192 samples using the -b 8192 option and average 100 windows using the -n 100 option. Finally, the output file is defined by the -O data.csv option. Following the execution of the above command, a file containing the spectrum data is recorded.

To visualize the data, we will use Python's matplotlib package with the following script:

import numpy as np
from matplotlib import pyplot as plt

with open('data.csv', 'r') as csvfile:
    data_str = csvfile.read()  # Read the data
data = data_str.split(',')  # Use comma as the delimiter

timestamp = data[0] + ' ' + data[1]  # Timestamp as YYYY-MM-DD hh:mm:ss
f0 = float(data[2])  # Start frequency (Hz)
f1 = float(data[3])  # Stop frequency (Hz)
df = float(data[4])  # Frequency spacing (Hz)
sig = np.array(data[6:], dtype=float)  # Signal data (data[5] is the sample count)
freq = (f0 + df * np.arange(sig.size)) / 1e9  # Frequency array matched to the data length

# Plot the data
plt.plot(freq, sig)
plt.xlim([freq[0], freq[-1]])
plt.ylabel('PSD (dB)')
plt.xlabel('Freq (GHz)')
plt.show()

Resulting Power Spectral Density Plot


Visit our documentation page here for the full tutorial including installation instructions.

AirStack Version 0.2 Released


06 December 2019

 

Major Upgrades to the AIR-T

The Deepwave Digital team is proud to announce that AirStack Version 0.2.0 is now available for download to AIR-T customers.

Revision 0.2.0 of AirStack brings a number of changes and additional features designed to better support the underlying hardware and to make upgrades easier in the future. Our end goal is to make the AIR-T not just a development board, but a system that can be readily deployed to the field. Without further ado, here is the list of changes in AirStack 0.2.

New Features

  • In-Place Firmware Upgrades - Firmware can now be updated directly from the Tegra SOM itself. There is no longer a need for an external PC to be hooked up via JTAG. After upgrading to 0.2.0, all users will be able to perform future upgrades using a simple command-line tool.

  • Variable Sample Rates - Improved sample rate decimation logic allows for various sample rates. Currently the decimation logic supports dividing the base sample rate of 125 MSPS by 1, 2, 4, 8, or 16. We have added the necessary software hooks to leverage the new firmware decimation logic. The supported sample rates can be obtained by calling listSampleRates(); a minimal sketch follows this list.

  • Simplified Software Upgrading - Software libraries critical to the functionality of the AIR-T are now properly Debian packaged. This will allow us to deploy fixes to specific components without needing a whole new OS image.

  • External LO - The tuning frequency of the AIR-T can now be set by an external oscillator. This adds the capability to phase align multiple AIR-T units for MIMO and many other applications.

  • 10 MHz Phase Locking - The AIR-T can now be phase locked to an external frequency reference enabling coherent processing across multiple units.

  • Live Frequency Tuning - The tuning frequency of the AIR-T may now be changed in real-time.

  • TX2i Support - We have implemented and tested our support for the industrial grade NVIDIA Jetson TX2i allowing for improved temperature ranges, vibration tolerance, and environmental conditions.

  • JetPack 4.2.2 - The OS image is now based on JetPack 4.2.2. This updates various GPU-accelerated libraries as well as the kernel itself. Full highlights of these changes can be found here.

  • CUDA - NVIDIA's CUDA has been upgraded to 10.0.326.

  • TensorRT - NVIDIA's TensorRT has been upgraded to 5.1.6.1 [1].

  • Ubuntu 18.04.2 - The operating system has been upgraded from Ubuntu 16.04.

  • Python Support For DNN Optimization - With the upgrade to JetPack 4.2.2, trained deep neural networks (DNN) may now be optimized and deployed solely using Python.

  • Docker Support - Support has been added for building and running Docker containers on the AIR-T.

  • Open Source Upgrades - We have updated our open source libraries, including GR-Wavelearner and GR-CUDA so that they are fully supported by AirStack 0.2.0.
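
As an illustration of the new sample rate query, here is a minimal sketch using SoapySDR's Python bindings; the driver key below is an assumption.

import SoapySDR
from SoapySDR import SOAPY_SDR_RX

sdr = SoapySDR.Device(dict(driver="SoapyAIRT"))  # assumed driver key
rates = sdr.listSampleRates(SOAPY_SDR_RX, 0)  # supported RX rates, channel 0
print(rates)  # expect 125 MSPS divided by 1, 2, 4, 8, or 16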

Bug Fixes

Firmware

  • Fixed JESD sync issue where occasionally the AIR-T would not synchronize.

Operating System

  • Fixed issues in order to properly enable the SPI bus on J21, thus allowing for control of external devices.

Drivers

  • Various device driver fixes were implemented to ensure compatibility with newer Linux kernels.
  • Fixed various compatibility issues with GNU Radio, including adding the capability to dynamically change various RF settings.

Download

The AIR-T upgraded software and firmware are available for customers to download in the Developer Portal.

We are in the process of updating and improving our tutorials to provide example code on how to utilize the new software functionality.

We hope to publish a roadmap of what’s to come shortly. As a bit of a preview, we are currently working hard on enabling the TX chain and hope to have some initial capability available in early 2020.


[1] Trained networks saved as .plan files on an AirStack 0.1 AIR-T will have to be re-optimized from the source UFF models to be compatible with AirStack 0.2.0 and later.

Deepwave’s AIR-T for CBRS Radar Sensor

Deepwave's AIR-T Shows Viability as CBRS Sensor

Deepwave Digital is proud to announce that their sensor has concluded certification testing to become a critical component in the 5G Citizens Broadband Radio Service (CBRS) network: the first commercial spectrum sharing network. The Deepwave team has implemented a deep neural network on their Artificial Intelligence Radio Transceiver (AIR-T) that is capable of detecting, classifying, and reporting the presence of naval radars with extreme accuracy.

Today, Deepwave Digital's partner Key Bridge Wireless announced the conclusion of their Environmental Sensing Capability for the Citizens Broadband Radio Service (CBRS) in a press release. “We have leveraged the latest methods in AI and deep learning to create a sensor that correctly identified every radar signal variant in the certification test suite with extremely high accuracy,” said John Ferguson, CEO of Deepwave Digital. “Our detection algorithm was trained on tens of thousands of radar variants spanning the entire parameter space. We have coupled this software with our embedded, NVIDIA GPU-based software defined radio. This allowed us to demonstrate that AI is a commercially viable solution to detect and discern current and future incumbent radar waveforms.”

CBRS Overview

Historically, spectral bands have been assigned for specific applications. The CBRS network changes this paradigm by allowing the 3.5 GHz band to be utilized for both naval radars and commercial services such as LTE. A critical component in the CBRS network is the Environmental Sensing Capability (ESC) sensor. This sensor provides the ability to detect and discern the Navy user. If it does not detect a Navy user, the downstream network will provide access to the 3.5 GHz band for commercial services such as LTE. If the ESC does detect a Navy user, the band will not be available to commercial services.

Read the full whitepaper here.

For more information on the AIR-T, signal processing neural networks, or Deepwave Digital, Inc. please contact us.

Deepwave’s Presentation at GTC DC 2019

End-to-End Signal Processing and Deep Learning Using Embedded GPUs

The following presentation was given at NVIDIA's GPU Technology Conference (GTC) in Washington, DC on November 5, 2019. It was a great event where technology was showcased from many different research areas.

Presenter

Daniel Bryant

Abstract

We’ll present the GPU-accelerated digital signal processing (DSP) applications enabled by Deepwave Digital’s AI Radio Transceiver (AIR-T). We’ll also discuss our open source development tools and performance benchmarks for this new type of software-defined radio (SDR). By coupling NVIDIA’s TensorRT toolkit with the AIR-T, clients can rapidly develop and deploy deep learning applications at the edge of wireless systems. We’ll walk through a workflow for deep learning in wireless applications, including the acquisition of training data with the AIR-T, model training and optimization, and live inference on the AIR-T. Our solution addresses the issue of SDR bottlenecks. Because many DSP algorithms are highly parallelizable, GPUs can increase the throughput while maintaining simple programmability. With the new shared memory architecture of the NVIDIA Jetson products, GPUs are now a viable solution for optimizing short development times and high data rates while minimizing latency.

Presentation (pdf download)

Intro Slide

Presentation (video stream)

Deepwave Featured as Top 5 Things to See at NVIDIA’s GTC DC

The Premier AI Event is back in DC

"The center of the AI ecosystem shifts to D.C. this fall, when the GPU Technology Conference arrives at the Reagan Center in Washington, from Nov. 4-6."
Check out the full NVIDIA Blog post here.

The GTC DC event is a top-tier AI conference directed towards government, defense, and the private sector. Some of the best AI technology will be on display. Make sure to stop by and say hello to us.

Deepwave Digital will be at GTC DC on November 5. Come see our presentation "End-to-End Signal Processing and Deep Learning Using Embedded GPUs" to learn what we have been up to. We will show you how to rapidly accelerate signal processing using embedded GPUs.
