Firmware Development Kit for the AIR-T
AirStack Sandbox is the new FPGA development kit that lets customers deploy their own intellectual property (IP) on the AIR-T's FPGA. The Deepwave team has worked tirelessly to abstract away the complications of writing custom firmware, enabling customers to interact with the FPGA through a greatly simplified interface. Difficult board-specific system management (power, clocking, timing, bus architecture) is handled by Deepwave developers and AirStack software. Customers can immediately begin to utilize FPGA computational resources without the long initial development times typical of bringing up an entire system.
Customers can insert their own IP cores into the FPGA for applications that may include:
- Custom signal processing blocks,
- Communication physical layers,
- Pre-processing for deep learning applications, or
- Feature extraction.
With Sandbox + AirStack, the full power of the AIR-T's FPGA, GPU, and CPUs is now at your disposal.
AirStack Sandbox Contents
- A top-level SystemVerilog design that instantiates the Deepwave Digital IP core, AXI4-Lite registers, and wiring for basic receive and transmit functionality
- An example AXI4-Lite register space for user control and status registers
- Required timing scripts
- All required Vivado files¹ for each AIR-T model
AirStack Sandbox works flawlessly with the AIR-T's AirStack API. AirStack 0.4.0+ contains all of the necessary drivers and Python interfaces for deployment of custom signal processing blocks.
AirStack Sandbox is an add-on license to the AirStack API and is only available directly through Deepwave Digital. Contact us today to get more information or to make a purchase.
¹ AirStack Sandbox leverages IP cores that are licensed for free from Xilinx when using Vivado. The JESD204B license must be obtained from Xilinx; however, the evaluation version is compatible with AirStack Sandbox.
27 August 2020
The Deepwave Digital team is proud to announce that AirStack Version 0.4.0 is now available for download to AIR-T customers.
Revision 0.4.0 of AirStack includes a number of changes and additional features designed to better support the underlying hardware and to make future upgrades easier. Our end goal is to make the AIR-T a software-defined radio that can be readily deployed to the field for neural network inference with extreme ease. Here is the list of changes in AirStack 0.4.0.
Real-time Linux Kernel - AirStack now ships with a Linux kernel with real-time extensions, making it easier for customers to write software with real-time latency requirements.
Native Anaconda Support - The AIR-T now fully supports conda out of the box. This support includes both the AirStack radio drivers and TensorRT for neural network inference.
Multithreading support - AirStack radio drivers now support launching background tasks to handle I/O using native Python. This is a performance improvement for applications that need to receive and transmit at the same time. For more details on how to use this feature, see our example here.
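The background-task pattern described above can be sketched with Python's standard library alone. This is a hedged illustration, not the AirStack driver API: `read_samples` is a hypothetical stand-in for the driver's receive call, and the worker thread queues sample blocks for the main thread to consume, which is the same producer/consumer structure used for simultaneous receive and transmit.

```python
import queue
import threading

def read_samples(count):
    # Hypothetical stand-in for a radio driver read; the actual
    # AirStack call is not shown here.
    return list(range(count))

def rx_worker(buf_queue, stop_event, block_size=4):
    # Background task: continuously read sample blocks and queue
    # them for the main thread to process.
    while not stop_event.is_set():
        buf_queue.put(read_samples(block_size))
        stop_event.wait(0.001)  # yield briefly between reads

buf_queue = queue.Queue()
stop_event = threading.Event()
worker = threading.Thread(target=rx_worker, args=(buf_queue, stop_event))
worker.start()

# The main thread consumes a few blocks, then shuts the worker down.
blocks = [buf_queue.get() for _ in range(3)]
stop_event.set()
worker.join()
print(len(blocks))  # 3
```

In a real application the main thread would hand each block to the GPU or a transmit stream while the worker keeps the receive buffer drained.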
JetPack 4.4 Upgrade - The base operating system image has been upgraded from JetPack version 4.2 to version 4.4. This provides updates and bugfixes to GPU accelerated libraries as well as the kernel itself. Full highlights of these changes can be found here.
CUDA Upgrade - NVIDIA's CUDA has been upgraded to 10.2. Most important for embedded applications are performance improvements which can reduce the time to launch a CUDA kernel by up to 50%.
cuDNN Upgrade - NVIDIA's cuDNN library has been upgraded to 8.0.
TensorRT Upgrade - The AIR-T leverages NVIDIA's TensorRT for neural network inference. For this release, TensorRT has been upgraded to 7.1.3, significantly improving the number of neural network layers supported for inference. Additionally, support for workflows based on ONNX models is much improved in this version of TensorRT.
Ubuntu 18.04.4 - The operating system has been updated to the latest point release of Ubuntu 18.04 with numerous bugfixes and security patches.
Improved Source Code Examples
Training to Deployment Workflow - We have created an open-source example demonstrating the process to design, train, optimize, and deploy a neural network on the AIR-T. See source code here or read the blog post here.
Multithreading on the AIR-T - A new Tutorial has been published covering how to launch background tasks in order to use multiple radio streams simultaneously.
Using cuSignal to Create a Repeater on the AIR-T - We have provided a detailed example of how to create a more complicated application in this Tutorial. The application continuously receives signals from the AIR-T, performs signal processing and detection using the GPU, and re-transmits any signal that passes the detector's dynamic threshold.
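The dynamic-threshold idea behind the repeater's detector can be sketched in a few lines of plain Python. This is an assumption-laden illustration, not the cuSignal implementation from the tutorial: it flags a block whose power exceeds a multiple `k` of the running noise-power estimate.

```python
def block_power(samples):
    # Average power of a block of real-valued samples.
    return sum(s * s for s in samples) / len(samples)

def detect(power_history, new_power, k=3.0):
    # Dynamic threshold: flag a block whose power exceeds k times
    # the running average of previously observed block powers.
    avg = sum(power_history) / len(power_history)
    return new_power > k * avg

noise = [0.1, -0.1, 0.05, -0.05]   # quiet background block
burst = [1.0, -1.0, 0.9, -0.9]     # strong signal block

history = [block_power(noise)] * 8  # running noise estimate
print(detect(history, block_power(noise)),
      detect(history, block_power(burst)))  # False True
```

The GPU version in the tutorial applies the same logic to complex sample blocks at full rate, re-transmitting only the blocks that pass the threshold.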
When deactivating a stream, only tear down hardware after all TX samples have been transmitted. In previous versions of AirStack, it was possible to turn off the radio while the last few samples had been buffered for transmit but not yet sent over the air.
Improved error reporting in the radio drivers in cases where both receive and transmit channels are in use and an error occurs on only one channel.
Provide more consistent behaviour when re-tuning the radio or changing sample rate while streams are active. Due to data buffering, it was previously possible to have a few samples in the receive buffer from the original frequency or sample rate. Tuning or changing sample rate will now empty data buffers and ensure that all samples are from the correct period of time after one of these operations is performed.
The AIR-T upgraded software and firmware are available for customers to download in the Developer Portal.
Please note that upgrading to AirStack 0.4.0 from previous versions of AirStack requires a re-flash of the operating system in addition to the usual firmware update. Please see the installation procedure to apply the software update to your AIR-T, followed by the firmware update procedure.
Deepwave Digital has just released a comprehensive workflow toolbox for creating, training, optimizing, and deploying a neural network on the AIR-T. This new deployment toolbox works natively on the AIR-T and AirStack without the need to install any new packages or applications. This means that the workflow for an AI-enabled radio frequency (RF) system has never been simpler. Now you can deploy an existing TensorFlow model on the AIR-T in less than one minute.
Read more below, or follow this link to the code base, which runs natively on any of the AIR-T Embedded Series models.
Training to Deployment Workflow
The figure above outlines the workflow for training, optimizing, and deploying a neural network on the AIR-T. All Python packages and dependencies are included in AirStack 0.3.0+, which is the API for the AIR-T.
Step 1: Train
To simplify the process, we provide an example TensorFlow neural network that performs a simple mathematical calculation rather than being trained on data. This toolbox provides all of the necessary code, examples, and benchmarking tools to guide the user through the training-to-deployment workflow. The process is exactly the same for any other trained neural network.
Step 2: Optimize
Optimize the neural network model using NVIDIA's TensorRT. The output of this step is a file containing the optimized network for deployment on the AIR-T.
Step 3: Deploy
The final step is to deploy the optimized neural network on the AIR-T for inference. This toolbox accomplishes this task by leveraging the GPU/CPU shared memory interface on the AIR-T to receive samples from the receiver and feed the neural network using Zero Copy, i.e., no device-to-host or host-to-device copies are performed. This maximizes the data rate while minimizing latency.
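The shape of that receive-and-infer loop can be sketched as below. This is only a structural illustration under stated assumptions: `receive` and `infer` are hypothetical stubs standing in for the radio driver's read into shared memory and the TensorRT engine, respectively, since the real zero-copy path requires CUDA and the AIR-T hardware.

```python
def receive(n, start=0):
    # Stub for the radio driver's read; on the AIR-T the samples land
    # in a buffer shared between the CPU and GPU (no copy needed).
    return [float(start + i) for i in range(n)]

def infer(buffer):
    # Stub for the optimized network; the real deployment runs the
    # TensorRT engine directly on the shared GPU buffer.
    return sum(buffer) / len(buffer)

results = []
for block in range(3):
    buffer = receive(4, start=block * 4)  # samples arrive in the buffer
    results.append(infer(buffer))         # network reads the same buffer
print(results)  # [1.5, 5.5, 9.5]
```

Because the network reads the very buffer the receiver filled, no device-to-host or host-to-device transfer sits between the two steps.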
For more information, check out the open source toolbox here that runs natively on any of the AIR-T Embedded Series models.