(Last Mod: 27 November 2010 21:37:21 )
NOTE: WORK VERY MUCH IN PROGRESS
Several software applications have been developed to support research in the new field of Concurrent Codes. As is the nature of most R&D software, none of it is what could even remotely be called "commercial quality". To be sure, much of it was written in the span of just a few hours and was intended to explore a particular issue. Most of the software in that category is not archived here. What is here are some of the more developed applications that are being made available to others who might be interested in exploring concurrent codes.
The first engine to be implemented was the Research Engine, and its purpose was to serve as a test bed for measuring the performance of the encoding and, more importantly, the decoding algorithms. It was also intended for use as a demonstration tool for briefings, but its utility in that role is somewhat limited since it is completely ANSI-C compliant and therefore doesn't have any fancy graphics or user interface. This is why Dr. Dino Schweitzer implemented the nicely visual and interactive, but much more limited, BBCVis Java applet. While BBCVis is a very effective tool for communicating the basic abstract concepts of how concurrent coding can be used to obtain keyless jam resistance, it is only capable of working with toy-sized images and packets, though this is more than adequate for its purposes. In contrast, the Research Engine is capable of working with realistically sized packets and parameters (and then some).
The Research Engine is fashioned around a pretty simple command interpreter. It can also access and run simple scripts, which has proven very convenient and useful. The interpreter is very basic, however – for instance, scripts cannot be nested and there is no concept of parameters or embedded control features. One of the first planned enhancements is to modify the interpreter so that it can run nested scripts, which will allow configurations to be defined in one file and then called from many other files. Another is the ability to instantiate multiple codec configurations within the Engine and then easily switch back and forth between them. Up to this point that last feature has simply been a wish-list item with little actual utility, but it now appears to be something that would be very useful in the actual flight software for the software-defined radios, as it will allow test profiles to be written quickly and then modified easily while keeping the profiles for the various radios coordinated and minimizing the risk of introducing errors.
The Research Engine uses a “packet” as its primary data entity. A different name, such as “bit stream”, really should be used, since a BBC packet is, pretty much by definition, the length of a BBC codeword; the entity in the Research Engine started off as exactly that but then grew into the broader concept of a stream of bits that potentially contains many packets. The encoder and decoder (codec) functions either encode messages from a message list into codewords that are placed into the packet, or they extract messages from codewords found in the packet and place them into the message list, all according to the present codec configuration. On the other side of the packet, the Research Engine is capable of working with several different data sources, including images (a couple of BMP file formats, not all of them), sound files (a couple of WAV file formats, again not all), GnuRadio data stream files, and ASCII text files. For each type of source, there are functions to translate between the source data and a packet. For those sources intended to model analog sources and represent the data as time-sampled and quantized analog data, the Engine performs this translation using user-defined digital filters and a hysteretic comparator. The key point is that BBC operations and data source operations are completely segregated and can only interact via the packet, which serves as the bridge between them.
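The packet-as-bridge idea described above can be sketched as follows. This is a minimal illustration only; the class and function names are hypothetical and are not taken from the Research Engine source.

```python
# Minimal sketch of the packet-as-bridge concept (hypothetical names,
# not those used in the Research Engine).

class Packet:
    """A plain stream of bits: the only entity the codec side and the
    data-source side are allowed to share."""
    def __init__(self, length):
        self.bits = [0] * length

    def set_bit(self, i, value):
        self.bits[i] = 1 if value else 0

    def get_bit(self, i):
        return self.bits[i]

# Codec side: marks codeword bit positions in the packet; it knows
# nothing about images, sound files, or any other data source.
def encode_message(packet, positions):
    for i in positions:
        packet.set_bit(i, 1)

# Source side: renders the packet as ASCII text; it knows nothing
# about codewords or the codec configuration.
def packet_to_text(packet):
    return "".join(str(b) for b in packet.bits)

p = Packet(8)
encode_message(p, [1, 3, 5])
print(packet_to_text(p))  # -> 01010100
```

Note that neither side calls the other; each interacts only with the Packet, mirroring the segregation the Research Engine enforces.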
One key shortcoming of the Research Engine is that it was never intended to be efficient or applicable to real-time applications – these were traded away in exchange for making it very flexible and extensible. As such, extensive use was made of data structures, and most communication is done by passing pointers to those structures. Furthermore, the data and the primitive operations on that data are heavily wrapped and abstracted, and a large degree of integrity checking is performed, even at the lowest levels.
The Real Time Engine is very different from the Research Engine in that it was specifically designed to implement a BBC codec and GnuRadio-compatible baseband modem that are both speed and memory efficient. Virtually no error checking is performed except at the highest levels and extensive use of global or quasi-global data structures is employed. Furthermore, there is limited abstraction beyond the first couple of layers. The goal was to get a feel for how close to real time a BBC decoder could get (the answer being that it can do pretty well, even on a purely sequential machine) and also to develop implementations that are along the lines of what they will need to be in Gnu Radio.
As for the code thus far developed for Gnu Radio, it is extremely limited. Gnu Radio involves several different programming languages, primarily Verilog for the FPGA, C++ for the real-time processing blocks, and Python for the high-level application code. In essence, the Python code is used to configure and control a "flow graph", which simply defines which processing blocks receive data from which others, while the C++ processing blocks do the actual work under a real-time scheduler that is, in many respects, the heart and soul of Gnu Radio. Overall, Gnu Radio is a very large and daunting environment to work with, and the documentation leaves a lot to be desired. Having said that, there is a very active, though fairly small, community of users that is progressively improving this situation.
For the testing that has been done so far with the radio hardware, which is the Ettus Research Universal Software Radio Peripheral (USRP), it has been sufficient to perform tasks in a very piecemeal and manual fashion. First, the user runs the Real Time Engine (or some test-specific variant of it) to generate a Gnu Radio sample stream file. Then a very simple Python program, which is only a slightly modified version of one of the basic demo programs that comes with Gnu Radio, is run to transmit the waveform. On the other radio, another Python program, an unmodified Gnu Radio demo program, runs at the same time and simply captures the received waveform to a sample stream file. Finally, the Real Time Engine (or, again, some test-specific variant) is run to demodulate and decode the recorded waveform.
For the next great step in the software development, namely the flight software, a subset of the Research Engine needs to be ported to Python to control the codec configurations and parameters while most of the Real Time Engine needs to be converted to Gnu Radio C++ processing blocks. In time, the goal is to further port these blocks to an FPGA not only to take advantage of the inherent parallelism they offer and to which the BBC algorithms lend themselves, but also to circumvent the USB data transfer bottleneck.
Drawing upon some of the structural concepts used by Gnu Radio, a new real-time engine, called the SDR Engine, was developed that should permit users to more readily develop and incorporate their own processing modules. In fact, it is a completely generic framework suitable for building up processing systems of many kinds.
The basic idea behind the SDR Engine is quite simple: a system is composed of a collection of Processing Modules, or Blocks, that are connected together with FIFOs. Because every connection is a FIFO, the developer of a module is relieved of having to construct an interface specific to the module that will use the data next; the module simply pushes its data into an output FIFO. Similarly, there is no need to deal with how to interact with the module that supplies its data; it simply pops data from an input FIFO. Certainly each module still has to ensure that the data is of the correct format, but it doesn't have to worry about the mechanics of getting and delivering it.
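The decoupling that the FIFOs provide can be shown with a small sketch. The names here (Fifo, doubler) are made up for illustration and do not come from the SDR Engine itself; the point is only that a module touches its FIFOs, never its neighboring modules.

```python
from collections import deque

# Hypothetical sketch of FIFO-connected modules: each module only pops
# from an input FIFO and pushes to an output FIFO, so it needs no
# knowledge of which module sits on the other end.

class Fifo:
    def __init__(self):
        self._q = deque()

    def push(self, item):
        self._q.append(item)

    def pop(self):
        # Return None when no data is available rather than blocking.
        return self._q.popleft() if self._q else None

    def empty(self):
        return len(self._q) == 0

def doubler(fin, fout):
    """An example module: reads one value, writes its double."""
    x = fin.pop()
    if x is not None:
        fout.push(2 * x)

a, b = Fifo(), Fifo()
a.push(21)
doubler(a, b)
print(b.pop())  # -> 42
```

Swapping in a different downstream module requires no change to `doubler`; only the FIFO wiring changes.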
Once the modules are instantiated and connected together, the system can be launched and allowed to start processing data. The main SDR component consists of an array of these modules; when the system is launched, a scheduler runs that repeatedly cycles through the modules, invoking each one in turn. In essence, the result is a cooperative multitasking system in which each module operates with the understanding that it is to perform only a small task before returning control to the scheduler. For instance, assume that a particular module computes the energy in a signal by simply squaring it. On each invocation it would remove one data value from its input FIFO, square that value, and push the result into its output FIFO. If there doesn't happen to be any data available in its input FIFO, then the module is expected to take an appropriate action and return control to the scheduler promptly. An appropriate action may be to do nothing, or perhaps to push a zero into the output FIFO (the latter might make sense if a true real-time system were being carefully simulated); what would not be appropriate is to wait until a piece of data becomes available.
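The scheduler loop and the squaring-module example above can be sketched as follows. This is a self-contained illustration under assumed names (Fifo, SquareModule, run_scheduler); it is not the SDR Engine's actual code.

```python
from collections import deque

# Sketch of the cooperative round-robin scheduler described above.
# All names are illustrative, not taken from the SDR Engine.

class Fifo:
    def __init__(self, items=()):
        self._q = deque(items)

    def push(self, x):
        self._q.append(x)

    def pop(self):
        return self._q.popleft()

    def empty(self):
        return not self._q

class SquareModule:
    """Computes signal energy by squaring one sample per invocation."""
    def __init__(self, fin, fout):
        self.fin, self.fout = fin, fout

    def work(self):
        if self.fin.empty():
            return  # do nothing and yield; never block waiting for data
        self.fout.push(self.fin.pop() ** 2)

def run_scheduler(modules, cycles):
    """Cycle through the modules, letting each do one small unit of work."""
    for _ in range(cycles):
        for m in modules:
            m.work()

src = Fifo([1, 2, 3])
dst = Fifo()
run_scheduler([SquareModule(src, dst)], cycles=3)
print(list(dst._q))  # -> [1, 4, 9]
```

Because `work()` returns immediately when its input FIFO is empty, the scheduler is never starved by a single module, which is the essential cooperative-multitasking discipline the text describes.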