DragonWins Home Page
(Last Mod: 27 November 2010 21:37:20 )
NOTE: WORK VERY MUCH IN PROGRESS
Until the advent of Concurrent Coding Theory, jam-resistant wireless communication systems relied on either high directionality or spread spectrum techniques. In cases where high directionality is not viable or otherwise desirable, spread spectrum has been the only option. Unfortunately, all traditional spread spectrum techniques rely on a shared secret key, namely the spreading code; significant jam resistance is obtained only if every legitimate party possesses this shared secret and all hostile parties remain ignorant of it. If the spreading code is compromised, then virtually all jam resistance is lost; in fact, the system may even be less jam resistant than a comparable narrowband system because of the added complexity and sensitivity to tracking and synchronization that traditional spread spectrum entails.
Concurrent coding theory fundamentally changes this situation since, for the first time, omni-directional communication systems can be implemented that have a level of jam-resistance comparable to traditional spread spectrum systems but without requiring pre-shared secret spreading codes. In other words, the use of concurrent codes makes keyless jam-resistant systems possible.
Systems based on concurrent codes are not without their drawbacks. First and foremost, concurrent coding imposes significant processing requirements on the receiver; applying concurrent coding to compute-limited platforms may be very difficult. It is also very important to recognize that any keyless system automatically forfeits its low probability of intercept (LPI) properties. This is a natural consequence of the fact that if you must assume that hostile parties have the same information regarding the spreading algorithm as the legitimate parties, then they are just as capable of detecting and receiving the signal as legitimate receivers are. Similarly, hostile parties are as capable of generating legitimate signals as legitimate transmitters are. For these reasons, the use of encryption and authentication protocols becomes necessary, despite the additional processing burden.
The purpose of this page is to quantitatively compare the jam-resistance of concurrently-coded spread spectrum (CC/SS) systems to that of traditional spread spectrum systems, particularly frequency-hopped spread spectrum (FH/SS) and direct-sequence spread spectrum (DS/SS). Just as traditional spread spectrum techniques can be overlaid on nearly any underlying modulation scheme, concurrently-coded spread spectrum systems can be realized that employ the fundamentals of all existing spread spectrum techniques.
In order to start with the simple and progress to the complex, the analysis will start with the simplest form of CC/SS, namely using time-hopping (a.k.a., pulse position) spread spectrum over amplitude shift keying, which we will designate as CC/ASK. Before performing the analysis, however, we will first review the concept of "processing gain" in spread spectrum systems, specifically as it relates to jam-resistance, and discuss the textbook receiver performance characteristics of traditional FH and DS systems.
Many different definitions for the "processing gain" of a system exist. In its most general form, a "processing gain" is simply a measure of the improvement in the performance of a system due to the use of some enhancement. In spread spectrum communication systems aimed at achieving jam-resistance, the most appropriate definition for the processing gain of employing spectrum spreading is the ratio of the power that a jammer must use in order to achieve a particular bit error rate in the receiver to the power that would achieve that same rate in the absence of spectrum spreading. For instance, let's assume that the jammer can cause the receiver in a non-spread spectrum system to suffer a bit error rate (BER) of 10^-4 using 1W of power under a particular set of conditions. The system then "turns on" the spectrum spreading (leaving everything else the same) and now the jammer finds that they have to transmit 100W of power to cause the same 10^-4 BER. The use of spectrum spreading has therefore resulted in a 20dB processing gain.
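The arithmetic behind this definition can be captured in a small helper (the function name is my own choice, not terminology from any standard):

```python
import math

def processing_gain_db(p_jam_spread_w, p_jam_baseline_w):
    """Processing gain: the ratio of the jammer power needed against the
    spread system to the jammer power needed against the unspread baseline,
    for the same bit error rate, expressed in dB."""
    return 10.0 * math.log10(p_jam_spread_w / p_jam_baseline_w)

# The example from the text: 1 W causes a 10^-4 BER with spreading off,
# but 100 W is needed for the same BER with spreading turned on.
gain = processing_gain_db(100.0, 1.0)
print(f"processing gain = {gain:.0f} dB")
```

Note that the gain depends only on the power ratio, not on the absolute power levels or on which BER the comparison is made at.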
Certain assumptions about the capabilities of both the legitimate parties and of the hostile jammers must be made in order to make any analysis doable. It is commonly assumed that the receiver is perfectly synchronized with the data stream, at least where symbol synchronization and frame synchronization are concerned. It is also commonly the case that issues such as inter-symbol interference are ignored. Perhaps the most significant assumption is that the receiver is able to adjust the receiver detection parameters so as to always obtain the minimum possible bit error rate for the present operating environment. In practice, none of these assumptions are completely valid; but unless models of realistic parameter setting algorithms are available, these are reasonable assumptions that permit performance bounds to be established. Similarly, it is generally assumed that the jammer has the ability to choose the optimal jamming parameters subject to the constraints on the type of jamming being done. Again, in practice this is not very realistic because it would require the jammer to be able to accurately estimate parameters that they realistically are not going to be able to know very well. But for similar reasons, in the absence of some model for how the jammer is actually going to choose those parameters, this assumption provides useful performance bounds.
The first step in the analysis is to establish the baseline performance for the system when no spectrum spreading is being used. For traditional spread spectrum this is relatively straightforward since spectrum spreading is something that is added to a system in order to, hopefully, enhance its performance. For instance, if a signal would normally be transmitted using BPSK, then the performance of that system without spectrum spreading simply means the expected bit error rate for information transmitted using BPSK as a function of the jammer's output power at the receiver. If spectrum spreading using frequency hopping is now added, the underlying transmission scheme is still BPSK and therefore it is very clear how to analyze the performance so that the relative performance increase is an apples-to-apples comparison.
Another way of looking at this is that, for traditional spread spectrum systems, as you reduce the amount of spectrum spreading being used you are eventually left with a valid baseline communications system that involves no spectrum spreading, and it is this baseline system that serves as the performance reference for determining processing gain. For example, in frequency hopping spread spectrum (FH/SS), the spreading code determines which set of frequencies will be used to transmit data at that instant in time, but the underlying baseline system still determines how that data will be transmitted over the current set of frequencies. For instance, if BPSK is being used, then the frequency set at any time consists of a single frequency and each bit of data determines the phase of the transmitted signal during a given symbol period. On the other hand, if 16FSK is being used, then the frequency set consists of sixteen frequencies and each nibble (4 bits) of data determines which of those sixteen frequencies will be active during that symbol period. In either case, the situation that exists if spectrum spreading is removed is obvious - the frequency set simply becomes static and unchanging. A similar situation exists if direct sequence spread spectrum is used.
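The independence of data and spreading code in the 16FSK example can be sketched as follows; the tone spacing, channel plan, and function name are all illustrative assumptions, not details from any particular system:

```python
# Hypothetical sketch of FH/SS over 16FSK: the (secret) hop code selects
# which 16-tone set is active, while the data nibble selects one tone
# within that set. The two choices are completely independent.

TONE_SPACING_HZ = 1_000.0   # assumed 1 kHz tone spacing
TONES_PER_SET = 16          # 16FSK: one nibble (4 bits) selects one tone

def tone_frequency(hop_index: int, nibble: int, base_hz: float = 900e6) -> float:
    """Return the transmit frequency for one symbol period.

    hop_index -- current value of the spreading code (picks the tone set)
    nibble    -- 4 data bits, 0..15 (picks the tone within the set)
    """
    assert 0 <= nibble < TONES_PER_SET
    set_start_hz = base_hz + hop_index * TONES_PER_SET * TONE_SPACING_HZ
    return set_start_hz + nibble * TONE_SPACING_HZ

# "Removing" the spreading simply freezes hop_index at a constant:
# the tone set becomes static while the data still selects tones within it.
print(tone_frequency(hop_index=0, nibble=5))
```

This is exactly the separation that CC/SS lacks: here the data could be changed without touching the hop code and vice versa, whereas in CC/SS the data itself plays the role of hop_index.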
Unfortunately, the same is not true for concurrently-coded spread spectrum (CC/SS). The reason is that, with traditional spread spectrum, the spreading code and the data are independent. The data is used to control the underlying modulation scheme and the spreading code is used to modulate the resulting signal - or, more-or-less equivalently, the spreading code is used to modulate the data and the resulting data is used to control the underlying modulation scheme. In either case, the spreading code is used to define a "channel" and the data is then transmitted over that channel. With concurrently-coded spread spectrum, the data IS the spreading code; the data is therefore both the information being transmitted over the channel and the thing that defines that channel. As a result, there is no way to remove the spectrum spreading and leave behind an underlying narrowband transmission scheme. Another way to view this is that, in traditional spread spectrum, the manner in which information is injected into and extracted from the signal is independent of the manner in which the signal is spread and despread. In CC/SS the two are fundamentally linked such that there is no way to remove one while retaining the other.
Nonetheless, we must still devise some means of comparing the performance of CC/SS to traditional spread spectrum systems. But even what this means is not clear cut, since using processing gain to compare two traditional spread spectrum systems is not straightforward or, arguably, even appropriate or possible - at least not directly. For instance, knowing that system A, using a particular type of modulation and spread spectrum, achieves greater processing gain at a particular spreading ratio than system B does not mean that system A is more jam-resistant than system B; system B may still be more jam-resistant overall if the jam-resistance of its underlying modulation scheme is sufficiently better than that of system A's. The difficulty is due to the fact that the processing gain of each system is measured relative to that system's own underlying modulation scheme and not to some common reference.
However, this very lack of a common reference provides a means of comparing CC/SS to traditional forms despite the absence of a comparable definition for processing gain. Arguably the best way to compare two traditional spread spectrum systems is to plot the average bit error rate of each versus the Ebit/N ratio of the two systems where Ebit is the average energy per data bit in the data signal (at the receiver) and N is the total noise power spectral density within the data signal bandwidth (again, at the receiver).
For a given receiver, meaning that the basic modulation and detection schemes are unchanging, the expected bit error rate (BER) is a function of the signal-to-noise ratio at the receiver input. The noise is usually taken to be Additive White Gaussian Noise (AWGN) due predominantly to the thermal noise in the receiver and perhaps the noise temperature of the antenna.
The ideal analysis of the performance of an Amplitude Shift Keyed (ASK) system in such noise is shown below.
This curve is based on the equation in Peterson, Ziemer, and Borth and is a good approximation for large signal-to-noise ratios.
One key observation to make at this point is that we prefer our operating characteristic to be as close to the lower-left portion of the plot area as possible. This simply means that we can achieve a given BER at a lower SNR or, equivalently, we can achieve a lower BER at a given SNR. The goal of a jammer, not surprisingly, is therefore to move us up and/or to the right as much as they can.
So how do we present the performance of spread spectrum systems on the above plot? It turns out that we do so by plotting the BER versus the SNR at the receiver, just like we do for narrowband systems. After all, a spread spectrum radio is still a radio. But does this mean that we have to plot a family of performance characteristics, each for a different spreading depth? Not in most cases. The reason is that the SNR value, expressed as energy per data bit divided by average spectral noise density within the signal bandwidth, automatically takes care of this. Ebit is the average energy per data bit in the data signal (at the receiver) and N is the total noise power spectral density within the data signal bandwidth (again, at the receiver). N generally consists of the background wideband thermal noise, N0, and the noise added by the jammer, NJ. Usually, in the presence of hostile jamming, the noise from the jammer dominates the thermal noise in the receiver front end and other environmental noise sources, and thus Ebit/NJ is the quantity used. The value for Ebit can be obtained by noting that it is simply the time-average power in the data signal divided by the bit rate, or Pdata/Rdata (or simply P/R when no disambiguation is needed). Similarly, the noise power spectral density is the average total noise power divided by the bandwidth, or Pnoise/W. Where the total noise power is dominated by the jammer, this is generally written simply as J/W. Combining these together, we have the relationship
Ebit/N = (P/R)/(J/W) = (P/J)(W/R) = PW/JR
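This relationship is easy to sanity-check numerically; the power, rate, and bandwidth values below are arbitrary illustrations:

```python
def eb_over_n(p_signal_w, rate_bps, p_jam_w, bandwidth_hz):
    """Ebit/N = (P/R) / (J/W) = P*W / (J*R), per the relation above."""
    e_bit = p_signal_w / rate_bps          # energy per data bit, joules
    n_density = p_jam_w / bandwidth_hz     # noise power spectral density, W/Hz
    return e_bit / n_density

# Illustrative numbers: 1 W signal at 1 kbps against a 10 W jammer
# spread over 100 kHz gives Ebit/N = 10.
x1 = eb_over_n(1.0, 1000.0, 10.0, 100e3)

# Halve the data rate: the jammer must double its power (to 20 W)
# just to hold Ebit/N at the same value.
x2 = eb_over_n(1.0, 500.0, 20.0, 100e3)

print(x1, x2)
```

The second call illustrates the claim examined in the next paragraph: with everything else fixed, halving R doubles Ebit/N, so the jammer must double J to restore the original ratio.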
At first glance, this relationship would seem to claim that if the sender were to cut their data rate in half that the jammer would need to double their output power in order to achieve the same average bit error rate, everything else being kept equal. To the degree that the bit error rate is dictated solely by the Ebit/N ratio, this will be a true statement and, in practice, this is often a good approximation over a wide range of operating conditions. The complicating factor is that not all noise is created equal; therefore it should not be surprising that the performance curve obtained when assuming AWGN is going to be different than when considering other types of noise. In particular, a jammer is going to be motivated to construct the most damaging type of noise they can; thus, in the presence of hostile jamming, the entire curve will tend to shift up and to the right.
To see how this comes about, let's consider a simple frequency hopped system. If the system "turns on" spectrum spreading with a depth of 100, this means that it is now transmitting its signal using 100 times the bandwidth that it was previously. For the sake of discussion, let's assume that the underlying data signal has a bandwidth of 1kHz and that the transmitter is now hopping among 100 adjacent 1kHz wide channels. At any given time, however, the transmitter is using a single 1kHz wide channel, and to achieve the same BER as before the jammer need only cause the same SNR to exist within that channel. If the jammer knew the hop sequence, then they could track the transmitter's hopping and spend little, if any, more power than they did previously. However, since we are assuming that the spreading code (the hop sequence, in this case) is a shared secret to which the jammer is not privy, they would need to place the same amount of noise power in all 100 channels. Under this type of jamming, known as barrage jamming, the processing gain is equal to the spreading depth, namely 100 or 20dB.
However, the jammer is fully aware of the fact that they are wasting 99% of their power jamming channels that aren't being used. A "smart jammer" is consequently motivated to see if using a different type of noise is more effective. For instance, what if they only jammed 20% of the channels at any given time but put four times the noise power density into those channels? On average, they have only increased their power output by a factor of 80, not 100. They understand that 80% of the time they will not be jamming the channel in use at all and that the overwhelming majority of the data will get through during those periods. But if the additional power in the channels that are being jammed means that only half of that data will get through (which is the best they can hope for since half of it will get through even if the receiver is reduced to random guessing), then they will have raised the BER to 10%, which is a very poor BER indeed. They would need to spend even less power if their goal was simply to achieve the same BER that existed previously, and hence the processing gain of the system will be considerably less than the spreading depth. In fact, it is entirely possible for the effective processing gain to be less than unity, in which case the jammer has the ability to force the same BER while expending less power than they would have using barrage jamming against the underlying narrowband system. Clearly, however, the use of spread spectrum under these conditions would not (or, at least, should not) be considered.
Another way to understand what the smart jammer is doing and why it works is to consider that the jammer is choosing to abandon the goal of degrading the entire signal by a uniform amount in favor of letting part, even most, of the signal get through with very little degradation while causing very high degradation to a smaller fraction of the signal. This is known as "partial band jamming" and is a very common -- and very effective -- technique. For a given set of operating conditions, the jammer will find that there is an optimal fraction of the total signal bandwidth that produces the highest BER for the same total output power. However, it turns out that this optimal fraction is a function of the spreading depth and the legitimate signal's SNR at the receiver, so actually operating at the optimal point is not a trivial undertaking for the jammer and thus, in practice, the realized processing gain is likely to be significantly better than the theoretical bounds. Conversely, however, it must be kept in mind that the legitimate receiver is not going to be able to pick their thresholds perfectly or remain perfectly synchronized to the data symbols.
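The existence of an optimal jammed fraction has a well-known closed form in the classic case of noncoherent binary FSK under partial-band noise jamming (the standard model treated in texts such as Peterson, Ziemer, and Borth; note this is a different modulation than the ASK system analyzed later, so the numbers are illustrative of the phenomenon, not of CC/ASK):

```python
import math

def ber_partial_band(rho, eb_over_nj):
    """BER of noncoherent BFSK under partial-band noise jamming.
    rho: fraction of the spread bandwidth being jammed (0 < rho <= 1).
    Model: jammed symbols see noise density NJ/rho; unjammed symbols
    are taken as error-free (thermal noise neglected)."""
    return (rho / 2.0) * math.exp(-rho * eb_over_nj / 2.0)

def worst_case(eb_over_nj):
    """The jammer's optimal fraction and the resulting worst-case BER.
    For Eb/NJ >= 2 the optimum is rho = 2/(Eb/NJ), giving BER = e^-1/(Eb/NJ)."""
    rho_opt = min(1.0, 2.0 / eb_over_nj)
    return rho_opt, ber_partial_band(rho_opt, eb_over_nj)

eb_nj = 20.0  # Eb/NJ of 20 (13 dB), an arbitrary illustration
rho_opt, ber_opt = worst_case(eb_nj)
ber_barrage = ber_partial_band(1.0, eb_nj)  # rho = 1 is barrage jamming
print(rho_opt, ber_opt, ber_barrage)
```

At Eb/NJ = 20 the optimal fraction is 0.1, and the worst-case BER is roughly three orders of magnitude higher than what the same jammer power achieves with barrage jamming, which is exactly why the smart jammer concentrates its power.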
In ASK CC/SS, each mark is transmitted using some form of pulsed signal. It could be as simple as OOK of a single carrier frequency, broadcasting very short high power pulses of energy, or broadcasting broadband random noise. The receiver looks for the presence of energy of the type that the transmitter broadcasts and, if it is above some threshold, declares that a mark was received; otherwise it decides in favor of a space. Other alternatives exist and the analysis of each variant will likely produce somewhat different results. This, by itself, is not surprising since the same is true for minor variations in the underlying modulation and detection schemes used in other forms of spread spectrum.
To keep things as simple as possible, the system used here will broadcast a single tone using pure On/Off Keying. In theory, the jammer could cause mark errors by transmitting sinusoidal pulses that are out of phase with the legitimate pulses when both arrive at the receiver. In practice, doing this deliberately is virtually impossible, and the transmitter can be configured to randomly modulate the phase and amplitude of each pulse so that the probability of sufficient annihilation occurring to produce a mark error is extremely low. However, to avoid unrealistically optimistic assumptions, we will assume merely that the sender's and attacker's pulses are constant-amplitude pulses with random relative phase; the two amplitudes need not be equal, and the jammer is permitted to choose its own.
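Under the random-relative-phase assumption, the chance that a jammer pulse annihilates a legitimate mark can be estimated directly by Monte Carlo; the amplitudes and detection threshold below are purely hypothetical choices for illustration:

```python
import math
import random

def annihilation_probability(a_sig, a_jam, threshold, trials=100_000, seed=1):
    """Monte Carlo estimate of the probability that a jammer pulse with
    uniformly random relative phase drives a mark's received envelope
    below the detection threshold, causing a mark -> space error.
    The phasor sum of two constant-amplitude tones with relative phase
    theta has envelope sqrt(a_s^2 + a_j^2 + 2*a_s*a_j*cos(theta))."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        envelope = math.sqrt(a_sig**2 + a_jam**2
                             + 2.0 * a_sig * a_jam * math.cos(theta))
        if envelope < threshold:
            errors += 1
    return errors / trials

# Equal-amplitude pulses and a threshold at half the signal amplitude:
# annihilation requires the relative phase to fall near pi, so the
# error probability is a modest fraction, not anywhere near 1/2.
p = annihilation_probability(a_sig=1.0, a_jam=1.0, threshold=0.5)
print(p)
```

With equal amplitudes the envelope is 2|cos(theta/2)|, so only a narrow band of phases near pi falls below the threshold; this is the sense in which deliberate annihilation is hard for the jammer to arrange.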
The first step in the analysis is to determine the performance of the system in the presence of unintentional AWGN.