A new algorithm for interpreting radar data

By Sam Grayson

Creative Commons Attribution 4.0 International License

Tracking a target for whatever reason is a common problem. Currently, GPS can be used for this, but that envolves tagging the target, which is not very easy.

  • The tags are bulky, because they have to carry batteries.
  • The batteries on a tag have to be changed often.
  • You have to make sure to retrieve your tags when you are done, because the tags are expensive, and you don't want your target to run off with it.
  • GPS isn't as accurate.

On the other hand, we can set up radars on the perimeter of the motion field to track objects. At least 3 radars are needed for the triangulation of multiple targets. Each radar can measure the distance between themselves and their targets. This is better for some applications where you are trying to track smaller movements, for example the movement of people in room. It doesn't have the problems mentioned above. In the end, it comes down to two things: First, no tagging is required so it targets can be tracked passively, second, using a GPS can be overkill, because you are attempting to measure distance from a satelite outerspace to your object which can be thousands of miles away, whereas a radar set up near your target is measuring a distance maybe only 10 feet away.

I find the classical method of radar tracking to be inadequate. So I will demonstrate the classical approach, then a new approach, and test them to see which one works better. For now I will implement the algorithms and test to see if they work correctly on computer generated data. Once this is done, I can feed real data in.

First, I must import some backend functions.


In [1]:
%run run_notebook.ipy
%pylab inline
execute_notebook('util.ipynb')
grid = ((0, 0), (10, 10))  # bounding grid starts at 0,0 and goes to 10, 10


Populating the interactive namespace from numpy and matplotlib

Simulating Radar data

Now I will simulate the radar observing said target. The radar sends out a beam and measures the amount of time it takes for this to comeb back. Each radar has an associated amplitude as a function of time, $ A(t) $, which returns the amplitude of the light returning for a certain distance. This is shaped like a bell curve,

$$ A(t) = e^{- \tfrac {(t - t_1)^2} {\sigma^2}} $$

where $ t_1 $ is the amount of time light takes to get the target, and $ \sigma $ is how accurate the clock is, which in our case happens to be 1 nanosecond. We really want this as a function of distance. The signal travels at the speed of light, and the time it takes total depends on the time it takes to go to there and the time to come back from there, so every time $ t $ can be converted from a distance $ d $, with the speed of light $ c $ like $ t = \frac {2 d} {c} $. We can also invert this $ d = c t \frac {1} {2} $. Thus the amplutide as a function of distance is

$$ A(d) = e^{- \tfrac {(d - d_1)^2} {( \frac {2 \sigma} {c} )^2}} $$

We must also account for power loss, which is inversely proportional to the square of the distance. This depends on a proportionality constant $ k $.

$$ A(d) = e^{- \tfrac {(d - d_1)^2} {( \frac {2 \sigma} {c} )^2}} \frac {k} {d^2} $$

For any given radar and target position, the function get_A will return the amplitude function. Below, the amplitude function is plotted on a bar graph with distance on the x axis and the associated amplitude on the y for a simulation where the radar is at 0, 0 and the target is at 4, 4. This should look like a bell curve, centered around $ 4 \sqrt{2} $ because that is the distance between the radar and the target.


In [2]:
max_distance = distance.euclidean(grid[0], grid[1])  # largets distance in grid
sigma = 0.000000001
sample_size = 100000
c = 299792458 # speed of light in m / s
interval = 0.025  # Space data by 0.01 meters
k = 200
distances = arange(interval, max_distance, interval) # start at 0, go by intervals to max_distance

In [3]:
def get_A(radar_position, target_position):  # returns the amplitude function
    d1 = distance.euclidean(radar_position, target_position)
    def A(d):
        return exp(-(d - d1)**2 / (sigma * c / 2)**2) * k
    return A

def radar_scan(radar, target):
    A = get_A(radar, target)  # gets the amplitude function
    return A(distances)  # applies the Amplitude function to every distance

radar1 = (0, 0)
target = (4, 4)

test_data = radar_scan(radar1, target)
bar_chart1(test_data)


You can see the peak of the bell curve right where we expect it to be. This is what the data coming out of a radar should look like if the radar is at 0, 0 and the target is at 4, 4

Next, I should try to triangulate position for a single target, using simple triangles and circles. First, I need a peak extractor. All I need to do is take a weighted average of all of the data, and that should represent the peak that was used to create the bell curve. Since the data was generated while looking at a point, that was 5.6 meters away, if this works it should detect a peak in the returning light rays at 5.6.


In [4]:
guess = weighted_average(test_data)
actual = distance.euclidean(radar1, target)
print 'Estimated: {guess}, Actual: {actual}'.format(**locals())


Estimated: 5.65685424949, Actual: 5.65685424949

Calculating the intersection of two circles

This means the radar sees the target about 5.6 meters away. Looking only at this data, the target could be at any point that is 5.6 meters away and within the grid. The possible points where the target could be create a circle. To deduce which point the radar is at, I need data from multiple radars, and I need to calculate the intersection of all radars. At least that is what the classical algorithm says.

So to finish the triangulator, I need to calculate the intersection of all of the circles. I used Paul Bourke's article on how to find the intersection of two circles to help me. I will test this on two arbitrary circles. Below, you see the two circles (one blue and one green) with the intersection points (again, one blue and one green) calculated using the circles_intersection function in green.


In [5]:
def circles_intersection(circle1, radius1, circle2, radius2, grid):
    # See Paul Bourke's article:
    # http://paulbourke.net/geometry/circlesphere/ -> Intersection of Two Circles
    dx = (getX(circle2) - getX(circle1))
    dy = (getY(circle2) - getY(circle1))
    d = hypot(dx, dy)
    if d > radius1 + radius2 or d < fabs(radius1 - radius2):
        return None
    a = (radius1**2 - radius2**2 + d**2) / (2.0 * d)
    x2 = getX(circle1) + (dx * a/d)
    y2 = getY(circle1) + (dy * a/d)
    h = sqrt((radius1**2) - (a**2))
    rx = -dy * (h/d)
    ry = dx * (h/d)
    point1 = (x2 + rx, y2 + ry)
    point2 = (x2 - rx, y2 - ry)
    solutions = []
    if in_rectangle(point1, grid):  # checks to see which solution is in bounds
        solutions.append(point1)
    if in_rectangle(point2, grid):
        solutions.append(point2)
    return solutions

point1, point2 = circles_intersection((1, 1), 5, (7, 5), 4, grid)
print 'Intersection points: {point1} and {point2}'.format(**locals())
visualize_circles(circles=[((1, 1), 5),
                          ((7, 5), 4)],
                  ccolors=['b', 'g'],
                  points=[point1,
                          point2],
                  pcolors=['y', 'k'])


Intersection points: (3.040085805920814, 5.5648712911187772) and (5.9983757325407225, 1.1274364011889149)

Putting it all together (classical algorithm)

Now, I can generate radar data, find the peak values, and triangulate them against data from an adjacent radar. Where the two radar's circles interlap is where the object should be.

So now, I will collect radar data from two different points, and try to deduce the position of the target. The first barchart is a historgram of the data from the first radar. The second barchart is a histogram from the second radar. The third graph is the intersection of the peaks of the first and second graphs.

  • The cyan circle represents all of the points where the first radar thinks the target is.
  • The cyan circle is centered around a cyan dot at 0, 0, which represents where the radar itself is.
  • The green circle is all of the points where the second radar thinks it is.
  • The green circle is centered around the green dot at 10, 0.
  • The yellow dot is where their combined guess is. It is almost completely covered up by the black dot.
  • The black dot is where it actually is.

In [6]:
# radar1, and target are defined above
radar2 = (10, 0)

data1 = radar_scan(radar1, target)
data2 = radar_scan(radar2, target)
bar_chart1(data1)
bar_chart1(data2)
distance1 = weighted_average(data1)
distance2 = weighted_average(data2)

guess = circles_intersection(radar1, distance1, radar2, distance2, grid)[0]
print 'Predicted value: {guess}, actual value: {target}'.format(**locals())
visualize_circles(circles=[(radar1, distance1),
                           (radar2, distance2)],
                  ccolors=['b', 'g'],
                  pcolors=['b', 'g', 'y', 'k'],
                  points=[radar1,
                          radar2,
                          guess,
                          target])


Predicted value: (4.0000000000000018, 4.0), actual value: (4, 4)

The alternative algorithm

The more hollistic algorithm which I intend to create is one that calculates triangulation for every point, rather than just the average of the peaks. It is like giving the target a wave probability function, just like an electron. By overlapping probability functions we can find out where the target is. This should make more sense when you see it happen.

The first graph below shows distances on the x axis and the amplitude values on the y axis. The one below that shows distances on the x axis, but it shows amplitude values as color.


In [7]:
heatmap1(test_data)


Creating a probability graph

Then imagine rotating this figure about the origin, so that where we used to see a peak in the bell curve that represented a high probability, we now see a ring of color that represents a high probability. Below, you can see this illustrated for radar 1 and radar 2 respectively. These plots are defined mathematically by a function that takes in a point and returns the amplitude value for the distance between the point and the radar origin $ f(x, y) = A \Big( \sqrt{(x - x_1)^2 + (y - y_1)^2} \Big) $, where $ f(x, y) $ is the color at $ x, y $, $ A(d) $ is the amplitude function for a radar at d as shown above, and $ x_1, y_1 $ is the location of the radar. Conceptually, this justs finds the amplitude function at every point on an x and y graph. If a point is red, that means there is a high liklihood that the object is there (remember, in these graphs a more reddish color represents a higher amplitude value). If a point is blue, the object is probably not there.


In [8]:
res = 0.075

def single_color_function(radar, data):
    # The following function will be sampled at every x and y on the grid.
    def color_function(x, y):
        return data[nearest(distance.euclidean(radar, (x, y)))]
    return color_function

grid1 = single_color_function(radar1, data1)
grid2 = single_color_function(radar2, data2)
heatmap2(grid1, 'radar1 heatmap')
heatmap2(grid2, 'radar2 heatmap')


Overlapping the images

Then we can overlap the above two images. Each point below is the point-wise product of the graphs above. The graph below is defined mathematically by a function $ g(x, y) = A_1 \Big( \sqrt{(x - x_1)^2 + (y - y_1)^2} \Big) * A_2 \Big( \sqrt{(x - x_2)^2 + (y - y_2)^2} \Big) $ where $ A_1 $ represents the amplitude function of the first radar which is located at $ x_1, y_1 $, $ A_2 $ that of the second radar which is located at $ x_2, y_2 $, and $ g(x, y) $ represents the color at the point $ x, y $. This is like finding the places in which the above two graphs overlap.


In [9]:
def two_color_function(grid1, grid2):
    color = lambda x, y: grid1(x, y) * grid2(x, y)
    return color
heatmap2(two_color_function(grid1, grid2), 'Combined radar and radar2')


Initial Results

Notice, I used multiplication to combine the graphs. Each point is the product of its value in the single radar graph. In order to get a point of high probability, you have to have high probability on both of the single radar graphs, not just one. This located a dot where the there is a high probability that the target is there.

Multiple targets

I think my algorithm will excel the most with multiple targets. I can add together multiple radar scans which would be like overlapping the probabilities. Below are the basic probability graphs on one dimension. You may notice that the second hill is much smaller than the first. This is a consequence of the power loss term, $ \tfrac {k} {d^2} $, which makes distant peaks much smaller.


In [10]:
target1 = (5, 4)
target2 = (3, 2)
test_data = radar_scan(radar1, target1) + radar_scan(radar1, target2)
heatmap1(test_data)


I can generalize the process of scanning one radar with multiple targets. I can start with an array of zeros, add the result of scanning the radar with the first target to the array, then add the result of scanning the radar with the second target to the previous array, and so on.


In [11]:
targets = [target1, target2]
def multiple_radar_scan(radar, targets):
    data = zeros(len(distances))
    for target in targets:
        data = data + radar_scan(radar, target)
    return data

In [12]:
data1 = multiple_radar_scan(radar1, targets)
data2 = multiple_radar_scan(radar2, targets)
grid1 = single_color_function(radar1, data1)
grid2 = single_color_function(radar2, data2)
heatmap2(grid1, 'radar1 on multiple targets')
heatmap2(grid2, 'radar1 on multiple targets')


And combining the above to graphs...


In [13]:
heatmap2(two_color_function(grid1, grid2), '2 radars, 2 targets')


This is not what it is supposed to look like. There is supposed to be two fuzzy dots. You can see it is picking up false positives. This is because there is not enough data to solve the problem. We need more radars. You need at least 3 radars to solve 2 targets.


In [14]:
radar3 = (0, 10)
radar4 = (10, 10)
data3 = multiple_radar_scan(radar3, targets)
data4 = multiple_radar_scan(radar4, targets)
grid3 = single_color_function(radar3, data4)
grid4 = single_color_function(radar4, data3)

The first graph below is all of the radar scans overlayed. It is the pointwise sum of all of the radar scans.

$ g(x, y) = A_1 \Big( \sqrt{(x - x_1)^2 + (y - y_1)^2} \Big) + A_2 \Big( \sqrt{(x - x_2)^2 + (y - y_2)^2} \Big) + ... + A_n \Big( \sqrt{(x - x_2)^2 + (y - y_2)^2} \Big) $

The second graph is all of the radar scans combined.

$ g(x, y) = A_1 \Big( \sqrt{(x - x_1)^2 \times (y - y_1)^2} \Big) \times A_2 \Big( \sqrt{(x - x_2)^2 + (y - y_2)^2} \Big) \times ... \times A_n \Big( \sqrt{(x - x_n)^2 + (y - y_n)^2} \Big) $

This is what you are used to seeing. It is the pointwise product of all of the radar scans.


In [15]:
def n_product(*args):
    def color_function(x, y):
        color = 1.0
        for arg in args:
            color = color* arg(x, y)
        return color
    return color_function

def n_sum(*args):
    def color_function(x, y):
        color = 0.0
        for arg in args:
            color = color + arg(x, y)
        return color
    return color_function

heatmap2(n_sum(grid1, grid2, grid3), '3 radars overlayed, 2 targets')
heatmap2(n_product(grid1, grid2, grid3), '3 radars combined, 2 targets')


Success!

With at least 3 radars the problem becomes solvable indeed. If we have three radars each on a different corner, there will be a region where the radar perception will be bad (specifically at the corner where there are no radars). So since our field is a quadrilateral, 4 radars is a natural number to choose. You can see the data with 4 radars is slightly more refined that the data from 3 radars, but still quite comparable.


In [16]:
heatmap2(n_sum(grid1, grid2, grid3, grid4), '4 radars overlayed, 2 targets')
heatmap2(n_product(grid1, grid2, grid3, grid4), '4 radars combined, 2 targets')


It works with 1 target, it works with 2 targets, now let us try with many targets.


In [17]:
def test(radars, targets, title):
    data = [None] * len(radars)
    color_functions = [None] * len(radars)
    for i in range(len(radars)):
        data[i] = multiple_radar_scan(radars[i], targets)
        color_functions[i] = single_color_function(radars[i], data[i])

    heatmap3(n_sum(*color_functions), n_product(*color_functions), title + ' overlayed', title + ' combined', targets)
    plt.show()
    plt.close()

In [23]:
iterations = 3
num_targets = 8
radars = [(0, 0), (10, 0), (0, 10), (10, 10)]

for _ in range(iterations):
    targets = (random.rand(num_targets, 2) * (grid[1][0] - grid[0][0] - 2) + 1).round(decimals=1)
    test(radars, targets, '')


Ghosting and Radar Placement

It turns out that the bands do not always define a unique solution. Sometimes there are what we call ghosts. These occur when four circles intersect on a point even when their is no target there. They are the result of colateral circles near the point intersecting by chance. Innovative radar placement strategies can minimize ghosting.

Here is an example of ghosting.


In [25]:
targets = np.array([(2.5, 5), (7.5, 5)])
radars = [(0, 0), (10, 0), (0, 10), (10, 10)]

test(radars, targets, '')


Here is the same targets with different radar placement.


In [26]:
radars = [(0, 0), (10, 10), (0, 5), (10, 5)]

test(radars, targets, '')


But this placement strategy is not completely immune to ghosts.


In [27]:
radars = [(0, 0), (10, 10), (0, 5), (10, 5)]
targets =np.array([(2.5, 7.5), (7.5, 7.5)])

test(radars, targets, '')