In [92]:
from math import sin,acos,cos,sqrt
import numpy as np

#NAC distortion coefficients

# These are the values ISIS3 is currently using:

odt_x = [0.0,
         1.0018542696237999756,
         0.0,
         0.0,
         -0.00050944404749411103042,
         0.0,
         1.0040104714688599425e-5,
         0.0,
         1.0040104714688599425e-5,
         0.0
        ]
odt_y = [0.0,
         0.0,
         1.0,
         0.00090600105949967496381,
         0.0,0.00035748426266207598964,
         0.0,1.0040104714688599425e-5,
         0.0,
         1.0040104714688599425e-5
        ]

tempCoefficients=[549.5120497341695,0.010185643391234385,0.0,0.0,0.0,0.0]

These values were taken from:
https://naif.jpl.nasa.gov/pub/naif/pds/data/mess-e_v_h-spice-6-v1.0/messsp_1000/data/ik/msgr_mdis_v160.ti

odt_x = [0.0, 1.0020558791381275, 0.0, 0.0, -5.448742222712919e-4,
         0.0, 6.597498811862692e-6, 0.0, 6.683129056014682e-6, 0.0]
odt_y = [-1.3657975359540047e-5, 0.0, 1.0, 8.85544334965699e-4, 0.0,
         3.338939138331483e-4, 0.0, 7.74756721313425e-6, 0.0, 7.79484564042716e-6]

The temperature coefficients are currently what ISIS3 uses (i.e. the ones generated after the NAC distortion model was refitted using 16 WAC-NAC co-align sets, with a total of 544 images and a temperature range from -30C to 20C).


In [93]:
# Adapted from the i2g_g2i collinearity notebook
def opk_to_rotation(o, p, k):
    """
    Convert from Omega, Phi, Kappa to a 3x3 rotation matrix
    
    Parameters
    ----------
    o : float
        Omega in radians
    p : float
        Phi in radians
    k : float
        Kappa in radians
        
    Returns
    -------
     : ndarray
       (3,3) rotation array
       
    """
    om = np.empty((3,3))
    om[:,0] = [1,0,0]
    om[:,1] = [0, cos(o), -sin(o)]
    om[:,2] = [0, sin(o), cos(o)]
    
    pm = np.empty((3,3))
    pm[:,0] = [cos(p), 0, sin(p)]
    pm[:,1] = [0,1,0]
    pm[:,2] = [-sin(p), 0, cos(p)]
    
    km = np.empty((3,3))
    km[:,0] = [cos(k), -sin(k), 0]
    km[:,1] = [sin(k), cos(k), 0]
    km[:,2] = [0,0,1]
    
    return km.dot(pm).dot(om)
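A quick sanity check on `opk_to_rotation`: since the result is a product of three elemental rotations, it should be orthonormal with determinant +1. Below is a standalone sketch that repeats the construction (so it runs without the cell above) and checks it using the omega/phi/kappa values from the EN1007907102M ISD used later in this notebook:

```python
import numpy as np

def opk_to_rotation(o, p, k):
    # Same construction as the cell above, repeated so this check runs standalone
    co, so = np.cos(o), np.sin(o)
    cp, sp = np.cos(p), np.sin(p)
    ck, sk = np.cos(k), np.sin(k)
    om = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    pm = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    km = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return km.dot(pm).dot(om)

# Omega, phi, kappa from the EN1007907102M ISD used later in this notebook
M = opk_to_rotation(2.25613094079, 0.094332016311, -0.963037547862)

# A rotation matrix is orthonormal (inverse == transpose) with determinant +1
print(np.allclose(M.dot(M.T), np.eye(3)), np.isclose(np.linalg.det(M), 1.0))
```

Orthonormality is also why `rotate` below could use the transpose instead of a matrix inverse for the inverse rotation.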

The temperature-dependent focal length for the NAC is modelled the same way as for the WAC filters (but with different coefficients). The model assumes the form of a polynomial of degree 5:

$$fl(t) = A_{0}+A_{1}t+A_{2}t^2+A_{3}t^3+A_{4}t^4+A_{5}t^5$$

The $A_i$ values are the temperature coefficients, and were determined using star calibration images acquired during the orbital phases of the mission. Currently the model is linear: only $A_{0}$ and $A_{1}$ are used, with all other coefficients set to 0. This could change in future SPICE kernels.


In [94]:
def focalLength(coefs,t):
    '''
    Parameters
    ----------
    coefs : list
        The six temperature coefficients A_0..A_5 (although currently
        only the first two are used)
    t : float
        The instrument temperature (Celsius)

    Returns
    -------
    focalLength : float
        The temperature-adjusted focal length
    '''
    focalLength=0.0
    
    for i in range(0,len(coefs)):
        focalLength=focalLength+coefs[i]*pow(t,i)
        
    return focalLength
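As a usage check, the focal length at the EN1007907102M temperature of -27.22 C (used in the test cell below) should come out slightly below $A_0$, around 549.2348 mm. A minimal standalone re-evaluation, assuming the ISIS3 `tempCoefficients` from the first cell:

```python
tempCoefficients = [549.5120497341695, 0.010185643391234385, 0.0, 0.0, 0.0, 0.0]

def focalLength(coefs, t):
    # fl(t) = A0 + A1*t + ... + A5*t^5 (same polynomial as the cell above)
    return sum(c * t**i for i, c in enumerate(coefs))

f = focalLength(tempCoefficients, -27.22)
print(f)  # ~549.2348 mm
```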

In [95]:
def distortionFunction(ux, uy):
    """
    Parameters
    ----------
    ux:  Undistorted x value
    uy:  Undistorted y value
    
    Returns
    -------
    dx:  Distorted x value
    dy:  Distorted y value
    
    """ 

    taylor = [1, ux, uy, ux**2, ux*uy, uy**2, ux**3, uy*ux**2, ux*uy**2, uy**3]
    distorted_x = np.dot(odt_x, taylor)
        
    distorted_y = np.dot(odt_y, taylor)
    
    return distorted_x, distorted_y
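The forward model can be spot-checked against the expected values printed in the test cell below, `distortionFunction(3.0, 4.0)` ≈ (3.00020, 4.01488). A standalone version, with the ISIS3 coefficients repeated from the first cell:

```python
import numpy as np

# ISIS3 NAC distortion coefficients (repeated from the first cell)
odt_x = [0.0, 1.0018542696237999756, 0.0, 0.0, -0.00050944404749411103042,
         0.0, 1.0040104714688599425e-5, 0.0, 1.0040104714688599425e-5, 0.0]
odt_y = [0.0, 0.0, 1.0, 0.00090600105949967496381, 0.0,
         0.00035748426266207598964, 0.0, 1.0040104714688599425e-5,
         0.0, 1.0040104714688599425e-5]

def distortionFunction(ux, uy):
    # Third-order Taylor basis in the undistorted coordinates
    taylor = [1, ux, uy, ux**2, ux*uy, uy**2, ux**3, uy*ux**2, ux*uy**2, uy**3]
    return np.dot(odt_x, taylor), np.dot(odt_y, taylor)

dx, dy = distortionFunction(3.0, 4.0)
print(dx, dy)
```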

In [96]:
def distortionJacobian(x,y):
    """
    Parameters
    ----------
    x:  Undistorted x focal plane coordinate
    y:  Undistorted y focal plane coordinate

    Returns
    -------
    Jxx:  Partial of distorted x with respect to x
    Jxy:  Partial of distorted x with respect to y
    Jyx:  Partial of distorted y with respect to x
    Jyy:  Partial of distorted y with respect to y

    """
    
   
    dx=[0]*10
    dy=[0]*10
    Jxx=0.0
    Jxy=0.0
    Jyx=0.0
    Jyy=0.0
    
    dx[1]=1
    dx[3]=2*x
    dx[4]=y
    dx[6]=3*x*x
    dx[7]=2*x*y
    dx[8]=y*y
    
    dy[2]=1.0
    dy[4]=x
    dy[5]=2*y
    dy[6]=0
    dy[7]=x*x
    dy[8]=2*x*y
    dy[9]=3*y*y
    
    Jxx=np.dot(dx,odt_x)
    Jxy=np.dot(dy,odt_x)
    Jyx=np.dot(dx,odt_y)
    Jyy=np.dot(dy,odt_y)
    
    return [Jxx,Jxy,Jyx,Jyy]
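Since the distortion polynomials are cubic, the analytic Jacobian is easy to verify against central differences of the forward model. A standalone sketch (coefficients and functions repeated so it runs on its own):

```python
import numpy as np

odt_x = np.array([0.0, 1.0018542696237999756, 0.0, 0.0, -0.00050944404749411103042,
                  0.0, 1.0040104714688599425e-5, 0.0, 1.0040104714688599425e-5, 0.0])
odt_y = np.array([0.0, 0.0, 1.0, 0.00090600105949967496381, 0.0,
                  0.00035748426266207598964, 0.0, 1.0040104714688599425e-5,
                  0.0, 1.0040104714688599425e-5])

def distortionFunction(ux, uy):
    taylor = np.array([1, ux, uy, ux**2, ux*uy, uy**2, ux**3, uy*ux**2, ux*uy**2, uy**3])
    return odt_x.dot(taylor), odt_y.dot(taylor)

def distortionJacobian(x, y):
    # Partials of the Taylor basis with respect to x and y
    dbx = np.array([0, 1, 0, 2*x, y, 0, 3*x*x, 2*x*y, y*y, 0])
    dby = np.array([0, 0, 1, 0, x, 2*y, 0, x*x, 2*x*y, 3*y*y])
    return [odt_x.dot(dbx), odt_x.dot(dby), odt_y.dot(dbx), odt_y.dot(dby)]

x0, y0, h = 3.0, 4.0, 1e-6
Jxx, Jxy, Jyx, Jyy = distortionJacobian(x0, y0)

# Central differences of the forward model should agree closely
fd_Jxx = (distortionFunction(x0 + h, y0)[0] - distortionFunction(x0 - h, y0)[0]) / (2*h)
fd_Jxy = (distortionFunction(x0, y0 + h)[0] - distortionFunction(x0, y0 - h)[0]) / (2*h)
fd_Jyx = (distortionFunction(x0 + h, y0)[1] - distortionFunction(x0 - h, y0)[1]) / (2*h)
fd_Jyy = (distortionFunction(x0, y0 + h)[1] - distortionFunction(x0, y0 - h)[1]) / (2*h)
print(np.allclose([Jxx, Jxy, Jyx, Jyy], [fd_Jxx, fd_Jxy, fd_Jyx, fd_Jyy], atol=1e-6))
```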

The second order Taylor approximation to a function $f:\mathbb{R}^k \to \mathbb{R}$ at a point $s\in \mathbb{R}^k$ is:

[1] $f(s+h) \approx f(s) +\nabla (f(s))^T h + \frac{1}{2}h^T H[f(s)]h$

In the case of our distortion function, $k=2$ because each distorted focal plane coordinate is expressed as a function of the undistorted focal plane coordinates $x$ and $y$ (i.e. $s = (x,y)$).

$H$ is the Hessian of $f(s)$ and in this case is expressed simply as:

$$ H(f) = \begin{pmatrix} f_{xx} & f_{xy}\\ f_{yx} & f_{yy} \end{pmatrix} $$

The general form of [1] is:

[2] $f(s) = a+ b^Ts+s^TCs$

Where $a$ is a number, $b$ and $s$ are vectors of dimension $k$, and $C$ is a $k\times k$ symmetric matrix (which is negative-definite). Differentiating [2] gives:

[3] $\nabla f(s) = b+2Cs$

The quadratic [2] is maximized by the vector $\hat{s}$ at which the gradient [3] vanishes:

[4] $0 = b +2C\hat{s}$

Solving for $\hat{s}$:

[5] $\hat{s}= -\frac{1}{2}C^{-1} b $

This is a maximum of [2] because $C$ was assumed to be negative-definite (i.e. all of its eigenvalues are strictly less than 0).

Applying this to [1] results in:

[6] $f(s+h)\approx a + b^T h + \frac{1}{2}h^TCh$
Where: $a=f(s),b=\nabla f(s),C=H[f(s)]$

$C$ is symmetric by the definition of the Hessian matrix and how it is constructed. Therefore, taking the gradient of [6] with respect to $h$ and setting it to zero:

[7] $0=b+C\hat{h}$

So:

[8] $\hat{h} = -C^{-1}b$

This implies that the vector $h$ which maximizes [1] is:

[9] $s+\hat{h} = s - C^{-1}b = s - [H(f(s))]^{-1}\nabla f(s)$

So the multivariate Newton-Raphson method for our particular case is:

  1. Make an initial guess for $\hat{s} = (\hat{x},\hat{y})$ (the maximizer).
  2. Choose a small $\epsilon > 0$.
  3. Iterate:

$i \gets 0$

While $\lVert \nabla f(s_{i}) \rVert > \epsilon$ :

\begin{align*} i &\gets i+1 \\ s_{i} &\gets s_{i-1} - [H(f(s_{i-1}))]^{-1} \nabla f(s_{i-1}) \end{align*}

where $s_{i} = (x_{i}, y_{i})$.

$\hat{x} \gets x_{i}$

$\hat{y} \gets y_{i}$

$(\hat{x},\hat{y})$ represent the undistorted focal plane coordinates.


In [97]:
def undistortedFocalCoords(dx,dy,tol):
    
    """
    Parameters
    ----------
    dx:   distorted x focal plane coordinate
    dy:   distorted y focal plane coordinate
    tol:  convergence tolerance on the residual
          (1.4e-5 is approximately one-millionth of a NAC pixel)

    Returns
    -------
    ux:  undistorted x focal plane coordinate
    uy:  undistorted y focal plane coordinate
    """
    # Initial guess: the distorted coordinates themselves
    x=dx
    y=dy

    iter_ceiling = 20
    count = 1

    # Initial residual between the target and the forward model
    [fx,fy]=distortionFunction(x,y)
    fx=dx-fx
    fy=dy-fy

    while ( (abs(fx)+abs(fy))>tol and (count < iter_ceiling) ):
        count = count+1
        [fx,fy]=distortionFunction(x,y)
        fx=dx-fx
        fy=dy-fy
            
        
        #print (abs(fx)+abs(fy))
        [Jxx,Jxy,Jyx,Jyy] = distortionJacobian(x,y)
        d = Jxx*Jyy-Jxy*Jyx
        if(abs(d) < 1e-6):
            #Error handling maybe should be handled here. 
            #For now we're just going to break out of this loop.
            break
            
        #Update (x,y):
        x = x+((Jyy*fx-Jxy*fy)/d)
        y = y+((Jxx*fy-Jyx*fx)/d)
        print([x,y])
    
    #Method returned with a root
    if (abs(fx)+abs(fy) <=tol):
        print("Method converged!")
        ux=x
        uy=y
    else:
        #Return distorted coordinates
        
        print("Method did not converge =(")
        
        ux=dx
        uy=dy
    
    return [ux,uy]
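A useful end-to-end check: feeding the recovered undistorted coordinates back through the forward model should reproduce the distorted input to within `tol`. Below is a compact standalone version of the same Newton scheme (coefficients repeated from the first cell; function names here are local to this sketch):

```python
import numpy as np

# ISIS3 NAC distortion coefficients (repeated from the first cell)
odt_x = np.array([0.0, 1.0018542696237999756, 0.0, 0.0, -0.00050944404749411103042,
                  0.0, 1.0040104714688599425e-5, 0.0, 1.0040104714688599425e-5, 0.0])
odt_y = np.array([0.0, 0.0, 1.0, 0.00090600105949967496381, 0.0,
                  0.00035748426266207598964, 0.0, 1.0040104714688599425e-5,
                  0.0, 1.0040104714688599425e-5])

def distort(x, y):
    taylor = np.array([1, x, y, x*x, x*y, y*y, x**3, y*x*x, x*y*y, y**3])
    return odt_x.dot(taylor), odt_y.dot(taylor)

def undistort(dx, dy, tol=1.4e-5, max_iter=20):
    # Newton's method, same scheme as undistortedFocalCoords above
    x, y = dx, dy
    for _ in range(max_iter):
        fx, fy = distort(x, y)
        rx, ry = dx - fx, dy - fy          # residual against the target
        if abs(rx) + abs(ry) <= tol:
            break
        # Analytic 2x2 Jacobian of the distortion polynomials
        dbx = np.array([0, 1, 0, 2*x, y, 0, 3*x*x, 2*x*y, y*y, 0])
        dby = np.array([0, 0, 1, 0, x, 2*y, 0, x*x, 2*x*y, 3*y*y])
        jxx, jxy = odt_x.dot(dbx), odt_x.dot(dby)
        jyx, jyy = odt_y.dot(dbx), odt_y.dot(dby)
        det = jxx*jyy - jxy*jyx
        x += (jyy*rx - jxy*ry) / det
        y += (jxx*ry - jyx*rx) / det
    return x, y

ux, uy = undistort(10.0, 12.0)
rx, ry = distort(ux, uy)   # round trip: should be close to (10.0, 12.0)
print(ux, uy, rx, ry)
```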

In [98]:
def rotate(v, m, inverse=False):
    """
    Rotate a given 3D vector by a given rotation matrix
    
    Parameters
    ----------
    v : list
        3 element list representing a 3D vector
    m : ndarray
        (3,3) rotation matrix to use (see opk_to_rotation)
    inverse : bool
        If True, will perform an inverse rotation
        
    Returns
    -------
     : list
         Vector with rotation applied to it
    """
    if inverse:
        m = np.linalg.inv(m)
    return [
        m[0,0] * v[0] + m[0,1] * v[1] + m[0,2] * v[2],
        m[1,0] * v[0] + m[1,1] * v[1] + m[1,2] * v[2],
        m[2,0] * v[0] + m[2,1] * v[1] + m[2,2] * v[2]
    ]
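One design note on `rotate`: for a proper rotation matrix the inverse equals the transpose, so `np.linalg.inv(m)` could be replaced by `m.T`, which is cheaper and avoids numerical error from matrix inversion. A small illustration with an arbitrary z-axis rotation (the angle is just a made-up test value, not from this notebook):

```python
import numpy as np

theta = 0.7  # arbitrary test angle, not from the notebook
m = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])

# For an orthonormal rotation matrix, inverse == transpose
print(np.allclose(np.linalg.inv(m), m.T))

# Rotating forward and then applying the transpose recovers the original vector
v = np.array([1.0, 2.0, 3.0])
v_back = m.T.dot(m.dot(v))
print(np.allclose(v_back, v))
```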

In [99]:
def groundToImage(ground_point, sensor_point_b, focal_length, rotation_matrix):
    """
    Given a body-fixed ground point, get the line,sample on the image.
    Note: No error checking or intersection checking is performed.
          TODO: Intersection check: check if returned line,sample within image dimensions.
          
    Parameters
    ----------
    ground_point : list
        Body-fixed XYZ coordinate of ground point on target body.
    sensor_point_b : list
        Body-fixed XYZ coordinate of sensor.
    focal_length : float
        Sensor focal length.
    rotation_matrix : list
        3x3 matrix (2D array) to perform rotations between body-fixed and sensor frame, and vice versa.
        
    Returns
    -------
     : list
         Line, Sample value on the image.
    """
    
    # NOTES
    # _b suggests body-fixed coordinates
    # _c suggests sensor frame coordinates
    
    # Determine the center look direction (center_x, center_y, f) of the sensor in body-fixed
    # (e.g. the sensor's optical axis rotated into body-fixed frame)
    sensor_optical_axis_b = rotate([0.,0.,focal_length], rotation_matrix)
    
    # Find the look vector from the sensor optical center to the ground point in body-fixed
    look_ground_b = np.subtract(ground_point, sensor_point_b)
    
    # Normalize by scaling the sensor-to-ground look vector to the sensor optical axis body-fixed vector
    mag = np.sqrt(np.dot(sensor_optical_axis_b, sensor_optical_axis_b))
    mag_ground = np.sqrt(np.dot(look_ground_b, look_ground_b))
    
    look_ground_b = np.multiply((mag / mag_ground), look_ground_b)
    print("\tDEBUG Sensor-to-ground (body-fixed): ", look_ground_b)
    
    # Rotate the sensor-to-ground look vector from body-fixed to sensor frame (inverse rotation)
    look_ground_c = rotate(look_ground_b, rotation_matrix, True)
    print("\tDEBUG Sensor-to-ground (sensor frame): ", look_ground_c)
    
            
    
    
    # Scale the sensor-to-ground sensor frame vector so that it intersects the focal plane
    # (i.e. scale it so its Z-component equals the sensor's focal length)
    focal_plane_coord = np.multiply(look_ground_c, (focal_length / look_ground_c[2]))
    
    # Apply the forward distortion model to get distorted focal plane coordinates
    # (see the 100,100 test below: distortion is still suspect there)
    distorted_x, distorted_y = distortionFunction(focal_plane_coord[0], focal_plane_coord[1])

    # Convert focal plane mm to pixels
    # (this would be using itrans from ISD, or inverse of trans from ISD)
    pixels = [
        distorted_x * (1. / 0.014),
        distorted_y * (1. / 0.014)
    ]
    print("\tDEBUG Pixels: ", pixels)
    
    # Convert pixels to line,sample
    sample = pixels[0] + 512.0
    line = pixels[1] + 512.0
    
    return line,sample

In [100]:
# tests/data/EN1007907102M.cub/json image ISD
XL = 1728357.70312
YL = -2088409.0061
ZL = 2082873.92806

o = 2.25613094079
p = 0.094332016311
k = -0.963037547862

temp = -27.22
#f = 549.1178195372703
f= focalLength(tempCoefficients,temp)
pixel_pitch = 0.014 # transX/transY

# Compute the rotation matrix to use from omega, phi, kappa
M = opk_to_rotation(o,p,k)


print()
print("focal length:  result should be:  549.2347965")
print()
print("\n RESULT:  ",focalLength(tempCoefficients,-27.22))





print()
print("undistortedFocalCoords:  result should be:  (10.0176303736875134120509756030514836311340,11.8305020382274683754530997248366475105286,)")
dx=10.0
dy=12.0
print()
print("\n RESULT:  ",undistortedFocalCoords(dx,dy,1.4e-5))
print()


print()
print("distortionFunction:  result should be:  (3.0002024882,4.0148777682)")
dx=10.0
dy=12.0
print()
print("\n RESULT:  ",distortionFunction(3.0,4.0))
print()

print()
print("distortionJacobian:  result should be:  (1.0002482179,-0.0012873696,0.0056769689,1.0034321601)")
dx=10.0
dy=12.0
print()
print("\n RESULT:  ",distortionJacobian(3.0,4.0))
print()


# 512, 512 (distortion irrelevant as this is the center of the image)
print("groundToImage: result should be around 512,512")
X =  1129210.
Y = -1599310.
Z =  1455250.
print()
print("\nRESULT: ", groundToImage([X,Y,Z], [XL, YL, ZL], f, M))
print()
print('*'*80)
print()

# Distorted coordinates do not work for 100,100 ?
# X = 1121630.
# Y = -1606590.
# Z = 1453100.

# Undistorted coordinates work for 100,100
# -- These are obtained by turning off distortion in MdisNacSensorModel::imageToGround,
#    make and ./runTests, then look at the non-truth output values for imageToGround1 test.
print("groundToImage: result should be around 100,100")
uX = 1115920.
uY = -1603550.
uZ = 1460830.
print()
print("\nRESULT: ", groundToImage([uX,uY,uZ], [XL, YL, ZL], f, M))


focal length:  result should be:  549.2347965


 RESULT:   549.2347965210602

undistortedFocalCoords:  result should be:  (10.0176303736875134120509756030514836311340,11.8305020382274683754530997248366475105286,)

[10.017634212081131, 11.830522005529895]
[10.017630373687538, 11.830502038227779]
[10.017630373687513, 11.83050203822747]
Method converged!

 RESULT:   [10.017630373687513, 11.83050203822747]


distortionFunction:  result should be:  (3.0002024882,4.0148777682)


 RESULT:   (3.0002024881550722, 4.0148777682095584)


distortionJacobian:  result should be:  (1.0002482179,-0.0012873696,0.0056769689,1.0034321601)


 RESULT:   [1.0002482179365551, -0.0012873696293298066, 0.0056769688701505758, 1.0034321600700338]

groundToImage: result should be around 512,512

	DEBUG Sensor-to-ground (body-fixed):  [-330.3790909   269.69657757 -346.08131133]
	DEBUG Sensor-to-ground (sensor frame):  [0.0021948838694711981, -0.0016546520812426024, 549.23479651418211]
	DEBUG Pixels:  [0.15706825902395311, -0.11818905271201939]

RESULT:  (511.88181094728799, 512.15706825902396)

********************************************************************************

groundToImage: result should be around 100,100

	DEBUG Sensor-to-ground (body-fixed):  [-336.85974404  266.68750121 -342.14346589]
	DEBUG Sensor-to-ground (sensor frame):  [-5.7703370098602278, -5.7665689736319337, 549.17420879225142]
	DEBUG Pixels:  [-414.46335155365853, -409.21398224058476]

RESULT:  (102.78601775941524, 97.536648446341474)

In [101]:
# NOTES:
# * TODO: implement an intersection check in groundToImage (just check if the result is within the image dimensions)
# * TODO: fix distortion