Advanced: compute_times & compute_phases

Setup

Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).


In [ ]:
!pip install -I "phoebe>=2.2,<2.3"

As always, let's do imports and create a new Bundle. See Building a System for more details.


In [1]:
import phoebe
from phoebe import u # units

b = phoebe.default_binary()

In [2]:
b.add_dataset('lc', times=phoebe.linspace(0,10,101), dataset='lc01')


Out[2]:
<ParameterSet: 43 parameters | contexts: figure, dataset, compute, constraint>

Overriding Computation Times

If compute_times is not empty (either because compute_times or compute_phases was provided), those values will be used to compute the model instead of the values in the times parameter.

In the case of a mesh or orbit dataset, observations cannot be attached to the dataset, so no times parameter exists; compute_times or compute_phases is then always used.
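
A minimal sketch of this behavior (using a hypothetical 'mesh01' dataset that is not used elsewhere in this tutorial): a mesh dataset exposes compute_times but no times parameter, so its computation times must be set directly.


In [ ]:
# attach a mesh dataset -- note that only compute_times/compute_phases exist
b.add_dataset('mesh', compute_times=[0.25, 0.75], dataset='mesh01')

# filtering on both qualifiers returns only compute_times, since mesh
# datasets have no times parameter for observations
print(b.filter(qualifier=['times', 'compute_times'], dataset='mesh01', context='dataset'))

# remove the illustrative dataset so it does not affect the rest of the tutorial
b.remove_dataset('mesh01')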


In [3]:
print(b.filter(qualifier=['times', 'compute_times'], context='dataset'))


ParameterSet: 2 parameters
              times@lc01@dataset: [ 0.   0.1  0.2 ...  9.8  9.9 10. ] d
      compute_times@lc01@dataset: [] d

In [4]:
b.set_value('compute_times', phoebe.linspace(0,3,11))

In [5]:
b.run_compute()


Out[5]:
<ParameterSet: 4 parameters | contexts: figure, model>

In [6]:
print("dataset times: {}\ndataset compute_times: {}\nmodel times: {}".format(
    b.get_value('times', context='dataset'),
    b.get_value('compute_times', context='dataset'),
    b.get_value('times', context='model')
    ))


dataset times: [ 0.   0.1  0.2  0.3  0.4  0.5  0.6  0.7  0.8  0.9  1.   1.1  1.2  1.3
  1.4  1.5  1.6  1.7  1.8  1.9  2.   2.1  2.2  2.3  2.4  2.5  2.6  2.7
  2.8  2.9  3.   3.1  3.2  3.3  3.4  3.5  3.6  3.7  3.8  3.9  4.   4.1
  4.2  4.3  4.4  4.5  4.6  4.7  4.8  4.9  5.   5.1  5.2  5.3  5.4  5.5
  5.6  5.7  5.8  5.9  6.   6.1  6.2  6.3  6.4  6.5  6.6  6.7  6.8  6.9
  7.   7.1  7.2  7.3  7.4  7.5  7.6  7.7  7.8  7.9  8.   8.1  8.2  8.3
  8.4  8.5  8.6  8.7  8.8  8.9  9.   9.1  9.2  9.3  9.4  9.5  9.6  9.7
  9.8  9.9 10. ]
dataset compute_times: [0.  0.3 0.6 0.9 1.2 1.5 1.8 2.1 2.4 2.7 3. ]
model times: [0.  0.3 0.6 0.9 1.2 1.5 1.8 2.1 2.4 2.7 3. ]

compute_times (when not empty) overrides the value of times when computing the model. However, passing times as a keyword argument to run_compute takes precedence over both, and overrides the computed times across all enabled datasets.


In [7]:
b.run_compute(times=[0,0.2])


Out[7]:
<ParameterSet: 4 parameters | contexts: figure, model>

In [8]:
print("dataset times: {}\ndataset compute_times: {}\nmodel times: {}".format(
    b.get_value('times', context='dataset'),
    b.get_value('compute_times', context='dataset'),
    b.get_value('times', context='model')
    ))


dataset times: [ 0.   0.1  0.2  0.3  0.4  0.5  0.6  0.7  0.8  0.9  1.   1.1  1.2  1.3
  1.4  1.5  1.6  1.7  1.8  1.9  2.   2.1  2.2  2.3  2.4  2.5  2.6  2.7
  2.8  2.9  3.   3.1  3.2  3.3  3.4  3.5  3.6  3.7  3.8  3.9  4.   4.1
  4.2  4.3  4.4  4.5  4.6  4.7  4.8  4.9  5.   5.1  5.2  5.3  5.4  5.5
  5.6  5.7  5.8  5.9  6.   6.1  6.2  6.3  6.4  6.5  6.6  6.7  6.8  6.9
  7.   7.1  7.2  7.3  7.4  7.5  7.6  7.7  7.8  7.9  8.   8.1  8.2  8.3
  8.4  8.5  8.6  8.7  8.8  8.9  9.   9.1  9.2  9.3  9.4  9.5  9.6  9.7
  9.8  9.9 10. ]
dataset compute_times: [0.  0.3 0.6 0.9 1.2 1.5 1.8 2.1 2.4 2.7 3. ]
model times: [0.  0.2]

In [9]:
b.run_compute()


Out[9]:
<ParameterSet: 4 parameters | contexts: figure, model>

In [10]:
print("dataset times: {}\ndataset compute_times: {}\nmodel times: {}".format(
    b.get_value('times', context='dataset'),
    b.get_value('compute_times', context='dataset'),
    b.get_value('times', context='model')
    ))


dataset times: [ 0.   0.1  0.2  0.3  0.4  0.5  0.6  0.7  0.8  0.9  1.   1.1  1.2  1.3
  1.4  1.5  1.6  1.7  1.8  1.9  2.   2.1  2.2  2.3  2.4  2.5  2.6  2.7
  2.8  2.9  3.   3.1  3.2  3.3  3.4  3.5  3.6  3.7  3.8  3.9  4.   4.1
  4.2  4.3  4.4  4.5  4.6  4.7  4.8  4.9  5.   5.1  5.2  5.3  5.4  5.5
  5.6  5.7  5.8  5.9  6.   6.1  6.2  6.3  6.4  6.5  6.6  6.7  6.8  6.9
  7.   7.1  7.2  7.3  7.4  7.5  7.6  7.7  7.8  7.9  8.   8.1  8.2  8.3
  8.4  8.5  8.6  8.7  8.8  8.9  9.   9.1  9.2  9.3  9.4  9.5  9.6  9.7
  9.8  9.9 10. ]
dataset compute_times: [0.  0.3 0.6 0.9 1.2 1.5 1.8 2.1 2.4 2.7 3. ]
model times: [0.  0.3 0.6 0.9 1.2 1.5 1.8 2.1 2.4 2.7 3. ]

Phase-Time Conversion

In addition to the ability to provide compute_times, we can alternatively provide compute_phases. These two parameters are linked via a constraint (see the constraints tutorial), with compute_phases constrained by default.


In [11]:
print(b.filter(qualifier=['times', 'compute_times', 'compute_phases', 'compute_phases_t0'], context='dataset'))


ParameterSet: 4 parameters
              times@lc01@dataset: [ 0.   0.1  0.2 ...  9.8  9.9 10. ] d
      compute_times@lc01@dataset: [0.  0.3 0.6 ... 2.4 2.7 3. ] d
*    compute_phases@lc01@dataset: [ 0.   0.3 -0.4 ...  0.4 -0.3  0. ]
  compute_phases_t0@lc01@dataset: t0_supconj

Essentially, this constraint does the same thing as b.to_phase or b.to_time, using the appropriate t0 according to compute_phases_t0 from the top-level orbit in the hierarchy.
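
As a quick (hedged) check of this equivalence, mapping the current compute_times through b.to_phase should reproduce the constrained compute_phases shown above, assuming the default compute_phases_t0 of t0_supconj.


In [ ]:
# should match compute_phases@lc01@dataset printed above
print(b.to_phase(b.get_value('compute_times', context='dataset')))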


In [12]:
print(b.get_constraint('compute_phases'))


Constrains (qualifier): compute_phases
Expression in solar units (value): times_to_phases({compute_times@lc01@dataset}, {period@binary@component}, {dpdt@binary@component}, {compute_phases_t0@lc01@dataset}, {t0_supconj@binary@component}, {t0_perpass@binary@component}, {t0_ref@binary@component})
Current Result (result): [ 0.   0.3 -0.4 -0.1  0.2  0.5 -0.2  0.1  0.4 -0.3  0. ]

In [13]:
print(b.get_parameter('compute_phases_t0').choices)


['t0_supconj', 't0_perpass', 't0_ref']

In order to provide compute_phases instead of compute_times, we must call b.flip_constraint.


In [14]:
b.flip_constraint('compute_phases', solve_for='compute_times')


Out[14]:
<ConstraintParameter: {compute_times@lc01@dataset} = phases_to_times({compute_phases@lc01@dataset}, {period@binary@component}, {dpdt@binary@component}, {compute_phases_t0@lc01@dataset}, {t0_supconj@binary@component}, {t0_perpass@binary@component}, {t0_ref@binary@component}) (solar units) => [ 0.   0.3 -0.4 -0.1  0.2  0.5 -0.2  0.1  0.4 -0.3  0. ] d>

In [15]:
b.set_value('compute_phases', phoebe.linspace(0,1,11))

In [16]:
print(b.filter(qualifier=['times', 'compute_times', 'compute_phases', 'compute_phases_t0'], context='dataset'))


ParameterSet: 4 parameters
              times@lc01@dataset: [ 0.   0.1  0.2 ...  9.8  9.9 10. ] d
*     compute_times@lc01@dataset: [0.  0.1 0.2 ... 0.8 0.9 1. ] d
     compute_phases@lc01@dataset: [0.  0.1 0.2 ... 0.8 0.9 1. ]
  compute_phases_t0@lc01@dataset: t0_supconj

Note that, under the hood, PHOEBE always works in time-space, so it is the constrained value of compute_times that is actually used when computing the model.

Also note that if we pass compute_phases directly to b.add_dataset, the constraint will be flipped on our behalf; we would then need to flip the constraint back in order to provide compute_times instead.
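
A sketch of that shortcut (with a hypothetical 'lc_phased' dataset that is removed immediately and not used below):


In [ ]:
# passing compute_phases directly to add_dataset flips the constraint for us,
# leaving compute_times as the constrained parameter
b.add_dataset('lc', compute_phases=phoebe.linspace(0,1,101), dataset='lc_phased')
print(b.get_constraint('compute_times', dataset='lc_phased'))

# remove the illustrative dataset again
b.remove_dataset('lc_phased')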

Finally, it is important to make the distinction that this is not adding support for observations in phases. If we have an old light curve that is only available in phase, we still must convert the phases to times manually (or via b.to_time). This restriction is intentional: we do not want the mapping between phase and time to change as the ephemeris is changed or fitted; rather, we want to map from phase to time using the ephemeris that was originally used when the dataset was recorded (if possible, or the best available guess).
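
For example, a minimal sketch of that manual conversion (phased_obs and phased_fluxes are made-up arrays, and 'lc_phased_obs' is a hypothetical dataset removed immediately afterwards):


In [ ]:
# phase-folded "observations" that we want to attach as a light curve
phased_obs = phoebe.linspace(0, 0.9, 10)
phased_fluxes = [1.0] * 10

# convert the phases to times with the current ephemeris before attaching
b.add_dataset('lc',
              times=b.to_time(phased_obs),
              fluxes=phased_fluxes,
              dataset='lc_phased_obs')

b.remove_dataset('lc_phased_obs')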

Interpolating the Model

Whether or not we used compute_times/compute_phases, it is sometimes useful to interpolate on the resulting model. This is particularly handy when we provided compute_times/compute_phases to "down-sample" from a large dataset.

We can call interp_value on any FloatArrayParameter.


In [17]:
b.get_parameter('fluxes', context='model').get_value()


Out[17]:
array([0.99468924, 2.02972764, 2.01532896, 2.01518487, 2.02963908,
       0.99468924, 2.02972764, 2.01532896, 2.01518487, 2.02963908,
       0.99468924])

In [18]:
b.get_parameter('fluxes', context='model').interp_value(times=1.0)


Out[18]:
2.02000294226948

In [19]:
b.get_parameter('fluxes', context='model').interp_value(times=phoebe.linspace(0,3,101))


Out[19]:
array([0.99468924, 1.09819308, 1.20169692, 1.30520076, 1.4087046 ,
       1.51220844, 1.61571228, 1.71921612, 1.82271996, 1.9262238 ,
       2.02972764, 2.02828777, 2.02684791, 2.02540804, 2.02396817,
       2.0225283 , 2.02108843, 2.01964857, 2.0182087 , 2.01676883,
       2.01532896, 2.01531455, 2.01530014, 2.01528574, 2.01527133,
       2.01525692, 2.01524251, 2.0152281 , 2.01521369, 2.01519928,
       2.01518487, 2.01663029, 2.01807571, 2.01952114, 2.02096656,
       2.02241198, 2.0238574 , 2.02530282, 2.02674824, 2.02819366,
       2.02963908, 1.9261441 , 1.82264911, 1.71915413, 1.61565914,
       1.51216416, 1.40866918, 1.30517419, 1.20167921, 1.09818422,
       0.99468924, 1.09819308, 1.20169692, 1.30520076, 1.4087046 ,
       1.51220844, 1.61571228, 1.71921612, 1.82271996, 1.9262238 ,
       2.02972764, 2.02828777, 2.02684791, 2.02540804, 2.02396817,
       2.0225283 , 2.02108843, 2.01964857, 2.0182087 , 2.01676883,
       2.01532896, 2.01531455, 2.01530014, 2.01528574, 2.01527133,
       2.01525692, 2.01524251, 2.0152281 , 2.01521369, 2.01519928,
       2.01518487, 2.01663029, 2.01807571, 2.01952114, 2.02096656,
       2.02241198, 2.0238574 , 2.02530282, 2.02674824, 2.02819366,
       2.02963908, 1.9261441 , 1.82264911, 1.71915413, 1.61565914,
       1.51216416, 1.40866918, 1.30517419, 1.20167921, 1.09818422,
       0.99468924])

In the case of times, this will automatically interpolate in phase-space if the provided time is outside the range of the referenced times array. If you have a logger enabled at the 'warning' level or higher, a warning will be raised stating the phases at which the interpolation is actually performed.
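
If needed, a logger can be enabled as follows (a minimal sketch; see the logger API docs for more options):


In [ ]:
# print warnings (and anything more severe) to the console, including the
# warning about out-of-range times being interpolated in phase-space
logger = phoebe.logger(clevel='WARNING')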


In [20]:
b.get_parameter('fluxes', context='model').interp_value(times=5)


Out[20]:
0.9946892395215293

Determining & Plotting Residuals

One particularly useful case for interpolating is comparing a model (perhaps computed in phase-space) to a dataset with a large number of datapoints. We can do this directly by calling calculate_residuals, which will handle any necessary interpolation and compare the dependent variable between the dataset and the model.

Note that if there is more than one dataset or model attached to the bundle, it may be necessary to pass dataset and/or model (or filter in advance and call calculate_residuals on the filtered ParameterSet).
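
For instance, a hedged sketch of being explicit (using this tutorial's default 'lc01' dataset and 'latest' model tags; meaningful only once observed fluxes are attached to the dataset, as we do below):


In [ ]:
# restrict the residuals to a single dataset/model pair
b.calculate_residuals(dataset='lc01', model='latest')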

To see this in action, we'll first create a "fake" observational dataset by copying the model fluxes into the dataset, then tweak the primary's effective temperature, recompute the model using compute_phases, and finally calculate the residuals.


In [21]:
b.add_dataset('lc', 
              times=phoebe.linspace(0,10,1000),
              dataset='lc01',
              overwrite=True)


Out[21]:
<ParameterSet: 43 parameters | contexts: figure, dataset, compute, constraint>

In [22]:
b.run_compute(irrad_method='none')


Out[22]:
<ParameterSet: 4 parameters | contexts: figure, model>

In [23]:
fluxes = b.get_value('fluxes', context='model')
b.set_value('fluxes', context='dataset', value=fluxes)

In [24]:
b.flip_constraint('compute_phases', solve_for='compute_times')


Out[24]:
<ConstraintParameter: {compute_times@lc01@dataset} = phases_to_times({compute_phases@lc01@dataset}, {period@binary@component}, {dpdt@binary@component}, {compute_phases_t0@lc01@dataset}, {t0_supconj@binary@component}, {t0_perpass@binary@component}, {t0_ref@binary@component}) (solar units) => [] d>

In [25]:
b.set_value('compute_phases', phoebe.linspace(0,1,101))

In [26]:
b.set_value('teff', component='primary', value=5950)

In [27]:
b.run_compute(irrad_method='none')


Out[27]:
<ParameterSet: 4 parameters | contexts: figure, model>

In [28]:
print(len(b.get_value('fluxes', context='dataset')), len(b.get_value('fluxes', context='model')))


1000 101

In [30]:
b.calculate_residuals()


Out[30]:
$[-0.039661666,~-0.039388964,~-0.039417858,~\dots,~-0.039417296,~-0.039388334,~-0.039661666] \; \mathrm{\frac{W}{m^{2}}}$

If we plot the dataset and model, we see that the model was only computed for one cycle, whereas the dataset extends further in time.


In [31]:
afig, mplfig = b.plot(show=True)


But we can also plot the residuals. Here, calculate_residuals is called internally (interpolating in phase-space), and the residuals are then plotted in time-space. See the options for y in the plot API docs for more details.


In [32]:
afig, mplfig = b.plot(y='residuals', show=True)