In [2]:
from IPython.display import Image

Explore the uncertainity in the resistance of a sample circuit

http://www.electronics-tutorials.ws/resistor/res_5.html


In [3]:
Image("res4.gif")


Out[3]:
<IPython.core.display.Image object>

For this simple circuit the questions are

  1. what is the end-to-end resistance and its uncertainity?
  2. What are the currents I1 and I2 and uncertaianity?

For each we will assume also that the 12V is not perfect.

We will try and answer these questions for the circuit as designed and also for the cuircuit as built with a few measurments of the resistors and see how that impacts the results.

The results are:

$R_{eff} = 6\Omega$

$I_T = 1A, I_1 = I_2 = 0.5A$


In [5]:
import numpy as np
import matplotlib.pyplot as plt
import pymc3 as pm
import pandas as pd
import seaborn as sns

sns.set()
%matplotlib inline

Components

Each component is assumed to be a normal distriubution of exptected value as listed and the 4, 6, 8 Ohm resistors are 1% while the 12 Ohm is a 5% resistor.

Per http://docs.pymc.io/api/bounds.html we will use bounded variables because resistances that are negative are nonsensical.


In [6]:
## setup the model 

# these are the values and precision of each
Datasheets = {'R1':(6.0, 0.01), 
              'R2':(8.0, 0.01),
              'R3':(4.0, 0.01),
              'R4':(12.0, 0.05),
              'V1':(6.0, 0.01),} # 1% on the 12V power supply


with pm.Model() as model:
    BoundedNormal = pm.Bound(pm.Normal, lower=0.0)
    # http://docs.pymc.io/api/distributions/continuous.html#pymc3.distributions.continuous.Normal
    # in Bayes world these are considered prior distributions, they are based on previous information
    #   that is gained in some other manner, from the datasheet in this case. 
    R1 = BoundedNormal('R1', mu=Datasheets['R1'][0], sd=Datasheets['R1'][0]*Datasheets['R1'][1])
    R2 = BoundedNormal('R2', mu=Datasheets['R2'][0], sd=Datasheets['R2'][0]*Datasheets['R2'][1])
    R3 = BoundedNormal('R3', mu=Datasheets['R3'][0], sd=Datasheets['R3'][0]*Datasheets['R3'][1])
    R4 = BoundedNormal('R4', mu=Datasheets['R4'][0], sd=Datasheets['R4'][0]*Datasheets['R4'][1])
    # don't bound the voltage as negative is possilbe
    V1 = pm.Normal('V1', mu=Datasheets['V1'][0], sd=Datasheets['V1'][0]*Datasheets['V1'][1])
    
    # Match should all be done on paper first to get the full answer, but we will do steps here because one can.
    #   all at once would be much faster.
    
    # just add them, we will not get info on R2_3 at the output unless we wrap them in pm.Deterministic
    R2_3 = R2+R3
    #   R2_3 = pm.Deterministic('R2_3', R2+R3) 
    
    # now get the resistance answer, and we want details
    R_eff = pm.Deterministic('R_eff', 1/(1/R2_3 + 1/R4))
    
    # total current is then just I=V/R
    I_t = pm.Deterministic('I_t', V1/R_eff)
    # and I_1 and I_2
    I_1 = pm.Deterministic('I_1', I_t*R2_3/R4)
    I_2 = pm.Deterministic('I_2', I_t-I_1)
    
    # makes it all a bit clearner to start in a good place
    start = pm.find_MAP()
    # run a fair number of samples, I have a 8 core machine so run 6
    trace = pm.sample(5000, start=start, njobs=6)


logp = 1.3929, ||grad|| = 325.86: 100%|██████████| 41/41 [00:00<00:00, 1926.97it/s]    
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (6 chains in 6 jobs)
NUTS: [V1, R4, R3, R2, R1]
Sampling 6 chains: 100%|██████████| 33000/33000 [00:14<00:00, 2339.32draws/s]

Explore and explain the results

In the summary what is shown is the variable name ans summary information about each variable.

We see that:

$R_{eff} = 5.997 \pm 0.3$, really $R_{eff}$ is between $[5.705, 6.302]$

or $6\Omega \pm 5\%$

$I_t = 1.001 \pm 0.113$, $[0.892, 1.113]$

or $1.001 \pm 11\%$

So if one is going for a 5% accuracy on the current you cannot say that!


In [7]:
I_t_perc = np.percentile(trace['I_t'], (2.5, 97.5))
I_t_mean = trace['I_t'].mean()
print('I_t = {:.4} +/- {:.4}'.format(I_t_mean, I_t_perc[1]-I_t_mean))
print('I_t = {:.4} +/- {:.4}%'.format(I_t_mean, (I_t_perc[1]-I_t_mean)/I_t_mean*100))


I_t = 1.001 +/- 0.05678
I_t = 1.001 +/- 5.67%

In [8]:
pm.summary(trace, varnames=('R_eff', 'I_t', 'I_1', 'I_2'))


Out[8]:
mean sd mc_error hpd_2.5 hpd_97.5 n_eff Rhat
R_eff 5.995583 0.152911 0.000732 5.689104 6.285985 37483.959034 1.000044
I_t 1.001417 0.027590 0.000135 0.948134 1.055179 37864.272722 1.000011
I_1 1.005486 0.077585 0.000375 0.859042 1.157883 36706.968494 1.000025
I_2 -0.004069 0.051935 0.000249 -0.107100 0.093099 36454.164319 1.000024

We also good visual diagnostics in terms of traceplots. The right side shows the draws, this should always look like hash, the left side is the Kernel Density Esimator of the output distribution. For this problem it should look pretty darn Normal.

For both summary and traceplot, leavre off varnames to jst get them all...


In [9]:
pm.traceplot(trace, combined=True, varnames=('R_eff', 'I_t'));


/Users/balarsen/miniconda3/lib/python3.6/site-packages/matplotlib/axes/_base.py:3455: MatplotlibDeprecationWarning: 
The `ymin` argument was deprecated in Matplotlib 3.0 and will be removed in 3.2. Use `bottom` instead.
  alternative='`bottom`', obj_type='argument')
/Users/balarsen/miniconda3/lib/python3.6/site-packages/matplotlib/axes/_base.py:3455: MatplotlibDeprecationWarning: 
The `ymin` argument was deprecated in Matplotlib 3.0 and will be removed in 3.2. Use `bottom` instead.
  alternative='`bottom`', obj_type='argument')

Now lets say that we made some measurments of R1 and we can use those to contrain things

The model has to be a bit different to allow for the measurements. Also we will drop the bounded as while true it is kind of annoying to deal with, cannot observe things on a bounded distribution.


In [ ]:
# setup the model 

# these are the values and precision of each
Datasheets = {'R1':(6.0, 0.01), 
              'R2':(8.0, 0.01),
              'R3':(4.0, 0.01),
              'R4':(12.0, 0.05),
              'V1':(6.0, 0.05),} # 5% on the 12V power supply

measuremnts_R1 = [5.987, 5.987] # we measured it twice

with pm.Model() as model:
    # http://docs.pymc.io/api/distributions/continuous.html#pymc3.distributions.continuous.Normal
    # in Bayes world these are considered prior distributions, they are based on previous information
    #   that is gained in some other manner, from the datasheet in this case. 
    #    R1 = pm.Normal('R1', mu=Datasheets['R1'][0], sd=Datasheets['R1'][0]*Datasheets['R1'][1])
    R2 = pm.Normal('R2', mu=Datasheets['R2'][0], sd=Datasheets['R2'][0]*Datasheets['R2'][1])
    R3 = pm.Normal('R3', mu=Datasheets['R3'][0], sd=Datasheets['R3'][0]*Datasheets['R3'][1])
    R4 = pm.Normal('R4', mu=Datasheets['R4'][0], sd=Datasheets['R4'][0]*Datasheets['R4'][1])
    # don't bound the voltage as negative is possilbe
    V1 = pm.Normal('V1', mu=Datasheets['V1'][0], sd=Datasheets['V1'][0]*Datasheets['V1'][1])

    # so on R1 we took some measurments and so have to build it a bit differently
    # use the datasheet for prior
    R1_mean = pm.Normal('R1_mean', mu=Datasheets['R1'][0], sd=Datasheets['R1'][0]*Datasheets['R1'][1])
    R1 = pm.Normal('R1', mu=R1_mean, sd=Datasheets['R1'][0]*Datasheets['R1'][1], observed=measuremnts_R1)
    
    
    # Match should all be done on paper first to get the full answer, but we will do steps here because one can.
    #   all at once would be much faster.
    
    # just add them, we will not get info on R2_3 at the output unless we wrap them in pm.Deterministic
    R2_3 = R2+R3
    #   R2_3 = pm.Deterministic('R2_3', R2+R3) 
    
    # now get the resistance answer, and we want details
    R_eff = pm.Deterministic('R_eff', 1/(1/R2_3 + 1/R4))
    
    # total current is then just I=V/R
    I_t = pm.Deterministic('I_t', V1/R_eff)
    # and I_1 and I_2
    I_1 = pm.Deterministic('I_1', I_t*R2_3/R4)
    I_2 = pm.Deterministic('I_2', I_t-I_1)
    
    # makes it all a bit clearner to start in a good place
    start = pm.find_MAP()
    # run a fair number of samples, I have a 8 core machine so run 6
    trace = pm.sample(5000, start=start, njobs=6)


logp = 9.4201, ||grad|| = 7.2222: 100%|██████████| 4/4 [00:00<00:00, 1224.79it/s]
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (6 chains in 6 jobs)
NUTS: [R1_mean, V1, R4, R3, R2]
Sampling 6 chains:  26%|██▌       | 8609/33000 [00:03<00:07, 3049.04draws/s]

In [ ]:
pm.summary(trace, varnames=('R_eff', 'I_t', 'I_1', 'I_2'))

In [ ]:
pm.traceplot(trace, combined=True, varnames=('R_eff', 'I_t'));

Results

So while not a huge difference a single measurment of R1 changed the results:

$R_{eff} = [5.705, 6.302]$ became $R_{eff} = [5.688, 6.282]$

$I_t = [0.892, 1.113]$ became $I_t = [0.891, 1.110]$

A not huge difference but a shift down.


In [55]:
I_t_perc = np.percentile(trace['I_t'], (2.5, 97.5))
I_t_mean = trace['I_t'].mean()
print('I_t = {:.4} +/- {:.4}'.format(I_t_mean, I_t_perc[1]-I_t_mean))
print('I_t = {:.4} +/- {:.4}%'.format(I_t_mean, (I_t_perc[1]-I_t_mean)/I_t_mean*100))


I_t = 1.001 +/- 0.1134
I_t = 1.001 +/- 11.33%

How about we buy a better R4?

Buy a 1% not a 5%

rerun first model with the better resistor (different Bayesian Prior)


In [56]:
# setup the model 

# these are the values and precision of each
Datasheets = {'R1':(6.0, 0.01), 
              'R2':(8.0, 0.01),
              'R3':(4.0, 0.01),
              'R4':(12.0, 0.01), # better resistor
              'V1':(6.0, 0.05),} # 5% on the 12V power supply


with pm.Model() as model:
    BoundedNormal = pm.Bound(pm.Normal, lower=0.0)
    # http://docs.pymc.io/api/distributions/continuous.html#pymc3.distributions.continuous.Normal
    # in Bayes world these are considered prior distributions, they are based on previous information
    #   that is gained in some other manner, from the datasheet in this case. 
    R1 = BoundedNormal('R1', mu=Datasheets['R1'][0], sd=Datasheets['R1'][0]*Datasheets['R1'][1])
    R2 = BoundedNormal('R2', mu=Datasheets['R2'][0], sd=Datasheets['R2'][0]*Datasheets['R2'][1])
    R3 = BoundedNormal('R3', mu=Datasheets['R3'][0], sd=Datasheets['R3'][0]*Datasheets['R3'][1])
    R4 = BoundedNormal('R4', mu=Datasheets['R4'][0], sd=Datasheets['R4'][0]*Datasheets['R4'][1])
    # don't bound the voltage as negative is possilbe
    V1 = pm.Normal('V1', mu=Datasheets['V1'][0], sd=Datasheets['V1'][0]*Datasheets['V1'][1])
    
    # Match should all be done on paper first to get the full answer, but we will do steps here because one can.
    #   all at once would be much faster.
    
    # just add them, we will not get info on R2_3 at the output unless we wrap them in pm.Deterministic
    R2_3 = R2+R3
    #   R2_3 = pm.Deterministic('R2_3', R2+R3) 
    
    # now get the resistance answer, and we want details
    R_eff = pm.Deterministic('R_eff', 1/(1/R2_3 + 1/R4))
    
    # total current is then just I=V/R
    I_t = pm.Deterministic('I_t', V1/R_eff)
    # and I_1 and I_2
    I_1 = pm.Deterministic('I_1', I_t*R2_3/R4)
    I_2 = pm.Deterministic('I_2', I_t-I_1)
    
    # makes it all a bit clearner to start in a good place
    start = pm.find_MAP()
    # run a fair number of samples, I have a 8 core machine so run 6
    trace = pm.sample(5000, start=start, njobs=6)


Optimization terminated successfully.
         Current function value: -15.030161
         Iterations: 2
         Function evaluations: 4
         Gradient evaluations: 4
Auto-assigning NUTS sampler...
Initializing NUTS using ADVI...
Average Loss = 277.01:   6%|▌         | 11131/200000 [00:04<01:11, 2634.68it/s]
Convergence archived at 11300
Interrupted at 11,300 [5%]: Average Loss = 3,782.6
100%|██████████| 5500/5500 [00:22<00:00, 248.31it/s]

In [57]:
I_t_perc = np.percentile(trace['I_t'], (2.5, 97.5))
I_t_mean = trace['I_t'].mean()
print('I_t = {:.4} +/- {:.4}'.format(I_t_mean, I_t_perc[1]-I_t_mean))
print('I_t = {:.4} +/- {:.4}%'.format(I_t_mean, (I_t_perc[1]-I_t_mean)/I_t_mean*100))


I_t = 0.9998 +/- 0.09849
I_t = 0.9998 +/- 9.852%

The better resistor helped but are are dominated somewhere else... power supply!


In [ ]: