In [14]:
from IPython.display import Math
import matplotlib.pyplot as plt
#from pylab import * #MATLAB-like API
%matplotlib inline

Linear Program (LP): A Type of Optimization Problem

decision variables: $$x_{1}, x_{2}, x_{3}, x_{4}$$

goal: find values for decision variables

another goal: maximize this objective function: $$2x_{1} + 3x_{2} - x_{3} + x_{4}$$

which is linear (i.e., no exponentials, trig functions, etc.), so this is a linear program

linear functions are built as sums of terms, where each term is either a constant or a constant times a single variable (e.g., $3x_{1} - 2x_{2} + 7$ is linear; $x_{1}x_{2}$ and $x_{1}^{2}$ are not)

maximize subject to some constraints (linear equalities or inequalities; linear left-hand side, constant right-hand side):

subject to: $$x_{1} - x_{2} \leq 10$$ $$2x_{1} + x_{2} - x_{3} \geq -5$$ $$-x_{2} + x_{4} = 4$$
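As a sketch (not part of the original notes), this LP can be handed to scipy.optimize.linprog. linprog minimizes, so the objective is negated, the $\geq$ constraint is flipped into $\leq$ form, and variables are assumed nonnegative by default; even with those bounds this particular LP is unbounded (e.g., $x_{2}$ and $x_{4}$ can grow together), so the solver reports that rather than an optimum.

from scipy.optimize import linprog

# maximize 2*x1 + 3*x2 - x3 + x4  <=>  minimize its negation
c = [-2, -3, 1, -1]

# inequalities in A_ub @ x <= b_ub form:
#   x1 - x2 <= 10
#   2*x1 + x2 - x3 >= -5  ->  -2*x1 - x2 + x3 <= 5
A_ub = [[ 1, -1, 0, 0],
        [-2, -1, 1, 0]]
b_ub = [10, 5]

# equality: -x2 + x4 = 4
A_eq = [[0, -1, 0, 1]]
b_eq = [4]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(res.status, res.message)  # expect status 3: problem is unbounded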

m = number of training examples

x's = "input" variable/features

y's = "output" variable/"target" variable

(x, y) = a single training example

$(x^{(i)}, y^{(i)})$ = $i$-th training example

Basics of how it works:

first: Training set -> Learning Algorithm -> h (hypothesis: maps x's to y's)

then: x -> h -> y (an estimated value for y)

how to represent h: $$h_{\theta}(x) = \theta_{0} + \theta_{1}x$$

shorthand: $$h(x)$$

h is simply a linear function

this model is called univariate linear regression, i.e., linear regression with one input variable (x)
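As a minimal sketch (not in the original notes), $\theta_{0}$ and $\theta_{1}$ can be fit by ordinary least squares with numpy, using the same training points that are scattered in the plotting cell below:

import numpy as np

# training set: the points plotted in the next cell
x_train = np.array([1, 2, 3.5, 4, 5.2])
y_train = np.array([1.5, 3, 3.5, 4.3, 5.9])

# least-squares fit of h_theta(x) = theta0 + theta1 * x;
# np.polyfit returns coefficients highest-degree first
theta1, theta0 = np.polyfit(x_train, y_train, 1)

def h(x):
    return theta0 + theta1 * x

print(theta0, theta1)  # learned parameters
print(h(3))            # estimated y for a new input x = 3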


In [59]:
## MATPLOT-like API
#x = linspace(0, 5, 10)
#y = x ** 2
#figure()
#plot(x, y, 'r')
#xlabel('x')
#ylabel('y')
#title('title')
#show()

## OO API
import numpy as np

x = np.linspace(0, 6, 10)  # x was never defined above (the linspace call is commented out)

fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8]) # left, bottom, width, height (range 0 to 1)

axes.plot(x, x*1.2, label=r"$h(x) = \theta_{0} + \theta_{1}x$")
x_coords = [1, 2, 3.5, 4, 5.2]
y_coords = [1.5, 3, 3.5, 4.3, 5.9]
axes.scatter(x_coords, y_coords, marker='x', color='r')
axes.legend(loc=2)
axes.set_xlabel('x')
axes.set_ylabel('y')
axes.set_title('title')


Out[59]:
[figure: training points (red x markers) with the line $h(x) = \theta_{0} + \theta_{1}x$]
