**(C) 2017-2019 by Damir Cavar**

**Download:** This and various other Jupyter notebooks are available from my GitHub repo.

*Machine Learning for NLP: Deep Learning* (Topics in Artificial Intelligence) taught at Indiana University in Spring 2018.

There are many online examples and tutorials on perceptrons and learning. Here is a list of some articles:

- Wikipedia on Perceptrons
- Jurafsky and Martin (ed. 3), Chapter 8

We import *numpy* and use its *exp* function. We could use the same function from the *math* module, or from some other module like *scipy*. The *sigmoid* function is defined as in the textbook:

In [1]:
```
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))
```
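As a quick sanity check (not part of the original notebook), we can evaluate the sigmoid at a few points: it returns 0.5 at $z = 0$ and saturates toward 0 and 1 as $z$ grows large in either direction:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

print(sigmoid(0.0))   # 0.5
print(sigmoid(-5.0))  # ~0.0067, close to 0
print(sigmoid(5.0))   # ~0.9933, close to 1
```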

Our example data, **weights** $w$, **bias** $b$, and **input** $x$ are defined as:

In [2]:
```
w = np.array([0.2, 0.3, 0.8])
b = 0.5
x = np.array([0.5, 0.6, 0.1])
```

We compute the **dot-product** $w \cdot x$ and add the **bias** $b$ to it. The sigmoid function defined above converts this $z$ value to the **activation value** $a$ of the unit:

In [3]:
```
z = w.dot(x) + b
print("z:", z)
print("a:", sigmoid(z))
```
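The same computation extends to a batch of inputs: stacking several input vectors as the rows of a matrix $X$, the product $Xw + b$ yields one $z$ per row. A minimal sketch, where the extra rows of `X` are illustrative values not taken from the notebook:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

w = np.array([0.2, 0.3, 0.8])
b = 0.5

# three input vectors stacked as rows; the first row is the example above
X = np.array([[0.5, 0.6, 0.1],
              [0.0, 0.0, 0.0],
              [1.0, 1.0, 1.0]])

z = X.dot(w) + b  # one z per input row: [0.86, 0.5, 1.8]
a = sigmoid(z)    # elementwise activation, one a per input row
print("z:", z)
print("a:", a)
```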

The task is to implement a simple **perceptron** to compute logical operations like AND, OR, and XOR.

- Input: $x_1$ and $x_2$
- Bias: $b = -1$ for AND; $b = 0$ for OR
- Weights: $w = [1, 1]$

with the following activation function:

$$ y = \begin{cases} 0 & \quad \text{if } w \cdot x + b \leq 0\\ 1 & \quad \text{if } w \cdot x + b > 0 \end{cases} $$

We can define this activation function in Python as:

In [4]:
```
def activation(z):
    if z > 0:
        return 1
    return 0
```

For AND we could implement a perceptron as:

In [5]:
```
w = np.array([1, 1])
b = -1
x = np.array([0, 0])
print("0 AND 0:", activation(w.dot(x) + b))
x = np.array([1, 0])
print("1 AND 0:", activation(w.dot(x) + b))
x = np.array([0, 1])
print("0 AND 1:", activation(w.dot(x) + b))
x = np.array([1, 1])
print("1 AND 1:", activation(w.dot(x) + b))
```

For OR we could implement a perceptron as:

In [6]:
```
w = np.array([1, 1])
b = 0
x = np.array([0, 0])
print("0 OR 0:", activation(w.dot(x) + b))
x = np.array([1, 0])
print("1 OR 0:", activation(w.dot(x) + b))
x = np.array([0, 1])
print("0 OR 1:", activation(w.dot(x) + b))
x = np.array([1, 1])
print("1 OR 1:", activation(w.dot(x) + b))
```

There is no way to implement a perceptron for XOR this way: XOR is not linearly separable, so no single choice of weights $w$ and bias $b$ can separate its positive and negative cases with one linear boundary.
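XOR does become computable once we stack perceptrons: a hidden layer can compute OR and NAND of the inputs, and an output unit then ANDs the two hidden values. A minimal sketch using the step activation from above; the particular hidden-layer weights and biases here are one standard choice, not taken from the notebook:

```python
import numpy as np

def activation(z):
    # step activation: 1 if z > 0, else 0
    return 1 if z > 0 else 0

def xor(x1, x2):
    x = np.array([x1, x2])
    h1 = activation(np.array([1, 1]).dot(x) + 0)      # hidden unit 1: OR
    h2 = activation(np.array([-1, -1]).dot(x) + 1.5)  # hidden unit 2: NAND
    h = np.array([h1, h2])
    return activation(np.array([1, 1]).dot(h) - 1)    # output unit: AND

for a, b in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    print(a, "XOR", b, "=", xor(a, b))  # 0, 1, 1, 0
```

This is exactly the multi-layer step that motivates moving from single perceptrons to networks.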
