In many situations, it is very useful to think of a process you wish to model as a succession of random steps. This can describe a wide variety of phenomena: the behavior of the stock market, population dynamics in ecosystems, the properties of polymers, the movement of molecules in liquids or gases, models of neurons in the brain, or Google's PageRank search algorithm. This type of model is known as a "random walk", and while the phenomena being modeled vary tremendously, the underlying process is simple. In this exercise, we are going to model such a random walk and learn about some of its behaviors!
Random walks are a great first step towards modeling dynamic systems. In fact, they provide a near-perfect description of diffusive processes like Brownian motion. Here we will use our knowledge of loops and if-statements to run random walk simulations, study their properties, and connect them to the Einstein diffusion relation.
Put the names of your group members here!
Brownian motion is most commonly exemplified by a single grain of pollen suspended in a liquid solution. It is named after Robert Brown, who studied such pollen grains under a microscope in 1827. But in fact a little Wikipedia-ing reveals that this phenomenon was documented much earlier. An excerpt from a poem by Lucretius, written around 60 BC, is remarkably on point:
"Observe what happens when sunbeams are admitted into a building and shed light on its shadowy places. You will see a multitude of tiny particles mingling in a multitude of ways... their dancing is an actual indication of underlying movements of matter that are hidden from our sight... It originates with the atoms which move of themselves [i.e., spontaneously]. Then those small compound bodies that are least removed from the impetus of the atoms are set in motion by the impact of their invisible blows and in turn cannon against slightly larger bodies. So the movement mounts up from the atoms and gradually emerges to the level of our senses, so that those bodies are in motion that we see in sunbeams, moved by blows that remain invisible."
Regardless, Einstein comes along in 1905 and rigorously ties the motion of the large particles (e.g. pollen) to the motions of individual atoms. A result of this is the Einstein diffusion relation in 3 dimensions:
$\langle (r(t)-r(0))^2 \rangle = 6Dt$
where $r(t)$ is the position of a particle at time $t$, $D$ is the diffusion coefficient, and the angle brackets denote an average over an ensemble of measurements.
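To get a feel for the numbers: a typical small molecule in water has a diffusion coefficient on the order of $D \sim 10^{-9}\ \mathrm{m^2/s}$ (a rough, representative value), so after $t = 1\ \mathrm{s}$ its root-mean-square displacement is about

$$\sqrt{\langle (r(t)-r(0))^2 \rangle} = \sqrt{6Dt} \approx \sqrt{6 \times 10^{-9}\ \mathrm{m^2}} \approx 8 \times 10^{-5}\ \mathrm{m},$$

i.e. a bit less than a tenth of a millimeter, even though each individual molecular kick is far too small to see.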
Today we are going to run random walk simulations and test their ability to model the behavior of diffusive particles.
Imagine that you draw a line on the floor, with a mark every foot. You start at the middle of the line (the point you have decided is the "origin", at $x=0$). You then flip a "fair" coin N times ("fair" means that it has equal chances of coming up heads and tails). Every time the coin comes up heads, you take one step to the right. Every time it comes up tails, you take a step to the left.
The questions we're going to answer:
First: as a group, come up with a solution to this problem on your whiteboard. Use a flow chart, pseudo-code, diagrams, or anything else that you need to get started. Check with an instructor before you continue!
Then: In pairs, write a code that creates a random integer, 1 or 0 (heads or tails). Run it a few times and print out the value to make sure it works - we'll use this as the basis of our random walk model.
In [ ]:
# put your code here.
import random

a = random.randint(0, 1)   # randomly returns 0 (tails) or 1 (heads)
print(a)
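If you want a quick sanity check that the "coin" really is fair, one option is to flip it many times and look at the fraction of heads, which should land close to 0.5. This is just an optional sketch using the same `random.randint` call as above; the flip count of 10,000 is an arbitrary choice.
In [ ]:
# Optional sanity check: flip the coin many times and count the heads.
# For a fair coin, the fraction of heads should be close to 0.5.
n_flips = 10000
heads = sum(random.randint(0, 1) for _ in range(n_flips))
print("fraction of heads:", heads / n_flips)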
Now, write a code in the space provided below to do a single random walk with $N_{step} = 1000$. You'll start at $x=0$ and take a step right if $a=1$ and left if $a=0$. At the end print the value of $x$. Where did your particle go? Run it a couple of times to see how this changes.
In [ ]:
# put your code here.
x = 0             # initial position (will be modified)
n_steps = 1000    # number of steps

for i in range(n_steps):
    a = random.randint(0, 1)   # flip the coin
    if a == 0:
        x -= 1                 # tails: step left
    else:
        x += 1                 # heads: step right

print(x)
OK, great! But it would be much better if we could see what is happening. Copy the code from above and modify it to keep track of the step number and your position relative to the origin, and plot position versus step number! (Hint: start with empty arrays or lists, and append to them.) Run this a couple of times to see how it changes.
In [ ]:
# put your code here.
%matplotlib inline
import matplotlib.pyplot as plt

x = 0             # initial position (will be modified)
n_steps = 1000    # number of steps
traj = []         # position after each step
step = []         # step number

for i in range(n_steps):
    a = random.randint(0, 1)
    if a == 0:
        x -= 1
    else:
        x += 1
    traj.append(x)
    step.append(i)

print(x)
plt.plot(step, traj)
Now, we want to see how several of these trajectories look at the same time. So, now copy and paste your code from above and add an additional loop where you make five separate trajectories, and plot each of them separately. There are several ways to store the trajectories - you can make a 2D array, or a list composed of lists of trajectories. Your choice!
In [ ]:
# put your code here.
%matplotlib inline
import matplotlib.pyplot as plt

n_steps = 1000    # number of steps
n_traj = 5        # number of trajectories
traj = []         # list of trajectories (one list of positions per walk)
step = []         # list of step numbers (one list per walk)

for trj in range(n_traj):
    x = 0         # initial position (will be modified)
    traj.append([])
    step.append([])
    for i in range(n_steps):
        a = random.randint(0, 1)
        if a == 0:
            x -= 1
        else:
            x += 1
        traj[trj].append(x)
        step[trj].append(i)
    plt.plot(step[trj], traj[trj])
Ok. Now we're getting somewhere. Let's increase the number of trajectories from 5 to 1000. But let's also be nice to our computer and not make it remember the whole trajectory, just the final points. So get rid of the current append commands, and instead calculate the average squared final position of the particle, and also plot a histogram of the final positions! Hint: use a numpy array and append the final positions to it; you can use the array's functionality to calculate the mean value.
In [ ]:
# put your code here.
import numpy as np

n_steps = 1000      # number of steps per walk
n_traj = 1000       # number of walks
dist = np.array([]) # final position of each walk

for trj in range(n_traj):
    x = 0           # initial position (will be modified)
    for i in range(n_steps):
        a = random.randint(0, 1)
        if a == 0:
            x -= 1
        else:
            x += 1
    dist = np.append(dist, x)   # keep only the final position

plt.hist(dist, bins=50)
print("average squared final position is:", (dist**2).mean())
We're almost ready for Einstein! All we have to do is add another layer to our loop structure. We need to get the average squared final position of the particle as a function of time. Extend the code above to get the average squared final position at 10 different time points (t = 100, 200, ..., 1000), and save the results to a list. Plot the results versus time below, and use axis labels.
In [ ]:
# put your code here.
final_pos_avg = []              # mean squared final position at each time point
times = range(100, 1100, 100)   # 10 time points: 100, 200, ..., 1000
n_traj = 1000                   # number of walks per time point

for ni in range(len(times)):
    n = times[ni]               # number of steps (i.e. the time) for this point
    final_pos_avg.append(0)
    for trj in range(n_traj):
        x = 0
        for i in range(n):
            a = random.randint(0, 1)
            if a == 0:
                x -= 1
            else:
                x += 1
        final_pos_avg[ni] += x**2
    final_pos_avg[ni] /= n_traj  # average over the ensemble of walks
In [ ]:
plt.xlabel('t')
plt.ylabel('<x**2>')
plt.xlim(0, 1200)
plt.ylim(0, 1200)
plt.plot(times, final_pos_avg)
plt.plot([0, 1200], [0, 1200], 'r--')   # reference line: <x**2> = t
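If you'd like a more quantitative comparison, here is a minimal sketch (it assumes the `times` and `final_pos_avg` lists computed above) that fits a straight line to the simulated data with `np.polyfit`. For a walk that takes one unit-length step per unit time, the slope should come out close to 1.
In [ ]:
# Minimal sketch: fit <x**2> versus t to a straight line.
# Assumes `times` and `final_pos_avg` from the cells above.
slope, intercept = np.polyfit(list(times), final_pos_avg, 1)
print("fitted slope:", slope, "  intercept:", intercept)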
Look at the above plot. Is this what you would have expected from the Einstein diffusion equation? Why or why not?
Put your answer here!
Now, we're going to do the same thing, but in two dimensions, x and y. This time, you will start at the origin, pick a random direction (up, down, left, or right), and take one step in that direction. You will then randomly pick a new direction, take a step, and so on, for a total of $N_{step}$ steps.
Questions:
First: As before, come up with a solution to this problem on your whiteboard as a group. Check with an instructor before you continue!
Then: In pairs, write a code in the space provided below to answer these questions!
Start by modifying your coin-flip code to return an integer that is 0,1,2, or 3 (left, right, up, down). Test it a couple of times to see if it works!
In [ ]:
# put your code here.
import random
a = random.randint(0,3)
print(a)
As above, loop over this, but this time use the following rule: (0 => go left, 1 => go right, 2 => go up, 3 => go down). Start at x=0,y=0, and run for 1000 steps, while keeping track of the trajectory. Plot the result as x versus y. Do this a few times and see how it changes!
In [ ]:
# put your code here.
x = y = 0         # initial position (will be modified)
n_steps = 1000    # number of steps
x_traj = []
y_traj = []
step = []

for i in range(n_steps):
    a = random.randint(0, 3)
    if a == 0:       # go left
        x -= 1
    elif a == 1:     # go right
        x += 1
    elif a == 2:     # go up
        y += 1
    else:            # go down
        y -= 1
    x_traj.append(x)
    y_traj.append(y)
    step.append(i)

plt.plot(x_traj, y_traj)
Now plot the x and y values versus time, individually.
In [ ]:
# put your code here.
plt.plot(step, x_traj, 'r-', label='x')
plt.plot(step, y_traj, 'b-', label='y')
plt.xlabel('step')
plt.legend()
How do these curves compare to the results of your 1D random walk?
They look pretty much the same!
Recall the Einstein equation for 3 dimensions: $\langle (r(t)-r(0))^2 \rangle = 6Dt$. Given that $r(0) = 0$ and that $r^2 = x^2 + y^2 + z^2$, what do you think the corresponding Einstein equations are for 1D and 2D?
I think that since $\langle x^2 + y^2 + z^2 \rangle = 6Dt$, and since all directions should be treated equally, each direction should contribute $2Dt$ to the sum. So for 1D the r.h.s. is $2Dt$, and for 2D the r.h.s. is $4Dt$.
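If you have extra time, here is a minimal sketch of a numerical check of the 2D case: it runs many 2D walks and estimates $\langle x^2 + y^2 \rangle$ after $N$ steps. For the walk described above (one unit step per time step, shared between the x and y directions), the mean squared displacement should come out close to $N$, i.e. linear in time as $4Dt$ predicts, with $D = 1/4$ for this particular walk.
In [ ]:
# Minimal sketch: estimate <x**2 + y**2> for the 2D walk after n_steps steps.
# For this walk (one unit step per time step), the result should be close to
# n_steps, consistent with <r**2> = 4*D*t and D = 1/4.
import random
import numpy as np

n_steps = 1000
n_traj = 1000
r2 = np.zeros(n_traj)   # squared final displacement of each walk

for trj in range(n_traj):
    x = y = 0
    for i in range(n_steps):
        a = random.randint(0, 3)
        if a == 0:       # go left
            x -= 1
        elif a == 1:     # go right
            x += 1
        elif a == 2:     # go up
            y += 1
        else:            # go down
            y -= 1
    r2[trj] = x**2 + y**2

print("<x**2 + y**2> =", r2.mean(), " (expect about", n_steps, ")")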
If you have time, copy and paste your 1D random walk code in the cell below. This time, modify your code so that the coin toss is biased - that you are more likely to take a step in one direction than in the other (i.e., the probability of stepping to the right is $p_{step}$, of stepping to the left is $1-p_{step}$, and $p_{step} \neq 0.5$).
How does the distribution of final positions, as well as the mean distance from the origin, change as $p_{step}$ varies from 0.5?
Hint: How might you use the floating-point random number generator to calculate the chance of taking steps?
In [ ]:
# put your code for Part 3 here. Add extra cells as necessary!
n_steps = 100       # number of steps per walk
n_traj = 1000       # number of walks
prob_right = 0.55   # probability of stepping to the right
dist = np.array([]) # final position of each walk

for trj in range(n_traj):
    x = 0           # initial position (will be modified)
    for i in range(n_steps):
        a = random.random()          # uniform random float in [0, 1)
        if a > prob_right:           # step left with probability 1 - prob_right
            x -= 1
        else:                        # step right with probability prob_right
            x += 1
    dist = np.append(dist, x)

plt.hist(dist)
print("average final position is:", dist.mean(), "average distance from origin is:", np.abs(dist).mean())
For 100 steps, P = probability of stepping to the right. (I am only doing P >= 0.5 because the problem is symmetrical.)
| P    | &lt;dist&gt; | &lt;abs_dist&gt; |
|------|--------------|------------------|
| 0.5  | -0.01        | 8.03             |
| 0.55 | 9.96         | 11.62            |
| 0.6  | 19.94        | 20.01            |
| 0.8  | 60.06        | 60.06            |
| 1.0  | 100.0        | 100.0            |
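As a rough check, these numbers are about what you'd expect analytically. Each step is $+1$ with probability $p$ and $-1$ with probability $1-p$, so after $N$ steps

$$\langle x \rangle = N(2p-1), \qquad \mathrm{Var}(x) = 4Np(1-p).$$

For $N = 100$ and $p = 0.55$ this gives $\langle x \rangle = 10$ (the table shows 9.96), and for $p = 0.5$ the mean is 0 with a standard deviation of 10, so the mean absolute distance is roughly $10\sqrt{2/\pi} \approx 8$ (the table shows 8.03). As $p$ moves away from 0.5, the distribution shifts steadily in one direction and the mean distance from the origin becomes dominated by the drift rather than by the random spread.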
Put your answers here!