Mean squared displacement from numpy random.normal

I am running a random walk simulation using step lengths drawn from numpy random.normal. I understand that I should get a mean squared displacement of σ(t)² when I use

x += random.normal(loc=0.0, scale=sigma)

which I do.

What I don't understand is why I also get an MSD of σ(t)² when I use

x += sigma*random.normal(loc=0.0, scale=1.0)
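
A quick standalone check (a minimal sketch, separate from the simulation below, with an arbitrary sample size) shows that the two ways of drawing steps have the same spread:

import numpy as np

sigma = np.sqrt(2*0.001)  #same per-step standard deviation as in the simulation
n = 1000000               #sample size for the check (arbitrary)

steps_scaled = sigma*np.random.normal(loc=0.0, scale=1.0, size=n)
steps_direct = np.random.normal(loc=0.0, scale=sigma, size=n)

#both sample standard deviations come out very close to sigma
print(np.std(steps_scaled), np.std(steps_direct), sigma)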

Here is the full code for my simulation:

import numpy as np
import matplotlib.pyplot as plt

dt = 0.001 #length of time step
tf = 10.0 #time to run simulation
tmax = int(tf/dt) #number of steps to run

sigma = np.sqrt(2*dt) #standard deviation of each step

run_n = 1000 #number of runs
xp1s = np.zeros(tmax) #accumulated MSD for sigma*N(0.0,1.0)
xp2s = np.zeros(tmax) #accumulated MSD for N(0.0,sigma)
for run in range(run_n):
    #how much the particle moves at each point in time
    xp1 = sigma*np.random.normal(0.0,1.0, size = tmax) 
    xp2 = np.random.normal(0.0,sigma, size = tmax)
    #position at each time is the sum of steps before it
    xp1tmp = np.cumsum(xp1) 
    xp2tmp = np.cumsum(xp2)
    
    #get the MSD for each point in time
    xp1s += xp1tmp**2 /run_n
    xp2s += xp2tmp**2 /run_n

#plot the averaged MSD curves once all runs have been accumulated
plt.plot(np.linspace(0,tf,tmax),xp1s)
plt.plot(np.linspace(0,tf,tmax),xp2s)
plt.show()
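
Since sigma² = 2*dt, the MSD after n steps should be n*sigma² = 2t, i.e. a straight line of slope 2 through the origin. If it helps, the theoretical line can be overlaid before plt.show() (these two lines are an illustration, not part of my original script):

t = np.linspace(0,tf,tmax)
plt.plot(t, 2*t) #theoretical MSD: (t/dt)*sigma**2 = 2*t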

I can't share images, but I do get an MSD of σ(t)² for each simulation.


