Racial discrimination continues to be pervasive in cultures throughout the world. Researchers examined the level of racial discrimination in the United States labor market by randomly assigning black-sounding or white-sounding names to otherwise identical resumes and observing the impact on requests for interviews from employers.
In the dataset provided, each row represents a resume. The 'race' column has two values, 'b' and 'w', indicating a black-sounding or a white-sounding name. The 'call' column has two values, 1 and 0, indicating whether or not the resume received a callback from an employer.
Note that the 'b' and 'w' values in race are assigned randomly to the resumes when presented to the employer.
You will perform a statistical analysis to establish whether race has a significant impact on the rate of callbacks for resumes.
Answer the following questions below in this notebook and submit it to your GitHub account.
You can include written notes in notebook cells using Markdown:
In [1]:
import pandas as pd
import numpy as np
from scipy import stats
In [2]:
data = pd.io.stata.read_stata('data/us_job_market_discrimination.dta')
In [3]:
# Split the resumes by race and count the callbacks in each group
df_race_b = data[data.race == 'b']
df_race_w = data[data.race == 'w']

no_calls_b = df_race_b.call.sum()   # callbacks for black-sounding names
no_calls_w = df_race_w.call.sum()   # callbacks for white-sounding names

print(len(df_race_b))
print(len(df_race_w))
print("The number of callbacks for black-sounding names is %d and the number of callbacks for white-sounding names is %d" % (no_calls_b, no_calls_w))
In [4]:
data.head()
Out[4]:
In [5]:
# Callback rates (sample proportions) for each group
prob_b = no_calls_b / len(df_race_b)
prob_w = no_calls_w / len(df_race_w)
print(prob_b, prob_w)

# Observed difference in callback rates between the two groups
difference_prob = abs(prob_b - prob_w)
difference_prob
Out[5]:
A z-test is more appropriate for this example than a t-test. The outcome ('call') is a binary categorical variable, so rather than comparing means we compare the proportions of successes (callbacks) in the two groups. The hypothesis compares the difference between the two sample proportions to a null value of zero, which is exactly the setting for a two-proportion z-test.
The Central Limit Theorem applies to proportions as well: provided each group has enough successes and failures, the sampling distribution of the difference in proportions is approximately normal.
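As a quick sanity check on that assumption, the usual rule of thumb is that each group should contain at least about ten successes (callbacks) and ten failures. The sketch below uses the counts computed above; the threshold of 10 is the conventional rule of thumb, not something dictated by this dataset.
In [ ]:
# Rule-of-thumb check for the normal approximation: each group needs
# enough callbacks (successes) and non-callbacks (failures).
for label, n, successes in [('b', len(df_race_b), no_calls_b),
                            ('w', len(df_race_w), no_calls_w)]:
    failures = n - successes
    print("race=%s: callbacks=%d, no callbacks=%d, both >= 10: %s"
          % (label, successes, failures, successes >= 10 and failures >= 10))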
In [6]:
standard_error = np.sqrt(prob_w*(1 - prob_w)/len(df_race_w) + prob_b*(1 - prob_b)/len(df_race_b))
critical_value = 1.96  # z critical value for a 95% confidence interval
margin_error = critical_value * standard_error
print("The 95%% confidence interval for the difference in callback rates between "
      "white-sounding and black-sounding names is from %f to %f"
      % (difference_prob - margin_error, difference_prob + margin_error))
In [7]:
from statsmodels.stats.weightstats import ztest
z_test = ztest(df_race_w.call,df_race_b.call, alternative = 'two-sided')
print("The p-value is given by %F and the z -score is given by %F" %(z_test[1],z_test[0]))
The p-value is far below 0.05, so the result is statistically significant. At the 95% confidence level we can reject the null hypothesis and conclude that resumes with white-sounding names receive more callbacks than resumes with black-sounding names. However, on this analysis alone we cannot say that race is the most important factor in callback success, only that the perceived race of the name has a significant effect.
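A distribution-free way to corroborate this conclusion (not part of the original analysis) is a permutation test: because the names were assigned randomly, the race label is exchangeable under the null hypothesis, so shuffling it repeatedly shows how often a difference at least as large as the observed one arises by chance. The 10,000 permutations and the fixed seed below are arbitrary choices.
In [ ]:
# Permutation test: shuffle the race labels and recompute the difference in callback rates
np.random.seed(42)
observed_diff = prob_w - prob_b
calls = data.call.values
n_w = len(df_race_w)
perm_diffs = np.empty(10000)
for i in range(10000):
    shuffled = np.random.permutation(calls)
    perm_diffs[i] = shuffled[:n_w].mean() - shuffled[n_w:].mean()
print("Permutation p-value: %f" % np.mean(np.abs(perm_diffs) >= abs(observed_diff)))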