Examining Racial Discrimination in the US Job Market

Background

Racial discrimination continues to be pervasive in cultures throughout the world. Researchers examined the level of racial discrimination in the United States labor market by randomly assigning identical résumés to black-sounding or white-sounding names and observing the impact on requests for interviews from employers.

Data

In the dataset provided, each row represents a resume. The 'race' column has two values, 'b' and 'w', indicating black-sounding and white-sounding. The column 'call' has two values, 1 and 0, indicating whether the resume received a call from employers or not.

Note that the 'b' and 'w' values in race are assigned randomly to the resumes when presented to the employer.

Exercises

You will perform a statistical analysis to establish whether race has a significant impact on the rate of callbacks for resumes.

Answer the following questions below in this notebook and submit it to your GitHub account.

  1. What test is appropriate for this problem? Does CLT apply?
  2. What are the null and alternate hypotheses?
  3. Compute margin of error, confidence interval, and p-value.
  4. Write a story describing the statistical significance in the context of the original problem.
  5. Does your analysis mean that race/name is the most important factor in callback success? Why or why not? If not, how would you amend your analysis?

You can include written notes in notebook cells using Markdown.

In [2]:
import pandas as pd
import numpy as np
from scipy import stats

In [3]:
data = pd.io.stata.read_stata('data/us_job_market_discrimination.dta')

In [4]:
# number of callbacks for black-sounding names
sum(data[data.race=='b'].call)


Out[4]:
157.0

In [5]:
data.head()


Out[5]:
id ad education ofjobs yearsexp honors volunteer military empholes occupspecific ... compreq orgreq manuf transcom bankreal trade busservice othservice missind ownership
0 b 1 4 2 6 0 0 0 1 17 ... 1.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0
1 b 1 3 3 6 0 1 1 0 316 ... 1.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0
2 b 1 4 1 6 0 0 0 0 19 ... 1.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0
3 b 1 3 4 6 0 1 0 1 313 ... 1.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0
4 b 1 3 3 22 0 0 0 0 313 ... 1.0 1.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 Nonprofit

5 rows × 65 columns

What test is appropriate for this problem? Does CLT apply?

The appropriate test is a two-sample z-test comparing two population proportions.


In [25]:
print(data.race.value_counts())


w    2435
b    2435
Name: race, dtype: int64

Because both groups are large (2435 résumés each) and each group has well over 10 successes (callbacks) and 10 failures, the Central Limit Theorem applies: the sampling distribution of the difference in sample proportions is approximately normal. The names were also randomly assigned to résumés, so the observations can be treated as independent.
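Beyond the n > 30 rule of thumb, the condition that matters for proportions is that each group has at least roughly 10 successes and 10 failures. A minimal check, using the callback counts from this dataset (157 for 'b', as computed above, and 235 for 'w', i.e. pw × 2435):

```python
# Normal-approximation check for a two-proportion z-test: each group
# needs roughly >= 10 successes and >= 10 failures.
n = 2435  # resumes per group
counts = {"b": 157, "w": 235}  # callbacks per group (from this dataset)
for race, calls in counts.items():
    successes, failures = calls, n - calls
    print(race, successes, failures, successes >= 10 and failures >= 10)
```

Both groups pass comfortably, so the normal approximation is safe here.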

What are the null and alternate hypotheses?


In [57]:
freqs = data.call.value_counts()
p = freqs[1.0] / data.shape[0]
p


Out[57]:
0.080492813141683772

The null hypothesis states that race has no impact on the probability of a callback, H0: pw - pb = 0. The alternative hypothesis states that there is a difference between the two proportions, H1: pw - pb != 0, so the test is two-tailed.

Compute margin of error, confidence interval, and p-value.

I choose a significance level of 5%, so I will compute the 95% confidence interval for pw - pb.


In [83]:
cnts = data[['race', 'call']].groupby(['race', 'call']).size()
pb = cnts.loc['b'].loc[1] / cnts.loc['b'].sum() # sample proportion
pw = cnts.loc['w'].loc[1] / cnts.loc['w'].sum() # sample proportion
var_pw = pw * (1-pw) / 2435
var_pb = pb * (1-pb) / 2435
std_pw_pb = np.sqrt(var_pw + var_pb)
pw_pb = pw - pb
(pb, pw, var_pw, var_pb, pw_pb, std_pw_pb)


Out[83]:
(0.064476386036960986,
 0.096509240246406572,
 3.5809119833046381e-05,
 2.4771737856498466e-05,
 0.032032854209445585,
 0.0077833705866767544)

In [84]:
d = 1.96 * std_pw_pb
(d, pw_pb - d, pw_pb + d)


Out[84]:
(0.015255406349886438, 0.016777447859559147, 0.047288260559332024)
  • Critical z-value (95% confidence): 1.96
  • Margin of error: 0.0153
  • 95% confidence interval for pw - pb: (0.0168, 0.0473)
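Since the .dta file isn't bundled here, the margin of error and interval can be reproduced from the raw counts alone (157 and 235 callbacks out of 2435 résumés per group, matching the proportions above). A self-contained sketch:

```python
import numpy as np

def two_prop_ci(x1, n1, x2, n2, z_crit=1.96):
    """95% CI for p1 - p2 using the unpooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = z_crit * se  # margin of error
    diff = p1 - p2
    return diff, d, (diff - d, diff + d)

# Callback counts from this notebook: 235/2435 ('w') vs 157/2435 ('b')
diff, moe, ci = two_prop_ci(235, 2435, 157, 2435)
print(f"diff = {diff:.4f}, MoE = {moe:.4f}, CI = ({ci[0]:.4f}, {ci[1]:.4f})")
```

This reproduces the interval computed from the DataFrame above.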

Write a story describing the statistical significance in the context of the original problem.

Assuming H0 is true, I will reject the null hypothesis if the p-value, i.e. the probability of observing a difference at least as extreme as the measured pw - pb, is below 5%.

I'm going to calculate the z-score: how many standard deviations pw - pb lies from 0 under H0.


In [81]:
# Pooled standard error of pw - pb under H0; each group has n = 2435
stddev_p = np.sqrt((2 * p * (1 - p)) / 2435)
z = (pw - pb - 0) / stddev_p
p_value = 2 * stats.norm.sf(z)  # two-tailed
print(f"SE = {stddev_p:.4f}, z = {z:.2f}, p-value = {p_value:.1e}")


SE = 0.0078, z = 4.11, p-value = 4.0e-05

The z-score is 4.11, far beyond the 1.96 critical value, and the two-tailed p-value of about 4e-05 is well below the 5% significance level. Therefore, I reject the null hypothesis: résumés with white-sounding names receive significantly more callbacks than identical résumés with black-sounding names.
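As a cross-check that does not rely on the normal approximation, a permutation test can be run from the callback counts alone (157 of 2435 for 'b' and 235 of 2435 for 'w'). This is a sketch added for illustration, not part of the original analysis:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2435
# Rebuild the 0/1 callback outcomes from the counts in this notebook
calls_w = np.concatenate([np.ones(235), np.zeros(n - 235)])
calls_b = np.concatenate([np.ones(157), np.zeros(n - 157)])
observed = calls_w.mean() - calls_b.mean()  # = pw - pb

# Shuffle the pooled outcomes to simulate "race has no effect on calls"
pooled = np.concatenate([calls_w, calls_b])
reps = 10_000
extreme = 0
for _ in range(reps):
    rng.shuffle(pooled)
    diff = pooled[:n].mean() - pooled[n:].mean()
    if abs(diff) >= observed:
        extreme += 1
p_perm = (extreme + 1) / (reps + 1)  # add-one smoothing avoids p = 0
print(f"observed diff = {observed:.4f}, permutation p-value <= {p_perm:.4f}")
```

The permuted differences essentially never reach the observed 0.032, agreeing with the z-test's rejection of H0.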

Does your analysis mean that race/name is the most important factor in callback success? Why or why not? If not, how would you amend your analysis?

The analysis only shows that race/name is a statistically significant factor in callback success; it does not show that it is the most important one. The résumés differ on many other attributes (education, years of experience, honors, employment gaps, and so on), and their effects were not compared here. To assess relative importance, I would model all factors jointly, for example with a logistic regression of callback on race together with the other résumé attributes, and compare the sizes and significance of the estimated effects.
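As an illustration of that amendment, here is a minimal logistic-regression sketch fit by maximum likelihood with scipy. The data below are hypothetical and synthetic; the feature names and effect sizes are invented for the example, and the real analysis would use the corresponding columns of `data` (e.g. 'yearsexp'):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical synthetic data standing in for the real resume attributes
rng = np.random.default_rng(0)
n = 4870
race_w = rng.integers(0, 2, n)      # 1 = white-sounding name
yearsexp = rng.integers(1, 20, n)   # years of experience
true_logit = -3.0 + 0.45 * race_w + 0.03 * yearsexp  # invented effects
call = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

X = np.column_stack([np.ones(n), race_w, yearsexp])

def neg_log_lik(beta):
    # Negative log-likelihood of the logistic model, written stably
    z = X @ beta
    return np.sum(np.logaddexp(0.0, z) - call * z)

def grad(beta):
    z = X @ beta
    return X.T @ (1 / (1 + np.exp(-z)) - call)

fit = minimize(neg_log_lik, np.zeros(3), jac=grad, method="BFGS")
print(fit.x)  # estimated (intercept, race, yearsexp) effects
```

Comparing the standardized coefficients (or their z-statistics) across predictors would then indicate which factors matter most, holding the others fixed.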


In [ ]: