Judith Butler has argued that gender is a performative concept, which implies an audience. But different audiences may perceive the performance in different ways. This notebook gathers a few (very tentative) experiments that try to illustrate the different conceptions of gender implicit in books by men and by women.
The underlying data used here is a collection of roughly 78,000 characters from 1800 to 1999, of which about 28,000 are drawn from books written by women. This is itself a subset of a larger collection.
In [2]:
import pandas as pd
import numpy as np
import csv
from collections import Counter
from scipy.stats import pearsonr
In [3]:
metadata = pd.read_csv('../metadata/balanced_character_subset.csv')
timeslice = metadata[(metadata.firstpub >= 1800) & (metadata.firstpub < 2000)]
print('Number of characters: ', len(timeslice.gender))
print('Number identified as women or girls:', sum(timeslice.gender == 'f'))
print('Number drawn from books written by women:', sum(timeslice.authgender == 'f'))
Using a separate script (reproduce_character_models.py), I have trained six different models on subsets of 3000 characters drawn from this larger set. Each training set is divided equally between masculine and feminine characters. Three of the training sets are drawn from books by men; three from books by women.
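For orientation, the sampling could look roughly like the sketch below. This is only a sketch, not the code actually used: the real sampling happens in reproduce_character_models.py, and the function name, the random seeds, and the assumption that masculine characters and male authors are coded 'm' in the metadata are all mine.

# A minimal sketch of drawing one balanced training set from the
# metadata loaded above. Illustrative only; the real sampling lives
# in reproduce_character_models.py, and the 'm' codes are assumptions.
def balanced_sample(frame, author_gender, n = 3000, seed = 0):
    pool = frame[frame.authgender == author_gender]
    half = n // 2
    fem = pool[pool.gender == 'f'].sample(half, random_state = seed)
    masc = pool[pool.gender == 'm'].sample(half, random_state = seed)
    return pd.concat([fem, masc])

example_training_set = balanced_sample(timeslice, 'f')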
Let's start by comparing the coefficients of these models. This is not going to be terribly rigorous, quantitatively. I just want to get a sense of a few words that tend to be used differently by men and women, so I can flesh out my observation that these models could--in principle--be considered different "perspectives" on gender.
In [4]:
# We're going to load the features of six models, treating
# them simply as ranked lists of words. Words at the beginning
# of each list tend to be associated with masculine characters;
# words toward the end tend to be associated with feminine characters.
# We could of course use the actual coefficients instead of simple
# ranking, but I'm not convinced that adding and subtracting
# coefficients has a firmer mathematical foundation than
# adding and subtracting ranks.
# In order to compare these lists, we will start by filtering out words
# that don't appear in all six lists. This is a dubious choice,
# but see below for a better way of measuring similarity between
# models based on their predictions.
rootpath = 'models/'
masculineperspectives = []
feminineperspectives = []
for letter in ['A', 'B', 'C']:
    feminineperspectives.append(rootpath + 'onlywomenwriters' + letter + '.coefs.csv')
    masculineperspectives.append(rootpath + 'onlymalewriters' + letter + '.coefs.csv')

def intersection_of_models(fpaths, mpaths):
    # concatenate rather than extending in place, so we don't
    # mutate the caller's lists
    paths = fpaths + mpaths
    words = []
    for p in paths:
        thislist = []
        with open(p, encoding = 'utf-8') as f:
            reader = csv.reader(f)
            for row in reader:
                if len(row) > 0:
                    thislist.append(row[0])
        words.append(thislist)
    shared_features = set.intersection(set(words[0]), set(words[1]), set(words[2]),
                                       set(words[3]), set(words[4]), set(words[5]))
    filtered_features = []
    for i in range(6):
        newlist = []
        for w in words[i]:
            if w in shared_features:
                newlist.append(w)
        filtered_features.append(newlist)
    feminine_lists = filtered_features[0 : 3]
    masculine_lists = filtered_features[3 : 6]
    return feminine_lists, masculine_lists

feminine_lists, masculine_lists = intersection_of_models(feminineperspectives, masculineperspectives)
# now let's create a consensus ranking for both groups of writers
def get_consensus(three_lists):
    '''
    Given three lists, constructs a consensus ranking for each
    word. We normalize to a 0-1 scale--not strictly necessary,
    since all lists are the same length, but it may be more
    legible than raw ranks.
    '''
    assert len(three_lists) == 3
    assert len(three_lists[0]) == len(three_lists[1]) == len(three_lists[2])
    denominator = len(three_lists[0]) * 3
    # we multiply the denominator by three
    # because we are summing ranks across three lists
    sum_of_ranks = Counter()
    for alist in three_lists:
        for index, word in enumerate(alist):
            sum_of_ranks[word] += index / denominator
    return sum_of_ranks

feminine_rankings = get_consensus(feminine_lists)
masculine_rankings = get_consensus(masculine_lists)
# Now we're going to sort words based on the DIFFERENCE
# between feminine and masculine perspectives.
# Negative scores will be words that are strongly associated with
# men (for women) and women (for men).
# Scores near zero will be words that are around the same position
# in both models of gender.
# Strongly positive scores will be words strongly associated with
# women (for women) and men (for men).
wordrank_pairs = []
for word, ranking in feminine_rankings.items():
    if word not in masculine_rankings:
        print('error: ' + word + ' missing from masculine rankings')
    else:
        difference = ranking - masculine_rankings[word]
        wordrank_pairs.append((difference, word))
wordrank_pairs.sort()
In [5]:
# The first fifty words will have negative scores,
# strongly associated with men (for women) and women (for men).
wordrank_pairs[0: 50]
# as you'll see there's a lot of courtship and
# romance here
Out[5]:
In [6]:
# The last fifty words will have positive scores,
# strongly associated with women (for women) and men (for men).
# To keep the most important words at the top of the list,
# I reverse it.
positive = wordrank_pairs[-50 : ]
positive.reverse()
for pair in positive:
    print(pair)
# Much harder to characterize, and I won't actually characterize
# this list in the article, but between you and me, I would say
# there's a lot of effort, endeavoring, and thinking here.
# "Jaw," "chin" and "head" are also interesting. Perhaps in some weird way
# they are signs of effort? "She set her jaw ..." Again, I'm not going
# to actually infer anything from that -- just idly speculating.
How stable and reliable are these differences?
We can find out by testing each of the nine possible pairings between our three masculine models and our three feminine models. The answer is that, for words at the top of the list like "love," the differences are pretty robust. They become rapidly less robust as you move down the list, so we should characterize them cautiously.
In [16]:
def get_variation(word, feminine_lists, masculine_lists):
    differences = []
    for f in feminine_lists:
        for m in masculine_lists:
            d = (f.index(word) / len(f)) - (m.index(word) / len(m))
            differences.append(d)
    return differences

print('love')
print(get_variation('love', feminine_lists, masculine_lists))
print('\nwas-marry')
print(get_variation('was-marry', feminine_lists, masculine_lists))
print('\nspend')
print(get_variation('spend', feminine_lists, masculine_lists))
print('\nconscience')
print(get_variation('conscience', feminine_lists, masculine_lists))
print('\nimagination')
print(get_variation('imagination', feminine_lists, masculine_lists))
Okay. The quantitative methodology above was not super-rigorous. I was just trying to get a rough sense of a few words that have notably different gender implications for writers who are men and for writers who are women. Let's try to compare these six models a little more rigorously by looking at the predictions they make.
A separate function in reproduce_character_models has already gone through all six of the models used above and applied them to a balanced_test_set composed of 1000 characters from books by women and 1000 characters from books by men. (The characters themselves are also equally balanced by gender.) We now compare pairs of predictions about these characters, to see whether models based on books by women agree with each other more than they agree with models based on books by men, and vice versa.
In [55]:
def model_correlation(firstpath, secondpath):
    one = pd.read_csv(firstpath, index_col = 'docid')
    two = pd.read_csv(secondpath, index_col = 'docid')
    justpredictions = pd.concat([one['logistic'], two['logistic']], axis=1, keys=['one', 'two'])
    justpredictions.dropna(inplace = True)
    r, p = pearsonr(justpredictions.one, justpredictions.two)
    return r

def compare_amongst_selves(listofpredictions):
    r_scores = []
    already_done = []
    for path in listofpredictions:
        for otherpath in listofpredictions:
            if path == otherpath:
                continue
            elif (path, otherpath) in already_done:
                continue
            else:
                r = model_correlation(path, otherpath)
                r_scores.append(r)
                already_done.append((otherpath, path))
                # no need to compare a to b AND b to a
    return r_scores

def average_r(r_scores):
    '''
    Technically, you don't directly average r scores; you apply
    Fisher's transformation to turn them into z scores first. In
    practice, this makes only a tiny difference, but ...
    '''
    z_scores = []
    for r in r_scores:
        z = np.arctanh(r)
        z_scores.append(z)
    mean_z = sum(z_scores) / len(z_scores)
    mean_r = np.tanh(mean_z)
    return mean_r
rootpath = 'predictions/'
masculineperspectives = []
feminineperspectives = []
for letter in ['A', 'B', 'C']:
    feminineperspectives.append(rootpath + 'onlywomenwriters' + letter + '.results')
    masculineperspectives.append(rootpath + 'onlymalewriters' + letter + '.results')
f_compare = compare_amongst_selves(feminineperspectives)
print(f_compare)
print("similarity among models of characters by women:", average_r(f_compare))
m_compare = compare_amongst_selves(masculineperspectives)
print(m_compare)
print("similarity among models of characters by men:", average_r(m_compare))
In [57]:
def compare_against_each_other(listofmasculinemodels, listoffemininemodels):
    r_scores = []
    for m in listofmasculinemodels:
        for f in listoffemininemodels:
            r = model_correlation(m, f)
            r_scores.append(r)
    return r_scores
both_compared = compare_against_each_other(masculineperspectives, feminineperspectives)
print(both_compared)
print('similarity between pairs of models that cross')
print('the gender boundary: ', average_r(both_compared))
So we end up with three different correlation coefficients. Models based on books by men agree with each other rather strongly; this corresponds to other evidence that men tend to write conventionally gendered characters, which are easy to sort.
Models of gender based on books by women tend to vary more from one random sample to another, suggesting patterns that are not quite as clearly marked. And when we compare a model based on characters written by women to one based on characters written by men, the correlation is weakest of all. Men and women don't entirely agree about definitions of gender.
I have also printed the raw scores above so you can get a quick-and-dirty sense of the uncertainty. We're not being super-systematic about this, and we only have six models. But I think there is a meaningful separation between the three comparisons we're making.
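To put the three numbers side by side, we can collect them in a small data frame. This just reuses f_compare, m_compare, both_compared, and average_r from the cells above; nothing new is computed.

# Gather the three averaged correlations into one small table
# for easier comparison, reusing variables defined above.
summary = pd.DataFrame({
    'comparison': ['among models by women', 'among models by men', 'across the gender boundary'],
    'mean_r': [average_r(f_compare), average_r(m_compare), average_r(both_compared)],
    'pairs': [len(f_compare), len(m_compare), len(both_compared)]
})
print(summary)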