There are two ways to create an RDD in PySpark. You can parallelize a list
In [1]:
data = sc.parallelize(
    [('Amber', 22), ('Alfred', 23), ('Skye', 4), ('Albert', 12),
     ('Amber', 9)])
or read from a repository (a file or a database)
In [2]:
data_from_file = sc \
    .textFile(
        '/Users/drabast/Documents/PySpark_Data/VS14MORT.txt.gz',
        4)
Note that to execute the code above, you will have to change the path to where the data is stored. The dataset can be downloaded from http://tomdrabas.com/data/VS14MORT.txt.gz
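If you do not have the file locally yet, a minimal sketch for downloading it (assuming internet access and using Python's standard urllib module; adjust the target path to your machine) could look like this:
import urllib.request

url = 'http://tomdrabas.com/data/VS14MORT.txt.gz'
local_path = '/Users/drabast/Documents/PySpark_Data/VS14MORT.txt.gz'
# download the gzipped dataset to the local path
urllib.request.urlretrieve(url, local_path)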
RDDs are schema-less data structures: a single RDD can hold a mix of tuples, dictionaries, and lists, as shown below.
In [3]:
data_heterogenous = sc.parallelize([
    ('Ferrari', 'fast'),
    {'Porsche': 100000},
    ['Spain', 'visited', 4504]
]).collect()
data_heterogenous
Out[3]:
You can access the data in the object as you would normally do in Python.
In [4]:
data_heterogenous[1]['Porsche']
Out[4]:
When you read from a text file, each row from the file forms an element of an RDD.
In [5]:
data_from_file.take(1)
Out[5]:
You can define a longer method to transform your data instead of using a lambda expression.
In [6]:
def extractInformation(row):
    import re
    import numpy as np

    selected_indices = [
        2,4,5,6,7,9,10,11,12,13,14,15,16,17,18,
        19,21,22,23,24,25,27,28,29,30,32,33,34,
        36,37,38,39,40,41,42,43,44,45,46,47,48,
        49,50,51,52,53,54,55,56,58,60,61,62,63,
        64,65,66,67,68,69,70,71,72,73,74,75,76,
        77,78,79,81,82,83,84,85,87,89
    ]
    '''
        Input record schema
        schema: n-m (o) -- xxx
            n - position from
            m - position to
            o - number of characters
            xxx - description
        1. 1-19 (19) -- reserved positions
        2. 20 (1) -- resident status
        3. 21-60 (40) -- reserved positions
        4. 61-62 (2) -- education code (1989 revision)
        5. 63 (1) -- education code (2003 revision)
        6. 64 (1) -- education reporting flag
        7. 65-66 (2) -- month of death
        8. 67-68 (2) -- reserved positions
        9. 69 (1) -- sex
        10. 70 (1) -- age: 1-years, 2-months, 4-days, 5-hours, 6-minutes, 9-not stated
        11. 71-73 (3) -- number of units (years, months etc.)
        12. 74 (1) -- age substitution flag (if the age reported in positions 70-74 is calculated using dates of birth and death)
        13. 75-76 (2) -- age recoded into 52 categories
        14. 77-78 (2) -- age recoded into 27 categories
        15. 79-80 (2) -- age recoded into 12 categories
        16. 81-82 (2) -- infant age recoded into 22 categories
        17. 83 (1) -- place of death
        18. 84 (1) -- marital status
        19. 85 (1) -- day of the week of death
        20. 86-101 (16) -- reserved positions
        21. 102-105 (4) -- current year
        22. 106 (1) -- injury at work
        23. 107 (1) -- manner of death
        24. 108 (1) -- manner of disposition
        25. 109 (1) -- autopsy
        26. 110-143 (34) -- reserved positions
        27. 144 (1) -- activity code
        28. 145 (1) -- place of injury
        29. 146-149 (4) -- ICD code
        30. 150-152 (3) -- 358 cause recode
        31. 153 (1) -- reserved position
        32. 154-156 (3) -- 113 cause recode
        33. 157-159 (3) -- 130 infant cause recode
        34. 160-161 (2) -- 39 cause recode
        35. 162 (1) -- reserved position
        36. 163-164 (2) -- number of entity-axis conditions
        37-56. 165-304 (140) -- list of up to 20 conditions
        57. 305-340 (36) -- reserved positions
        58. 341-342 (2) -- number of record axis conditions
        59. 343 (1) -- reserved position
        60-79. 344-443 (100) -- record axis conditions
        80. 444 (1) -- reserved position
        81. 445-446 (2) -- race
        82. 447 (1) -- bridged race flag
        83. 448 (1) -- race imputation flag
        84. 449 (1) -- race recode (3 categories)
        85. 450 (1) -- race recode (5 categories)
        86. 451-483 (33) -- reserved positions
        87. 484-486 (3) -- Hispanic origin
        88. 487 (1) -- reserved
        89. 488 (1) -- Hispanic origin/race recode
    '''
    record_split = re \
        .compile(
            r'([\s]{19})([0-9]{1})([\s]{40})([0-9\s]{2})([0-9\s]{1})([0-9]{1})([0-9]{2})' +
            r'([\s]{2})([FM]{1})([0-9]{1})([0-9]{3})([0-9\s]{1})([0-9]{2})([0-9]{2})' +
            r'([0-9]{2})([0-9\s]{2})([0-9]{1})([SMWDU]{1})([0-9]{1})([\s]{16})([0-9]{4})' +
            r'([YNU]{1})([0-9\s]{1})([BCOU]{1})([YNU]{1})([\s]{34})([0-9\s]{1})([0-9\s]{1})' +
            r'([A-Z0-9\s]{4})([0-9]{3})([\s]{1})([0-9\s]{3})([0-9\s]{3})([0-9\s]{2})([\s]{1})' +
            r'([0-9\s]{2})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})' +
            r'([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})' +
            r'([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})' +
            r'([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})' +
            r'([A-Z0-9\s]{7})([\s]{36})([A-Z0-9\s]{2})([\s]{1})([A-Z0-9\s]{5})([A-Z0-9\s]{5})' +
            r'([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})' +
            r'([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})' +
            r'([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})' +
            r'([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([\s]{1})([0-9\s]{2})([0-9\s]{1})' +
            r'([0-9\s]{1})([0-9\s]{1})([0-9\s]{1})([\s]{33})([0-9\s]{3})([0-9\s]{1})([0-9\s]{1})')
    try:
        rs = np.array(record_split.split(row))[selected_indices]
    except:
        # if the row does not match the expected layout, fill it with sentinel values
        rs = np.array(['-99'] * len(selected_indices))

    return rs
    # return record_split.split(row)
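Before running the method on the whole RDD, you might want to sanity-check it on a single raw row pulled to the driver; this is just a quick sketch, not part of the original workflow:
# take one raw, unparsed line from the file and parse it locally
sample_row = data_from_file.take(1)[0]
print(extractInformation(sample_row)[:5])  # inspect the first few parsed fields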
Now, instead of using a lambda expression, we will use the extractInformation(...) method to split and convert our dataset.
In [7]:
data_from_file_conv = data_from_file.map(extractInformation)
data_from_file_conv.map(lambda row: row).take(1)
Out[7]:
The method is applied to each element of the RDD: in the case of the data_from_file_conv dataset, you can think of this as a transformation of each row.
In [8]:
data_2014 = data_from_file_conv.map(lambda row: int(row[16]))
data_2014.take(10)
Out[8]:
You can also combine multiple columns.
In [9]:
data_2014_2 = data_from_file_conv.map(lambda row: (row[16], int(row[16])))
data_2014_2.take(10)
Out[9]:
The .filter(...) method allows you to select elements of your dataset that fit specified criteria.
In [10]:
data_filtered = data_from_file_conv.filter(lambda row: row[5] == 'F' and row[21] == '0')
data_filtered.count()
Out[10]:
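A similar filter (a quick sketch) can be used to check how many rows failed to parse, since extractInformation(...) fills such rows with '-99' sentinel values:
# count rows that fell into the except branch of extractInformation(...)
data_from_file_conv.filter(lambda row: row[0] == '-99').count()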
The .flatMap(...) method works similarly to .map(...), but returns a flattened result instead of a list.
In [11]:
data_2014_flat = data_from_file_conv.flatMap(lambda row: (row[16], int(row[16]) + 1))
data_2014_flat.take(10)
Out[11]:
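For contrast, the same lambda used with .map(...) would keep each (year, year + 1) pair nested instead of flattening it; a small sketch:
# each element is a 2-tuple rather than two separate elements
data_from_file_conv.map(lambda row: (row[16], int(row[16]) + 1)).take(5)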
The .distinct(...) method returns a list of distinct values in a specified column.
In [12]:
distinct_gender = data_from_file_conv.map(lambda row: row[5]).distinct().collect()
distinct_gender
Out[12]:
The .sample(...) method returns a randomized sample from the dataset. The first parameter specifies whether the sampling should be with replacement, the second defines the fraction of the data to return, and the third is a seed for the random number generator.
In [13]:
fraction = 0.1
data_sample = data_from_file_conv.sample(False, fraction, 666)
data_sample.take(1)
Out[13]:
Let's confirm that we got roughly 10% of all the records; the fraction is approximate, as each record is sampled independently.
In [14]:
print('Original dataset: {0}, sample: {1}'.format(data_from_file_conv.count(), data_sample.count()))
A left outer join, just like in the SQL world, joins two RDDs based on their keys and returns records from the left RDD with records from the right one appended where the two RDDs match.
In [15]:
rdd1 = sc.parallelize([('a', 1), ('b', 4), ('c',10)])
rdd2 = sc.parallelize([('a', 4), ('a', 1), ('b', '6'), ('d', 15)])
rdd3 = rdd1.leftOuterJoin(rdd2)
rdd3.take(5)
Out[15]:
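PySpark also offers .rightOuterJoin(...) and .fullOuterJoin(...) with analogous semantics; a quick sketch of the latter on the same RDDs:
# keys missing on either side are padded with None
rdd1.fullOuterJoin(rdd2).collect()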
If we used the .join(...) method instead, we would have gotten only the values for 'a' and 'b', as these two keys appear in both RDDs.
In [16]:
rdd4 = rdd1.join(rdd2)
rdd4.collect()
Out[16]:
Another useful method is .intersection(...), which returns the records that are equal in both RDDs, that is, whole (key, value) pairs present in both.
In [17]:
rdd5 = rdd1.intersection(rdd2)
rdd5.collect()
Out[17]:
Repartitioning the dataset changes the number of partitions the dataset is divided into.
In [18]:
rdd1 = rdd1.repartition(4)
len(rdd1.glom().collect())
Out[18]:
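If you only need the partition count, .getNumPartitions() reports it without collecting the data to the driver; a lighter-weight alternative sketch:
# returns the number of partitions as an int
rdd1.getNumPartitions()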
The .take(n) method returns the first n rows from a single data partition.
In [19]:
data_first = data_from_file_conv.take(1)
data_first
Out[19]:
If you want somewhat randomized records, you can use .takeSample(...) instead. It takes three parameters: whether the sampling should be with replacement, the number of records to return, and a seed.
In [20]:
data_take_sampled = data_from_file_conv.takeSample(False, 1, 667)
data_take_sampled
Out[20]:
Another action that processes your data is the .reduce(...) method, which reduces the elements of an RDD using a specified function.
In [21]:
rdd1.map(lambda row: row[1]).reduce(lambda x, y: x + y)
Out[21]:
If the reducing function is not associative and commutative, you will sometimes get wrong results depending on how your data is partitioned.
In [22]:
data_reduce = sc.parallelize([1, 2, .5, .1, 5, .2], 1)
If we were to reduce the data by dividing the current result by the subsequent element, we would expect a value of 10.
In [23]:
works = data_reduce.reduce(lambda x, y: x / y)
works
Out[23]:
However, if we partition the data into three partitions, the result will be wrong.
In [24]:
data_reduce = sc.parallelize([1, 2, .5, .1, 5, .2], 3)
data_reduce.reduce(lambda x, y: x / y)
Out[24]:
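To see why, assume the six elements split across the three partitions as [1, 2], [0.5, 0.1] and [5, 0.2] (the exact split is an assumption): each partition is reduced on its own first and the partial results are then combined, which is not equivalent for division. A rough illustration in plain Python:
# per-partition reductions
partials = [1 / 2, 0.5 / 0.1, 5 / 0.2]      # [0.5, 5.0, 25.0]
# combining the partial results
partials[0] / partials[1] / partials[2]     # 0.004, not the expected 10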
The .reduceByKey(...) method works in a similar way to the .reduce(...) method, but performs a reduction on a key-by-key basis.
In [22]:
data_key = sc.parallelize(
    [('a', 4), ('b', 3), ('c', 2), ('a', 8), ('d', 2), ('b', 1), ('d', 3)], 4)
data_key.reduceByKey(lambda x, y: x + y).collect()
Out[22]:
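If the reduced result is small enough to fit in the driver's memory, the related .reduceByKeyLocally(...) action returns it as a plain dictionary instead of an RDD; a small sketch:
# returns a dict such as {'a': 12, 'b': 4, 'c': 2, 'd': 5}
data_key.reduceByKeyLocally(lambda x, y: x + y)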
The .count() method counts the number of elements in the RDD.
In [26]:
data_reduce.count()
Out[26]:
It has the same effect as the expression below, but it does not require moving the whole dataset to the driver.
In [27]:
len(data_reduce.collect()) # WRONG -- DON'T DO THIS!
Out[27]:
If your dataset is in the form of key-value pairs, you can use the .countByKey() method to get the counts of distinct keys.
In [28]:
data_key.countByKey().items()
Out[28]:
As the name suggests, the .saveAsTextFile(...) method takes the RDD and saves it to text files: each partition is written to a separate file.
In [30]:
data_key.saveAsTextFile('/Users/drabast/Documents/PySpark_Data/data_key.txt')
To read it back, you need to parse it, because, as before, all the rows are treated as strings.
In [31]:
def parseInput(row):
    import re

    pattern = re.compile(r'\(\'([a-z])\', ([0-9])\)')
    row_split = pattern.split(row)

    return (row_split[1], int(row_split[2]))

data_key_reread = sc \
    .textFile('/Users/drabast/Documents/PySpark_Data/data_key.txt') \
    .map(parseInput)
data_key_reread.collect()
Out[31]:
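If you would rather avoid parsing strings altogether, one alternative sketch is to save the RDD with .saveAsPickleFile(...) and read it back with sc.pickleFile(...), which preserves the Python objects; the path below is just an example:
data_key.saveAsPickleFile('/Users/drabast/Documents/PySpark_Data/data_key_pickle')
sc.pickleFile('/Users/drabast/Documents/PySpark_Data/data_key_pickle').collect()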
The .foreach(...) method applies the same function to each element of the RDD in an iterative way.
In [26]:
def f(x):
    print(x)

data_key.foreach(f)
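Note that in a distributed setting the print(...) output ends up in the executors' logs rather than in the driver's console. If you only want to inspect the elements locally, a simple sketch is to collect them first:
# bring the (small) RDD to the driver and print it there
for element in data_key.collect():
    print(element)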