pyspark-session-2015-03-13.txt
pyspark-tutorial
pyspark-tutorial provides basic algorithms using PySpark.
Interactive session: valid and tested Feb. 23, 2015.
mparsian@Mahmouds-MacBook:~/zmp/BigData-MapReduce-Course/pyspark# cat data.txt
crazy crazy fox jumped
crazy fox jumped
fox is fast
fox is smart
dog is smart
SPARK_HOME=~/zmp/zs/spark-1.2.0
mparsian@Mahmouds-MacBook:~/zmp/BigData-MapReduce-Course/pyspark# ~/zmp/zs/spark-1.2.0/bin/pyspark
Python 2.6.9 (unknown, Sep 9 2014, 15:05:12)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.2.0
      /_/
Using Python version 2.6.9 (unknown, Sep 9 2014 15:05:12)
SparkContext available as sc.
>>> sc
<pyspark.context.SparkContext object at 0x10ae02210>
>>> lines = sc.textFile("data.txt", 1)
>>> debuglines = lines.collect()
>>> debuglines
[u'crazy crazy fox jumped', u'crazy fox jumped', u'fox is fast', u'fox is smart', u'dog is smart']
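Note: collect() pulls the entire RDD back to the driver, which is fine for this
tiny file but risky on large inputs. A minimal sketch (not part of the original
session) using take(), which fetches only the first n elements:

# take(n) returns just the first n elements instead of the whole RDD
sample = lines.take(2)
# [u'crazy crazy fox jumped', u'crazy fox jumped']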
>>> words = lines.flatMap(lambda x: x.split(' '))
>>> debugwords = words.collect()
>>> debugwords
[u'crazy', u'crazy', u'fox', u'jumped', u'crazy', u'fox', u'jumped', u'fox', u'is', u'fast', u'fox', u'is', u'smart', u'dog', u'is', u'smart']
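Note: flatMap() flattens the per-line lists of tokens into a single RDD of
words. A minimal sketch (an assumption, not from this session) contrasting it
with map(), which would keep one list per input line:

# map() preserves the nesting: one list of tokens per line
nested = lines.map(lambda x: x.split(' '))
nested.collect()
# [[u'crazy', u'crazy', u'fox', u'jumped'], [u'crazy', u'fox', u'jumped'], ...]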
>>> ones = words.map(lambda x: (x, 1))
>>> debugones = ones.collect()
>>> debugones
[(u'crazy', 1), (u'crazy', 1), (u'fox', 1), (u'jumped', 1), (u'crazy', 1), (u'fox', 1), (u'jumped', 1), (u'fox', 1), (u'is', 1), (u'fast', 1), (u'fox', 1), (u'is', 1), (u'smart', 1), (u'dog', 1), (u'is', 1), (u'smart', 1)]
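Note: mapping each word to the pair (word, 1) sets up the classic map-reduce
word count. For small results there is a shortcut that skips the pairs; a
minimal sketch (an assumption, not from this session):

# countByValue() returns the frequencies directly as a dict on the driver
freq = words.countByValue()
# {u'crazy': 3, u'fox': 4, u'jumped': 2, u'is': 3, u'fast': 1, u'smart': 2, u'dog': 1}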
>>> counts = ones.reduceByKey(lambda x, y: x + y)
>>> debugcounts = counts.collect()
>>> debugcounts
[(u'crazy', 3), (u'jumped', 2), (u'is', 3), (u'fox', 4), (u'dog', 1), (u'fast', 1), (u'smart', 2)]
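Note: reduceByKey() sums the 1s per word, combining values on each partition
before shuffling. A minimal sketch (an assumption, not from this session) of
the same word count written as one chained pipeline:

counts2 = (sc.textFile("data.txt", 1)
             .flatMap(lambda x: x.split(' '))
             .map(lambda x: (x, 1))
             .reduceByKey(lambda x, y: x + y))
counts2.collect()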
>>>
>>> grouped = ones.groupByKey()
>>> debuggrouped = grouped.collect()
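Note: groupByKey() yields (word, iterable-of-1s) pairs and ships every 1
across the network before grouping, so reduceByKey() is usually preferred for
aggregation. A minimal sketch (an assumption, not from this session) that
recovers the counts from the groups:

# summing each group of 1s reproduces the reduceByKey result
countsFromGroups = grouped.mapValues(sum)
countsFromGroups.collect()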
>>> counts.saveAsTextFile("output.txt")
mparsian@Mahmouds-MacBook:~/zmp/BigData-MapReduce-Course/pyspark# cat output.txt/part*
(u'crazy', 3)
(u'jumped', 2)
(u'is', 3)
(u'fox', 4)
(u'dog', 1)
(u'fast', 1)
(u'smart', 2)
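Note: saveAsTextFile("output.txt") creates a directory named output.txt with
one part-* file per partition, which is why the cat command uses the part*
glob. A minimal sketch (an assumption, not from this session) re-reading the
saved output as an RDD of strings:

# each saved line comes back as its string form, e.g. u"(u'crazy', 3)"
reread = sc.textFile("output.txt")
reread.collect()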