If IPython is launched from pyspark, a SparkContext (sc) should already be available.
In [1]:
sc
Out[1]:
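If sc is not defined (for example, when running plain IPython or a script rather than the pyspark shell), you can create one by hand. The sketch below is an assumption-laden example: "local[*]" and the app name are placeholders, so point setMaster at your cluster's actual master URL.
In [ ]:
# Create a SparkContext manually when one was not injected by the shell.
# "local[*]" (all local cores) and the app name are assumptions here;
# substitute your cluster's master URL and a meaningful app name.
from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local[*]").setAppName("sanity-check")
sc = SparkContext(conf=conf)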
The next cell runs a simple parallel operation on the cluster: it distributes the numbers 0 through 999 across the executors and counts them, so it should return 1000. If it does, pyspark and your cluster are ready!
In [2]:
sc.parallelize(range(1000)).count()
Out[2]:
1000
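If the count succeeds, a slightly larger smoke test (a sketch, reusing the same sc) is to push a transformation out to the executors and reduce the partial results back on the driver:
In [ ]:
# Square each number on the executors, then sum the results on the driver.
# The sum of i*i for i in 0..999 is 332833500, so that value is expected.
sc.parallelize(range(1000)).map(lambda i: i * i).reduce(lambda a, b: a + b)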