In [1]:
library(SparkR)


Attaching package: ‘SparkR’

The following objects are masked from ‘package:stats’:

    cov, filter, lag, na.omit, predict, sd, var

The following objects are masked from ‘package:base’:

    colnames, colnames<-, endsWith, intersect, rank, rbind, sample,
    startsWith, subset, summary, table, transform


In [2]:
sc <- sparkR.init("local[4]", "SparkR", sparkPackages="com.databricks:spark-csv_2.10:1.5.0")
sqlContext <- sparkRSQL.init(sc)


Launching java with spark-submit command /home/and/Documents/Projects/Simba/Simba/engine/bin/spark-submit  --packages com.databricks:spark-csv_2.10:1.5.0 sparkr-shell /tmp/Rtmp2qRAsU/backend_port5c552d92116b 
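
Note: sparkR.init and sparkRSQL.init are the legacy Spark 1.x entry points (the API the Simba build used here is based on); the sparkPackages argument pulls the spark-csv package from Maven so the com.databricks.spark.csv source used below is available. On Spark 2.x+ the equivalent setup would be a single session call, roughly (a sketch, not part of this run):

sparkR.session(master = "local[4]", appName = "SparkR",
               sparkPackages = "com.databricks:spark-csv_2.10:1.5.0")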

In [3]:
schema <- structType(structField("tag", "string"), structField("x", "double"), structField("y", "double"))
points <- read.df(sqlContext, "/home/and/Documents/PhD/Code/Y2Q1/SDB/Project/Code/points.txt", source = "com.databricks.spark.csv", schema = schema)
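
The contents of points.txt are not shown here; judging from the declared schema and the rows returned below, it is presumably a headerless CSV of tag,x,y records, along the lines of (illustrative only):

D,4.0,4.0
E,5.0,5.0
...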

In [4]:
registerTempTable(points, "points")
sql = "SELECT * FROM points WHERE POINT(x, y) IN CIRCLERANGE(POINT(4.5, 4.5), 2)"
print(collect(sql(sqlContext,sql)))


  tag x y
1   D 4 4
2   E 5 5
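
For comparison, the same circle range (all points within distance 2 of (4.5, 4.5)) can be expressed without the Simba CIRCLERANGE predicate, using plain SparkR column arithmetic. A minimal sketch assuming the points DataFrame created above (a straight column filter, so it does not use Simba's spatial machinery):

# Squared Euclidean distance from the query centre (4.5, 4.5);
# a point lies inside the circle when dist^2 <= radius^2 = 4.
dx <- points$x - 4.5
dy <- points$y - 4.5
inCircle <- filter(points, dx * dx + dy * dy <= 4)
print(collect(inCircle))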
