I have multiple files on HDFS that I want to be queryable through Spark SQL over JDBC. I can start a spark-shell and use SQLContext, etc., but what happens if I want to keep that SQLContext alive so that a separate application can connect to it via JDBC and issue queries against it?
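Roughly, what I do in the shell today looks like the sketch below (the HDFS path and table name are just placeholders, and I'm on a Spark 1.x-style SQLContext):

```scala
// Inside spark-shell; "sc" is the SparkContext the shell already provides.
import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)

// Hypothetical path and table name, just to show the pattern I use.
val df = sqlContext.read.parquet("hdfs:///data/my_files")
df.registerTempTable("my_files")

// Queries like this work fine, but only while this shell session stays open.
sqlContext.sql("SELECT COUNT(*) FROM my_files").show()
```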
Note: I know I can run spark-shell to open a local instance of Spark and import the sqlContext, but my files are large (100 GB) and I have at most 16 GB of RAM on any single machine, so I want this to take advantage of my 50-node cluster (one master and 49 slaves) for performance. Or is Spark SQL only possible on a single node? For reference, when I launch the shell against the cluster instead of locally, I do something like the command below.
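(Host name and resource numbers here are made up; this is just to show I'm pointing the shell at the standalone master rather than running local mode.)

```sh
spark-shell \
  --master spark://master-host:7077 \
  --executor-memory 12g \
  --total-executor-cores 98
```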