When I run pyspark.SparkContext('loc', 'pyspark_rec'), I get an error saying it could not parse the master URL. As a beginner in Spark programming, I don't really understand what this means. As far as my code goes, I am not using any deployment framework (YARN, Hadoop, etc.); I am just testing the code in standalone mode, so I assumed that specifying the URL as 'loc' would be fine. Could someone explain how I should fix this? Thanks.
Here is the error traceback.
File "recommender.py", line 112, in spark_recommendations
sc = pyspark.SparkContext('loc', 'pyspark_rec')
File "/Users/chlee021690/Desktop/Programming/spark/python/pyspark/context.py", line 134, in __init__
self._jsc = self._initialize_context(self._conf._jconf)
File "/Users/chlee021690/Desktop/Programming/spark/python/pyspark/context.py", line 180, in _initialize_context
return self._jvm.JavaSparkContext(jconf)
File "/Users/chlee021690/anaconda/lib/python2.7/site-packages/py4j/java_gateway.py", line 701, in __call__
self._fqn)
File "/Users/chlee021690/anaconda/lib/python2.7/site-packages/py4j/protocol.py", line 300, in get_return_value
format(target_id, '.', name), value)
Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: org.apache.spark.SparkException: Could not parse Master URL: 'loc'
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:1564)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:307)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:53)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:234)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
at py4j.Gateway.invoke(Gateway.java:214)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:79)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:68)
at py4j.GatewayConnection.run(GatewayConnection.java:207)
at java.lang.Thread.run(Thread.java:744)
Answer 0 (score: 2)
You would use something like
./bin/pyspark --master local[8]
or, in your code:
from pyspark import SparkContext
sc = SparkContext("local", "context")
Answer 1 (score: 2)
The master URL is usually either localhost (for a standalone single-machine setup) or the IP address of the master server.
Standalone mode: spark://localhost:7077
Server mode: spark://your-master-server-ip-address:7077
Hope this helps. Cheers, Ashish
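
To illustrate the spark:// form described above, here is a minimal sketch; the host is a placeholder for your own master server, 7077 is the default standalone master port, and pyspark_rec is just the app name taken from the question.

from pyspark import SparkContext

# Placeholder master URL for a standalone cluster; replace the host with
# your master server's address. 7077 is the default standalone master port.
sc = SparkContext("spark://your-master-server-ip-address:7077", "pyspark_rec")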