When trying to run my program on an AWS cluster:
[hadoop@ip-172-31-5-232 ~]$ spark-submit 6.py
I get the following error:
This is the part of my code sample where the error occurs:
Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
Successor = filteredResults3.map(lambda j: matchedSuccessor(j, result))
result = l.map(lambda x: (x[0], list(x[1]))).collect()
if NbrVertex > (2 * (len(filteredResults.collect()) + ExtSimilarity)):
You can see the image below (screenshot omitted).
Answer (score: 0)
collect() brings the data back to the driver.
The Successor line then references driver-side state from the workers via .map, which is not allowed.
The error message is simply enforcing Spark's execution model: SparkContext can only be used on the driver.