This is probably a simple question, but I'm having trouble reading a CSV from a local directory in PySpark.
I tried:
from pyspark.sql.types import *
from pyspark.sql import Row
from pyspark import SparkContext as sc
mydata = sc.textFile("/home/documents/mydata.csv")
newdata = mydata.map(lambda line: line.split(","))
but I get this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unbound method textFile() must be called with SparkContext instance as first argument (got str instance instead)
Now my question is that I have already called SparkContext before this. So why am I getting this error? Please guide me on what I'm missing.
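For comparison, here is a minimal sketch of the pattern I understand the PySpark examples to use, where a SparkContext instance is created first and textFile is called on that instance (the "local" master URL and the app name are just placeholders I made up):

from pyspark import SparkContext

# Instantiate a SparkContext; the constructor takes a master URL and an app name
sc = SparkContext("local", "csv-reader")  # placeholder master and app name

# textFile is an instance method, so it is called on the sc instance
mydata = sc.textFile("/home/documents/mydata.csv")
newdata = mydata.map(lambda line: line.split(","))

Is this instance step what my version is missing, or is something else going on?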