I am trying to read data from my dataset, which contains three columns: User, Repository, and Number of Stars.
In [10]:
lines = spark.read.text("Dataset.csv").rdd
print(lines.take(10))
Out[10]:
[Row(value='0,0,0,290'), Row(value='1,1,1,112'), Row(value='2,2,2,87.8'), Row(value='3,3,3,69.7'), Row(value='4,4,4,65.7'), Row(value='5,5,5,62'), Row(value='6,6,6,61.6'), Row(value='7,7,7,60.7'), Row(value='8,8,8,57.7'), Row(value='9,9,9,56.2')]
In [11]:
# Fields are still strings here; they are cast to int/float in the next cell
parts = lines.map(lambda row: row.value.split(","))
print(parts.take(2))
Out[11]:
[['0', '0', '0', '290'], ['1', '1', '1', '112']]
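The star column contains decimal values ("87.8"), which is why it must be cast with `float()` rather than `int()`. A minimal pure-Python sketch of the cast applied in the next cell, using sample rows copied from the output above:

```python
# Cast the split string fields: userId and repoId to int, star count to float.
# "87.8" would make int() raise ValueError, so the rating column uses float().
rows = [['0', '0', '0', '290'], ['1', '1', '1', '112'], ['2', '2', '2', '87.8']]
parsed = [
    {"userId": int(p[1]), "repoId": int(p[2]), "repoCount": float(p[3])}
    for p in rows
]
print(parsed[2])  # {'userId': 2, 'repoId': 2, 'repoCount': 87.8}
```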
In [12]:
# RDD mapped as int and float from Dataset
ratingsRDD = parts.map(lambda p: Row(userId=int(p[1]),repoId=int(p[2]),repoCount=float(p[3])))
ratings = spark.createDataFrame(ratingsRDD)
print(ratings.head(10))
Out[12]:
[Row(repoCount=290.0, repoId=0, userId=0), Row(repoCount=112.0, repoId=1, userId=1), Row(repoCount=87.8, repoId=2, userId=2), Row(repoCount=69.7, repoId=3, userId=3), Row(repoCount=65.7, repoId=4, userId=4), Row(repoCount=62.0, repoId=5, userId=5), Row(repoCount=61.6, repoId=6, userId=6), Row(repoCount=60.7, repoId=7, userId=7), Row(repoCount=57.7, repoId=8, userId=8), Row(repoCount=56.2, repoId=9, userId=9)]
In [13]:
(training, test) = ratings.randomSplit([0.8, 0.2])
In [14]:
# Build the recommendation model using ALS on the training data
# coldStartStrategy is set to "drop" so that NaN evaluation metrics do not cause an error
als = ALS(maxIter=5, regParam=0.01, userCol="userId", itemCol="repoId",
          ratingCol="repoCount", coldStartStrategy="drop")
model = als.fit(training)
In [15]:
# Evaluate the model by computing the RMSE on the test data
predictions = model.transform(test)
type(predictions)
predictions.show(3)
Out[15]:
+---------+------+------+----------+
|repoCount|repoId|userId|prediction|
+---------+------+------+----------+
+---------+------+------+----------+
My model returns NULL values (the predictions table above is empty). Is there a problem with my dataset, or have I made a wrong assumption about training?
Please note that the ratingCol in my ALS is the number of stars, which is an explicit rating, not an implicit one.
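One likely cause, sketched below with hypothetical data mirroring Out[10] above (this is an assumption about the dataset, not the actual file): every userId and repoId appears exactly once, so each user's single rating lands in either the training or the test split, never both. Every test row then involves a user and item that ALS never saw during training, and coldStartStrategy="drop" removes all such rows, leaving an empty predictions table:

```python
# Hypothetical reconstruction: one rating per user, userId == repoId == i,
# mirroring the rows shown in Out[10]. Not the actual dataset.
import random

rows = [(i, i, float(300 - i)) for i in range(100)]  # (userId, repoId, stars)
random.seed(0)
random.shuffle(rows)
training, test = rows[:80], rows[80:]  # the 0.8 / 0.2 split

train_users = {user for user, _, _ in training}
cold_start = [r for r in test if r[0] not in train_users]

# Because each user rated exactly once, every test user is unseen in training,
# so coldStartStrategy="drop" would discard every prediction row.
print(len(cold_start) == len(test))  # True
```

If this matches your data, no split can fix it: collaborative filtering needs users (and items) that appear in multiple rows so the model can generalize from training to test.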