I have the following dataframe:
+---------------+-----------+-------------+--------+--------+--------+--------+------+-----+
| time_stamp_0|sender_ip_1|receiver_ip_2|s_port_3|r_port_4|acknum_5|winnum_6| len_7|count|
+---------------+-----------+-------------+--------+--------+--------+--------+------+-----+
|06:36:16.293711| 10.0.0.1| 10.0.0.2| 55518| 5001| 0| 58| 65161| 130|
|06:36:16.293729| 10.0.0.1| 10.0.0.2| 55518| 5001| 0| 58| 65913| 130|
|06:36:16.293743| 10.0.0.1| 10.0.0.2| 55518| 5001| 0| 58|131073| 130|
|06:36:16.293765| 10.0.0.1| 10.0.0.2| 55518| 5001| 0| 58|196233| 130|
|06:36:16.293783| 10.0.0.1| 10.0.0.2| 55518| 5001| 0| 58|196985| 130|
|06:36:16.293798| 10.0.0.1| 10.0.0.2| 55518| 5001| 0| 58|262145| 130|
|06:36:16.293820| 10.0.0.1| 10.0.0.2| 55518| 5001| 0| 58|327305| 130|
|06:36:16.293837| 10.0.0.1| 10.0.0.2| 55518| 5001| 0| 58|328057| 130|
|06:36:16.293851| 10.0.0.1| 10.0.0.2| 55518| 5001| 0| 58|393217| 130|
|06:36:16.293873| 10.0.0.1| 10.0.0.2| 55518| 5001| 0| 58|458377| 130|
|06:36:16.293890| 10.0.0.1| 10.0.0.2| 55518| 5001| 0| 58|459129| 130|
|06:36:16.293904| 10.0.0.1| 10.0.0.2| 55518| 5001| 0| 58|524289| 130|
|06:36:16.293926| 10.0.0.1| 10.0.0.2| 55518| 5001| 0| 58|589449| 130|
|06:36:16.293942| 10.0.0.1| 10.0.0.2| 55518| 5001| 0| 58|590201| 130|
|06:36:16.293956| 10.0.0.1| 10.0.0.2| 55518| 5001| 0| 58|655361| 130|
|06:36:16.293977| 10.0.0.1| 10.0.0.2| 55518| 5001| 0| 58|720521| 130|
|06:36:16.293994| 10.0.0.1| 10.0.0.2| 55518| 5001| 0| 58|721273| 130|
|06:36:16.294007| 10.0.0.1| 10.0.0.2| 55518| 5001| 0| 58|786433| 130|
|06:36:16.294028| 10.0.0.1| 10.0.0.2| 55518| 5001| 0| 58|851593| 130|
|06:36:16.294045| 10.0.0.1| 10.0.0.2| 55518| 5001| 0| 58|852345| 130|
+---------------+-----------+-------------+--------+--------+--------+--------+------+-----+
only showing top 20 rows
I have to add features and a label to the dataframe in order to predict the count value. But when I run the code, I get the following error:
Failed to execute user defined function(anonfun$15: (int, int, string, string, int, int, int, int, int) => vector)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
I have also applied cast(IntegerType) to all of my features, but the error occurs again. Here is my code:
val Frist_Dataframe = sqlContext.createDataFrame(Row_Dstream_Train, customSchema)
val toVec9 = udf[Vector, Int, Int, String, String, Int, Int, Int, Int, Int] { (a, b, c, d, e, f, g, h, i) =>
  val e3 = c match {
    case "10.0.0.1" => 1
    case "10.0.0.2" => 2
    case "10.0.0.3" => 3
  }
  val e4 = d match {
    case "10.0.0.1" => 1
    case "10.0.0.2" => 2
    case "10.0.0.3" => 3
  }
  Vectors.dense(a, b, e3, e4, e, f, g, h, i)
}
val final_df = Dataframe.withColumn(
  "features",
  toVec9(
    // casting into Timestamp to parse the string, and then into Int
    $"time_stamp_0".cast(TimestampType).cast(IntegerType),
    $"count".cast(IntegerType),
    $"sender_ip_1",
    $"receiver_ip_2",
    $"s_port_3".cast(IntegerType),
    $"r_port_4".cast(IntegerType),
    $"acknum_5".cast(IntegerType),
    $"winnum_6".cast(IntegerType),
    $"len_7".cast(IntegerType)
  )
).withColumn("label", (Dataframe("count"))).select("features", "label")
final_df.show()
val trainingTest = final_df.randomSplit(Array(0.8, 0.2))
val TrainingDF = trainingTest(0).toDF()
val TestingDF = trainingTest(1).toDF()
TrainingDF.show()
TestingDF.show()
My dependencies are as follows:
libraryDependencies ++= Seq(
"co.theasi" %% "plotly" % "0.2.0",
"org.apache.spark" %% "spark-core" % "2.1.1",
"org.apache.spark" %% "spark-sql" % "2.1.1",
"org.apache.spark" %% "spark-hive" % "2.1.1",
"org.apache.spark" %% "spark-streaming" % "2.1.1",
"org.apache.spark" %% "spark-mllib" % "2.1.1"
)
The most interesting point is that if I change all the cast(IntegerType) calls in the last part of the code to cast(TimestampType).cast(IntegerType), the error disappears and the output looks like this:
+--------+-----+
|features|label|
+--------+-----+
| null| 130|
| null| 130|
| null| 130|
| null| 130|
| null| 130|
| null| 130|
| null| 130|
| null| 130|
| null| 130|
| null| 130|
| null| 130|
| null| 130|
| null| 130|
| null| 130|
| null| 130|
| null| 130|
| null| 130|
| null| 130|
| null| 130|
| null| 130|
+--------+-----+
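To illustrate what those casts do on their own, here is a minimal probe I sketched (assuming an active SparkSession named spark; it is not part of my original code): a plain numeric string such as "5001" has no valid timestamp form, so cast(TimestampType) yields null and the following cast(IntegerType) keeps it null, which matches the all-null features column above.

import org.apache.spark.sql.types.{IntegerType, TimestampType}
import spark.implicits._

// probe two of the numeric string columns the same way the UDF inputs are cast above
Seq(("5001", "55518")).toDF("r_port_4", "s_port_3")
  .select(
    $"r_port_4".cast(TimestampType).cast(IntegerType).as("r_port_as_ts"), // null
    $"s_port_3".cast(TimestampType).cast(IntegerType).as("s_port_as_ts")  // null
  )
  .show()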
Update: After applying @Ramesh Maharjan's solution, my dataframe works fine. However, whenever I try to split my final_df dataframe into training and test sets, I still run into the same problem with null rows, as shown below:
+--------------------+-----+
| features|label|
+--------------------+-----+
| null| 130|
| null| 130|
| null| 130|
| null| 130|
| null| 130|
| null| 130|
| null| 130|
| null| 130|
|[1.497587776E9,13...| 130|
|[1.497587776E9,13...| 130|
|[1.497587776E9,13...| 130|
|[1.497587776E9,13...| 130|
|[1.497587776E9,13...| 130|
|[1.497587776E9,13...| 130|
|[1.497587776E9,13...| 130|
|[1.497587776E9,13...| 130|
|[1.497587776E9,13...| 130|
|[1.497587776E9,13...| 130|
|[1.497587776E9,13...| 130|
|[1.497587776E9,13...| 130|
+--------------------+-----+
Could you help me?
Answer 0 (score: 3)
I don't see the count column being generated anywhere in the code in your question. Apart from the count column, @Shankar's answer should get you the result you want.
The following error was caused by the wrong definition of the udf function, which @Shankar corrected in his answer.
Failed to execute user defined function(anonfun$15: (int, int, string, string, int, int, int, int, int) => vector)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
The following error is caused by a version mismatch between the spark-mllib library and the spark-core and spark-sql libraries. They should all be the same version:
error: Caused by: org.apache.spark.SparkException: Failed to execute user defined function(anonfun$15: (int, int, string, string, int, int, int, int, int) => vector) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
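As a rough sketch of what I mean by keeping them aligned (the 2.1.1 value simply mirrors the version you already use), you can factor the Spark version out into a single val in build.sbt so the modules can never drift apart:

val sparkVersion = "2.1.1"

libraryDependencies ++= Seq(
  "co.theasi" %% "plotly" % "0.2.0",
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-sql" % sparkVersion,
  "org.apache.spark" %% "spark-hive" % sparkVersion,
  "org.apache.spark" %% "spark-streaming" % sparkVersion,
  "org.apache.spark" %% "spark-mllib" % sparkVersion
)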
I hope the explanation is clear, and I hope your issue gets resolved soon.
Edited
You still haven't changed the udf function as @Shankar suggested. I would also add .trim, like this:
val toVec9 = udf((a: Int, b: Int, c: String, d: String, e: Int, f: Int, g: Int, h: Int, i: Int) => {
  val e3 = c.trim match {
    case "10.0.0.1" => 1
    case "10.0.0.2" => 2
    case "10.0.0.3" => 3
  }
  val e4 = d.trim match {
    case "10.0.0.1" => 1
    case "10.0.0.2" => 2
    case "10.0.0.3" => 3
  }
  Vectors.dense(a, b, e3, e4, e, f, g, h, i)
})
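One more small point, as an assumption on my side rather than something your data necessarily needs: the match expressions only cover the three IPs listed, so any other value would throw a scala.MatchError inside the udf. A default case inside the udf body avoids that, for example:

val e3 = c.trim match {
  case "10.0.0.1" => 1
  case "10.0.0.2" => 2
  case "10.0.0.3" => 3
  case _          => 0  // hypothetical fallback instead of a scala.MatchError
}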
Looking at your dependencies, you are using %%, which tells sbt to download the dependencies packaged for the Scala version installed on your system. That should be fine, but since you are still getting the error, I would change the dependencies to
libraryDependencies ++= Seq(
"co.theasi" %% "plotly" % "0.2.0",
"org.apache.spark" % "spark-core_2.11" % "2.1.1",
"org.apache.spark" % "spark-sql_2.11" % "2.1.1",
"org.apache.spark" %% "spark-hive" % "2.1.1",
"org.apache.spark" % "spark-streaming_2.11" % "2.1.1",
"org.apache.spark" % "spark-mllib_2.11" % "2.1.1"
)
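For reference, the two notations end up at the same artifacts once the Scala version is pinned in build.sbt (2.11.8 here is just an assumed value, since your question does not show it):

scalaVersion := "2.11.8"

// with this in place, "org.apache.spark" %% "spark-core" % "2.1.1"
// resolves to the same spark-core_2.11 artifact used above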
Answer 1 (score: 0)
I think the issue is the way you are creating the udf. Create it as follows:
val toVec9 = udf((a: Int, b: Int, c: String, d: String, e: Int, f: Int, g: Int, h: Int, i: Int) => {
  val e3 = c match {
    case "10.0.0.1" => 1
    case "10.0.0.2" => 2
    case "10.0.0.3" => 3
  }
  val e4 = d match {
    case "10.0.0.1" => 1
    case "10.0.0.2" => 2
    case "10.0.0.3" => 3
  }
  Vectors.dense(a, b, e3, e4, e, f, g, h, i)
})
and use it as
val final_df = Dataframe.withColumn(
  "features",
  toVec9(
    // casting into Timestamp to parse the string, and then into Int
    $"time_stamp_0".cast(TimestampType).cast(IntegerType),
    $"count".cast(IntegerType),
    $"sender_ip_1",
    $"receiver_ip_2",
    $"s_port_3".cast(IntegerType),
    $"r_port_4".cast(IntegerType),
    $"acknum_5".cast(IntegerType),
    $"winnum_6".cast(IntegerType),
    $"len_7".cast(IntegerType)
  )
).withColumn("label", (Dataframe("count"))).select("features", "label")
Hope this helps!