I am trying to filter this txt file:
TotalCost|BirthDate|Gender|TotalChildren|ProductCategoryName
1000||Male|2|Technology
2000|1957-03-06||3|Beauty
3000|1959-03-06|Male||Car
4000|1953-03-06|Male|2|
5000|1957-03-06|Female|3|Beauty
6000|1959-03-06|Male|4|Car
I just want to filter the raw data and drop every row in which any column contains an empty element.
In my sample data set, three rows have an empty field.
However, when I run the code I get an empty DataFrame. Am I missing something?
Here is my code in Scala:
import org.apache.spark.sql.SparkSession
object DataFrameFromCSVFile {
def main(args:Array[String]):Unit= {
val spark: SparkSession = SparkSession.builder()
.master("local[*]")
.appName("SparkByExample")
.getOrCreate()
    import spark.implicits._  // required for the $"column" syntax used below
    val filePath = "src/main/resources/demodata.txt"
    val df = spark.read.options(Map("inferSchema" -> "true", "delimiter" -> "|", "header" -> "true")).csv(filePath)
df.where(!$"Gender".isNull && !$"TotalChildren".isNull).show
}
}
The project is in IntelliJ.
Thanks a lot.
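The intent of the filter (drop any row that has an empty field) can be sketched in plain Scala on the sample lines above, independent of Spark. This is only an illustration; the object name `FilterDemo` and the helper `complete` are made up for this sketch and do not appear in the original question:

```scala
// Standalone sketch of the "drop rows with any empty field" logic,
// applied directly to the raw pipe-delimited sample lines.
object FilterDemo {
  val raw = Seq(
    "TotalCost|BirthDate|Gender|TotalChildren|ProductCategoryName",
    "1000||Male|2|Technology",
    "2000|1957-03-06||3|Beauty",
    "3000|1959-03-06|Male||Car",
    "4000|1953-03-06|Male|2|",
    "5000|1957-03-06|Female|3|Beauty",
    "6000|1959-03-06|Male|4|Car"
  )

  // A row is complete when it has all five columns and none of them is empty.
  // split with limit -1 keeps trailing empty fields, so "4000|...|2|"
  // still yields five columns (the last one empty) instead of four.
  def complete(line: String): Boolean = {
    val cols = line.split("\\|", -1)
    cols.length == 5 && cols.forall(_.nonEmpty)
  }

  def main(args: Array[String]): Unit = {
    // Skip the header line, then keep only complete rows.
    raw.tail.filter(complete).foreach(println)
  }
}
```

Running this prints only the 5000/Female/Beauty and 6000/Male/Car rows, matching the two rows the Spark answers below produce.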
Answer 0 (score: 1)
You can do this in several ways; below is one of them.
import org.apache.spark.sql.SparkSession
object DataFrameFromCSVFile2 {
def main(args:Array[String]):Unit= {
val spark: SparkSession = SparkSession.builder()
.master("local[1]")
.appName("SparkByExample")
.getOrCreate()
    val filePath="src/main/resources/demodata.txt"
val df = spark.read.options(Map("inferSchema"->"true","delimiter"->"|","header"->"true")).csv(filePath)
val df2 = df.select("Gender", "BirthDate", "TotalCost", "TotalChildren", "ProductCategoryName")
.filter("Gender is not null")
.filter("BirthDate is not null")
.filter("TotalChildren is not null")
.filter("ProductCategoryName is not null")
df2.show()
}
}
Output:
+------+-------------------+---------+-------------+-------------------+
|Gender| BirthDate|TotalCost|TotalChildren|ProductCategoryName|
+------+-------------------+---------+-------------+-------------------+
|Female|1957-03-06 00:00:00| 5000| 3| Beauty|
| Male|1959-03-06 00:00:00| 6000| 4| Car|
+------+-------------------+---------+-------------+-------------------+
Thanks, Naveen
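As a side note, the four chained filters above can also be collapsed into one call with Spark's DataFrameNaFunctions. This is a sketch not taken from the original answer; `df.na.drop()` with no arguments drops every row that contains a null in any column:

```scala
// Sketch: dropping all rows with nulls via DataFrameNaFunctions
// instead of chained "is not null" filters.
import org.apache.spark.sql.SparkSession

object DropNullRows {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[1]")
      .appName("DropNullRows")
      .getOrCreate()

    val df = spark.read
      .options(Map("inferSchema" -> "true", "delimiter" -> "|", "header" -> "true"))
      .csv("src/main/resources/demodata.txt")

    // Drop rows with a null in any column. To restrict the check to
    // specific columns, pass them explicitly:
    //   df.na.drop(Seq("Gender", "TotalChildren"))
    df.na.drop().show()
  }
}
```

This produces the same two-row output as the chained-filter version, since empty CSV fields are read as nulls by default.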
Answer 1 (score: 0)
You can filter it out of the DataFrame as follows: df.where(!$"Gender".isNull && !$"TotalChildren".isNull).show