I'm still learning Flutter and ran into a really strange error, and I still can't understand why it happens. Here is my file:
scala> val df1 = spark.read.option("header", true).option("inferSchema", false).schema(schema1).csv("s3a://mybucket/ybspark/input/PASSENGERS.csv")
df1: org.apache.spark.sql.DataFrame = [PASSENGERID: int, PCLASS: int ... 9 more fields]
scala> val df2 = spark.read.option("header", true).option("inferSchema", false).schema(schema2).csv("s3a://mybucket/ybspark/input/PASSENGERS.csv")
df2: org.apache.spark.sql.DataFrame = [PASSENGERID: int, PCLASS: int ... 9 more fields]
scala> val df3 = spark.read.option("header", true).option("inferSchema", false).schema(schema3).csv("s3a://mybucket/ybspark/input/PASSENGERS.csv")
df3: org.apache.spark.sql.DataFrame = [PASSENGERID: int, PCLASS: int ... 9 more fields]
scala> val df4 = spark.read.option("header", true).option("inferSchema", false).schema(schema4).csv("s3a://mybucket/ybspark/input/PASSENGERS.csv")
df4: org.apache.spark.sql.DataFrame = [PASSENGERID: int, PCLASS: int ... 9 more fields]
scala> val df5 = spark.read.option("header", true).option("inferSchema", false).schema(schema5).csv("s3a://mybucket/ybspark/input/PASSENGERS.csv")
df5: org.apache.spark.sql.DataFrame = [PASSENGERID: int, PCLASS: int ... 9 more fields]
The error is

type 'int' is not a subtype of type 'double'

I'd like to understand why this error occurs.
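The answers below address the Dart/Flutter form of this message. For reference, here is a minimal sketch that reproduces it; the weatherData shape is assumed from those answers, not taken from the original post:

import 'dart:convert';

void main() {
  // json.decode parses a JSON number with no fractional part as an
  // int, so weatherData['main']['temp'] is an int here.
  final weatherData = json.decode('{"main": {"temp": 24}}');

  // Throws at runtime: type 'int' is not a subtype of type 'double',
  // because the dynamic value is an int, not a double.
  double temperature = weatherData['main']['temp'];
  print(temperature);
}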
Answer 0 (score: 1)
As the error says:

type 'int' is not a subtype of type 'double'

That is because you are storing a value of type int in a variable of type double, so try the following code:
double temp = weatherData['main']['temp'].toDouble();
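This works because toDouble() is defined on num, so the call is safe whether the decoded value happens to be an int or is already a double. A quick standalone check (not from the original answer):

void main() {
  final num a = 24;    // what json.decode yields for "temp": 24
  final num b = 24.5;  // ...and for "temp": 24.5
  print(a.toDouble()); // 24.0
  print(b.toDouble()); // 24.5
}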
Answer 1 (score: 0)
This error happens when you try to assign a value of type int to a variable of type double. Dart does not convert between these types automatically; you have to handle it yourself. So you have two options here:
1. Declare the variable as num, which can hold the value in either case, since both int and double extend num:

num temperature;
temperature = 24;   // works
temperature = 24.5; // also works

2. Cast the value as num. This works because num is the superclass of double:

int temperature;
temperature = 24.0 as num; // works
// does not work
// temperature = 24.5 as int;
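One caveat: in current Dart (2.12+ with sound null safety), implicit downcasts are gone, so assigning the `as num` cast above to an int variable is rejected at compile time. Explicit conversion is the portable route; a minimal sketch, not from the original answer:

void main() {
  num value = 24;                        // an int at runtime
  // Convert explicitly instead of casting.
  double temperature = value.toDouble(); // 24.0
  int rounded = 24.5.toInt();            // 24 (truncates toward zero)
  print('$temperature $rounded');
}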