I am trying to use the spark-cassandra-connector to map Cassandra rows to a parameterized type. I have been trying to define the mapping with an implicitly defined columnMapper, like so:
class Foo[T <: Bar : ClassTag : RowReaderFactory] {

  implicit object Mapper extends JavaBeanColumnMapper[T](
    Map("id" -> "id",
        "timestamp" -> "ts"))

  def doSomeStuff(operations: CassandraTableScanRDD[T]): Unit = {
    println("do some stuff here")
  }
}
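For reference, the Bar bean itself is not shown in the question; the mapping above assumes something roughly along these lines (the field names are inferred from the error and the mapping, and the types are guesses, not taken from the question):

import scala.beans.BeanProperty

// Hypothetical bean: @BeanProperty generates the getters/setters that
// JavaBeanColumnMapper reflects over.
class Bar(@BeanProperty var id: String,
          @BeanProperty var timestamp: Long) extends Serializable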
However, I am getting the following error, which I believe is because I am passing in a RowReaderFactory without the mapping being specified for it correctly. Any idea how to specify the mapping information for the RowReaderFactory?
Exception in thread "main" java.lang.IllegalArgumentException: Failed to map constructor parameter timestamp in Bar to a column of MyNamespace
at com.datastax.spark.connector.mapper.DefaultColumnMapper$$anonfun$4$$anonfun$apply$1.apply(DefaultColumnMapper.scala:78)
at com.datastax.spark.connector.mapper.DefaultColumnMapper$$anonfun$4$$anonfun$apply$1.apply(DefaultColumnMapper.scala:78)
at scala.Option.getOrElse(Option.scala:120)
at com.datastax.spark.connector.mapper.DefaultColumnMapper$$anonfun$4.apply(DefaultColumnMapper.scala:78)
at com.datastax.spark.connector.mapper.DefaultColumnMapper$$anonfun$4.apply(DefaultColumnMapper.scala:76)
at scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:722)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:721)
at com.datastax.spark.connector.mapper.DefaultColumnMapper.columnMapForReading(DefaultColumnMapper.scala:76)
at com.datastax.spark.connector.rdd.reader.GettableDataToMappedTypeConverter.<init>(GettableDataToMappedTypeConverter.scala:56)
at com.datastax.spark.connector.rdd.reader.ClassBasedRowReader.<init>(ClassBasedRowReader.scala:23)
at com.datastax.spark.connector.rdd.reader.ClassBasedRowReaderFactory.rowReader(ClassBasedRowReader.scala:48)
at com.datastax.spark.connector.rdd.reader.ClassBasedRowReaderFactory.rowReader(ClassBasedRowReader.scala:43)
at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.rowReader(CassandraTableRowReaderProvider.scala:48)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.rowReader$lzycompute(CassandraTableScanRDD.scala:59)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.rowReader(CassandraTableScanRDD.scala:59)
at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.verify(CassandraTableRowReaderProvider.scala:147)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.verify(CassandraTableScanRDD.scala:59)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.getPartitions(CassandraTableScanRDD.scala:143)
Answer 0 (score: 1)
It turns out that the columnMapper has to be created in the scope where the instance of Foo is created, not inside Foo itself.
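A minimal sketch of what that looks like at the call site, assuming Bar is the concrete bean type being read (Spark and keyspace/table setup are omitted):

import com.datastax.spark.connector.mapper.{ColumnMapper, JavaBeanColumnMapper}

// Define the mapper where Foo is instantiated, so it is in lexical scope when
// the compiler resolves Foo's RowReaderFactory context bound.
implicit val barMapper: ColumnMapper[Bar] =
  new JavaBeanColumnMapper[Bar](Map("id" -> "id", "timestamp" -> "ts"))

val foo = new Foo[Bar]   // RowReaderFactory[Bar] is now derivable from barMapper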
Answer 1 (score: 1)
You can define the implicit in the companion object of Foo, like so:
object Foo {
  // The type parameter T is not visible in the companion object, so the mapper
  // has to be given for a concrete type (here the Bar bound).
  implicit object Mapper extends JavaBeanColumnMapper[Bar](
    Map("id" -> "id",
        "timestamp" -> "ts"))
}
Scala looks in a class's companion object when trying to find an implicit instance for that class. You can define the implicit in the scope where it is needed if you prefer, but you will probably want to put it in the companion object so that you do not have to repeat it everywhere it is required.
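A hypothetical call site under that arrangement (it assumes Bar is the concrete type and that the implicit is made visible, for example by importing it explicitly):

import Foo.Mapper   // make the companion-object mapper visible to implicit search

val foo = new Foo[Bar]   // RowReaderFactory[Bar] is derived from Mapper
// foo.doSomeStuff(rdd)  // rdd: CassandraTableScanRDD[Bar], built elsewhere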