I have a program that reads Parquet files and writes them to a MemSQL table. I can confirm that Spark reads the files correctly:

df.printSchema()
df.show(5)

print the schema and the data as expected.

But when I query the table, I get all NULLs: every value in every row is NULL. I am not sure what is going wrong here.

Code that writes the Parquet files to MemSQL:
package com.rb.scala

import com.memsql.spark.context.MemSQLContext
import java.sql.{ DriverManager, ResultSet, Connection, Timestamp }

import org.apache.spark._
import org.apache.spark.sql._
import org.apache.spark.sql.types._
import org.apache.spark.sql.catalyst.expressions.RowOrdering

import com.memsql.spark.connector._
import com.memsql.spark.connector.OnDupKeyBehavior._
import com.memsql.spark.connector.dataframe._
import com.memsql.spark.connector.rdd._

import scala.util.control.NonFatal
import org.apache.log4j.Logger

object MemSQLWriter {

  def main(arg: Array[String]) {
    var logger = Logger.getLogger(this.getClass())

    if (arg.length < 1) {
      logger.error("=> wrong parameters number")
      System.err.println("Usage: MainExample <directory containing the source files to be loaded to database > ")
      System.exit(1)
    }

    val jobName = "MemSQLWriter"
    val conf = new SparkConf().setAppName(jobName)
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    val pathToFiles = arg(0)
    logger.info("=> jobName \"" + jobName + "\"")
    logger.info("=> pathToFiles \"" + pathToFiles + "\"")

    val dbHost = "xx.xx.xx.xx"
    val dbPort = 3306
    val dbName = "memsqlrdd_db"
    val user = "root"
    val password = ""
    val tableName = "target_table"
    val dbAddress = "jdbc:mysql://" + dbHost + ":" + dbPort

    val df = sqlContext.read.parquet("/projects/example/data/")
    val conn = DriverManager.getConnection(dbAddress, user, password)
    val stmt = conn.createStatement
    stmt.execute("CREATE DATABASE IF NOT EXISTS " + dbName)
    stmt.execute("USE " + dbName)
    stmt.execute("DROP TABLE IF EXISTS " + tableName)

    df.printSchema()
    df.show(5)

    var columnArr = df.columns
    var createQuery: String = " CREATE TABLE " + tableName + " ("
    logger.info("=> no of columns : " + columnArr.length)
    for (column <- columnArr) {
      createQuery += column
      createQuery += " VARCHAR(100),"
    }
    createQuery += " SHARD KEY (" + columnArr(0) + "))"
    logger.info("=> create table query " + createQuery)
    stmt.execute(createQuery)

    df.select().saveToMemSQL(dbName, tableName, dbHost, dbPort, user, password, upsertBatchSize = 1000, useKeylessShardedOptimization = true)
    stmt.close()
  }
}
Answer (score: 2):
You are creating a table with a SHARD key and then setting useKeylessShardedOptimization = true, which gives undefined behavior. Set it to false and it should be fine.

Also, I'm not sure what df.select().saveToMemSQL(...) does: select() with no arguments returns a DataFrame with zero columns, so there is no data to save. Try df.saveToMemSQL(...) instead, as in the sketch below.
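A minimal sketch of the corrected write, assuming you keep the connection values from the question; the parameter list is the one your own call already uses:

// Write the whole DataFrame, not an empty projection: df.select() with no
// arguments yields zero columns, so nothing meaningful reaches the table.
// The keyless-sharding flag is off because target_table has an explicit SHARD KEY.
df.saveToMemSQL(dbName, tableName, dbHost, dbPort, user, password,
  upsertBatchSize = 1000, useKeylessShardedOptimization = false)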
When verifying, run something like SELECT * FROM table WHERE col IS NOT NULL LIMIT 10 to check whether you really have all NULL values.
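If you prefer to check from the JDBC connection the job already holds, a quick count works too (a sketch: col is a placeholder for one of your real column names):

// Count non-NULL values in a single column; a result of 0 confirms that
// no real data was written. "col" is a stand-in for an actual column name.
val rs = stmt.executeQuery("SELECT COUNT(*) FROM " + tableName + " WHERE col IS NOT NULL")
if (rs.next()) logger.info("=> non-NULL rows: " + rs.getLong(1))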
PS: there is also df.createMemSQLTableAs, which can do what you want in one step.
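A sketch of what that might look like, assuming createMemSQLTableAs mirrors the saveToMemSQL parameter list (an assumption: check the API of your memsql-spark-connector version):

// Hypothetical call: parameters assumed to mirror saveToMemSQL.
// Creates the table from df's schema and loads the rows in one step,
// replacing the hand-built CREATE TABLE string plus the separate save.
df.createMemSQLTableAs(dbName, tableName, dbHost, dbPort, user, password)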