I have a CSV file with state, age, gender, salary, etc. as the independent variables.
The dependent variable is churn.
In Spark, we need to convert the DataFrame to libsvm format. Can you tell me how to do that?
The libsvm format is: 0 128:51
Here 0 is the label and 128:51 is one feature value, meaning the value in column 128 is 51.
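For instance (with hypothetical feature indices), a churned customer whose age is feature 2 with value 34 and whose salary is feature 4 with value 72000 would be encoded as the sparse line:

1 2:34 4:72000

Features that are zero (or absent) are simply omitted, which is what makes the format sparse.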
Answer 0 (score: 0)
I'm using Hadoop, but the logic should be the same. I've put together a sample for your use case. First I create the DataFrame, then drop every row that has a null or empty value. After that I create an RDD and convert each Row into libsvm format. repartition(1) means everything ends up in a single output file. There will be one result (label) column; in the case of CTR prediction, for example, it would be just 1 or 0.
Sample input file:
"zip","city","state","latitude","longitude","timezone","dst"
"00210","Portsmouth","NH","43.005895","-71.013202","-5","1"
"00211","Portsmouth","NH","43.005895","-71.013202","-5","1"
"00212","Portsmouth","NH","43.005895","-71.013202","-5","1"
"00213","Portsmouth","NH","43.005895","-71.013202","-5","1"
"00214","Portsmouth","NH","43.005895","-71.013202","-5","1"
"00215","Portsmouth","NH","43.005895","-71.013202","-5","1"
"00501","Holtsville","NY","40.922326","-72.637078","-5","1"
"00544","Holtsville","NY","40.922326","-72.637078","-5","1"
import java.nio.charset.StandardCharsets;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.types.DataTypes;

import com.google.common.hash.Hashing;

public class LibSvmConvertJob {

    private static final String SPACE = " ";
    private static final String COLON = ":";

    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf().setMaster("local[2]").setAppName("Libsvm Convertor");
        JavaSparkContext javaSparkContext = new JavaSparkContext(sparkConf);
        SQLContext sqlContext = new SQLContext(javaSparkContext);

        DataFrame inputDF = sqlContext.read().format("com.databricks.spark.csv").option("header", "true")
                .load("/home/raghunandangupta/inputfiles/zipcode.csv");
        inputDF.printSchema();

        // Turn blank strings into nulls so that na().drop() removes those rows.
        sqlContext.udf().register("convertToNull",
                (String v1) -> (v1.trim().length() > 0 ? v1.trim() : null), DataTypes.StringType);

        inputDF = inputDF.selectExpr("convertToNull(zip)", "convertToNull(city)", "convertToNull(state)",
                "convertToNull(latitude)", "convertToNull(longitude)", "convertToNull(timezone)",
                "convertToNull(dst)").na().drop();

        inputDF.javaRDD().map(new Function<Row, String>() {

            private static final long serialVersionUID = 1L;

            @Override
            public String call(Row v1) throws Exception {
                StringBuilder sb = new StringBuilder();
                // libsvm line: label, then space-separated 1-based index:value pairs.
                sb.append(hashCode(v1.getString(0))).append(SPACE) // label (result) column
                        .append("1" + COLON + hashCode(v1.getString(1))).append(SPACE)
                        .append("2" + COLON + hashCode(v1.getString(2))).append(SPACE)
                        .append("3" + COLON + hashCode(v1.getString(3))).append(SPACE)
                        .append("4" + COLON + hashCode(v1.getString(4))).append(SPACE)
                        .append("5" + COLON + hashCode(v1.getString(5)));
                sb.append(SPACE).append("6" + COLON + hashCode(v1.getString(6)));
                return sb.toString();
            }

            // Hash a string feature to a stable non-negative number.
            private String hashCode(String value) {
                return Math.abs(Hashing.murmur3_32().hashString(value, StandardCharsets.UTF_8).hashCode()) + "";
            }
        }).repartition(1).saveAsTextFile("/home/raghunandangupta/inputfiles/zipcode");
    }
}
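As an aside, if MLlib is on the classpath you can skip the hand-rolled string building: build an RDD of LabeledPoint and let MLUtils.saveAsLibSVMFile do the serialization. A minimal Scala sketch under that assumption (the hash helper and the output path are illustrative stand-ins, not part of the sample above):

import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.util.MLUtils

// Illustrative stand-in for the murmur3 hashing above:
// map each string to a non-negative number.
def hash(s: String): Double = math.abs(s.hashCode).toDouble

val points = inputDF.rdd.map { row =>
  val label = hash(row.getString(0)) // result column
  val features = (1 until row.length).map(i => hash(row.getString(i)))
  LabeledPoint(label, Vectors.dense(features.toArray))
}

// Writes standard "label 1:v1 2:v2 ..." lines with 1-based indices.
MLUtils.saveAsLibSVMFile(points, "/home/raghunandangupta/inputfiles/zipcode-libsvm")

Note that saveAsLibSVMFile expects numeric labels and features, which is why everything is hashed here; for real categorical features you would normally use an indexer or one-hot encoding instead.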
Answer 1 (score: 0)

I believe I have a simpler solution.
/*
/Users/mac/matrix.txt
1 0.5 2.4 3.0
1 99 34 6454
2 0.8 3.0 4.5
*/

// First token is the label; the remaining tokens become 1-based index:value pairs.
def concat(a: Array[String]): String = {
  val label = a(0)
  val features = (1 until a.length).map(i => i + ":" + a(i))
  (label +: features).mkString(" ")
}

val rfile = sc.textFile("file:///Users/mac/matrix.txt")
val f = rfile.map(line => line.split(' ')).map(concat)
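Running this in spark-shell against the sample file should produce lines like the following (the output path below is just an example):

f.collect().foreach(println)
// 1 1:0.5 2:2.4 3:3.0
// 1 1:99 2:34 3:6454
// 2 1:0.8 2:3.0 3:4.5

f.repartition(1).saveAsTextFile("file:///Users/mac/matrix-libsvm")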