I am new to MapReduce. I would like to know what the purpose of the readFields and write methods is when implementing a custom data type in Hadoop. For example,
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

public class Point3D implements Writable {

    public float x;
    public float y;
    public float z;

    public Point3D(float x, float y, float z) {
        this.x = x;
        this.y = y;
        this.z = z;
    }

    public Point3D() {
        this(0.0f, 0.0f, 0.0f);
    }

    // Called by Hadoop to serialize this object's fields into a binary stream.
    @Override
    public void write(DataOutput out) throws IOException {
        out.writeFloat(x);
        out.writeFloat(y);
        out.writeFloat(z);
    }

    // Called by Hadoop to deserialize the fields, in the same order they were written.
    @Override
    public void readFields(DataInput in) throws IOException {
        x = in.readFloat();
        y = in.readFloat();
        z = in.readFloat();
    }

    @Override
    public String toString() {
        return Float.toString(x) + ", "
                + Float.toString(y) + ", "
                + Float.toString(z);
    }

    public void set(float x, float y, float z) {
        this.x = x;
        this.y = y;
        this.z = z;
    }
}
In the example above, a custom record reader uses the set method to fill in the values of x, y and z, and we then read those values in the mapper. So why are the readFields() and write() methods from Writable needed at all? Please help.
Answer 0 (score: 1)
The readFields() and write() methods are used to write and read the serialized form of the object so that it can be transferred across the network: write() serializes the fields into a binary stream, and readFields() reconstructs the object from that stream on the receiving side.
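To make that concrete, here is a small, self-contained sketch (not from the original post) that mimics the round trip the framework performs with every Writable: serialize with write(), move the bytes, then create an empty instance with the no-argument constructor and refill it with readFields().

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class Point3DRoundTrip {
    public static void main(String[] args) throws IOException {
        Point3D original = new Point3D(1.0f, 2.0f, 3.0f);

        // Sending side: the framework serializes the object by calling write().
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        original.write(new DataOutputStream(bytes));

        // Receiving side: the framework creates an empty instance (this is why
        // the no-argument constructor matters) and repopulates it via readFields().
        Point3D copy = new Point3D();
        copy.readFields(new DataInputStream(
                new ByteArrayInputStream(bytes.toByteArray())));

        System.out.println(copy); // prints "1.0, 2.0, 3.0"
    }
}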
The following question explains why Writable is needed:
What is the reason for having Writable wrapper classes in Hadoop MapReduce for Java types?
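To illustrate when those methods actually get invoked, here is a hypothetical mapper sketch (the class name PointMapper, the "point" output key, and the comma-separated input format are invented for this example) that emits Point3D as its map output value. Because the value travels from map tasks to reduce tasks, the framework serializes it with write() during the shuffle and rebuilds it with readFields() on the reducer side; calling set() alone never moves any data between nodes.

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper: parses "x,y,z" lines and emits a Point3D per record.
public class PointMapper extends Mapper<LongWritable, Text, Text, Point3D> {
    private final Point3D point = new Point3D();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] parts = value.toString().split(",");
        point.set(Float.parseFloat(parts[0]),
                  Float.parseFloat(parts[1]),
                  Float.parseFloat(parts[2]));
        // Hadoop serializes this value with write() when it leaves the map task.
        context.write(new Text("point"), point);
    }
}

In the driver you would also register the value type with job.setMapOutputValueClass(Point3D.class), so the framework knows which class to instantiate before calling readFields() on the reducer side.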