Custom Writable class in Hadoop for multiple double values

Time: 2014-07-22 12:41:33

Tags: java class hadoop mapreduce

I am trying to emit 4 double values as a key. I wrote a custom WritableComparable class, but I got stuck on the compare() method. Several solutions are mentioned on the Stack Overflow site, but none of them solved my problem.

My WritableComparable class is

public class DimensionWritable implements WritableComparable {
    private double keyRow;
    private double keyCol;

    private double valRow;
    private double valCol;


    public  DimensionWritable(double keyRow, double keyCol,double valRow, double valCol) {
        set(keyRow, keyCol,valRow,valCol);
    }
    public void set(double keyRow, double keyCol,double valRow, double valCol) {
        //row dimension
        this.keyRow = keyRow;
        this.keyCol = keyCol;
        //column dimension
        this.valRow = valRow;
        this.valCol = valCol;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeDouble(keyRow);
        out.writeDouble(keyCol);

        out.writeDouble(valRow);
        out.writeDouble(valCol);
    }
    @Override
    public void readFields(DataInput in) throws IOException {
        keyRow = in.readDouble();
        keyCol = in.readDouble();

        valRow = in.readDouble();
        valCol = in.readDouble();
    }
    /**
     * @return the keyRow
     */
    public double getKeyRow() {
        return keyRow;
    }
    /**
     * @param keyRow the keyRow to set
     */
    public void setKeyRow(double keyRow) {
        this.keyRow = keyRow;
    }
    /**
     * @return the keyCol
     */
    public double getKeyCol() {
        return keyCol;
    }
    /**
     * @param keyCol the keyCol to set
     */
    public void setKeyCol(double keyCol) {
        this.keyCol = keyCol;
    }
    /**
     * @return the valRow
     */
    public double getValRow() {
        return valRow;
    }
    /**
     * @param valRow the valRow to set
     */
    public void setValRow(double valRow) {
        this.valRow = valRow;
    }
    /**
     * @return the valCol
     */
    public double getValCol() {
        return valCol;
    }
    /**
     * @param valCol the valCol to set
     */
    public void setValCol(double valCol) {
        this.valCol = valCol;
    }

    //compare - confusing

}

What confuses me is the logic behind the compare statement: is it what Hadoop uses to sort and shuffle the keys?

How do I achieve the same for the 4 double values above?

UPDATE: I edited my code as "isnot2bad" said, but it shows

java.lang.Exception: java.lang.RuntimeException: java.lang.NoSuchMethodException: edu.am.bigdata.svmmodel.DimensionWritable.<init>()
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:404)
Caused by: java.lang.RuntimeException: java.lang.NoSuchMethodException: edu.am.bigdata.svmmodel.DimensionWritable.<init>()
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:128)
    at org.apache.hadoop.io.WritableComparator.newKey(WritableComparator.java:113)
    at org.apache.hadoop.io.WritableComparator.<init>(WritableComparator.java:99)
    at org.apache.hadoop.io.WritableComparator.get(WritableComparator.java:55)
    at org.apache.hadoop.mapred.JobConf.getOutputKeyComparator(JobConf.java:819)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.init(MapTask.java:836)
    at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:376)
    at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:85)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:584)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:656)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.NoSuchMethodException: edu.am.bigdata.svmmodel.DimensionWritable.<init>()
    at java.lang.Class.getConstructor0(Class.java:2721)
    at java.lang.Class.getDeclaredConstructor(Class.java:2002)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:122)

Am I doing anything wrong?
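The NoSuchMethodException above names DimensionWritable.&lt;init&gt;(), i.e. a missing no-argument constructor: Hadoop instantiates key classes reflectively (ReflectionUtils.newInstance), which requires a nullary constructor in addition to the convenience one. A minimal sketch of that part of the fix, reusing the class and field names from the question (the WritableComparable methods are omitted here for brevity):

```java
// Sketch: the no-argument constructor Hadoop's reflection machinery needs,
// alongside the existing convenience constructor. Fields are as in the
// question; write()/readFields()/compareTo() are elided for brevity.
class DimensionWritable {
    private double keyRow, keyCol, valRow, valCol;

    // Required: Hadoop calls this via reflection, then populates the
    // instance through readFields()
    public DimensionWritable() {}

    public DimensionWritable(double keyRow, double keyCol, double valRow, double valCol) {
        this.keyRow = keyRow;
        this.keyCol = keyCol;
        this.valRow = valRow;
        this.valCol = valCol;
    }
}
```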

1 Answer:

Answer 0: (score: 8)

If you want to use your type as a key in Hadoop, it must be comparable (your type must be totally ordered), i.e. for any two instances a and b of DimensionWritable, either a and b are equal, or a is greater than or less than b (what exactly that means is up to the implementation).

By implementing compareTo, you define how instances are naturally compared to each other. This is done by comparing the fields of the instances to be compared:

public int compareTo(DimensionWritable o) {
    int c = Double.compare(this.keyRow, o.keyRow);
    if (c != 0) return c;
    c = Double.compare(this.keyCol, o.keyCol);
    if (c != 0) return c;
    c = Double.compare(this.valRow, o.valRow);
    if (c != 0) return c;
    c = Double.compare(this.valCol, o.valCol);
    return c;
}

Note that you must also implement hashCode, because it has to conform to your definition of equality (two instances that compareTo considers equal should have the same hash code), and because Hadoop requires a key's hash code to be constant across different JVMs. So we again use the fields to compute the hash code:
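The hashCode listing at the end of the answer did not survive extraction. A minimal sketch of a field-based hashCode consistent with the compareTo above, shown as a standalone class (the class name Dim and its constructor are illustrative stand-ins for DimensionWritable; the field names come from the question):

```java
// Illustrative standalone class with the question's four double fields,
// showing the conventional 31-based field hash. "Dim" is a hypothetical
// stand-in for DimensionWritable.
class Dim {
    private final double keyRow, keyCol, valRow, valCol;

    Dim(double keyRow, double keyCol, double valRow, double valCol) {
        this.keyRow = keyRow;
        this.keyCol = keyCol;
        this.valRow = valRow;
        this.valCol = valCol;
    }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        long temp;
        temp = Double.doubleToLongBits(keyRow);
        result = prime * result + (int) (temp ^ (temp >>> 32));
        temp = Double.doubleToLongBits(keyCol);
        result = prime * result + (int) (temp ^ (temp >>> 32));
        temp = Double.doubleToLongBits(valRow);
        result = prime * result + (int) (temp ^ (temp >>> 32));
        temp = Double.doubleToLongBits(valCol);
        result = prime * result + (int) (temp ^ (temp >>> 32));
        return result;
    }
}
```

Because the hash is derived purely from the field values (via Double.doubleToLongBits), it stays stable across JVM instances, which is what Hadoop's partitioning relies on.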