How can I avoid a BufferOverflowException?

Asked: 2014-02-04 19:36:39

Tags: java bytebuffer avro

I am trying to use a ByteBuffer correctly with big-endian byte order.

Before storing it in a Cassandra database, I want to combine several fields into a single ByteBuffer.

The byte array I will write to Cassandra is composed of the three fields described below:

short employeeId = 32767;
long lastModifiedDate = 1379811105109L;
byte[] attributeValue = os.toByteArray();

Now I need to compress the attribute value before storing it in Cassandra:

employeeId (not snappy-compressed)
lastModifiedDate (not snappy-compressed)
attributeValue  (snappy-compressed)

Now I will write the employeeId, the lastModifiedDate, and the snappy-compressed attributeValue together into a single byte array, and that resulting byte array is what I will write to Cassandra. Then my C++ program will retrieve that byte-array data from Cassandra and deserialize it to extract the employeeId and lastModifiedDate, and uncompress the attributeValue with snappy.

For this I am using a ByteBuffer with big-endian byte order.

I put together this code:

public static void main(String[] args) throws Exception {

        String text = "Byte Buffer Test";
        byte[] attributeValue = text.getBytes();

        long lastModifiedDate = 1289811105109L;
        short employeeId = 32767;

        // snappy compress the attribute value
        byte[] compressed = Snappy.compress(attributeValue);

        int size = 2 + 8 + 4 + attributeValue.length; // short is 2 bytes, long 8 and int 4

        ByteBuffer bbuf = ByteBuffer.allocate(size); 

        bbuf.order(ByteOrder.BIG_ENDIAN);
        bbuf.putShort(employeeId);
        bbuf.putLong(lastModifiedDate);
        bbuf.putInt(attributeValue.length);
        bbuf.put(compressed); // storing the snappy compressed data - this put throws BufferOverflowException

        bbuf.rewind();

        // best approach is copy the internal buffer
        byte[] bytesToStore = new byte[size];
        bbuf.get(bytesToStore);

        // write bytesToStore in Cassandra...

        // Now retrieve the Byte Array data from Cassandra and deserialize it...
        byte[] allWrittenBytesTest = bytesToStore;//magicFunctionToRetrieveDataFromCassandra();

        // I am not sure whether the below read code will work fine or not..
        ByteBuffer bb = ByteBuffer.wrap(allWrittenBytesTest);

        bb.order(ByteOrder.BIG_ENDIAN);
        bb.rewind();

        short extractEmployeeId = bb.getShort();
        long extractLastModifiedDate = bb.getLong();
        int extractAttributeValueLength = bb.getInt();
        byte[] extractAttributeValue = new byte[extractAttributeValueLength];

        bb.get(extractAttributeValue); // read attributeValue from the remaining buffer

        System.out.println(extractEmployeeId);
        System.out.println(extractLastModifiedDate);
        System.out.println(new String(Snappy.uncompress(extractAttributeValue)));

}

The above code throws a BufferOverflowException:

Exception in thread "main" java.nio.BufferOverflowException
    at java.nio.HeapByteBuffer.put(HeapByteBuffer.java:165)
    at java.nio.ByteBuffer.put(ByteBuffer.java:813)

The reason I compress the data before storing it in Cassandra is that when my C++ code retrieves it from Cassandra, it should already be compressed and so take up less space in the C++ map. We only uncompress it when it is actually requested.

Can anyone take a look and tell me what I am doing wrong here? And how should I read the data back?

1 Answer:

Answer 0 (score: 1)

You should use the compressed length when allocating the ByteBuffer. Snappy can produce output slightly larger than the input for small or incompressible data, so sizing the buffer from attributeValue.length leaves too little room for compressed, and bbuf.put(compressed) overflows. For the same reason, the length prefix you write with putInt should be compressed.length, so the read side knows how many compressed bytes to slice off before uncompressing.
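Here is a minimal, self-contained sketch of the fix. Since the Snappy library may not be on the classpath, java.util.zip's Deflater/Inflater stand in for Snappy.compress/uncompress (the compress/uncompress helpers below are hypothetical stand-ins, not the Snappy API); the buffer-sizing fix is identical either way: allocate and length-prefix using the compressed array.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class BufferFix {

    // Stand-in for Snappy.compress using java.util.zip
    static byte[] compress(byte[] input) throws Exception {
        Deflater d = new Deflater();
        d.setInput(input);
        d.finish();
        byte[] buf = new byte[input.length + 64]; // headroom: output can exceed input
        int n = d.deflate(buf);
        d.end();
        return Arrays.copyOf(buf, n);
    }

    // Stand-in for Snappy.uncompress
    static byte[] uncompress(byte[] input) throws Exception {
        Inflater inf = new Inflater();
        inf.setInput(input);
        byte[] buf = new byte[4096];
        int n = inf.inflate(buf);
        inf.end();
        return Arrays.copyOf(buf, n);
    }

    public static void main(String[] args) throws Exception {
        short employeeId = 32767;
        long lastModifiedDate = 1289811105109L;
        byte[] attributeValue = "Byte Buffer Test".getBytes(StandardCharsets.UTF_8);

        byte[] compressed = compress(attributeValue);

        // Size the buffer from the COMPRESSED payload, not the original,
        // and write compressed.length as the prefix so the reader can slice it.
        int size = 2 + 8 + 4 + compressed.length;
        ByteBuffer bbuf = ByteBuffer.allocate(size);
        bbuf.order(ByteOrder.BIG_ENDIAN);
        bbuf.putShort(employeeId);
        bbuf.putLong(lastModifiedDate);
        bbuf.putInt(compressed.length);
        bbuf.put(compressed); // no overflow: buffer is now exactly full

        byte[] bytesToStore = bbuf.array(); // write this to Cassandra

        // Read side (what the retrieving code would do)
        ByteBuffer bb = ByteBuffer.wrap(bytesToStore);
        bb.order(ByteOrder.BIG_ENDIAN);
        short readEmployeeId = bb.getShort();
        long readLastModifiedDate = bb.getLong();
        int compressedLength = bb.getInt();
        byte[] payload = new byte[compressedLength];
        bb.get(payload);

        System.out.println(readEmployeeId);
        System.out.println(readLastModifiedDate);
        System.out.println(new String(uncompress(payload), StandardCharsets.UTF_8));
    }
}
```

Note that the asker's read side also allocates `new byte[extractAttributeValueLength]` from the stored prefix, which only works if that prefix is the compressed length, as above.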