Improving gzip decompression in Java

Posted: 2015-09-03 19:09:24

Tags: java oracle gzip

Scenario: I have nearly 15 million records in an Oracle database, and each record has one compressed column. The task is to export the same table, but with the column values decompressed. My solution works in the following steps (a rough sketch follows the list):

- Read a chunk of data using jdbcTemplate (returns a List)
- For each record in the chunk, decompress the column value and build an updated list
- Use that list to insert into another table (this is executed by a separate thread)
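
For context, here is a minimal sketch of what such a pipeline might look like. The table and column names are hypothetical, and it assumes Spring's JdbcTemplate, Oracle 12c OFFSET/FETCH pagination, and a single-threaded writer executor; decompressMessage is the method shown further down.

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

import org.springframework.jdbc.core.JdbcTemplate;

public class ExportPipeline {
    private final JdbcTemplate jdbcTemplate;
    // Step 3 runs on a separate thread, as described above.
    private final ExecutorService writer = Executors.newSingleThreadExecutor();

    public ExportPipeline(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public void processChunk(long offset, int chunkSize) {
        // Step 1: read a chunk of compressed values (hypothetical table/column names).
        List<String> compressed = jdbcTemplate.queryForList(
                "SELECT compressed_column FROM source_table "
                        + "OFFSET ? ROWS FETCH NEXT ? ROWS ONLY",
                String.class, offset, chunkSize);

        // Step 2: decompress each value to form the updated list.
        List<Object[]> rows = compressed.stream()
                .map(value -> new Object[] { decompressMessage(value) })
                .collect(Collectors.toList());

        // Step 3: batch-insert the decompressed chunk on the writer thread.
        writer.submit(() -> jdbcTemplate.batchUpdate(
                "INSERT INTO target_table (uncompressed_column) VALUES (?)", rows));
    }

    private String decompressMessage(String message) {
        return message; // placeholder; the real method appears below
    }
}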

Here is the profiling breakdown for one batch of 48,842 records:

- Reading takes around 9 seconds
- Writing takes around 47 seconds
- Decompression takes around 135 seconds

Extrapolating from this to all 15 million records, the whole process would take about 16-17 hours. Is there any way to improve it? Decompression is where I am looking for a big improvement; in my case even a small improvement in the decompression technique would make a huge difference. Any help would be greatly appreciated.
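
For reference, the arithmetic behind that estimate: 15,000,000 ÷ 48,842 ≈ 307 batches, and 307 × (9 + 47 + 135) s ≈ 58,600 s, or roughly 16.3 hours.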

Here is the decompression method I am using:

public String decompressMessage(String message) throws Exception {
    ByteArrayInputStream byteArrayIPStream = null;
    GZIPInputStream gZipIPStream = null;
    BufferedReader bufferedReader = null;
    String decompressedMessage = "";
    String line = "";
    byte[] compressByteArray = null;
    try {
        if (message == null || "".equals(message)) {
            logger.error("Decompress is not possible as the string is empty");
            return "";
        }
        compressByteArray = Base64.decode(message);
        byteArrayIPStream = new ByteArrayInputStream(compressByteArray);
        gZipIPStream = new GZIPInputStream(byteArrayIPStream);
        bufferedReader = new BufferedReader(new InputStreamReader(gZipIPStream, "UTF-8"));
        while ((line = bufferedReader.readLine()) != null) {
            decompressedMessage = decompressedMessage + line;
        }
        return decompressedMessage;
    } catch (Exception e) {
        logger.error("Exception while decompressing the message with details {}", e);
        return "";
    } finally {
        line = null;
        compressByteArray = null;
        if (byteArrayIPStream != null)
            byteArrayIPStream.close();
        if (gZipIPStream != null)
            gZipIPStream.close();
        if (bufferedReader != null)
            bufferedReader.close();
    }
}

2 Answers:

Answer 0 (score: 2)

By far the biggest problem is concatenating a string inside the loop. Strings are immutable, which means you are imposing O(n²) time complexity on a job that is fundamentally O(n).

Replace the string with a StringWriter and drop the BufferedReader from the input side. Use Reader#read(char[]) followed by StringWriter#write(char[]) to accumulate the data, and finally get the string from the StringWriter.
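
A minimal sketch of that suggestion (assuming java.util.Base64 for decoding, since the question's Base64.decode helper is unspecified; note that, unlike the readLine() loop, this version preserves line terminators):

import java.io.ByteArrayInputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.io.StringWriter;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.zip.GZIPInputStream;

public String decompressMessage(String message) throws Exception {
    if (message == null || message.isEmpty()) {
        return "";
    }
    byte[] compressed = Base64.getDecoder().decode(message);
    try (Reader reader = new InputStreamReader(
            new GZIPInputStream(new ByteArrayInputStream(compressed)),
            StandardCharsets.UTF_8)) {
        StringWriter out = new StringWriter();
        char[] buffer = new char[8192];
        int n;
        // Accumulate chars directly into the writer: O(n) instead of O(n^2).
        while ((n = reader.read(buffer)) != -1) {
            out.write(buffer, 0, n);
        }
        return out.toString();
    }
}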

Answer 1 (score: 1)

Let the Oracle database do it for you. For example:

-- NOTE: This example would be simpler if compressed_data were a RAW type...
create table matt1 ( compressed_data VARCHAR2(4000) );

-- Put 100,000 rows of compressed data in there
insert into matt1 (compressed_data)
select utl_raw.cast_to_varchar2(utl_compress.lz_compress(src => utl_raw.cast_to_raw(dbms_random.string('a',30) || 'UNCOMPRESSED_DATA' || lpad(rownum,10,'0') || dbms_random.string('a',30))))
from dual
connect by rownum <= 100000;

-- Create the uncompressed version of the table to export
create table matt1_uncompressed as
select utl_raw.cast_to_varchar2(utl_compress.lz_uncompress(src => utl_raw.cast_to_raw(compressed_data))) uncompressed_data
from matt1
where rownum <= 100000;

-- execution time was 3.448 seconds

Update based on the sample data posted by the OP:

The data in your example looks like it is base64-encoded. Try this:

SELECT utl_compress.lz_uncompress(
         src => utl_encode.base64_decode(utl_raw.cast_to_raw(your_table.compressed_column)))
FROM your_table;
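
If the export still has to flow through the Java layer, one way to combine this with the original read step might look like the following sketch. It assumes Spring's JdbcTemplate and hypothetical table/column names, and wraps the result in utl_raw.cast_to_varchar2 so it arrives as a String:

// Decompression is pushed into Oracle; Java just streams ready-made values.
jdbcTemplate.query(
        "SELECT utl_raw.cast_to_varchar2(utl_compress.lz_uncompress("
                + "  src => utl_encode.base64_decode(utl_raw.cast_to_raw(compressed_column))))"
                + " AS uncompressed_data FROM your_table",
        rs -> {
            String value = rs.getString("uncompressed_data");
            // hand the value off to the writer thread / target-table insert as before
        });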