Parsing millions of XML files - Java

Asked: 2017-07-17 03:43:25

Tags: java xml xml-parsing out-of-memory

I am parsing XML and decided to use SAX rather than a DOM parser. The data consists of millions of XML files, each close to 6 KB. I am using SAXParser.

I loop over all the files, calling parser.parse(file, handler), but after about 100,000 files I get an OutOfMemoryError. When I dump the heap and inspect it, I see a large number of char arrays and Strings being retained.

The question is: how can I parse millions of small files without running out of heap?

import javax.xml.parsers.*;
import org.xml.sax.*;
import org.xml.sax.helpers.*;
import java.util.*;
import java.io.*;
import java.util.logging.Level;
import java.util.logging.Logger;

/**
 *
 * @author Ajinkya Jumbad
 */
public class dataset {

    static List<String> cols;
    public HashMap<String, HashMap> hm = new HashMap<>();
    static int i =0;

    dataset() {
        String coln[] = {
            "UID",
            "Name",
            "NationID",
            "Born",
            "Age",
            "IntCaps",
            "IntGoals",
            "U21Caps",
            "U21Goals",
            "Height",
            "Weight",
            "AerialAbility",
            "CommandOfArea",
            "Communication",
            "Eccentricity",
            "Handling",
            "Kicking",
            "OneOnOnes",
            "Reflexes",
            "RushingOut",
            "TendencyToPunch",
            "Throwing",
            "Corners",
            "Crossing",
            "Dribbling",
            "Finishing",
            "FirstTouch",
            "Freekicks",
            "Heading",
            "LongShots",
            "Longthrows",
            "Marking",
            "Passing",
            "PenaltyTaking",
            "Tackling",
            "Technique",
            "Aggression",
            "Anticipation",
            "Bravery",
            "Composure",
            "Concentration",
            "Vision",
            "Decisions",
            "Determination",
            "Flair",
            "Leadership",
            "OffTheBall",
            "Positioning",
            "Teamwork",
            "Workrate",
            "Acceleration",
            "Agility",
            "Balance",
            "Jumping",
            "LeftFoot",
            "NaturalFitness",
            "Pace",
            "RightFoot",
            "Stamina",
            "Strength",
            "Consistency",
            "Dirtiness",
            "ImportantMatches",
            "InjuryProness",
            "Versatility",
            "Adaptability",
            "Ambition",
            "Loyalty",
            "Pressure",
            "Professional",
            "Sportsmanship",
            "Temperament",
            "Controversy",
            "PositionsDesc",
            "Goalkeeper",
            "Sweeper",
            "Striker",
            "AttackingMidCentral",
            "AttackingMidLeft",
            "AttackingMidRight",
            "DefenderCentral",
            "DefenderLeft",
            "DefenderRight",
            "DefensiveMidfielder",
            "MidfielderCentral",
            "MidfielderLeft",
            "MidfielderRight",
            "WingBackLeft",
            "WingBackRight"};
        cols = Arrays.asList(coln);
        try {
            File f = new File("C:\\Users\\Ajinkya Jumbad\\Desktop\\fmdata");

            //File files[] = f.listFiles();
            for (File file : f.listFiles()) {
                //System.out.println(file.getAbsolutePath());
                if (file.isFile()) {
                    parse p = new parse(file);
                }
            }


            //savefile();
        } catch (Exception ex) {
            Logger.getLogger(dataset.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

    private void savefile() {
        try {
            String file_name = "dataset.csv";
            FileWriter w = new FileWriter(file_name);
            writecsv ws = new writecsv();
            boolean first = true;
            StringBuilder sb = new StringBuilder();
            for (String key : cols) {
                if (!first) {
                    sb.append(",");
                }
                sb.append(key);
                first = false;
            }
            sb.append("\n");
            w.append(sb.toString());
            for (String uid : hm.keySet()) {
                ws.writeLine(w, hm.get(uid));
            }
            w.close();
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }

    public class parse{
        parse(File file){
            try {
                SAXParserFactory parserfac = SAXParserFactory.newInstance();
                parserfac.setNamespaceAware(true);
                SAXParser parser = parserfac.newSAXParser();
                DefaultHandler handler = new DefaultHandler(){
                    HashMap<String, String> ht;
                    @Override
                    public void startDocument() {
                        ht = new HashMap<>();
                    }

                    @Override
                    public void startElement(String namespaceURI,
                            String localName,
                            String qName,
                            Attributes atts) {
                        if (atts.getValue("Value") != null && cols.contains(localName)) {
                            //System.out.println(localName);
                            String key = localName;
                            ht.put(key, atts.getValue("Value"));
                        }
                    }

                    @Override
                    public void endDocument() {
                        String uid = ht.get("UID");
                        hm.put(uid, ht);
                        dataset.i += 1;
                        if(dataset.i%100 == 0){
                            System.out.println(dataset.i);
                        }
                    }

                    @Override
                    public void characters(char ch[], int start, int length) throws SAXException {

                    }

                };
                parser.parse(file, handler);
            } catch (Exception ex) {
                Logger.getLogger(dataset.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
    }

    public static void main(String[] args) {
        dataset ds = new dataset();
    }

}

2 answers:

Answer 0 (score: 2)

First, reuse the SAXParserFactory and the parser itself. Creating a SAXParserFactory can be quite expensive, and creating a parser is not cheap either; together, these operations may take far longer than actually parsing the input. But that saves time, not memory.
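
As a minimal sketch of that reuse (class name and the element-counting handler are illustrative, not from the question), the factory and parser are created once and `reset()` is called between documents so the same instance can parse many inputs:

```java
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class ReusableParser {
    // One factory and one parser for the whole run, not one per file.
    private static final SAXParser PARSER;
    static {
        try {
            SAXParserFactory factory = SAXParserFactory.newInstance();
            factory.setNamespaceAware(true);
            PARSER = factory.newSAXParser();
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Parse one document with the shared parser; returns its element count.
    static int countElements(String xml) throws Exception {
        final int[] count = {0};
        PARSER.reset();   // clear internal state so the parser can be reused
        PARSER.parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)),
                new DefaultHandler() {
                    @Override
                    public void startElement(String uri, String local,
                                             String qName, Attributes atts) {
                        count[0]++;
                    }
                });
        return count[0];
    }

    public static void main(String[] args) throws Exception {
        // The same parser instance handles many documents in sequence.
        System.out.println(countElements("<a><b/><c/></a>"));               // 3
        System.out.println(countElements("<root><x Value=\"1\"/></root>")); // 2
    }
}
```

In the question's code, this would mean hoisting the `SAXParserFactory` and `SAXParser` out of the `parse` constructor so they are created once per run instead of once per file.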

As far as memory is concerned, I suspect the space is all taken by your own data structures: in particular, the HashMap you put the results into. Try inspecting the heap with JVisualVM to confirm this.
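
If the HashMap is indeed the culprit, one fix (my suggestion, not stated in the answer) is to stream each record straight to the CSV writer as its document finishes, instead of accumulating everything in `hm`. A minimal sketch, with an illustrative class name:

```java
import java.io.StringWriter;
import java.io.Writer;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StreamingCsv {
    // Write one record as a CSV row immediately; nothing is retained in memory.
    static void writeRow(Writer out, List<String> cols, Map<String, String> record)
            throws Exception {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < cols.size(); i++) {
            if (i > 0) sb.append(',');
            String v = record.get(cols.get(i));
            if (v != null) sb.append(v);   // missing columns become empty cells
        }
        sb.append('\n');
        out.write(sb.toString());          // record becomes garbage right after this
    }

    public static void main(String[] args) throws Exception {
        List<String> cols = List.of("UID", "Name", "Age");
        Map<String, String> rec = new HashMap<>();
        rec.put("UID", "42");
        rec.put("Name", "Ajinkya");
        StringWriter out = new StringWriter();
        writeRow(out, cols, rec);
        System.out.print(out);             // prints "42,Ajinkya," and a newline
    }
}
```

In the question's code, calling something like this from `endDocument()` (with a `Writer` opened once at startup) would replace `hm.put(uid, ht)`, so each per-file HashMap can be collected as soon as its row is written.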

As for the bottom line, "how do I parse this data without running out of memory" — that depends entirely on what you want to do with the data. Nobody parses XML for fun; you do it because you want to use the data for some purpose. It's hard to advise without knowing more about (a) what you want to do with the data, and (b) the volumetrics: you gave us a broad indication of scale, but you should be able to tell us how many entries you expect this HashMap to contain, and how big each entry is.

One other small and obvious thing, in case you don't know it: use the -Xmx option on the Java command line to control the amount of heap space available.
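
For example, `java -Xmx4g dataset` would allow up to 4 GB of heap (the 4 GB figure is an arbitrary illustration; size it to your data). You can check what limit the JVM actually picked up with a tiny probe like this (class name is illustrative):

```java
public class HeapInfo {
    public static void main(String[] args) {
        // Run with e.g.:  java -Xmx512m HeapInfo
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap (MB): " + (maxBytes / (1024 * 1024)));
    }
}
```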

Answer 1 (score: -1)

A: Close files when you are done with them.

B: If it still happens, track the available memory and call gc(). It's a bit of a hack, but if it works...
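
A sketch of that hack (class name, threshold, and the helper are all illustrative; note `System.gc()` is only a hint the JVM may ignore, and this does not fix a genuine retention leak):

```java
public class MemoryWatch {
    // Requests a GC when free heap drops below the threshold; returns whether it did.
    static boolean maybeGc(long thresholdBytes) {
        long free = Runtime.getRuntime().freeMemory();
        if (free < thresholdBytes) {
            System.gc();   // only a hint to the JVM, not a guaranteed collection
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // With an unreachably high threshold, a GC is always requested.
        System.out.println(maybeGc(Long.MAX_VALUE));   // prints true
    }
}
```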

C: If you have access to multiple threads, run as many as you can; give each thread a number N and have it process every Nth file.
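
The every-Nth-file striping described above can be sketched as follows (class name and the string stand-ins for files are illustrative; real code would call the parser where the comment indicates — note this parallelizes the work but does not by itself reduce memory use):

```java
import java.util.ArrayList;
import java.util.List;

public class StripedWork {
    // Thread t handles items t, t + nThreads, t + 2*nThreads, ...
    static List<List<String>> stripe(List<String> files, int nThreads)
            throws InterruptedException {
        List<List<String>> processed = new ArrayList<>();
        for (int t = 0; t < nThreads; t++) processed.add(new ArrayList<>());

        Thread[] threads = new Thread[nThreads];
        for (int t = 0; t < nThreads; t++) {
            final int id = t;
            threads[t] = new Thread(() -> {
                for (int i = id; i < files.size(); i += nThreads) {
                    processed.get(id).add(files.get(i)); // real code would parse the file here
                }
            });
            threads[t].start();
        }
        for (Thread th : threads) th.join();
        return processed;
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> files = new ArrayList<>();
        for (int i = 0; i < 10; i++) files.add("file" + i + ".xml");

        int total = 0;
        for (List<String> part : stripe(files, 3)) total += part.size();
        System.out.println(total);   // prints 10: each file handled exactly once
    }
}
```

Since each stripe is disjoint, the threads never contend for the same file; any shared result structure (like the question's `hm`) would still need to be made thread-safe.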