I have a requirement to process the records in a text file and insert/update them into a table. Below is the code I wrote. But when the file contains 50,000 records, processing takes more than 30 minutes, and when it gets close to 80k records an out-of-memory error is thrown. Can anyone suggest a way to optimize the code I wrote to improve its performance?
public static String insertIntoCHG_PNT_Table(String FILE_NAME) throws NumberFormatException, IOException
{
    Date DATE_INSERTED = new Date();
    String strLine = "";
    BufferedReader br = new BufferedReader(new FileReader(FILE_NAME));
    long SEQ = 0;
    CHG_PNT insertObj = null;
    long KY_PREM_NO = 0;
    long KY_SPT = 0;
    String COD_COT_TYP = "";
    String DT_EFF = "";
    String TS_KY_TOT = "";
    String COD_COT = "";
    String ACL_VLE = "";
    String ACL_QTY = "";
    String WTR_VLE = "";
    String WTR_QTY = "";
    String SWG_VLE = "";
    String SWG_QTY = "";
    String CD_TYPE_ACT = "";
    String DT_TERM = "";
    String CD_STAT = "";
    String DT_STAT = "";
    String VLN_PPE_SIZ_COD = "";
    String WTR_PPE_SIZ_MTD = "";
    String SWG_PPE_SIZ_MTD = "";
    while ((strLine = br.readLine()) != null) {
        /*
         * Meter Serial No, Property No, Current Meter Index, Previous meter index,
         * Consumption needs to be added
         */
        // Each line holds 19 semicolon-separated fields.
        String[] split = strLine.split(";");
        KY_PREM_NO = Long.parseLong(split[0].trim());
        KY_SPT = Long.parseLong(split[1].trim());
        COD_COT_TYP = split[2].trim();
        DT_EFF = split[3].trim();
        TS_KY_TOT = split[4].trim();
        COD_COT = split[5].trim();
        ACL_VLE = split[6].trim();
        ACL_QTY = split[7].trim();
        WTR_VLE = split[8].trim();
        WTR_QTY = split[9].trim();
        SWG_VLE = split[10].trim();
        SWG_QTY = split[11].trim();
        CD_TYPE_ACT = split[12].trim();
        DT_TERM = split[13].trim();
        CD_STAT = split[14].trim();
        DT_STAT = split[15].trim();
        VLN_PPE_SIZ_COD = split[16].trim();
        WTR_PPE_SIZ_MTD = split[17].trim();
        SWG_PPE_SIZ_MTD = split[18].trim();
        // Returns the existing record, or null if this key has not been seen before.
        CHG_PNT checkRecordCount = checkAndUpdateRecord(KY_PREM_NO, KY_SPT, COD_COT_TYP, TS_KY_TOT);
        try {
            if (checkRecordCount == null)
                insertObj = new CHG_PNT();
            else
                insertObj = checkRecordCount;
            insertObj.setKY_PREM_NO(KY_PREM_NO);
            //insertObj.setSEQ_NO(SEQ);
            insertObj.setKY_SPT(KY_SPT);
            insertObj.setCOD_COT_TYP(COD_COT_TYP);
            insertObj.setDT_EFF(DT_EFF);
            insertObj.setTS_KY_TOT(TS_KY_TOT);
            insertObj.setCOD_COT(COD_COT);
            insertObj.setACL_VLE(Double.parseDouble(ACL_VLE));
            insertObj.setACL_QTY(Double.parseDouble(ACL_QTY));
            insertObj.setWTR_VLE(Double.parseDouble(WTR_VLE));
            insertObj.setWTR_QTY(Double.parseDouble(WTR_QTY));
            insertObj.setSWG_VLE(Double.parseDouble(SWG_VLE));
            insertObj.setSWG_QTY(Double.parseDouble(SWG_QTY));
            insertObj.setCD_TYPE_ACT(CD_TYPE_ACT);
            insertObj.setDT_TERM(DT_TERM);
            insertObj.setCD_STAT(Double.parseDouble(CD_STAT));
            insertObj.setDT_STAT(DT_STAT);
            insertObj.setVLN_PPE_SIZ_COD(VLN_PPE_SIZ_COD);
            insertObj.setWTR_PPE_SIZ_MTD(WTR_PPE_SIZ_MTD);
            insertObj.setSWG_PPE_SIZ_MTD(SWG_PPE_SIZ_MTD);
            if (checkRecordCount == null) {
                insertObj.setDATE_INSERTED(DATE_INSERTED);
                insertObj.insert();
            } else {
                insertObj.setDATE_MODIFIED(DATE_INSERTED);
                insertObj.update();
            }
            BSF.getObjectManager()._commitTransactionDirect(true);
        } catch (Exception e) {
            // Log the failure instead of silently swallowing it.
            e.printStackTrace();
        }
    }
    br.close(); // closing the BufferedReader also closes the underlying FileReader

    // Move the processed file from the "Processing" folder to the "Archive" folder.
    File f = new File(FILE_NAME);
    String fullPath = f.getParent();
    String fileName = f.getName();
    int index = fullPath.lastIndexOf("Processing");
    String path = fullPath.substring(0, index);
    String destPath = path + "\\Archive\\" + fileName;
    PMP_PROPERTIES.copyFile(f, new File(destPath));
    f.delete();
    return null;
}
Answer 0 (score: 1)
There are two main problems. The first one is performance: contrary to your intuition, the problem is the database insert speed.
You are inserting each item in its own transaction. You should not do that if you want fast inserts. Introduce a counter variable and perform a commit only on every Nth insert:
int commitStep = 100;
int modCount = 0;
while ((strLine = br.readLine()) != null) {
    // ... your per-record parsing and insert/update code ...
    modCount++;
    if (modCount % commitStep == 0) {
        BSF.getObjectManager()._commitTransactionDirect(true);
    }
}
// Commit whatever is left over after the last full batch.
BSF.getObjectManager()._commitTransactionDirect(true);
You can read more about speeding up SQL inserts here: Sql insert speed up
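If you are able to bypass the object manager for this bulk load, plain JDBC with addBatch() usually reduces the per-row overhead even further, because many rows travel to the database in one round trip. Below is a minimal sketch, assuming you can obtain a java.sql.Connection; the column list is an illustrative placeholder covering only the first three fields, not the real CHG_PNT schema:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BatchInsertSketch {
    // Hypothetical SQL -- adapt the table/column list to the real CHG_PNT schema.
    private static final String SQL =
        "INSERT INTO CHG_PNT (KY_PREM_NO, KY_SPT, COD_COT_TYP) VALUES (?, ?, ?)";

    static void insertBatch(Connection con, List<String[]> rows) throws SQLException {
        con.setAutoCommit(false); // commit manually, once per batch
        try (PreparedStatement ps = con.prepareStatement(SQL)) {
            int count = 0;
            for (String[] r : rows) {
                ps.setLong(1, Long.parseLong(r[0].trim()));
                ps.setLong(2, Long.parseLong(r[1].trim()));
                ps.setString(3, r[2].trim());
                ps.addBatch();                 // queue the row instead of executing it
                if (++count % 100 == 0) {
                    ps.executeBatch();         // send 100 queued rows in one round trip
                    con.commit();
                }
            }
            ps.executeBatch();                 // flush the remaining rows
            con.commit();
        }
    }
}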
The second problem may be the scalability of the file reading. It works for smaller files, but not for larger ones. The question Read large files in Java has some good solutions for your problem.
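That said, BufferedReader already streams the file line by line, so the out-of-memory error around 80k records more likely comes from the persistence framework accumulating objects between commits than from the reader itself. Below is a sketch of the reading skeleton using try-with-resources, so the file is closed even if a row fails; whether your object manager offers a way to release processed objects after each batch commit is an assumption you would need to verify:

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class StreamingReadSketch {
    static void process(String fileName) throws IOException {
        // try-with-resources closes the reader even when an exception escapes the loop
        try (BufferedReader br = Files.newBufferedReader(
                Paths.get(fileName), StandardCharsets.UTF_8)) {
            String line;
            long lineNo = 0;
            while ((line = br.readLine()) != null) {
                String[] split = line.split(";");
                // ... map the 19 fields and insert/update as in the question ...
                if (++lineNo % 100 == 0) {
                    // commit here and, if the framework supports it, detach the
                    // persisted objects so the garbage collector can reclaim them
                }
            }
        }
    }
}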