I am using PreparedStatement batch updates/inserts to store data in the database, but it takes 3 minutes to persist 1000 entries to a SQL Server database. I would like to know whether there is any way to improve performance. Please check the code below:
SQL UPDATE statement:

    String sqlUpdate = "UPDATE details SET a = ?, b = ?, c = ?, d = ?, e = ? WHERE f = ?";
    updatePstmt = conn.prepareStatement(sqlUpdate);
    public static void updateCustomerDetailByBatch(HashMap<String, String[]> updateCustDetails) {
        final int BATCH_SIZE = 1000;
        int batchCtr = 0;
        try {
            conn.setAutoCommit(false);
            MAFLogger.info("Number of Customer details to be updated: " + updateCustDetails.size());
            for (Map.Entry<String, String[]> custEntry : updateCustDetails.entrySet()) {
                String x = custEntry.getValue()[0];
                String y = custEntry.getValue()[1];
                String z = custEntry.getKey();
                String a = custEntry.getValue()[2];
                String b = custEntry.getValue()[3];
                String c = custEntry.getValue()[4];
                updatePstmt.setString(1, x);
                updatePstmt.setString(2, y);
                updatePstmt.setString(3, z);
                updatePstmt.setString(4, a);
                updatePstmt.setString(5, b);
                updatePstmt.setString(6, c);
                updatePstmt.addBatch();
                batchCtr++;
                // Flush every full chunk; counting after addBatch() avoids an
                // off-by-one that could leave the final chunk unexecuted.
                if (batchCtr % BATCH_SIZE == 0) {
                    MAFLogger.debug("Batch Ctr is : " + batchCtr + " Updated Batch ");
                    updatePstmt.executeBatch();
                }
            }
            // Flush whatever is left over after the last full chunk.
            if (batchCtr % BATCH_SIZE != 0) {
                MAFLogger.debug("Execute remaining batch update statement contents: " + batchCtr);
                updatePstmt.executeBatch();
            }
            conn.commit();
            conn.setAutoCommit(true);
        } catch (SQLException sqlE) {
            MAFLogger.error("Batch update statement problem : " + sqlE);
        }
    }
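For what it's worth, here is a standalone sketch of the intended chunking, with the JDBC calls stubbed out as counters just to reason about how many `executeBatch()` calls occur (class and method names are mine, for illustration only):

```java
// Standalone model of the chunked-flush pattern: addBatch()/executeBatch()
// are replaced by counters so the control flow can be checked without a
// database connection.
public class BatchFlushModel {

    // Returns how many executeBatch() calls happen for n entries
    // when flushing in chunks of batchSize.
    static int flushCount(int n, int batchSize) {
        int pending = 0;   // statements added since the method started
        int flushes = 0;   // executeBatch() invocations
        for (int i = 0; i < n; i++) {
            pending++;                     // addBatch()
            if (pending % batchSize == 0) {
                flushes++;                 // executeBatch() on a full chunk
            }
        }
        if (pending % batchSize != 0) {
            flushes++;                     // trailing executeBatch()
        }
        return flushes;
    }

    public static void main(String[] args) {
        // 2500 entries in chunks of 1000 -> two full flushes
        // plus one trailing flush of 500.
        System.out.println(flushCount(2500, 1000)); // prints 3
    }
}
```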
I have read different questions and answers here about addBatch and executeBatch, such as link1, link2 and link3, but nothing changed. I would appreciate any help you can give.
I am using the Microsoft JDBC Driver downloaded from their website, "sqljdbc_4.1.5605.100_enu.tar.gz".
Table index:

    index_name           index_description                                    index_keys
    PK_CC_Details_TEMP   clustered, unique, primary key located on PRIMARY    f
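Since the clustered primary key is on f, I also wonder whether parameter encoding matters: by default the Microsoft JDBC driver sends String parameters as nvarchar, which can force an implicit conversion (an index scan instead of a seek) if f is a varchar column. A connection string I am considering trying (server, port and database name are placeholders):

```
jdbc:sqlserver://localhost:1433;databaseName=MyDb;sendStringParametersAsUnicode=false
```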