The program performs tens of thousands of consecutive inserts, one after another. I've never used Hibernate before, and the performance I'm getting is extremely slow (if I simply open a connection and execute the SQL by hand, it's 10-12x faster). My batch_size is set to 50, per many Hibernate tutorials.
Here is the log from a single insert - perhaps you can help me understand exactly what is happening:
START INSERT
11:02:56.121 [main] DEBUG org.hibernate.impl.SessionImpl - opened session at timestamp: 13106053761
11:02:56.121 [main] DEBUG o.h.transaction.JDBCTransaction - begin
11:02:56.121 [main] DEBUG org.hibernate.jdbc.ConnectionManager - opening JDBC connection
11:02:56.121 [main] TRACE o.h.c.DriverManagerConnectionProvider - total checked-out connections: 0
11:02:56.121 [main] TRACE o.h.c.DriverManagerConnectionProvider - using pooled JDBC connection, pool size: 0
11:02:56.121 [main] DEBUG o.h.transaction.JDBCTransaction - current autocommit status: false
11:02:56.121 [main] TRACE org.hibernate.jdbc.JDBCContext - after transaction begin
11:02:56.121 [main] TRACE org.hibernate.impl.SessionImpl - setting flush mode to: MANUAL
11:02:56.121 [main] TRACE o.h.e.def.DefaultLoadEventListener - loading entity: [com.xyzcompany.foo.edoi.ejb.msw000.MSW000Rec#component[keyW000]{keyW000=F000 ADSUFC}]
11:02:56.121 [main] TRACE o.h.e.def.DefaultLoadEventListener - creating new proxy for entity
11:02:56.122 [main] TRACE o.h.e.d.DefaultSaveOrUpdateEventListener - saving transient instance
11:02:56.122 [main] DEBUG o.h.e.def.AbstractSaveEventListener - generated identifier: component[keyW000]{keyW000=F000 ADSUFC}, using strategy: org.hibernate.id.CompositeNestedGeneratedValueGenerator
11:02:56.122 [main] TRACE o.h.e.def.AbstractSaveEventListener - saving [com.xyzcompany.foo.edoi.ejb.msw000.MSW000Rec#component[keyW000]{keyW000=F000 ADSUFC}]
11:02:56.123 [main] TRACE o.h.e.d.AbstractFlushingEventListener - flushing session
11:02:56.123 [main] DEBUG o.h.e.d.AbstractFlushingEventListener - processing flush-time cascades
11:02:56.123 [main] DEBUG o.h.e.d.AbstractFlushingEventListener - dirty checking collections
11:02:56.123 [main] TRACE o.h.e.d.AbstractFlushingEventListener - Flushing entities and processing referenced collections
11:02:56.125 [main] TRACE o.h.e.d.AbstractFlushingEventListener - Processing unreferenced collections
11:02:56.125 [main] TRACE o.h.e.d.AbstractFlushingEventListener - Scheduling collection removes/(re)creates/updates
11:02:56.126 [main] DEBUG o.h.e.d.AbstractFlushingEventListener - Flushed: 1 insertions, 0 updates, 0 deletions to 62 objects
11:02:56.126 [main] DEBUG o.h.e.d.AbstractFlushingEventListener - Flushed: 0 (re)creations, 0 updates, 0 removals to 0 collections
11:02:56.132 [main] TRACE o.h.e.d.AbstractFlushingEventListener - executing flush
11:02:56.132 [main] TRACE org.hibernate.jdbc.ConnectionManager - registering flush begin
11:02:56.132 [main] TRACE o.h.p.entity.AbstractEntityPersister - Inserting entity: [com.xyzcompany.foo.edoi.ejb.msw000.MSW000Rec#component[keyW000]{keyW000=F000 ADSUFC}]
11:02:56.132 [main] DEBUG org.hibernate.jdbc.AbstractBatcher - about to open PreparedStatement (open PreparedStatements: 0, globally: 0)
11:02:56.132 [main] DEBUG org.hibernate.SQL - insert into MSW000 (W000_DATA_REC, W000_FILE_FLAGS, KEY_W000) values (?, ?, ?)
11:02:56.132 [main] TRACE org.hibernate.jdbc.AbstractBatcher - preparing statement
11:02:56.132 [main] TRACE o.h.p.entity.AbstractEntityPersister - Dehydrating entity: [com.xyzcompany.foo.edoi.ejb.msw000.MSW000Rec#component[keyW000]{keyW000=F000 ADSUFC}]
11:02:56.132 [main] TRACE org.hibernate.type.StringType - binding ' ADSUFCA ' to parameter: 1
11:02:56.132 [main] TRACE org.hibernate.type.StringType - binding ' ' to parameter: 2
11:02:56.132 [main] TRACE org.hibernate.type.StringType - binding 'F000 ADSUFC' to parameter: 3
11:02:56.132 [main] DEBUG org.hibernate.jdbc.AbstractBatcher - Executing batch size: 1
11:02:56.133 [main] DEBUG org.hibernate.jdbc.AbstractBatcher - about to close PreparedStatement (open PreparedStatements: 1, globally: 1)
11:02:56.133 [main] TRACE org.hibernate.jdbc.AbstractBatcher - closing statement
11:02:56.133 [main] TRACE org.hibernate.jdbc.ConnectionManager - registering flush end
11:02:56.133 [main] TRACE o.h.e.d.AbstractFlushingEventListener - post flush
11:02:56.133 [main] DEBUG o.h.transaction.JDBCTransaction - commit
11:02:56.133 [main] TRACE org.hibernate.impl.SessionImpl - automatically flushing session
11:02:56.133 [main] TRACE org.hibernate.jdbc.JDBCContext - before transaction completion
11:02:56.133 [main] TRACE org.hibernate.impl.SessionImpl - before transaction completion
11:02:56.133 [main] DEBUG o.h.transaction.JDBCTransaction - committed JDBC Connection
11:02:56.133 [main] TRACE org.hibernate.jdbc.JDBCContext - after transaction completion
11:02:56.133 [main] DEBUG org.hibernate.jdbc.ConnectionManager - transaction completed on session with on_close connection release mode; be sure to close the session to release JDBC resources!
11:02:56.133 [main] TRACE org.hibernate.impl.SessionImpl - after transaction completion
11:02:56.133 [main] TRACE org.hibernate.impl.SessionImpl - closing session
11:02:56.133 [main] TRACE org.hibernate.jdbc.ConnectionManager - performing cleanup
11:02:56.133 [main] DEBUG org.hibernate.jdbc.ConnectionManager - releasing JDBC connection [ (open PreparedStatements: 0, globally: 0) (open ResultSets: 0, globally: 0)]
11:02:56.133 [main] TRACE o.h.c.DriverManagerConnectionProvider - returning connection to pool, pool size: 1
11:02:56.133 [main] TRACE org.hibernate.jdbc.JDBCContext - after transaction completion
11:02:56.133 [main] DEBUG org.hibernate.jdbc.ConnectionManager - transaction completed on session with on_close connection release mode; be sure to close the session to release JDBC resources!
11:02:56.134 [main] TRACE org.hibernate.impl.SessionImpl - after transaction completion
FINISH INSERT
Answer 0 (score: 88)
When you call session.save(), Hibernate generates an INSERT SQL statement. This INSERT SQL is queued up and only issued to the DB during the flush (i.e. session.flush()).
During the flush, if hibernate.jdbc.batch_size is set to some non-zero value, Hibernate will use the batching feature introduced in the JDBC2 API to issue the batched insert SQL to the DB.
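For reference, a minimal sketch of setting that property programmatically with org.hibernate.cfg.Configuration (an assumption about your bootstrap; the same setting can equally live in hibernate.cfg.xml or hibernate.properties):

// Sketch: enable JDBC batching when building the SessionFactory.
Configuration cfg = new Configuration().configure();       // loads hibernate.cfg.xml
cfg.setProperty("hibernate.jdbc.batch_size", "50");        // up to 50 statements per JDBC batch
SessionFactory sessionFactory = cfg.buildSessionFactory();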
For example, if you save() 100 records and your hibernate.jdbc.batch_size is set to 50, then during the flush, instead of issuing the following SQL 100 times:
insert into TableA (id , fields) values (1, 'val1');
insert into TableA (id , fields) values (2, 'val2');
insert into TableA (id , fields) values (3, 'val3');
.........................
insert into TableA (id , fields) values (100, 'val100');
Hibernate will group them in batches and issue only 2 SQL statements to the DB, like this:
insert into TableA (id , fields) values (1, 'val1') , (2, 'val2') ,(3, 'val3') ,(4, 'val4') ,......,(50, 'val50')
insert into TableA (id , fields) values (51, 'val51') , (52, 'val52') ,(53, 'val53') ,(54, 'val54'),...... ,(100, 'val100')
Please note that Hibernate will transparently disable insert batching at the JDBC level if the primary key of the table you are inserting into uses GenerationType.Identity.
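As an illustration (a hypothetical entity sketch, not taken from the question): with an identity-generated key Hibernate has to run each INSERT immediately to learn the generated id, so it cannot batch them.

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class SomeEntity {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY) // identity keys force an immediate INSERT per row, so JDBC batching is disabled
    private Long id;

    private String data;
}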
From your log: you save() only one record and then flush() it, so there is only a single INSERT SQL to process per flush. That's why Hibernate can't batch your inserts - there is nothing to batch. You should save() a certain number of records before calling flush(), rather than calling flush() for every single save().
The best practice for batch insertion is something like this:
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for ( int i = 0; i < 888888; i++ ) {
    TableA record = new TableA();
    record.setXXXX();
    session.save(record);
    if ( i % 50 == 0 ) { // 50, same as the JDBC batch size
        // flush a batch of inserts and release memory:
        session.flush();
        session.clear();
    }
}
tx.commit();
session.close();
You save and flush the records batch by batch. At the end of each batch you should clear the persistence context to release some memory and prevent memory exhaustion, since every persistent object is placed in the first-level cache (your JVM's heap memory). You can also disable the second-level cache to reduce unnecessary overhead.
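If you do want to turn the second-level cache off for a bulk load, it is a configuration property (a sketch, assuming the cfg Configuration object from the earlier snippet; property names as documented by Hibernate):

// Sketch: disable second-level and query caching for the bulk-load run.
cfg.setProperty("hibernate.cache.use_second_level_cache", "false");
cfg.setProperty("hibernate.cache.use_query_cache", "false");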
Answer 1 (score: 6)
If you have to do large batch jobs with Hibernate, StatelessSession is the way to go. It boils things down to the most basic object-to-SQL-statement mapping and eliminates all the overhead of the ORM features you aren't using when shovelling rows into the database wholesale.
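A rough sketch of what that looks like (TableA and setXXXX() are the same placeholders as in the earlier example; StatelessSession and Transaction are org.hibernate classes):

// Sketch: bulk insert via StatelessSession - no first-level cache, no dirty
// checking, no cascades, so there is nothing to flush() or clear().
StatelessSession ss = sessionFactory.openStatelessSession();
Transaction tx = ss.beginTransaction();
for (int i = 0; i < 888888; i++) {
    TableA record = new TableA();
    record.setXXXX();
    ss.insert(record); // executes the INSERT directly against the JDBC connection
}
tx.commit();
ss.close();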
It would be much easier to make suggestions about your actual code than about the log :)
Answer 2 (score: 3)
11:02:56.133 [main] DEBUG o.h.transaction.JDBCTransaction - commit
This says that the database is committing after every single insert. Make sure you are not committing the transaction / closing the session inside your insert loop. Do it once at the end instead.
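In other words, if the loop looks anything like this sketch, restructure it toward committing once (records and TableA are placeholders; the full batched version is in the first answer):

// Anti-pattern the log points to: a session and a transaction per record,
// so every iteration pays for its own commit round trip.
for (TableA record : records) {
    Session session = sessionFactory.openSession();
    Transaction tx = session.beginTransaction();
    session.save(record);
    tx.commit();
    session.close();
}
// Instead: open the session and transaction once, save() inside the loop,
// flush()/clear() periodically, and call tx.commit() a single time at the end.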