A single indexed SELECT query against our Postgres database takes a wildly variable amount of time: from 50 ms to several seconds, and occasionally even minutes, even under the lightest load.
Our Postgres query log records anything over 10 ms but never records any of these queries. The EXPLAIN output shows the query is not especially efficient, but on a database this tiny (000 records) it should not be slow, and we trust the Postgres logs.
Our application logging is set to report all Hibernate, C3P0 and Spring/Spring Data logging (see version numbers at the end). The evidence points to this being very much a Hibernate/C3P0 problem, yet everything in the logs suggests that pool size and utilisation are fine at the time. Unfortunately, we have not been able to dig any deeper.
Can you suggest an explanation for the 26-second gap below?
10:19:29.149 DEBUG org.hibernate.SQL [I=9534] - select eventrepor0_.consortium_id as consorti1_3_3_, eventrepor0_.customer_resource_id as customer6_3_3_, eventrepor0_.item_type_id as item2_3_3_, eventrepor0_.reporting_date as reportin3_3_3_, eventrepor0_.event_subtype as event4_3_3_, eventrepor0_.event_count as event5_3_3_, customerre1_.id as id1_2_0_, customerre1_.customer_id as customer2_2_0_, customerre1_.resource_id as resource3_2_0_, resource2_.id as id1_8_1_, resource2_.data_type_id as data2_8_1_, resource2_.platform_id as platform5_8_1_, resource2_.prop_id as prop3_8_1_, resource2_.title as title4_8_1_, resource2_1_.doi as doi1_6_1_, resource2_1_.isbn as isbn2_6_1_, resource2_1_.online_issn as online3_6_1_, resource2_1_.print_issn as print4_6_1_, resource2_1_.publisher as publishe5_6_1_, resource2_1_.yop as yop6_6_1_, case when resource2_1_.id is not null then 1 when resource2_.id is not null then 0 end as clazz_1_, platform3_.id as id1_4_2_, platform3_.api_key as api2_4_2_, platform3_.platform_name as platform3_4_2_, hostnames4_.platform_id as platform1_4_5_, hostnames4_.hostname as hostname2_5_5_ from event_report eventrepor0_ inner join customer_resource customerre1_ on eventrepor0_.customer_resource_id=customerre1_.id left outer join resource resource2_ on customerre1_.resource_id=resource2_.id left outer join published_resource resource2_1_ on resource2_.id=resource2_1_.id left outer join platform platform3_ on resource2_.platform_id=platform3_.id left outer join platform_hostnames hostnames4_ on platform3_.id=hostnames4_.platform_id where eventrepor0_.consortium_id=? and eventrepor0_.customer_resource_id=? and eventrepor0_.item_type_id=? and eventrepor0_.reporting_date=? and eventrepor0_.event_subtype=?
10:19:29.149 DEBUG c.m.v.a.ThreadPoolAsynchronousRunner [I=9534] - com.mchange.v2.async.ThreadPoolAsynchronousRunner@4ffa2724: Adding task to queue -- com.mchange.v2.c3p0.stmt.GooGooStatementCache$1StmtAcquireTask@31e6b320
10:19:29.149 DEBUG c.m.v.c3p0.stmt.GooGooStatementCache [I=9534] - CULLING: update event_report set event_count=event_count+1 where customer_resource_id=? and item_type_id=? and event_subtype=? and reporting_date=? and consortium_id=?
10:19:29.149 DEBUG c.m.v.a.ThreadPoolAsynchronousRunner [I=9534] - com.mchange.v2.async.ThreadPoolAsynchronousRunner@4ffa2724: Adding task to queue -- com.mchange.v2.c3p0.stmt.GooGooStatementCache$StatementDestructionManager$1UncheckedStatementCloseTask@20fa1378
10:19:29.149 DEBUG c.m.v.c3p0.stmt.GooGooStatementCache [I=9534] - cxnStmtMgr.statementSet( org.postgresql.jdbc4.Jdbc4Connection@38e040d2 ).size(): 5
10:19:29.150 DEBUG c.m.v.c3p0.stmt.GooGooStatementCache [I=9534] - checkoutStatement: com.mchange.v2.c3p0.stmt.GlobalMaxOnlyStatementCache stats -- total size: 20; checked out: 5; num connections: 6; num keys: 20
10:19:29.150 TRACE o.h.e.j.internal.JdbcCoordinatorImpl [I=9534] - Registering statement [com.mchange.v2.c3p0.impl.NewProxyPreparedStatement@7cc20161]
10:19:29.150 TRACE o.h.e.j.internal.JdbcCoordinatorImpl [I=9534] - Registering last query statement [com.mchange.v2.c3p0.impl.NewProxyPreparedStatement@7cc20161]
10:19:29.150 TRACE o.h.type.descriptor.sql.BasicBinder [I=9534] - binding parameter [1] as [VARCHAR] -
10:19:29.150 TRACE o.h.type.descriptor.sql.BasicBinder [I=9534] - binding parameter [2] as [BIGINT] - 47
10:19:29.150 TRACE org.hibernate.type.EnumType [I=9534] - Binding [SEARCH_REG] to parameter: [3]
10:19:29.150 TRACE o.h.type.descriptor.sql.BasicBinder [I=9534] - binding parameter [4] as [TIMESTAMP] - Tue Jul 16 00:00:00 BST 2013
10:19:29.151 TRACE o.h.type.descriptor.sql.BasicBinder [I=9534] - binding parameter [5] as [VARCHAR] -
10:19:29.151 TRACE org.hibernate.loader.Loader [I=9534] - Bound [6] parameters total
[... massive gap ...]
10:19:55.644 TRACE o.h.e.j.internal.JdbcCoordinatorImpl [I=9534] - Registering result set [com.mchange.v2.c3p0.impl.NewProxyResultSet@fa7b109]
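To narrow down where those 26 seconds go, one check worth running is to time the same statement over plain JDBC, bypassing Hibernate and the c3p0 statement cache. The following is only a rough sketch (the connection string and bind values are placeholders, and the select list is simplified), not our production code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;

public class QueryTimingCheck {
    public static void main(String[] args) throws Exception {
        // Same predicate as the Hibernate-generated query, with a simplified select list.
        String sql = "select count(*) from event_report"
                   + " where consortium_id = ? and customer_resource_id = ?"
                   + " and item_type_id = ? and reporting_date = ? and event_subtype = ?";
        try (Connection con = DriverManager.getConnection(
                 "jdbc:postgresql://host/db", "user", "password");    // placeholder URL/credentials
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, "CONSORTIUM");                            // placeholder bind values
            ps.setLong(2, 47L);
            ps.setString(3, "SEARCH_REG");
            ps.setTimestamp(4, Timestamp.valueOf("2013-07-16 00:00:00"));
            ps.setString(5, "SUBTYPE");
            long start = System.nanoTime();
            try (ResultSet rs = ps.executeQuery()) {                  // time only the execution
                rs.next();
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("executeQuery took " + elapsedMs + " ms");
        }
    }
}

If the plain-JDBC timing stays in the tens of milliseconds, the delay is somewhere in the Hibernate/c3p0 layer rather than in Postgres, which would be consistent with the empty slow-query log.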
A note on concurrency: there is plenty of variability even with only a single request in flight: 50-300 ms end to end. But when a user submits a batch of around 100 of these lookups (perhaps 10-20 of them running concurrently), a few of them take 5-10 seconds. Even then, C3P0's statistics never look any worse than:
com.mchange.v2.c3p0.stmt.GlobalMaxOnlyStatementCache stats -- total size: 20; checked out: 6; num connections: 6; num keys: 20
These are very powerful servers, and there is no noticeable disk, network or CPU activity. We monitor with New Relic.
Our data source setup:
ComboPooledDataSource dataSource = new com.mchange.v2.c3p0.ComboPooledDataSource();
dataSource.setInitialPoolSize(5);
dataSource.setMaxPoolSize(20);
dataSource.setMinPoolSize(5);
dataSource.setMaxStatements(20);
dataSource.setIdleConnectionTestPeriod(3600);
dataSource.setTestConnectionOnCheckin(true);
dataSource.setPreferredTestQuery("select 1");
JPA properties:
props.put("hibernate.dialect", "org.hibernate.dialect.PostgreSQL82Dialect");
props.put("hibernate.show_sql", "false");
props.put("generate_statistics", "false");
props.put("javax.persistence.sharedCache.mode", "ENABLE_SELECTIVE");
props.put("javax.persistence.validation.mode", "NONE");
props.put("hibernate.cache.use_second_level_cache", "false");
props.put("hibernate.cache.region.factory_class", "org.hibernate.cache.impl.NoCachingRegionFactory");
props.put("hibernate.hbm2ddl.auto", "false");
Versions: Postgres 9.1.7 with the latest 9.2 JDBC driver; Hibernate 4.2.3.Final; C3P0 0.9.2.1; Spring 3.2.2.RELEASE; Spring Data JPA 1.1.0; Tomcat 7; JDK 1.7.
Update: the C3P0 properties we are now using (after switching to maxStatementsPerConnection):
c.m.v.c.i.AbstractPoolBackedDataSource [] - Initializing c3p0 pool... com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, dataSourceName -> 2s05p58v1s6oref13lw967|538ab4bc, debugUnreturnedConnectionStackTraces -> false, description -> null, driverClass -> org.postgresql.Driver, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, identityToken -> 2s05p58v1s6oref13lw967|538ab4bc, idleConnectionTestPeriod -> 1800, initialPoolSize -> 5, jdbcUrl -> jdbc:postgresql://******, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 20, maxStatements -> 0, maxStatementsPerConnection -> 20, minPoolSize -> 5, numHelperThreads -> 3, preferredTestQuery -> select 1, properties -> {user=******, password=******}, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> true, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, userOverrides -> {}, usesTraditionalReflectiveProxies -> false ]
Answer (score: 2)
I can't be sure this is the cause of your problem, but I think it's a pretty good shot!
You have set maxStatements to a value that is far too low for the load you are carrying. Try setting maxStatements to zero (turning statement caching off), or try setting maxStatementsPerConnection to 20, which I think is what you probably intended. As it stands, you have configured a global maximum of 20 PreparedStatements shared across up to 20 Connections. That is unlikely to perform well.
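Applied to the data source setup from the question, the change would look roughly like this (a sketch only; the per-connection limit of 20 is a suggestion to tune, not a verified value):

ComboPooledDataSource dataSource = new com.mchange.v2.c3p0.ComboPooledDataSource();
dataSource.setInitialPoolSize(5);
dataSource.setMinPoolSize(5);
dataSource.setMaxPoolSize(20);
dataSource.setMaxStatements(0);                  // no global cap on cached statements
dataSource.setMaxStatementsPerConnection(20);    // cache up to 20 statements per connection instead
dataSource.setIdleConnectionTestPeriod(3600);
dataSource.setTestConnectionOnCheckin(true);
dataSource.setPreferredTestQuery("select 1");

With maxStatements left at 0 and maxStatementsPerConnection set, each pooled connection keeps its own small cache rather than all connections competing for 20 cached statements globally.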