GlassFish cluster shuts down unexpectedly

Asked: 2012-11-05 15:46:47

Tags: java glassfish shutdown application-server

I have the following problem. Our GlassFish 3.1 server is configured with a cluster used for some heavy processing. Occasionally, though rarely, the cluster shuts down without any known cause. I cannot find any exception in the logs; the servers simply log shutdown messages. I have attached two log file fragments, one from the DAS and one from a cluster instance. All nodes are on the same machine. Any help with this would be welcome.

Thanks

DAS
11:46:45,322 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/E:/Glassfish3/glassfish/domains/domain1/generated/jsp/managementtool-1.12.0.20/loader_1608711619/logback.xml]
11:46:45,416 |-ERROR in ch.qos.logback.classic.joran.action.InsertFromJNDIAction - [SMTPServer] has null or empty value
11:46:45,416 |-INFO in ch.qos.logback.classic.joran.action.InsertFromJNDIAction - Setting context variable [LOG_LOCATION] to [E:\\v1\\logs]
11:46:45,416 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.rolling.RollingFileAppender]
11:46:45,416 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [FILE]
11:46:45,478 |-INFO in c.q.l.core.rolling.TimeBasedRollingPolicy - No compression will be used
11:46:45,494 |-INFO in c.q.l.core.rolling.TimeBasedRollingPolicy - Will use the pattern E://v1//logs/managementtool/spm-%d{yyyy-MM-dd}.%i.log for the active file
11:46:45,494 |-INFO in ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP@468b2c18 - The date pattern is 'yyyy-MM-dd' from file name pattern 'E://v1//logs/managementtool/spm-%d{yyyy-MM-dd}.%i.log'.
11:46:45,494 |-INFO in ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP@468b2c18 - Roll-over at midnight.
11:46:45,494 |-INFO in ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP@468b2c18 - Setting initial period to Fri Jun 29 11:46:45 EEST 2012
11:46:45,494 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
11:46:45,588 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - Active log file name: E://v1//logs/managementtool/spm-2012-06-29.0.log
11:46:45,588 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - File property is set to [null]
11:46:45,588 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
11:46:45,603 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDOUT]
11:46:45,603 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
11:46:45,603 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to INFO
11:46:45,603 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to Logger[ROOT]
11:46:45,603 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.mypackage.central] to DEBUG
11:46:45,603 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.mypackage.managementtool] to DEBUG
11:46:45,603 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
11:46:45,603 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@80e1ba6 - Registering current configuration as safe fallback

[#|2012-06-29T11:47:09.479+0300|INFO|glassfish3.1.1|javax.enterprise.system.tools.admin.org.glassfish.deployment.admin|_ThreadID=111;_ThreadName=Thread-2;|managementtool-1.12.0.20 was successfully deployed in 49,564 milliseconds.|#]

[#|2012-06-29T11:49:57.263+0300|INFO|glassfish3.1.1|ShoalLogger|_ThreadID=19;_ThreadName=Thread-2;|GMS1015: Received Group Shutting down message from member: server of group: POD_Processing_Cl01|#]

[#|2012-06-29T11:49:57.263+0300|INFO|glassfish3.1.1|ShoalLogger|_ThreadID=18;_ThreadName=Thread-2;|GMS1092: GMS View Change Received for group: POD_Processing_Cl01 : Members in view for MASTER_CHANGE_EVENT(before change analysis) are :
1: MemberId: POD_Processing_Cl01_ins01, MemberType: CORE, Address: 10.220.10.67:9171:228.9.103.177:16084:POD_Processing_Cl01:POD_Processing_Cl01_ins01
2: MemberId: POD_Processing_Cl01_ins01, MemberType: CORE, Address: 10.220.20.110:9181:228.9.103.177:16084:POD_Processing_Cl01:POD_Processing_Cl01_ins01
3: MemberId: POD_Processing_Cl01_ins01, MemberType: CORE, Address: 10.220.20.195:9134:228.9.103.177:16084:POD_Processing_Cl01:POD_Processing_Cl01_ins01
4: MemberId: POD_Processing_Cl01_ins02, MemberType: CORE, Address: 10.220.10.67:9106:228.9.103.177:16084:POD_Processing_Cl01:POD_Processing_Cl01_ins02
5: MemberId: POD_Processing_Cl01_ins02, MemberType: CORE, Address: 10.220.20.110:9152:228.9.103.177:16084:POD_Processing_Cl01:POD_Processing_Cl01_ins02
6: MemberId: POD_Processing_Cl01_ins02, MemberType: CORE, Address: 10.220.20.195:9090:228.9.103.177:16084:POD_Processing_Cl01:POD_Processing_Cl01_ins02
7: MemberId: server, MemberType: SPECTATOR, Address: 10.220.10.67:9157:228.9.103.177:16084:POD_Processing_Cl01:server
8: MemberId: server, MemberType: SPECTATOR, Address: 10.220.20.110:9154:228.9.103.177:16084:POD_Processing_Cl01:server
9: MemberId: server, MemberType: SPECTATOR, Address: 10.220.20.195:9149:228.9.103.177:16084:POD_Processing_Cl01:server
10: MemberId: server, MemberType: SPECTATOR, Address: 10.220.20.197:9129:228.9.103.177:16084:POD_Processing_Cl01:server

Cluster
2012-06-29 11:43:27,396 [com-mypackage-worker-Execution slot for task {runId=4121, environmentId=33}}] DEBUG envId{33} - sesId{} - com.mypackage.worker.taskprocessing.flow.stage.DispatcherNotificationStage[61] - Beginning stage WORKER_DISPATCHER_NOTIFICATION...
2012-06-29 11:43:27,396 [com-mypackage-worker-Execution slot for task {runId=4121, environmentId=33}}] INFO  envId{33} - sesId{} - com.mypackage.worker.communication.jms.JMSCompletionMessageSender[45] - Sending to Dispatcher JMS completion message for task {runId=4121, environmentId=33}, status=RUN_COMPLETED}...
2012-06-29 11:43:27,411 [com-mypackage-worker-Execution slot for task {runId=4121, environmentId=33}}] INFO  envId{33} - sesId{} - com.mypackage.worker.communication.jms.JMSCompletionMessageSender[45] - Sending to Dispatcher JMS completion message for task {runId=4121, environmentId=33}, status=RUN_COMPLETED}...DONE
2012-06-29 11:43:27,411 [com-mypackage-worker-Execution slot for task {runId=4121, environmentId=33}}] DEBUG envId{33} - sesId{} - com.mypackage.worker.taskprocessing.flow.stage.DispatcherNotificationStage[61] - Ending stage WORKER_DISPATCHER_NOTIFICATION...
2012-06-29 11:43:27,411 [com-mypackage-worker-Execution slot for task {runId=4121, environmentId=33}}] INFO  envId{33} - sesId{} - com.mypackage.worker.taskprocessing.slotpool.SlotPool[49] - Releasing execution slot for completed task {runId=4121, environmentId=33}}...
2012-06-29 11:43:27,411 [com-mypackage-worker-Execution slot for task {runId=4121, environmentId=33}}] INFO  envId{33} - sesId{} - com.mypackage.worker.taskprocessing.slotpool.SlotPool[49] - Releasing execution slot for completed task {runId=4121, environmentId=33}}...DONE
2012-06-29 11:43:27,411 [com-mypackage-worker-Execution slot for task {runId=4121, environmentId=33}}] DEBUG envId{33} - sesId{} - com.mypackage.worker.taskprocessing.slotpool.SlotPool[61] - Ending stage WORKER_SLOT_POOL...task {runId=4121, environmentId=33}} 
2012-06-29 11:43:27,411 [com-mypackage-worker-Execution slot for task {runId=4121, environmentId=33}}] DEBUG envId{33} - sesId{} - com.mypackage.worker.taskprocessing.ProcessingSlot[61] - Ending method doProcess()...
2012-06-29 11:43:27,411 [com-mypackage-worker-Execution slot for task {runId=4121, environmentId=33}}] DEBUG envId{33} - sesId{} - com.mypackage.worker.taskprocessing.ProcessingSlot[61] - Ending stage WORKER_SLOT...
2012-06-29 11:44:00,443 [admin-thread-pool-24850(5)] DEBUG envId{} - sesId{} - com.mypackage.worker.service.WorkerInstancePropertiesService[61] - Beginning method destroy()...
2012-06-29 11:44:00,459 [admin-thread-pool-24850(5)] DEBUG envId{} - sesId{} - com.mypackage.worker.service.WorkerInstancePropertiesService[61] - Ending method destroy()...
2012-06-29 11:44:00,459 [admin-thread-pool-24850(5)] DEBUG envId{} - sesId{} - com.mypackage.worker.WorkerImpl[61] - Beginning method destroy()...
2012-06-29 11:44:00,459 [admin-thread-pool-24850(5)] DEBUG envId{} - sesId{} - com.mypackage.worker.WorkerImpl[61] - Ending method destroy()...
2012-06-29 11:44:00,459 [admin-thread-pool-24850(5)] DEBUG envId{} - sesId{} - com.mypackage.worker.taskrepository.TaskPreallocationQueue[61] - Beginning method shutdown()...
2012-06-29 11:44:00,459 [admin-thread-pool-24850(5)] DEBUG envId{} - sesId{} - com.mypackage.worker.taskrepository.TaskPreallocationQueue[61] - Ending method shutdown()...
2012-06-29 11:44:00,459 [admin-thread-pool-24850(5)] INFO  envId{} - sesId{} - org.springframework.context.support.ClassPathXmlApplicationContext[1020] - Closing org.springframework.context.support.ClassPathXmlApplicationContext@aedb61f: startup date [Fri Jun 29 11:31:34 EEST 2012]; root of context hierarchy
2012-06-29 11:44:00,459 [admin-thread-pool-24850(5)] DEBUG envId{} - sesId{} - com.mypackage.appframework.persistence.PersistenceServiceImpl[74] - Destroying datasource for environment id : 33
2012-06-29 11:44:00,599 [admin-thread-pool-24850(5)] INFO  envId{} - sesId{} - com.mypackage.appframework.persistence.PersistenceServiceImpl[76] - Destroyed data source for environment id : 33
2012-06-29 11:44:00,599 [admin-thread-pool-24850(5)] INFO  envId{} - sesId{} - org.hibernate.impl.SessionFactoryImpl[925] - closing
2012-06-29 11:44:00,615 [admin-thread-pool-24850(5)] INFO  envId{} - sesId{} - org.springframework.jmx.export.MBeanExporter[429] - Unregistering JMX-exposed beans on shutdown
2012-06-29 11:44:00,646 [Duration logger] INFO  envId{} - sesId{} - com.mypackage.appframework.util.Consumer[75] - Duration logger thread is shutting down
2012-06-29 11:44:00,646 [Alert reporter] INFO  envId{} - sesId{} - com.mypackage.appframework.util.Consumer[75] - Alert reporter thread is shutting down
2012-06-29 11:44:00,709 [admin-thread-pool-24850(5)] INFO  envId{} - sesId{} - com.mypackage.appframework.cache.metadata.MetadataRegistrationService[78] - Successfully unregistered for environment with id 33 and name Env1
2012-06-29 11:44:00,724 [admin-thread-pool-24850(5)] INFO  envId{} - sesId{} - org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean[441] - Closing JPA EntityManagerFactory for persistence unit 'central'
2012-06-29 11:44:00,740 [admin-thread-pool-24850(5)] INFO  envId{} - sesId{} - org.hibernate.impl.SessionFactoryImpl[925] - closing
2012-06-29 11:44:00,740 [admin-thread-pool-24850(5)] DEBUG envId{} - sesId{} - com.mypackage.central.persistence.CentralDataSource[57] - Destroying central data source.
2012-06-29 11:44:00,818 [admin-thread-pool-24850(5)] DEBUG envId{} - sesId{} - com.mypackage.webframework.config.LogbackContextListener[77] - Stopping the loggerContext for worker on the context destroy event

1 Answer:

Answer 0 (score: 0)

It appears that multiple clusters on different machines (different IP addresses) share the same GMS multicast address and port. That is why, when one cluster receives a shutdown message, the other clusters receive it as well and act on it accordingly. Note in the member view below that every entry, regardless of host IP, carries the same multicast address and port (228.9.103.177:16084):

1: MemberId: POD_Processing_Cl01_ins01, MemberType: CORE, Address: 10.220.10.67:9171:228.9.103.177:16084:POD_Processing_Cl01:POD_Processing_Cl01_ins01
2: MemberId: POD_Processing_Cl01_ins01, MemberType: CORE, Address: 10.220.20.110:9181:228.9.103.177:16084:POD_Processing_Cl01:POD_Processing_Cl01_ins01
3: MemberId: POD_Processing_Cl01_ins01, MemberType: CORE, Address: 10.220.20.195:9134:228.9.103.177:16084:POD_Processing_Cl01:POD_Processing_Cl01_ins01
4: MemberId: POD_Processing_Cl01_ins02, MemberType: CORE, Address: 10.220.10.67:9106:228.9.103.177:16084:POD_Processing_Cl01:POD_Processing_Cl01_ins02
5: MemberId: POD_Processing_Cl01_ins02, MemberType: CORE, Address: 10.220.20.110:9152:228.9.103.177:16084:POD_Processing_Cl01:POD_Processing_Cl01_ins02
6: MemberId: POD_Processing_Cl01_ins02, MemberType: CORE, Address: 10.220.20.195:9090:228.9.103.177:16084:POD_Processing_Cl01:POD_Processing_Cl01_ins02
7: MemberId: server, MemberType: SPECTATOR, Address: 10.220.10.67:9157:228.9.103.177:16084:POD_Processing_Cl01:server
8: MemberId: server, MemberType: SPECTATOR, Address: 10.220.20.110:9154:228.9.103.177:16084:POD_Processing_Cl01:server
9: MemberId: server, MemberType: SPECTATOR, Address: 10.220.20.195:9149:228.9.103.177:16084:POD_Processing_Cl01:server
10: MemberId: server, MemberType: SPECTATOR, Address: 10.220.20.197:9129:228.9.103.177:16084:POD_Processing_Cl01:server
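If that diagnosis is correct, one possible remedy is to give each cluster its own GMS multicast address and port. The following is a sketch, assuming the GlassFish 3.1 `asadmin` CLI and the cluster name `POD_Processing_Cl01` from the logs; the multicast address and port values shown are placeholders you would choose so that no two clusters on the network overlap:

```shell
# Inspect the cluster's current GMS multicast settings
asadmin get clusters.cluster.POD_Processing_Cl01.gms-multicast-address
asadmin get clusters.cluster.POD_Processing_Cl01.gms-multicast-port

# Assign a multicast address/port unique to this cluster
# (228.9.104.10 and 17001 are illustrative placeholders)
asadmin set clusters.cluster.POD_Processing_Cl01.gms-multicast-address=228.9.104.10
asadmin set clusters.cluster.POD_Processing_Cl01.gms-multicast-port=17001

# Verify that only the intended instances respond on the new address/port
asadmin validate-multicast --multicastaddress 228.9.104.10 --multicastport 17001
```

The changed settings take effect only after the cluster (DAS and instances) is restarted, and `validate-multicast` should be run on each host to confirm that members of other clusters no longer appear in the responses.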