I tried installing the LTS release SonarQube 6.7.2 to evaluate its features. When I start SonarQube, it fails to come up and throws the errors shown below in sonar.log. To make sure Elasticsearch had enough memory for the sonarqube-es process, I had already run sysctl -w vm.max_map_count=262144. Is the problem Elasticsearch memory, or memory/disk space on the server?
INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp
directory/opt/sonar/sonarqube-6.7.2/temp
INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='es',
ipcIndex=1, logFilenamePrefix=es]] from [/opt/sonar/sonarqube-6.7.2/elasticsearch]: /opt/sonar/sonarqube-6.7.2/elasticsearch/bin/elasticsearch -Epath.conf=/opt/sonar/sonarqube-6.7.2/temp/conf/es
INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
INFO app[][o.e.p.PluginsService] no modules loaded
INFO app[][o.e.p.PluginsService] loaded plugin
[org.elasticsearch.transport.Netty4Plugin]
INFO app[][o.s.a.SchedulerImpl] Process[es] is up
INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='web',
ipcIndex=2, logFilenamePrefix=web]] from [/opt/sonar/sonarqube-6.7.2]:
/usr/java/jdk1.8.0_101/jre
/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -
Djava.io.tmpdir=/opt/sonar/sonarqube-6.7.2/temp -Xmx512m -Xms128m -
XX:+HeapDumpOnOutOfMemoryError -cp
./lib/common/*:./lib/server/*:/opt/sonar/sonarqube-6.7.2/lib/jdbc/h2/h2-1.3.176.jar
org.sonar.server.app.WebServer /opt/sonar/sonarqube-6.7.2/temp/sq-
process7891544648476349877properties
INFO app[][o.s.a.SchedulerImpl] Process [web] is stopped
INFO app[][o.s.a.SchedulerImpl] Process [es] is stopped
WARN app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value
[es]: 143
INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
<-- Wrapper Stopped
Additionally, es.log shows:
WARN es[][o.e.b.BootstrapChecks] max file descriptors [4096] for
elasticsearch process is too low, increase to at least [65536]
INFO es[][o.e.c.s.ClusterService] new_master {sonarqube}
{NRF09Q0aSau2jbr7-dbo7w}{WOHhcKq8Qrqjrm6kEc0QZg}{127.0.0.1}{127.0.0.1:9001}
{rack_id=sonarqube}, reason: zen-disco-elected-as-master ([0] nodes joined)
INFO es[][o.e.n.Node] started
INFO es[][o.e.g.GatewayService] recovered [8] indices into cluster_state
INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from
[RED] to [GREEN] (reason: [shards started [[metadatas][0]] ...]).
INFO es[][o.e.n.Node] stopping ...
INFO es[][o.e.n.Node] stopped
INFO es[][o.e.n.Node] closing ...
INFO es[][o.e.n.Node] closed
INFO es[][o.e.n.Node] initializing ...
INFO es[][o.e.e.NodeEnvironment] using [1] data paths, mounts [[/opt
(/dev/mapper/rootvg-opt_lv)]], net usable_space [4.3gb], net total_space [
4.9gb], spins? [possibly], types [xfs]
INFO es[][o.e.e.NodeEnvironment] heap size [495.3mb], compressed ordinary
object pointers [true]
INFO es[][o.e.n.Node] node name [sonarqube], node ID [NRF09Q0aSau2jbr7-
dbo7w]
INFO es[][o.e.n.Node] version[5.6.3], pid[47816], build[1a2f265/2017-10-
06T20:33:39.012Z], OS[Linux/3.10.0-327.el7.x86_64/amd64], JVM[Oracle
Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_101/25.101-b13]
INFO es[][o.e.n.Node] JVM arguments [-XX:+UseConcMarkSweepGC, -
XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -
XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8,
-Djna.nosys=true,
-Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -
Dio.netty.noKeySetOptimization=true, -
Dio.netty.recycler.maxCapacityPerThread=0, -
Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -
Dlog4j.skipJansi=true, -Xms512m,
-Xmx512m, -XX:+HeapDumpOnOutOfMemoryError, -
Des.path.home=/opt/sonar/sonarqube-6.7.2/elasticsearch]
INFO es[][o.e.p.PluginsService] loaded module [aggs-matrix-stats]
INFO es[][o.e.p.PluginsService] loaded module [ingest-common]
INFO es[][o.e.p.PluginsService] loaded module [parent-join]
INFO es[][o.e.p.PluginsService] loaded module [percolator]
INFO es[][o.e.p.PluginsService] loaded module [reindex]
INFO es[][o.e.p.PluginsService] loaded module [transport-netty4]
INFO es[][o.e.p.PluginsService] no plugins loaded
15:51:43 INFO es[][o.e.d.DiscoveryModule] using discovery type [zen]
15:51:44 INFO es[][o.e.n.Node] initialized
15:51:44 INFO es[][o.e.n.Node] starting ...
INFO es[][o.e.t.TransportService] publish_address {127.0.0.1:9001},
bound_addresses {127.0.0.1:9001}
WARN es[][o.e.b.BootstrapChecks] max file descriptors [4096] for
elasticsearch process is too low, increase to at least [65536]
INFO es[][o.e.c.s.ClusterService] new_master {sonarqube}{NRF09Q0aSau2jbr7-
dbo7w}{UVPJWQbGSdKpQJZ9dZRwCA}{127.0.0.1}{127.0.0.1:9001}
{rack_id=sonarqube},
reason: zen-disco-elected-as-master ([0] nodes joined)
INFO es[][o.e.n.Node] started
INFO es[][o.e.g.GatewayService] recovered [8] indices into cluster_state
INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from
[RED] to [GREEN] (reason: [shards started [[metadatas][0], [components][0]]
...]).
INFO es[][o.e.n.Node] stopping ...
INFO es[][o.e.n.Node] stopped
INFO es[][o.e.n.Node] closing ...
INFO es[][o.e.n.Node] closed
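
The repeated bootstrap warning about max file descriptors can be fixed independently of whatever stopped the process. A minimal sketch of the persistent settings, assuming SonarQube runs as a dedicated Linux user named "sonar" (the user name and file locations are assumptions for illustration):

```
# /etc/security/limits.conf -- raise the open-file limit to the value
# the Elasticsearch bootstrap check asks for
sonar  -  nofile  65536

# /etc/sysctl.conf -- make the mmap-count setting survive reboots
# (apply without rebooting via `sysctl -p`)
vm.max_map_count=262144
```

After changing limits.conf, the "sonar" user must log in again (or the service must be restarted from a fresh session) for the new limit to take effect.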
Answer 0 (score: 0)
In my case, it turned out to be caused by a port conflict with Kafka.
SonarQube uses an embedded database by default, and its default port is 9092, which is also Kafka's default port (the error showed up in sonarqube/logs/web.log as "java.net.BindException: Address already in use").
Fix: uncomment the property "sonar.embeddedDatabase.port" and change it to a value other than 9092.
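
A minimal sketch of that fix in conf/sonar.properties (9093 is an arbitrary example; any free port works):

```
# conf/sonar.properties
# The embedded H2 database defaults to port 9092, which clashes with Kafka.
# Uncomment and point it at a free port:
sonar.embeddedDatabase.port=9093
```

Note that the embedded H2 database is only intended for evaluation; for real use, SonarQube should be pointed at an external database, which sidesteps this port entirely.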