I am running Elasticsearch 2.3 on Debian 8, in a 3-node cluster on Google Cloud Platform. I am now trying to enable dynamic scripting with the following lines in elasticsearch.yml:
script.inline: on
script.indexed: on
script.engine.groovy.inline.aggs: on
script.groovy.sandbox.enabled: false
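(As a side note, the 2.x reference seems to write these settings with boolean values rather than on/off, and as far as I know the Groovy sandbox option is a 1.x-era setting; the boolean form would presumably look like the lines below, though that is my reading of the docs rather than something I have verified on this cluster.)
script.inline: true
script.indexed: true
script.engine.groovy.inline.aggs: true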
Now, to make the change take effect on all nodes, I tried to restart all 3 nodes by running the following command in the terminal on each of them:
sudo service elasticsearch restart
The problem I am facing now is that when I try to run any Elasticsearch query from the terminal, I get the following error on all 3 nodes:
curl http://localhost:9200
curl: (7) Failed to connect to localhost port 9200: Connection refused
Can anyone tell me why none of my nodes is listening on port 9200, even though I restarted every one of them? I suspect something went wrong while restarting the cluster, which is why I think "sudo service elasticsearch restart" did not restart my ES servers successfully. Can someone please explain why "sudo service elasticsearch restart" is not working for me, or, if this is not the correct way to restart the nodes of a cluster, what the correct way actually is?
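For reference, since Debian 8 is systemd-based, these are the commands I would expect to use to restart a node and immediately see whether it came back up (assuming the unit is installed under its default name, elasticsearch):
sudo systemctl restart elasticsearch
sudo systemctl status elasticsearch -l
sudo journalctl -u elasticsearch --since "10 minutes ago"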
I get the following output when I run the command below:
golumyntra@elasticsearch-cluster-1-vm:~$ sudo /etc/init.d/elasticsearch status
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled)
Active: failed (Result: exit-code) since Fri 2016-09-09 18:36:25 UTC; 1h 11min ago
Docs: http://www.elastic.co
Process: 5289 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -Des.pidfile=${PID_DIR}/elasticsearch.pid -Des.default.path.home=${ES_HOME} -Des.default.path.logs=${LOG_DIR} -Des.default.path.data=${DATA_DIR} -Des.default.path.conf=${CONF_DIR} (code=exited, status=1/FAILURE)
Process: 5286 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 5289 (code=exited, status=1/FAILURE)
Sep 09 18:36:25 elasticsearch-cluster-1-vm elasticsearch[5289]: at java.nio.file.spi.FileSystemProvider.new...4)
Sep 09 18:36:25 elasticsearch-cluster-1-vm elasticsearch[5289]: at java.nio.file.Files.newInputStream(Files...2)
Sep 09 18:36:25 elasticsearch-cluster-1-vm elasticsearch[5289]: at org.elasticsearch.common.settings.Settin...7)
Sep 09 18:36:25 elasticsearch-cluster-1-vm elasticsearch[5289]: at org.elasticsearch.node.internal.Internal...8)
Sep 09 18:36:25 elasticsearch-cluster-1-vm elasticsearch[5289]: at org.elasticsearch.bootstrap.Bootstrap.in...2)
Sep 09 18:36:25 elasticsearch-cluster-1-vm elasticsearch[5289]: at org.elasticsearch.bootstrap.Bootstrap.in...1)
Sep 09 18:36:25 elasticsearch-cluster-1-vm elasticsearch[5289]: at org.elasticsearch.bootstrap.Elasticsearc...5)
Sep 09 18:36:25 elasticsearch-cluster-1-vm elasticsearch[5289]: Refer to the log for complete error details.
Sep 09 18:36:25 elasticsearch-cluster-1-vm systemd[1]: elasticsearch.service: main process exited, code=ex...URE
Sep 09 18:36:25 elasticsearch-cluster-1-vm systemd[1]: Unit elasticsearch.service entered failed state.
Hint: Some lines were ellipsized, use -l to show in full.
When I try to get the Elasticsearch status with the following command, I get this error:
golumyntra@elasticsearch-cluster-1-vm:~$ sudo service elasticsearch status
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled)
Active: failed (Result: exit-code) since Fri 2016-09-09 18:36:25 UTC; 1h 16min ago
Docs: http://www.elastic.co
Process: 5289 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -Des.pidfile=${PID_DIR}/elasticsearch.pid -Des.default.path.home=${ES_HOME} -Des.default.path.logs=${LOG_DIR} -Des.default.path.data=${DATA_DIR} -Des.default.path.conf=${CONF_DIR} (code=exited, status=1/FAILURE)
Process: 5286 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 5289 (code=exited, status=1/FAILURE)
Sep 09 18:36:25 elasticsearch-cluster-1-vm elasticsearch[5289]: at java.nio.file.spi.FileSystemProvider.ne...4)
Sep 09 18:36:25 elasticsearch-cluster-1-vm elasticsearch[5289]: at java.nio.file.Files.newInputStream(File...2)
Sep 09 18:36:25 elasticsearch-cluster-1-vm elasticsearch[5289]: at org.elasticsearch.common.settings.Setti...7)
Sep 09 18:36:25 elasticsearch-cluster-1-vm elasticsearch[5289]: at org.elasticsearch.node.internal.Interna...8)
Sep 09 18:36:25 elasticsearch-cluster-1-vm elasticsearch[5289]: at org.elasticsearch.bootstrap.Bootstrap.i...2)
Sep 09 18:36:25 elasticsearch-cluster-1-vm elasticsearch[5289]: at org.elasticsearch.bootstrap.Bootstrap.i...1)
Sep 09 18:36:25 elasticsearch-cluster-1-vm elasticsearch[5289]: at org.elasticsearch.bootstrap.Elasticsear...5)
Sep 09 18:36:25 elasticsearch-cluster-1-vm elasticsearch[5289]: Refer to the log for complete error details.
Sep 09 18:36:25 elasticsearch-cluster-1-vm systemd[1]: elasticsearch.service: main process exited, code=e...URE
Sep 09 18:36:25 elasticsearch-cluster-1-vm systemd[1]: Unit elasticsearch.service entered failed state.
Hint: Some lines were ellipsized, use -l to show in full.
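The status output above truncates the stack trace, so the real exception has to be read from the journal or from the Elasticsearch log files themselves; something along these lines, where the log path assumes the default Debian package layout and my cluster name:
sudo journalctl -u elasticsearch -l --no-pager | tail -n 50
sudo tail -n 100 /var/log/elasticsearch/elasticsearch-cluster.log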
My elasticsearch.yml is given below:
script.inline: on
script.indexed: on
script.engine.groovy.inline.aggs: on
script.groovy.sandbox.enabled: false
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html>
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: elasticsearch-cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: "elasticsearch-cluster-3-vm"
#
# Add custom attributes to the node:
#
# node.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch/data
#
# Path to log files:
#
# path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.mlockall: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["10.140.0.8", "10.140.0.7", "10.140.0.9"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
# gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html>
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
# node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true
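One side note on this file: with a 3-node cluster, the split-brain formula quoted in the template (total number of nodes / 2 + 1) works out to 2, so if I uncommented that setting I would presumably use:
discovery.zen.minimum_master_nodes: 2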
Below is the log file elasticsearch-cluster.log:
[2016-09-09 17:34:17,440][INFO ][node ] [elasticsearch-cluster-1-vm] version[2.3.4], pid[4803], build[e455fd0/2016-06-30T11:24:31Z]
[2016-09-09 17:34:17,447][INFO ][node ] [elasticsearch-cluster-1-vm] initializing ...
[2016-09-09 17:34:18,641][INFO ][plugins ] [elasticsearch-cluster-1-vm] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-09-09 17:34:18,685][INFO ][env ] [elasticsearch-cluster-1-vm] using [1] data paths, mounts [[/elasticsearch (/dev/sdb)]], net usable_space [9.1gb], net total_space [9.7gb], spins? [possibly], types [ext4]
[2016-09-09 17:34:18,686][INFO ][env ] [elasticsearch-cluster-1-vm] heap size [1015.6mb], compressed ordinary object pointers [true]
[2016-09-09 17:34:18,686][WARN ][env ] [elasticsearch-cluster-1-vm] max file descriptors [65535] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-09-09 17:34:23,869][INFO ][node ] [elasticsearch-cluster-1-vm] initialized
[2016-09-09 17:34:23,870][INFO ][node ] [elasticsearch-cluster-1-vm] starting ...
[2016-09-09 17:34:24,010][INFO ][transport ] [elasticsearch-cluster-1-vm] publish_address {10.140.0.8:9300}, bound_addresses {[::]:9300}
[2016-09-09 17:34:24,020][INFO ][discovery ] [elasticsearch-cluster-1-vm] elasticsearch-cluster/KrC2l77BSxWBkp9SaTPiUQ
[2016-09-09 17:34:25,605][INFO ][node ] [elasticsearch-cluster-1-vm] stopping ...
[2016-09-09 17:34:25,622][INFO ][node ] [elasticsearch-cluster-1-vm] stopped
[2016-09-09 17:34:25,622][INFO ][node ] [elasticsearch-cluster-1-vm] closing ...
[2016-09-09 17:34:25,632][INFO ][node ] [elasticsearch-cluster-1-vm] closed
[2016-09-09 17:34:26,714][INFO ][node ] [elasticsearch-cluster-1-vm] version[2.3.4], pid[4904], build[e455fd0/2016-06-30T11:24:31Z]
[2016-09-09 17:34:26,716][INFO ][node ] [elasticsearch-cluster-1-vm] initializing ...
[2016-09-09 17:34:27,610][INFO ][plugins ] [elasticsearch-cluster-1-vm] modules [reindex, lang-expression, lang-groovy], plugins [license, marvel-agent], sites []
[2016-09-09 17:34:27,638][INFO ][env ] [elasticsearch-cluster-1-vm] using [1] data paths, mounts [[/elasticsearch (/dev/sdb)]], net usable_space [9.1gb], net total_space [9.7gb], spins? [possibly], types [ext4]
[2016-09-09 17:34:27,639][INFO ][env ] [elasticsearch-cluster-1-vm] heap size [1015.6mb], compressed ordinary object pointers [true]
[2016-09-09 17:34:27,640][WARN ][env ] [elasticsearch-cluster-1-vm] max file descriptors [65535] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-09-09 17:34:30,764][INFO ][node ] [elasticsearch-cluster-1-vm] initialized
[2016-09-09 17:34:30,767][INFO ][node ] [elasticsearch-cluster-1-vm] starting ...
[2016-09-09 17:34:30,880][INFO ][transport ] [elasticsearch-cluster-1-vm] publish_address {10.140.0.8:9300}, bound_addresses {[::]:9300}
[2016-09-09 17:34:30,886][INFO ][discovery ] [elasticsearch-cluster-1-vm] elasticsearch-cluster/bbFuynfLSMuPNveLrhdf8A
[2016-09-09 17:34:33,942][WARN ][discovery.zen ] [elasticsearch-cluster-1-vm] failed to connect to master [{elasticsearch-cluster-3-vm}{7Q2pD7oFSKmsmhTlaVqiVQ}{10.140.0.9}{10.140.0.9:9300}], retrying...
ConnectTransportException[[elasticsearch-cluster-3-vm][10.140.0.9:9300] connect_timeout[30s]]; nested: ConnectException[Connection refused: /10.140.0.9:9300];
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:987)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:920)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:893)
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:260)
at org.elasticsearch.discovery.zen.ZenDiscovery.joinElectedMaster(ZenDiscovery.java:434)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:386)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$4800(ZenDiscovery.java:91)
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1237)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: /10.140.0.9:9300
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152)
at org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
... 3 more
[2016-09-09 17:34:36,978][WARN ][discovery.zen ] [elasticsearch-cluster-1-vm] failed to connect to master [{elasticsearch-cluster-3-vm}{7Q2pD7oFSKmsmhTlaVqiVQ}{10.140.0.9}{10.140.0.9:9300}], retrying...
ConnectTransportException[[elasticsearch-cluster-3-vm][10.140.0.9:9300] connect_timeout[30s]]; nested: ConnectException[Connection refused: /10.140.0.9:9300];
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:987)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:920)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:893)
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:260)
at org.elasticsearch.discovery.zen.ZenDiscovery.joinElectedMaster(ZenDiscovery.java:434)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:386)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$4800(ZenDiscovery.java:91)
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1237)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: /10.140.0.9:9300
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152)
at org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
... 3 more
[2016-09-09 17:34:40,779][INFO ][marvel.agent.exporter ] [elasticsearch-cluster-1-vm] skipping exporter [default_local] as it isn't ready yet
[2016-09-09 17:34:41,855][INFO ][cluster.service ] [elasticsearch-cluster-1-vm] detected_master {elasticsearch-cluster-3-vm}{LBnGmCW6RRikUqEZPzejjw}{10.140.0.9}{10.140.0.9:9300}, added {{elasticsearch-cluster-3-vm}{LBnGmCW6RRikUqEZPzejjw}{10.140.0.9}{10.140.0.9:9300},{elasticsearch-cluster-2-vm}{NgAty4rhS5a35208Ive-vw}{10.140.0.7}{10.140.0.7:9300},}, reason: zen-disco-receive(from master [{elasticsearch-cluster-3-vm}{LBnGmCW6RRikUqEZPzejjw}{10.140.0.9}{10.140.0.9:9300}])
[2016-09-09 17:34:41,879][INFO ][http ] [elasticsearch-cluster-1-vm] publish_address {10.140.0.8:9200}, bound_addresses {[::]:9200}
[2016-09-09 17:34:41,880][INFO ][node ] [elasticsearch-cluster-1-vm] started
[2016-09-09 17:34:41,888][WARN ][discovery.zen ] [elasticsearch-cluster-1-vm] master_switched_while_finalizing_join, current nodes: {{elasticsearch-cluster-3-vm}{LBnGmCW6RRikUqEZPzejjw}{10.140.0.9}{10.140.0.9:9300},{elasticsearch-cluster-2-vm}{NgAty4rhS5a35208Ive-vw}{10.140.0.7}{10.140.0.7:9300},{elasticsearch-cluster-1-vm}{bbFuynfLSMuPNveLrhdf8A}{10.140.0.8}{10.140.0.8:9300},}
[2016-09-09 17:34:41,984][INFO ][cluster.service ] [elasticsearch-cluster-1-vm] detected_master {elasticsearch-cluster-3-vm}{LBnGmCW6RRikUqEZPzejjw}{10.140.0.9}{10.140.0.9:9300}, reason: zen-disco-receive(from master [{elasticsearch-cluster-3-vm}{LBnGmCW6RRikUqEZPzejjw}{10.140.0.9}{10.140.0.9:9300}])
[2016-09-09 17:34:42,744][INFO ][license.plugin.core ] [elasticsearch-cluster-1-vm] license [f47ae398-a8f7-4197-bb34-6ceae7443c8f] - valid
[2016-09-09 17:34:42,747][ERROR][license.plugin.core ] [elasticsearch-cluster-1-vm]
#
# License will expire on [Sunday, October 09, 2016]. If you have a new license, please update it.
# Otherwise, please reach out to your support contact.
#
# Commercial plugins operate with reduced functionality on license expiration:
# - marvel
# - The agent will stop collecting cluster and indices metrics
# - The agent will stop automatically cleaning indices older than [marvel.history.duration]
[2016-09-09 18:36:24,359][INFO ][node ] [elasticsearch-cluster-1-vm] stopping ...
[2016-09-09 18:36:24,531][INFO ][node ] [elasticsearch-cluster-1-vm] stopped
[2016-09-09 18:36:24,531][INFO ][node ] [elasticsearch-cluster-1-vm] closing ...
[2016-09-09 18:36:24,543][INFO ][node ] [elasticsearch-cluster-1-vm] closed
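The log also keeps warning that max file descriptors [65535] is likely too low; if I remember the Debian packaging correctly, that limit can be raised in /etc/default/elasticsearch before restarting the service (the variable name below is from memory, so treat it as an assumption):
MAX_OPEN_FILES=65536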
Below is the log file elasticsearch.log:
[2016-07-27 06:55:15,454][INFO ][node ] [Cold War] version[2.3.4], pid[20240], build[e455fd0/2016-06-30T11:24:31Z]
[2016-07-27 06:55:15,460][INFO ][node ] [Cold War] initializing ...
[2016-07-27 06:55:16,772][INFO ][plugins ] [Cold War] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-07-27 06:55:16,822][INFO ][env ] [Cold War] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [7.2gb], net total_space [9.7gb], spins? [unknown], types [rootfs]
[2016-07-27 06:55:16,827][INFO ][env ] [Cold War] heap size [1015.6mb], compressed ordinary object pointers [true]
[2016-07-27 06:55:16,827][WARN ][env ] [Cold War] max file descriptors [65535] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-07-27 06:55:21,718][INFO ][node ] [Cold War] initialized
[2016-07-27 06:55:21,731][INFO ][node ] [Cold War] starting ...
[2016-07-27 06:55:21,947][INFO ][transport ] [Cold War] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2016-07-27 06:55:21,966][INFO ][discovery ] [Cold War] elasticsearch/NpR2fn7ZSaqwTdo32BTRrw
[2016-07-27 06:55:25,104][INFO ][cluster.service ] [Cold War] new_master {Cold War}{NpR2fn7ZSaqwTdo32BTRrw}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-07-27 06:55:25,156][INFO ][http ] [Cold War] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2016-07-27 06:55:25,159][INFO ][node ] [Cold War] started
[2016-07-27 06:55:25,197][INFO ][gateway ] [Cold War] recovered [0] indices into cluster_state
[2016-07-27 06:55:26,355][INFO ][node ] [Cold War] stopping ...
[2016-07-27 06:55:26,371][INFO ][node ] [Cold War] stopped
[2016-07-27 06:55:26,372][INFO ][node ] [Cold War] closing ...
[2016-07-27 06:55:26,395][INFO ][node ] [Cold War] closed
In the end I found the solution: the problem was that my elasticsearch user had no permissions on the folder /etc/elasticsearch. Only one doubt remains now: should I add the elasticsearch user to the www-data group and grant the permissions that way, or should I add my own user to the elasticsearch group? Both come down to the same thing, but I would like to know whether either of them could cause problems in the future.
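For what it is worth, the direction I am leaning is to leave www-data out of it entirely and simply give the elasticsearch group read access to the config directory, roughly like this (assuming the Debian package created a user and group both named elasticsearch):
sudo chgrp -R elasticsearch /etc/elasticsearch
sudo chmod -R g+rX /etc/elasticsearch
sudo systemctl restart elasticsearch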
Thanks in advance.
Answer 0 (score: 0)
Go to /etc/elasticsearch/elasticsearch.yml and add the lines there.
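Once the node restarts cleanly, a quick sanity check that inline Groovy scripting is really enabled is to run a trivial scripted search; this is only a sketch, assuming a 2.x node listening on localhost:9200 and at least one indexed document:
curl -XGET 'http://localhost:9200/_search?pretty' -d '
{
  "script_fields": {
    "two": { "script": "1 + 1" }
  }
}'
If scripting is still disabled, the response normally contains an error along the lines of "scripts of type [inline], operation [search] and lang [groovy] are disabled"; if it is enabled, each hit should come back with a field two equal to 2.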