Cannot find tables created by an Oozie Hive action from the Hive client, but they exist in HDFS

Time: 2013-09-10 02:28:18

Tags: hadoop hive oozie

I am trying to run a Hive script through an Oozie Hive action. My script.q just creates a Hive table named 'test', and the Oozie job runs successfully: I can find the table created by the Oozie job under the HDFS path /user/hive/warehouse. However, I cannot see the 'test' table from the Hive client with the command "show tables".

I think there is something wrong with my metastore configuration, but I cannot figure it out. Can anyone help?

oozie admin -oozie http://localhost:11000/oozie -status

System mode: NORMAL

oozie job -oozie http://localhost:11000/oozie -config C:\Hadoop\oozie-3.2.0-incubating\oozie-win-distro\examples\apps\hive\job.properties -run

Job ID: 0000001-130910094106919-oozie-hado-W


Here is my oozie-site.xml:



<!--
    Refer to the oozie-default.xml file for the complete list of
    Oozie configuration properties and their default values.
-->

<property>
    <name>oozie.service.ActionService.executor.ext.classes</name>
    <value>
        org.apache.oozie.action.email.EmailActionExecutor,
        org.apache.oozie.action.hadoop.HiveActionExecutor,
        org.apache.oozie.action.hadoop.ShellActionExecutor,
        org.apache.oozie.action.hadoop.SqoopActionExecutor
    </value>
</property>

<property>
    <name>oozie.service.SchemaService.wf.ext.schemas</name>
    <value>shell-action-0.1.xsd,email-action-0.1.xsd,hive-action-0.2.xsd,sqoop-action-0.2.xsd,ssh-action-0.1.xsd</value>
</property>

<property>
    <name>oozie.system.id</name>
    <value>oozie-${user.name}</value>
    <description>
        The Oozie system ID.
    </description>
</property>

<property>
    <name>oozie.systemmode</name>
    <value>NORMAL</value>
    <description>
        System mode for  Oozie at startup.
    </description>
</property>

<property>
    <name>oozie.service.AuthorizationService.security.enabled</name>
    <value>false</value>
    <description>
        Specifies whether security (user name/admin role) is enabled or not.
        If disabled any user can manage Oozie system and manage any job.
    </description>
</property>

<property>
    <name>oozie.service.PurgeService.older.than</name>
    <value>30</value>
    <description>
        Jobs older than this value, in days, will be purged by the PurgeService.
    </description>
</property>

<property>
    <name>oozie.service.PurgeService.purge.interval</name>
    <value>3600</value>
    <description>
        Interval at which the purge service will run, in seconds.
    </description>
</property>

<property>
    <name>oozie.service.CallableQueueService.queue.size</name>
    <value>10000</value>
    <description>Max callable queue size</description>
</property>

<property>
    <name>oozie.service.CallableQueueService.threads</name>
    <value>10</value>
    <description>Number of threads used for executing callables</description>
</property>

<property>
    <name>oozie.service.CallableQueueService.callable.concurrency</name>
    <value>3</value>
    <description>
        Maximum concurrency for a given callable type.
        Each command is a callable type (submit, start, run, signal, job, jobs, suspend,resume, etc).
        Each action type is a callable type (Map-Reduce, Pig, SSH, FS, sub-workflow, etc).
        All commands that use action executors (action-start, action-end, action-kill and action-check) use
        the action type as the callable type.
    </description>
</property>

<property>
    <name>oozie.service.coord.normal.default.timeout</name>
    <value>120</value>
    <description>Default timeout for a coordinator action input check (in minutes) for normal job.
        -1 means infinite timeout</description>
</property>

<property>
    <name>oozie.db.schema.name</name>
    <value>oozie</value>
    <description>
        Oozie DataBase Name
    </description>
</property>

<property>
    <name>oozie.service.JPAService.create.db.schema</name>
    <value>true</value>
    <description>
        Creates Oozie DB.

        If set to true, it creates the DB schema if it does not exist. If the DB schema exists is a NOP.
        If set to false, it does not create the DB schema. If the DB schema does not exist it fails start up.
    </description>
</property>

<property>
    <name>oozie.service.JPAService.jdbc.driver</name>
    <value>org.apache.derby.jdbc.EmbeddedDriver</value>
    <description>
        JDBC driver class.
    </description>
</property>

<property>
    <name>oozie.service.JPAService.jdbc.url</name>
    <value>jdbc:derby:${oozie.data.dir}/${oozie.db.schema.name}-db;create=true</value>
    <description>
        JDBC URL.
    </description>
</property>

<property>
    <name>oozie.service.JPAService.jdbc.username</name>
    <value>sa</value>
    <description>
        DB user name.
    </description>
</property>

<property>
    <name>oozie.service.JPAService.jdbc.password</name>
    <value>pwd</value>
    <description>
        DB user password.

        IMPORTANT: if the password is empty leave a 1 space string, the service trims the value,
                   if empty Configuration assumes it is NULL.
    </description>
</property>

<property>
    <name>oozie.service.JPAService.pool.max.active.conn</name>
    <value>10</value>
    <description>
         Max number of connections.
    </description>
</property>

<property>
    <name>oozie.service.HadoopAccessorService.kerberos.enabled</name>
    <value>false</value>
    <description>
        Indicates if Oozie is configured to use Kerberos.
    </description>
</property>

<property>
    <name>local.realm</name>
    <value>LOCALHOST</value>
    <description>
        Kerberos Realm used by Oozie and Hadoop. Using 'local.realm' to be aligned with Hadoop configuration
    </description>
</property>

<property>
    <name>oozie.service.HadoopAccessorService.keytab.file</name>
    <value>${user.home}/oozie.keytab</value>
    <description>
        Location of the Oozie user keytab file.
    </description>
</property>

<property>
    <name>oozie.service.HadoopAccessorService.kerberos.principal</name>
    <value>${user.name}/localhost@${local.realm}</value>
    <description>
        Kerberos principal for Oozie service.
    </description>
</property>

<property>
    <name>oozie.service.HadoopAccessorService.jobTracker.whitelist</name>
    <value> </value>
    <description>
        Whitelisted job tracker for Oozie service.
    </description>
</property>

<property>
    <name>oozie.service.HadoopAccessorService.nameNode.whitelist</name>
    <value> </value>
    <description>
        Whitelisted NameNode for Oozie service.
    </description>
</property>

<property>
    <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
    <value>*=hadoop-conf</value>
    <description>
        Comma separated AUTHORITY=HADOOP_CONF_DIR, where AUTHORITY is the HOST:PORT of
        the Hadoop service (JobTracker, HDFS). The wildcard '*' configuration is
        used when there is no exact match for an authority. The HADOOP_CONF_DIR contains
        the relevant Hadoop *-site.xml files. If the path is relative, it is looked up within
        the Oozie configuration directory; the path can also be absolute (i.e. to point
        to Hadoop client conf/ directories in the local filesystem).
    </description>
</property>

<property>
    <name>oozie.service.WorkflowAppService.system.libpath</name>
    <value>/user/${user.name}/share/lib</value>
    <description>
        System library path to use for workflow applications.
        This path is added to workflow application if their job properties sets
        the property 'oozie.use.system.libpath' to true.
    </description>
</property>

<property>
    <name>use.system.libpath.for.mapreduce.and.pig.jobs</name>
    <value>false</value>
    <description>
        If set to true, submissions of MapReduce and Pig jobs will include
        automatically the system library path, thus not requiring users to
        specify where the Pig JAR files are. Instead, the ones from the system
        library path are used.
    </description>
</property>

<property>
    <name>oozie.authentication.type</name>
    <value>simple</value>
    <description>
        Defines authentication used for Oozie HTTP endpoint.
        Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME#
    </description>
</property>

<property>
    <name>oozie.authentication.token.validity</name>
    <value>36000</value>
    <description>
        Indicates how long (in seconds) an authentication token is valid before it has
        to be renewed.
    </description>
</property>

<property>
    <name>oozie.authentication.signature.secret</name>
    <value>oozie</value>
    <description>
        The signature secret for signing the authentication tokens.
        If not set a random secret is generated at startup time.
        In order for authentication to work correctly across multiple hosts
        the secret must be the same across all the hosts.
    </description>
</property>

<property>
  <name>oozie.authentication.cookie.domain</name>
  <value></value>
  <description>
    The domain to use for the HTTP cookie that stores the authentication token.
    In order for authentication to work correctly across multiple hosts
    the domain must be correctly set.
  </description>
</property>

<property>
    <name>oozie.authentication.simple.anonymous.allowed</name>
    <value>true</value>
    <description>
        Indicates if anonymous requests are allowed.
        This setting is meaningful only when using 'simple' authentication.
    </description>
</property>

<property>
    <name>oozie.authentication.kerberos.principal</name>
    <value>HTTP/localhost@${local.realm}</value>
    <description>
        Indicates the Kerberos principal to be used for HTTP endpoint.
        The principal MUST start with 'HTTP/' as per Kerberos HTTP SPNEGO specification.
    </description>
</property>

<property>
    <name>oozie.authentication.kerberos.keytab</name>
    <value>${oozie.service.HadoopAccessorService.keytab.file}</value>
    <description>
        Location of the keytab file with the credentials for the principal.
        Referring to the same keytab file Oozie uses for its Kerberos credentials for Hadoop.
    </description>
</property>

<property>
    <name>oozie.authentication.kerberos.name.rules</name>
    <value>DEFAULT</value>
    <description>
        The kerberos name rules are used to resolve kerberos principal names; refer to Hadoop's
        KerberosName for more details.
    </description>
</property>

<!-- Proxyuser Configuration -->

<!--

<property>
    <name>oozie.service.ProxyUserService.proxyuser.#USER#.hosts</name>
    <value>*</value>
    <description>
        List of hosts the '#USER#' user is allowed to perform 'doAs'
        operations.

        The '#USER#' must be replaced with the username of the user who is
        allowed to perform 'doAs' operations.

        The value can be the '*' wildcard or a list of hostnames.

        For multiple users copy this property and replace the user name
        in the property name.
    </description>
</property>

<property>
    <name>oozie.service.ProxyUserService.proxyuser.#USER#.groups</name>
    <value>*</value>
    <description>
        List of groups the '#USER#' user is allowed to impersonate users
        from to perform 'doAs' operations.

        The '#USER#' must be replaced with the username of the user who is
        allowed to perform 'doAs' operations.

        The value can be the '*' wildcard or a list of groups.

        For multiple users copy this property and replace the user name
        in the property name.
    </description>
</property>

-->


Here is my hive-site.xml:


[hive-site.xml]

Here is my script.q:


create table test(id int);

1 Answer:

Answer 0 (score: 0)

In your Oozie Hive action, you need to tell Oozie where your Hive metastore is.

That means you need to pass your hive-site.xml to the action as a job configuration file, as shown in the sketch below.
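
For example, the Hive action in workflow.xml can pick up hive-site.xml through a <job-xml> element. This is only a minimal sketch for the hive-action-0.2 schema (which your oozie-site.xml already registers): the node names and the ${jobTracker}/${nameNode} parameters are placeholders, and hive-site.xml is assumed to be deployed next to the workflow application in HDFS.

<action name="hive-node">
    <hive xmlns="uri:oozie:hive-action:0.2">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <!-- hive-site.xml must point at the shared (external) metastore -->
        <job-xml>hive-site.xml</job-xml>
        <script>script.q</script>
    </hive>
    <ok to="end"/>
    <error to="fail"/>
</action>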

In addition, you need to configure an external metastore for Hive for this to work; the default embedded Derby configuration will not work for you.

So, in short:

1. Set up Hive with an external metastore database, say MySQL (see the sketch below).
2. Pass that hive-site.xml to the Oozie Hive action.
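
For the first step, a minimal hive-site.xml sketch for a MySQL-backed metastore might look like the following. The JDBC URL, database name, user and password here are placeholders for your own environment; both the Hive client and the Oozie action should read this same file.

<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive_metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hivepassword</value>
</property>

Once both sides use the same external metastore, the table created by script.q through Oozie should show up in "show tables" from the Hive client.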

Details here:

http://oozie.apache.org/docs/3.3.1/DG_HiveActionExtension.html

Thanks