Unable to save to HBase using Phoenix from Spark

Time: 2016-01-02 23:30:14

Tags: apache-spark phoenix

I am trying to save data from a Spark DataFrame to HBase using sample code. I don't know where I am going wrong, but the code is not working for me.

Below is the code I tried. I am able to get an RDD for the existing table, but I cannot save to it. I have tried a couple of approaches, which I have noted in the code.

Code:

import scala.reflect.runtime.universe

import org.apache.hadoop.fs.Path
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.SaveMode

case class Person(id: String, name: String)
object PheonixTest extends App {
  val conf = new SparkConf;
  conf.setMaster("local");
  conf.setAppName("test")
  val sc = new SparkContext(conf)
  val sqlContext = new SQLContext(sc);

  val hbaseConf = HBaseConfiguration.create()
  hbaseConf.set(TableInputFormat.INPUT_TABLE, "table1")
  hbaseConf.addResource(new Path("/Users/srini/softwares/hbase-1.1.2/conf/hbase-site.xml"));

  import org.apache.phoenix.spark._;
  val phDf = sqlContext.phoenixTableAsDataFrame("table1", Array("id", "name"), conf = hbaseConf)

  println("===========>>>>>>>>>>>>>>>>>> " + phDf.show());

  val rdd = sc.parallelize(Seq("sr,Srini","sr2,Srini2"))
  import sqlContext.implicits._;

  val df = rdd.map { x => {val array = x.split(","); Person(array(0), array(1))} }.toDF;

  //df.write.format("org.apache.phoenix.spark").mode("overwrite") .option("table", "table1").option("zkUrl", "localhost:2181").save()

  //df.rdd.saveToP
  df.save("org.apache.phoenix.spark", SaveMode.Overwrite, Map("table" -> "table1", "zkUrl" -> "localhost:2181"))

  sc.stop()

}

pom.xml:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.srini.plug</groupId>
    <artifactId>data-ingestion</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>

        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.4</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.10</artifactId>
            <version>1.5.2</version>
        </dependency>

        <dependency>
            <groupId>com.fasterxml.jackson.dataformat</groupId>
            <artifactId>jackson-dataformat-xml</artifactId>
            <version>2.4.4</version>
        </dependency>

        <dependency>
            <groupId>com.splunk</groupId>
            <artifactId>splunk</artifactId>
            <version>1.5.0.0</version>
        </dependency>


        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.10</artifactId>
            <version>1.5.2</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.10</artifactId>
            <version>1.5.2</version>
        </dependency>

        <dependency>
            <groupId>org.scalaj</groupId>
            <artifactId>scalaj-collection_2.10</artifactId>
            <version>1.5</version>
        </dependency>

        <dependency>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
            <version>12.0</version>
        </dependency>

        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>1.1.2</version>
        </dependency>

        <dependency>
            <groupId>org.apache.phoenix</groupId>
            <artifactId>phoenix-spark</artifactId>
            <version>4.6.0-HBase-1.1</version>
        </dependency>

        <dependency>
            <groupId>com.datastax.spark</groupId>
            <artifactId>spark-cassandra-connector_2.10</artifactId>
            <version>1.4.1</version>
        </dependency>


    </dependencies>

    <repositories>
        <repository>
            <id>ext-release-local</id>
            <url>http://splunk.artifactoryonline.com/splunk/ext-releases-local</url>
        </repository>
    </repositories>

    <build>
        <plugins>
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>

                <executions>
                    <execution>
                        <id>compile</id>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                        <phase>compile</phase>
                    </execution>

                    <execution>
                        <id>test-compile</id>
                        <goals>
                            <goal>testCompile</goal>
                        </goals>
                        <phase>test-compile</phase>
                    </execution>

                    <execution>
                        <phase>process-resources</phase>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>

            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.5</source>
                    <target>1.5</target>
                </configuration>
            </plugin>

            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>2.5.3</version>
                <executions>
                    <execution>
                        <id>create-archive</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>

                        <configuration>
                            <descriptorRefs>
                                <descriptorRef>
                                    jar-with-dependencies
                                </descriptorRef>
                            </descriptorRefs>
                            <archive>
                                <manifest>
                                    <mainClass>com.srini.ingest.SplunkSearch</mainClass>
                                </manifest>
                            </archive>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

Error:

16/01/02 18:26:29 INFO ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x152031ff8da001c, negotiated timeout = 90000
16/01/02 18:27:18 INFO RpcRetryingCaller: Call exception, tries=10, retries=35, started=48344 ms ago, cancelled=false, msg=
16/01/02 18:27:38 INFO RpcRetryingCaller: Call exception, tries=11, retries=35, started=68454 ms ago, cancelled=false, msg=
16/01/02 18:27:58 INFO RpcRetryingCaller: Call exception, tries=12, retries=35, started=88633 ms ago, cancelled=false, msg=
16/01/02 18:28:19 INFO RpcRetryingCaller: Call exception, tries=13, retries=35, started=108817 ms ago, cancelled=false, msg=

1 Answer:

Answer 0 (score: -1)

I noticed two issues:

  1. ZK URL. If you are sure ZooKeeper is running locally, update your hosts file with an entry of the form "ipaddress hostname" and pass that hostname to HBaseConfiguration.

  2. Phoenix upper-cases your table and column names by default, so change the code above to refer to the table by its upper-cased name (presumably "TABLE1" rather than "table1"). See the sketch after this list.
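
A minimal sketch of how those two suggestions might look together. This is not taken from the original answer: the address 192.168.1.10 and hostname hbase-host are placeholders, df is the DataFrame built in the question, and it assumes the Phoenix table was created with unquoted identifiers (so Phoenix stores the name as TABLE1).

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.spark.sql.SaveMode

// Hypothetical /etc/hosts entry (placeholder values):
//   192.168.1.10   hbase-host

// Read path (phoenixTableAsDataFrame): point the HBase client at a
// resolvable hostname instead of relying on localhost.
val hbaseConf = HBaseConfiguration.create()
hbaseConf.set("hbase.zookeeper.quorum", "hbase-host")
hbaseConf.set("hbase.zookeeper.property.clientPort", "2181")

// Write path: Phoenix stores unquoted identifiers in upper case,
// so reference the table as "TABLE1" and use the same hostname in zkUrl.
df.write
  .format("org.apache.phoenix.spark")
  .mode(SaveMode.Overwrite)
  .option("table", "TABLE1")
  .option("zkUrl", "hbase-host:2181")
  .save()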