ClassNotFoundException: Failed to find data source: bigquery

Date: 2019-11-03 06:51:02

Tags: java maven apache-spark google-bigquery google-cloud-dataproc

I am trying to load data from Google BigQuery into Spark running on Google Dataproc (I am using Java). I tried to follow the instructions here: https://cloud.google.com/dataproc/docs/tutorials/bigquery-connector-spark-example

I get the error: "ClassNotFoundException: Failed to find data source: bigquery."

My pom.xml looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.virtualpairprogrammers</groupId>
    <artifactId>learningSpark</artifactId>
    <version>0.0.3-SNAPSHOT</version>
    <packaging>jar</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>2.3.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>2.3.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.2.0</version>
        </dependency>
        <dependency>
            <groupId>com.google.cloud.spark</groupId>
            <artifactId>spark-bigquery_2.11</artifactId>
            <version>0.9.1-beta</version>
            <classifier>shaded</classifier>
        </dependency>

    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.5.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <plugin>
                <artifactId>maven-jar-plugin</artifactId>
                <version>3.0.2</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                    <archive>
                        <manifest>
                            <mainClass>com.virtualpairprogrammers.Main</mainClass>
                        </manifest>
                    </archive>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

After I added the dependency to my pom.xml file, a lot of content was downloaded to build the .jar, so I assume I have the right dependencies? However, Eclipse also warns me that "The import com.google.cloud.spark.bigquery is never used".

This is the part of my code where I get the error:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import com.google.cloud.spark.bigquery.*;

public class Main {

    public static void main(String[] args) {

        SparkSession spark = SparkSession.builder()
                .appName("testingSql")
                .getOrCreate();

        Dataset<Row> data = spark.read().format("bigquery")
                .option("table", "project.dataset.tablename")
                .load()
                .cache();
    }
}

2 Answers:

Answer 0 (score: 1)

I think you have only added the BQ connector as a compile-time dependency, but it is missing at runtime. You need to either build an uber jar that includes the connector in your job jar (the documentation needs to be updated), or include it when you submit the job:

    gcloud dataproc jobs submit spark --properties spark.jars.packages=com.google.cloud.spark:spark-bigquery_2.11:0.9.1-beta
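As a sketch of the uber-jar approach, the connector can be bundled with the maven-shade-plugin. The plugin version and configuration below are illustrative assumptions, not from the original post; the ServicesResourceTransformer matters because Spark discovers data sources through META-INF/services files, which plain shading would otherwise overwrite.

```xml
<!-- Hypothetical maven-shade-plugin configuration: add to the <plugins>
     section of pom.xml so `mvn package` produces a jar that bundles the
     spark-bigquery connector. -->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>3.2.1</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <!-- Merge META-INF/services entries from all dependency
                         jars so Spark's DataSourceRegister lookup can still
                         find the "bigquery" short name at runtime. -->
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>
```

The resulting shaded jar is then submitted as the job jar, with no extra --properties flag needed.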

Answer 1 (score: -1)

I had the same problem, and updating the format from "bigquery" to "com.google.cloud.spark.bigquery" worked for me.