Flink 1.4 throws an error

Time: 2018-02-13 12:41:38

Tags: apache-flink flink-streaming

I am just trying to migrate from Flink 1.3 to 1.4 and I get this exception on a Linux machine (it does not reproduce on Windows).

I have also imported this package:

// https://mvnrepository.com/artifact/org.apache.flink/flink-shaded-hadoop2
compile group: 'org.apache.flink', name: 'flink-shaded-hadoop2', version: '1.4.0'

Any help?

In the Flink console:

TriggerWindow(TumblingProcessingTimeWindows(10000), ReducingStateDescriptor{serializer=org.apache.flink.api.java.typeutils.runtime.TupleSerializer@cb6c5dba, reduceFunction=com.clicktale.reducers.MetricsReducer@4e406694}, ProcessingTimeTrigger(), WindowedStream.reduce(WindowedStream.java:241)) -> Sink: Unnamed (1/1)



    java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.fs.LocalFileSystem not a subtype
    at java.util.ServiceLoader.fail(ServiceLoader.java:239)
    at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
    at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
    at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
    at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
    at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2364)
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2375)
    at org.apache.flink.runtime.fs.hdfs.HadoopFsFactory.create(HadoopFsFactory.java:99)
    at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:401)
    at org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.createHadoopFileSystem(BucketingSink.java:1154)
    at org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.initFileSystem(BucketingSink.java:411)
    at org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.initializeState(BucketingSink.java:355)
    at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
    at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
    at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:259)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeOperators(StreamTask.java:694)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:682)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:253)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:718)
    at java.lang.Thread.run(Thread.java:748)

2 Answers:

Answer 0 (score: 1)

I ran into a similar (not this exact one, but dependency-related) problem when migrating from 1.3 to 1.4.

In my case, I had to regenerate a new POM file with the Maven archetype and then add the required dependencies one by one. See the Java Quickstart and the Scala Quickstart.
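For reference, the Java quickstart generates the project with an invocation along these lines (coordinates as documented for the 1.4 quickstart; treat this as a sketch and follow the linked guide for the exact command):

    mvn archetype:generate \
        -DarchetypeGroupId=org.apache.flink \
        -DarchetypeArtifactId=flink-quickstart-java \
        -DarchetypeVersion=1.4.0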

The reason is a major rework of the dependency structure. See the Release notes for details.
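To make "adding the required dependencies one by one" concrete, a 1.4 job like the one in the question would typically end up with per-module entries along these lines (the Scala 2.11 suffix and the exact module list are assumptions for this sketch; the BucketingSink in the stack trace above ships in the filesystem connector module):

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_2.11</artifactId>
        <version>1.4.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients_2.11</artifactId>
        <version>1.4.0</version>
    </dependency>
    <!-- provides BucketingSink, seen in the stack trace above -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-filesystem_2.11</artifactId>
        <version>1.4.0</version>
    </dependency>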

Answer 1 (score: 0)

Note that Flink 1.4 will load any Hadoop jars found via the "hadoop classpath" shell command, and these will come first on the classpath. So if you have an incompatible Hadoop version installed that the "hadoop" command points to, you can run into this kind of problem.
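To check what that means on a given machine, the standard Hadoop CLI shows both the jars that would be picked up and the installation the "hadoop" command resolves to:

    # Hadoop jars that would end up at the front of the classpath
    hadoop classpath

    # which Hadoop installation and version the "hadoop" command points to
    which hadoop
    hadoop version

If that version is incompatible with the Hadoop classes bundled in the job jar, such a clash is a plausible source of the "LocalFileSystem not a subtype" error above.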