Flink job: UnfulfillableSlotRequestException: Could not fulfill slot request. Requested resource profile (ResourceProfile{UNKNOWN}) is unfulfillable

Time: 2020-03-10 14:40:56

Tags: apache-flink flink-streaming

Submitting the Flink job:

$ ./bin/flink run -m 10.0.2.4:6123  /streaming/mvn-flinkstreaming-scala/mvn-flinkstreaming-scala-1.0.jar
Stream processing!!!!!!!!!!!!!!!!!
org.apache.flink.streaming.api.datastream.DataStreamSink@40ef3420

------------------------------------------------------------
 The program finished with the following exception:

org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: No pooled slot available and request to ResourceManager for new slot failed
    at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
    at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
    at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:607)
    at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
    ... 31 more
Caused by: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: No pooled slot available and request to ResourceManager for new slot failed
    ... 29 more
Caused by: java.util.concurrent.CompletionException: org.apache.flink.runtime.resourcemanager.exceptions.ResourceManagerException: Could not fulfill slot request 

    org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: java.util.concurrent.ExecutionException: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.

But when I check the job log in the UI, I get a different error:

Caused by: java.util.concurrent.CompletionException: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: No pooled slot available and request to ResourceManager for new slot failed
    at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
    at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
    at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:607)
    at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
    ... 31 more
Caused by: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: No pooled slot available and request to ResourceManager for new slot failed
    ... 29 more
Caused by: java.util.concurrent.CompletionException: org.apache.flink.runtime.resourcemanager.exceptions.ResourceManagerException: Could not fulfill slot request ea

What should I check? A) Is `-m ip_address:6123` the correct option, or should 8081 be the port? My configuration parameters are as follows:

# Note this accounts for all memory usage within the TaskManager process, including JVM metaspace and other overhead.

taskmanager.memory.process.size: 1568m

# To exclude JVM metaspace and overhead, please, use total Flink memory size instead of 'taskmanager.memory.process.size'.
# It is not recommended to set both 'taskmanager.memory.process.size' and Flink memory.
#
# taskmanager.memory.flink.size: 1280m

# The number of task slots that each TaskManager offers. Each slot runs one parallel pipeline.

taskmanager.numberOfTaskSlots: 2

# The parallelism used for programs that did not specify and other parallelism.

parallelism.default: 2

Starting the cluster:

$  bin/start-cluster.sh
Starting cluster.
Starting standalonesession daemon on host centos1.
Starting taskexecutor daemon on host centos2.
Starting taskexecutor daemon on host centos3.

I can grep the process on the master node:

]$ ps -ef | grep  flink
root      12300      1 10 07:22 pts/0    00:00:05 java -Xms16384m -Xmx16384m -Dlog.file=/storage/flink-1.10.0/log/

But no TaskManager-related process can be found on the worker nodes:

centos2 ~]$ psg flink

Is this the correct state?

2 Answers:

Answer 0 (score: 1)

I have run into this before, and it is most likely a sign that the Flink cluster is running out of memory. It also makes sense that the different error messages are related to each other.

Check `jobmanager.heap.size` and `taskmanager.heap.size` in your configuration and increase them to a fairly large value; you should no longer see this error. From there you can fine-tune the actual memory settings.
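As a sketch, the relevant lines in `conf/flink-conf.yaml` might look like the following. The values are illustrative, not tuned; note also that in Flink 1.10+ the legacy `taskmanager.heap.size` key has been superseded by `taskmanager.memory.process.size`, which the question's configuration already uses:

```yaml
# Illustrative values only -- tune for your workload.

# JobManager heap (legacy key, still honored in Flink 1.10):
jobmanager.heap.size: 2048m

# Total TaskManager process memory; in Flink 1.10+ this key replaces
# the older 'taskmanager.heap.size':
taskmanager.memory.process.size: 4096m
```

After changing the configuration, restart the cluster (`bin/stop-cluster.sh && bin/start-cluster.sh`) so the new memory settings take effect.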

Answer 1 (score: 1)

I ran into the same problem with Flink 1.10.0, so make sure you have enough memory for your data load.

The error I got:

java.lang.OutOfMemoryError: Metaspace
  • jobmanager.heap.size: 1024m (default)
  • taskmanager.memory.flink.size: 1280m (default)
  • taskmanager.memory.jvm-metaspace.size: 256m (default)

So I increased `taskmanager.memory.jvm-metaspace.size` according to the data load, and that solved my problem.
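For example, doubling the metaspace over its 256m default in `conf/flink-conf.yaml` (512m is an illustrative starting point; raise it further if `OutOfMemoryError: Metaspace` persists):

```yaml
# Raise the TaskManager JVM metaspace above the 256m default.
taskmanager.memory.jvm-metaspace.size: 512m
```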

For more details, click here.