Flink not sinking data to a Kafka topic

Date: 2018-08-30 14:45:26

Tags: apache-flink

I wrote a Flink job that reads a CSV file from a folder and publishes the data to a Kafka topic.

Here is my Flink job:

import static java.lang.Integer.parseInt;

import java.util.Properties;

import org.apache.flink.api.common.functions.FilterFunction;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.java.io.RowCsvInputFormat;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.FileProcessingMode;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchemaWrapper;
import org.apache.flink.types.Row;

final StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();

org.apache.flink.core.fs.Path filePath =
        new org.apache.flink.core.fs.Path(feedFileFolder);

// FetchTypeInformation is a helper class from my project that supplies
// the CSV column types.
RowCsvInputFormat format = new RowCsvInputFormat(filePath,
        FetchTypeInformation.getTypeInformation());

// Watch the folder continuously, re-scanning every folderLookupTime ms.
DataStream<Row> inputStream = env.readFile(format, feedFileFolder,
        FileProcessingMode.PROCESS_CONTINUOUSLY,
        parseInt(folderLookupTime));

DataStream<String> speStream = inputStream
        .filter(new FilterFunction<Row>() {
            @Override
            public boolean filter(Row row) {
                ...............
            }
        })
        .map(new MapFunction<Row, String>() {
            @Override
            public String map(Row row) {
                ...............
                return resultingJsonString;
            }
        });

// Producer is a helper class from my project that loads the Kafka
// producer properties from a file.
Properties props = Producer.getProducerConfig(propertiesFilePath);

speStream.addSink(new FlinkKafkaProducer011<String>(kafkaTopicName,
        new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()),
        props,
        FlinkKafkaProducer011.Semantic.EXACTLY_ONCE));
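
A note on the exactly-once sink: Semantic.EXACTLY_ONCE writes through Kafka transactions, and the broker rejects producers whose transaction.timeout.ms exceeds its transaction.max.timeout.ms (15 minutes by default), while FlinkKafkaProducer011 defaults to one hour. Below is a minimal sketch of capping the timeout in the producer properties, assuming Producer.getProducerConfig returns a plain java.util.Properties; the exact value is illustrative, not taken from the post.

Properties props = Producer.getProducerConfig(propertiesFilePath);
// Keep the producer's transaction timeout at or below the broker's
// transaction.max.timeout.ms (15 minutes by default). The value here is
// an assumption, not something the original post confirms.
props.setProperty("transaction.timeout.ms", "900000"); // 15 minutes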

Problem

When I run the above Flink job from Eclipse, it works perfectly. If I drop a file containing 10 records into the folder, the Flink UI shows the job sinking 10 records into the Kafka topic (the Source emits one file split, which the Split Reader expands into the 10 records that reach the sink):

Name           Bytes received   Records received    Records sent
Source:        0 B              0                   1       
Split Reader   1.12 KB          1                   10      
Sink: Unnamed  1.79 KB          10                  0   

But when I run the same job as a jar on the Flink server, it does not sink any data to the Kafka topic. The Flink UI looks like this:

Name           Bytes received   Records received    Records sent
Source:        0 B              0                   1       
Split Reader   616 B            1                   0       
Sink: Unnamed  450 B            0                   0
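
One difference worth checking between the two runs (an assumption on my part, not something the post confirms): with Semantic.EXACTLY_ONCE, FlinkKafkaProducer011 only commits its Kafka transactions when a checkpoint completes, so if checkpointing is not enabled on the cluster, consumers reading with isolation.level=read_committed will never see the records. A minimal sketch, with an arbitrary interval:

// Sketch only: EXACTLY_ONCE transactions are committed on checkpoint
// completion, so checkpointing must be enabled for the written records
// to become visible to read_committed consumers.
final StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(60000); // checkpoint every 60 seconds (arbitrary)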

0 Answers