Apache Beam Python streaming: writing hourly Avro files

Date: 2019-02-25 13:42:47

Tags: python google-cloud-storage avro apache-beam

Reading messages from Pub/Sub and saving them to hourly (or other interval) files on GCS does not work. The job only writes the files when I shut it down. Can anyone point me in the right direction?

import uuid

import apache_beam as beam
from apache_beam.io import ReadFromPubSub, WriteToAvro
from apache_beam.transforms import window
from apache_beam.transforms.trigger import AccumulationMode, AfterWatermark

topic = 'test.txt'
jobname = 'streaming-' + topic.replace('.', '-')

input_topic = 'projects/PROJECT/topics/' + topic

# Utils and DecodeAvro are helpers defined elsewhere in this job;
# pipelineoptions is likewise built elsewhere (see the note below).
u = Utils()
parsed_schema = u.get_parsed_avro_from_schema_service(
    schema_name=topic,
    schema_repo_url='localhost'
)

p = beam.Pipeline(options=pipelineoptions)

# Read raw bytes from the Pub/Sub topic.
messages = p | 'Read from topic: ' + topic >> ReadFromPubSub(topic=input_topic).with_input_types(bytes)

# Decode with the Avro schema, then window into fixed 60-second windows
# that fire once the watermark passes the end of the window.
windowed_lines = (
        messages
        | 'decode' >> beam.ParDo(DecodeAvro(), parsed_schema)
        | beam.WindowInto(
                window.FixedWindows(60),
                trigger=AfterWatermark(),
                accumulation_mode=AccumulationMode.DISCARDING
            )
        )

# Write each window's elements as sharded Avro files on GCS.
output = windowed_lines | 'write result' >> WriteToAvro(
    file_path_prefix='gs://BUCKET/streaming/tests/',
    shard_name_template=topic.split('.')[0] + '_' + str(uuid.uuid4()) + '_SSSS-of-NNNN',
    schema=parsed_schema,
    file_name_suffix='.avro',
    num_shards=2
)

result = p.run()
result.wait_until_finish()
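
One thing worth checking with this symptom: a Pub/Sub read requires the pipeline to run in streaming mode, and without it a file-based sink may only emit output when the pipeline stops. A minimal sketch of how `pipelineoptions` might be built; the runner, project, and bucket names are placeholders, not from the original post:

from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions

pipelineoptions = PipelineOptions(
    runner='DataflowRunner',          # placeholder: any streaming-capable runner
    project='PROJECT',                # placeholder GCP project
    temp_location='gs://BUCKET/tmp',  # placeholder staging bucket
    job_name=jobname
)
# Enable streaming mode so the unbounded Pub/Sub read is accepted.
pipelineoptions.view_as(StandardOptions).streaming = True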

1 Answer:

Answer 0 (score: 0)

After more research, I found that the Python SDK does not yet support writing from an unbounded source to a bounded sink, so I had to switch to the Java SDK for this.
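
Note for later readers: newer Beam Python releases added `apache_beam.io.fileio.WriteToFiles`, which can write windowed data from an unbounded source. A minimal sketch of the idea, assuming fastavro and a recent SDK; the `AvroFileSink` class below is illustrative, not an API the answer refers to:

import fastavro
from apache_beam.io import fileio


class AvroFileSink(fileio.FileSink):
    # Illustrative sink: buffers one window's records, then writes
    # them out as a single Avro container file.

    def __init__(self, schema):
        self._schema = schema

    def open(self, fh):
        self._fh = fh  # writable file handle supplied by WriteToFiles
        self._records = []

    def write(self, record):
        self._records.append(record)

    def flush(self):
        fastavro.writer(self._fh, self._schema, self._records)


output = windowed_lines | 'write result' >> fileio.WriteToFiles(
    path='gs://BUCKET/streaming/tests/',
    sink=AvroFileSink(parsed_schema),
    file_naming=fileio.default_file_naming(prefix=topic.split('.')[0], suffix='.avro'),
    shards=2
)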