How can I read all parquet files stored in HDFS using the Apache Beam 2.13.0 Python SDK with the DirectRunner, given the following directory structure:
data/
├── a
│   ├── file_1.parquet
│   └── file_2.parquet
└── b
    ├── file_3.parquet
    └── file_4.parquet
I tried beam.io.ReadFromParquet with the pattern hdfs://data/*/*:
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

HDFS_HOSTNAME = 'my-hadoop-master-node.com'
HDFS_PORT = 50070
HDFS_USER = "my-user-name"

# Point Beam's HadoopFileSystem at the cluster via pipeline options.
pipeline_options = PipelineOptions(
    hdfs_host=HDFS_HOSTNAME, hdfs_port=HDFS_PORT, hdfs_user=HDFS_USER)

# Glob intended to match every parquet file one level below data/.
input_file_hdfs_parquet = "hdfs://data/*/*"

p = beam.Pipeline(options=pipeline_options)
lines = p | 'ReadMyFile' >> beam.io.ReadFromParquet(input_file_hdfs_parquet)
_ = p.run()
I get the following error:
IOErrorTraceback (most recent call last)
...
IOError: No files found based on the file pattern hdfs://data/*/*
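To narrow this down, one thing worth checking is what Beam's filesystem layer actually matches for that pattern, independent of ReadFromParquet. This is a quick probe, not a confirmed approach; it assumes FileSystems.set_options is the right hook for handing the HDFS options to the filesystem registry outside a pipeline:

from apache_beam.io.filesystems import FileSystems

# Assumption: set_options makes pipeline_options (with the hdfs_* settings)
# visible to the filesystem registry outside of a running pipeline.
FileSystems.set_options(pipeline_options)

# Ask the filesystem what the glob expands to; an empty metadata_list here
# would mean the pattern itself is not matched by the HadoopFileSystem.
match_result = FileSystems.match(["hdfs://data/*/*"])[0]
print([m.path for m in match_result.metadata_list])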
With input_file_hdfs_parquet = "hdfs://data/a/*", I can read all the files in the a directory.
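Since single-level globs do work, a possible workaround (a sketch, not a confirmed fix) is to expand the first wildcard level by hand: issue one ReadFromParquet per subdirectory and merge the results with beam.Flatten. The subdirectory list here is hard-coded from the tree above; in practice it could be discovered by matching hdfs://data/* first:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

HDFS_HOSTNAME = 'my-hadoop-master-node.com'
HDFS_PORT = 50070
HDFS_USER = "my-user-name"
pipeline_options = PipelineOptions(
    hdfs_host=HDFS_HOSTNAME, hdfs_port=HDFS_PORT, hdfs_user=HDFS_USER)

# One single-level glob per subdirectory; these are known to work.
patterns = ["hdfs://data/a/*", "hdfs://data/b/*"]

p = beam.Pipeline(options=pipeline_options)

# Read each subdirectory into its own PCollection, then merge them.
reads = [
    p | 'ReadDir%d' % i >> beam.io.ReadFromParquet(pattern)
    for i, pattern in enumerate(patterns)
]
lines = reads | 'MergeDirs' >> beam.Flatten()
_ = p.run()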