I want to use Cloud Pub/Sub to detect when new files arrive in a Cloud Storage bucket. After analyzing each file, I want to save the results to another Cloud Storage bucket. From that bucket, the files will be sent on to BigQuery using another Pub/Sub topic and a Dataflow-provided template.
I get the following error when I run my code:
Traceback (most recent call last):
  File "SentAnal.py", line 71, in <module>
    "Splitting_Elements_of_Text" >> beam.ParDo(Split()) |
  File "C:\Python27\lib\site-packages\apache_beam\io\gcp\pubsub.py", line 141, in __init__
    timestamp_attribute=timestamp_attribute)
  File "C:\Python27\lib\site-packages\apache_beam\io\gcp\pubsub.py", line 262, in __init__
    self.project, self.topic_name = parse_topic(topic)
  File "C:\Python27\lib\site-packages\apache_beam\io\gcp\pubsub.py", line 209, in parse_topic
    match = re.match(TOPIC_REGEXP, full_topic)
  File "C:\Python27\lib\re.py", line 141, in match
    return _compile(pattern, flags).match(string)
TypeError: expected string or buffer
Here is my code snippet:
from __future__ import absolute_import
import os
import logging
from google.cloud import language
from google.cloud.language import enums
from google.cloud.language import types
from datetime import datetime
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
from apache_beam.options.pipeline_options import GoogleCloudOptions
from apache_beam.options.pipeline_options import StandardOptions
from apache_beam.io.textio import ReadFromText, WriteToText
dataflow_options = ['--project=*********','--job_name=*******','--temp_location=gs://**********/temp','--setup_file=./setup.py']
dataflow_options.append('--staging_location=gs://********/stage')
dataflow_options.append('--requirements_file=./requirements.txt')
options=PipelineOptions(dataflow_options)
gcloud_options=options.view_as(GoogleCloudOptions)
# Dataflow runner
options.view_as(StandardOptions).runner = 'DataflowRunner'
options.view_as(SetupOptions).save_main_session = True
class UserOptions(PipelineOptions):
    @classmethod
    def _add_argparse_args(cls, parser):
        source_date = datetime.now().strftime("%Y%m%d-%H%M%S")
        parser.add_value_provider_argument('--input_topic',
                                           help=('Input PubSub topic of the form '
                                                 '"projects/*****/topics/*****".'))
        parser.add_value_provider_argument('--output_topic',
                                           help=('Output PubSub topic of the form '
                                                 '"projects/**********/topics/*******".'))
class Split(beam.DoFn):
    def process(self, element):
        element = element.rstrip("\n").encode('utf-8')
        text = element.split(',')
        result = []
        for i in range(len(text)):
            dat = text[i]
            #print(dat)
            client = language.LanguageServiceClient()
            document = types.Document(content=dat, type=enums.Document.Type.PLAIN_TEXT)
            sent_analysis = client.analyze_sentiment(document=document)
            sentiment = sent_analysis.document_sentiment
            data = [
                (dat, sentiment.score)
            ]
            result.append(data)
        return result
class WriteToCSV(beam.DoFn):
    def process(self, element):
        return [
            "{},{}".format(
                element[0][0],
                element[0][1]
            )
        ]
user_options = options.view_as(UserOptions)
with beam.Pipeline(options=options) as p:
    rows = (p
            | beam.io.ReadFromPubSub(topic=user_options.input_topic)
              .with_output_types(bytes)
            | "Splitting_Elements_of_Text" >> beam.ParDo(Split())
            | beam.io.WriteToPubSub(topic=user_options.output_topic)
            )
Answer 0 (score: 0)
The problem is that you have read bytes from PubSub, and a regular expression is being applied to those byte elements. You first need to convert them into string elements.
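For reference, that TypeError is exactly what Python 2's re.match raises when its second argument is not a string (or buffer) at all. A minimal illustration follows; the pattern here is made up for the example and is not Beam's actual TOPIC_REGEXP:

import re

# Passing a non-string second argument (None here, purely for
# illustration) reproduces the error from the traceback above:
re.match(r'projects/.+/topics/.+', None)
# TypeError: expected string or buffer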
If you look at the streaming wordcount example, specifically the file streaming_wordcount.py, you will see that it decodes the bytes read from PubSub into a unicode string, like this:
messages = (p
            | beam.io.ReadFromPubSub(
                subscription=known_args.input_subscription)
              .with_output_types(bytes))

lines = messages | 'decode' >> beam.Map(lambda x: x.decode('utf-8'))
All further text processing is then done on the decoded lines collection.
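Applied to the pipeline in the question, a minimal sketch of the same fix would slot that decode step in between ReadFromPubSub and the Split DoFn. The 'decode' label is just illustrative, and note that Split's own encode('utf-8') call would then become redundant:

with beam.Pipeline(options=options) as p:
    rows = (p
            # Read the raw bytes from the input topic, as before.
            | beam.io.ReadFromPubSub(topic=user_options.input_topic)
              .with_output_types(bytes)
            # Decode each payload to a unicode string first, mirroring
            # the streaming wordcount example above.
            | 'decode' >> beam.Map(lambda x: x.decode('utf-8'))
            | 'Splitting_Elements_of_Text' >> beam.ParDo(Split())
            | beam.io.WriteToPubSub(topic=user_options.output_topic))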