Triggering a Dataflow job from App Engine

Date: 2017-04-18 17:43:33

Tags: python google-app-engine google-cloud-platform google-cloud-dataflow apache-beam

I am relatively new to GCP. I am currently doing a POC to create a scheduled Dataflow job that ingests (inserts) data from Google Cloud Storage into BigQuery. After reading some tutorials and documentation, I came up with the following:

  1. First, I created a Dataflow job that reads Avro files and loads them into BigQuery. This Dataflow job has been tested and works well (a sketch of the surrounding setup follows the snippet below).

    (self.pipeline
     | output_table + ': read table ' >> ReadFromAvro(storage_input_path)
     | output_table + ': filter columns' >> beam.Map(self.__filter_columns, columns=columns)
     | output_table + ': write to BigQuery' >> beam.Write(
         beam.io.BigQuerySink(
             output_table,
             create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
             write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)))
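
    The snippet above assumes a pipeline object and an input path built elsewhere. A minimal sketch of what that setup might look like, assuming a 2017-era Beam SDK (the options import path differs across versions); the project, bucket, and path values are placeholders, not taken from the question:

        # Minimal sketch of the surrounding setup; all values are placeholders.
        import apache_beam as beam
        from apache_beam.io.avroio import ReadFromAvro
        # On newer SDKs this lives at apache_beam.options.pipeline_options.
        from apache_beam.utils.pipeline_options import PipelineOptions

        options = PipelineOptions([
            '--runner=DataflowRunner',
            '--project=my-gcp-project',                    # placeholder
            '--temp_location=gs://my-bucket/tmp',          # placeholder
            '--staging_location=gs://my-bucket/staging',   # placeholder
        ])
        pipeline = beam.Pipeline(options=options)
        storage_input_path = 'gs://my-bucket/input/*.avro'  # placeholder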
    
  2. To create the scheduled job, I created a simple web service, shown below (a local test-run sketch follows the snippet):

    import logging
    from flask import Flask
    from common.tableLoader import TableLoader
    from ingestion import IngestionToBigQuery
    from common.configReader import ConfigReader

    app = Flask(__name__)

    @app.route('/')
    def hello():
        """Return a friendly HTTP greeting."""
        logging.getLogger().setLevel(logging.INFO)
        config = ConfigReader('columbus-config')  # TODO read from args
        tables = TableLoader('experience')
        ingestor = IngestionToBigQuery(config.configuration, tables.list_of_tables)
        ingestor.ingest_table()
        return 'Hello World!'
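
    A quick way to exercise the handler before deploying, assuming the module above is the `recsys_data_pipeline.main` that the gunicorn entrypoint references (this stanza is hypothetical, not part of the original service):

        # Hypothetical local-run stanza for manual testing only.
        if __name__ == '__main__':
            app.run(host='127.0.0.1', port=8080, debug=True)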
    
  3. I also created the app.yaml:

     runtime: python
     env: flex
     entrypoint: gunicorn -b :$PORT recsys_data_pipeline.main:app
     threadsafe: yes
     runtime_config:
       python_version: 2
     resources:
       memory_gb: 2.0
    
  4. Then I deployed it with `gcloud app deploy`; however, I got the following error:

    default[20170417t173837]  ERROR:root:The gcloud tool was not found.
    default[20170417t173837]  Traceback (most recent call last):
      File "/env/local/lib/python2.7/site-packages/apache_beam/internal/gcp/auth.py", line 109, in _refresh
        ['gcloud', 'auth', 'print-access-token'], stdout=processes.PIPE)
      File "/env/local/lib/python2.7/site-packages/apache_beam/utils/processes.py", line 52, in Popen
        return subprocess.Popen(*args, **kwargs)
      File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
        errread, errwrite)
      File "/usr/lib/python2.7/subprocess.py", line 1335, in _execute_child
        raise child_exception
    OSError: [Errno 2] No such file or directory
    

    From the message above, I can see that the error comes from apache_beam's auth.py; specifically, from the following function:

    def _refresh(self, http_request):
      """Gets an access token using the gcloud client."""
      try:
        gcloud_process = processes.Popen(
            ['gcloud', 'auth', 'print-access-token'], stdout=processes.PIPE)
      except OSError as exn:
        logging.error('The gcloud tool was not found.', exc_info=True)
        raise AuthenticationException('The gcloud tool was not found: %s' % exn)
      output, _ = gcloud_process.communicate()
      self.access_token = output.strip()
    

    which is called when the credentials (service_account_name and service_account_key_file) are not given:

    if google_cloud_options.service_account_name:
      if not google_cloud_options.service_account_key_file:
        raise AuthenticationException(
            'key file not provided for service account.')
      if not os.path.exists(google_cloud_options.service_account_key_file):
        raise AuthenticationException(
            'Specified service account key file does not exist.')
    else:
      try:
        credentials = _GCloudWrapperCredentials(user_agent)
        # Check if we are able to get an access token. If not fallback to
        # application default credentials.
        credentials.get_access_token()
        return credentials
    

    So I have two questions:

    1. Is there a way to "attach" the credentials (service_account_name and service_account_key_file) somewhere in my code or in a configuration file (e.g. in app.yaml)? A hypothetical sketch of what this might look like follows this list.
    2. What is the best practice for triggering a Dataflow job from Google App Engine?

    Thanks a lot; any suggestions and comments would be very helpful!
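
    For question 1, a hypothetical sketch of passing those credentials as pipeline options. The attribute names come from the auth code quoted above; whether they are exposed as command-line flags, and the options import path, depend on the SDK version, and the account and key path below are placeholders:

        # Hypothetical: supply the options that the quoted auth code checks
        # before falling back to shelling out to gcloud. All values are
        # placeholders; flag availability depends on the Beam SDK version.
        from apache_beam.utils.pipeline_options import PipelineOptions

        options = PipelineOptions([
            '--service_account_name=my-sa@my-project.iam.gserviceaccount.com',
            '--service_account_key_file=/path/to/service-account-key.p12',
        ])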

1 Answer:

Answer 0 (score: 0):

See the official sample at https://github.com/amygdala/gae-dataflow.
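
As a rough illustration of one common pattern (not necessarily what the linked sample does), the App Engine handler can launch a pre-staged Dataflow template through the Dataflow REST API instead of constructing the pipeline in-process, which sidesteps the gcloud-based auth fallback entirely. The project, bucket, template path, and job name below are placeholders:

    # Hedged sketch: launch a pre-staged Dataflow template via the REST API.
    # All identifiers are placeholders, not taken from the question or sample.
    from googleapiclient.discovery import build

    def launch_ingest_job():
        # Uses Application Default Credentials, which are available on GAE.
        dataflow = build('dataflow', 'v1b3')
        request = dataflow.projects().templates().launch(
            projectId='my-gcp-project',                    # placeholder
            gcsPath='gs://my-bucket/templates/gcs_to_bq',  # placeholder template path
            body={'jobName': 'gcs-to-bq-ingest', 'parameters': {}},
        )
        return request.execute()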