Hitting the memory limit with appengine-mapreduce

Asked: 2012-02-12 17:40:06

Tags: python google-app-engine memory-management mapreduce

I am working with the appengine-mapreduce functionality and have modified the demo to fit my purpose. Basically I have a million lines in the following format: userid, time1, time2. My goal is to find the difference between time1 and time2 for each userid.
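A line of input looks something like this (the values here are made up for illustration, but they follow the %m/%d/%y %I:%M:%S%p format my code parses):

    u12345,02/10/12 09:30:00AM,02/10/12 11:45:30AM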

However, when I run it on Google App Engine, I hit this error message in the logs section:

Exceeded soft private memory limit with 180.56 MB after servicing 130 requests total. While handling this request, the process that handled this request was found to be using too much memory and was terminated. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may have a memory leak in your application.

import csv
import logging
import time

def time_count_map(data):
  """Time count map function."""
  (entry, text_fn) = data
  text = text_fn()

  try:
    q = text.split('\n')
    for m in q:
      # Strip NUL bytes before handing the line to the csv reader.
      reader = csv.reader([m.replace('\0', '')], skipinitialspace=True)
      for s in reader:
        # Calculate the time elapsed between the two timestamps.
        sdw = s[1]
        start_date = time.strptime(sdw, "%m/%d/%y %I:%M:%S%p")
        edw = s[2]
        end_date = time.strptime(edw, "%m/%d/%y %I:%M:%S%p")
        time_difference = time.mktime(end_date) - time.mktime(start_date)
        yield (s[0], time_difference)
  except IndexError, e:
    logging.debug(e)


def time_count_reduce(key, values):
  """Time count reduce function."""
  # Use a name other than `time` so the time module isn't shadowed.
  total = 0.0
  for subtime in values:
    total += float(subtime)
  yield "%s: %d\n" % (key, int(total))

Can anyone suggest how I can better optimize my code? Thanks!

Edit:

Here is the pipeline handler:

class TimeCountPipeline(base_handler.PipelineBase):
  """A pipeline to run the Time count demo.

  Args:
    filekey: identifier of the uploaded file, passed through to StoreOutput.
    blobkey: blobkey to process as string. Should be a zip archive with
      text files inside.
  """

  def run(self, filekey, blobkey):
    logging.debug("filename is %s" % filekey)
    output = yield mapreduce_pipeline.MapreducePipeline(
        "time_count",
        "main.time_count_map",
        "main.time_count_reduce",
        "mapreduce.input_readers.BlobstoreZipInputReader",
        "mapreduce.output_writers.BlobstoreOutputWriter",
        mapper_params={
            "blob_key": blobkey,
        },
        reducer_params={
            "mime_type": "text/plain",
        },
        shards=32)
    yield StoreOutput("TimeCount", filekey, output)

mapreduce.yaml:

mapreduce:
- name: Make messages lowercase
  params:
  - name: done_callback
    value: /done
  mapper:
    handler: main.lower_case_posts
    input_reader: mapreduce.input_readers.DatastoreInputReader
    params:
    - name: entity_kind
      default: main.Post
    - name: processing_rate
      default: 100
    - name: shard_count
      default: 4
- name: Make messages upper case
  params:
  - name: done_callback
    value: /done
  mapper:
    handler: main.upper_case_posts
    input_reader: mapreduce.input_readers.DatastoreInputReader
    params:
    - name: entity_kind
      default: main.Post
    - name: processing_rate
      default: 100
    - name: shard_count
      default: 4

The rest of the files are exactly the same as in the demo.

I have uploaded a copy of my code to Dropbox: http://dl.dropbox.com/u/4288806/demo%20compressed%20fail%20memory.zip

2 Answers:

Answer 0 (score: 6):

Also consider calling gc.collect() at regular points in your code. I have seen several SO questions about exceeding the soft memory limit that were alleviated by calling gc.collect(), most of them to do with blobstore.
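A rough sketch of what that could look like in your map function; the 500-line interval is an arbitrary choice on my part, not something prescribed by App Engine:

import csv
import gc
import time

def time_count_map(data):
  """Time count map function that forces a collection every few hundred lines."""
  (entry, text_fn) = data
  text = text_fn()

  for i, line in enumerate(text.split('\n')):
    reader = csv.reader([line.replace('\0', '')], skipinitialspace=True)
    for s in reader:
      if len(s) < 3:
        # Skip blank or malformed lines instead of raising IndexError.
        continue
      start_date = time.strptime(s[1], "%m/%d/%y %I:%M:%S%p")
      end_date = time.strptime(s[2], "%m/%d/%y %I:%M:%S%p")
      yield (s[0], time.mktime(end_date) - time.mktime(start_date))
    if i % 500 == 0:
      # Arbitrary interval: collect every 500 lines to keep the instance
      # below the soft private memory limit.
      gc.collect()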

Answer 1 (score: 2):

It is likely your input file exceeds the soft memory limit in size. For big files use either BlobstoreLineInputReader or BlobstoreZipLineInputReader.

These input readers pass something different to the map function: they pass the start_position within the file and the line of text.

Your map function could then look something like this:

def time_count_map(data):
    """Time count map function."""
    # data is a (start_position, line_text) tuple from the line-based reader.
    text = data[1]

    try:
        reader = csv.reader([text.replace('\0', '')], skipinitialspace=True)
        for s in reader:
            """Calculate time elapsed"""
            sdw = s[1]
            start_date = time.strptime(sdw,"%m/%d/%y %I:%M:%S%p")
            edw = s[2]
            end_date = time.strptime(edw,"%m/%d/%y %I:%M:%S%p")
            time_difference = time.mktime(end_date) - time.mktime(start_date)
            yield (s[0], time_difference)
    except IndexError, e:
        logging.debug(e)

Using BlobstoreLineInputReader will allow the job to run much faster, as it can use more than one shard, up to 256, but it means you need to upload your files uncompressed, which can be a pain. I handled it by uploading the compressed files to an EC2 Windows server, then unzipping and uploading from there, since upstream bandwidth there is so big.
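For completeness, switching your pipeline over would be roughly this kind of change (a sketch only, reusing the imports and StoreOutput from your question; it assumes the uncompressed file is already in the blobstore and, if I remember the reader correctly, that it takes its input as a blob_keys list in mapper_params):

class TimeCountPipeline(base_handler.PipelineBase):
  """Time count pipeline reading an uncompressed text blob line by line."""

  def run(self, filekey, blobkey):
    output = yield mapreduce_pipeline.MapreducePipeline(
        "time_count",
        "main.time_count_map",
        "main.time_count_reduce",
        "mapreduce.input_readers.BlobstoreLineInputReader",
        "mapreduce.output_writers.BlobstoreOutputWriter",
        mapper_params={
            # BlobstoreLineInputReader expects a list of blob keys.
            "blob_keys": [blobkey],
        },
        reducer_params={
            "mime_type": "text/plain",
        },
        # Line-based input can be split across many more shards, up to 256;
        # 128 here is just an example value.
        shards=128)
    yield StoreOutput("TimeCount", filekey, output)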