Writing timestamps to Postgres with PySpark

Date: 2017-01-19 12:03:01

Tags: python postgresql apache-spark pyspark apache-spark-sql

I'm working on a Spark script in Python (using PySpark). I have a function that returns a Row with several fields, including

timestamp=datetime.strptime(processed_data[1], DATI_REGEX)

processed_data[1] is a valid datetime string.

Edit: here is the full code:

DATI_REGEX = "%Y-%m-%dT%H:%M:%S"

class UserActivity(object):
    def __init__(self, user, rows):
        self.user = int(user)
        self.rows = sorted(rows, key=operator.attrgetter('timestamp'))

    def write(self):
        return Row(
            user=self.user,
            timestamp=self.rows[-1].timestamp,
        )

def parse_log_line(logline):
    try:
        entries = logline.split('\\t')
        processed_data = entries[0].split('\t') + entries[1:]

        return Row(
            ip_address=processed_data[9],
            user=int(processed_data[10]),
            timestamp=datetime.strptime(processed_data[1], DATI_REGEX),
        )
    except (IndexError, ValueError):
        return None


log_file = sc.textFile(...)
rows = (log_file.map(parse_log_line).filter(None)
        .filter(lambda x: current_day <= x.timestamp < next_day))
user_rows = rows.map(lambda x: (x.user, x)).groupByKey()
user_dailies = user_rows.map(lambda x: UserActivity(x[0], x[1]).write())

The problem appears when I try to write this to a PostgreSQL database, doing the following:

fields = [
    StructField("user_id", IntegerType(), False),
    StructField("timestamp", TimestampType(), False),
]
schema = StructType(fields)
user_dailies_schema = SQLContext(sc).createDataFrame(user_dailies, schema)
user_dailies_schema.write.jdbc(
    "jdbc:postgresql:.......",
    "tablename")

I get the following error:

Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/Users/pau/Downloads/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 172, in main
    process()
  File "/Users/pau/Downloads/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 167, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/Users/pau/Downloads/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream
    vs = list(itertools.islice(iterator, batch))
  File "/Users/pau/Downloads/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/sql/types.py", line 576, in toInternal
  File "/Users/pau/Downloads/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/sql/types.py", line 576, in <genexpr>
  File "/Users/pau/Downloads/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/sql/types.py", line 436, in toInternal
    return self.dataType.toInternal(obj)
  File "/Users/pau/Downloads/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/sql/types.py", line 190, in toInternal
    seconds = (calendar.timegm(dt.utctimetuple()) if dt.tzinfo
AttributeError: 'int' object has no attribute 'tzinfo'

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
    at org.apache.spark.scheduler.Task.run(Task.scala:86)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    ... 1 more

How can I solve this?

1 Answer:

Answer 0 (score: 1)

The problem is relatively simple. A PySpark Row is a tuple whose fields are sorted by name. That means that when you create:

Row(user=self.user, timestamp=self.rows[-1].timestamp)

the resulting structure is ordered as follows:

Row(timestamp, user)
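
For instance, a quick check in a PySpark shell shows this ordering (a minimal sketch; in Spark 2.x a Row built from keyword arguments sorts its fields alphabetically):

from datetime import datetime
from pyspark.sql import Row

r = Row(user=1, timestamp=datetime(2017, 1, 19, 12, 0))
# The repr shows the sorted field order, and position 0 is the timestamp:
# Row(timestamp=datetime.datetime(2017, 1, 19, 12, 0), user=1)
print(r)
print(r[0])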
A StructType, on the other hand, keeps the fields in the order they are declared. As a result, your code ends up trying to use the user id as the timestamp. You should either return a plain tuple:

class UserActivity(object):
    ...
    def write(self):
        return (self.user, self.rows[-1].timestamp)

or use a schema sorted lexicographically by field name, so that it matches the alphabetical ordering Row applies:

schema = StructType(sorted(fields, key=operator.attrgetter("name")))
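
With the fields from the question this looks like the sketch below; "timestamp" sorts before "user_id", so the schema now lines up with the alphabetical field order of the keyword-constructed Row:

import operator
from pyspark.sql.types import StructField, StructType, IntegerType, TimestampType

fields = [
    StructField("user_id", IntegerType(), False),
    StructField("timestamp", TimestampType(), False),
]
# After sorting, the schema order is (timestamp, user_id), matching the Row.
schema = StructType(sorted(fields, key=operator.attrgetter("name")))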

Finally, you can use a namedtuple to get both attribute access and a predefined field order, for example:
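
Here is a minimal sketch of that idea (the UserDaily name is just for illustration):

from collections import namedtuple
from datetime import datetime

# Fields keep the order they are declared in and are still accessible as
# attributes, so write() could return one of these instead of a Row.
UserDaily = namedtuple("UserDaily", ["user", "timestamp"])

d = UserDaily(user=42, timestamp=datetime(2017, 1, 19, 12, 0))
d.user   # 42 -- attribute access, like Row
d[1]     # the timestamp -- positional order matches the declaration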

As a side note, don't use groupByKey like this. This is a typical case for reduceByKey:

(log_file.map(parse_log_line)
    .map(operator.attrgetter("user", "timestamp"))
    .reduceByKey(max))

or, keeping multiple fields:

from functools import partial

(log_file.map(parse_log_line)
    .map(lambda x: (x.user, x))
    .reduceByKey(partial(max, key=operator.itemgetter("timestamp")))
    .values())

or with a DataFrame aggregation:

from pyspark.sql import functions as f

(sqlContext
    .createDataFrame(
        log_file.map(parse_log_line)
          # Another way to handle ordering is to choose fields
          # before you call createDataFrame
          .map(operator.attrgetter("user", "timestamp")),
        schema)
    .groupBy("user_id")
    .agg(f.max("timestamp").alias("timestamp")))
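
Once the aggregation is assigned to a DataFrame (say user_dailies), it can be written out the same way as in the question; the URL, table name, and connection properties below are placeholders:

user_dailies.write.jdbc(
    "jdbc:postgresql://host:5432/dbname",
    "tablename",
    mode="append",
    properties={"user": "...", "password": "...", "driver": "org.postgresql.Driver"},
)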

Also, if you want to get hold of an SQLContext, you should use the factory method:

SQLContext.getOrCreate(sc)

Creating a new context the way you do can have unintended side effects.