Counting unread news in a big table

Asked: 2017-12-15 11:16:02

Tags: postgresql sqlalchemy query-performance postgresql-performance postgresql-10

I have a fairly common (or so I think) database structure: there are news items (News(id, source_id)), and each news item has a source (Source(id, url)). Sources are aggregated into topics (Topic(id, title)) via TopicSource(source_id, topic_id). In addition, there are users (User(id, name)) who can mark news as read via NewsRead(news_id, user_id). Here is a cleaned-up diagram: db diagram

I want to count the unread news per topic for a given user. The problem is that the News table is big (10^6 to 10^7 rows). Fortunately, I don't need the exact count: it is fine to stop counting at some threshold and return that threshold as the count.

Based on this answer, I came up with the following query for a single topic:

SELECT t.topic_id, count(1) as unread_count
FROM (
 SELECT 1, topic_id
 FROM news n
   JOIN topic_source t ON n.source_id = t.source_id
   -- join news_read to filter already read news
   LEFT JOIN news_read r
     ON (n.id = r.news_id AND r.user_id = 1)
 WHERE t.topic_id = 3 AND r.user_id IS NULL
 LIMIT 10 -- Threshold
) t GROUP BY t.topic_id;

(query plan 1). This query takes about 50 ms on the test database, which is acceptable.
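The bounded-count trick used above (LIMIT inside the subquery, count(1) outside) can be demonstrated in miniature. This sketch uses SQLite and a toy one-column table purely for illustration, not the schema from the question:

```python
import sqlite3

# Toy table: 25 news rows, all in topic 3.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE news (id INTEGER PRIMARY KEY, topic_id INTEGER)")
conn.executemany("INSERT INTO news (topic_id) VALUES (?)", [(3,)] * 25)

# LIMIT inside the subquery caps how many rows the outer count sees,
# so scanning stops at the threshold instead of visiting all rows.
(capped,) = conn.execute(
    "SELECT count(1) FROM ("
    "  SELECT 1 FROM news WHERE topic_id = 3 LIMIT 10"
    ") t"
).fetchone()
print(capped)  # 10, not 25
```

The same shape works in PostgreSQL; the point is that the count can never exceed the threshold, which is what makes the 50 ms figure possible on a 10^6-row table.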

Now I want to select the unread counts for several topics at once. I tried selecting like this:

SELECT
  t.topic_id,
  (SELECT count(1)
   FROM (SELECT 1 FROM news n
          JOIN topic_source tt ON n.source_id = tt.source_id
          LEFT JOIN news_read r
            ON (n.id = r.news_id AND r.user_id = 1)
          WHERE tt.topic_id = t.topic_id AND r.user_id IS NULL
          LIMIT 10 -- Threshold
        ) t) AS unread_count
FROM topic_source t WHERE t.topic_id IN (1, 2) GROUP BY t.topic_id;

(query plan 2). But for reasons I don't understand, it takes about 1.5 s on the test data, while the sum of the individual single-topic queries should be around 0.2-0.3 s.

I am clearly missing something here. Is there a mistake in the second query? Is there a better (faster) way to select the number of unread news items?

Additional info:

Table sizes:

News - 10^6 - 10^7
User - 10^3
Source - 10^4
Topic - 10^3
TopicSource - 10^5
NewsRead - 10^6

UPD: The query plans clearly show that I messed up the second query. Any hints are appreciated.

UPD2: I tried this query with a lateral join, which should simply run the first (fastest) query once per topic_id:

SELECT
  id,
  count(*)
FROM topic t
  LEFT JOIN LATERAL (
     SELECT ts.topic_id
     FROM news n
       LEFT JOIN news_read r
         ON (n.id = r.news_id AND r.user_id = 1)
       JOIN topic_source ts ON n.source_id = ts.source_id
     WHERE ts.topic_id = t.id AND r.user_id IS NULL
     LIMIT 10
) p ON TRUE
WHERE t.id IN (4, 10, 12, 16)
GROUP BY t.id;

(query plan 3). But it seems the Pg planner has a different opinion: it runs a very slow seq scan and a hash join instead of an index scan and a merge join.

1 Answer:

Answer 0 (score: 0)

After some benchmarking I finally settled on a plain UNION ALL query, which turned out to be ten times faster than the lateral join on my data:

SELECT
  p.topic_id,
  count(*)
FROM (
       SELECT *
       FROM (
              SELECT fs.topic_id
              FROM news n
                LEFT JOIN news_read r
                  ON (n.id = r.news_id AND r.user_id = 1)
                JOIN topic_source fs ON n.source_id = fs.source_id
              WHERE fs.topic_id = 4 AND r.user_id IS NULL
              LIMIT 100
            ) t1
       UNION ALL
       SELECT *
       FROM (
              SELECT fs.topic_id
              FROM news n
                LEFT JOIN news_read r
                  ON (n.id = r.news_id AND r.user_id = 1)
                JOIN topic_source fs ON n.source_id = fs.source_id
              WHERE fs.topic_id = 10 AND r.user_id IS NULL
              LIMIT 100
            ) t1
       UNION ALL
       SELECT *
       FROM (
              SELECT fs.topic_id
              FROM news n
                LEFT JOIN news_read r
                  ON (n.id = r.news_id AND r.user_id = 1)
                JOIN topic_source fs ON n.source_id = fs.source_id
              WHERE fs.topic_id = 12 AND r.user_id IS NULL
              LIMIT 100
            ) t1
       UNION ALL
       SELECT *
       FROM (
              SELECT fs.topic_id
              FROM news n
                LEFT JOIN news_read r
                  ON (n.id = r.news_id AND r.user_id = 1)
                JOIN topic_source fs ON n.source_id = fs.source_id
              WHERE fs.topic_id = 16 AND r.user_id IS NULL
              LIMIT 100
            ) t1
     ) p
GROUP BY p.topic_id;

execution plan

The intuition here is that spelling out each topic_id explicitly gives the Pg planner enough information to build an efficient plan.
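Since the branches differ only in the literal topic_id, the query text can be generated rather than written by hand. The helper below is a hypothetical sketch (not from the original answer); it interpolates values into the SQL string for readability, whereas production code should use bound parameters:

```python
def union_all_unread_sql(topic_ids, user_id, threshold=100):
    """Build the UNION ALL query text with one branch per topic_id,
    so the planner sees literal topic ids. Illustration only:
    real code should bind parameters instead of interpolating."""
    branch = (
        "SELECT * FROM ("
        "SELECT fs.topic_id FROM news n"
        " LEFT JOIN news_read r ON (n.id = r.news_id AND r.user_id = {u})"
        " JOIN topic_source fs ON n.source_id = fs.source_id"
        " WHERE fs.topic_id = {t} AND r.user_id IS NULL"
        " LIMIT {lim}) b{i}"
    )
    branches = [
        branch.format(u=user_id, t=tid, lim=threshold, i=i)
        for i, tid in enumerate(topic_ids)
    ]
    return ("SELECT p.topic_id, count(*) FROM (\n"
            + "\nUNION ALL\n".join(branches)
            + "\n) p GROUP BY p.topic_id;")

print(union_all_unread_sql([4, 10, 12, 16], user_id=1))
```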

From the SQLAlchemy side this is quite straightforward:

from sqlalchemy import and_, func

# topic_ids, user_id are defined elsewhere, e.g.
# topic_ids = [4, 10, 12, 16]
# user_id = 1
topic_queries = []
for topic_id in topic_ids:
    topic_query = (
        db.session.query(News.id, TopicSource.topic_id)
        .join(TopicSource, TopicSource.source_id == News.source_id)
        # LEFT JOIN the NewsRead table to keep only unread news
        # (rows where NewsRead.user_id IS NULL)
        .outerjoin(NewsRead,
                   and_(NewsRead.news_id == News.id,
                        NewsRead.user_id == user_id))
        .filter(TopicSource.topic_id == topic_id,
                NewsRead.user_id.is_(None))
        .limit(100))
    topic_queries.append(topic_query)
# Unite the per-topic queries with UNION ALL
union_query = topic_queries[0].union_all(*topic_queries[1:])
# Group by `topic_id` and count unreads
counts = (union_query
          # Using `with_entities(func.count())` to avoid
          # a subquery.  See link below for info:
          # https://gist.github.com/hest/8798884
          .with_entities(TopicSource.topic_id.label('topic_id'),
                         func.count().label('unread_count'))
          .group_by(TopicSource.topic_id))
result = counts.all()
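One caveat worth noting (not covered in the original answer): a topic whose branch yields no rows produces no group at all, so it is simply absent from the result. A minimal sketch of filling in zeros on the caller's side, assuming result is a list of (topic_id, unread_count) tuples as returned by counts.all():

```python
def with_zero_counts(result_rows, topic_ids):
    # Topics with no unread news produce no group in the SQL result,
    # so fill those in with an explicit zero.
    counts = dict(result_rows)
    return {tid: counts.get(tid, 0) for tid in topic_ids}

print(with_zero_counts([(4, 100), (12, 7)], [4, 10, 12, 16]))
# {4: 100, 10: 0, 12: 7, 16: 0}
```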