2017-07-30

How to optimize a PostgreSQL leaderboard window function query

In our API we have a basic ranking/leaderboard feature: each client user has a list of "actions" it can perform, each action earns a score, all actions are logged in an "actions" table, and each user can then request the leaderboard for the current month (the leaderboard resets every month). Nothing fancy.

We have two tables: a users table and an actions table (I have removed the irrelevant columns):

> \d client_users 
              Table "public.client_users" 
    Column   |            Type             |                        Modifiers
------------+-----------------------------+-----------------------------------------------------------
 id         | integer                     | not null default nextval('client_users_id_seq'::regclass)
 app_id     | integer                     | 
 user_id    | character varying           | not null
 created_at | timestamp without time zone | 
 updated_at | timestamp without time zone | 
Indexes: 
    "client_users_pkey" PRIMARY KEY, btree (id) 
    "index_client_users_on_app_id" btree (app_id) 
    "index_client_users_on_user_id" btree (user_id) 
Foreign-key constraints: 
    "client_users_app_id_fk" FOREIGN KEY (app_id) REFERENCES apps(id) 
Referenced by: 
    TABLE "leaderboard_actions" CONSTRAINT "leaderboard_actions_client_user_id_fk" FOREIGN KEY (client_user_id) REFERENCES client_users(id) 

> \d leaderboard_actions 
             Table "public.leaderboard_actions" 
     Column     |            Type             |                            Modifiers
----------------+-----------------------------+------------------------------------------------------------------
 id             | integer                     | not null default nextval('leaderboard_actions_id_seq'::regclass)
 client_user_id | integer                     | 
 score          | integer                     | not null default 0
 created_at     | timestamp without time zone | 
 updated_at     | timestamp without time zone | 
Indexes: 
    "leaderboard_actions_pkey" PRIMARY KEY, btree (id) 
    "index_leaderboard_actions_on_client_user_id" btree (client_user_id) 
    "index_leaderboard_actions_on_created_at" btree (created_at) 
Foreign-key constraints: 
    "leaderboard_actions_client_user_id_fk" FOREIGN KEY (client_user_id) REFERENCES client_users(id) 

The query I am trying to optimize is the following:

SELECT 
    cu.user_id, 
    SUM(la.score) AS total_score, 
    rank() OVER (ORDER BY SUM(la.score) DESC) AS ranking 
FROM client_users cu 
JOIN leaderboard_actions la ON cu.id = la.client_user_id 
WHERE cu.app_id = 8 
AND la.created_at BETWEEN '2017-07-01 00:00:00.000000' AND '2017-07-31 23:59:59.999999' 
GROUP BY cu.id 
ORDER BY total_score DESC 
LIMIT 20; 

Note: client_users.user_id is a VARCHAR "human ID"; the join is on the client_users.id foreign key (the naming isn't great, I know :D).

Basically, I'm asking PostgreSQL for the 20 top-ranked users of the current month, ranked by the total score of their individual actions.
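As a sanity check of what the query computes (not of its PostgreSQL performance), the same aggregate-then-rank logic can be reproduced on toy data with SQLite's window functions. The table names mirror the schema above, but the rows and scores are invented, and the ranking is moved into an outer query over a subquery only so the same statement also runs on SQLite:

```python
import sqlite3

# In-memory stand-in for the two tables; rows are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE client_users (id INTEGER PRIMARY KEY, app_id INTEGER, user_id TEXT);
CREATE TABLE leaderboard_actions (
    id INTEGER PRIMARY KEY,
    client_user_id INTEGER REFERENCES client_users(id),
    score INTEGER NOT NULL DEFAULT 0,
    created_at TEXT);
INSERT INTO client_users VALUES (1, 8, 'alice'), (2, 8, 'bob'), (3, 9, 'carol');
INSERT INTO leaderboard_actions (client_user_id, score, created_at) VALUES
    (1, 10, '2017-07-05'),  -- alice, July
    (1, 5,  '2017-07-20'),  -- alice, July (total 15)
    (2, 30, '2017-07-10'),  -- bob, July
    (2, 7,  '2017-06-30'),  -- bob, outside the month: ignored
    (3, 99, '2017-07-15');  -- carol, app_id 9: filtered out
""")

# Sum each user's July scores, then rank the totals, highest first.
rows = conn.execute("""
    SELECT user_id, total_score,
           RANK() OVER (ORDER BY total_score DESC) AS ranking
    FROM (
        SELECT cu.user_id, SUM(la.score) AS total_score
        FROM client_users cu
        JOIN leaderboard_actions la ON cu.id = la.client_user_id
        WHERE cu.app_id = 8
          AND la.created_at >= '2017-07-01' AND la.created_at < '2017-08-01'
        GROUP BY cu.id
    )
    ORDER BY total_score DESC
    LIMIT 20
""").fetchall()
print(rows)
```

With these rows, bob (30 points in July) ranks first and alice (15) second; the June action and the app 9 user are filtered out, matching what the PostgreSQL query above is asked to do.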

As you can see from the query plan, it's not that fast:

Limit (cost=8641.96..8642.05 rows=20 width=52) (actual time=135.544..135.560 rows=20 loops=1) 
Output: cu.user_id, (sum(la.score)), (rank() OVER (?)), cu.id 
-> WindowAgg (cost=8641.96..8841.42 rows=44326 width=52) (actual time=135.543..135.559 rows=20 loops=1) 
     Output: cu.user_id, (sum(la.score)), rank() OVER (?), cu.id 
     -> Sort (cost=8641.96..8664.12 rows=44326 width=44) (actual time=135.538..135.539 rows=20 loops=1) 
      Output: (sum(la.score)), cu.id, cu.user_id 
      Sort Key: (sum(la.score)) DESC 
      Sort Method: quicksort Memory: 1451kB 
      -> HashAggregate (cost=7824.77..7957.75 rows=44326 width=44) (actual time=130.938..133.124 rows=10411 loops=1) 
        Output: sum(la.score), cu.id, cu.user_id 
        Group Key: cu.id 
        -> Hash Join (cost=5858.66..7780.44 rows=44326 width=40) (actual time=50.849..111.346 rows=79382 loops=1) 
         Output: cu.id, cu.user_id, la.score 
         Hash Cond: (la.client_user_id = cu.id) 
         -> Index Scan using index_leaderboard_actions_on_created_at on public.leaderboard_actions la (cost=0.09..1736.77 rows=69494 width=8) (actual time=0.020..33.773 rows=79382 loops=1) 
           Output: la.id, la.client_user_id, la.rule_id, la.score, la.created_at, la.updated_at, la.success 
           Index Cond: ((la.created_at >= '2017-07-01 00:00:00'::timestamp without time zone) AND (la.created_at <= '2017-07-31 23:59:59.999999'::timestamp without time zone)) 
         -> Hash (cost=5572.11..5572.11 rows=81846 width=36) (actual time=50.330..50.330 rows=81859 loops=1) 
           Output: cu.user_id, cu.id 
           Buckets: 131072 Batches: 1 Memory Usage: 6583kB 
           -> Seq Scan on public.client_users cu (cost=0.00..5572.11 rows=81846 width=36) (actual time=0.014..34.539 rows=81859 loops=1) 
            Output: cu.user_id, cu.id 
            Filter: (cu.app_id = 8) 
            Rows Removed by Filter: 46610 
Planning time: 1.276 ms 
Execution time: 136.176 ms 
(26 rows) 

To give you an idea of the sizes involved:

  • client_users has about 128,471 rows, of which only 81,860 match the targeted query (app_id = 8)
  • leaderboard_actions has 1,609,992 rows in total, 79,435 of them in the current month

Any ideas?

Thanks!

+0

I disagree with you: given how much information you're asking for, the plan is actually *fast*. – joanolo

Answer


The plan you're getting is actually more than reasonably fast.

Another couple of indexes might help your plan (a bit):

CREATE INDEX idx_client_users_app_id_user 
    ON client_users(app_id, id, user_id) ; 

CREATE INDEX idx_leaderboard_actions_3 
    ON leaderboard_actions(created_at, client_user_id, score) ; 

After creating both indexes, execute:

VACUUM ANALYZE client_users; 
VACUUM ANALYZE leaderboard_actions; 

These indexes will (most likely) allow the query to read only them, instead of the tables client_users and leaderboard_actions: all the information it needs is already there. The plan should then show some Index Only Scan nodes.
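One way to verify this (a sketch; the app id 8 and the July dates are the values from the question) is to re-run the plan after creating the indexes and look for Index Only Scan nodes:

```sql
EXPLAIN (ANALYZE, VERBOSE, BUFFERS)
SELECT
    cu.user_id,
    SUM(la.score) AS total_score,
    rank() OVER (ORDER BY SUM(la.score) DESC) AS ranking
FROM client_users cu
JOIN leaderboard_actions la ON cu.id = la.client_user_id
WHERE cu.app_id = 8
  -- half-open range: equivalent to the original BETWEEN, but with no
  -- dependence on the '23:59:59.999999' endpoint
  AND la.created_at >= '2017-07-01' AND la.created_at < '2017-08-01'
GROUP BY cu.id
ORDER BY total_score DESC
LIMIT 20;
```

Note that Index Only Scan also depends on the visibility map being up to date, which is another reason the VACUUM ANALYZE step above matters.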

At dbfiddle here you can find a simulation of your scenario, with a 30% improvement in execution time. You may get a similar improvement in your real scenario.

+1

Thank you very much for your input. It seems to work perfectly. I'll keep an eye on the write queries to see whether they slow down, but the overhead shouldn't be too big. –

+1

Maintaining the indexes adds some overhead to every 'INSERT' and 'UPDATE' (especially the ones that modify any indexed column). Depending on whether your scenario is *read-heavy* or *write-heavy*, having these indexes will make more or less sense: in the first case you'll notice an overall improvement, while in the second the overhead may not pay off. – joanolo