
I have a table with 3 million rows, about 1.3 GB in size, running under Postgres 9.3 on my laptop with 4 GB of RAM. The query below is slow (slow index scan):

explain analyze 
select act_owner_id from cnt_contacts where act_owner_id = 2 

I have defined a B-tree index on cnt_contacts.act_owner_id as:

CREATE INDEX cnt_contacts_idx_act_owner_id 
    ON public.cnt_contacts USING btree (act_owner_id, status_id); 

The query takes about 5 seconds:

 
Bitmap Heap Scan on cnt_contacts (cost=2598.79..86290.73 rows=6208 width=4) (actual time=5865.617..5875.302 rows=5444 loops=1) 
    Recheck Cond: (act_owner_id = 2) 
    -> Bitmap Index Scan on cnt_contacts_idx_act_owner_id (cost=0.00..2597.24 rows=6208 width=0) (actual time=5865.407..5865.407 rows=5444 loops=1) 
     Index Cond: (act_owner_id = 2) 
Total runtime: 5875.684 ms
Why does it take so long to run?

work_mem = 1024MB; 
shared_buffers = 128MB; 
effective_cache_size = 1024MB 
seq_page_cost = 1.0   # measured on an arbitrary scale 
random_page_cost = 15.0   # same scale as above 
cpu_tuple_cost = 3.0 

What is the definition of the "cnt_contacts_idx_act_owner_id" index? –


CREATE INDEX cnt_contacts_idx_act_owner_id ON public.cnt_contacts USING btree (act_owner_id, status_id); –


You should create another index on 'act_owner_id' alone. – frlan

Answers

2

You are selecting 5444 records scattered over a 1.3 GB table, on a laptop. How long do you expect that to take?

It looks like your index is not cached, either because it cannot stay in the cache, or because this is the first time you have used that part of it. What happens if you run the exact same query repeatedly? What about the same query with a different constant?

Running the query under "explain (analyze, buffers)" would give more information, especially if you turn on track_io_timing first.
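For example, a minimal sketch using the table and constant from the question (SET track_io_timing requires superuser privileges on 9.3):

SET track_io_timing = on;  -- record I/O wait time per query for this session

-- BUFFERS shows how many pages came from shared buffers vs. disk
EXPLAIN (ANALYZE, BUFFERS)
SELECT act_owner_id FROM cnt_contacts WHERE act_owner_id = 2;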

0

OK, you have a big table with an index, and PG takes a long time to execute the plan. Let's think about how to improve the plan and reduce the time. You write and delete rows; PG writes and deletes tuples, so tables and indexes can become bloated. To search efficiently, PG loads indexes into shared buffers, so you need to keep your indexes as clean as possible. For a select, PG reads pages into shared buffers and searches there. Try to tune the buffer memory settings, reduce index and table bloat, and keep the database clean.

What can you do? Think about the following:

1) Check for duplicate or unused indexes, and make sure your indexes have good selectivity:

WITH table_scans as (
    SELECT relid, 
     tables.idx_scan + tables.seq_scan as all_scans, 
     (tables.n_tup_ins + tables.n_tup_upd + tables.n_tup_del) as writes, 
       pg_relation_size(relid) as table_size 
     FROM pg_stat_user_tables as tables 
), 
all_writes as (
    SELECT sum(writes) as total_writes 
    FROM table_scans 
), 
indexes as (
    SELECT idx_stat.relid, idx_stat.indexrelid, 
     idx_stat.schemaname, idx_stat.relname as tablename, 
     idx_stat.indexrelname as indexname, 
     idx_stat.idx_scan, 
     pg_relation_size(idx_stat.indexrelid) as index_bytes, 
     indexdef ~* 'USING btree' AS idx_is_btree 
    FROM pg_stat_user_indexes as idx_stat 
     JOIN pg_index 
      USING (indexrelid) 
     JOIN pg_indexes as indexes 
      ON idx_stat.schemaname = indexes.schemaname 
       AND idx_stat.relname = indexes.tablename 
       AND idx_stat.indexrelname = indexes.indexname 
    WHERE pg_index.indisunique = FALSE 
), 
index_ratios AS (
SELECT schemaname, tablename, indexname, 
    idx_scan, all_scans, 
    round((CASE WHEN all_scans = 0 THEN 0.0::NUMERIC 
     ELSE idx_scan::NUMERIC/all_scans * 100 END),2) as index_scan_pct, 
    writes, 
    round((CASE WHEN writes = 0 THEN idx_scan::NUMERIC ELSE idx_scan::NUMERIC/writes END),2) 
     as scans_per_write, 
    pg_size_pretty(index_bytes) as index_size, 
    pg_size_pretty(table_size) as table_size, 
    idx_is_btree, index_bytes 
    FROM indexes 
    JOIN table_scans 
    USING (relid) 
), 
index_groups AS (
SELECT 'Never Used Indexes' as reason, *, 1 as grp 
FROM index_ratios 
WHERE 
    idx_scan = 0 
    and idx_is_btree 
UNION ALL 
SELECT 'Low Scans, High Writes' as reason, *, 2 as grp 
FROM index_ratios 
WHERE 
    scans_per_write <= 1 
    and index_scan_pct < 10 
    and idx_scan > 0 
    and writes > 100 
    and idx_is_btree 
UNION ALL 
SELECT 'Seldom Used Large Indexes' as reason, *, 3 as grp 
FROM index_ratios 
WHERE 
    index_scan_pct < 5 
    and scans_per_write > 1 
    and idx_scan > 0 
    and idx_is_btree 
    and index_bytes > 100000000 
UNION ALL 
SELECT 'High-Write Large Non-Btree' as reason, index_ratios.*, 4 as grp 
FROM index_ratios, all_writes 
WHERE 
    (writes::NUMERIC/(total_writes + 1)) > 0.02 
    AND NOT idx_is_btree 
    AND index_bytes > 100000000 
ORDER BY grp, index_bytes DESC) 
SELECT reason, schemaname, tablename, indexname, 
    index_scan_pct, scans_per_write, index_size, table_size 
FROM index_groups; 

2) Check whether your tables and indexes are bloated:

 SELECT 
     current_database(), schemaname, tablename, /*reltuples::bigint, relpages::bigint, otta,*/ 
     ROUND((CASE WHEN otta=0 THEN 0.0 ELSE sml.relpages::FLOAT/otta END)::NUMERIC,1) AS tbloat, 
     CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::BIGINT END AS wastedbytes, 
     iname, /*ituples::bigint, ipages::bigint, iotta,*/ 
     ROUND((CASE WHEN iotta=0 OR ipages=0 THEN 0.0 ELSE ipages::FLOAT/iotta END)::NUMERIC,1) AS ibloat, 
     CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS wastedibytes 
    FROM (
     SELECT 
     schemaname, tablename, cc.reltuples, cc.relpages, bs, 
     CEIL((cc.reltuples*((datahdr+ma- 
      (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::FLOAT)) AS otta, 
     COALESCE(c2.relname,'?') AS iname, COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages, 
     COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::FLOAT)),0) AS iotta -- very rough approximation, assumes all cols 
     FROM (
     SELECT 
      ma,bs,schemaname,tablename, 
      (datawidth+(hdr+ma-(CASE WHEN hdr%ma=0 THEN ma ELSE hdr%ma END)))::NUMERIC AS datahdr, 
      (maxfracsum*(nullhdr+ma-(CASE WHEN nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2 
     FROM (
      SELECT 
      schemaname, tablename, hdr, ma, bs, 
      SUM((1-null_frac)*avg_width) AS datawidth, 
      MAX(null_frac) AS maxfracsum, 
      hdr+(
       SELECT 1+COUNT(*)/8 
       FROM pg_stats s2 
       WHERE null_frac<>0 AND s2.schemaname = s.schemaname AND s2.tablename = s.tablename 
      ) AS nullhdr 
      FROM pg_stats s, (
      SELECT 
       (SELECT current_setting('block_size')::NUMERIC) AS bs, 
       CASE WHEN SUBSTRING(v,12,3) IN ('8.0','8.1','8.2') THEN 27 ELSE 23 END AS hdr, 
       CASE WHEN v ~ 'mingw32' THEN 8 ELSE 4 END AS ma 
      FROM (SELECT version() AS v) AS foo 
     ) AS constants 
      GROUP BY 1,2,3,4,5 
     ) AS foo 
    ) AS rs 
     JOIN pg_class cc ON cc.relname = rs.tablename 
     JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname = rs.schemaname AND nn.nspname <> 'information_schema' 
     LEFT JOIN pg_index i ON indrelid = cc.oid 
     LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid 
    ) AS sml 
    ORDER BY wastedbytes DESC 

3) Are dead tuples being cleaned off the disk? Is it time for a VACUUM?

SELECT 
    relname AS TableName 
    ,n_live_tup AS LiveTuples 
    ,n_dead_tup AS DeadTuples 
FROM pg_stat_user_tables; 
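If DeadTuples is high relative to LiveTuples, a manual vacuum of the table from the question is a reasonable next step (a sketch; autovacuum settings may already cover this):

-- reclaim dead tuples and refresh planner statistics for the table
VACUUM (ANALYZE, VERBOSE) cnt_contacts;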

4) Think about selectivity. If you have 10 records in the database and 8 of them have id = 2, the index is not selective for that value, so PG will end up visiting all 8 rows anyway. A query for id != 2, by contrast, would use the index well. Try to build indexes with good selectivity.
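A quick way to check that selectivity for act_owner_id is to look at the planner statistics in pg_stats (a sketch; assumes ANALYZE has been run recently):

-- n_distinct and most_common_freqs show how selective act_owner_id is;
-- a value covering most of the table will not benefit from the b-tree index
SELECT n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE schemaname = 'public'
  AND tablename = 'cnt_contacts'
  AND attname = 'act_owner_id';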

5) Use the proper column types for your data. If a column fits into a smaller type, convert it.
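For example, a hypothetical sketch (the actual type of status_id is not given in the question; assume it is a plain integer whose values fit in smallint):

-- a narrower type packs more tuples per page, so fewer pages are read
ALTER TABLE cnt_contacts
    ALTER COLUMN status_id TYPE smallint;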

6) Review your database and its condition; check this page to get started. Look for unused data in your tables, clean up indexes that need it, and check the selectivity of your indexes. Try indexing the data differently (for example with a BRIN index), or try recreating the index.
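As a closing sketch (note: BRIN indexes exist only from PostgreSQL 9.5 onward, so on the 9.3 instance from the question only the REINDEX part applies):

-- rebuild the existing index to remove bloat
REINDEX INDEX cnt_contacts_idx_act_owner_id;

-- on 9.5+ a BRIN index could be tried for large, naturally ordered data
-- CREATE INDEX cnt_contacts_brin_act_owner_id
--     ON public.cnt_contacts USING brin (act_owner_id);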