tag: SQL Basics
Overview
Usage examples
1. Slow queries inside stored procedures
2. Slow DISTINCT statements (forget this one)
3. Getting the quarter from an interval
4. Getting a random row
5. Use exceptions with care
6. Fast comparison of the NEW and OLD variables in a trigger (somewhat dated)
7. Fast removal of the first n records
8. Concatenating column values
9. Using the pgstattuple and pgstatindex functions
10. Removing diacritics
11. Optimizing the IN predicate
12. Barcode validation (EAN-13)
13. Using a hash function to ensure text uniqueness
14. Getting a function's oid
15. Generic array sort
16. Array transformation in PL/Perl
17. UPC code validation
18. Showing every nth row
19. Turning an array into a table
20. Optimizing LIKE
21. Deferred constraints
22. Removing duplicate rows (deduplication)
23. Getting the first n rows per group
24. Casting to varchar
25. Creating an aggregate (summary) table
References
Overview
This article collects some common SQL tricks for PostgreSQL, most of them gathered from the web. As PostgreSQL keeps being upgraded and optimized, some of these tricks may no longer be useful, so when consulting them, judge carefully before applying anything; ideally try it yourself first to see whether it achieves the expected effect.
Usage examples
1. Slow queries inside stored procedures
Sometimes users report a slow query inside a stored procedure, while the same query executed directly runs fast. The reason is simple: stored procedures use prepared statements, and a prepared statement is optimized without knowing the actual parameter values. This behaviour protects against SQL injection, but it has a drawback: the planner is blind. The problem is not common, and the solution is simple: use a dynamic query instead (just do not forget to guard carefully against SQL injection). This kind of problem can be detected by EXPLAINing the prepared statement.
t2=# PREPARE pp(integer) AS SELECT count(*) FROM foo WHERE a BETWEEN $1 + 100 AND $1 + 200;
t2=# EXPLAIN ANALYZE execute pp(100);
QUERY PLAN
------------------------------------------------------------------------------------------------------------
Aggregate (cost=438.60..438.61 rows=1 width=0) (actual time=16.149..16.150 rows=1 loops=1)
-> Bitmap Heap Scan on foo (cost=13.39..437.35 rows=500 width=0) (actual time=1.391..5.458 rows=1033 loops=1)
Recheck Cond: ((a >= ($1 + 100)) AND (a <= ($1 + 200)))
-> Bitmap Index Scan on fx (cost=0.00..13.26 rows=500 width=0) (actual time=1.131..1.131 rows=1033 loops=1)
Index Cond: ((a >= ($1 + 100)) AND (a <= ($1 + 200)))
Total runtime: 16.340 ms
(6 rows)
t2=# EXPLAIN ANALYZE SELECT count(*) FROM foo WHERE a BETWEEN 100 + 100 AND 100 + 200;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------
Aggregate (cost=432.21..432.22 rows=1 width=0) (actual time=15.930..15.932 rows=1 loops=1)
-> Bitmap Heap Scan on foo (cost=22.21..429.78 rows=971 width=0) (actual time=1.179..13.338 rows=1033 loops=1)
Recheck Cond: ((a >= 200) AND (a <= 300))
-> Bitmap Index Scan on fx (cost=0.00..21.97 rows=971 width=0) (actual time=0.917..0.917 rows=1033 loops=1)
Index Cond: ((a >= 200) AND (a <= 300))
Total runtime: 16.058 ms
(6 rows)
The plans are nearly identical, so the prepared statement is fine here. PostgreSQL 9.2 reworked the prepared-statement implementation (the planner can now choose between a generic and a custom plan), so the performance problem described above should be history.
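As a minimal sketch of the dynamic-query workaround mentioned above (the function name is made up for illustration; foo(a integer) is the table from the example):
CREATE OR REPLACE FUNCTION count_foo_range(p integer)
RETURNS bigint AS $$
DECLARE
  result bigint;
BEGIN
  -- EXECUTE plans the statement at run time, so the actual parameter value
  -- is visible to the planner; passing it via USING avoids SQL injection.
  EXECUTE 'SELECT count(*) FROM foo WHERE a BETWEEN $1 + 100 AND $1 + 200'
     INTO result
    USING p;
  RETURN result;
END;
$$ LANGUAGE plpgsql;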
2. Slow DISTINCT statements (forget this one)
The DISTINCT implementation of that era required a sort, which could be slow. Use a GROUP BY clause instead: it can use a hash aggregate, which is much faster than DISTINCT.
postgres=# select count(*) from (select distinct i from g) a;
count
-------
10001
(1 row)
Time: 1563,109 ms
postgres=# select count(*) from (select i from g group by i) a;
count
-------
10001
(1 row)
Time: 594,481 ms
This topic applies to PostgreSQL 8.2 and earlier. PostgreSQL 8.3 and later can use a hash aggregate for DISTINCT as well.
Let's verify this on PG 14 :-)
mydb=# create table t1(id int, col2 varchar(32));
CREATE TABLE
mydb=# insert into t1 select random()*10001::int, 'test'||n from generate_series(1, 500000) as n;
INSERT 0 500000
mydb=# \timing on
Timing is on.
mydb=# select count(*) from (select distinct id from t1) a;
count
-------
10002
(1 row)
Time: 84.615 ms
mydb=# select count(*) from (select id from t1 group by id) a;
count
-------
10002
(1 row)
Time: 107.527 ms
So it seems DISTINCT is no longer slow; here it is even faster than GROUP BY.
3. Getting the quarter from an interval
Getting the quarter out of an interval is a bit like alchemy. Internally the interval type stores days, months and years. Input values are not normalized; they keep the units they were entered with (i.e. 200 days is stored as 0 years, 0 months, 200 days). Some functions do not handle such values well (for example EXTRACT(QUARTER ...) returns the first quarter). We first have to normalize the value with the justify_interval function (so an interval of 200 days becomes 0 years, 6 months and 20 days). From this value we get the correct quarter:
postgres=# SELECT interval '200 days', EXTRACT(QUARTER FROM interval '200 days');
interval | date_part
----------+-----------
200 days | 1
(1 row)
postgres=# SELECT justify_interval(interval '200 days'),
EXTRACT(QUARTER FROM justify_interval('200 days'));
justify_interval | date_part
------------------+-----------
6 mons 20 days | 3
(1 row)
4. Getting a random row
Original blog: http://www.depesz.com/index.php/2007/09/16/my-thoughts-on-getting-random-row/ When doing a database audit I came across the typical but slow solution (from the FAQ): ORDER BY random() LIMIT 1. This approach never uses an index; every such query scans and sorts the whole table. The next possible approach, WHERE id >= (random()*C+1)::int LIMIT 1, is much faster, but when the sequence has gaps it favours the first row after a gap. I have had good results with a method based on a set of random candidate ids. Its drawback is the risk of returning zero rows (in which case the statement has to be run again):
SELECT id, ...
FROM data
WHERE id = ANY(ARRAY(
SELECT (random()*max_id)::int
FROM generate_series(1,20)))
LIMIT 1;
-- max_id is constant equal max(id) from table + 20% reserve
-- id is primary key of table data
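A possible retry wrapper around this trick might look like the sketch below (the function name and the max(id) handling are illustrative; data(id) is the table from the comment above):
CREATE OR REPLACE FUNCTION get_random_row()
RETURNS data AS $$
DECLARE
  r data;
  max_id integer;
BEGIN
  -- max(id) plus a 20% reserve, as in the comment above
  SELECT (max(id) * 1.2)::int INTO max_id FROM data;
  LOOP
    SELECT * INTO r
      FROM data
     WHERE id = ANY (ARRAY(
             SELECT (random() * max_id)::int
               FROM generate_series(1, 20)))
     LIMIT 1;
    EXIT WHEN FOUND;   -- zero rows: try another batch of random candidates
  END LOOP;
  RETURN r;
END;
$$ LANGUAGE plpgsql;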
5. Use exceptions with care
Catching an exception has some overhead, so do not use it too often. In most cases the overhead is negligible, but inside a loop that can change. The next two functions add a new item to a table of unique records; they return false if the item already exists and true otherwise.
CREATE OR REPLACE FUNCTION addnew1(integer)
RETURNS boolean AS $$
BEGIN
-- test(a) .. primary key
INSERT INTO test(a) VALUES($1);
RETURN true;
EXCEPTION WHEN OTHERS THEN
RETURN false;
END; $$ LANGUAGE plpgsql;
CREATE OR REPLACE FUNCTION addnew2(integer)
RETURNS boolean AS $$
BEGIN
IF NOT EXISTS(SELECT a FROM test WHERE a = $1) THEN
INSERT INTO test(a) VALUES($1);
RETURN true;
END IF;
RETURN false;
END; $$ LANGUAGE plpgsql;
Function addnew1 is about twice as slow as addnew2, but it is the more robust one (the check-then-insert in addnew2 can still fail under concurrent inserts). Every protected block in plpgsql creates a new subtransaction; in this case the time spent on creating the subtransaction plus the INSERT is greater than the SELECT plus the INSERT.
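If you want to reproduce the comparison yourself, a quick and purely illustrative harness could look like this, assuming the table test(a integer PRIMARY KEY) from the comment above:
TRUNCATE test;
\timing on
-- every call opens a subtransaction because of the EXCEPTION block
SELECT count(*) FROM (SELECT addnew1(i) FROM generate_series(1, 10000) g(i)) s;
TRUNCATE test;
-- plain check + insert, no subtransaction
SELECT count(*) FROM (SELECT addnew2(i) FROM generate_series(1, 10000) g(i)) s;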
6. Fast comparison of the NEW and OLD variables in a trigger (somewhat dated)
A trigger can be made cheap by executing its body only when some column has actually changed. With the row operator IS DISTINCT FROM (available since 8.2) this test is both simple and fast.
IF ROW(NEW.*) IS DISTINCT FROM ROW(OLD.*) THEN
-- continue only if something has changed
IF NEW.a1 IS DISTINCT FROM OLD.a1
OR NEW.a2 IS DISTINCT FROM OLD.a2 THEN
-- continue only if important columns are changed
...
END IF;
END IF;
RETURN NEW;
IF ROW(NEW.a1, NEW.a2) IS DISTINCT FROM ROW(OLD.a1, OLD.a2) THEN
-- important columns are changed
...
END IF;
RETURN NEW;
Since 8.4 the simpler comparison below can be used:
IF NEW IS DISTINCT FROM OLD THEN
-- continue only if something has changed
...
END IF;
RETURN NEW;
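For completeness, a minimal skeleton showing where such a test sits inside a real trigger (table t and the audit action are made-up placeholders):
CREATE OR REPLACE FUNCTION t_audit()
RETURNS trigger AS $$
BEGIN
  IF NEW IS DISTINCT FROM OLD THEN
    -- something really changed: do the (possibly expensive) work only here
    RAISE NOTICE 'row changed: %', NEW;   -- placeholder for the real body
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_audit_trg
  BEFORE UPDATE ON t
  FOR EACH ROW EXECUTE PROCEDURE t_audit();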
7. Fast removal of the first n records
PostgreSQL does not support LIMIT in UPDATE or DELETE statements, yet sometimes we need it (to simulate a queue, ...). The classic solution with IN and a subquery is not very efficient:
postgres=# explain analyze delete from del where ctid in (select ctid from del limit 10 offset 0);
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
Merge IN Join (cost=804.79..854.94 rows=10 width=6) (actual time=63.392..63.464 rows=10 loops=1)
Merge Cond: (public.del.ctid = "IN_subquery".ctid)
-> Sort (cost=804.39..829.39 rows=10000 width=6) (actual time=63.205..63.217 rows=11 loops=1)
Sort Key: public.del.ctid
Sort Method: quicksort Memory: 646kB
-> Seq Scan on del (cost=0.00..140.00 rows=10000 width=6) (actual time=0.045..27.366 rows=9970 loops=1)
-> Sort (cost=0.41..0.43 rows=10 width=6) (actual time=0.172..0.187 rows=10 loops=1)
Sort Key: "IN_subquery".ctid
Sort Method: quicksort Memory: 17kB
-> Limit (cost=0.00..0.14 rows=10 width=6) (actual time=0.021..0.127 rows=10 loops=1)
-> Seq Scan on del (cost=0.00..140.00 rows=10000 width=6) (actual time=0.016..0.030 rows=10 loops=1)
Total runtime: 63.661 ms
(12 rows)
Tom Lane's suggestion is to use an array: (Tom Lane again; he really is everywhere)
postgres=# explain analyze delete from del where ctid = any (array(select ctid from del limit 10));
QUERY PLAN
------------------------------------------------------------------------------------------------------------------
Tid Scan on del (cost=0.14..40.37 rows=10 width=6) (actual time=0.200..0.237 rows=10 loops=1)
TID Cond: (ctid = ANY ($0))
InitPlan
-> Limit (cost=0.00..0.14 rows=10 width=6) (actual time=0.046..0.139 rows=10 loops=1)
-> Seq Scan on del (cost=0.00..140.00 rows=10000 width=6) (actual time=0.040..0.065 rows=10 loops=1)
Total runtime: 0.376 ms
The point of this trick is to avoid the JOIN, which (in this case) is not as effective as a sequential scan with a filter. The trick is useful in other situations too. Generally, for small N (N < 100) the predicate = ANY is better than the predicate IN (on PostgreSQL 8.2 and 8.3); the real effectiveness depends on the indexes.
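The trick can also be wrapped in a small helper function; the sketch below is only illustrative (the table name del is hard-coded, as in the example above):
CREATE OR REPLACE FUNCTION delete_first_n(n integer)
RETURNS integer AS $$
DECLARE
  deleted integer;
BEGIN
  DELETE FROM del
   WHERE ctid = ANY (ARRAY(SELECT ctid FROM del LIMIT n));
  GET DIAGNOSTICS deleted = ROW_COUNT;   -- number of rows actually removed
  RETURN deleted;
END;
$$ LANGUAGE plpgsql;

-- SELECT delete_first_n(10);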
8. Concatenating column values
How to get, for each row of table a, a concatenated list of the matching values from table b:
SELECT a.id, a.name,
array_to_string(ARRAY(
SELECT b.name
FROM b
WHERE b.id = a.id
ORDER BY b.name ASC
), ',') AS b_names
FROM a
ORDER BY a.id ASC;
[table a]
id | name
----+------
1 | one
2 | two
3 | three
4 | four
[table b]
id | name
----+------
1 | pizza
1 | hot dog
2 | gorilla
2 | monkey
3 | apple
4 | cheese
4 | milk
4 | eggs
--result
id | name | b_names
----+-------+---------
1 | one | pizza,hot dog
2 | two | gorilla,monkey
3 | three | apple
4 | four | cheese,milk,eggs
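On PostgreSQL 9.0 and later the same result can be written with the built-in string_agg aggregate; a sketch against the same tables a and b:
SELECT a.id, a.name,
       (SELECT string_agg(b.name, ',' ORDER BY b.name)
          FROM b
         WHERE b.id = a.id) AS b_names
  FROM a
 ORDER BY a.id;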
9. Using the pgstattuple and pgstatindex functions
Since PostgreSQL 8.1 there is a useful contrib package, pgstattuple. This contrib module contains diagnostic functions for measuring the fraction of dead tuples in tables and the fragmentation of indexes. The following two set-returning functions make them more practical to use:
CREATE OR REPLACE FUNCTION print_table_dead_tp(OUT table_name varchar, OUT tuple_count int8, OUT table_len varchar, OUT free_space varchar,
OUT dead_tuple_percent numeric(5,2), OUT dead_tuple_count integer)
RETURNS SETOF RECORD AS $$
DECLARE r record; s record;
BEGIN
FOR r IN SELECT c.oid, n.nspname || '.' || c.relname as "tablename"
FROM pg_catalog.pg_class c
LEFT JOIN
pg_catalog.pg_namespace n
ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
AND n.nspname NOT IN ('pg_catalog', 'pg_toast')
AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 2
LOOP
s := pgstattuple(r.oid);
table_name := r.tablename;
tuple_count := s.tuple_count; dead_tuple_percent := s.dead_tuple_percent;
dead_tuple_count := s.dead_tuple_count;
table_len := pg_size_pretty(s.table_len);
free_space := pg_size_pretty(s.free_space);
RETURN NEXT;
END LOOP;
RETURN;
END; $$ LANGUAGE plpgsql;
CREATE OR REPLACE FUNCTION print_indexes_info(OUT index_name varchar, OUT indexsize varchar,
OUT leaf_pages integer, OUT deleted_pages integer,
OUT leaf_fragmentation double precision, OUT avg_leaf_density double precision)
RETURNS SETOF RECORD AS $$
DECLARE r record; o pgstatindex_type;
BEGIN
FOR r IN SELECT schemaname || '.' || indexname AS indexname FROM pg_indexes LOOP
BEGIN
o := pgstatindex(r.indexname);
index_name := r.indexname;
indexsize := pg_size_pretty(o.index_size); deleted_pages := o.deleted_pages;
leaf_pages := o.leaf_pages; avg_leaf_density := o.avg_leaf_density;
leaf_fragmentation := o.leaf_fragmentation;
RETURN NEXT;
EXCEPTION WHEN OTHERS THEN
indexsize := NULL; deleted_pages := NULL; leaf_fragmentation := NULL;
index_name := r.indexname;
RETURN NEXT;
END;
END LOOP;
RETURN;
END; $$ LANGUAGE plpgsql;
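Usage is then straightforward (the pgstattuple module has to be installed first; CREATE EXTENSION exists since 9.1, older releases install the contrib SQL script):
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT * FROM print_table_dead_tp() ORDER BY dead_tuple_percent DESC;
SELECT * FROM print_indexes_info() ORDER BY leaf_fragmentation DESC NULLS LAST;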
10. Removing diacritics
We can remove diacritics simply by calling the to_ascii function. There is one small problem: this function does not support the UTF8 encoding, so we have to use the convert function first.
postgres=# select to_ascii(convert('Příliš žlutý kůň' using utf8_to_iso_8859_2),'latin2');
to_ascii
------------------
Prilis zluty kun
(1 row)
Note that on 7.4 the conversion name is "utf_8_to_iso_8859_2". On 8.3 up to 9.0 we have to use a small workaround:
CREATE FUNCTION to_ascii(bytea, name)
RETURNS text AS 'to_ascii_encname' LANGUAGE internal;
SELECT to_ascii(convert_to('Příliš žlutý kůň', 'latin2'),'latin2');
Comment: this feels like digging up ancient history.
11. Optimizing the IN predicate
http://archives.postgresql.org/pgsql-performance/
When the list of values is longer than about 80 items, there is a possible optimization of the IN predicate that I found on the pgsql-performance list. For longer lists it is better to build a constant subquery with VALUES:
SELECT *
FROM tab
WHERE x IN (1,2,3,..n); -- n > 70
-- faster case
SELECT *
FROM tab
WHERE x IN (VALUES(10),(20));
VALUES pays off only for a large number of items, so do not use it for small value sets.
12. Barcode validation (EAN-13)
CREATE FUNCTION barcode_sum(text)
RETURNS bigint AS
$$
SELECT ABS(SUM(CAST(SUBSTRING($1 FROM Weights.seq FOR 1) AS INTEGER) * Weights.wgt))
FROM (VALUES (CAST(1 AS INTEGER), CAST(-1 AS INTEGER)),
(2, +1), (3, -1), (4, +1), (5, -1),
(6, +1), (7, -1), (8, +1), (9, -1), (10, +1),
(11,-1), (12, +1)) AS weights(seq, wgt)
$$ LANGUAGE sql;
One way to use it in a table definition:
CREATE TABLE products(
barcode char(13) NOT NULL
CONSTRAINT valid_checkdigit
check (MOD(barcode_sum(barcode),10) = CAST(substring(barcode from 13 for 1) AS integer))
CONSTRAINT all_numeric_checkdigit CHECK(barcode NOT SIMILAR TO '%[^0-9]%')
);
INSERT INTO products VALUES('2837232811227');
This trick relies on the behaviour of the char type: text shorter than 13 characters is padded with spaces, so the NOT SIMILAR TO test also guarantees the correct number of digits.
13. Using a hash function to ensure text uniqueness
For long texts a classic unique index is not a good solution; it is natural to use a hash function instead. The output of md5 is a 32-character hexadecimal string, so we can halve its size by decoding it into the bytea type. Note: a hash cannot guarantee 100% uniqueness.
root=# CREATE TABLE test(a text);
CREATE TABLE
root=# CREATE UNIQUE INDEX uidx ON test((decode(md5(a),'hex')));
CREATE INDEX
root=# -- normalized variant (ignores case and spaces); drop the previous uidx before creating it
root=# DROP INDEX uidx;
DROP INDEX
root=# CREATE UNIQUE INDEX uidx ON test((decode(md5(lower(replace(a,' ',''))),'hex')));
CREATE INDEX
root=# INSERT INTO test VALUES('příliš žluťoučký kůň');
INSERT 0 1
root=# INSERT INTO test VALUES('příliš žluťoučký kůň ');
ERROR: duplicate key violates unique constraint "uidx"
root=# INSERT INTO test VALUES('Příliš žluťoučký kůň');
ERROR: duplicate key violates unique constraint "uidx"
-- less risk, using two hash functions
-- CREATE UNIQUE INDEX uidx ON TEST((decode(md5(a),'hex')),(hashtext(a)));
14. Getting a function's oid
CREATE OR REPLACE FUNCTION a(integer, varchar) RETURNS void AS
$$
BEGIN
END;
$$ LANGUAGE plpgsql;
root=# SELECT 'a'::regproc::int;
ERROR: more than one function named "a"
root=# SELECT 'a(integer, varchar)'::regprocedure::int;
int4
-------
57507
(1 row)
root=# SELECT 57507::regprocedure;
regprocedure
------------------------------
a(integer,character varying)
(1 row)
And a table's oid:
SELECT 'oo'::regclass::int;
15. Generic array sort
-- by David Fetter.
CREATE OR REPLACE FUNCTION array_sort (ANYARRAY)
RETURNS ANYARRAY LANGUAGE SQL
AS $$
SELECT ARRAY(
SELECT $1[s.i] AS "foo"
FROM
generate_series(array_lower($1,1), array_upper($1,1)) AS s(i)
ORDER BY foo
);
$$;
-- a slightly faster version
CREATE OR REPLACE FUNCTION array_sort (ANYARRAY)
RETURNS ANYARRAY LANGUAGE SQL
AS $$
SELECT ARRAY(SELECT unnest($1) ORDER BY 1)
$$;
-- or, by Craig Ringer
CREATE OR REPLACE FUNCTION array_sort (ANYARRAY)
RETURNS ANYARRAY LANGUAGE SQL
AS $$
SELECT array_agg(x ORDER BY x) FROM unnest($1) x;
$$;
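A quick check of any of the variants above:
SELECT array_sort(ARRAY[3,1,2]);         -- {1,2,3}
SELECT array_sort(ARRAY['b','c','a']);   -- {a,b,c}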
If you need fast sorting of integer arrays, have a look at the intarray contrib module.
16. Array transformation in PL/Perl
PL/Perl doesn't support OUT arrays. We can use the undocumented function _plperl_to_pg_array:
CREATE OR REPLACE FUNCTION foo(OUT a integer[], OUT b integer[]) AS $$
return { a=> _plperl_to_pg_array([1,2,3]),
b=> _plperl_to_pg_array([4,5,6]) };
$$ LANGUAGE plperl;
17. UPC code validation
This is a sample validation of a code with a check digit, written in Perl (PL/Perl). I used this function to compare it with PL/pgSQL and C implementations: Perl is about 20% faster than PL/pgSQL, and C is about 10 times faster than Perl.
CREATE OR REPLACE FUNCTION ean_perl (
TEXT
) RETURNS boolean AS $_$
my $ean = length $_[0] == 12 ? "0$_[0]" : $_[0];
# Make sure we really have an EAN.
return 'false' unless $ean =~ /^\d{13}$/;
my @nums = split '', $ean;
return 10 - (
# Sum even numerals.
( ( $nums[1] + $nums[3] + $nums[5] + $nums[7] + $nums[9]
+ $nums[11]
) * 3 # Multiply total by 3.
# Add odd numerals except for checksum (12).
) + $nums[0] + $nums[2] + $nums[4] + $nums[6] + $nums[8] + $nums[10]
# Compare to the checksum.
) % 10 == $nums[12] ? 'true' : 'false';
$_$ LANGUAGE plperl immutable;
-- rewriting in PG
SELECT (
SELECT mod(
sum(
CAST(substring('2837232811227' from s.seq for 1) AS integer) *
CASE mod(s.seq,2) WHEN 0 THEN 1 ELSE -1 END),
10) FROM generate_series(1,12) as s(seq)) =
CAST(substring('2837232811227' FROM 13 FOR 1) AS integer);
18. Showing every nth row
CREATE TEMP SEQUENCE number;
SELECT * FROM ( SELECT *, nextval('number') AS number FROM datatable ) foo
WHERE foo.number % 5 = 0;
On 8.4 and later:
SELECT * FROM ( SELECT *, row_number() over () AS number FROM datatable ) foo
WHERE foo.number % 5 = 0;
19. Turning an array into a table
SELECT (ARRAY[1,3,4,6,7])[idx.n] FROM generate_series(1,5) idx(n);
CREATE OR REPLACE FUNCTION unpack(anyarray)
RETURNS SETOF anyelement AS $$
SELECT $1[i]
FROM generate_series(array_lower($1,1),
array_upper($1,1)) g(i);
$$ LANGUAGE sql STRICT IMMUTABLE;
postgres=# select unpack(array['a','b','c']);
unpack
--------
a
b
c
(3 rows)
On 8.4 and later you can use unnest():
test=*# select unnest(array['a','b','c']);
unnest
--------
a
b
c
(3 rows)
In addition, we can extend unnest() so that it returns one row with a default value when the array is empty or NULL:
create or replace function unnest(anyarray, anyelement)
returns setof anyelement as $$
select unnest(case when array_upper($1, 1) > 0 then $1 else array[$2] end);
$$ language sql immutable;
gp=> create temp table foo (x text[], y int) distributed by (y);
CREATE TABLE
gp=> insert into foo values
(array['a','b','c'], 1),
(null, 2),
('{}'::text[],3);
INSERT 0 3
gp=> select *,array_upper(x,1) as size from foo;
x | y | size
---------+---+------
{} | 3 |
| 2 |
{a,b,c} | 1 | 3
(3 rows)
gp=> select unnest(x),y from foo;
unnest | y
--------+---
a | 1
b | 1
c | 1
(3 rows)
gp=> select unnest(x,'empty'),y from foo;
unnest | y
--------+---
empty | 3
empty | 2
a | 1
b | 1
c | 1
(5 rows)
20. Optimizing LIKE
PostgreSQL uses an index for LIKE only when:
the pattern doesn't start with the % or _ symbols,
the database cluster is initialized with the C locale.
The last constraint can be worked around with a special operator-class index:
CREATE INDEX like_index ON people(surname varchar_pattern_ops);
When we work with a pattern like '%some' (e.g. for searching internet domains), we can use a trick:
create a functional index on the reversed (mirrored) values,
and search with a reversed pattern.
Older PostgreSQL releases have no built-in reverse (mirror) function. We can use the rvrs function from the Orafunc package, or PL/Perl:
CREATE OR REPLACE FUNCTION reverse(varchar) RETURNS varchar AS $$
$reversed = reverse $_[0];
return $reversed;
$$ LANGUAGE plperlu IMMUTABLE;
CREATE INDEX rev_email ON users( (reverse(email) ) varchar_pattern_ops );
SELECT * FROM _users WHERE reverse(email) LIKE reverse ('%.cz');
Without PL/Perl:
CREATE OR REPLACE FUNCTION reverse(TEXT) RETURNS TEXT AS $$
SELECT
array_to_string(
ARRAY
( SELECT substring($1, s.i,1) FROM generate_series(length($1), 1, -1) AS s(i) ),
'');
$$ LANGUAGE SQL IMMUTABLE;
PostgreSQL 9.1 and later provide a built-in reverse function.
21. Deferred constraints
PostgreSQL checks all constraints immediately. The SQL standard knows deferred constraints, which allow the check to be postponed until near commit time; PostgreSQL does not support this fully, as only referential-integrity constraints can be deferred. However, PostgreSQL supports (historically undocumented) deferrable constraint triggers, so we can use those instead. The syntax is similar to a trigger definition:
CREATE CONSTRAINT TRIGGER sc
AFTER INSERT
ON fieldtrip_students
INITIALLY DEFERRED
FOR EACH ROW EXECUTE PROCEDURE field_trip();
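The body of field_trip() is not shown in the original tip; a hypothetical deferred check could look like the sketch below (the trip_id column and the limit of 30 students are invented for illustration):
CREATE OR REPLACE FUNCTION field_trip()
RETURNS trigger AS $$
BEGIN
  -- runs near commit time, because the trigger above is INITIALLY DEFERRED
  IF (SELECT count(*) FROM fieldtrip_students
       WHERE trip_id = NEW.trip_id) > 30 THEN
    RAISE EXCEPTION 'too many students on trip %', NEW.trip_id;
  END IF;
  RETURN NULL;   -- the return value of an AFTER trigger is ignored
END;
$$ LANGUAGE plpgsql;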
22. Removing duplicate rows (deduplication)
Long ago, when tables had an oid column, you could use the following (forget about it now):
DELETE FROM people WHERE
NOT oid IN (SELECT MIN(oid) FROM people GROUP BY name, surname);
But new tables are normally created without OIDs, so we cannot use them. Every PostgreSQL row does have a unique identifier, though: ctid. There is no aggregate function defined for the ctid type, so we have to use a correlated subquery. I found this trick in a talk by Greg Mullane.
DELETE FROM lidi WHERE
NOT (ctid, tableoid) = (SELECT ctid, tableoid FROM lidi l WHERE
l.prijmeni=lidi.prijmeni AND l.jmeno=lidi.jmeno LIMIT 1);
23. Getting the first n rows per group
SELECT * FROM people WHERE id IN (
SELECT id FROM people s
WHERE people.category = s.category
ORDER BY age LIMIT 2)
ORDER BY category, age;
Or:
SELECT s1.*
FROM people s1
LEFT JOIN
people s2
ON s1.category = s2.category AND s1.age < s2.age
GROUP BY s1.id, s1.category
HAVING COUNT(s2.id) <= 1
ORDER BY s1.category, COUNT(s2.id);
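On 8.4 and later the same result can also be written with a window function; a sketch over the same people table:
SELECT *
  FROM (SELECT p.*,
               row_number() OVER (PARTITION BY category ORDER BY age) AS rn
          FROM people p) s
 WHERE rn <= 2
 ORDER BY category, age;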
24. Casting to varchar
Every data type in PostgreSQL has an input and an output function. We can use these functions to support a cast to varchar where one is missing. (On 8.3 and later you can forget about this.)
testdb011=# SELECT '(1,1)'::point::varchar;
ERROR: cannot cast type point to character varying
testdb011=# SELECT textin(point_out('(1,1)'::point))::varchar;
textin
--------
(1,1)
(1 row)
25. Creating an aggregate (summary) table
SELECT count(*) FROM sometable WHERE field='x'
This causes a scan of the table.
CREATE TABLE aggregate AS
(SELECT result,count(result) AS count FROM original_table
GROUP BY result ORDER BY count DESC);
Moreover, my data contains many one-off values that I do not want in my aggregate table (I assume that if a value is not in the aggregate table, its count is too small to be useful). Here is how I omit those values:
CREATE TABLE aggregate AS
(SELECT * FROM
(SELECT result,count(result) AS count FROM original_table
GROUP BY result ORDER BY count DESC)
AS countquery WHERE count > 3);
This approach is fast enough. Because I have an index on the "result" field, it runs over 3 million records in about a minute; generating the counts of all result values takes roughly the same time as counting the total number of records.
Now I can run a simple (and very fast) query to get a count:
SELECT * FROM aggregate where result='xxxxx'
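Two small, purely illustrative additions: an index on the summary table, and a lookup that treats values filtered out above (count <= 3) as zero:
CREATE INDEX aggregate_result_idx ON aggregate(result);
SELECT COALESCE(
         (SELECT count FROM aggregate WHERE result = 'xxxxx'),
         0) AS count;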
References
[1] https://postgres.cz/wiki/PostgreSQL_SQL_Tricks_I