PostgreSQL Cheatsheet
Table of Contents
- Basics
- Output Formatting
- User Management
- Create User
- Create Role
- Delete Role
- Change User Password
- Allow User To Login
- List Roles / List Users / Check Privileges
- Rename User / Role
- Grant User Superuser Privileges
- Remove User Superuser Privileges
- Grant User Full Access To Database
- Grant User Readonly Access To Database
- Grant Ability To Create Databases
- Grant Access To Public Schema
- Add User To Role
- Remove User From Role
- Change Database Owner
- Change Table Owner
- Mass Change Ownership
- Delete User / Role
- General Administration
- Extensions
- Databases
- Create Table
- Editing Tables
- Indexes / Constraints / Checks
- Enums / Types
- Selecting Data
- Inserting Data
- Updating Data
- Deleting Data
- Schemas
- Working With JSON
- Working With Time
- Working With Network Types
- Comments
- PostGIS
- Misc
- References
Basics
Installation
If you need to connect to a remote server, Ubuntu 16.04 users can install the PostgreSQL client with:
sudo apt install postgresql-client-common postgresql-client-9.5
Login As Master/Root User
The postgres user is the master user (and comes with its own database by default).
sudo -u postgres psql
Login As Normal User
psql \
--user $USER \
--password \
-d $DATABASE \
--host $HOST
- If the user has their own database and you wish to connect to that, you can skip -d $DATABASE as you would automatically connect to it.
- If you wish to connect to localhost, you can skip --host.
- If the user doesn't need a password, you can skip --password.
- If your current BASH user has the same name as the psql user you wish to connect as, you can skip --user.
List / Show Databases
\list
Switch Database
Technically this command does not "switch" database; you are just closing one connection and opening another.
\connect DBNAME
Show / List Tables
If you are using the psql cli tool:
\dt
This shows all the tables in the schemas in your search path (usually set to just public). The default schema is called public, but there may be others. To show all tables across all schemas, execute the following:
\dt *.*
If you are connecting using something like PHP, then you need to do:
SELECT tablename FROM pg_catalog.pg_tables
Describe Table
If you want to describe a table in detail, use:
\d+ tablename
Example output:
Table "public.users"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
----------+------------------------+-----------+----------+---------+----------+--------------+-------------
name | character varying(255) | | not null | | extended | |
username | character varying(255) | | not null | | extended | |
Indexes:
"users_username_key" UNIQUE CONSTRAINT, btree (username)
... or you can use the following to get just the basic information:
\d tablename
Table "public.users"
Column | Type | Collation | Nullable | Default
----------+------------------------+-----------+----------+---------
name | character varying(255) | | not null |
username | character varying(255) | | not null |
Indexes:
"users_username_key" UNIQUE CONSTRAINT, btree (username)
Unfortunately, there is no SHOW CREATE TABLE tablename equivalent. However, if you are using PHP to try and retrieve information about the tables, then one can use the following to get details about the columns:
SELECT * FROM information_schema.columns
WHERE table_name = 'my_table_name';
.... and the following to get information about the foreign keys:
SELECT
tc.table_schema,
tc.constraint_name,
tc.table_name,
kcu.column_name,
ccu.table_schema AS foreign_table_schema,
ccu.table_name AS foreign_table_name,
ccu.column_name AS foreign_column_name
FROM
information_schema.table_constraints AS tc
JOIN information_schema.key_column_usage AS kcu
ON tc.constraint_name = kcu.constraint_name
AND tc.table_schema = kcu.table_schema
JOIN information_schema.constraint_column_usage AS ccu
ON ccu.constraint_name = tc.constraint_name
AND ccu.table_schema = tc.table_schema
WHERE tc.constraint_type = 'FOREIGN KEY'
AND tc.table_name='my_table_name';
... and the following to get the table's primary key:
SELECT
tc.table_schema,
tc.constraint_name,
tc.table_name,
kcu.column_name,
tc.constraint_type
FROM
information_schema.table_constraints AS tc
JOIN information_schema.key_column_usage AS kcu
ON tc.constraint_name = kcu.constraint_name
AND tc.table_schema = kcu.table_schema
WHERE tc.constraint_type = 'PRIMARY KEY'
AND tc.table_name='my_table_name';
Show / List Sequences
\ds
Alternatively, if you need to use an SQL statement, because you need to run this from something like a PHP script, you would run:
SELECT sequence_schema, sequence_name
FROM information_schema.sequences
ORDER BY sequence_name
Exit / Logout
\q
Alternatively, press Ctrl + D.
Output Formatting
Flip Rows And Columns
If you are trying to fetch just one row and the table has a lot of columns, you may find it easier if you flip the output so that each column becomes a newline. To do this use:
\x
Scrolling Results - Turn Off Paging
If there are a lot of results, PostgreSQL will page the output by default. To scroll down through the results, one presses the spacebar. However, if, like me, you find this annoying, you can turn off paging with:
\pset pager off
No Wrap Output
If you have lots of columns in your tables and you don't want word wrapping in your output, you can run the following command inside PostgreSQL to use less as your pager, and tell it not to wrap lines.
\setenv PAGER 'less -S'
If you are using a PostgreSQL docker container it won't have less installed, so you would need to install less, and then set the environment variable like so:
PAGER="/usr/bin/less -S"
Output Results To File
You can use \o as an output buffer that will output the results to a file or pipe. E.g.
db=>\o out.txt
db=>\dt
db=>\o
Specifying \o again with no argument turns off the file output.
You could put as many queries in there as you like.
Clear Screen
If you want to clear the screen of previous results so that the next query and result will appear from the top of the screen, then do the following:
\! clear
User Management
Create User
CREATE USER programster WITH PASSWORD 'thisismypassword';
Note that this does not automatically create a corresponding programster database, and the user will have the ability to see and create tables in the other databases.
Create Role
CREATE ROLE $ROLE_NAME with password '$USER_PASSWORD' login;
Drop the login keyword if you don't want the role to be able to log in, or be explicit with the use of nologin.
Delete Role
DROP ROLE $ROLE_NAME;
Change User Password
\password [user]
You may need to be connected as the postgres superuser first, e.g.:
sudo -u postgres psql postgres
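Alternatively, if you need a plain SQL statement rather than the interactive prompt (e.g. when running from a script), ALTER ROLE can set the password directly. This is just a sketch with a placeholder name, and note that the password will appear in plain text in your history/logs:
ALTER ROLE myusername WITH PASSWORD 'MyNewPassword';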
Allow User To Login
If you forgot to add the login
part when creating a user, you can retrospectively change a
role/user so they can log in.
ALTER ROLE "username" WITH LOGIN;
List Users / Show Users
\du
Example output:
List of roles
Role name | Attributes | Member of
------------+------------------------------------------------------------+-----------
myusername | | {}
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
Rename User / Role
ALTER ROLE "old_role_name" RENAME TO "new_role_name";
Grant User Superuser Privileges
ALTER USER $USERNAME WITH SUPERUSER;
Remove User Superuser Privileges
ALTER USER $USERNAME WITH NOSUPERUSER;
Grant User Full Access To Database
GRANT ALL ON database $DATABASE_NAME TO $ROLE_OR_USER;
Grant User Read-Only Access to Database
GRANT pg_read_all_data ON database $DATABASE_NAME TO $ROLE_OR_USER;
Note: pg_read_all_data came in with PostgreSQL 14, so if you are using an older version of PostgreSQL, this will not work.
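If you are stuck on a version older than 14, a rough equivalent (a sketch, assuming the tables live in the public schema and using placeholder names) is to grant read access manually while connected to the database in question:
GRANT CONNECT ON DATABASE my_database_name TO my_readonly_user;
GRANT USAGE ON SCHEMA public TO my_readonly_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO my_readonly_user;
-- make tables created in future readable too
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO my_readonly_user;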
Grant Ability To Create Databases
The command below will give the user named username the ability to create their own databases.
ALTER USER $USERNAME CREATEDB;
Grant Access To Public Schema
PostgreSQL 15 changes things so that the public schema does not have CREATE privileges granted by default anymore. Thus, if you create a user, and they need to operate on a database in the public schema, then you may need to grant them the ability to work in the public schema. Pick one of the following that is appropriate for your use case:
GRANT USAGE ON SCHEMA public TO my_database_user;
GRANT CREATE ON SCHEMA public TO my_database_user;
GRANT USAGE, CREATE ON SCHEMA public TO my_database_user;
GRANT ALL ON SCHEMA public TO my_database_user;
Add User To Role
GRANT $ROLE_NAME TO $USERNAME;
For example, the PostgreSQL AWS RDS instances have a role called rds_superuser which you need to add users to if you want them to have superuser privileges:
GRANT rds_superuser TO my_database_username;
Remove User From Role
REVOKE $ROLE_NAME FROM $USER;
e.g.
REVOKE rds_superuser FROM my_database_username;
Change Database Owner
ALTER DATABASE my_database_name OWNER TO "new_owner_name";
Change Table Owner
ALTER TABLE my_table_name OWNER TO new_owner_name;
Mass Change Ownership
If you want to mass change ownership of items from one role to another, then you can do the following:
REASSIGN OWNED BY "original_owner" TO "new_owner";
Delete User / Role
DROP USER "my_users_name";
General Administration
Show Running Queries
SELECT * FROM pg_stat_activity;
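If you only want the queries that are actually doing something, you can filter out idle connections and sort by how long each query has been running. This is just a sketch built on standard pg_stat_activity columns:
SELECT
    pid,
    now() - query_start AS runtime,
    state,
    query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC;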
List User Connections
If you want to see who has an open connection to the database, run the following query:
select
pid,
usename,
datname as database_name,
client_addr as client_address,
application_name,
backend_start,
state,
state_change
from pg_stat_activity;
Show Settings / Variables
If you want to see all the settings for postgresql, then run:
SHOW ALL;
If you want to see a specific setting then specify it instead of all, like:
SHOW max_connections;
Kill A User's Connection / Session
SELECT pg_terminate_backend($PROCESS_ID);
$PROCESS_ID is the pid output when listing connections.
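If you need to kick every connection off a particular database (e.g. before dropping it on a version without WITH (FORCE)), you can combine this with pg_stat_activity. A sketch using a placeholder database name:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'my_database_name'
AND pid <> pg_backend_pid(); -- don't terminate our own connection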
Dump / Backup Database
pg_dump \
--host $HOST \
--port "5432" \
--username $USERNAME \
--password \
--no-acl \
--no-owner \
--clean \
--create \
--file /path/to/dump/file.sql \
$DB_NAME
- --no-owner - Do not output commands to set ownership of objects to match the original database.
- --no-acl - Prevent dumping of access privileges (grant/revoke commands).
- --clean - Output commands to clean (drop) database objects prior to outputting the commands for creating them.
- --create - Begin the output with a command to create the database itself and reconnect to the created database.
Don't use the --create or --clean switches if you wish to import the dump file later to a different database name! E.g. if you are creating a copy of the database, instead of creating a backup of a host for later restoration.
You can also add --compress=9 to have pg_dump compress the output at the most-compressed level, as if having been run through gzip. The default is effectively 0.
Custom Format
I found that using the method above resulted in me losing the use of complicated constraints that were using the btree_gist extension.
To resolve this, I used the same command, but with the --format=custom parameter, which results in creating a binary dump. This also means that options like --clean and --create have no effect, as one needs to specify them in the pg_restore command instead, which we will get onto later.
pg_dump \
--host $HOST \
--port "5432" \
--username $USERNAME \
--password \
--no-acl \
--no-owner \
--format=custom \
--file /path/to/dump/file.dump \
$DB_NAME
Restore / Import Database
From SQL File
To restore a dump created with the first pg_dump command shown above (which generated a plain-text SQL file), you would just run the SQL file like so:
psql \
--host $HOST \
--port "5432" \
--username $USERNAME \
--password \
-d $DATABASE \
-f $FILEPATH
There is a pg_restore command, but it won't work on a plain SQL dump like the one we created.
From Binary Dump
If you used the --format=custom option when performing the pg_dump, then you will need to use the pg_restore command in order to restore the database from this file.
This is the method I generally use:
PORT="5432"
USERNAME="username_here"
DB_NAME="replacement_database"
HOST="127.0.0.1"
FILEPATH=/path/to/dump/file.dump
pg_restore \
--host $HOST \
--port $PORT \
--username $USERNAME \
--format=custom \
--password \
--no-owner \
--exit-on-error \
--dbname=$DB_NAME \
$FILEPATH
This restores into the database specified by $DB_NAME. If one uses the --clean and/or --create options, then the --dbname option is pretty much ignored, and the name of the database at the point of creating the database dump will be used, which will error out if you have --clean specified and it doesn't exist.
Extensions
List Extensions
CLI Command
If you are using the psql CLI, then one can simply run:
\dx
Below is the example output:
List of installed extensions
Name | Version | Schema | Description
------------+---------+------------+-----------------------------------------------
btree_gist | 1.6 | public | support for indexing common datatypes in GiST
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
(2 rows)
SQL Query Form
However, if you need to run a traditional query (such as if executing PHP code), then you would need the following:
SELECT * FROM pg_extension;
Example output:
oid | extname | extowner | extnamespace | extrelocatable | extversion | extconfig | extcondition
-------+------------+----------+--------------+----------------+------------+-----------+--------------
14703 | plpgsql | 10 | 11 | f | 1.0 | |
35190 | btree_gist | 10 | 2200 | t | 1.6 | |
(2 rows)
Install Extension
The following command will create the extension if it doesn't already exist:
CREATE EXTENSION IF NOT EXISTS extension_name_here;
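Conversely, if you need to remove an extension, you can drop it (shown here with btree_gist purely as an example):
DROP EXTENSION IF EXISTS btree_gist;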
Databases
Create A Database
CREATE DATABASE my_database_name
ENCODING 'UTF8'
LC_COLLATE = 'en_GB.UTF-8'
LC_CTYPE = 'en_GB.UTF-8'
TEMPLATE template0;
If you get a locale error, then run the following commands from your BASH shell:
sudo apt-get install language-pack-en
sudo locale-gen en_GB.UTF-8
sudo update-locale
sudo service postgresql restart
If you don't need utf8 and are fine with LATIN encoding, then you can just use:
CREATE DATABASE my_database_name;
Delete/Drop Database
DROP DATABASE my_database_name;
If there is a connection to the database because it is in use, then the command will fail. If you wish to force the dropping of the database in such a scenario, you can do so with:
DROP DATABASE my_database_name WITH (FORCE)
Change Database Name
ALTER DATABASE $CURRENT_DB_NAME RENAME TO $NEW_DB_NAME
Create Table
Basic Create Table Example
CREATE TABLE broker (
id serial NOT NULL,
name varchar(255) NOT NULL,
redirect_url text NOT NULL,
secret varchar(30) NOT NULL,
modified_timestamp timestamp DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (id)
);
Note that PostgreSQL does not have AUTO_INCREMENT, but has special serial and bigserial types instead.
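PostgreSQL 10 and later also support standard SQL identity columns, which you may prefer over serial. A minimal sketch using a hypothetical table:
CREATE TABLE broker2 (
    id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name varchar(255) NOT NULL
);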
Creating Tables With Foreign Keys
CREATE TABLE cities (
city varchar(80) primary key,
location point
);
CREATE TABLE weather (
city varchar(80) references cities(city),
temp_lo int
);
By default, foreign keys will be set to use NO ACTION for both update and delete actions (resulting in the same behaviour as RESTRICT). You can manually specify what you want like so:
CREATE TABLE weather (
city varchar(80) references cities(city) ON UPDATE CASCADE ON DELETE RESTRICT,
temp_lo int
);
Creating Tables With Enums
CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');
CREATE TABLE person (
name text,
current_mood mood
);
Creating Tables With Unique Column
CREATE TABLE users (
name varchar(255) NOT NULL,
username varchar(255) UNIQUE NOT NULL
);
Create Table With Combined Unique Key
CREATE TABLE myTable (
id uuid PRIMARY KEY,
col1 INT NOT NULL,
col2 INT NOT NULL,
UNIQUE (col1, col2)
)
Editing Tables
Rename Table
ALTER TABLE original_table_name
RENAME TO new_table_name;
Add Column
ALTER TABLE my_table
ADD COLUMN "new_column"
varchar NOT NULL;
Edit / Alter / Modify Column
In the MySQL world, one would define everything about the column in one statement. In PostgreSQL, one uses a separate step for each part. E.g. if one wanted to change one column's type, set another to NOT NULL, make another nullable, and remove a default value, one would do:
ALTER TABLE my_table_name
ALTER COLUMN my_column_name TYPE varchar(255),
ALTER COLUMN my_not_nullable_column_name SET NOT NULL,
ALTER COLUMN my_nullable_column_name DROP NOT NULL,
ALTER COLUMN my_column_name DROP DEFAULT;
Rename Column
ALTER TABLE my_table
RENAME COLUMN "original_name"
TO "new_column_name";
Drop Column
ALTER TABLE my_table
DROP COLUMN "my_column_name";
If you wish to drop multiple columns then you can do this like so:
ALTER TABLE my_table
DROP COLUMN "my_first_column",
DROP COLUMN "my_second_column";
If the column is used by other views/procedures etc, then you may wish to cascade the removals:
ALTER TABLE my_table
DROP COLUMN "my_first_column" CASCADE;
Indexes / Constraints
Create Index
You cannot specify an INDEX in the create table definition. However, if you specify a column as unique, then an index is automatically created for that column. If you have a column that is not unique but also needs to be indexed for quick selections, then you can add an index after the table has been created with:
CREATE INDEX on tableName ("column_name");
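Indexes can also span multiple columns, or be partial (only covering rows that match a condition). The table and column names below are placeholders:
CREATE INDEX my_table_status_created_idx
ON my_table (status, created_at);

-- only index the rows that are still active
CREATE INDEX my_table_active_created_idx
ON my_table (created_at)
WHERE status = 'active';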
Add Combined Unique Key Constraint
ALTER TABLE my_table
ADD CONSTRAINT constraint_name UNIQUE (column1, column2);
Add Foreign Key Constraint
If you need to update an existing table column, to turn it into a foreign key:
ALTER TABLE my_table
ADD CONSTRAINT my_constraint_name
FOREIGN KEY (my_table_column)
REFERENCES my_other_table (id);
Add Check Constraint
If you need to add a check constraint to an existing table:
ALTER TABLE my_table
ADD CONSTRAINT time_all_or_nothing
CHECK ((time_start IS NOT NULL AND time_end IS NOT NULL) OR (time_start IS NULL AND time_end IS NULL))
Create Table With Check Constraint
If you want to create a table with a check constraint in the definition, rather than having to add it afterwards, here is an example. We have a table of events, and an event can either be all-day, in which case it does not have a start or end time, or it can be part of a day, in which case it must have both a start and an end time.
CREATE TABLE events (
id uuid PRIMARY KEY,
date_start date NOT NULL,
date_end date NOT NULL,
time_start time,
time_end time,
name varchar(255) NOT NULL,
CHECK ((time_start IS NOT NULL AND time_end IS NOT NULL) OR (time_start IS NULL AND time_end IS NULL))
);
Remove / Drop Constraint
If you want to remove a constraint, such as an index or a foreign key, then you would do it like so:
ALTER TABLE my_table DROP CONSTRAINT "my_constraint_name";
Enums / Types
List Types
If you wish to list out all the user-created types / enums (not the tables), then run the following psql command:
\dT
Describe Type / Enum
If you wish to find out about a specific type, such as find out the possible values for an enum, then do the following:
\dT+ my_custom_type
Create Type (Enum)
CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');
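If you later need an extra value in an existing enum, you can append it (removing a value again is not straightforward, so add with care):
ALTER TYPE mood ADD VALUE 'excited';
-- you can also position it, e.g. ALTER TYPE mood ADD VALUE 'excited' BEFORE 'happy';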
Drop/Remove Type (Enum)
If you want to remove a type (such as an Enum you created), then you would do it like so:
DROP TYPE my_type;
Selecting Data
Select Distinct Values
The following query will output only the unique/distinct values for the myColumnName column in a table.
SELECT DISTINCT myColumnName
FROM my_table_name
Select Distinct Combinations
The following query will output only the unique/distinct combination of column1, column2 values in a table.
SELECT DISTINCT column1, column2
FROM my_table_name
Select First Row Of Distinct Value
The query below will order a table by column3, and select the first row for each distinct value of column1. E.g. there may be multiple column2 values for a single value in column1, but it will only show the first of them when ordered by column3.
SELECT DISTINCT ON (column1)
    column1 AS column1_alias,
    column2
FROM my_table_name
ORDER BY column1, column3
You don't technically have to provide an ORDER BY clause, but you almost certainly want one in order to achieve repeatable results, due to how the DISTINCT ON clause will only return the first row for each unique value of column1. Note that the ORDER BY expressions must start with the DISTINCT ON expression (column1 in this case).
Inserting Data
CREATE TABLE cities (
city varchar(80) primary key,
location point
);
INSERT INTO cities (city, location)
VALUES
('Houston', '29.7604, -95.3698'),
('Dallas', '32.7767, -96.7970')
;
Updating Data
UPDATE cities
SET city='Washington'
WHERE city='Houston';
Deleting Data
DELETE FROM cities
WHERE city='Houston';
Schemas
In PostgreSQL, a schema is a namespace that contains named database objects such as tables, views, indexes, data types, functions, stored procedures and operators. A database can contain any number of schemas. A database will contain a default schema called public.
Create Schema
CREATE SCHEMA my_schema_name;
Show Current Schema
SELECT current_schema();
Switch / Set Schema
SET SCHEMA 'my_schema_name';
Please note: Setting the schema also sets your search path accordingly.
E.g. the following commands will set the search_path to hello, public before then setting the schema to hello. When we output the search_path, one sees that the search path is set to just hello.
bob=# SET search_path TO hello,public;
SET
bob=# SET SCHEMA 'hello';
SET
bob=# SHOW search_path;
search_path
-------------
hello
(1 row)
Rename Schema
ALTER SCHEMA my_schema_name
RENAME TO new_schema_name;
Delete / Drop Schema
DROP SCHEMA my_schema_name;
Set Search Path
Usually, users just refer to the table name, rather than prefixing with the schema name.
E.g. myTableName instead of mySchemaName.myTableName.
When referencing without the schema, this is referred to as an unqualified name.
When PostgreSQL encounters an unqualified name, the system determines the table by following the
search path. The default search path is just set to public, which is also the default schema.
Thus, if you wish to refer to tables in a schema using unqualified names, you can just include it in your search path like so:
SET search_path TO mySchema,public;
Setting Search Path Changes Current Schema
If one sets the search path, one is also setting the current schema. The current schema will change to whatever is the first schema listed in the search path. For example, if one was to set the schema to hello, but then set the search path to 'public','hello', one would have changed the current schema back to public, as shown below:
bob=# SET SCHEMA 'hello';
SET
bob=# SET search_path TO 'public','hello';
SET
bob=# SELECT current_schema();
current_schema
----------------
public
(1 row)
Show Search Path
SHOW search_path;
Working With JSON
If you use the JSON or JSONB data types for a column, there are special operations you can do with it. For example, I created a server_dump column that holds the json_encode of the PHP $_SERVER superglobal, and can then select the requestor's IP address from this with:
SELECT server_dump->'REMOTE_ADDR' FROM api_requests;
The json data type simply checks that the text is valid JSON and stores it in plain text, rather than actually processing it. On the other hand, jsonb gets stored in a decomposed binary format, and allows things like more advanced indexing.
Info source 2.
PostgreSQL 14 added support for subscripting syntax, so you can now do:
SELECT server_dump['REMOTE_ADDR'] FROM api_requests;
... or for an example that goes two layers down:
UPDATE shirts
SET details['attributes']['color'] = '"neon blue"'
WHERE id = 123;
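If the column is jsonb (or cast to it), you can also use the containment operator @> to filter rows, e.g. counting requests from a given IP. This sketch reuses the api_requests example above and assumes the dump contains a REMOTE_ADDR key:
SELECT COUNT(*) AS num_requests
FROM api_requests
WHERE server_dump::jsonb @> '{"REMOTE_ADDR": "192.168.0.1"}';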
Working With Time
Convert Unix Timestamp to Timestamp
Use the to_timestamp function.
SELECT to_timestamp(my_unix_timestamp_column) as my_timestamp FROM my_table;
Convert Unix Timestamp to Date
To convert a unix timestamp (epoch) to a date (such as to group by a day of the year):
SELECT DATE(to_timestamp(my_unix_timestamp_column)) as my_date FROM my_table;
Convert Timestamp to Unix Timestamp (Epoch)
The following will convert the created_at field from a timestamp with time zone, to a Unix timestamp.
SELECT
EXTRACT(EPOCH FROM created_at)::integer as created_at
FROM my_table;
Note the ::integer at the end. This simply converts it from a double precision value to an integer.
Selecting Between Dates
If you have a table full of rows that have dates / timestamps / etc, and you want to select the
rows between (and including) two dates/times, then one can just use the BETWEEN
keyword like so:
SELECT * FROM my_table
WHERE my_date_field BETWEEN '2019-01-01' AND '2022-01-01'
Grouping Dates By Month
The following example will convert the table's my_date_field into the corresponding month using the DATE_TRUNC function. We can then group by that date and order by it. This is particularly useful for stats/reporting.
SELECT
    DATE_TRUNC('month', my_date_field)::date as month,
    COUNT(*) as num_rows
FROM my_table
GROUP BY month
ORDER BY month ASC;
Working With Network Types
PostgreSQL has native support for network types as outlined below. This can make your life much easier, and probably make your code much faster if you move some of the operations to the database from the application layer.
Network Types
inet
The inet type holds a single ipv4 or ipv6 address. E.g. '192.168.0.1' or 'ff06:ffff:300c:326b:0000:0000:0000:0000'. It supports ipv6 shortening formats, so one could specify the ipv6 address above as ff06:ffff:300c:326b:: instead, because the double colon means the rest is filled with 0's.
Just do not input the cidr format, even if it is for one address, e.g. 192.168.0.1/32. For that you want the cidr type as outlined below.
cidr
This format is used for ipv4 or ipv6 CIDRs like:
192.168.0.1/32
192.168.0.1/24
2001:4f8:3:ba:2e0:81ff:fe22:d1f1/128
Since this type can represent a range of network addresses, one can use it for some pretty cool network operations, like checking if an IP address is within a range. This is most useful for things like specifying the IP address ranges of countries for something like a geo-blocker. Thus you don't have to specify a row for every possible ipv4 or ipv6 address on the internet, which would not be pragmatic, especially given how large the ipv6 address space is.
This type will automatically derive the value from a given input. For example, a value of 192.168.1 would be converted to 192.168.1.0/24, and 192.168/24 would be converted to 192.168.0.0/24. However, I would not recommend this, as you are likely to just confuse yourself and others as it might not default to what you are expecting. I would always plug in the full CIDR to be safe.
macaddr
This type represents MAC addresses which are usually found on ethernet cards, but MAC addresses can be used for other purposes as well.
PostgreSQL supports having colons, hyphens, or dots as separators, like so (all of the following are the same value):
08:00:2b:01:02:03
08-00-2b-01-02-03
08002b:010203
08002b-010203
0800.2b01.0203
0800-2b01-0203
08002b010203
macaddr8
This type is for 8-byte MAC addresses instead of just 6 bytes (above). However, it will accept your 6-byte MAC addresses as well. (Your computer's MAC address is almost definitely just 6 bytes, unless things have changed since I wrote this.)
Operations
The full list of operations can be found in the online docs. However, I will outline the ones I find most useful below.
Find If Network Address is Within or Equals
You can use the <<= operator to find out if the left side is contained within or equals the right side. This expects the left side to be either a network address or a cidr, but the right side to be a cidr. E.g. let's imagine you have the CIDRs that you allow into your website in a column called my_cidr_column, and you want to check if a visitor who came from '179.62.56.51' is allowed into your website. You would run:
SELECT count(*) as num_rows FROM my_whitelist_table
WHERE '179.62.56.51' <<= my_cidr_column
... and then you could just check that num_rows is greater than 0.
You could do the same thing in reverse with >>=, which checks if the left side contains or equals the right side. Taking our example from above, we would simply do:
SELECT count(*) as num_rows FROM my_whitelist_table
WHERE my_cidr_column >>= '179.62.56.51';
Addition and Subtraction
Believe it or not, you can perform addition and subtraction with network addresses. E.g. adding an integer to an address, as in the following, would result in inserting a value of 192.168.0.200 into your table:
INSERT INTO my_table (id, ipv4_address)
VALUES ('9b2119db-c4b8-4e67-b3e1-94aa5c4dc716', ('192.168.0.1' + 199));
You can even subtract one inet address from another to get the numeric difference between them:
SELECT '192.168.1.43'::inet - '192.168.1.19'::inet; -- returns 24
Greater Than and Less Than
You can use >, <, >=, and <= on two inet types, and they will act exactly as you expect.
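For example, the following trivial comparison returns true:
SELECT '192.168.1.5'::inet > '192.168.1.4'::inet;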
Comments
Table Comments
You can add comments to a table for others to read later. E.g.
COMMENT ON TABLE mytable IS 'This table is for...';
Column Comments
You can retrospectively add a comment to a table column like so:
COMMENT ON COLUMN my_table.column_name IS 'This is my comment';
Note that, unlike MySQL, you cannot specify a column comment inline within the CREATE TABLE definition itself; instead, create the table and then add the comment afterwards:
CREATE TABLE something (
id serial NOT NULL,
column1 varchar(255),
PRIMARY KEY (id)
);
COMMENT ON COLUMN something.column1 IS 'comment describing column here';
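To read comments back without using \d+, you can use the built-in obj_description and col_description functions. A sketch using the tables from the examples above (the second argument of col_description is the column's position, starting at 1):
SELECT obj_description('mytable'::regclass, 'pg_class') AS table_comment;
SELECT col_description('something'::regclass, 2) AS column1_comment; -- 2 = second column (column1)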
SQL Comments
You can add a comment to a line using -- like so:
-- comment goes here creating the programster user.
CREATE USER programster WITH PASSWORD 'thisismypassword';
Multi-line Comments
You can add a multiline comment anywhere in SQL by using /* and */, which signify the start and end of the comment.
/*
* multiline comment here
* creating the programster user.
*/
CREATE USER programster WITH PASSWORD 'thisismypassword';
... or alternatively:
CREATE TABLE something (
id serial NOT NULL,
column1 varchar(255), /* comment here - deliberately not setting NOT NULL for some reason */
PRIMARY KEY (id)
);
PostGIS
The content within this section expects you to have installed the PostGIS extension in your PostgreSQL database, or be running the PostGIS Docker image.
Latitude and Longitude Order
PostGIS uses longitude, latitude ordering for everything, whilst services such as Google Maps and the EPSG:4326 standard use latitude, longitude ordering.
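For example, to build a point for Big Ben (latitude 51.5007, longitude -0.1246, the same coordinates used in the ST_Contains example later on), the longitude goes first:
SELECT ST_AsText(ST_SetSRID(ST_MakePoint(-0.1246, 51.5007), 4326));
-- POINT(-0.1246 51.5007)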
Spatial Reference Identifier (SRID)
When looking at the documentation for many PostGIS functions, you will see an optional srid parameter. This specifies what coordinate system one is working within, and one always needs to make sure that one is operating across matching coordinate systems. E.g. when trying to find if a point is within a shape, both the shape and the point need to be defined using the same coordinate system.
A common SRID that you will often see, and which is the default in many of the PostGIS functions, is 4326. This represents spatial data using longitude and latitude coordinates on the Earth's surface as defined in the WGS84 standard, which is also used for the Global Positioning System (GPS).
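If you ever need to convert a geometry from one coordinate system to another, ST_Transform does this. A sketch converting the WGS84 (4326) point above into web mercator (3857):
SELECT ST_AsText(
    ST_Transform(
        ST_SetSRID(ST_MakePoint(-0.1246, 51.5007), 4326),
        3857
    )
);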
Inserting Geometries
The following snippet shows you how you can insert various geometric types into the database.
CREATE TABLE geometries (name varchar, geom geometry);
INSERT INTO geometries VALUES
('Point', 'POINT(0 0)'),
('Linestring', 'LINESTRING(0 0, 1 1, 2 1, 2 2)'),
('Polygon', 'POLYGON((0 0, 1 0, 1 1, 0 1, 0 0))'),
('PolygonWithHole', 'POLYGON((0 0, 10 0, 10 10, 0 10, 0 0),(1 1, 1 2, 2 2, 2 1, 1 1))'),
('Collection', 'GEOMETRYCOLLECTION(POINT(2 0),POLYGON((0 0, 1 0, 1 1, 0 1, 0 0)))');
SELECT name, ST_AsText(geom) FROM geometries;
ST_Contains
The ST_Contains function checks whether the first geometry fully contains the second geometry specified. For example, the code below shows how to check if the geometry column of my_table contains the GPS coordinates of Big Ben.
$longitude = -0.1246;
$latitude = 51.5007;
$srid = 4326;
$query =
    "SELECT *
    FROM my_table
    WHERE ST_Contains(
        my_geometry_column,
        ST_GeomFromText('POINT({$longitude} {$latitude})', {$srid})
    );";
Note that the longitude goes before the latitude in the POINT coordinates.
ST_GeomFromGeoJSON
The ST_GeomFromGeoJSON function converts a GeoJSON string into a geometry object. This is incredibly useful for taking online GeoJSON data sources (such as this), and importing them into your database.
$geoJsonString = '{
"type": "Polygon",
"coordinates": [
[
[100.0, 0.0],
[101.0, 0.0],
[101.0, 1.0],
[100.0, 1.0],
[100.0, 0.0]
]
]
}';
$escapedGeoJsonString = pg_escape_literal($conn, $geoJsonString);
$query = "INSERT INTO my_table (name, geometry_column) VALUES (
'myShape',
(SELECT ST_GeomFromGeoJSON({$escapedGeoJsonString}))
);";
This works for more geometry types than just the Polygon type in the example. E.g. this includes the MultiPolygon etc.
ST_Within
ST_Within is the inverse of ST_Contains, in that it checks if the first parameter is within the second, instead of checking if the first parameter contains the second (i.e. the order of the parameters is swapped).
Below is an example where I dynamically check if a table's latitude and longitude columns are within the provided GeoJSON polygon (used from a Leaflet codebase).
ST_Within(
ST_SetSRID(ST_MakePoint(table_longitude_column, table_latitude_column), 4326),
ST_GeomFromGeoJSON('{.....}')
);
Misc
Passwordless Login
Sometimes one needs to be able to run commands from a script, and thus one cannot perform the interactive password input.
Thus, one needs to be able to "provide" the password in a secure manner (specifying --password=something doesn't work).
There are two main ways to do this as outlined below:
PGPASS File
Create a .pgpass file in your home directory with the connection details of your databases like so:
HOST:PORT:DB_NAME:USERNAME:PASSWORD
e.g.
localhost:5432:postgres:myadmin:Str0ngP@ssw0rd
Then you should be able to log in without entering a password, provided the following three attributes match up:
- database name
- host
- user
This will not work if you pass --password to your psql command, so don't forget to not include that.
One can specify wildcards in your .pgpass file. E.g.
*:*:*:postgres:myadmin:Str0ngP@ssw0rd
If you don't have a home directory, or if you need to specify a custom location, you can set the location of your .pgpass file by setting an environment variable like so:
export PGPASSFILE='/custom/path/to/.pgpass'
Finally, your .pgpass file will only work if you set its permissions to 600 (like your SSH keys).
sudo chmod 600 .pgpass
Environment Variable
You can temporarily set your password by setting the PGPASSWORD environment variable like so:
export PGPASSWORD="my_password_here"
Again, this will not work if you pass --password to your psql command, so don't forget to not include that.
Quotation Marks
Double quotes ("
) are used to denote column names, and single quotes ('
) are also only used to
denote values. When working with UUIDs, its best to always wrap in single quotes.
UPDATE my_table
SET "my_column" = 'f68b8ef1-ed46-42bd-9619-4ae37dae3eb3'
WHERE "uuid"='97c681fc-d217-4bcb-970d-e54793d8fd94';
Output of Nulls
Unlike MySQL which will show "null" as a value when showing data, like so:
mysql> select * from cities;
+---------+----------+
| city | location |
+---------+----------+
| Houston | NULL |
+---------+----------+
1 row in set (0.00 sec)
...PostgreSQL will just show emptiness like so:
postgres=# select * from cities;
city | location
------------+--------------------
Dallas | (32.7767,-96.797)
Washington | (29.7604,-95.3698)
London |
(3 rows)
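If you would rather psql displayed a marker for nulls, you can change its null indicator (a psql display setting rather than SQL):
\pset null '(null)'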
Get Table Stats
The command below is a good way to get some stats about your tables. It can be quite useful when you want to see how "active" your tables are, or see how many "dead tuples" are taking up space and need deleting by the vacuum process. (Dead tuples are rows that have been deleted but have not been removed from disk yet by the vacuum process).
SELECT
relname as table_name,
n_tup_ins as "inserts",
n_tup_upd as "updates",
n_tup_del as "deletes",
n_live_tup as "live_tuples",
n_dead_tup as "dead_tuples",
last_vacuum,
last_autovacuum,
last_analyze,
last_autoanalyze,
vacuum_count,
autovacuum_count,
analyze_count,
autoanalyze_count
FROM pg_stat_user_tables
ORDER BY table_name;
References
- Crunchydata - Postgres 14: It's The Little Things
- postgresqltutorial.com - PostgreSQL List Users
- Stack Overflow - psql - save results of command to a file
- Stack Overflow - In Postgresql, Force unique on combination of two columns
- postgresqltutorial.com - PostgreSQL Foreign Key
- Stack Overflow - Show tables in PostgreSQL
- Why unsigned integer is not available in PostgreSQL?
- How to configure postgresql for the first time?
- Stack Overflow - How do you create a read-only user in PostgreSQL?
- Stack Overflow - Amazon RDS PostgreSQL: how to create user?
- Stack Overflow - Check Postgres access for a user
- PostgreSQL Docs - Enumerated Types
- PostgreSQL: Can you create an index in the CREATE TABLE definition?
- Stack Overflow - How to be able to vertically scroll terminal to see all data from a large table
- PostgreSQLDocs - JSON Functions and Operators
- Stack Overflow - input file appears to be a text format dump. Please use psql
- Ask Ubuntu - Set PostgreSQL pager to less?
- Stack Overflow - How to list table foreign keys
- PostgreSQL Docs - pg_dump
- PostgreSQL Docs - Schemas
- PostgreSQLTutorial.com - PostgreSQL Schema
- PostGIS.net - Geometries
- Stack Overflow - Preferred order of writing latitude & longitude tuples in GIS services
- postgresqltutorial.com - PostgreSQL BETWEEN
- Stack Overflow - Using psql how do I list extensions installed in a database?
- New Public Schema Permissions in PostgreSQL 15
- Hevodata.com - PostgreSQL DISTINCT Clause: Syntax & Query Examples | A 101 Guide
- PostgreSQL Tutorial - PostgreSQL SELECT DISTINCT
- How to use .pgpass in PostgreSQL?
- Database Administrators - PostgreSQL: Using the .pgpass file
- soft-builder.com - How to list sequences in PostgreSQL database
- PostgreSQL Docs - Network Address Types
- PostgreSQL Docs - Network Address Functions and Operators
First published: 16th August 2018