MacLochlainn's Weblog

Michael McLaughlin's Technical Blog

Archive for November, 2019

PostgreSQL Upsert Advanced

without comments

Nine years after writing about how to use the MERGE statement in Oracle, I am writing about how you implement upsert behavior in PostgreSQL. I wrote an initial post covering the basics of PostgreSQL's upsert implementation: the INSERT statement with a DO UPDATE clause and a DO NOTHING clause.

I thought it was interesting that the PostgreSQL Upsert Using INSERT ON CONFLICT Statement web page didn’t cover using a subquery as the source for an INSERT statement.
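
The pattern is simple enough to sketch up front. The table and constraint names here are hypothetical placeholders; the working version appears in the upload_kingdom procedure below:

INSERT INTO target_table
( column_one
, column_two )
( SELECT src.column_one
  ,      src.column_two
  FROM   staging_table src )
ON CONFLICT ON CONSTRAINT target_uq_key
DO NOTHING;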

Demonstration

Here are the steps to accomplish an import/upload with the COPY statement and the INSERT statement with DO UPDATE and DO NOTHING clauses.

Step #1 : Position your CSV file in the physical directory

The example uses the following upload directory:

/u01/app/postgres/upload

Put the following text into the kingdom_import.csv file.

Narnia,77600,Peter the Magnificent,1272-03-20,1292-06-19
Narnia,77600,Edmund the Just,1272-03-20,1292-06-19
Narnia,77600,Susan the Gentle,1272-03-20,1292-06-19
Narnia,77600,Lucy the Valiant,1272-03-20,1292-06-19
Narnia,42100,Peter the Magnificent,1531-04-12,1531-05-31
Narnia,42100,Edmund the Just,1531-04-12,1531-05-31
Narnia,42100,Susan the Gentle,1531-04-12,1531-05-31
Narnia,42100,Lucy the Valiant,1531-04-12,1531-05-31
Camelot,15200,King Arthur,0631-03-10,0686-12-12
Camelot,15200,Sir Lionel,0631-03-10,0686-12-12
Camelot,15200,Sir Bors,0631-03-10,0635-12-12
Camelot,15200,Sir Bors,0640-03-10,0686-12-12
Camelot,15200,Sir Galahad,0631-03-10,0686-12-12
Camelot,15200,Sir Gawain,0631-03-10,0686-12-12
Camelot,15200,Sir Tristram,0631-03-10,0686-12-12
Camelot,15200,Sir Percival,0631-03-10,0686-12-12
Camelot,15200,Sir Lancelot,0670-09-30,0682-12-12
Camelot,15200,Modred,0681-09-30,0682-12-12

Step #2 : Run the script that creates tables and sequences

Copy the following code into a create_kingdom_knight_tables.sql file within a directory of your choice, and run it as the student user like so:

\i create_kingdom_knight_tables.sql

-- Conditionally drop three tables and sequences.
DO $$
DECLARE
  /* Declare an indefinite length string and record variable. */
  SQL  VARCHAR;
  ROW  RECORD;
 
  /* Declare a cursor. */
  table_cursor CURSOR FOR
    SELECT TABLE_NAME
    FROM   information_schema.tables
    WHERE  table_catalog = 'videodb'
    AND    table_schema = 'public'
    AND    TABLE_NAME IN ('kingdom','knight','kingdom_knight_import');
BEGIN
  /* Open the cursor. */
  OPEN table_cursor;
  LOOP
    /* Fetch table names. */
    FETCH table_cursor INTO ROW;
 
    /* Exit when no more records are found. */
    EXIT WHEN NOT FOUND;
 
    /* Concatenate together a DDL to drop the table with prejudice. */
    SQL := 'DROP TABLE IF EXISTS '||ROW.table_name||' CASCADE';
 
    /* Execute the DDL statement. */
    EXECUTE SQL;
  END LOOP;
 
  /* Close the cursor. */
  CLOSE table_cursor;
END;
$$;
 
-- Create normalized kingdom table.
CREATE TABLE kingdom
( kingdom_id    SERIAL
, kingdom_name  VARCHAR(20)
, population    INTEGER
, CONSTRAINT kingdom_uq_key
  UNIQUE ( kingdom_name
         , population ));
 
-- Create normalized knight table.
CREATE TABLE knight
( knight_id             SERIAL
, knight_name           VARCHAR(24)
, kingdom_allegiance_id INTEGER
, allegiance_start_date DATE
, allegiance_end_date   DATE
, CONSTRAINT knight_uq_key 
  UNIQUE ( knight_name
         , kingdom_allegiance_id
         , allegiance_start_date
         , allegiance_end_date ));
 
-- Create external import table.
CREATE TABLE kingdom_knight_import
( kingdom_name          VARCHAR(20)
, population            INTEGER
, knight_name           VARCHAR(24)
, allegiance_start_date DATE
, allegiance_end_date   DATE);

Step #3 : Run the COPY command

Run the following COPY command as the student user to move the data from the comma-separated values (CSV) file into the kingdom_knight_import table.

COPY kingdom_knight_import
FROM '/u01/app/postgres/upload/kingdom_import.csv' DELIMITERS ',' CSV;
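
As a quick sanity check, the staged row count should match the 18 lines in the CSV file:

SELECT COUNT(*) FROM kingdom_knight_import;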

Step #4 : Create the upload_kingdom procedure

Copy the following code into a create_kingdom_knight_procedure.sql file within a directory of your choice. You can then call the script like so:

\i create_kingdom_knight_procedure.sql

CREATE PROCEDURE upload_kingdom() AS 
$$
DECLARE
 
  /* Declare error handling variables. */
  err_num      TEXT;
  err_msg      TEXT;
 
BEGIN
 
  /* Insert only unique rows. The DISTINCT operator compresses the
     result set to a unique set and avoids consuming sequence values
     for non-unique result sets. */
  INSERT INTO kingdom
  ( kingdom_name
  , population )
  ( SELECT   DISTINCT
             kki.kingdom_name
    ,        kki.population
    FROM     kingdom_knight_import kki LEFT JOIN kingdom k
    ON       kki.kingdom_name = k.kingdom_name
    AND      kki.population = k.population)
  ON CONFLICT ON CONSTRAINT kingdom_uq_key
  DO NOTHING;
 
  /* Insert only unique rows. */
  INSERT INTO knight
  ( knight_name
  , kingdom_allegiance_id
  , allegiance_start_date
  , allegiance_end_date )
  (SELECT kki.knight_name
   ,      k.kingdom_id
   ,      kki.allegiance_start_date AS start_date
   ,      kki.allegiance_end_date AS end_date
   FROM   kingdom_knight_import kki INNER JOIN kingdom k
   ON     kki.kingdom_name = k.kingdom_name
   AND    kki.population = k.population LEFT JOIN knight kn 
   ON     k.kingdom_id = kn.kingdom_allegiance_id
   AND    kki.knight_name = kn.knight_name
   AND    kki.allegiance_start_date = kn.allegiance_start_date
   AND    kki.allegiance_end_date = kn.allegiance_end_date)
  ON CONFLICT ON CONSTRAINT knight_uq_key
  DO NOTHING;
 
EXCEPTION
  WHEN OTHERS THEN
    err_num := SQLSTATE;
    err_msg := SUBSTR(SQLERRM,1,100);
    RAISE NOTICE 'Trapped Error: %', err_msg;
END
$$ LANGUAGE plpgsql;

Step #5 : Run the upload_kingdom procedure and query the results

You run the upload_kingdom procedure with the CALL statement and then query the results. Assuming you put the code in a call_kingdom_knight_procedure.sql script, you can call it like so:

\i call_kingdom_knight_procedure.sql

/* Call the upload_kingdom procedure. */
CALL upload_kingdom();
 
/* Query the kingdom_knight_import table. */
SELECT   kingdom_name
,        population
,        knight_name
,        date_trunc('second',allegiance_start_date AT TIME ZONE 'MST') AS allegiance_start_date
,        date_trunc('second',allegiance_end_date AT TIME ZONE 'MST') AS allegiance_end_date
FROM     kingdom_knight_import;
 
/* Query the kingdom table. */
SELECT   *
FROM     kingdom;
 
/* Query the knight table. */
SELECT   kn.knight_id
,        kki.knight_name
,        k.kingdom_id
,        date_trunc('second',kki.allegiance_start_date AT TIME ZONE 'MST') AS start_date
,        date_trunc('second',kki.allegiance_end_date AT TIME ZONE 'MST') AS end_date
FROM     kingdom_knight_import kki INNER JOIN kingdom k
ON       kki.kingdom_name = k.kingdom_name
AND      kki.population = k.population LEFT JOIN knight kn
ON       k.kingdom_id = kn.kingdom_allegiance_id
AND      kki.knight_name = kn.knight_name
AND      kki.allegiance_start_date = kn.allegiance_start_date
AND      kki.allegiance_end_date = kn.allegiance_end_date;

It prints the following results:

CALL
 kingdom_name | population |      knight_name      | allegiance_start_date | allegiance_end_date 
--------------+------------+-----------------------+-----------------------+---------------------
 Narnia       |      77600 | Peter the Magnificent | 1272-03-19 23:59:56   | 1292-06-18 23:59:56
 Narnia       |      77600 | Edmund the Just       | 1272-03-19 23:59:56   | 1292-06-18 23:59:56
 Narnia       |      77600 | Susan the Gentle      | 1272-03-19 23:59:56   | 1292-06-18 23:59:56
 Narnia       |      77600 | Lucy the Valiant      | 1272-03-19 23:59:56   | 1292-06-18 23:59:56
 Narnia       |      42100 | Peter the Magnificent | 1531-04-11 23:59:56   | 1531-05-30 23:59:56
 Narnia       |      42100 | Edmund the Just       | 1531-04-11 23:59:56   | 1531-05-30 23:59:56
 Narnia       |      42100 | Susan the Gentle      | 1531-04-11 23:59:56   | 1531-05-30 23:59:56
 Narnia       |      42100 | Lucy the Valiant      | 1531-04-11 23:59:56   | 1531-05-30 23:59:56
 Camelot      |      15200 | King Arthur           | 0631-03-09 23:59:56   | 0686-12-11 23:59:56
 Camelot      |      15200 | Sir Lionel            | 0631-03-09 23:59:56   | 0686-12-11 23:59:56
 Camelot      |      15200 | Sir Bors              | 0631-03-09 23:59:56   | 0635-12-11 23:59:56
 Camelot      |      15200 | Sir Bors              | 0640-03-09 23:59:56   | 0686-12-11 23:59:56
 Camelot      |      15200 | Sir Galahad           | 0631-03-09 23:59:56   | 0686-12-11 23:59:56
 Camelot      |      15200 | Sir Gawain            | 0631-03-09 23:59:56   | 0686-12-11 23:59:56
 Camelot      |      15200 | Sir Tristram          | 0631-03-09 23:59:56   | 0686-12-11 23:59:56
 Camelot      |      15200 | Sir Percival          | 0631-03-09 23:59:56   | 0686-12-11 23:59:56
 Camelot      |      15200 | Sir Lancelot          | 0670-09-29 23:59:56   | 0682-12-11 23:59:56
 Camelot      |      15200 | Modred                | 0681-09-29 23:59:56   | 0682-12-11 23:59:56
(18 rows)
 
 kingdom_id | kingdom_name | population 
------------+--------------+------------
          1 | Narnia       |      42100
          2 | Narnia       |      77600
          3 | Camelot      |      15200
(3 rows)
 
 knight_id |      knight_name      | kingdom_id |     start_date      |      end_date       
-----------+-----------------------+------------+---------------------+---------------------
         1 | Peter the Magnificent |          2 | 1272-03-19 23:59:56 | 1292-06-18 23:59:56
         2 | Edmund the Just       |          2 | 1272-03-19 23:59:56 | 1292-06-18 23:59:56
         3 | Susan the Gentle      |          2 | 1272-03-19 23:59:56 | 1292-06-18 23:59:56
         4 | Lucy the Valiant      |          2 | 1272-03-19 23:59:56 | 1292-06-18 23:59:56
         5 | Peter the Magnificent |          1 | 1531-04-11 23:59:56 | 1531-05-30 23:59:56
         6 | Edmund the Just       |          1 | 1531-04-11 23:59:56 | 1531-05-30 23:59:56
         7 | Susan the Gentle      |          1 | 1531-04-11 23:59:56 | 1531-05-30 23:59:56
         8 | Lucy the Valiant      |          1 | 1531-04-11 23:59:56 | 1531-05-30 23:59:56
         9 | King Arthur           |          3 | 0631-03-09 23:59:56 | 0686-12-11 23:59:56
        10 | Sir Lionel            |          3 | 0631-03-09 23:59:56 | 0686-12-11 23:59:56
        11 | Sir Bors              |          3 | 0631-03-09 23:59:56 | 0635-12-11 23:59:56
        12 | Sir Bors              |          3 | 0640-03-09 23:59:56 | 0686-12-11 23:59:56
        13 | Sir Galahad           |          3 | 0631-03-09 23:59:56 | 0686-12-11 23:59:56
        14 | Sir Gawain            |          3 | 0631-03-09 23:59:56 | 0686-12-11 23:59:56
        15 | Sir Tristram          |          3 | 0631-03-09 23:59:56 | 0686-12-11 23:59:56
        16 | Sir Percival          |          3 | 0631-03-09 23:59:56 | 0686-12-11 23:59:56
        17 | Sir Lancelot          |          3 | 0670-09-29 23:59:56 | 0682-12-11 23:59:56
        69 | Modred                |          3 | 0681-09-29 23:59:56 | 0682-12-11 23:59:56
(18 rows)

As always, I hope this works to help those trying to solve a similar problem.

Written by maclochlainn

November 25th, 2019 at 1:24 am

PostgreSQL Upsert Intro

with one comment

Oracle and SQL Server use the MERGE statement, and MySQL uses the REPLACE INTO statement or the ON DUPLICATE KEY clause, but PostgreSQL uses an upsert. The upsert isn't a statement per se. It is like MySQL's INSERT statement with the ON DUPLICATE KEY clause: PostgreSQL implements it as an ON CONFLICT clause in the INSERT statement.

The general behavior of upserts is covered in the PostgreSQL Tutorial. The upsert has the following prototype:

INSERT INTO TABLE_NAME(column_list) VALUES(value_list)
ON CONFLICT target action;

The target can be a column name, an ON CONSTRAINT constraint name, or a WHERE predicate, while the action can be DO NOTHING (or ignore) or a DO UPDATE statement. I wrote the following example to show how to leverage a unique constraint with a DO NOTHING and DO UPDATE behavior.
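
Here is a minimal sketch of the column-name target form with a DO UPDATE action, using a hypothetical customer table that has a unique constraint on its email column; the EXCLUDED pseudo-table carries the values of the proposed insert row:

INSERT INTO customer
( customer_name
, customer_email )
VALUES
('Anna'
,'anna@example.com')
ON CONFLICT (customer_email)
DO UPDATE
SET customer_name = EXCLUDED.customer_name;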

My example conditionally drops a table, creates a table with a unique constraint, inserts a few rows, updates with a DO UPDATE clause, updates with DO NOTHING clause, and queries the results with a bit of formatting.

  1. Conditionally drop the test table.

    /* Suppress warnings from the log file. */
    SET client_min_messages = 'error';
     
    /* Conditionally drop table. */
    DROP TABLE IF EXISTS test;

  2. Create the test table.

    /* Create a test table. */
    CREATE TABLE test
    ( test_id      SERIAL
    , first_name   VARCHAR(20)
    , middle_name  VARCHAR(20)
    , last_name    VARCHAR(20)
    , updated      INTEGER DEFAULT 0
    , update_time  TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP
    , CONSTRAINT test_uq_key UNIQUE (first_name,middle_name,last_name));

  3. Insert six rows into the test table.

    /* Insert six rows. */
    INSERT INTO test
    ( first_name, middle_name, last_name )
    VALUES
     ('Harry','James','Potter')
    ,('Ginerva','Molly','Potter')
    ,('Lily','Luna','Potter')
    ,('Albus','Severus','Potter')
    ,('James',NULL,'Potter')
    ,('Lily',NULL,'Potter');

  4. Create a five-second delay, so the upserted row's update_time differs visibly from the initial inserts.

    /* Sleep for 5 seconds. */
    DO $$
    BEGIN
      PERFORM pg_sleep(5);
    END;
    $$;

  5. Use the INSERT statement with a DO UPDATE clause that increments the updated column of the test table.

    /* Upsert on unique key constraint conflict. */
    INSERT INTO test
    ( first_name
    , middle_name
    , last_name )
    VALUES
    ('Harry'
    ,'James'
    ,'Potter')
    ON CONFLICT ON CONSTRAINT test_uq_key
    DO
      UPDATE
      SET    updated = test.updated + 1
      ,      update_time = CURRENT_TIMESTAMP;

  6. Use the INSERT statement with a DO NOTHING clause.

    /* Upsert on unique key constraint ignore update. */
    INSERT INTO test
    ( first_name
    , middle_name
    , last_name )
    VALUES
    ('Harry'
    ,'James'
    ,'Potter')
    ON CONFLICT ON CONSTRAINT test_uq_key
    DO NOTHING;

  7. Query the test table.

    /* Formatted query to demonstrate result of UPSERT statement. */
    SELECT   test_id
    ,        last_name || ', '
    ||       CASE
               WHEN middle_name IS NOT NULL THEN first_name || ' ' || middle_name
               ELSE first_name
             END AS full_name
    ,        updated
    ,        date_trunc('second',update_time AT TIME ZONE 'MST') AS "timestamp"
    FROM     test
    ORDER BY last_name
    ,        first_name
    ,        CASE
               WHEN middle_name IS NOT NULL THEN middle_name
               ELSE 'A'
             END;

    Display results:

     test_id |       full_name       | updated |      timestamp      
    ---------+-----------------------+---------+---------------------
           4 | Potter, Albus Severus |       0 | 2019-11-24 19:23:10
           2 | Potter, Ginerva Molly |       0 | 2019-11-24 19:23:10
           1 | Potter, Harry James   |       1 | 2019-11-24 19:23:15
           5 | Potter, James         |       0 | 2019-11-24 19:23:10
           6 | Potter, Lily          |       0 | 2019-11-24 19:23:10
           3 | Potter, Lily Luna     |       0 | 2019-11-24 19:23:10
    (6 rows)

As always, I hope this helps those looking for clear examples to solve problems.

Written by maclochlainn

November 24th, 2019 at 7:26 pm

Postgres Reading Files

without comments

I’m working on items for migrating my database class from Oracle to PostgreSQL. I ran into an interesting limitation when I tried using the COPY command to read an external CSV file.

I had prepared the system by creating a new directory hierarchy owned by the postgres user on top of a /u01/app mount point. I set the ownership of the directories and files with the following command from the /u01/app mount point:

chown -R postgres:postgres postgres

After running the following command:

COPY transaction_upload
FROM '/u01/app/upload/postgres/transaction_upload_postgres.csv' DELIMITERS ',' CSV;

The command raised the following error:

COPY transaction_upload FROM '/u01/app/upload/postgres/transaction_upload_postgres.csv' DELIMITERS ',' CSV;
ERROR:  must be superuser or a member of the pg_read_server_files role to COPY from a file
HINT:  Anyone can COPY to stdout or from stdin. psql's \copy command also works for anyone.

The two options for fixing the problem are changing the student user to a superuser or granting the pg_read_server_files role to the student user. Changing the student user to a superuser isn't really a practical option, so I connected as the postgres superuser and granted the pg_read_server_files role to the student user. It is a system-level role and therefore isn't limited to only the videodb database.
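
A third workaround comes from the hint itself: psql's \copy command reads the file with the client's operating system permissions, so it requires neither superuser rights nor the role grant. A sketch against the same file:

\copy transaction_upload FROM '/u01/app/upload/postgres/transaction_upload_postgres.csv' WITH (FORMAT csv)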

As the postgres user, type the following command to grant the pg_read_server_files role to the student user:

GRANT pg_read_server_files TO student;

After granting the role to the student user, I created a small test case. The test table definition is:

CREATE TABLE test
( id          INTEGER
, first_name  VARCHAR(20)
, last_name   VARCHAR(20));

I created a test.csv file in the /u01/app/upload/postgres directory with the following values:

1,Simon,Bolivar
2,Peter,Davenport
3,Michael,Swan

The test.csv file requires the following permissions and ownerships:

-rw-r--r--. 1 postgres postgres 49 Nov 13 10:56 test.csv

The permissions are user read-write, group read, and other read (644). The ownership should be granted to postgres and the primary group for the postgres user, which should also be postgres.

You can then connect to psql as the student user with the database set to videodb and run the following copy command:

COPY test
FROM '/u01/app/upload/postgres/test.csv' DELIMITERS ',' CSV;

If you put a comma at the end of each line, as you might in MySQL, the trailing comma raises the following error:

ERROR:  extra data after last expected column

If you forget a delimiting comma somewhere on a line, the COPY command raises the following error:

ERROR:  missing data for column "last_name"
CONTEXT:  COPY test, line 3: "3,Michael Swan"

The error names the first column left without data, and the context gives the line number while displaying its text.

You should take careful note that the COPY command appends rows. If you run it a second time, you insert a duplicate set of values into the target table.
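
If you want a clean reload rather than an append, one defensive sketch truncates the target table before rerunning the COPY command:

TRUNCATE TABLE test;
COPY test
FROM '/u01/app/upload/postgres/test.csv' DELIMITERS ',' CSV;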

After experimenting, it's time to fix my student instance. The transaction_upload_mysql.csv file has two critical errors that need to be fixed. They are:

  1. A comma terminates each line, which would raise an extra data after last expected column error.
  2. A comma terminates each line followed by some indefinite amount of whitespace, which would also raise an extra data after last expected column error.

Since I have students with little expertise in Unix or Linux commands, I must provide a single command that they can use to convert the problem file into a clean one. However, they should work on a copy of the transaction_upload_mysql.csv file to ensure they don't disable the equivalent functionality for the MySQL solution space.

They should copy two files as the root user from the mysql directory to the postgres directory, as follows:

cp /u01/app/mysql/upload/transaction_upload_mysql.csv /u01/app/postgres/upload/transaction_upload_postgres.csv
cp /u01/app/mysql/upload/transaction_upload2_mysql.csv /u01/app/postgres/upload/transaction_upload2_postgres.csv

As the root user in the /u01/app/postgres/upload directory, run the following command:

cat transaction_upload_postgres.csv | sed -e 's/\,$//g' > x; cat x | sed -e 's/\,[[:space:]]*$//g' > y; mv y transaction_upload_postgres.csv; rm x
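
Assuming GNU sed with the -i in-place flag, the following single command should be equivalent, because the ,[[:space:]]*$ pattern matches a trailing comma with or without trailing whitespace:

sed -i 's/,[[:space:]]*$//' transaction_upload_postgres.csv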

Please check the file permissions and ownerships with the ll (long list) command. If the file isn’t like this:

-rw-r--r--. 1 postgres postgres 49 Nov 13 10:56 transaction_upload_postgres.csv

Then, they should be able to change it as the root user with these commands:

chown postgres:postgres transaction_upload_postgres.csv
chmod 644 transaction_upload_postgres.csv

Lastly, they should connect to psql as the student user, using the videodb database, and run the following command:

COPY transaction_upload
FROM '/u01/app/postgres/upload/transaction_upload_postgres.csv' DELIMITERS ',' CSV;

A query of the import table with this:

SELECT COUNT(*) FROM transaction_upload;

should return:

 count 
-------
 11520
(1 row)

As always, I hope this helps those looking for some explanation and example on the copy feature of PostgreSQL.

Written by maclochlainn

November 13th, 2019 at 7:38 am

PostgreSQL and LPAD

without comments

While porting my Oracle code to PostgreSQL I encountered a little quirk. It’s probably not a quirk except for the fact that I’ve worked in Oracle so long. Oracle implicitly type casts so well that we seldom notice.

PostgreSQL doesn't work like Oracle. It does type cast sometimes, but not very often. I tried porting the following segment from my Oracle stored procedure to PostgreSQL:

/* Add account number with zeros. */
FOR j IN 1..50 LOOP
  INSERT INTO account_list
  VALUES
  ( lv_airport_code||'-'||LPAD(j,6,'0')
  , NULL
  , NULL
  , 1002
  , 1002 );
END LOOP;

Oracle implicitly casts the integer j to a text string before running the LPAD function. PostgreSQL doesn't cast the integer to a text string before calling the LPAD function. Since the LPAD function isn't overloaded with an (integer, integer, text) signature, the call fails at runtime, but the code doesn't raise an exception when you compile it as a stored procedure.

At runtime, it raises the following error:

ERROR:  invalid input syntax for integer: "function lpad(integer, integer, unknown) does not exist"
CONTEXT:  PL/pgSQL function seed_account_list() line 48 at assignment

You can fix the code by explicitly casting the LPAD function's first parameter to a text string, as follows:

/* Add account number with zeros. */
FOR j IN 1..50 LOOP
  INSERT INTO account_list
  VALUES
  ( lv_airport_code||'-'||LPAD(j::text,6,'0')
  , NULL
  , NULL
  , 1002
  , 1002 );
END LOOP;

This fixes the casting problem. As always, I hope this helps those looking for a solution.
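
You can verify the cast in isolation with a one-line query from psql, which should return 000001:

SELECT LPAD(1::text,6,'0') AS padded;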

Written by maclochlainn

November 12th, 2019 at 4:08 pm

Postgres Foreign Keys

without comments

Just sorting out how to query the system catalogs to discover the magic for a query of a table's foreign key constraints. This query against the pg_constraint catalog works to return the foreign key constraints:

SELECT   conrelid::regclass::text AS table_from
,        conname AS foreign_key
,        pg_get_constraintdef(oid)
FROM     pg_constraint
WHERE    contype = 'f'
AND      connamespace = 'public'::regnamespace 
AND      conrelid::regclass::text = 'rental_item'
ORDER BY conrelid::regclass::text
,        conname;

It returns the following for the rental_item table:

 table_from  |   foreign_key    |                           pg_get_constraintdef                            
-------------+------------------+---------------------------------------------------------------------------
 rental_item | fk_rental_item_1 | FOREIGN KEY (rental_id) REFERENCES rental(rental_id)
 rental_item | fk_rental_item_2 | FOREIGN KEY (item_id) REFERENCES item(item_id)
 rental_item | fk_rental_item_3 | FOREIGN KEY (created_by) REFERENCES system_user(system_user_id)
 rental_item | fk_rental_item_4 | FOREIGN KEY (last_updated_by) REFERENCES system_user(system_user_id)
 rental_item | fk_rental_item_5 | FOREIGN KEY (rental_item_type) REFERENCES common_lookup(common_lookup_id)
(5 rows)
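
If you prefer the ANSI information_schema over the pg_catalog views, the following query should return similar information, though it omits the assembled constraint definition:

SELECT   tc.table_name
,        tc.constraint_name
,        kcu.column_name
,        ccu.table_name AS referenced_table
FROM     information_schema.table_constraints tc JOIN information_schema.key_column_usage kcu
ON       tc.constraint_name = kcu.constraint_name
AND      tc.table_schema = kcu.table_schema JOIN information_schema.constraint_column_usage ccu
ON       tc.constraint_name = ccu.constraint_name
AND      tc.table_schema = ccu.table_schema
WHERE    tc.constraint_type = 'FOREIGN KEY'
AND      tc.table_name = 'rental_item'
ORDER BY tc.constraint_name;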

As always, I post these tidbits for others to discover and use with less pain.

Written by maclochlainn

November 6th, 2019 at 12:31 am

Postgres Overloaded Routines

without comments

Earlier I showed how to write an anonymous block in PostgreSQL PL/pgSQL to drop routines, like functions and procedures. However, it only worked when they weren't overloaded functions or procedures. The following lets you drop all routines, including overloaded functions and procedures. Overloaded routines are those that share the same name but have different parameter lists.
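
A minimal illustration of why the parameter list matters, using the overloaded hello functions created below: once a name is overloaded, DROP FUNCTION needs the argument signature to resolve which version to drop.

DROP FUNCTION IF EXISTS hello(text);  -- drops only the single-argument version
DROP FUNCTION IF EXISTS hello;        -- raises: function name "hello" is not unique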

Before you can test the anonymous block, you need to create a set of overloaded functions or procedures. You can create a set of overloaded hello functions with the following syntax:

CREATE FUNCTION hello()
RETURNS text AS
$$
DECLARE
  output  VARCHAR;
BEGIN
  SELECT 'Hello World!' INTO output;
  RETURN output;
END
$$ LANGUAGE plpgsql;
 
CREATE FUNCTION hello(whom text)
RETURNS text AS
$$
DECLARE
  output  VARCHAR;
BEGIN
  SELECT CONCAT('Hello ',whom,'!') INTO output;
  RETURN output;
END
$$ LANGUAGE plpgsql;
 
CREATE FUNCTION hello(id int, whom text)
RETURNS text AS
$$
DECLARE
  output  VARCHAR;
BEGIN
  SELECT CONCAT('[',id,'] Hello ',whom,'!') INTO output;
  RETURN output;
END
$$ LANGUAGE plpgsql;

You can test the overloaded hello functions from the videodb database, like so:

videodb=> SELECT hello();
    hello     
--------------
 Hello World!
(1 row)
 
videodb=> SELECT hello('Captain Marvel');
         hello         
-----------------------
 Hello Captain Marvel!
(1 row)
 
videodb=> SELECT hello(1,'Captain America');
           hello            
----------------------------
 [1] Hello Captain America!
(1 row)

Then, you can query the information_schema to verify that you've created a set of overloaded functions with the following query:

SELECT   proc.specific_schema AS procedure_schema
,        proc.specific_name
,        proc.routine_name AS procedure_name
,        proc.external_language
,        args.parameter_name
,        args.parameter_mode
,        args.data_type
FROM     information_schema.routines proc LEFT JOIN information_schema.parameters args
ON       proc.specific_schema = args.specific_schema
AND      proc.specific_name = args.specific_name
WHERE    proc.routine_schema NOT IN ('pg_catalog', 'information_schema')
AND      proc.routine_type IN ('FUNCTION','PROCEDURE')
ORDER BY procedure_schema
,        specific_name
,        procedure_name
,        args.ordinal_position;

It should return the following:

 procedure_schema | specific_name | procedure_name | external_language | parameter_name | parameter_mode | data_type 
------------------+---------------+----------------+-------------------+----------------+----------------+-----------
 public           | hello_35451   | hello          | PLPGSQL           |                |                | 
 public           | hello_35452   | hello          | PLPGSQL           | whom           | IN             | text
 public           | hello_35453   | hello          | PLPGSQL           | id             | IN             | integer
 public           | hello_35453   | hello          | PLPGSQL           | whom           | IN             | text
(4 rows)

The following SET SESSION command stores the catalog name in a session variable, which the anonymous block reads with the current_setting function:

SET SESSION "videodb.catalog_name" = 'videodb';

The following anonymous block lets you get rid of any ordinary or overloaded function and procedure:

DO $$
DECLARE
  /* Declare an indefinite length string for SQL statement. */
  sql  VARCHAR;
 
  /* Declare variables to manage cursor return values. */
  row  RECORD;
  arg  VARCHAR;
 
  /* Declare parameter list. */
  list VARCHAR;
 
  /* Declare a routine cursor. */
  routine_cursor CURSOR FOR
    SELECT routine_name
    ,      specific_name
    ,      routine_type
    FROM   information_schema.routines
    WHERE  specific_catalog = current_setting('videodb.catalog_name')
    AND    routine_schema = 'public';
 
  /* Declare a parameter cursor. */
  parameter_cursor CURSOR (cv_specific_name varchar) FOR
    SELECT args.data_type
    FROM   information_schema.parameters args
    WHERE  args.specific_schema = 'public'
    AND    args.specific_name = cv_specific_name;
 
BEGIN
  /* Open the cursor. */
  OPEN routine_cursor;
  <<row_loop>>
  LOOP
    /* Fetch table names. */
    FETCH routine_cursor INTO row;
 
    /* Exit when no more records are found. */
    EXIT row_loop WHEN NOT FOUND;
 
    /* Initialize parameter list. */
    list := '(';
 
    /* Open the parameter cursor. */
    OPEN parameter_cursor(row.specific_name::varchar);
    <<parameter_loop>>
    LOOP
      FETCH parameter_cursor INTO arg;
 
      /* Exit the parameter loop. */
      EXIT parameter_loop WHEN NOT FOUND;
 
      /* Add parameter and delimit more than one parameter with a comma. */
      IF LENGTH(list) > 1 THEN
        list := CONCAT(list,',',arg);
      ELSE
        list := CONCAT(list,arg);
      END IF;
    END LOOP;
 
    /* Close the parameter list. */
    list := CONCAT(list,')');
 
    /* Close the parameter cursor. */
    CLOSE parameter_cursor;
 
    /* Concatenate together a DDL to drop the table with prejudice. */
    sql := 'DROP '||row.routine_type||' IF EXISTS '||row.routine_name||list;
 
    /* Execute the DDL statement. */
    EXECUTE sql;
  END LOOP;
 
  /* Close the routine_cursor. */
  CLOSE routine_cursor;
END;
$$;

Now, you possess the magic to automate cleaning up your schema when you combine this with my earlier post on dynamically dropping tables, sequences, and triggers.

Written by maclochlainn

November 5th, 2019 at 12:11 am

DBeaver for PostgreSQL

without comments

I'm migrating my database classes from the Oracle database to the PostgreSQL database. Using the Oracle Express Edition has always required a virtualized image because students use Windows and Mac OS. Also, the university doesn't like my use of a virtualized image, because virtualization imposes an incremental cost on students, who need high-end laptops to run it.

The available Docker images don’t typically support the Oracle Express Edition. That means there are licensing implications tied to Oracle.

As a committee, we decided to use PostgreSQL as our new database platform. We opted for PostgreSQL over MySQL because it supports arrays and stored procedures. PostgreSQL's PL/pgSQL is a solid implementation of stored procedures. While MySQL supports a stored procedure language, it effectively discourages using it.

We use the sqlplus, mysql, or psql command-line interface (CLI) as the primary interface, because a CLI supports production-oriented deployment and DevOps practices. The Open Source DBeaver project provides a clean native GUI management console for PostgreSQL. DBeaver also supports other databases, which pgAdmin doesn't.

You click the Download button on the DBeaver home page. It supports Windows, Mac OS, and Linux distributions. I downloaded the Linux RPM package to my Downloads directory. You can run the RPM with the following syntax:

rpm -ivh dbeaver-ce-6.2.3-stable.x86_64.rpm

It doesn’t check for dependencies. The installation is quick. You can verify that it’s installed with the following command:

which dbeaver

It installs here:

/usr/bin/dbeaver

If you failed to install the PostgreSQL JDBC driver (JAR file), DBeaver raises an error message when you try to connect.

For the connection to work, you should have an entry for the user in the pg_hba.conf file, like the following:

host    all             all             127.0.0.1/32            trust
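
After you edit pg_hba.conf, reload the configuration for the change to take effect. Assuming the postgresql-11 service name, a restart also works:

systemctl restart postgresql-11.service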

As always, I hope this helps those solving problems.

Written by maclochlainn

November 3rd, 2019 at 5:06 pm

Java and Postgres

without comments

I wanted to get Java working with PostgreSQL to test some GUI interfaces on Linux. Figuring out the necessary JAR file for JDBC was my first hurdle. I found it was the postgresql-42.2.5.jar file.

You can download it with the following command line:

wget https://jdbc.postgresql.org/download/postgresql-42.2.5.jar

I downloaded it to a Java directory off the /home/student directory. Then, I added the following CLASSPATH to a local java.env environment file.

#!/usr/bin/bash
 
# Set the Java CLASSPATH environment variable to include the JDBC jar file.
export CLASSPATH=/home/student/Java/postgresql-42.2.5.jar:.

I sourced the java.env file and wrote the following JavaTest.java program:

/* Import classes. */
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.logging.Level;
import java.util.logging.Logger;
 
public class JavaTest { 
  public static void main(String[] args) {
    /* Set default strings for the connection. */
    String url = "jdbc:postgresql://localhost:5432/videodb";
    String user = "student";
    String password = "student";
 
    /* Try the connection and statement. */
    try {
      Connection conn = DriverManager.getConnection(url, user, password);
      Statement st = conn.createStatement();
      ResultSet rs = st.executeQuery("SELECT VERSION()");
 
      if (rs.next()) {
        System.out.println(rs.getString(1));
      }
 
      /* Close the result set, statement, and connection. */
      rs.close();
      st.close();
      conn.close();
    }
    catch (SQLException e) {
      Logger logger = Logger.getLogger(JavaTest.class.getName());
      logger.log(Level.SEVERE, e.getMessage(), e);
    }
  }
}
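
For reference, a typical compile-and-test sequence, assuming the java.env file above provides the CLASSPATH:

source java.env
javac JavaTest.java
java JavaTest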

I compiled the JavaTest.java program and tested it. It failed with the following error:

FATAL: Ident authentication failed for user - Unable to connect to PostgreSQL

The failure occurred because I hadn’t allowed the connection in PostgreSQL’s pg_hba.conf file. I changed the following line in my pg_hba.conf file:

host    all             all             127.0.0.1/32            ident

to

host    all             all             127.0.0.1/32            trust

Then, I restarted the postgresql-11 service, like this:

systemctl restart postgresql-11.service

The JavaTest program ran successfully and returned:

PostgreSQL 11.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 9.1.1 20190503 (Red Hat 9.1.1-1), 64-bit

You can extend the logic to output a comma-separated values (CSV) file by leveraging the JDBC help page, like this:

/* Import classes. */
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.logging.Level;
import java.util.logging.Logger;
 
/* Create a class with only a static method for runtime testing. */
public class JavaReturnSet { 
  public static void main(String[] args) {
    /* Set default strings for the connection. */
    String url = "jdbc:postgresql://localhost:5432/videodb";
    String user = "student";
    String password = "student";
 
    /* Try the connection and statement. */
    try {
      /* Set connection, statement, and result set. */
      Connection conn = DriverManager.getConnection(url, user, password);
      Statement st = conn.createStatement();
 
      /* Use the + to concatenate lines for query clarity. */
      ResultSet rs = st.executeQuery("SELECT m.account_number\n" +
                                     ",      CONCAT(c.last_name,', ',c.first_name)\n" +
                                     "FROM   member m JOIN contact c\n" +
                                     "ON     m.member_id = c.member_id");
 
      /* Get query metadata for subsequent management of results. */
      ResultSetMetaData rsmd = rs.getMetaData();
      String line;
 
      while (rs.next()) {
        /* Initialize the line output for each row. */
        line = "";
 
        /* Process the columns; type code 12 is java.sql.Types.VARCHAR. */
        for (int i = 0; i < rsmd.getColumnCount(); i++) {
          if (rsmd.getColumnType(i + 1) == 12) {
            line = line + "\"" + rs.getString(i + 1) + "\"";
          }
          else {
            line = line + rs.getInt(i + 1);
          }
 
          /* Put a comma between output columns. */
          if (i < rsmd.getColumnCount() - 1) {
            line = line + ",";
          }
        }
        System.out.println(line);
      }
 
      /* Close the result set, statement, and connection. */
      rs.close();
      st.close();
      conn.close();
    }
    catch (SQLException e) {
      Logger logger = Logger.getLogger(JavaReturnSet.class.getName());
      logger.log(Level.SEVERE, e.getMessage(), e);
    }
  }
}

As always, I hope this helps those looking for a clean solution.

Written by maclochlainn

November 2nd, 2019 at 11:46 pm