MacLochlainn's Weblog

Michael McLaughlin's Technical Blog


Archive for the ‘DBA’ Category

PostgreSQL CLI Error


Problems with my students’ installations get reported to me all the time; this one was interesting. They got an error complaining that the libpq.so.5 library had no version information available.

psql: /usr/pgsql-11/lib/libpq.so.5: no version information available (required by psql)
psql: /usr/pgsql-11/lib/libpq.so.5: no version information available (required by psql)
could not change directory to "/root": Permission denied
psql (11.7, server 11.8)
Type "help" for help.
 
postgres=#

It appeared to be a mismatch of libraries, but it’s not that. For reference, this was a Fedora instance. I ran the rpm utility:

rpm -qa | grep postgres

It returned:

postgresql11-libs-11.8-1PGDG.f30.x86_64
postgresql-11.7-2.fc30.x86_64
postgresql-server-11.7-2.fc30.x86_64

Then, I had them run the rpm utility again looking for the Python driver for PostgreSQL:

rpm -qa | grep psycopg2

It returned:

python3-psycopg2-2.7.7-1.fc30.x86_64

Then, it was easy to explain. The Python psycopg2 library depends on both the PostgreSQL 11.7 and 11.8 libraries, and the libpq.so.5 library from the 11.8 package was built without version information. You can safely ignore the error, which is really only a warning message, when you work with Fedora, PostgreSQL 11, and Python 3.
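If you want to confirm which libpq.so.5 file the psql client actually loads, the ldd utility lists the resolved shared libraries; the grep filter is just a convenience:

ldd $(which psql) | grep libpq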

Written by maclochlainn

March 2nd, 2022 at 12:41 am

PostgreSQL Tables


The most straightforward way to view the description of a PostgreSQL table is the \d command. For example, this lets you display an account_list table:

\d account_list

Unfortunately, this shows you the table, indexes, and foreign key constraints. Often, you only want to see the list of columns in positional order. So, I wrote a little function to let me display only the table and columns.

There are a few techniques in the script that might seem new to some developers. For example, a function that returns values from the data dictionary must declare its return parameters with the data dictionary’s own types. These specialized types are required because the SQL cursor gathers the information from the data dictionary in the information_schema, and most of these types can’t be cast as variable-length strings.

A simple assumption that the data dictionary strings would implicitly cast to variable-length strings is incorrect. While you can query them like VARCHAR variables, they don’t cast to variable-length strings.
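For illustration, a naive wrapper’s header might look like this hypothetical sketch, which declares every return column with a generic type:

RETURNS TABLE ( table_schema      VARCHAR
              , table_name        VARCHAR
              , ordinal_position  INTEGER
              , column_name       VARCHAR
              , data_type         VARCHAR
              , is_nullable       VARCHAR ) AS

Calling a function declared this way would probably get a result like this: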

ERROR:  structure of query does not match function result type
DETAIL:  Returned type information_schema.sql_identifier does not match expected type character varying in column 1.

The “character varying” type is another name for the VARCHAR data type. Some notes advise you to fix this type of error by using the column name and a %TYPE. The %TYPE anchors the data type in the function’s parameter list to the actual data type of the data dictionary’s table. You would implement that suggestion with code like:

RETURNS TABLE ( table_schema      information_schema.columns.table_schema%TYPE
              , table_name        information_schema.columns.table_name%TYPE
              , ordinal_position  information_schema.columns.ordinal_position%TYPE
              , column_name       information_schema.columns.column_name%TYPE
              , data_type         information_schema.columns.data_type%TYPE
              , is_nullable       information_schema.columns.is_nullable%TYPE ) AS

Unfortunately, your function would raise a NOTICE for every dynamically anchored column when you create it. The NOTICE messages appear as follows for the describe_table function with anchored parameter values:

psql:describe_table.sql:34: NOTICE:  type reference information_schema.columns.table_schema%TYPE converted to information_schema.sql_identifier
psql:describe_table.sql:35: NOTICE:  type reference information_schema.columns.table_name%TYPE converted to information_schema.sql_identifier
psql:describe_table.sql:36: NOTICE:  type reference information_schema.columns.ordinal_position%TYPE converted to information_schema.cardinal_number
psql:describe_table.sql:37: NOTICE:  type reference information_schema.columns.column_name%TYPE converted to information_schema.sql_identifier
psql:describe_table.sql:38: NOTICE:  type reference information_schema.columns.data_type%TYPE converted to information_schema.character_data
psql:describe_table.sql:39: NOTICE:  type reference information_schema.columns.is_nullable%TYPE converted to information_schema.yes_or_no

As a rule, there’s a better solution when you know how to discover the underlying data types. You can discover the required data types with the following query against the pg_attribute table in the pg_catalog schema:

SELECT attname
,      atttypid::regtype
FROM   pg_attribute
WHERE  attrelid = 'information_schema.columns'::regclass
AND    attname IN ('table_schema','table_name','ordinal_position','column_name','data_type','is_nullable')
ORDER  BY attnum;

It returns:

     attname      |              atttypid
------------------+------------------------------------
 table_schema     | information_schema.sql_identifier
 table_name       | information_schema.sql_identifier
 ordinal_position | information_schema.cardinal_number
 column_name      | information_schema.sql_identifier
 is_nullable      | information_schema.yes_or_no
 data_type        | information_schema.character_data
(6 rows)

Only the character_data type can be replaced with a VARCHAR data type; the others should be typed as shown above. Here’s the modified describe_table function:

CREATE OR REPLACE
  FUNCTION describe_table (table_name_in  VARCHAR)
  RETURNS TABLE ( table_schema      information_schema.sql_identifier
                , table_name        information_schema.sql_identifier
                , ordinal_position  information_schema.cardinal_number
                , column_name       information_schema.sql_identifier
                , data_type         VARCHAR
                , is_nullable       information_schema.yes_or_no ) AS
$$
BEGIN
  RETURN QUERY
  SELECT   c.table_schema
  ,        c.table_name
  ,        c.ordinal_position
  ,        c.column_name
  ,        CASE
             WHEN c.character_maximum_length IS NOT NULL
             THEN CONCAT(c.data_type, '(', c.character_maximum_length, ')')
             ELSE
               CASE
                 WHEN c.data_type NOT IN ('date','timestamp','timestamp with time zone')
                 THEN CONCAT(c.data_type, '(', c.numeric_precision::text, ')')
                 ELSE c.data_type
               END
           END AS modified_type
  ,        c.is_nullable
  FROM     information_schema.columns c
  WHERE    c.table_schema NOT IN ('information_schema', 'pg_catalog')
  AND      c.table_name = table_name_in
  ORDER BY c.table_schema
  ,        c.table_name
  ,        c.ordinal_position;
END;
$$ LANGUAGE plpgsql;

If you’re new to PL/pgSQL table functions, you can check my basic tutorial on table functions. You call the describe_table table function with the following syntax:

SELECT * FROM describe_table('account_list');

It returns:

 table_schema |  table_name  | ordinal_position |   column_name    |        data_type         | is_nullable
--------------+--------------+------------------+------------------+--------------------------+-------------
 public       | account_list |                1 | account_list_id  | integer(32)              | NO
 public       | account_list |                2 | account_number   | character varying(10)    | NO
 public       | account_list |                3 | consumed_date    | date                     | YES
 public       | account_list |                4 | consumed_by      | integer(32)              | YES
 public       | account_list |                5 | created_by       | integer(32)              | NO
 public       | account_list |                6 | creation_date    | timestamp with time zone | NO
 public       | account_list |                7 | last_updated_by  | integer(32)              | NO
 public       | account_list |                8 | last_update_date | timestamp with time zone | NO
(8 rows)

As always, I hope this helps those looking for a function that wraps the PostgreSQL data dictionary and displays a table’s column metadata.

Written by maclochlainn

February 27th, 2022 at 12:43 am

Oracle Container User


After you create and provision the Oracle Database 21c Express Edition (XE), you can create a c##student container user with the following two-step process.

  1. Create a c##student Oracle user account with the following command:

    CREATE USER c##student IDENTIFIED BY student
    DEFAULT TABLESPACE users QUOTA 200M ON users
    TEMPORARY TABLESPACE temp;

  2. Grant necessary privileges to the newly created c##student user:

    GRANT CREATE CLUSTER, CREATE INDEXTYPE, CREATE OPERATOR
    ,     CREATE PROCEDURE, CREATE SEQUENCE, CREATE SESSION
    ,     CREATE TABLE, CREATE TRIGGER, CREATE TYPE
    ,     CREATE VIEW TO c##student;
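You can verify the new account by connecting with it from SQL*Plus; this sketch assumes the default XE service name on the local listener port:

CONNECT c##student/student@//localhost:1521/XE
SHOW USER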

As always, I hope this helps those looking for how to do something that’s less than clear because everybody uses tools.

Written by maclochlainn

January 31st, 2022 at 5:58 pm

Protocol adapter error


One of the errors that defeats a lot of new users who install the Oracle Database on the Windows operating system is a two-step event. The first step occurs when you try to connect to the database and it raises the following error:

SQL*Plus: Release 18.0.0.0.0 - Production on Fri Jan 7 21:00:42 2022
Version 18.4.0.0.0
 
Copyright (c) 1982, 2018, Oracle.  All rights reserved.
 
ERROR:
ORA-12541: TNS:no listener

The second step may occur after you get the “no listener” error, when you try to start the Oracle listener and it fails. The Oracle Listener Control command is:

lsnrctl start

It returns the following error:

LSNRCTL for 64-bit Windows: Version 18.0.0.0.0 - Production on 07-JAN-2022 21:02:20
 
Copyright (c) 1991, 2018, Oracle.  All rights reserved.
 
Starting tnslsnr: please wait...
 
Unable to OpenSCManager: err=5
TNS-12560: TNS:protocol adapter error
TNS-00530: Protocol adapter error

The problem generally lies in two configuration files: the listener.ora and tnsnames.ora files. It typically occurs when the developer fails to map localhost in the Windows operating system hosts configuration file. The chain of events that causes these errors can be avoided when the user puts the following two lines:

127.0.0.1      localhost
::1            localhost

in the following hosts file:

C:\Windows\system32\drivers\etc\hosts

You can typically avoid these errors when you configure the hosts configuration file correctly before installing the Oracle Database. That’s because the Oracle database installation will use the localhost keyword instead of the current, and typically DHCP-assigned, IP address.

The loss of connectivity errors typically occur when the IP address changes after the installation. DHCP IP addresses often change as machines disconnect and reconnect to a network.

You can fix a DHCP IP installation of an Oracle database by editing the listener.ora and tnsnames.ora files. You replace the IP addresses with the localhost keyword.

The listener.ora and tnsnames.ora files look like the following for an Oracle Database 21c Express Edition (provided you installed it in a C:\app\username directory):

listener.ora

# listener.ora Network Configuration File: C:\app\username\product\21.0.0\dbhomeXE\NETWORK\ADMIN\listener.ora
# Generated by Oracle configuration tools.
 
DEFAULT_SERVICE_LISTENER = XE
 
SID_LIST_LISTENER =
 (SID_LIST =
   (SID_DESC =
     (SID_NAME = CLRExtProc)
     (ORACLE_HOME = C:\app\username\product\21.0.0\dbhomeXE)
     (PROGRAM = extproc)
     (ENVS = "EXTPROC_DLLS=ONLY:C:\app\username\product\21.0.0\dbhomeXE\bin\oraclr21.dll")
   )
 )
 
LISTENER =
 (DESCRIPTION_LIST =
   (DESCRIPTION =
     (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
     (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
   )
 )

tnsnames.ora

# tnsnames.ora Network Configuration File: C:\app\mclaughlinm\product\21.0.0\dbhomeXE\NETWORK\ADMIN\tnsnames.ora
# Generated by Oracle configuration tools.
 
XE =
 (DESCRIPTION =
   (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
   (CONNECT_DATA =
     (SERVER = DEDICATED)
     (SERVICE_NAME = XE)
   )
 )
 
LISTENER_XE =
 (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
 
 
ORACLR_CONNECTION_DATA =
 (DESCRIPTION =
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
   )
   (CONNECT_DATA =
     (SID = CLRExtProc)
     (PRESENTATION = RO)
   )
 )
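After you save the edited files, you can restart the listener and confirm the result with the standard Listener Control commands:

lsnrctl stop
lsnrctl start
lsnrctl status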

As always, I hope this helps those looking for a solution to something that can take more time than it should to fix.

Oracle’s Sparse Lists


Oracle’s PL/SQL programming language is really quite nice. I’ve written eight books on it and still have fun coding in it. One nasty little detail about Oracle’s lists, introduced in Oracle 8 as PL/SQL Tables according to their documentation, is that they rely on sequential numeric indexes. Unfortunately, Oracle lists support a DELETE method, which can create gaps in the sequential indexes.

Oracle calls a sequence without gaps densely populated and a sequence with gaps sparsely populated. This can cause problems when PL/SQL code inadvertently removes elements at the beginning, end, or somewhere in the middle of the list. That’s because a program can then pass the sparsely populated list as a parameter to another stored function or procedure, where the developer may traverse the list in a for-loop. That traversal raises an exception when the index sequence has gaps.
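The failing pattern is typically a range for-loop like this hypothetical fragment, which assumes every index between the first and last values exists:

BEGIN
  FOR i IN sparse.FIRST..sparse.LAST LOOP
    dbms_output.put_line(sparse(i));  -- Raises NO_DATA_FOUND on a deleted index.
  END LOOP;
END;
/

When the loop touches a deleted index, it fails like this: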

DECLARE
*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at line 20

Oracle’s myriad built-in libraries don’t offer a function to compact a sparsely populated list into a densely populated one. This post provides a compact stored procedure that performs the conversion.

The first step to using the compact stored procedure requires that you create an object type in SQL, like this list of 20-character strings:

DROP TYPE list;
CREATE OR REPLACE
  TYPE list IS TABLE OF VARCHAR2(20);
/

Now, you can implement the compact stored procedure, which takes the user-defined type as its sole parameter:

CREATE OR REPLACE
  PROCEDURE compact ( sparse IN OUT LIST ) IS
    /* Declare local variables. */
    iterator  NUMBER;           -- Leave iterator as null.
 
    /* Declare new list. */
    dense     LIST := list();
  BEGIN
    /*
      Initialize the iterator with the starting value, which is
      necessary because the first element of the original list
      could have been deleted in earlier operations. Setting the
      initial iterator value to the first numeric index value
      ensures you start at the lowest available index value.
    */
    iterator := sparse.FIRST;
 
    /* Convert sparsely populated list to densely populated. */
    WHILE (iterator <= sparse.LAST) LOOP
      dense.EXTEND;
      dense(dense.COUNT) := sparse(iterator);
      iterator := sparse.NEXT(iterator);
    END LOOP;
 
    /* Replace the input parameter with the compacted list. */
    sparse := dense;
  END;
/

Before we test the compact stored procedure, let’s create a deleteElement stored procedure for our testing:

CREATE OR REPLACE
  PROCEDURE deleteElement ( sparse   IN OUT LIST
                          , element  IN     NUMBER ) IS
  BEGIN
    /* Delete a value. */
    sparse.DELETE(element);
  END;
/

Now, let’s use an anonymous block to test compacting a sparsely populated list into a densely populated list. The test program will remove the first, last, and one element in the middle before printing the sparsely populated list’s index and string values. This test will show you gaps in the remaining non-sequential index values.

After you see the gaps, the test program compacts the remaining list values into a new densely populated list. It then prints the new index values with the data values.

DECLARE
  /* Declare a seven-item list. */
  lv_strings  LIST := list('one','two','three','four','five','six','seven');
BEGIN
  /* Check size of list. */
  dbms_output.put_line('Print initial list size:  ['||lv_strings.COUNT||']');
  dbms_output.put_line('===================================');
 
  /* Delete a value. */
  deleteElement(lv_strings,lv_strings.FIRST);
  deleteElement(lv_strings,3);
  deleteElement(lv_strings,lv_strings.LAST);
 
  /* Check size of list. */
  dbms_output.put_line('Print modified list size: ['||lv_strings.COUNT||']');
  dbms_output.put_line('Print max index and size: ['||lv_strings.LAST||']['||lv_strings.COUNT||']');
  dbms_output.put_line('===================================');
  FOR i IN 1..lv_strings.LAST LOOP
    IF lv_strings.EXISTS(i) THEN
      dbms_output.put_line('List index and item: ['||i||']['||lv_strings(i)||']');
    END IF;
  END LOOP;
 
  /* Call a procedure by passing current sparse collection and
     the procedure returns dense collection. */
  dbms_output.put_line('===================================');
  dbms_output.put_line('Compacting list.');
  compact(lv_strings);
  dbms_output.put_line('===================================');
 
  /* Print the new maximum index value and list size. */
  dbms_output.put_line('Print new index and size: ['||lv_strings.LAST||']['||lv_strings.COUNT||']');
  dbms_output.put_line('===================================');
  FOR i IN 1..lv_strings.COUNT LOOP
    dbms_output.put_line('List index and item: ['||i||']['||lv_strings(i)||']');
  END LOOP;
  dbms_output.put_line('===================================');
END;
/

It produces output like this:

Print initial list size:  [7]
===================================
Print modified list size: [4]
Print max index and size: [6][4]
===================================
List index and item: [2][two]
List index and item: [4][four]
List index and item: [5][five]
List index and item: [6][six]
===================================
Compacting list.
===================================
Print new index and size: [4][4]
===================================
List index and item: [1][two]
List index and item: [2][four]
List index and item: [3][five]
List index and item: [4][six]
===================================

You can extend this concept by creating user-defined types with multiple attributes, which are essentially lists of tuples (to borrow Pythonic lingo).
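For example, a hypothetical pair object type and a collection of it give you a list of two-attribute tuples:

CREATE OR REPLACE
  TYPE pair IS OBJECT
  ( name   VARCHAR2(20)
  , value  VARCHAR2(20));
/
 
CREATE OR REPLACE
  TYPE pair_list IS TABLE OF pair;
/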

Written by maclochlainn

October 4th, 2021 at 11:49 pm

MySQL sakila Database


While I thought my instructions were clear, it appears my examples for using the MySQL MSI needed more detail. A key problem is that some students opted not to install:

Samples and Examples 8.0.22

Unfortunately, they may not have read the Preface of Alan Beaulieu’s Learning SQL, 3rd Edition, where he explains how to manually download the files from the MySQL web site. Here are those very clear instructions (pg. XV), with my additions in italics for the MySQL Shell:

First, you will need to launch the mysql command-line client or the mysqlsh command-line shell, provide a password, and then perform the following steps:

  1. Go to https://dev.mysql.com/doc/index-other.html and download the files for the “sakila database” under the Example Database section.
  2. Put the files in a local directory such as C:\temp\sakila-db (used in the next two steps; substitute your own directory path).
  3. Type

    source c:\temp\sakila-db\sakila-schema.sql

    and press enter.

  4. Type

    source c:\temp\sakila-db\sakila-data.sql

    and press enter.

These instructions let you create the sakila database without rerunning the MSI to add a product. Naturally, you can avoid these steps by using the GUI approach provided in the MySQL MSI file.
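You can verify the load with a quick query; the stock sakila database inserts 1,000 rows into the film table:

USE sakila;
SELECT COUNT(*) FROM film;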

As always, I hope this helps those looking for how to solve problems.

Written by maclochlainn

January 9th, 2021 at 11:22 am

MySQL Self-Join


I’m switching to MySQL and leveraging Alan Beaulieu’s Learning SQL as a supporting reference for my Database Design and Development course. While reviewing Alan’s Chapter 5: Querying Multiple Tables, I found his coverage of using self-joins minimal.

In fact, he adds a prequel_film_id column to the film table in the sakila database and then a single row to demonstrate a minimal self-join query. I wanted to show my students how to view a series of rows interconnected by a self-join, like the following:

SELECT   f.title AS film
,        fp.title AS prequel
FROM     film f INNER JOIN film fp
ON       f.prequel_id = fp.film_id
WHERE    f.series_name = 'Harry Potter'
ORDER BY f.series_number;

It returns the following result set:

+----------------------------------------------+----------------------------------------------+
| film                                         | prequel                                      |
+----------------------------------------------+----------------------------------------------+
| Harry Potter and the Chamber of Secrets      | Harry Potter and the Sorcerer's Stone        |
| Harry Potter and the Prisoner of Azkaban     | Harry Potter and the Chamber of Secrets      |
| Harry Potter and the Goblet of Fire          | Harry Potter and the Prisoner of Azkaban     |
| Harry Potter and the Order of the Phoenix    | Harry Potter and the Goblet of Fire          |
| Harry Potter and the Half Blood Prince       | Harry Potter and the Order of the Phoenix    |
| Harry Potter and the Deathly Hallows: Part 1 | Harry Potter and the Half Blood Prince       |
| Harry Potter and the Deathly Hallows: Part 2 | Harry Potter and the Deathly Hallows: Part 1 |
+----------------------------------------------+----------------------------------------------+
7 rows in set (0.00 sec)

Then, I thought about what questions the students might ask. For example, why doesn’t the query return the first film that doesn’t have a prequel. So, I took the self-join to the next level to display the first film having no prequel, like this:

SELECT   f.title AS film
,        IFNULL(
           CASE
             WHEN NOT f.film_id = fp.film_id
             AND      f.prequel_id = fp.film_id THEN fp.title		   
           END,'None') AS prequel
FROM     film f LEFT JOIN film fp
ON       f.prequel_id = fp.film_id
WHERE    f.series_name = 'Harry Potter'
ORDER BY f.series_number;

The CASE operator in the SELECT-list filters the result set by eliminating rows erroneously returned. Without the CASE filter, the query would return the original Harry Potter and the Sorcerer’s Stone film matched against a NULL as well as all of the other sequels. The CASE operator effectively limits the result set for the LEFT JOIN to only the following data:

+----------------------------------------------+----------------------------------------------+
| film                                         | prequel                                      |
+----------------------------------------------+----------------------------------------------+
| Harry Potter and the Sorcerer's Stone        | NULL                                         |
+----------------------------------------------+----------------------------------------------+

The IFNULL() built-in function lets you replace the NULL value returned as the prequel’s title. It substitutes a 'None' string literal for the NULL value. The query returns the following result set:

+----------------------------------------------+----------------------------------------------+
| film                                         | prequel                                      |
+----------------------------------------------+----------------------------------------------+
| Harry Potter and the Sorcerer's Stone        | None                                         |
| Harry Potter and the Chamber of Secrets      | Harry Potter and the Sorcerer's Stone        |
| Harry Potter and the Prisoner of Azkaban     | Harry Potter and the Chamber of Secrets      |
| Harry Potter and the Goblet of Fire          | Harry Potter and the Prisoner of Azkaban     |
| Harry Potter and the Order of the Phoenix    | Harry Potter and the Goblet of Fire          |
| Harry Potter and the Half Blood Prince       | Harry Potter and the Order of the Phoenix    |
| Harry Potter and the Deathly Hallows: Part 1 | Harry Potter and the Half Blood Prince       |
| Harry Potter and the Deathly Hallows: Part 2 | Harry Potter and the Deathly Hallows: Part 1 |
+----------------------------------------------+----------------------------------------------+
8 rows in set (0.01 sec)

Alan’s modification of the sakila.film table had the following two related design flaws:

  • It didn’t provide a way to guarantee the ordering of films with prequels because relational databases don’t guarantee ordered result sets unless you use an ORDER BY clause, which typically requires a column to order by.
  • It didn’t provide a way to isolate a series of films.

I modified the film table differently by adding the series_name, series_number, and prequel_id columns. The series_name column lets you group results, and the series_number column lets you order by a preserved sequence that you store as part of the data. The prequel_id column lets you connect to the prequel film, much like the backward pointer of a doubly linked list.

The new sakila.film table is:

+----------------------+---------------------------------------------------------------------+------+-----+-------------------+-----------------------------------------------+
| Field                | Type                                                                | Null | Key | Default           | Extra                                         |
+----------------------+---------------------------------------------------------------------+------+-----+-------------------+-----------------------------------------------+
| film_id              | smallint unsigned                                                   | NO   | PRI | NULL              | auto_increment                                |
| title                | varchar(255)                                                        | NO   | MUL | NULL              |                                               |
| description          | text                                                                | YES  |     | NULL              |                                               |
| release_year         | year                                                                | YES  |     | NULL              |                                               |
| language_id          | tinyint unsigned                                                    | NO   | MUL | NULL              |                                               |
| original_language_id | tinyint unsigned                                                    | YES  | MUL | NULL              |                                               |
| rental_duration      | tinyint unsigned                                                    | NO   |     | 3                 |                                               |
| rental_rate          | decimal(4,2)                                                        | NO   |     | 4.99              |                                               |
| length               | smallint unsigned                                                   | YES  |     | NULL              |                                               |
| replacement_cost     | decimal(5,2)                                                        | NO   |     | 19.99             |                                               |
| rating               | enum('G','PG','PG-13','R','NC-17')                                  | YES  |     | G                 |                                               |
| special_features     | set('Trailers','Commentaries','Deleted Scenes','Behind the Scenes') | YES  |     | NULL              |                                               |
| last_update          | timestamp                                                           | NO   |     | CURRENT_TIMESTAMP | DEFAULT_GENERATED on update CURRENT_TIMESTAMP |
| series_name          | varchar(20)                                                         | YES  |     | NULL              |                                               |
| series_number        | int unsigned                                                        | YES  |     | NULL              |                                               |
| prequel_id           | int unsigned                                                        | YES  |     | NULL              |                                               |
+----------------------+---------------------------------------------------------------------+------+-----+-------------------+-----------------------------------------------+
16 rows in set (0.21 sec)

After adding the three new columns, I inserted eight rows for the original Harry Potter films. You can use the following script in the MySQL client (mysql) to add the columns and insert the data to test the preceding queries:

-- Use sakila database.
USE sakila;
 
-- Add a prequel_id column to the sakila.film table.
ALTER TABLE film
ADD (series_name    varchar(20)),
ADD (series_number  int unsigned),
ADD (prequel_id     int unsigned);
 
-- Set a session variable to carry the primary key forward as a foreign key.
SET @sv_film_id = 0;
 
-- Insert Harry Potter films in sakila.film table with classic values clause.
INSERT INTO film
( title
, description
, release_year
, language_id
, original_language_id
, rental_duration
, rental_rate
, length
, replacement_cost
, rating
, special_features
, last_update
, series_name
, series_number
, prequel_id )
VALUES
('Harry Potter and the Sorcerer''s Stone'
,'A film about a young boy who on his eleventh birthday discovers he is the orphaned son of two powerful wizards and has unique magical powers.'
, 2001
, 1
, NULL
, 3
, 0.99
, 152
, 19.99
,'PG'
,'Trailers'
,'2001-11-04'
,'Harry Potter'
, 1
, NULL );
 
-- Assign the last generated primary key value to the local variable.
SET @sv_film_id := last_insert_id();
 
-- Insert 2nd film in sakila.film table with classic values clause.
INSERT INTO film
( title
, description
, release_year
, language_id
, original_language_id
, rental_duration
, rental_rate
, length
, replacement_cost
, rating
, special_features
, last_update
, series_name
, series_number
, prequel_id )
VALUES
('Harry Potter and the Chamber of Secrets'
,'A film where Harry returns to Hogwarts, still famous and a hero, and strange things start to happen ... people are turning to stone and no-one knows what, or who, is doing it.'
, 2002
, 1
, NULL
, 3
, 0.99
, 160
, 19.99
,'PG'
,'Trailers'
,'2002-11-15'
,'Harry Potter'
, 2
, @sv_film_id );
 
-- Assign the last generated primary key value to the local variable.
SET @sv_film_id := last_insert_id();
 
-- Insert 3rd film in sakila.film table with classic values clause.
INSERT INTO film
( title
, description
, release_year
, language_id
, original_language_id
, rental_duration
, rental_rate
, length
, replacement_cost
, rating
, special_features
, last_update
, series_name
, series_number
, prequel_id )
VALUES
('Harry Potter and the Prisoner of Azkaban'
,'A film where Harry, Ron, and Hermione return for their third year at Hogwarts and are forced to face escaped prisoner, Sirius Black.'
, 2004
, 1
, NULL
, 3
, 0.99
, 141
, 19.99
,'PG'
,'Trailers'
,'2004-06-04'
,'Harry Potter'
, 3
, @sv_film_id );
 
-- Assign the last generated primary key value to the local variable.
SET @sv_film_id := last_insert_id();
 
-- Insert 4th film in sakila.film table with classic values clause.
INSERT INTO film
( title
, description
, release_year
, language_id
, original_language_id
, rental_duration
, rental_rate
, length
, replacement_cost
, rating
, special_features
, last_update
, series_name
, series_number
, prequel_id )
VALUES
('Harry Potter and the Goblet of Fire'
,'A film where Harry Potter''s name emerges from the Goblet of Fire, and he becomes a competitor in a grueling battle for glory among three wizarding schools - the Triwizard Tournament.'
, 2005
, 1
, NULL
, 3
, 0.99
, 157
, 19.99
,'PG'
,'Trailers'
,'2005-11-18'
,'Harry Potter'
, 4
, @sv_film_id );
 
-- Assign the last generated primary key value to the local variable.
SET @sv_film_id := last_insert_id();
 
-- Insert 5th film in sakila.film table with classic values clause.
INSERT INTO film
( title
, description
, release_year
, language_id
, original_language_id
, rental_duration
, rental_rate
, length
, replacement_cost
, rating
, special_features
, last_update
, series_name
, series_number
, prequel_id )
VALUES
('Harry Potter and the Order of the Phoenix'
,'A film where Lord Voldemort has returned, but the Ministry of Magic is doing everything it can to keep the wizarding world from knowing the truth.'
, 2007
, 1
, NULL
, 3
, 0.99
, 138
, 19.99
,'PG-13'
,'Trailers'
,'2007-07-12'
,'Harry Potter'
, 5
, @sv_film_id );
 
-- Assign the last generated primary key value to the local variable.
SET @sv_film_id := last_insert_id();
 
-- Insert 6th film in sakila.film table with classic values clause.
INSERT INTO film
( title
, description
, release_year
, language_id
, original_language_id
, rental_duration
, rental_rate
, length
, replacement_cost
, rating
, special_features
, last_update
, series_name
, series_number
, prequel_id )
VALUES
('Harry Potter and the Half Blood Prince'
,'A film where Voldemort is tightening his grip on Hogwarts and it is no longer the safe haven it once was. Harry and Dumbledore work to find the key to unlock the Dark Lord''s defenses.'
, 2009
, 1
, NULL
, 3
, 0.99
, 153
, 19.99
,'PG'
,'Trailers'
,'2009-07-15'
,'Harry Potter'
, 6
, @sv_film_id );
 
-- Assign the last generated primary key value to the local variable.
SET @sv_film_id := last_insert_id();
 
-- Insert 7th film in sakila.film table with classic values clause.
INSERT INTO film
( title
, description
, release_year
, language_id
, original_language_id
, rental_duration
, rental_rate
, length
, replacement_cost
, rating
, special_features
, last_update
, series_name
, series_number
, prequel_id )
VALUES
('Harry Potter and the Deathly Hallows: Part 1'
,'A film where Harry, Ron and Hermione set out on their perilous mission to track down and destroy the Horcruxes - the keys to Voldemort''s immortality.'
, 2010
, 1
, NULL
, 3
, 0.99
, 146
, 19.99
,'PG-13'
,'Trailers'
,'2010-11-19'
,'Harry Potter'
, 7
, @sv_film_id );
 
-- Assign the last generated primary key value to the local variable.
SET @sv_film_id := last_insert_id();
 
-- Insert 8th film in sakila.film table with classic values clause.
INSERT INTO film
( title
, description
, release_year
, language_id
, original_language_id
, rental_duration
, rental_rate
, length
, replacement_cost
, rating
, special_features
, last_update
, series_name
, series_number
, prequel_id )
VALUES
('Harry Potter and the Deathly Hallows: Part 2'
,'A film where Harry, Ron and Hermione set out on their perilous mission to track down and destroy the Horcruxes - the keys to Voldemort''s immortality.'
, 2011
, 1
, NULL
, 3
, 0.99
, 130
, 19.99
,'PG-13'
,'Trailers'
,'2011-07-15'
,'Harry Potter'
, 8
, @sv_film_id );

You can put the following commands into a SQL script file to revert the sakila.film table to its base configuration:

DELETE FROM film WHERE film_id > 1000;
ALTER TABLE film DROP COLUMN series_name;
ALTER TABLE film DROP COLUMN series_number;
ALTER TABLE film DROP COLUMN prequel_id;
ALTER TABLE film AUTO_INCREMENT = 1000;
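You can confirm the revert with a quick check of the added columns; the following statement should return an empty set once the columns are dropped:

SHOW COLUMNS FROM film LIKE 'series%';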

As always, I hope this helps those looking for how to solve a new problem.

Written by maclochlainn

November 22nd, 2020 at 4:03 pm

Oracle External Table


Supporting my student labs requires the ability to use external tables. I didn’t have a chance to test external tables when I adopted Oracle’s 18c Docker installations. The following are the instructions for the macOS version, which work. Unfortunately, I couldn’t sort out a way to implement external tables in the Windows version of Oracle’s 18c Docker installation. If somebody knows the way to do that, feel free to drop me a note or a URL to where the information may be found.

These two blog posts explain Oracle’s external tables:

External Tables
External Tables with Preprocessing

These instructions build on my base macOS instructions, which you can find in this earlier Install, configure, and use an Oracle Docker Container blog post. You can find the existing Oracle virtual directories when you connect as the system user and query the dba_directories view. I used the following SQL*Plus formatting commands for the query:

SET PAGESIZE 99
COL directory_name FORMAT A24
COL directory_path FORMAT A54

and the following SQL query:

SELECT directory_name
,      directory_path
FROM   dba_directories;

It returns the following results:

DIRECTORY_NAME		 DIRECTORY_PATH
------------------------ ------------------------------------------------------
ORA_DBMS_FCP_LOGDIR	 /opt/oracle/product/18c/dbhomeXE/cfgtoollogs
SDO_DIR_WORK		 /ade/b/2794046351/oracle/work
SDO_DIR_ADMIN		 /ade/b/2794046351/oracle/md/admin
ORA_DBMS_FCP_ADMINDIR	 /opt/oracle/product/18c/dbhomeXE/rdbms/admin
XMLDIR			 /opt/oracle/product/18c/dbhomeXE/rdbms/xml
XSDDIR			 /opt/oracle/product/18c/dbhomeXE/rdbms/xml/schema
ORACLE_OCM_CONFIG_DIR2	 /opt/oracle/product/18c/dbhomeXE/ccr/state
ORACLE_OCM_CONFIG_DIR	 /opt/oracle/product/18c/dbhomeXE/ccr/state
OPATCH_INST_DIR 	 /opt/oracle/product/18c/dbhomeXE/OPatch
DATA_PUMP_DIR		 /opt/oracle/admin/XE/dpdump/
OPATCH_SCRIPT_DIR	 /opt/oracle/product/18c/dbhomeXE/QOpatch
OPATCH_LOG_DIR		 /opt/oracle/product/18c/dbhomeXE/rdbms/log
ORACLE_BASE		 /opt/oracle
ORACLE_HOME		 /opt/oracle/product/18c/dbhomeXE
 
14 rows selected.

In that prior post, you will find instructions for creating an environment file. The following instructions leverage the $ORACLE_BASE environment variable, which points to the /opt/oracle directory in the Docker environment.

Here are the steps to complete the external file setup, including how to position the physical comma-separated value (CSV) files in the available but otherwise hidden Docker directories. Hidden might be the wrong word choice, but they’re not visible from the host macOS operating system. You have to connect to the Docker instance as the root user.

The following Docker command, used in the prior blog post, connects as the root user:

docker exec -it videodb bash

The instructions cover steps both before and after connecting to the Docker instance:

  1. Put the transaction_upload.csv and transaction_upload2.csv files into the following macOS host directory:

    /Users/<installuser>/oracle/student

    which maps to the following internal Docker directory:

    /home/student
  2. As the connected root user, change directory to the $ORACLE_BASE (/opt/oracle) directory with the following command:

    cd $ORACLE_BASE

  3. As the connected root user, make an upload directory inside the $ORACLE_BASE (/opt/oracle) directory with the following command:

    mkdir upload

    As the root user, change the ownership of the upload directory from root as the owner and primary group to oracle as the owner and oinstall as the primary group with the following command:

    chown oracle:oinstall upload

  4. As the connected root user, copy the transaction_upload.csv and transaction_upload2.csv files from the /home/student directory to the $ORACLE_BASE/upload (/opt/oracle/upload) directory with the following command:

    cp ~student/transaction_upload*.csv $ORACLE_BASE/upload/.

  5. As the connected root user, change from the $ORACLE_BASE (/opt/oracle) directory to the upload subdirectory with the following command:

    cd upload

    As the root user, change the ownership of the transaction_upload.csv and transaction_upload2.csv files from root as the owner and primary group to oracle as the owner and oinstall as the primary group with the following command:

    chown oracle:oinstall transaction_upload*.csv

  6. As the connected Oracle system user, create the upload virtual directory with the following command:

    CREATE DIRECTORY upload AS '/opt/oracle/upload';

    As the connected Oracle system user, grant read and write privileges to the Oracle c##student user with the following command:

    GRANT READ, WRITE ON DIRECTORY upload TO c##student;

  7. As the connected Oracle c##student user, create the transaction_upload externally managed table with the following command:

    CREATE TABLE transaction_upload
    ( account_number          VARCHAR2(10)
    , first_name              VARCHAR2(20)
    , middle_name             VARCHAR2(20)
    , last_name               VARCHAR2(20)
    , check_out_date          DATE
    , return_date             DATE
    , rental_item_type        VARCHAR2(12)
    , transaction_type        VARCHAR2(14)
    , transaction_amount      NUMBER
    , transaction_date        DATE
    , item_id                 NUMBER
    , payment_method_type     VARCHAR2(14)
    , payment_account_number  VARCHAR2(19))
      ORGANIZATION EXTERNAL
      ( TYPE oracle_loader
        DEFAULT DIRECTORY upload
        ACCESS PARAMETERS
        ( RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
          BADFILE     'UPLOAD':'transaction_upload.bad'
          DISCARDFILE 'UPLOAD':'transaction_upload.dis'
          LOGFILE     'UPLOAD':'transaction_upload.log'
          FIELDS TERMINATED BY ','
          OPTIONALLY ENCLOSED BY "'"
          MISSING FIELD VALUES ARE NULL )
        LOCATION ('transaction_upload.csv'))
    REJECT LIMIT UNLIMITED;

    As the connected Oracle c##student user, query the transaction_upload table to verify that you can read the external file source through the virtual upload directory.

    COL  record  FORMAT 99,999  HEADING "Record|Count"
    SELECT TO_CHAR(COUNT(*),'99,999') AS record
    FROM   transaction_upload;

    It should return the following value:

    Record
    Count
    -------
     11,520

  8. As the connected Oracle c##student user, create the transaction_reversal externally managed table with the following command:

    CREATE TABLE transaction_reversal
    ( transaction_id          NUMBER
    , transaction_account     VARCHAR2(15)
    , transaction_type        VARCHAR2(30)
    , transaction_date        DATE
    , transaction_amount      NUMBER
    , rental_id               NUMBER
    , payment_method_type     NUMBER
    , payment_account_number  VARCHAR2(20)
    , created_by              NUMBER
    , creation_date           DATE
    , last_updated_by         NUMBER
    , last_update_date        DATE)
      ORGANIZATION EXTERNAL
      ( TYPE oracle_loader
        DEFAULT DIRECTORY upload
        ACCESS PARAMETERS
        ( RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
          BADFILE     'UPLOAD':'transaction_upload2.bad'
          DISCARDFILE 'UPLOAD':'transaction_upload2.dis'
          LOGFILE     'UPLOAD':'transaction_upload2.log'
          FIELDS TERMINATED BY ','
          OPTIONALLY ENCLOSED BY "'"
          MISSING FIELD VALUES ARE NULL )
        LOCATION ('transaction_upload2.csv'))
    REJECT LIMIT UNLIMITED;

    As the connected Oracle c##student user, query the transaction_reversal table to verify that you can read the external file source through the virtual upload directory.

    COL  record  FORMAT 99,999  HEADING "Record|Count"
    SELECT TO_CHAR(COUNT(*),'99,999') AS record
    FROM   transaction_reversal;

    It should return the following value:

    Record
    Count
    -------
      1,170

Unfortunately, the file permissions on the Windows version of the Oracle Docker 18c installation make it more difficult to install.

Written by maclochlainn

November 14th, 2020 at 10:57 pm

Conditional Updates


While I’m switching labs next term after more than a decade with a more comprehensive lab set, I’m hoping the new exercises build the students’ core SQL skill set. Next term, I’ll see whether the change is successful. I’ve opted for Alan Beaulieu’s Learning SQL: Generate, Manipulate, and Retrieve Data, 3rd Edition, because it’s a great book and uses the MySQL database.

One exercise that the students will lose is a data migration exercise from a badly designed common_lookup table to a well-designed common_lookup table.

There are several problems with the original common_lookup table’s design. The first is that the common_lookup_context column does not uniquely identify a location within the data model for at least one list of lookup values. While it generally uses table names, it has no way to support two or more lists within the same table. It also uses a 'MULTIPLE' string for a list of values that supports two tables: the address and telephone tables.

The lab instructions have the students add the following three columns to the table:

  • common_lookup_table
  • common_lookup_column
  • common_lookup_code

Together, the combination of the common_lookup_table and common_lookup_column columns creates a non-unique super key that identifies micro subtables. The combination of the common_lookup_table, common_lookup_column, and common_lookup_type columns creates a unique natural key that defines all possible values for a lookup list based on a column in a table.

The lab asked the students to use the existing data, string literal values, and some simple rules to populate the new common_lookup_table and common_lookup_column columns with data. The rules or steps were:

  1. Migrate valid table names from the common_lookup_context column to the common_lookup_table column.
  2. Migrate a literal 'ADDRESS' value into the common_lookup_table column when the common_lookup_context column holds a 'MULTIPLE' string value.
  3. Migrate valid table names from the common_lookup_context column to the common_lookup_column column by appending a '_TYPE' string literal to the common_lookup_context column values, except for the three rows that hold a 'VISA_CARD', 'MASTER_CARD', or 'DISCOVER_CARD' value. Those three exempted rows should update the common_lookup_column column with a 'CREDIT_CARD_TYPE' string literal.
  4. Migrate a literal 'ADDRESS_TYPE' value into the common_lookup_column column when the common_lookup_context column holds a 'MULTIPLE' string value.
  5. After these changes, insert two new rows in the common_lookup table. They should contain 'TELEPHONE' and 'TELEPHONE_TYPE' string literal values. One of the rows should contain 'HOME' for the common_lookup_type column value and the other 'WORK' for the same column.
  6. Then, the students were asked to update the foreign key column value in the telephone_type column of the telephone table.

It was a tremendous learning experience for those who did it because there were so many steps required to migrate the structure and data. Years ago, I would offer students half their final grade if they could complete the first four steps in a single UPDATE statement. Occasionally, I had students complete it. We worked through the problem with a small group of students today in one of my two weekly tutoring sessions. I thought it might be nice to document the solution, which uses CASE operators in the SET clause of the UPDATE statement.

UPDATE common_lookup
SET    common_lookup_table =
         CASE
           WHEN NOT common_lookup_context = 'MULTIPLE' THEN
             common_lookup_context
           ELSE
             'ADDRESS'
         END
,      common_lookup_column =
         CASE
           WHEN common_lookup_table = 'MEMBER' AND
                common_lookup_type IN ('VISA_CARD','MASTER_CARD','DISCOVER_CARD') THEN
             'CREDIT_CARD_TYPE'
           WHEN NOT common_lookup_context = 'MULTIPLE' THEN
             CONCAT(common_lookup_context,'_TYPE')
           ELSE
             'ADDRESS_TYPE'
         END;
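If you want to check the migration before moving on, a quick aggregation over the new natural key columns shows whether every row received table and column values; this query is only a sanity check, not part of the lab:

SELECT   common_lookup_table
,        common_lookup_column
,        COUNT(*) AS lookup_rows
FROM     common_lookup
GROUP BY common_lookup_table
,        common_lookup_column;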

As a rule, students would solve Step #6, which migrates the foreign key values of the telephone table’s telephone_type column to the new rows inserted into the common_lookup table. Most would accomplish that step with two UPDATE statements. Very few could see how to write a single UPDATE statement for both conditions, migrating from the now obsolete foreign key values that pointed to the address table’s rows in the common_lookup table to valid foreign key values that point to the telephone table’s rows:

UPDATE   telephone
SET      telephone_type =
           CASE
             WHEN telephone_type =
                    (SELECT common_lookup_id
                     FROM   common_lookup
                     WHERE  common_lookup_table = 'ADDRESS'
                     AND    common_lookup_type = 'HOME') THEN
               (SELECT common_lookup_id
                FROM   common_lookup
                WHERE  common_lookup_table = 'TELEPHONE'
                AND    common_lookup_type = 'HOME')
             ELSE
               (SELECT common_lookup_id
                FROM   common_lookup
                WHERE  common_lookup_table = 'TELEPHONE'
                AND    common_lookup_type = 'WORK')
           END
WHERE    telephone_type IN
           (SELECT common_lookup_id
            FROM   common_lookup
            WHERE  common_lookup_table = 'ADDRESS'
            AND    common_lookup_type IN ('HOME','WORK'));

As always, I hope this helps those looking for new ideas in SQL.

Written by maclochlainn

November 5th, 2020 at 11:03 pm

Wrap Oracle SQL*Plus


One of the key problems with Oracle’s deployment is that you cannot use the up-arrow key to navigate the sqlplus command-line history. Here’s a little Bash shell function that you can put in your .bashrc file. It requires your system administrator to install the rlwrap package, which wraps sqlplus with readline-style command-line history.

You should also set the $ORACLE_HOME environment variable before you put this function in your .bashrc file.
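If you haven’t set it yet, two export lines in your .bashrc file handle the environment variable; the path below is an assumption for a 21c XE installation, so adjust it to match yours:

export ORACLE_HOME=/opt/oracle/product/21c/dbhomeXE
export PATH=$PATH:$ORACLE_HOME/bin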

sqlplus ()
{
    # Discover the fully qualified program name.
    path=`which rlwrap 2>/dev/null`
    file=''
 
    # Parse the program name from the path.
    if [ -n "${path}" ]; then
        file=${path##/*/}
    fi
 
    # Wrap sqlplus when the rlwrap program is found.
    if [ -n "${file}" ] && [[ ${file} = "rlwrap" ]]; then
        rlwrap sqlplus "${@}"
    else
        echo "Command-line history unavailable: Install the rlwrap package."
        $ORACLE_HOME/bin/sqlplus "${@}"
    fi
}

If you port this shell script to an environment where rlwrap is not installed, it prints the error message advising you to install the rlwrap package and then falls back to running sqlplus without command-line history.

As always, I hope this helps those looking for a solution.

Written by maclochlainn

June 29th, 2020 at 10:53 pm