MacLochlainn's Weblog

Michael McLaughlin's Technical Blog


Archive for the ‘Database Design’ Category

Oracle External Table

without comments

Supporting my student labs requires the ability to use external tables. I didn’t have a chance to test external tables when I adopted Oracle’s 18c Docker installations. The following are the instructions for the macOS version, which work. Unfortunately, I couldn’t sort out a way to implement external tables in the Windows version of Oracle’s 18c Docker installation. If somebody knows the way to do that, feel free to drop me a note or a URL to where the information may be found.

These two blog posts explain Oracle’s external tables:

External Tables
External Tables with Preprocessing

These instructions build on my base macOS instructions, which you can find in this earlier Install, configure, and use an Oracle Docker Container blog post. You can find the existing Oracle virtual directories when you connect as the system user and query the dba_directories view. I used the following SQL*Plus formatting commands for the query:

SET PAGESIZE 99
COL directory_name FORMAT A24
COL directory_path FORMAT A54

and the following SQL query:

SELECT directory_name
,      directory_path
FROM   dba_directories;

It returns the following results:

DIRECTORY_NAME		 DIRECTORY_PATH
------------------------ ------------------------------------------------------
ORA_DBMS_FCP_LOGDIR	 /opt/oracle/product/18c/dbhomeXE/cfgtoollogs
SDO_DIR_WORK		 /ade/b/2794046351/oracle/work
SDO_DIR_ADMIN		 /ade/b/2794046351/oracle/md/admin
ORA_DBMS_FCP_ADMINDIR	 /opt/oracle/product/18c/dbhomeXE/rdbms/admin
XMLDIR			 /opt/oracle/product/18c/dbhomeXE/rdbms/xml
XSDDIR			 /opt/oracle/product/18c/dbhomeXE/rdbms/xml/schema
ORACLE_OCM_CONFIG_DIR2	 /opt/oracle/product/18c/dbhomeXE/ccr/state
ORACLE_OCM_CONFIG_DIR	 /opt/oracle/product/18c/dbhomeXE/ccr/state
OPATCH_INST_DIR 	 /opt/oracle/product/18c/dbhomeXE/OPatch
DATA_PUMP_DIR		 /opt/oracle/admin/XE/dpdump/
OPATCH_SCRIPT_DIR	 /opt/oracle/product/18c/dbhomeXE/QOpatch
OPATCH_LOG_DIR		 /opt/oracle/product/18c/dbhomeXE/rdbms/log
ORACLE_BASE		 /opt/oracle
ORACLE_HOME		 /opt/oracle/product/18c/dbhomeXE
 
14 rows selected.

In that prior post, you will find instructions for creating an environment file. The following instructions leverage the $ORACLE_BASE environment variable, which points to the /opt/oracle directory in the Docker environment.

Here are the steps to complete the external file setup, including how to position the physical comma-separated value (CSV) files in the available but otherwise hidden Docker directories. Hidden might be the wrong word choice, but they're not visible from the host macOS operating system. You have to connect to the Docker instance as the root user.

The following Docker command, used in the prior blog post, connects as the root user:

docker exec -it videodb bash

The instructions include steps that you perform before connecting to the Docker instance and steps that you perform after connecting to it:

  1. Put the transaction_upload.csv and transaction_upload2.csv files into the following macOS host directory:

    /Users/<installuser>/oracle/student

    which matches to the following internal Docker directory:

    /home/student
  2. As the connected root user, change directory to the $ORACLE_BASE (/opt/oracle) directory with the following command:

    cd $ORACLE_BASE

  3. As the connected root user, make an upload directory inside the $ORACLE_BASE (/opt/oracle) directory with the following command:

    mkdir upload

    As the root user, change the ownership of the upload directory from root as the owner and primary group to oracle as the owner and oinstall as the primary group with the following command:

    chown oracle:oinstall upload

  4. As the connected root user, copy the transaction_upload.csv and transaction_upload2.csv files from the /home/student directory to the $ORACLE_BASE/upload (/opt/oracle/upload) directory with the following command:

    cp ~student/transaction_upload*.csv $ORACLE_BASE/upload/.

  5. As the connected root user, change from the $ORACLE_BASE (/opt/oracle) directory to the upload subdirectory with the following command:

    cd upload

    As the root user, change the ownership of the transaction_upload.csv and transaction_upload2.csv files from root as the owner and primary group to oracle as the owner and oinstall as the primary group with the following command:

    chown oracle:oinstall transaction_upload*.csv

  6. As the connected Oracle system user, create the upload virtual directory with the following command:

    CREATE DIRECTORY upload AS '/opt/oracle/upload';

    As the connected Oracle system user, grant read and write privileges to the Oracle c##student user with the following command:

    GRANT READ, WRITE ON DIRECTORY upload TO c##student;

  7. As the connected Oracle c##student user, create the transaction_upload externally managed table with the following command:

    CREATE TABLE transaction_upload
    ( account_number          VARCHAR2(10)
    , first_name              VARCHAR2(20)
    , middle_name             VARCHAR2(20)
    , last_name               VARCHAR2(20)
    , check_out_date          DATE
    , return_date             DATE
    , rental_item_type        VARCHAR2(12)
    , transaction_type        VARCHAR2(14)
    , transaction_amount      NUMBER
    , transaction_date        DATE
    , item_id                 NUMBER
    , payment_method_type     VARCHAR2(14)
    , payment_account_number  VARCHAR2(19))
      ORGANIZATION EXTERNAL
      ( TYPE oracle_loader
        DEFAULT DIRECTORY upload
        ACCESS PARAMETERS
        ( RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
          BADFILE     'UPLOAD':'transaction_upload.bad'
          DISCARDFILE 'UPLOAD':'transaction_upload.dis'
          LOGFILE     'UPLOAD':'transaction_upload.log'
          FIELDS TERMINATED BY ','
          OPTIONALLY ENCLOSED BY "'"
          MISSING FIELD VALUES ARE NULL )
        LOCATION ('transaction_upload.csv'))
    REJECT LIMIT UNLIMITED;

    As the connected Oracle c##student user, query the transaction_upload table to verify that you can read the external file source through the virtual upload directory.

    COL  record  FORMAT 99,999  HEADING "Record|Count"
    SELECT TO_CHAR(COUNT(*),'99,999') AS record
    FROM   transaction_upload;

    It should return the following value:

    Record
    Count
    -------
     11,520

  8. As the connected Oracle c##student user, create the transaction_reversal externally managed table with the following command:

    CREATE TABLE transaction_reversal
    ( transaction_id          NUMBER
    , transaction_account     VARCHAR2(15)
    , transaction_type        VARCHAR2(30)
    , transaction_date        DATE
    , transaction_amount      NUMBER
    , rental_id               NUMBER
    , payment_method_type     NUMBER
    , payment_account_number  VARCHAR2(20)
    , created_by              NUMBER
    , creation_date           DATE
    , last_updated_by         NUMBER
    , last_update_date        DATE)
      ORGANIZATION EXTERNAL
      ( TYPE oracle_loader
        DEFAULT DIRECTORY upload
        ACCESS PARAMETERS
        ( RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
          BADFILE     'UPLOAD':'transaction_upload2.bad'
          DISCARDFILE 'UPLOAD':'transaction_upload2.dis'
          LOGFILE     'UPLOAD':'transaction_upload2.log'
          FIELDS TERMINATED BY ','
          OPTIONALLY ENCLOSED BY "'"
          MISSING FIELD VALUES ARE NULL )
        LOCATION ('transaction_upload2.csv'))
    REJECT LIMIT UNLIMITED;

    As the connected Oracle c##student user, query the transaction_reversal table to verify that you can read the external file source through the virtual upload directory.

    COL  record  FORMAT 99,999  HEADING "Record|Count"
    SELECT TO_CHAR(COUNT(*),'99,999') AS record
    FROM   transaction_reversal;

    It should return the following value:

    Record
    Count
    -------
      1,170
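
As a sanity check, you can also confirm both external table definitions and their file locations from the data dictionary. This is a minimal sketch that only reads the standard user_external_tables and user_external_locations views as the c##student user:

COL table_name              FORMAT A22
COL default_directory_name  FORMAT A12
COL location                FORMAT A28
SELECT   xt.table_name
,        xt.default_directory_name
,        xl.location
FROM     user_external_tables xt JOIN user_external_locations xl
ON       xt.table_name = xl.table_name
ORDER BY xt.table_name;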

Unfortunately, the file permissions in the Windows version of the Oracle 18c Docker installation make it more difficult to set up external tables.

Written by maclochlainn

November 14th, 2020 at 10:57 pm

Postgres Foreign Keys

without comments

I was just sorting out how to query the Postgres catalog to discover the magic for listing a table's foreign key constraints. This query against pg_constraint returns the foreign key constraints:

SELECT   conrelid::regclass::text AS table_from
,        conname AS foreign_key
,        pg_get_constraintdef(oid)
FROM     pg_constraint
WHERE    contype = 'f'
AND      connamespace = 'public'::regnamespace 
AND      conrelid::regclass::text = 'rental_item'
ORDER BY conrelid::regclass::text
,        conname;

It returns the following for the rental_item table:

 table_from  |   foreign_key    |                           pg_get_constraintdef                            
-------------+------------------+---------------------------------------------------------------------------
 rental_item | fk_rental_item_1 | FOREIGN KEY (rental_id) REFERENCES rental(rental_id)
 rental_item | fk_rental_item_2 | FOREIGN KEY (item_id) REFERENCES item(item_id)
 rental_item | fk_rental_item_3 | FOREIGN KEY (created_by) REFERENCES system_user(system_user_id)
 rental_item | fk_rental_item_4 | FOREIGN KEY (last_updated_by) REFERENCES system_user(system_user_id)
 rental_item | fk_rental_item_5 | FOREIGN KEY (rental_item_type) REFERENCES common_lookup(common_lookup_id)
(5 rows)
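
If you also want the referenced (parent) table in its own column rather than reading it out of the constraint definition, a small variation on the same query should work; confrelid holds the referenced table's OID in pg_constraint:

SELECT   conrelid::regclass::text  AS table_from
,        confrelid::regclass::text AS table_to
,        conname                   AS foreign_key
FROM     pg_constraint
WHERE    contype = 'f'
AND      connamespace = 'public'::regnamespace
AND      conrelid::regclass::text = 'rental_item'
ORDER BY conname;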

As always, I post these tidbits for others to discover and use with less pain.

Written by maclochlainn

November 6th, 2019 at 12:31 am

MySQL Update in mysqli

without comments

Somebody didn't like the MySQLi Update Query example on the tutorialspoint.com website because it uses the procedural mysqli_query style. Here's a simple example of using the object-oriented method version. More or less, instead of the query() method it uses the more intuitive execute() method.

The update_member function contains the logic, and below it is a call to test the function. It relies on a MySQLCredentials.inc file that contains the hostname, user name, password, and database name. You can create the member table, like my example in MySQL 8, or any other table in your MySQL database.
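
If you don't have a member table handy, the following is a minimal sketch that matches the four columns bound in the function below; the column sizes and the seeded values are my own assumptions rather than the exact table from my MySQL 8 example:

-- Hypothetical member table for testing the update_member function.
CREATE TABLE member
( member_id            INT UNSIGNED PRIMARY KEY AUTO_INCREMENT
, account_number       VARCHAR(10)  NOT NULL
, member_type          VARCHAR(10)
, credit_card_number   VARCHAR(19)
, credit_card_type     VARCHAR(10));

-- Seed one row so the update has something to change (sample values only).
INSERT INTO member
( account_number, member_type, credit_card_number, credit_card_type )
VALUES
('US00011','1002','4111-1111-1111-1111','1003');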

<?php
/*
||  Function Name: update_member
*/
function update_member($account_number, $member_type, $credit_card_number, $credit_card_type) {
 
  // Include the credentials file if omitted.
  include_once("MySQLCredentials.inc");
 
  // Assign credentials to connection.
  $mysqli = new mysqli(HOSTNAME, USERNAME, PASSWORD, DATABASE);
 
  // Check for connection error and print message.
  if ($mysqli->connect_errno) {
    print $mysqli->connect_error."<br />";
    print "Connection not established ...<br />";
  }
  else {
    // Initial statement.
    $stmt = $mysqli->stmt_init();
 
    /* Disabling auto commit when you want two or more statements executed as a set.
    || ------------------------------------------------------------
    ||  You would add the following command to disable the default
    ||  of auto commit.
    ||  ------------------------------
    ||   $mysqli->autocommit(FALSE);
    || ------------------------------------------------------------
    */
 
    // Declare a static query.
    $sql = "UPDATE   member\n"
         . "SET      member_type = ?\n"
         . ",        credit_card_number = ?\n"
         . ",        credit_card_type = ?\n"
         . "WHERE    account_number = ?\n";
 
    /* Prepare statement.
    || ------------------------------------------------------------
    ||  Please note that the bind_param method is a position 
    ||  rather than named notation, which means you must provide
    ||  the variables in the same order as they are found in
    ||  the defined $sql variable as "?".
    || ------------------------------------------------------------
    ||  print($sql);
    ||  print("Member Type:      [1][".$member_type."]\n");
    ||  print("Credit Card No:   [2][".$credit_card_number."]\n");
    ||  print("Credit Card Type: [3][".$credit_card_type."]\n");
    ||  print("Account Number:   [4][".$account_number."]\n");
    || ------------------------------------------------------------
    */
    if ($stmt->prepare($sql)) {
      $stmt->bind_param("ssss",$member_type,$credit_card_number,$credit_card_type,$account_number); } 
 
    // Attempt query and exit with failure before processing.
    if (!$stmt->execute()) {
 
      // Print failure to resolve query message.
      print $mysqli->error."<br />";
      print "Failed to resolve query ...<br />";
    }
    else {   
      /*  Manually committing writes when you have disabled the
      ||  default auto commit setting, explained above.
      || ------------------------------------------------------------
      ||  You would add the following command to commit the 
      ||  transaction.
      ||  ------------------------------
      ||   $mysqli->commit();
      || ------------------------------------------------------------
      */
    }
  }
}
 
// Test case
update_member('US00011', '1006', '6011-0000-0000-0078', '1007');
?>

I put this logic in a function.php file. If you do the same, you can run the test case like this from the command line:

php function.php
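
You can confirm the update took effect with a quick query in the mysql client; this assumes the account number from the test case above:

SELECT account_number
,      member_type
,      credit_card_number
,      credit_card_type
FROM   member
WHERE  account_number = 'US00011';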

As always, I hope this helps.

Written by maclochlainn

September 14th, 2019 at 6:20 pm

Misleading ORA- Message

without comments

Oracle error messages are more or less the best in the industry, but from time to time they saddle you with a bad or misleading message. For example, I was running one of the code modules from my Oracle Database 12c PL/SQL Programming book for a class exercise and got this error message:

BEGIN
*
ERROR at line 1:
ORA-22288: file or LOB operation  failed
ORA-06512: at "STUDENT.LOAD_CLOB_FROM_FILE", line 71
ORA-06512: at line 11

Oddly enough, it was simple to identify in general terms. It failed on a call to the DBMS_LOB.LOADCLOBFROMFILE procedure. However, the better question is why it failed, because the virtual directory resolved and the permissions worked.

The first test was to try another file, which worked perfectly with the same code. That meant it had to be something with the physical file. I took a look and sure enough I found a character set problem, like the following:

… he reveals that the Nazgûl, or Ringwraiths, have left Mordor to capture the Ring and kill whoever carries it.

and,

The group flees to the elvish realm of Lothlórien …

The “û” and “ó” characters were incompatible with the default NLS_LANG setting of the database, and a CLOB limits the use of non-standard character sets. It's a shame that Oracle didn't throw a character set error, which would have expedited resolution of the problem.
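
If you hit the same ORA-22288 error and suspect a character set mismatch, a quick look at the database character set helps narrow it down; this is a minimal check that only reads the standard nls_database_parameters view:

SELECT parameter
,      value
FROM   nls_database_parameters
WHERE  parameter IN ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');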

As always, I hope this helps those looking for solutions.

Written by maclochlainn

August 17th, 2019 at 4:52 pm

Cassandra on Fedora 27

without comments

The last time that I installed Cassandra was on a version of Fedora 20. So, I knew the first thing to check was the installation of Java. You can check the Java installation with two statements on a fresh installation of Fedora 27. You need to check the Java runtime and then the Java Software Development Kit before installing, starting, and using Cassandra.

Installing Prerequisites

You check the Java runtime with this command:

java -version

It should return:

openjdk version "1.8.0_171"
OpenJDK Runtime Environment (build 1.8.0_171-b10)
OpenJDK 64-Bit Server VM (build 25.171-b10, mixed mode)

You check the Java Software Development Kit (JSDK) with this command:

javac -version

It should return:

javac 1.8.0_171

After verifying the Java and JSDK installation, you can install the Cassandra packages with the following yum command as the root user or a user with sudoer privileges:

yum install -y *cassandra*

You should see a successful installation log.

Starting Cassandra

After you install Cassandra, you can start it as any sudoer user with the following syntax:

sudo cassandra -R

Using Cassandra

You can connect to the Cassandra server with the cqlsh client software. You use the following syntax:

cqlsh

You should see the Cassandra version information, and then you can type help at the cqlsh> prompt to see the available commands:

Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.11.1 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh> help
 
Documented shell commands:
===========================
CAPTURE  CLS          COPY  DESCRIBE  EXPAND  LOGIN   SERIAL  SOURCE   UNICODE
CLEAR    CONSISTENCY  DESC  EXIT      HELP    PAGING  SHOW    TRACING
 
CQL help topics:
================
AGGREGATES               CREATE_KEYSPACE           DROP_TRIGGER      TEXT     
ALTER_KEYSPACE           CREATE_MATERIALIZED_VIEW  DROP_TYPE         TIME     
ALTER_MATERIALIZED_VIEW  CREATE_ROLE               DROP_USER         TIMESTAMP
ALTER_TABLE              CREATE_TABLE              FUNCTIONS         TRUNCATE 
ALTER_TYPE               CREATE_TRIGGER            GRANT             TYPES    
ALTER_USER               CREATE_TYPE               INSERT            UPDATE   
APPLY                    CREATE_USER               INSERT_JSON       USE      
ASCII                    DATE                      INT               UUID     
BATCH                    DELETE                    JSON            
BEGIN                    DROP_AGGREGATE            KEYWORDS        
BLOB                     DROP_COLUMNFAMILY         LIST_PERMISSIONS
BOOLEAN                  DROP_FUNCTION             LIST_ROLES      
COUNTER                  DROP_INDEX                LIST_USERS      
CREATE_AGGREGATE         DROP_KEYSPACE             PERMISSIONS     
CREATE_COLUMNFAMILY      DROP_MATERIALIZED_VIEW    REVOKE          
CREATE_FUNCTION          DROP_ROLE                 SELECT          
CREATE_INDEX             DROP_TABLE                SELECT_JSON

Here's my script that creates a Cassandra keyspace, which is more or less a database. You use the USE command to connect to the keyspace or database, like you would in MySQL. You do not have sequences in Cassandra because they're not a good fit for a distributed architecture. Cassandra does not support a native procedural extension like relational databases. You must create user-defined functions (UDFs) by embedding the logic in Java.

This script does the following:

  • Creates a keyspace
  • Uses the keyspace
  • Conditionally drops tables and functions
  • Creates two tables
  • Inserts data into the two tables
  • Queries data from the tables

I also included a call to a UDF inside a query in two of the examples. One of the queries demonstrates how to return a JSON structure from a query. To simplify things and clarify the script's behavior, the details are outlined below.

  • The first segment of the script creates the keyspace, changes the scope to use the keyspace, conditionally drops tables, creates tables, and inserts values into the tables:

    /* Create a keyspace in Cassandra, which is like a database
       in MySQL or a schema in Oracle. */
    CREATE KEYSPACE IF NOT EXISTS student
      WITH REPLICATION = {
         'class':'SimpleStrategy'
        ,'replication_factor': 1 }
      AND DURABLE_WRITES = true;
     
    /* Use the keyspace or connect to the database. */
    USE student;
     
    /* Drop the member table from the student keyspace. */
    DROP TABLE IF EXISTS member;
     
    /* Create a member table in the student keyspace. */
    CREATE TABLE member
    ( member_number       VARCHAR
    , member_type         VARCHAR
    , credit_card_number  VARCHAR
    , credit_card_type    VARCHAR
    , PRIMARY KEY ( member_number ));
     
    /* Conditionally drop the contact table from the student keyspace. */
    DROP TABLE IF EXISTS contact;
     
    /* Create a contact table in the student keyspace. */
    CREATE TABLE contact
    ( contact_number      VARCHAR
    , contact_type        VARCHAR
    , first_name          VARCHAR
    , middle_name         VARCHAR
    , last_name           VARCHAR
    , member_number       VARCHAR
    , PRIMARY KEY ( contact_number ));
     
    /* Insert a row into the member table. */
    INSERT INTO member
    ( member_number, member_type, credit_card_number, credit_card_type )
    VALUES
    ('SFO-12345','GROUP','2222-4444-5555-6666','VISA');
     
    /* Insert a row into the contact table. */
    INSERT INTO contact
    ( contact_number, contact_type, first_name, middle_name, last_name, member_number )
    VALUES
    ('CUS_00001','FAMILY','Barry', NULL,'Allen','SFO-12345');
     
    /* Insert a row into the contact table. */
    INSERT INTO contact
    ( contact_number, contact_type, first_name, middle_name, last_name, member_number )
    VALUES
    ('CUS_00002','FAMILY','Iris', NULL,'West-Allen','SFO-12345');
     
    /* Insert a row into the member table. */
    INSERT INTO member
    ( member_number, member_type, credit_card_number, credit_card_type )
    VALUES
    ('SFO-12346','GROUP','3333-8888-9999-2222','VISA');
     
    /* Insert a row into the contact table. */
    INSERT INTO contact
    ( contact_number, contact_type, first_name, middle_name, last_name, member_number )
    VALUES
    ('CUS_00003','FAMILY','Caitlin','Marie','Snow','SFO-12346');
  • The following queries the member table:

    /* Select all columns from the member table. */
    SELECT * FROM member;

    It returns the following:

     member_number | credit_card_number  | credit_card_type | member_type
    ---------------+---------------------+------------------+-------------
         SFO-12345 | 2222-4444-5555-6666 |             VISA |       GROUP
         SFO-12346 | 3333-8888-9999-2222 |             VISA |       GROUP
  • Create a concatenate User-defined function (UDF) for Cassandra. The first step requires you to edit the cassandra.yaml file, which you find in the /etc/cassandra/default.conf directory. There is a single parameter that you need to edit, and it is the enable_user_defined_functions parameter. By default the parameter is set to false, and you need to enable it to create UDFs.

    After you make the edit, the cassandra.yaml file should look like this:

    # If unset, all GC Pauses greater than gc_log_threshold_in_ms will log at
    # INFO level
    # UDFs (user defined functions) are disabled by default.
    # As of Cassandra 3.0 there is a sandbox in place that should prevent execution of evil code.
    enable_user_defined_functions: true

    After you make the change, you can create your own UDF. The following UDF formats the first, middle, and last name so there's only one space between the first and last name when the middle name value is null.

    This type of function must use a CALLED ON NULL INPUT clause in lieu of a RETURNS NULL ON NULL INPUT clause. The latter would force the function to return a null value if any one of the parameters were null.

    /* Drop the concatenate function because a replace disallows changing a
       RETURNS NULL ON NULL INPUT with a CALLED ON NULL INPUT without raising
       an "89: InvalidRequest" exception. */
    DROP FUNCTION concatenate;
     
    /* Create a user-defined function to concatenate names. */
    CREATE OR REPLACE FUNCTION concatenate (first_name VARCHAR, middle_name VARCHAR, last_name VARCHAR)
    CALLED ON NULL INPUT
    RETURNS VARCHAR
    LANGUAGE java
    AS $$
      /* Concatenate first and last names when middle name is null, and
         first, middle, and last names when middle name is not null. */
      String name;
     
      /* Check for null middle name. */
      if (middle_name == null) {
        name = first_name + " " + last_name; }
      else {
        name = first_name + " " + middle_name + " " + last_name; }
     
      return name;
    $$;
  • Query the values from the contact table with the UDF function in the SELECT-list:

    /* Query the contact information. */
    SELECT member_number
    ,      contact_number
    ,      contact_type
    ,      concatenate(first_name, middle_name, last_name) AS full_name
    FROM   contact;

    It returns the following:

     member_number | contact_number | contact_type | full_name
    ---------------+----------------+--------------+--------------------
         SFO-12345 |      CUS_00001 |       FAMILY |        Barry Allen
         SFO-12345 |      CUS_00002 |       FAMILY |    Iris West-Allen
         SFO-12346 |      CUS_00003 |       FAMILY | Caitlin Marie Snow
  • Query the values from the contact table with a JSON format:

    /* Query the contact information and return in a JSON format. */
    SELECT JSON
           contact_number
    ,      contact_type
    ,      concatenate(first_name, middle_name, last_name) AS full_name
    FROM   contact;

    It returns the following:

     [json]
    -------------------------------------------------------------------------------------------------
    {"contact_number": "CUS_00001", "contact_type": "FAMILY", "full_name": "Barry Allen"}
    {"contact_number": "CUS_00002", "contact_type": "FAMILY", "full_name": "Iris West-Allen"}
    {"contact_number": "CUS_00003", "contact_type": "FAMILY", "full_name": "Caitlin Marie Snow"}

You can call the script from a relative directory inside cqlsh, like this:

SOURCE 'cstudent.cql'
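
If you want to confirm that the script created the concatenate UDF, you can query the system_schema.functions table that Cassandra 3.x maintains; this is only a verification sketch:

SELECT keyspace_name
,      function_name
,      argument_types
FROM   system_schema.functions
WHERE  keyspace_name = 'student';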

At the end of the day, the concept of adding and removing nodes is attractive. Though, the lack of normal relational mechanics and narrowly supported set of CQL semantics leaves me with open questions. For example, is clustering without a coordinator really valuable enough to settle for eventual, or tunable, consistency with such a narrowly scoped query language?

As always, I hope this helps those looking for a quick how-to on Cassandra.

Written by maclochlainn

May 17th, 2018 at 12:38 am

Setup PostgreSQL

without comments

After you have installed PostgreSQL on Fedora 27 and initialized the database, you have several steps to configure a new instance. This blog post shows you how to create all the implementation pieces for a student database.

Connect as the postgres user from a sudoer account. This requires you to become the root user before you switch to the postgres user.

sudo sh

As the root user, you don’t need a password to connect as the postgres user:

su - postgres

As the postgres user, you own the PostgreSQL database and can connect to the database like the following:

[postgres@localhost ~]$ psql
psql (9.6.8)
Type "help" for help.
 
postgres=#

After you connect as the privileged postgres user, you can check the default data location with the following command:

postgres=# show data_directory;

It will return:

   data_directory    
---------------------
 /var/lib/pgsql/data
(1 row)

After you’ve initialized the PostgreSQL database, you may need to restart your database manually unless you configure the server to start it when you boot your server. The syntax to start the PostgreSQL database at the command-line as the postgres privileged user is:

postgres -D /var/lib/pgsql/data &

You can find detailed instructions in Chapter 18.3 Starting the Database Server web page. There are several options available to you to automate the starting process.

The instructions to build a postgresql.service actually require modification for Fedora 27. You should create the following postgresql.service file in the /etc/systemd/system directory:

[Unit]
Description=PostgreSQL database server
Documentation=man:postgres(1)
 
[Service]
Type=notify
User=postgres
ExecStart=/usr/bin/postgres -D /var/lib/pgsql/data
ExecReload=/bin/kill -HUP $MAINPID
KillMode=mixed
KillSignal=SIGINT
TimeoutSec=0
 
[Install]
WantedBy=multi-user.target

You can start the PostgreSQL service with the following command:

systemctl start postgresql.service >/dev/null

The following steps create tablespace, database, role, and user:

  1. Create Tablespace

You can create a video_db tablespace with the following syntax:

CREATE TABLESPACE video_db
  OWNER postgres
  LOCATION '/var/lib/pgsql/data';

This will return the following:

WARNING:  tablespace location should not be inside the data directory
CREATE TABLESPACE

The warning only lets you know that you really shouldn't create tablespaces in the default data directory. You can query whether you successfully created the video_db tablespace with the following:

SELECT * FROM pg_tablespace;

It should return the following:

  spcname   | spcowner | spcacl | spcoptions 
------------+----------+--------+------------
 pg_default |       10 |        | 
 pg_global  |       10 |        | 
 video_db   |       10 |        | 
(3 rows)

  2. Create a Database

You can create a videodb database with the following syntax:

CREATE DATABASE videodb
  WITH OWNER = postgres
       ENCODING = 'UTF8'
       TABLESPACE = video_db
       LC_COLLATE = 'en_US.UTF-8'
       LC_CTYPE = 'en_US.UTF-8'
       CONNECTION LIMIT = -1;

Then, you can assign a comment to the database with the following syntax:

COMMENT ON DATABASE videodb IS 'Video Database';

  3. Create a Role, Grant, and User

In this section you create a dba role, grant privileges on the videodb database to the role, and create a user with that role. There are three steps in this section.

  • The first step creates a dba role:

    CREATE ROLE dba WITH SUPERUSER;
  • The second step grants all privileges on a videodb database to a dba role:

    GRANT ALL PRIVILEGES ON DATABASE videodb TO dba;
  • The third step creates a student user with the dba role:

    CREATE USER student
      WITH ROLE dba
           PASSWORD 'student';

  4. Connect to the videodb as the student user

You connect to the videodb database as the student user with the following command:

psql -d videodb -U student

After connecting to the videodb database, you can query the current database, like this:

SELECT current_database();

It should return the following:

 current_database 
------------------
 videodb
(1 row)
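
You can also verify that the dba role and student user exist, and that the student user can log in, with a quick query against the pg_roles catalog view; a minimal sketch:

SELECT rolname
,      rolsuper
,      rolcanlogin
FROM   pg_roles
WHERE  rolname IN ('dba','student');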

This has shown you how to create a videodb database, dba role, and student user.

Written by maclochlainn

May 13th, 2018 at 1:01 am

Install Cassandra on Fedora

with one comment

It was quite interesting to discover that DataStax no longer provides the DataStax Community version of Apache Cassandra or the DataStax Distribution of Apache Cassandra. Needless to say, I was quite disappointed, because it means folks will have less opportunity to learn how to use Cassandra, and it makes getting started more difficult for beginning developers.

I spent a good hour sorting through what was available and then figuring out the real requirements to install Apache Cassandra 3.11. These are the instructions.

Install Java and JRE as Prerequisites

If you don't have the JRE installed, you should download it from Oracle's website and install it. After you download the latest version of the JRE package (jre-8u141-linux-x64.rpm), you should use the rpm utility to install the JRE package, like the following example:

rpm -ivh /home/student/Downloads/jre-8*.rpm

It should generate the following installation report:

Preparing...                          ################################# [100%]
	package jre1.8.0_141-1.8.0_141-fcs.x86_64 is already installed
sh-4.2# rpm -qa jre
sh-4.2# rpm -qf jre
error: file /jre: No such file or directory
sh-4.2# rpm -qa | grep jre
jre1.8.0_141-1.8.0_141-fcs.x86_64
sh-4.2# rpm -qa | grep jre | rpm -qi
rpm: no arguments given for query
sh-4.2# rpm -qi `rpm -qa | grep jre`
Name        : jre1.8.0_141
Version     : 1.8.0_141
Release     : fcs
Architecture: x86_64
Install Date: Mon 24 Jul 2017 11:09:58 PM PDT
Group       : Development/Tools
Size        : 139460427
License     : http://java.com/license
Signature   : (none)
Source RPM  : jre1.8.0_141-1.8.0_141-fcs.src.rpm
Build Date  : Wed 12 Jul 2017 04:47:52 AM PDT
Build Host  : jdk7-lin2-amd64
Relocations : /usr/java 
Packager    : Java Software <jre-comments@java.sun.com>
Vendor      : Oracle Corporation
URL         : URL_REF
Summary     : Java Platform Standard Edition Runtime Environment
Description :
The Java Platform Standard Edition Runtime Environment (JRE) contains
everything necessary to run applets and applications designed for the
Java platform. This includes the Java virtual machine, plus the Java
platform classes and supporting files.
 
The JRE is freely redistributable, per the terms of the included license.

Confirm Java and JRE Installation

You can check the currently installed version of Java and the JRE by using the alternatives utility with the --config option and the keyword java or javac.

sh-4.2# alternatives --config java

It should generate the following list when you check for the java library:

There are 3 programs which provide 'java'.
 
  Selection    Command
-----------------------------------------------
*  1           /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79-2.5.5.0.fc20.x86_64/jre/bin/java
 + 2           /usr/lib/jvm/jre-1.8.0-openjdk.x86_64/bin/java
   3           /usr/java/jre1.8.0_141/bin/java
 
Enter to keep the current selection[+], or type selection number:

It should generate the following list when you check for the javac library with the alternatives --config javac command:

There are 2 programs which provide 'javac'.
 
  Selection    Command
-----------------------------------------------
*  1           /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79-2.5.5.0.fc20.x86_64/bin/javac
 + 2           /usr/lib/jvm/java-1.8.0-openjdk.x86_64/bin/javac
 
Enter to keep the current selection[+], or type selection number:

If you have more than one Java or JRE installed on your OS, then after installing and selecting the designated alternatives, you should create a configuration file for the root user. It should include the following to set your $PATH, $JAVA_HOME, and $JRE_HOME environment variables:

# Add the Java and JRE paths to the $PATH environments.
export PATH=$PATH:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.45.x86_64:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.45.x86_64/jre
 
# Add the $JAVA_HOME and $JRE_HOME environment variables.
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.45.x86_64/
export JRE_HOME=/usr

Install Apache Cassandra

The yum utility is the best way to install Apache Cassandra. However, you will need to configure the /etc/yum.repos.d/cassandra.repo before you attempt to install Cassandra 3.11 from the Apache organization, like this:

[cassandra]
name=Apache Cassandra
baseurl=https://www.apache.org/dist/cassandra/redhat/311x/
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://www.apache.org/dist/cassandra/KEYS

After you’ve added the necessary yum configuration file and ensured you’re using both Java 1.8 and JRE 1.8, you can install Apache Cassandra with the following yum command as the root user or as a sudoer member with the sudo command:

yum install -y cassandra

If successful, you should see the following output:

Loaded plugins: langpacks, refresh-packagekit
cassandra/signature                                         |  819 B  00:00     
cassandra/signature                                         | 2.9 kB  00:00 !!! 
mysql-connectors-community                                  | 2.5 kB  00:00     
mysql-tools-community                                       | 2.5 kB  00:00     
mysql56-community                                           | 2.5 kB  00:00     
http://yum.postgresql.org/9.3/fedora/fedora-20-x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
updates/20/x86_64/metalink                                  | 2.6 kB  00:00     
Resolving Dependencies
--> Running transaction check
---> Package cassandra.noarch 0:3.11.0-1 will be installed
--> Finished Dependency Resolution
 
Dependencies Resolved
 
================================================================================
 Package            Arch            Version            Repository          Size
================================================================================
Installing:
 cassandra          noarch          3.11.0-1           cassandra           28 M
 
Transaction Summary
================================================================================
Install  1 Package
 
Total download size: 28 M
Installed size: 37 M
Downloading packages:
warning: /var/cache/yum/x86_64/20/cassandra/packages/cassandra-3.11.0-1.noarch.rpm: Header V4 RSA/SHA256 Signature, key ID fe4b2bda: NOKEY
Public key for cassandra-3.11.0-1.noarch.rpm is not installed
cassandra-3.11.0-1.noarch.rpm                               |  28 MB  00:07     
Retrieving key from https://www.apache.org/dist/cassandra/KEYS
Importing GPG key 0xF2833C93:
 Userid     : "Eric Evans <eevans@sym-link.com>"
 Fingerprint: cec8 6bb4 a0ba 9d0f 9039 7cae f835 8fa2 f283 3c93
 From       : https://www.apache.org/dist/cassandra/KEYS
Importing GPG key 0x8D77295D:
 Userid     : "Eric Evans <eevans@sym-link.com>"
 Fingerprint: c496 5ee9 e301 5d19 2ccc f2b6 f758 ce31 8d77 295d
 From       : https://www.apache.org/dist/cassandra/KEYS
Importing GPG key 0x2B5C1B00:
 Userid     : "Sylvain Lebresne (pcmanus) <sylvain@datastax.com>"
 Fingerprint: 5aed 1bf3 78e9 a19d ade1 bcb3 4bd7 36a8 2b5c 1b00
 From       : https://www.apache.org/dist/cassandra/KEYS
Importing GPG key 0x0353B12C:
 Userid     : "T Jake Luciani <jake@apache.org>"
 Fingerprint: 514a 2ad6 31a5 7a16 dd00 47ec 749d 6eec 0353 b12c
 From       : https://www.apache.org/dist/cassandra/KEYS
Importing GPG key 0xFE4B2BDA:
 Userid     : "Michael Shuler <michael@pbandjelly.org>"
 Fingerprint: a26e 528b 271f 19b9 e5d8 e19e a278 b781 fe4b 2bda
 From       : https://www.apache.org/dist/cassandra/KEYS
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction (shutdown inhibited)
Warning: RPMDB altered outside of yum.
  Installing : cassandra-3.11.0-1.noarch                                    1/1 
  Verifying  : cassandra-3.11.0-1.noarch                                    1/1 
 
Installed:
  cassandra.noarch 0:3.11.0-1                                                   
 
Complete!

Starting Cassandra

You should start Cassandra as the cassandra user. Before starting Cassandra, you need to create a .bashrc file for the cassandra user because one isn’t created by default since you can’t log on to the Linux OS as the cassandra user. The home directory for the cassandra user is /var/lib/cassandra and the owner of that directory is the root user.

As the root user, create the following .bashrc file for the cassandra user:

# Wrap cqlsh with rlwrap to edit prior lines with the
# up, down, left and right keys.
cqlsh()
{
  if [ "$RLWRAP" = "0" ]; then
    command cqlsh "$@"
  else
    rlwrap cqlsh "$@"
  fi
}
 
# Set vi as a command line editor.
set -o vi
 
# Add the Java and JRE paths to the $PATH environments.
export PATH=$PATH:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.45.x86_64:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.45.x86_64/jre
 
# Add the $JAVA_HOME and $JRE_HOME environment variables.
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.45.x86_64/
export JRE_HOME=/usr

You should start Cassandra in the background, like this:

cassandra

Using Cassandra

As the student user in my developer Fedora instance, you should be able to connect using the following:

cqlsh

You will see the following:

Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.11.0 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh> HELP
 
Documented shell commands:
===========================
CAPTURE  CLS          COPY  DESCRIBE  EXPAND  LOGIN   SERIAL  SOURCE   UNICODE
CLEAR    CONSISTENCY  DESC  EXIT      HELP    PAGING  SHOW    TRACING
 
CQL help topics:
================
AGGREGATES               CREATE_KEYSPACE           DROP_TRIGGER      TEXT     
ALTER_KEYSPACE           CREATE_MATERIALIZED_VIEW  DROP_TYPE         TIME     
ALTER_MATERIALIZED_VIEW  CREATE_ROLE               DROP_USER         TIMESTAMP
ALTER_TABLE              CREATE_TABLE              FUNCTIONS         TRUNCATE 
ALTER_TYPE               CREATE_TRIGGER            GRANT             TYPES    
ALTER_USER               CREATE_TYPE               INSERT            UPDATE   
APPLY                    CREATE_USER               INSERT_JSON       USE      
ASCII                    DATE                      INT               UUID     
BATCH                    DELETE                    JSON            
BEGIN                    DROP_AGGREGATE            KEYWORDS        
BLOB                     DROP_COLUMNFAMILY         LIST_PERMISSIONS
BOOLEAN                  DROP_FUNCTION             LIST_ROLES      
COUNTER                  DROP_INDEX                LIST_USERS      
CREATE_AGGREGATE         DROP_KEYSPACE             PERMISSIONS     
CREATE_COLUMNFAMILY      DROP_MATERIALIZED_VIEW    REVOKE          
CREATE_FUNCTION          DROP_ROLE                 SELECT          
CREATE_INDEX             DROP_TABLE                SELECT_JSON
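
A quick way to confirm the new server responds to queries is to list its keyspaces from cqlsh; on a fresh install you should only see the system keyspaces:

SELECT keyspace_name FROM system_schema.keyspaces;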

Written by maclochlainn

July 25th, 2017 at 9:23 pm

Using a Sparse Index

with 2 comments

My vacation from my blog is officially over. The question that I'm answering today is: How can you pass a set of non-sequential ID values to a function and return a result set? You can solve the problem by passing an ADT (Attribute Data Type) or UDT (User Defined Type) variable into a subquery of a cursor. The subquery leverages the TABLE function to translate the ADT or UDT into a SQL result set, which is equivalent to a comma-delimited list of values.

You can also solve this problem with Native Dynamic SQL (NDS). However, the person who posed the question didn’t want to use NDS to build out a variable length list of comma-delimited numbers.

You need to create three object types for this example. They are:

  • a list of numbers
  • a record structure, declared as an object type without methods
  • a list of the record structure

These are the SQL commands to create the required data types:

CREATE OR REPLACE
  TYPE list_ids IS TABLE OF NUMBER;
/
CREATE OR REPLACE
  TYPE item_struct IS OBJECT
  ( item_id       NUMBER
  , item_title    VARCHAR2(80)
  , release_date  DATE );
/
CREATE OR REPLACE
  TYPE item_struct_list IS TABLE OF item_struct;
/
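
Before writing the function, you can see what the TABLE function does with the list_ids collection in a standalone query; COLUMN_VALUE is Oracle's default column name for the elements of a scalar collection:

SELECT column_value AS item_id
FROM   TABLE(list_ids(1002, 1013, 1007));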

Next, you create a nonsynchronous function. It takes a sparsely populated list of values that map to the table's surrogate key column, which is typically the primary key column. It returns a collection of the item_struct object type. This type of function is an object-table function.

The code follows:

 1  CREATE OR REPLACE
 2    FUNCTION nonsynchronous
 3    ( pv_list_ids  LIST_IDS ) RETURN item_struct_list IS
 4      /* Declare a record data structure list. */
 5      lv_struct_list  ITEM_STRUCT_LIST := item_struct_list();
 6
 7      /* Declare a sparsely indexed list of film items. */
 8      CURSOR get_items
 9      ( cv_list_ids  LIST_IDS ) IS
10        SELECT   item_id AS item_id
11        ,        item_title
12        ||       CASE
13                   WHEN item_subtitle IS NOT NULL THEN
14                     ': '|| item_subtitle
15                 END AS item_title
16        ,        release_date AS release_date
17        FROM     item
18        WHERE    item_id IN (SELECT *
19                             FROM   TABLE(cv_list_ids))
20        ORDER BY item_id;
21  BEGIN
22    /* Loop through the sparsely populated list of numbers. */
23    FOR i IN get_items(pv_list_ids) LOOP
24      lv_struct_list.EXTEND;
25      lv_struct_list(lv_struct_list.COUNT) := item_struct( item_id      => i.item_id
26                                                         , item_title   => i.item_title
27                                                         , release_date => i.release_date );
28    END LOOP;
29
30    /* Return the record structure list. */
31    RETURN lv_struct_list;
32  END;
33  /

The foregoing nonsynchronous function uses a nested query on lines 18 and 19 that transforms the collection into a SQL result set. In the execution block of the program, it uses a call to the item_struct constructor to capture and assign row values to an element of the lv_struct_list variable.

You can now test the nonsynchronous function with the following query:

COL item_id      FORMAT 9999  HEADING "Item|ID #"
COL item_title   FORMAT A40   HEADING "Item Title"
COL release_date FORMAT A11   HEADING "Release|Date"
SELECT   *
FROM     TABLE(nonsynchronous(list_ids(1002, 1013, 1007)));

The query returns the record set as an ordered list in the result set, like:

Item					       Release
 ID # Item Title			       Date
----- ---------------------------------------- -----------
 1002 Star Wars I: Phantom Menace	       04-MAY-99
 1007 RoboCop				       24-JUL-03
 1013 The DaVinci Code			       19-MAY-06

I hope this answers the question about how to get result sets with sparsely populated ID values.

Written by maclochlainn

May 11th, 2016 at 1:37 am

Using MySQL Workbench

with 2 comments

I've been setting up a simplified lab environment for my students to learn and use in class. This added content will show them how to do reverse engineering with MySQL Workbench.

It’s a complete Fedora image with MySQL and Oracle Database 11g for the course. The uncompressed image is 14GB and the compressed image is 5.3GB. I chose Fedora because it’s the smallest open source image that supports both environments, and Fedora is the closest to Red Hat and Oracle Unbreakable Linux. I’m inclined to make the instance available generally but haven’t figured out the best way to do that.

Here are the new instructions I’m adding and if you have any input leave it as a comment. 😉

You connect as the student user, which puts you in the /home/student directory. Once connected to the Fedora OS, you open a Terminal session by clicking on Activities in the upper right hand corner, and then you type terminal in the search box. When you’re in the Terminal session, use the following command to verify that the apply_mysql_lab1.sql file is correct:

cat Data/cit225/mysql/lab1/apply_mysql_lab1.sql

It should display the following commands:

\. /home/student/Data/cit225/mysql/lib/cleanup.sql
\. /home/student/Data/cit225/mysql/lib/create_mysql_store_ri.sql
\. /home/student/Data/cit225/mysql/lib/seed_mysql_store_ri.sql

You can run the apply_mysql_lab1.sql script to create the tables in the studentdb database and seed them with data. Assuming you're in the same /home/student directory, you connect to the studentdb database with the following syntax:

mysql -ustudent -pstudent studentdb

or, more securely:

mysql -ustudent studentdb -p

Having connected to the studentdb database, you can run the following command:

\. /home/student/Data/cit225/mysql/lab1/apply_mysql_lab1.sql

It creates a video store schema and seeds it with some basic data. You can use the show command to see the tables you've created, like:

show tables;

It displays:

+---------------------+
| Tables_in_studentdb |
+---------------------+
| address             |
| common_lookup       |
| contact             |
| contacts            |
| current_rental      |
| item                |
| member              |
| rental              |
| rental_item         |
| street_address      |
| system_user         |
| telephone           |
+---------------------+
12 rows in set (0.00 sec)
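
If you want a quick sanity check that the seeding script loaded data before you reverse engineer the schema, a simple row count works; the member table is one of the tables listed above:

SELECT COUNT(*) AS member_rows
FROM   member;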

After successfully creating and seeding the studentdb database, you can run MySQL Workbench by launching it from the search field (recommended). Alternatively, you can open it from a terminal session with the following command. Unfortunately, a command-line launch links the terminal and MySQL Workbench processes, and closing the terminal will also close MySQL Workbench.

mysql-workbench

Here are the instructions for the lab with MySQL Workbench:

FedoraMySQLWorkbenchHome

  1. The first displayed page of MySQL Workbench is the home page (click on it or any of the others to see the full size image). Click the symbol to the right of the MySQL Workbench title.

MySQL_Lab3_01

  2. After clicking the symbol, it launches the Startup New Connection dialog. Enter a name for your new connection. I recommend you enter MySQLConnection.

MySQL_Lab3_02

  3. Click the Test Connection button to see if it works.

MySQL_Lab3_03

  4. When you click the Test Connection button, MySQL Workbench prompts you for a password. After entering a password and verifying the connection, click the OK button to test the connection.

MySQL_Lab3_04

  5. Click the OK button to continue.

MySQL_Lab3_05

  6. Click the gray highlighted MySQLConnection connection icon below the MySQL Connection title to launch the MySQL Workbench application.

MySQL_Lab3_03

  7. When you click the gray highlighted MySQLConnection connection icon, MySQL Workbench prompts you for a password. After entering a password and verifying the connection, click the OK button to connect to the MySQL Workbench application.

MySQL_Lab3_06

  8. MySQL Workbench launches in the default view.

MySQL_Lab3_07

  9. Click on the Database menu option and then the Reverse Engineering… option, as shown in the illustration.

MySQL_Lab3_08

  10. This displays the Set Parameters for Connecting to a DBMS dialog. Click the Stored Connection list of values. Choose the MySQLConnection (if you used my suggestion) or the one you created from the list of values of the Stored Connection element. Click the Next button to continue.

MySQL_Lab3_09

  11. Enter the password and click the OK button to connect to the MySQL database.

MySQL_Lab3_10

  12. This is an in-progress display; it runs while waiting for the password and until this step of the wizard completes.

MySQL_Lab3_11

  13. This dialog displays when the MySQL Workbench application connects to the database, retrieves a schema (database) list from the database management system, and checks the common server configuration issues. Click the Next button to move to the next step of the wizard.

MySQL_Lab3_12

  14. This Select Schemas to Reverse Engineer dialog displays any available schemas. You check the schemas that you want. Click the Next button to move to the next step of the wizard.

MySQL_Lab3_09

  15. Enter the password and click the OK button to connect to the MySQL database to retrieve objects from the database management system.

MySQL_Lab3_14

  16. This dialog displays when the MySQL Workbench application retrieves objects from the database. Click the Next button to move to the next step of the wizard.

MySQL_Lab3_15

  17. This dialog displays when the MySQL Workbench application retrieves objects from the schemata and checks the result. Click the Next button to move to the next step of the wizard.

MySQL_Lab3_15

  18. This dialog displays the tables, views, and routines to import. Click the Execute button to move to the next step of the wizard.

MySQL_Lab3_16

  19. This dialog displays the tables, views, and routines to import. Click the Execute button to move to the next step of the wizard.

MySQL_Lab3_17

  20. This dialog shows the reverse engineering objects and puts them in the object image. Click the Next button to move to the next step of the wizard.

MySQL_Lab3_18

  21. This dialog displays a summary of reverse engineering objects. Click the Close button to move to the next step of the wizard.

MySQL_Lab3_19

  22. The MySQL Model dialog displays after you complete the reverse engineering process. Click the EER Diagram icon or EER Diagram tab to see a visual depiction of the database objects.

MySQL_Lab3_20

  23. The EER Diagram displays in the EER Diagram tab. The Navigator displays the tables as blue rectangles. You can scroll through the Canvas Panel to work with the display of tables and views.

MySQL_Lab3_21

  24. The next image shows the EER Diagram with the tables moved for display purposes. Sometimes there are too many relationship lines; MySQL Workbench lets you split the lines so they don't clutter the diagram.

MySQL_Lab3_22

  25. The next image shows you the Property tab of the EER Diagram. You should see that the drawSplit checkbox is checked, which suppresses the bottom-most relationship line from displaying on the Canvas Panel.

You can now save the MySQL Workbench file in the Lab 3 directory.

Written by maclochlainn

September 25th, 2014 at 5:10 pm

Capture MySQL Foreign Keys

with one comment

Shantanu asked a follow-up question on my Cleanup a MySQL Schema post from last month. He wanted to know if there was a way to capture foreign keys before removing them. The answer is yes, but how you do it depends on whether the primary key is based on a surrogate key using an auto-incrementing sequence or a natural key using descriptive columns.

You can capture foreign keys with a simple query when they’re determined by a single column value. However, this script creates ALTER statements that will fail when a table holds a multiple column foreign key value. The SELECT statement would look like this when capturing all foreign key values in a MySQL Server:

 1  SELECT   CONCAT('ALTER TABLE',' ',tc.table_schema,'.',tc.table_name,' '
 2                 ,'ADD CONSTRAINT',' fk_',tc.constraint_name,' '
 3                 ,'FOREIGN KEY (',kcu.column_name,')',' '
 4                 ,'REFERENCES',' ',kcu.referenced_table_schema,'.',kcu.referenced_table_name,' '
 5                 ,'(',kcu.referenced_column_name,');') AS script
 6  FROM     information_schema.table_constraints tc JOIN information_schema.key_column_usage kcu
 7  ON       tc.constraint_name = kcu.constraint_name
 8  AND      tc.constraint_schema = kcu.constraint_schema
 9  WHERE    tc.constraint_type = 'foreign key'
10  ORDER BY tc.TABLE_NAME
11  ,        kcu.column_name;

In MySQL 8 forward, you must make a case-sensitive comparison on line 9, like:

 9  WHERE    tc.constraint_type = 'FOREIGN KEY'

You would add a line in the WHERE clause to restrict it to a schema and a second line to restrict it to a table within a schema, like this:

AND      tc.table_schema = 'your_mysql_database'
AND      tc.table_name = 'your_table_name'

Unfortunately, when the primary and foreign keys involve two or more columns, you require a procedure and a function. The function is necessary because you need to read two cursors, and a NOT FOUND handler can't be nested in the current deployment of MySQL's SQL/PSM stored programs. In this example the storeForeignKeys procedure finds the table's foreign key constraints, and the columnList function adds the column detail. The command_list table stores the commands to restore foreign key constraints.

The command_list table that stores the values is:

CREATE TABLE command_list
( command_list_id  INT UNSIGNED PRIMARY KEY AUTO_INCREMENT
, sql_command      VARCHAR(6)    NOT NULL
, sql_object       VARCHAR(10)   NOT NULL
, sql_constraint   VARCHAR(11)
, sql_statement    VARCHAR(768)  NOT NULL);

This is the storeForeignKeys procedure:

CREATE PROCEDURE storeForeignKeys
( pv_schema_name  VARCHAR(64)
, pv_table_name   VARCHAR(64))
BEGIN
 
  /* Declare local variables. */
  DECLARE lv_schema_name              VARCHAR(64);
  DECLARE lv_table_name               VARCHAR(64);
  DECLARE lv_constraint_name          VARCHAR(64);
  DECLARE sql_stmt                    VARCHAR(1024);
 
  /* Declare control variable for handler. */
  DECLARE fetched    INT DEFAULT 0;
 
  /* Declare local cursor for foreign key table, it uses null replacement
     because the procedure supports null parameters. When you use null 
     parameters, you get all foreign key values. */
  DECLARE foreign_key_table CURSOR FOR
    SELECT   tc.table_schema
    ,        tc.table_name
    ,        tc.constraint_name
    FROM     information_schema.table_constraints tc
    WHERE    tc.table_schema = IFNULL(lv_schema_name, tc.table_schema)
    AND      tc.table_name = IFNULL(lv_table_name, tc.table_name)
    AND      tc.constraint_type = 'FOREIGN KEY'
    ORDER BY tc.table_name;
 
  /* Declare a not found record handler to close a cursor loop. */
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET fetched = 1;
 
  /* Assign parameter values to local variables. */
  SET lv_schema_name := pv_schema_name;
  SET lv_table_name := pv_table_name;
 
  /* Open a local cursor. */  
  OPEN foreign_key_table;
  cursor_foreign_key_table: LOOP
 
    /* Fetch a row into the local variables. */
    FETCH foreign_key_table
    INTO  lv_schema_name
    ,     lv_table_name
    ,     lv_constraint_name;
 
    /* Catch handler for no more rows found from the fetch operation. */
    IF fetched = 1 THEN LEAVE cursor_foreign_key_table; END IF;
 
    /* The nested calls to the columnList function returns the list of columns
       in the foreign key. Surrogate primary to foreign keys can be resolved 
       with a simple query but natural primary to foreign key relationships
       require the list of columns involved in the primary and foreign key.
       The columnList function returns the list of foreign key columns in 
       the dependent table and the list of referenced columns (or the primary
       key columns) in the independent table. */
    SET sql_stmt := CONCAT('ALTER TABLE ',' ',lv_schema_name,'.',lv_table_name,' '
                          ,'ADD CONSTRAINT ',lv_constraint_name,' '
                          ,'FOREIGN KEY (',columnList(lv_schema_name,lv_table_name,lv_constraint_name));
 
    /* Record the SQL statements. */
    INSERT INTO command_list
    ( sql_command
    , sql_object
    , sql_constraint
    , sql_statement )
    VALUES
    ('ALTER'
    ,'TABLE'
    ,'FOREIGN KEY'
    , sql_stmt );
 
  END LOOP cursor_foreign_key_table;
  CLOSE foreign_key_table;  
 
END;
$$

This is the columnList function:

CREATE FUNCTION columnList
( pv_schema_name      VARCHAR(64)
, pv_table_name       VARCHAR(64)
, pv_constraint_name  VARCHAR(64)) RETURNS VARCHAR(512)
BEGIN
 
  /* Declare local variables. */
  DECLARE lv_schema_name              VARCHAR(64);
  DECLARE lv_table_name               VARCHAR(64);
  DECLARE lv_constraint_name          VARCHAR(64);
  DECLARE lv_column_count             INT UNSIGNED;
  DECLARE lv_column_name              VARCHAR(64);
  DECLARE lv_column_list              VARCHAR(512);
  DECLARE lv_column_ref_list          VARCHAR(64);
  DECLARE lv_referenced_table_schema  VARCHAR(64);
  DECLARE lv_referenced_table_name    VARCHAR(64);
  DECLARE lv_referenced_column_name   VARCHAR(64);
  DECLARE lv_return_string            VARCHAR(768);
 
  /* Declare control variable for handler. */
  DECLARE fetched    INT DEFAULT 0;
 
  /* Declare local cursor for foreign key column. */
  DECLARE foreign_key_column CURSOR FOR
    SELECT   kcu.column_name
    ,        kcu.referenced_table_schema
    ,        kcu.referenced_table_name
    ,        kcu.referenced_column_name
    FROM     information_schema.key_column_usage kcu
    WHERE    kcu.referenced_table_schema = lv_schema_name
    AND      kcu.table_name = lv_table_name
    AND      kcu.constraint_name = lv_constraint_name
    ORDER BY kcu.column_name;
 
  /* Declare a not found record handler to close a cursor loop. */
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET fetched = 1;
 
  /* Assign parameter values to local variables. */
  SET lv_schema_name := pv_schema_name;
  SET lv_table_name := pv_table_name;
  SET lv_constraint_name := pv_constraint_name;
 
  /* Set the first column value. */
  SET lv_column_count := 1;
 
  /* Open the nested cursor. */
  OPEN  foreign_key_column;
  cursor_foreign_key_column: LOOP
 
    /* Fetch a row into the local variables. */    
    FETCH foreign_key_column
    INTO  lv_column_name
    ,     lv_referenced_table_schema
    ,     lv_referenced_table_name
    ,     lv_referenced_column_name;
 
    /* Catch handler for no more rows found from the fetch operation. */
    IF fetched = 1 THEN LEAVE cursor_foreign_key_column; END IF;
 
    /* Initialize the column list or add to it. */
    IF lv_column_count = 1 THEN
      SET lv_column_list := lv_column_name;
      SET lv_column_ref_list := lv_referenced_column_name;
 
      /* Increment the counter value. */
      SET lv_column_count := lv_column_count + 1;
    ELSE
      SET lv_column_list := CONCAT(lv_column_list,',',lv_column_name);
      SET lv_column_ref_list := CONCAT(lv_column_ref_list,',',lv_referenced_column_name);
    END IF;
 
  END LOOP cursor_foreign_key_column;
  CLOSE foreign_key_column;
 
  /* Set the return string to a list of columns. */
  SET lv_return_string :=
        CONCAT(lv_column_list,')',' '
              ,'REFERENCES',' ',lv_referenced_table_schema,'.',lv_referenced_table_name,' ' 
              ,'(',lv_column_ref_list,');');
 
  RETURN lv_return_string;
END;
$$

You can call the procedure with a schema and table name, and you’ll get the foreign keys from just that table. You can create the following parent and child tables to test how multiple column foreign keys work in the script (provided because most folks use surrogate keys):

CREATE TABLE parent
( first_name  VARCHAR(20)  NOT NULL DEFAULT ''
, last_name   VARCHAR(20)  NOT NULL DEFAULT ''
, PRIMARY KEY (first_name, last_name)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
 
CREATE TABLE child
( child_name  VARCHAR(20)  NOT NULL
, first_name  VARCHAR(20)  DEFAULT NULL
, last_name   VARCHAR(20)  DEFAULT NULL
, PRIMARY KEY (child_name)
, KEY fk_parent(first_name, last_name)
, CONSTRAINT fk_parent FOREIGN KEY (first_name, last_name)
  REFERENCES parent (first_name, last_name)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

You call the storeForeignKeys procedure for the child table with this syntax:

CALL storeForeignKeys('studentdb', 'child');

You call the storeForeignKeys procedure for all tables in a schema with this syntax:

CALL storeForeignKeys('studentdb', null);

While unlikely you’ll need this, the following calls the storeForeignKeys procedure for all tables in all schemas:

CALL storeForeignKeys(null, null);
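
Before exporting the statements, you can review what the procedure captured with a simple query against the command_list table:

SELECT   command_list_id
,        sql_statement
FROM     command_list
ORDER BY command_list_id;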

You can export the command sequence with the following command to a script file:

SELECT sql_statement
INTO OUTFILE 'c:/Data/MySQL/apply_foreign_keys.sql'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
FROM command_list;

While preservation of tables and foreign keys is best managed by using a tool, like MySQL Workbench, it’s always handy to have scripts to do specific tasks. I hope this helps those looking for how to preserve foreign keys. You also can find a comprehensive treatment on how to write SQL/PSM code in Chapter 14 of my Oracle Database 11g and MySQL 5.6 Developer Handbook.

Written by maclochlainn

March 17th, 2014 at 11:27 pm