MacLochlainns Weblog

Michael McLaughlin's Technical Blog

A tkprof Korn Shell

Reviewing old files, I thought posting my tkprof.ksh script would be helpful. The script assumes you’re using the Oracle E-Business Suite demo database, hence the APPS/APPS connection; if I get a chance this summer, I’ll convert it to the Bash shell.

#!/bin/ksh
# -------------------------------------------------------------------------
# Author:   Michael McLaughlin
# Name:     tkprof.ksh
 
# Purpose:  The program takes the following arguments:
#           1. A directory
#           2. A search string
#           3. A target directory
#           It assumes raw trace files have an extension of ".trc".
#           The output file name follows the pattern
#           file<count>_<day>-<mon>-<yr>_<hr>:<min>, because it is
#           possible for multiple trace files to be written during
#           the same minute.
# -------------------------------------------------------------------------
 
# Function to find minimum field delimiter.
function min
{
  # Find the whitespace that precedes the file date.
  until [[ $(ls -al $i | cut -c$minv-$minv) == " " ]]; do
    let minv=minv+1
  done
}
 
# Function to find maximum field delimiter.
function max
{
  # Find the whitespace that succeeds the file date.
  until [[ $(ls -al $i | cut -c$maxv-$maxv) == " " ]]; do
    let maxv=maxv+1
  done
}
 
# Debugging enabled by uncommenting the "set -x"
# set -x
 
# Print header information
print =================================================================
print Running [tkprof.ksh] script ...
 
# Evaluate whether an argument is provided and, if no argument
# is provided, substitute the present working directory.
if   [[ $# == 0 ]]; then
  dir=${PWD}  
  str="*"
  des=${PWD}  
elif [[ $# == 1 ]]; then
  dir=${1}
  str="*"
  des=${1}
elif [[ $# == 2 ]]; then
  dir=${1}
  str=${2}
  des=${1}
elif [[ $# == 3 ]]; then
  dir=${1}
  str=${2}
  des=${3}
fi
 
# Evaluate whether the argument is a directory file.
if [[ -d ${dir} ]] && [[ -d ${des} ]]; then
 
  # Print what directory and search string are targets.
  print =================================================================
  print Running tkprof from [${dir}] directory ...
  print The files contain a string of [${str}] ...
  print =================================================================
 
  # Evaluate whether the argument is the present working
  # directory and if not change directory to that target
  # directory so file type evaluation will work.
  if [[ ${dir} != ${PWD} ]]; then
    cd ${dir} 
  fi
 
  # Set file counter.
  let fcnt=0
 
  # Submit compression to the background as a job.
  for i in $(grep -li "${str}" *.trc); do
 
    # Evaluate whether file is an ordinary file.
    if [[ -f ${i} ]]; then
 
      # Set default values each iteration.
      let minv=40
      let maxv=53
 
      # Increment counter.
      let fcnt=fcnt+1
 
      # Call functions to reset min and max values where necessary.
      min ${i}
      max ${i}
 
      # Parse date stamp from trace file without multiple IO calls.
      # Assumption that the file is from the current year.
      date=$(ls -al ${i} | cut -c${minv}-${maxv}) 
      mon=$(echo ${date} | cut -c1-3)
      yr=$(date          | cut -c25-28)
 
      # Check whether the day is less than 10 and pad it to adjust
      # for the reduced whitespace in the ls output.
      if (( $(echo ${date} | cut -c5-6) < 10 )); then
        day=0$(echo ${date}| cut -c5-5)
        hr=$(echo ${date}  | cut -c7-8)
        min=$(echo ${date} | cut -c10-11)
      else
        day=$(echo ${date} | cut -c5-6)
        hr=$(echo ${date}  | cut -c8-9)
        min=$(echo ${date} | cut -c11-12)
      fi
 
      fn=file${fcnt}_${day}-${mon}-${yr}_${hr}:${min}
 
      print Old [$i] and new [$des/$fn]
      tkprof ${i} ${des}/${fn}.prf explain=APPS/APPS sort='(prsela,exeela,fchela)'
 
      # Print what directory and search string are targets.
      print =================================================================
    fi
  done
 
else
  # Print message that a directory argument was not provided.
  print You failed to provide a single valid directory argument.
fi
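For example, you might call the script with a trace directory, a search string, and a target directory like this (the paths are hypothetical, so substitute your own trace and output directories):

./tkprof.ksh /u01/app/oracle/admin/PROD/udump "ORA-" /tmp/tkprof_out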

I hope this helps those looking for a solution.

Written by maclochlainn

May 21st, 2023 at 2:25 am

Listener for APEX

Unless we let dbca build the listener.ora file, we often leave off some component. For example, running the listener control program with the following command returns a status that indicates an incorrectly configured listener.ora file:

lsnrctl status

It returns the following, which displays an endpoint for the XDB Server (I’m using Oracle Database 11g XE because it’s pre-containerized and has a small testing footprint):

LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 24-MAR-2023 00:59:06
 
Copyright (c) 1991, 2011, Oracle.  All rights reserved.
 
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC_FOR_XE)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date                21-MAR-2023 21:17:37
Uptime                    2 days 3 hr. 41 min. 29 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Default Service           XE
Listener Parameter File   /u01/app/oracle/product/11.2.0/xe/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/localhost/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC_FOR_XE)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=8080))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "XE" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
Service "XEXDB" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
The command completed successfully

The listener is missing the second SID_DESC entry, the one with the CLRExtProc value, from the SID_LIST_LISTENER. A complete listener.ora file for the Oracle Database XE should be as follows:

# listener.ora Network Configuration FILE:
 
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/xe)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = CLRExtProc)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/xe)
      (PROGRAM = extproc)
    )
  )
 
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC_FOR_XE))
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost.localdomain)(PORT = 1521))
    )
  )
 
DEFAULT_SERVICE_LISTENER = (XE)
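After saving the file, you can ask a running listener to reread the listener.ora file without restarting it, and then confirm the result:

lsnrctl reload
lsnrctl status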

With this listener.ora file, the Oracle listener control utility will return the following correct status, which hides the XDB Server’s endpoint:

LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 24-MAR-2023 02:38:57
 
Copyright (c) 1991, 2011, Oracle.  All rights reserved.
 
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC_FOR_XE)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date                24-MAR-2023 02:38:15
Uptime                    0 days 0 hr. 0 min. 42 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Default Service           XE
Listener Parameter File   /u01/app/oracle/product/11.2.0/xe/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/11.2.0/xe/log/diag/tnslsnr/localhost/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC_FOR_XE)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1521)))
Services Summary...
Service "CLRExtProc" has 1 instance(s).
  Instance "CLRExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

It seems a number of examples on the web leave the SID_LIST_LISTENER entry for the CLRExtProc value out of the listener.ora file. As always, I hope this helps those looking for a complete solution rather than generic instructions without a concrete example.

Written by maclochlainn

March 24th, 2023 at 1:00 am

AWS EC2 TNS Listener

Having configured an AlmaLinux 8.6 with Oracle Database 11g XE, MySQL 8.0.30, and PostgreSQL 15, we migrated it to AWS EC2 and provisioned it. We used the older and de-supported Oracle Database 11g XE because it didn’t require any kernel modifications and had a much smaller footprint.

I had to address why attempting to connect with the sqlplus utility raised the following error after provisioning a copy with a new static IP address:

ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

A connection from SQL Developer raises a more addressable error, like:

ORA-17069

I immediately tried to check the connection with the tnsping utility and found that tnsping worked fine. However, when I tried to connect with the sqlplus utility it raised an ORA-12514 connection error.

There were no diagnostic steps beyond checking the tnsping utility. So, I had to experiment with what might block communication.

I changed the host name from ip-172-58-65-82.us-west-2.compute.internal to a localhost string in both the listener.ora and tnsnames.ora. The listener.ora file:

# listener.ora Network Configuration FILE:
 
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/xe)
      (PROGRAM = extproc)
    )
  )
 
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC_FOR_XE))
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    )
  )
 
DEFAULT_SERVICE_LISTENER = (XE)

The tnsnames.ora file:

# tnsnames.ora Network Configuration FILE:
 
XE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = XE)
    )
  )
 
EXTPROC_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC_FOR_XE))
    )
    (CONNECT_DATA =
      (SID = PLSExtProc)
      (PRESENTATION = RO)
    )
  )

I suspected that it might be related to the localhost value. So, I checked the /etc/hostname and /etc/hosts files.

Then, I modified the /etc/hostname file by removing the AWS EC2 domain address. I did it from a recollection that Oracle’s TNS raises errors for dots, or periods, in some addresses.

The /etc/hostname file:

ip-172-58-65-82

The /etc/hosts file:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 ip-172-58-65-82
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6 ip-172-58-65-82
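After changing the files, I bounced the listener and reran the connection tests. The xe net service name comes from the tnsnames.ora file above; substitute any valid user for system:

lsnrctl stop
lsnrctl start
tnsping xe
sqlplus system@xe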

Now, we can connect to the Oracle Database 11g XE instance with the sqlplus utility. I believe this type of solution will work for other AWS EC2 provisioned Oracle databases.

Written by maclochlainn

March 22nd, 2023 at 10:09 pm

MySQL @SQL_MODE

Installing MySQL Workbench 8 on Windows, we discovered that the default configuration no longer sets ONLY_FULL_GROUP_BY as part of the default SQL_MODE parameter value. While I’ve written a stored function to set the SQL_MODE parameter value for a session, some students didn’t understand that such a call is only valid in the scope of a connection to the database server. They felt the function didn’t work because they didn’t understand the difference between connecting to the MySQL CLI and clicking the lightning bolt in MySQL Workbench.

So, here are the instructions to reset the default SQL_MODE parameter value for Windows. You need to edit the setting in the my.ini file, which is in the C:\ProgramData\MySQL\MySQL Server 8.0 directory. The default installation will have the following:

# Set the SQL mode to strict
sql-mode="STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION"

You need to change it to the following in an editor with Administrative privileges:

# Set the SQL mode to strict
sql-mode="STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION,ONLY_FULL_GROUP_BY"

Then, open the Services console by launching services.msc from the command prompt. In the list of services, find the MySQL80 service and restart it. You can verify the change by connecting to the MySQL 8.0.* server and running the following SQL query:

SELECT @@SQL_MODE;
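Assuming only the my.ini change above, the query should return the same comma-delimited list you set, like:

STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION,ONLY_FULL_GROUP_BY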

That’s how you convert Windows to use only traditional group by behaviors in SQL. As always, I hope this helps those looking for a solution.

Written by maclochlainn

February 13th, 2023 at 12:55 pm

AlmaLinux Libraries

I discovered a dependency for the MySQL Workbench installation on AlmaLinux 8. I neglected to fully cover it when I documented the installation in a VM of AlmaLinux 9. I’ll go back later and update that entry, but for now you need the following dependencies:

proj-6.3.2-4.el8.x86_64.rpm
proj-datumgrid-1.8-6.3.2.4.el8.noarch.rpm
proj-devel-6.3.2-4.el8.x86_64.rpm

Install them from the download directory like this:

sudo dnf install -y *.rpm

Log file:

Last metadata expiration check: 3:01:53 ago on Fri 10 Feb 2023 03:37:49 AM UTC.
Dependencies resolved.
==========================================================================================
 Package                Architecture   Version                 Repository            Size
==========================================================================================
Installing:
 proj                   x86_64         6.3.2-4.el8             @commandline         2.0 M
 proj-datumgrid         noarch         1.8-6.3.2.4.el8         @commandline         5.4 M
 proj-devel             x86_64         6.3.2-4.el8             @commandline          89 k
 
Transaction Summary
==========================================================================================
Install  3 Packages
 
Total size: 7.5 M
Installed size: 17 M
Downloading Packages:
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                  1/1 
  Installing       : proj-datumgrid-1.8-6.3.2.4.el8.noarch                            1/3 
  Installing       : proj-6.3.2-4.el8.x86_64                                          2/3 
  Installing       : proj-devel-6.3.2-4.el8.x86_64                                    3/3 
  Running scriptlet: proj-devel-6.3.2-4.el8.x86_64                                    3/3 
  Verifying        : proj-6.3.2-4.el8.x86_64                                          1/3 
  Verifying        : proj-datumgrid-1.8-6.3.2.4.el8.noarch                            2/3 
  Verifying        : proj-devel-6.3.2-4.el8.x86_64                                    3/3 
 
Installed:
  proj-6.3.2-4.el8.x86_64                 proj-datumgrid-1.8-6.3.2.4.el8.noarch          
  proj-devel-6.3.2-4.el8.x86_64          
 
Complete!

A quick update while installing AlmaLinux for AWS.

Written by maclochlainn

February 9th, 2023 at 11:47 pm

Data Engineer?

Students often ask me about data engineering. I try to explain some of the aspects and how the tasks can be organized, but I never laid out all the titles. I really like the illustration from the Gartner Group because it does that. You can download the full “What Are the Essential Roles for Data and Analytics” paper here.

An excerpt from Gartner’s paper:

Data Engineer

Data engineering is the practice of making the appropriate data available to various data consumers (including data scientists, data and business analysts, citizen integrators, and line-of-business users). It is a discipline that involves collaboration across business and IT units. This key discipline requires skilled data engineers to support both IT and business teams.

Data engineers are primarily responsible for building, managing and operationalizing data pipelines in support of key D&A use cases. They are also primarily responsible for leading the tedious (and often complex) task of:

  • Curating datasets and data pipelines created by nontechnical users (e.g., through self-service data preparation tools), data scientists or even IT resources.
  • Operationalizing data delivery for production-level deployments.

I hope the summary is helpful and Gartner’s paper interesting.

Written by maclochlainn

February 7th, 2023 at 1:07 pm

Updating SQL_MODE

This is an update for a MySQL 8 stored PSM that adds the ONLY_FULL_GROUP_BY mode to the global SQL_MODE variable when it isn’t set during a session. Here’s the code:

/* Drop procedure conditionally on whether it exists already. */
DROP PROCEDURE IF EXISTS set_full_group_by;
 
/* Reset the delimiter so semicolons inside the procedure body don't end the CREATE statement. */
DELIMITER $$
 
/* Create a procedure to verify and set connection parameter. */
CREATE PROCEDURE set_full_group_by()
  LANGUAGE SQL
  NOT DETERMINISTIC
  SQL SECURITY DEFINER
  COMMENT 'Set connection parameter when not set.'
BEGIN
 
  /* Check whether full group by is set in the session's SQL_MODE
     and, if unset, append it to the global SQL_MODE. */
  IF EXISTS
    (SELECT TRUE
     FROM   dual
     WHERE  NOT REGEXP_LIKE(@@SESSION.SQL_MODE,'ONLY_FULL_GROUP_BY'))
  THEN
    SET @@GLOBAL.SQL_MODE := CONCAT(@@SESSION.sql_mode,',ONLY_FULL_GROUP_BY');
  END IF;
END;
$$
 
/* Reset the default delimiter. */
DELIMITER ;

You can call the set_full_group_by procedure with the CALL command:

CALL set_full_group_by();

You can see the SQL_MODE variable with the following query:

SELECT @@GLOBAL.SQL_MODE;

It’ll return:

+---------------------------------------------------------------+
| @@GLOBAL.SQL_MODE                                             |
+---------------------------------------------------------------+
| ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION |
+---------------------------------------------------------------+
1 row in set (0.00 sec)
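One caveat: setting the @@GLOBAL.SQL_MODE variable doesn’t change sessions that are already connected, including the session that called the procedure. New connections inherit the global value, and you can compare your current session’s scope with:

SELECT @@SESSION.SQL_MODE;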

As always, I hope this helps those looking to solve this type of problem.

Written by maclochlainn

January 13th, 2023 at 12:06 am

GROUP BY Quirk

It’s always interesting to see how others teach SQL courses. It can be revealing as to whether they understand SQL or only understand a dialect of SQL. In this case, one of my old students was taking a graduate course in SQL and the teacher was using MySQL. The teacher made an issue of using ANSI SQL:1999 or SQL3 and asked the following question, which I suspect is a quiz bank question from a textbook:

“How would you get all students’ names and for each student the number of courses that the
student has registered for?”

They referenced the MySQL 5.7 documentation for the GROUP BY clause and SQL:1999, as if MySQL implemented the ANSI SQL:1999 specification the way the standard defines it. I didn’t know whether to laugh or cry: they were referring to MySQL 5.7 when we’re all using MySQL 8, and anybody who has worked in more than MySQL knows that a GROUP BY in MySQL can work without listing the necessary non-aggregated SELECT-list columns in the GROUP BY clause.

For example, here is their working solution, which the instructor and the author of their MySQL textbook present as the correct perspective of ANSI SQL:1999 behavior. It doesn’t matter that their solution is actually based on ANSI SQL:1992 rather than SQL:1999, because it only succeeds thanks to a quirk of MySQL:

SELECT   a.studentname
,        COUNT(b.courseid)
FROM     students a INNER JOIN registeredcourses b
ON       a.studentid = b.studentid
GROUP BY a.studentid;

While it works in MySQL, it doesn’t work because it conforms to an ANSI standard. It works in MySQL notwithstanding that standard, because it violates the standard: the non-aggregated a.studentname column appears in the SELECT-list but not in the GROUP BY clause.

In Oracle, PostgreSQL, and SQL Server, it raises an exception. For example, Oracle raises the following exception:

SELECT   a.studentname
         *
ERROR at line 1:
ORA-00979: not a GROUP BY expression

The correct way to write the GROUP BY is:

SELECT   a.studentname
,        COUNT(b.courseid)
FROM     students a INNER JOIN registeredcourses b
ON       a.studentid = b.studentid INNER JOIN courses c
ON       b.courseid = c.courseid
GROUP BY a.studentname;

Then, it would return:

Student Name                   Course IDs
------------------------------ ----------
Montgomery Scott                        1
Leonard McCoy                           2
James Tiberus Kirk                      3

For reference, here’s a complete test case for MySQL:

/* Drop table conditionally. */
DROP TABLE IF EXISTS students;
 
/* Create table. */
CREATE TABLE students
( studentID    int unsigned primary key auto_increment
, studentName  varchar(30));
 
/* Drop table conditionally. */
DROP TABLE IF EXISTS courses;
 
/* Create table. */
CREATE TABLE courses
( courseid    int unsigned primary key auto_increment
, coursename  varchar(40));
 
/* Drop table conditionally. */
DROP TABLE IF EXISTS registeredcourses;
 
/* Create table. */
CREATE TABLE registeredcourses
( courseid    int unsigned
, studentid   int unsigned );
 
/* Insert into students. */
INSERT INTO students
( studentName )
VALUES
 ('James Tiberus Kirk')
,('Leonard McCoy')
,('Montgomery Scott');
 
/* Insert into courses. */
INSERT INTO courses
( coursename )
VALUES
 ('English Literature')
,('Physics')
,('English Composition')
,('Botany')
,('Mechanical Engineering');
 
/* Insert into registeredcourses. */
INSERT INTO registeredcourses
( studentid
, courseid )
VALUES
 (1,1)
,(1,3)
,(1,4)
,(2,2)
,(2,5)
,(3,4); 
 
/* Check global sql_mode to ensure only_full_group_by is set. */
SELECT @@GLOBAL.SQL_MODE;
 
/* Query with a column not found in the SELECT-list. */
SELECT   a.studentname
,        COUNT(b.courseid)
FROM     students a INNER JOIN registeredcourses b
ON       a.studentid = b.studentid
GROUP BY a.studentid;
 
/* Query consistent with ANSI SQL:1992 */
SELECT   a.studentname
,        COUNT(b.courseid)
FROM     students a INNER JOIN registeredcourses b
ON       a.studentid = b.studentid INNER JOIN courses c
ON       b.courseid = c.courseid
GROUP BY a.studentname;

and here’s another complete test case for Oracle:

/* Drop table unconditionally. */
DROP TABLE students;
 
/* Create table. */
CREATE TABLE students
( studentID    NUMBER PRIMARY KEY
, studentName  VARCHAR(30));
 
/* Drop table unconditionally. */
DROP TABLE courses;
 
/* Create table. */
CREATE TABLE courses
( courseid    NUMBER PRIMARY KEY
, coursename  VARCHAR(40));
 
/* Drop table unconditionally. */
DROP TABLE registeredcourses;
 
/* Create table. */
CREATE TABLE registeredcourses
( courseid    NUMBER
, studentid   NUMBER );
 
/* Insert values in student. */
INSERT INTO students ( studentid, studentName ) VALUES (1,'James Tiberus Kirk');
INSERT INTO students ( studentid, studentName ) VALUES (2,'Leonard McCoy');
INSERT INTO students ( studentid, studentName ) VALUES (3,'Montgomery Scott');
 
/* Insert values in courses. */
INSERT INTO courses ( courseid, coursename ) VALUES (1,'English Literature');
INSERT INTO courses ( courseid, coursename ) VALUES (2,'Physics');
INSERT INTO courses ( courseid, coursename ) VALUES (3,'English Composition');
INSERT INTO courses ( courseid, coursename ) VALUES (4,'Botany');
INSERT INTO courses ( courseid, coursename ) VALUES (5,'Mechanical Engineering');
 
/* Insert values into registeredcourses. */
INSERT INTO registeredcourses ( studentid, courseid ) VALUES (1,1);
INSERT INTO registeredcourses ( studentid, courseid ) VALUES (1,3);
INSERT INTO registeredcourses ( studentid, courseid ) VALUES (1,4);
INSERT INTO registeredcourses ( studentid, courseid ) VALUES (2,2);
INSERT INTO registeredcourses ( studentid, courseid ) VALUES (2,5);
INSERT INTO registeredcourses ( studentid, courseid ) VALUES (3,4); 
 
/* Non-ANSI SQL GROUP BY statement. */
SELECT   a.studentname
,        COUNT(b.courseid)
FROM     students a INNER JOIN registeredcourses b
ON       a.studentid = b.studentid
GROUP BY a.studentid;
 
/* ANSI SQL GROUP BY statement. */
SELECT   a.studentname AS "Student Name"
,        COUNT(b.courseid) AS "Course IDs"
FROM     students a INNER JOIN registeredcourses b
ON       a.studentid = b.studentid INNER JOIN courses c
ON       b.courseid = c.courseid
GROUP BY a.studentname;

I hope this helps those learning the correct way to write SQL.

Written by maclochlainn

January 12th, 2023 at 11:30 pm

TDE on PostgreSQL

The scope of Transparent Data Encryption (TDE) in PostgreSQL only applies to columns. It does not provide other forms of encryption, like table-level or database-level encryption, so those who deploy PostgreSQL may need to implement additional security measures to protect those database components.

You need to know two key elements before exploring TDE in PostgreSQL: schemas inside a database, and extensions. Unlike many databases, a PostgreSQL schema is not synonymous with a database. You may have multiple schemas inside any PostgreSQL database.

Creating an extension is a one-time event, so it’s easier to show you that first. You create the pgcrypto extension with the following command:

CREATE EXTENSION pgcrypto;

The public schema is the one most users deploy, but for the purpose of hiding our AES encryption key this example creates a hidden schema. Unless you change the default search_path setting, the hidden schema is not visible when connecting to the database.

You create the hidden schema with the following idempotent (re-runnable) set of commands:

/* Drop dependent objects before dropping the schema. */
DROP TABLE IF EXISTS hidden.aes_key;
DROP FUNCTION IF EXISTS hidden.get_aes_key;
 
/*
 *  Drop function with cascade to remove the
 *  film_character_t trigger at same time.
 */
DROP FUNCTION IF EXISTS hidden.film_character_dml_f CASCADE;
 
/* Drop the schema conditionally. */
DROP SCHEMA IF EXISTS hidden;
 
/* Create the schema. */
CREATE SCHEMA hidden;
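The hidden schema is only hidden from casual browsing; any user with privileges on it can still query the key table. Assuming you create the schema as its owner, you can also lock it down like this:

/* Restrict access to the hidden schema and its contents. */
REVOKE ALL ON SCHEMA hidden FROM PUBLIC;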

Next, we need to create an aes_key table and a get_aes_key function in the hidden schema. The table will store the AES encryption key, and the function lets us create an AES encryption key.

/* Create an aes encryption key table. */
CREATE TABLE hidden.aes_key
( aes_key  text );
 
/* Create a hidden function to build an AES encryption key. */
CREATE OR REPLACE
  FUNCTION hidden.get_aes_key() RETURNS text AS
  $$
  BEGIN
    RETURN gen_random_bytes(16)::text;
  END;
  $$
  LANGUAGE plpgsql;

After creating the hidden get_aes_key() function, you insert a single row into the aes_key table by prefacing both the table and function with the hidden schema name, like this:

/* Insert the AES encryption key into a table. */ 
INSERT INTO hidden.aes_key
( aes_key )
VALUES
( hidden.get_aes_key());

Having built the plumbing for our AES encryption key, let’s show you how to encrypt and decrypt string values. This example lets you create an idempotent film_character table in the public schema, like:

/* Drop the table conditionally. */  
DROP TABLE IF EXISTS film_character;
 
/* Create the demonstration table for encrypting and decrypting strings. */ 
CREATE TABLE film_character
( character_id    serial PRIMARY KEY
, plain_text      text
, encrypted_text  bytea );

After creating the AES encryption key table and function and inserting a row of data, you need to create a public get_key() function, like:

/* Create a public function to retrieve the AES encryption key. */
CREATE OR REPLACE
  FUNCTION get_key() RETURNS text AS
  $$
  DECLARE
    retval  text;
  BEGIN
    SELECT aes_key INTO retval FROM hidden.aes_key;
    RETURN retval;
  END;
  $$
  LANGUAGE plpgsql;

The following INSERT statement writes a plain text column and an encrypted text column into the film_character table. The get_key() function hides how the pgp_sym_encrypt function encrypts the string.

/* Insert plain and encrypted text into a table. */
INSERT INTO film_character
( plain_text
, encrypted_text )
VALUES
('Severus Snape'
, pgp_sym_encrypt('Slytherin',get_key()));

The following query displays the plain and encrypted text stored in a row of the film_character table.

/* Query plain and encrypted text from a table. */
SELECT character_id
,      plain_text
,      encrypted_text
FROM   film_character;

It displays:

 character_id |  plain_text   | encrypted_text
--------------+---------------+--------------------------------------------------------------------------------------------------------------------------------------------------------
            1 | Severus Snape | \xc30d04070302fa1c4eebd90204cc7bd23901f1d4fa91b2455c3ef2987a305aebe01a4d94f9ebb467d6cb7a3846342ccd09cb55ac5e82a71cbaef93728fbeb4aaa9bf71b6fb93457758d1
(1 row)

Last, the following query displays the plain and decrypted text by using the pgp_sym_decrypt function:

/* Query the plain and decrypted text from a table. */
SELECT character_id
,      plain_text
,      pgp_sym_decrypt(encrypted_text,get_key()) AS encrypted_text
FROM   film_character;

The query returns the plain and decrypted values:

 character_id |  plain_text   | encrypted_text
--------------+---------------+-----------------
            1 | Severus Snape | Slytherin
(1 row)

However, this approach exposes the method for encrypting the encrypted_text column’s string value. You can hide this by creating a film_character_dml_f function in the hidden schema and a film_character_t trigger in the public schema, like:

/* Create trigger function for insert or update. */
CREATE FUNCTION hidden.film_character_dml_f()
  RETURNS trigger AS
$$
DECLARE
  /* Declare local variable. */
  unencrypted_input  VARCHAR(30);
BEGIN
  unencrypted_input := new.encrypted_text::text;
  /* Encrypt the column. */
  new.encrypted_text := pgp_sym_encrypt(unencrypted_input,get_key());
 
  /* Return new record type. */
  RETURN NEW;
END;
$$
LANGUAGE plpgsql;
 
CREATE TRIGGER film_character_t
  BEFORE INSERT OR UPDATE ON film_character
  FOR EACH ROW EXECUTE FUNCTION hidden.film_character_dml_f();

Now, you can insert the plain text data in an INSERT statement and the encryption occurs without disclosing how it happens. Here’s a sample statement:

INSERT INTO film_character
( plain_text
, encrypted_text )
VALUES
('Harry Potter'
,'Gryffindor');
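Because the trigger fires BEFORE INSERT OR UPDATE, an UPDATE statement gets the same transparent treatment; here’s a sketch that re-encrypts the second row’s value:

/* Update the encrypted column with a new plain text value. */
UPDATE film_character
SET    encrypted_text = 'Hufflepuff'
WHERE  plain_text = 'Harry Potter';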

A query of the table shows you that both rows have an encrypted value in the encrypted_text column.

/* Query plain and encrypted text from a table. */
SELECT character_id
,      plain_text
,      encrypted_text
FROM   film_character;

Displayed like:

 character_id |  plain_text   |                                                                                   encrypted_text
--------------+---------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
            1 | Severus Snape | \xc30d040703026716034f140d83e76cd23a01f99168afebe50d760b85c69373e3947c74473115a939843887db8e102cd0b2524378f4d684e0ba91c20afc436a056cd983fc47794eef7d4904
            2 | Harry Potter  | \xc30d040703020d8cc71d1f84e1ef6fd24701fd308f669e28a6135beac130fc51a6ccb5cef3c5005f4f557207fe5c84c4aedbb5b098dc9a882a9b7d801c61e34cd90517b4628b5a18b96b3fc61663b48391146b8c0fa2a858
(2 rows)

As always, I hope this code complete solution helps those trying to work with this technical stack.

Written by maclochlainn

January 8th, 2023 at 10:23 am

DML Event Management

Data Manipulation Language (DML)

DML statements add data to, change data in, and remove data from tables. This section examines four DML statements—the INSERT, UPDATE, DELETE, and MERGE statements—and builds on concepts of data transactions. The INSERT statement adds new data, the UPDATE statement changes data, the DELETE statement removes data from the database, and the MERGE statement either adds new data or changes existing data.

Any INSERT, UPDATE, MERGE, or DELETE SQL statement that adds, updates, or deletes rows in a table locks rows in a table and hides the information until the change is committed or undone (that is, rolled back). This is the nature of ACID-compliant SQL statements. Locks prevent other sessions from making a change while a current session is working with the data. Locks also restrict other sessions from seeing any changes until they’re made permanent. The database keeps two copies of rows that are undergoing change. One copy of the rows with pending changes is visible to the current session, while the other displays committed changes only.

ACID Compliant Transactions

ACID compliance relies on a two-phase commit (2PC) protocol and ensures that the current session is the only one that can see new inserts, updated column values, and the absence of deleted rows. Other sessions run by the same or different users can’t see the changes until you commit them.

ACID Compliant INSERT Statements

The INSERT statement adds rows to existing tables and uses a 2PC protocol to implement ACID-compliant guarantees. The SQL INSERT statement is a DML statement that adds one or more rows to a table. Oracle supports a VALUES clause when adding a single row, and supports a subquery when adding one-to-many rows.

The figure below shows a flow chart depicting an INSERT statement. The process of adding one or more rows to a table occurs during the first phase of an INSERT statement. Adding the rows exhibits both atomic and consistent properties. Atomic means all or nothing: it adds one or more rows and succeeds, or it doesn’t add any rows and fails. Consistent means that the addition of rows is guaranteed whether the database engine adds them sequentially or concurrently in threads.

Concurrent behaviors happen when the database parallelizes DML statements. This is similar to the concept of threads as lightweight processes that work under the direction of a single process. The parallel actions of a single SQL statement delegate and manage work sent to separate threads. Oracle supports all ACID properties and implements threaded execution as parallel operations. All tables support parallelization.

After adding the rows to a table, the isolation property prevents any other session from seeing the new rows—that means another session started by the same user or by another user with access to the same table. The atomic, consistent, and isolation properties occur in the first phase of any INSERT statement. The durable property is exclusively part of the second phase of an INSERT statement, and rows become durable when the COMMIT statement ratifies the insertion of the new data.

ACID Compliant UPDATE Statements

An UPDATE statement changes column values in one-to-many rows. With a WHERE clause, you update only rows of interest, but if you forget the WHERE clause, an UPDATE statement would run against all rows in a table. Although you can update any column in a row, it’s generally bad practice to update a primary or foreign key column because you can break referential integrity. You should only update non-key data in tables—that is, the data that doesn’t make a row unique within a table.

Changes to column values are atomic when they work. For scalability reasons, the database implementation of updates to many rows is often concurrent, in threads through parallelization. This process can span multiple process threads and uses a transaction paradigm that coordinates changes across the threads. The entire UPDATE statement fails when any one thread fails.

Similar to the INSERT statement, UPDATE statement changes to column values are also hidden until they are made permanent with the application of the isolation property. The changes are hidden from other sessions, including sessions begun by the same database user.

It’s possible that another session might attempt to lock or change data in a modified but uncommitted row. When this happens, the second DML statement encounters a lock and goes into a wait state. Unless you set a timeout for the wait state with a clause such as the following, the FOR UPDATE clause waits until the target rows are unlocked:

WAIT n
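For example, the following Oracle query against a hypothetical orders table locks the pending rows but gives up with an ORA-30006 error when another session holds them for more than ten seconds:

SELECT order_id
FROM   orders
WHERE  status = 'PENDING'
FOR UPDATE WAIT 10;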

As the figure below shows, actual updates are first-phase commit elements. While an UPDATE statement changes data, it changes only the current session values until it is made permanent by a COMMIT statement. Like the INSERT statement, the atomic, consistent, and isolation properties of an UPDATE statement occur during the first phase of a 2PC process. Changes to column values are atomic when they work. Any column changes are hidden from other sessions until the UPDATE statement is made permanent by a COMMIT or ROLLBACK statement, which is an example of the isolation property.

Any changes to column values can be modified by an ON UPDATE trigger before a COMMIT statement. ON UPDATE triggers run inside the first phase of the 2PC process. A COMMIT or ROLLBACK statement ends the transaction scope of the UPDATE statement.

The Oracle database engine can dispatch changes to many threads when an UPDATE statement works against many rows. UPDATE statements are consistent when these changes work in a single thread-of-control or across multiple threads with the same results.

As with the INSERT statement, the atomic, consistent, and isolation properties occur during the first phase of any UPDATE statement, and the COMMIT statement is the sole activity of the second phase. Column value changes become durable only with the execution of a COMMIT statement.

ACID Compliant DELETE Statements

A DELETE statement removes rows from a table. Like an UPDATE statement, the absence of a WHERE clause in a DELETE statement deletes all rows in a table. Deleted rows remain visible outside of the transaction scope where they were removed. However, any attempt to UPDATE those deleted rows is held in a pending state until the delete is committed or rolled back.

You delete rows when they’re no longer useful. Deleting rows can be problematic when rows in another table have a dependency on the deleted rows. Consider, for example, a customer table that contains a list of cell phone contacts and an address table that contains the addresses for some but not all of the contacts. If you delete a row from the customer table that still has related rows in the address table, those address table rows are now orphaned and useless.

As a rule, you delete data from the most dependent table to the least dependent table, which is the opposite of the insertion process. Basically, you delete the child record before you delete the parent record. The parent record holds the primary key value, and the child record holds the foreign key value. You drop the foreign key value, which is a copy of the primary key, before you drop the primary key record. For example, you would insert a row in the customer table before you insert a row in the address table, and you delete rows from the address table before you delete rows in the customer table.
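Here’s a minimal sketch of that ordering, assuming the customer and address tables share a customer_id key column:

/* Delete the dependent (child) rows first. */
DELETE FROM address
WHERE  customer_id = 1001;

/* Then, delete the independent (parent) row. */
DELETE FROM customer
WHERE  customer_id = 1001;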

The figure below shows the logic behind a DELETE statement. Like the INSERT and UPDATE statements, the atomic, consistent, and isolation properties of the ACID-compliant transaction are managed during the first phase of a 2PC. The durable property is managed by the COMMIT or ROLLBACK statement.

There’s no discussion or diagram for the MERGE statement because it performs either an INSERT or an UPDATE based on its internal logic. That means a MERGE statement is ACID-compliant in the same way as an INSERT or UPDATE statement.
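Here’s a minimal sketch of that insert-or-update logic, assuming a hypothetical customer table keyed by a customer_id column:

MERGE INTO customer c
USING (SELECT 1001 AS customer_id
       ,      'Leonard McCoy' AS customer_name
       FROM   dual) s
ON (c.customer_id = s.customer_id)
WHEN MATCHED THEN
  UPDATE SET c.customer_name = s.customer_name
WHEN NOT MATCHED THEN
  INSERT ( customer_id, customer_name )
  VALUES ( s.customer_id, s.customer_name );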

Written by maclochlainn

January 1st, 2023 at 8:05 pm