Archive for the ‘sql’ Category
MySQL Transaction Scope
ACID transactions are a basic feature of SQL's individual Data Manipulation Language (DML) commands, like the INSERT, UPDATE, and DELETE statements. Transactions across two or more tables are a natural extension of the ACID compliance provided by those DML commands. However, they require a structured programming approach, like a stored procedure, or an API implemented in an imperative language.
Surprisingly, transaction management wasn't covered well in Alan Beaulieu's Learning SQL because he only provided pseudocode logic. While I thought troubleshooting some broken MySQL SQL/PSM logic would be a good learning experience for students, it wasn't. So, I wrote this sample code to show how to achieve an all-or-nothing transaction across four tables.
The code for this example lets you perform the important tasks necessary to effect transaction management:
- You must disable autocommit in the scope of the session.
- You must use an imperative programming language like SQL/PSM or Python, et cetera.
- You must identify an error thrown in a series of DML statements, roll back all completed work, and exit the scope of the program unit.
This SQL defines the four tables:
/* Drop and create four tables. */
DROP TABLE IF EXISTS one, two, three, four;

CREATE TABLE one   ( id int primary key auto_increment, msg varchar(10));
CREATE TABLE two   ( id int primary key auto_increment, msg varchar(10));
CREATE TABLE three ( id int primary key auto_increment, msg varchar(10));
CREATE TABLE four  ( id int primary key auto_increment, msg varchar(10));
Unfortunately, there's no simple way to manage transactions from the MySQL Command-Line Interface (CLI) because you need to build the logic that manages success and failure. That requires a procedure written in MySQL's SQL/PSM (Persistent Stored Module) or another imperative programming language. You might wonder why you can't just write an anonymous block program, like you can in other stored procedural languages. The answer is simple: you can't write anonymous blocks in MySQL's SQL/PSM because it adheres to ANSI SQL-2003 (or more accurately, ISO/IEC 9075-4:2003).
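To see why, here's a minimal sketch of what the CLI alone gives you: you can open a transaction and roll it back by hand, but there's no way to branch on an error code without procedural logic, which is exactly what the procedure below adds. (The over-long string is just an illustration that triggers a 1406 error under the default strict SQL mode.)

START TRANSACTION;
INSERT INTO one (msg) VALUES ('Donald');
INSERT INTO two (msg) VALUES ('A string that is far too long for the column');
-- The second INSERT fails with error 1406, but nothing automatically undoes
-- the first INSERT; you must notice the error and roll back yourself.
ROLLBACK;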
The following code block:
- Conditionally drops the locking() procedure.
- Resets the default semicolon (;) delimiter to a double dollar sign ($$), which lets you use semicolons as statement and block terminators inside the procedure.
- Declares a locking() procedure that:
  - Sets a parameter list with four IN-mode parameters.
  - Declares an EXIT handler that undoes any writes made before an error in a sequence of DML statements, like the INSERT, UPDATE, or DELETE statements, and then aborts the rest of the procedure. (MySQL 13.6.7.2 Declare … Handler Statement)
  - Disables autocommit in the scope of the session.
  - Starts a transaction context and inserts data into four tables as a single transaction. The EXIT handler takes over when one of the INSERT statements fails with a 1406 error code, which represents data too long for a column's width.
  - Commits the work when all elements of the procedure complete.
/* Conditionally drop procedure. */
DROP PROCEDURE IF EXISTS locking;

/* Set delimiter to $$ to allow ; inside the procedure. */
DELIMITER $$

/* Create a transaction procedure. */
CREATE PROCEDURE locking(IN pv_one   varchar(10)
                        ,IN pv_two   varchar(10)
                        ,IN pv_three varchar(10)
                        ,IN pv_four  varchar(10))
BEGIN
  /* Declare an EXIT handler for when a string is too long
     for a column. Undo all prior writes with a ROLLBACK
     statement. */
  DECLARE EXIT HANDLER FOR 1406
    BEGIN
      ROLLBACK;
    END;

  /* Disable autocommit. */
  SET AUTOCOMMIT=0;

  /* Start transaction scope. */
  START TRANSACTION;

  /* A series of INSERT statements. */
  INSERT INTO one   (msg) VALUES (pv_one);
  INSERT INTO two   (msg) VALUES (pv_two);
  INSERT INTO three (msg) VALUES (pv_three);
  INSERT INTO four  (msg) VALUES (pv_four);

  /* Commit transaction set. */
  COMMIT;
END;
$$

/* Reset delimiter to ; for SQL statements. */
DELIMITER ;
The next block tests the locking() procedure. The first and third calls succeed, but the second one fails because the third parameter is too long for the msg column in the three table. The error triggers the EXIT handler in the locking() procedure.
/* Call locking procedure. */
CALL locking('Donald','Goofy','Mickey','Pluto');
CALL locking('Squirrel','Chipmunk','Monkey business','Raccoon');
CALL locking('Curly','Larry','Moe','Shemp');
The query block below:
/* Query the contents of the four tables. */
SELECT * FROM one;
SELECT * FROM two;
SELECT * FROM three;
SELECT * FROM four;
Returns the following, which shows that only the first and third test cases succeeded:
+----+--------+
| id | msg    |
+----+--------+
|  1 | Donald |
|  2 | Curly  |
+----+--------+
2 rows in set (0.01 sec)

+----+-------+
| id | msg   |
+----+-------+
|  1 | Goofy |
|  2 | Larry |
+----+-------+
2 rows in set (0.00 sec)

+----+--------+
| id | msg    |
+----+--------+
|  1 | Mickey |
|  2 | Moe    |
+----+--------+
2 rows in set (0.00 sec)

+----+-------+
| id | msg   |
+----+-------+
|  1 | Pluto |
|  2 | Shemp |
+----+-------+
2 rows in set (0.00 sec)
As always, I hope this helps those trying to write transactions across multiple tables.
PL/SQL Inheritance Failure
PL/SQL is a great programming language as far as it goes, but it lacks true type inheritance for its collections. While you can create an object type and a subtype, you can't work with collections of those types in the same way. Unlike Java, which has both a class hierarchy and a parallel array class hierarchy, PL/SQL object type inheritance only supports a class hierarchy. Effectively, that means:
- You can pass a subtype as a call parameter, or argument, to a parent data type in a function, procedure, or method signature, but
- You can’t pass a collection of a subtype as a call parameter, or argument, to a collection of parent type in a function, procedure, or method signature.
The limitation occurs because collections have their own data type, which is fixed when you create them. Worse yet, because Oracle has never seen fit to fix its two underlying code trees (23 years and counting since Oracle 8i), you have two types of collections built on two distinct C/C++ libraries. You define collections of Attribute Data Types (ADTs) when you create a collection of a standard scalar data type, like NUMBER, VARCHAR2, or DATE. You define collections of User-Defined Data Types (UDTs) when you create a collection of a SQL UDT or a PL/SQL-only RECORD data type. The former uses one C/C++ library and the latter another.
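As a quick illustration (a minimal sketch; the type names are mine, not part of the later examples), the first declaration below builds an ADT collection over a scalar type, while the second builds a UDT collection over an object type:

/* An ADT collection: a table of a scalar data type. */
CREATE OR REPLACE
  TYPE number_list IS TABLE OF NUMBER;
/

/* A UDT collection: a table of a user-defined object type. */
CREATE OR REPLACE
  TYPE point_t IS OBJECT ( x NUMBER, y NUMBER );
/
CREATE OR REPLACE
  TYPE point_list IS TABLE OF point_t;
/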
Now, Oracle makes the differences between Java and PL/SQL even more complex because it treats collections known as tables (really lists in most programming languages) differently than varrays (arrays). You create a TABLE collection, or list, when you create a table of a scalar or UDT data type. There are two options when you create these collection types (a short sketch follows the list):
- You create an empty collection with a no-element constructor, which means you'll need to allocate memory before assigning element values later in your program.
- You create a populated collection with a comma-delimited list of elements.
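Here's a minimal sketch of the two construction options (the variable names are illustrative only):

DECLARE
  TYPE list IS TABLE OF VARCHAR2(10);
  /* Option 1: an empty constructor; memory is allocated later. */
  lv_empty  LIST := list();
  /* Option 2: a populated, comma-delimited constructor. */
  lv_filled LIST := list('Moe','Larry','Curly');
BEGIN
  /* Allocate space for one element, then assign it. */
  lv_empty.EXTEND;
  lv_empty(1) := 'Shemp';
  dbms_output.put_line(lv_empty(1)||' and '||lv_filled(1));
END;
/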
Both approaches give you a list of elements with a densely populated index. A "densely populated index" is Oracle's jargon for a 1-based sequence of integers without any gaps (e.g., 1, 2, 3, …). The initial construction works the same way whether you create a TABLE or VARRAY collection type. Unfortunately, after you've built the collection, behaviors change. If you use Oracle's Collection API to delete one or more items from a TABLE collection type, you create gaps in the index's sequence of values. That means you must use special logic to navigate across a TABLE collection type to ensure it doesn't fail when encountering a gap in the numeric sequence.
For example, here's a FOR-LOOP without the logic to guarantee an uninterrupted set of sequence values incrementing one element at a time:
1   DECLARE
2     /* Create a local table collection. */
3     TYPE list IS TABLE OF VARCHAR2(10);
4
5     /* Statically allocate memory and assign values to four elements. */
6     lv_list LIST := list('Moe','Shemp','Larry','Curly');
7   BEGIN
8     /* Remove the second element, Shemp, from the collection
9        of variable length strings. */
10    lv_list.DELETE(2);
11
12    /* Loop through the target with a for loop, which depends
13       on densely populated index values. */
14
15    FOR i IN 1..lv_list.COUNT LOOP
16      dbms_output.put_line('['||lv_list(i)||']');
17    END LOOP;
18  END;
19  /
The program fails when it tries to read the second element of the table collection, which was previously removed. It raises the following error message after printing the first element of the table collection:
[Moe]
DECLARE
*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at line 16
Conveniently, Oracle's Collection API provides an EXISTS method that we can use to check for the presence of an index value. Modifying line 16 by wrapping it in an IF statement fixes one problem but identifies another:
15    FOR i IN 1..lv_list.COUNT LOOP
16      IF lv_list.EXISTS(i) THEN
17        dbms_output.put_line('['||lv_list(i)||']');
18      END IF;
19    END LOOP;
The program no longer fails on a missing index value, or index gap, but it returns fewer lines of output than you might expect.
That's because the Oracle Collection API's COUNT method returns the number of elements currently allocated in memory, not the number of original elements. We learn that when we deleted the second element, Oracle deleted the memory allocated for it as well. This is the type of behavior you might expect for a singly linked list. It prints:
[Moe]
[Larry]
One more change is required to count past the gap to the highest index value. On line 15, change the COUNT method call to the LAST method call, which returns the highest index value.
15    FOR i IN 1..lv_list.LAST LOOP
16      IF lv_list.EXISTS(i) THEN
17        dbms_output.put_line('['||lv_list(i)||']');
18      END IF;
19    END LOOP;
It now prints the three stooges we would expect to see:
[Moe]
[Larry]
[Curly]
Realistically, a FOR-LOOP is not the best control structure for a collection. You should use a WHILE-LOOP and treat the incrementing value as an iterator rather than a sequence index value. An iterator doesn't worry about gaps in the sequence; it simply moves to the next element in the singly linked list. Here's an example that uses the iterator approach with a WHILE-LOOP:
1   DECLARE
2     /* Create a local table collection. */
3     TYPE list IS TABLE OF VARCHAR2(10);
4
5     /* Statically allocate memory and assign values to four elements. */
6     lv_list LIST := list('Moe','Shemp','Larry','Curly');
7
8     /* Declare a current index variable. */
9     CURRENT NUMBER;
10  BEGIN
11    /* Remove the second element, Shemp, from the collection
12       of variable length strings. */
13    lv_list.DELETE(2);
14
15    /* Loop through the target with a while loop, which doesn't
16       depend on densely populated index values, by setting the
17       starting index value and incrementing as if with an
18       iterator. */
19
20    CURRENT := lv_list.FIRST;
21    WHILE NOT (CURRENT > lv_list.LAST) LOOP
22      dbms_output.put_line('['||lv_list(CURRENT)||']');
23      CURRENT := lv_list.NEXT(CURRENT);
24    END LOOP;
25  END;
26  /
The iterator approach prints the elements as:
[Moe]
[Larry]
[Curly]
You can reverse the process with the following changes to lines 20-24:
20    CURRENT := lv_list.LAST;
21    WHILE NOT (CURRENT < lv_list.FIRST) LOOP
22      dbms_output.put_line('['||lv_list(CURRENT)||']');
23      CURRENT := lv_list.PRIOR(CURRENT);
24    END LOOP;
It prints the list backwards:
[Curly]
[Larry]
[Moe]
After covering the issues with sparsely populated table collections (those with gaps in their sequence of index values), let's examine how you must work around PL/SQL's lack of a parallel array class hierarchy. The solution lies in combining two programming concepts:
- A function to pack the sparsely populated table collection into a densely populated one, and
- A package with overloaded functions that pack different table collections.
To develop the test case, let's use an ADT collection because it's the simplest to work with. The following creates a table collection of a thirty-character scalar string:
CREATE OR REPLACE
  TYPE list IS TABLE OF VARCHAR2(30);
/
The following pack function takes a table collection of the thirty-character scalar string, checks for missing elements, and packs the existing elements into a densely populated list:
CREATE OR REPLACE FUNCTION pack ( pv_list LIST ) RETURN list IS
  /* Declare a new list. */
  lv_new LIST := list();
BEGIN
  /* Read, check, and pack an old list into a new one. */
  FOR i IN 1..pv_list.LAST LOOP
    IF pv_list.EXISTS(i) THEN
      lv_new.EXTEND;
      lv_new(lv_new.COUNT) := pv_list(i);
    END IF;
  END LOOP;
  RETURN lv_new;
END;
/
This anonymous block tests the pack function:
DECLARE
  /* Declare a list value. */
  lv_test LIST := list('Moe','Shemp','Larry','Curly');
BEGIN
  /* Remove one element in the middle. */
  lv_test.DELETE(2);

  /* Pack the list of elements into a sequence of values. */
  lv_test := pack(lv_test);

  /* Print the list of elements from the packed list. */
  FOR i IN 1..lv_test.COUNT LOOP
    dbms_output.put_line('['||lv_test(i)||']');
  END LOOP;
END;
/
It prints the expected three string values:
[Moe]
[Larry]
[Curly]
Now, let's expand the example to build an overloaded package. The first step requires building a base_t object type and a table collection of the object type, like:
CREATE OR REPLACE
  TYPE base_t IS OBJECT
  ( oid NUMBER )
  INSTANTIABLE NOT FINAL;
/

CREATE OR REPLACE
  TYPE base_list IS TABLE OF base_t;
/
Next, you create a book_t subtype of the base_t object type and a book_list table collection of the book_t subtype, like:
CREATE OR REPLACE
  TYPE book_t UNDER base_t
  ( title VARCHAR2(30)
  , COST  NUMBER);
/

CREATE OR REPLACE
  TYPE book_list IS TABLE OF book_t;
/
We can test the base_t and book_t default constructors with the following SQL*Plus formatting commands and SQL query:
COL oid   FORMAT 999
COL title FORMAT A20
COL COST  FORMAT 99.99
SELECT *
FROM   TABLE(book_list(book_t(1,'Neuromancer',15.30)
                      ,book_t(2,'Count Zero',7.99)
                      ,book_t(3,'Mona Lisa Overdrive',7.99)
                      ,book_t(4,'Burning Chrome',8.89)));
It prints the following output:
 OID TITLE                  COST
---- -------------------- ------
   1 Neuromancer           15.30
   2 Count Zero             7.99
   3 Mona Lisa Overdrive    7.99
   4 Burning Chrome         8.89
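This is also a good point to see the limitation the post's title describes. The following is only a sketch (the counter function is mine, purely illustrative): it accepts the parent base_list collection, and calling it with a book_list collection fails to resolve, typically with a PLS-00306 "wrong number or types of arguments" error, even though book_t is a subtype of base_t.

CREATE OR REPLACE FUNCTION counter ( pv_list BASE_LIST ) RETURN NUMBER IS
BEGIN
  /* Return the number of allocated elements. */
  RETURN pv_list.COUNT;
END;
/

/* Works: a base_list argument matches the base_list parameter. */
SELECT counter(base_list(base_t(1),base_t(2))) AS element_count FROM dual;

/* Fails: a book_list collection is not substitutable for a base_list parameter. */
SELECT counter(book_list(book_t(1,'Neuromancer',15.30))) AS element_count FROM dual;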
The following is an overloaded package specification:
CREATE OR REPLACE PACKAGE packer IS

  /* A simple ADT list of strings. */
  FUNCTION pack ( pv_list LIST ) RETURN list;

  /* A UDT list of base objects. */
  FUNCTION pack ( pv_list BASE_LIST ) RETURN base_list;

  /* A UDT list of subtype objects. */
  FUNCTION pack ( pv_list BOOK_LIST ) RETURN book_list;

END;
/
After you create the package specification, you need to provide the implementation. This is typical in any programming language that uses an Interface Description Language (IDL). A package body provides the implementation for the package specification. The package body follows:
CREATE OR REPLACE PACKAGE BODY packer IS

  /* A simple ADT list of strings. */
  FUNCTION pack ( pv_list LIST ) RETURN list IS
    /* Declare a new list. */
    lv_new LIST := list();
  BEGIN
    /* Read, check, and pack an old list into a new one. */
    FOR i IN 1..pv_list.LAST LOOP
      IF pv_list.EXISTS(i) THEN
        lv_new.EXTEND;
        lv_new(lv_new.COUNT) := pv_list(i);
      END IF;
    END LOOP;
    RETURN lv_new;
  END pack;

  /* A UDT list of base objects. */
  FUNCTION pack ( pv_list BASE_LIST ) RETURN base_list IS
    /* Declare a new list. */
    lv_new BASE_LIST := base_list();
  BEGIN
    /* Read, check, and pack an old list into a new one. */
    FOR i IN 1..pv_list.LAST LOOP
      IF pv_list.EXISTS(i) THEN
        lv_new.EXTEND;
        lv_new(lv_new.COUNT) := pv_list(i);
      END IF;
    END LOOP;
    RETURN lv_new;
  END pack;

  /* A UDT list of subtype objects. */
  FUNCTION pack ( pv_list BOOK_LIST ) RETURN book_list IS
    /* Declare a new list. */
    lv_new BOOK_LIST := book_list();
  BEGIN
    /* Read, check, and pack an old list into a new one. */
    FOR i IN 1..pv_list.LAST LOOP
      IF pv_list.EXISTS(i) THEN
        lv_new.EXTEND;
        lv_new(lv_new.COUNT) := pv_list(i);
      END IF;
    END LOOP;
    RETURN lv_new;
  END pack;

END packer;
/
The test case for the base_list collection type is:
DECLARE
  lv_test BASE_LIST := base_list(base_t(1),base_t(2)
                                ,base_t(3),base_t(4));
BEGIN
  /* Remove one element in the middle. */
  lv_test.DELETE(2);

  /* Pack the list of elements into a sequence of values. */
  lv_test := packer.pack(lv_test);

  /* Print the list of elements from the packed list. */
  FOR i IN 1..lv_test.LAST LOOP
    dbms_output.put_line('['||lv_test(i).oid||']');
  END LOOP;
END;
/
It prints the following output:
[1]
[3]
[4]
The test case for the book_list collection type is:
DECLARE
  lv_test BOOK_LIST := book_list(book_t(1,'Neuromancer',15.30)
                                ,book_t(2,'Count Zero',7.99)
                                ,book_t(3,'Mona Lisa Overdrive',7.99)
                                ,book_t(4,'Burning Chrome',8.89));
BEGIN
  /* Remove one element in the middle. */
  lv_test.DELETE(2);

  /* Pack the list of elements into a sequence of values. */
  lv_test := packer.pack(lv_test);

  /* Print the list of elements from the packed list. */
  FOR i IN 1..lv_test.LAST LOOP
    dbms_output.put_line(
       '['||lv_test(i).oid||']'
     ||'['||lv_test(i).title||']'
     ||'['||lv_test(i).COST||']');
  END LOOP;
END;
/
It prints the following output:
[1][Neuromancer][15.3]
[3][Mona Lisa Overdrive][7.99]
[4][Burning Chrome][8.89]
In conclusion, you wouldn't have to write overloaded functions for every collection type if PL/SQL supported a parallel array class hierarchy alongside its class hierarchy, like Java does. Unfortunately, it doesn't and likely won't in the future. With the code above, you can pack table collections as a safety measure whenever they're passed as parameters to other functions, procedures, or methods.
As always, I hope this helps those looking for a solution.
PL/SQL Coupled Loops
The purpose of this example is to show you how to navigate a list with a sparsely populated index, which can occur when an element has been removed after the list was initialized. Unlike Oracle's VARRAY (array), removing an element from a TABLE collection, or list, does not re-index the remaining elements of the list.
This example also shows you how to couple two loops. The outer loop increments, notwithstanding the gap in index values, while the inner loop decrements. The upper range of the inner loop is set by the index value of the outer loop.
The example program uses an abbreviated version of the Twelve Days of Christmas, and I’ve tried to put teaching notes throughout the example file.
DECLARE /* Create a single column collection that is a list strings less than 8 characters in length and another of strings less than 20 characters in length. */ TYPE DAY IS TABLE OF VARCHAR2(8); TYPE verse IS TABLE OF VARCHAR2(20); /* Create variables that use the user-defined types: || ================================================= || 1. We give the variable a name of lv_day and lv_verse. || 2. We assign a user-defined ADT (Attribute Data Type) collection. || 3. We assign a list of value to the constructor of the list, which || allocates memory for each item in the comma-delimited list of || string. */ lv_day DAY := DAY('first','second','third','fourth','fifth'); lv_verse VERSE := verse('Partridge','Turtle Doves','French Hen' ,'Calling Birds','Gold Rings'); BEGIN /* || Remove an element from each of the two lists, which makes the two || lists sparsely indexed. A sparsely indexed list has gaps in the || sequential index of the list. */ lv_day.DELETE(3); /* || Loop through the list of days: || ===================================================j || 1. A list created by a comma-delimited list is densely populated, || which means it has no gaps in the sequence of indexes. || 2. A list created by any means that is subsequently accessed || and has one or more items removed is sparsely populated, || which means it may have gaps in the sequence of indexes. || 3. A FOR loop anticipates densely populated indexes and fails || when trying to read a missing index, which is why you should || use an IF statement to check for the element of a list before || accessing it. || 4. A COUNT method returns the number of elements allocated memory || in a list of values and the LAST method returns the highest || index value. The index value is alway an integer for user-defined || ADT (Attribute Data Type) collections, but may be a string for || an associative array or a PL/SQL list indexed by a string. || 5. Removing an element from a list does not change the other || index values but does if you create an array (or varray), which || means COUNT OR LAST may cause the same type of error for a list || with a missing element. */ FOR i IN 1..lv_day.LAST LOOP /* || Verify the index is valid. || ==================================================== || You check whether the element is present in the || list. */ IF lv_day.EXISTS(i) THEN /* Print the beginning of the stanza. */ dbms_output.put_line('On the ['||lv_day(i)||'] of Christmas ...'); /* Print the song. */ FOR j IN REVERSE 1..i LOOP /* Check if the day exists. */ IF lv_verse.EXISTS(j) THEN /* All but first and last verses. */ IF j > 1 THEN dbms_output.put_line('- ['||lv_verse(j)||']'); /* The last verse. */ ELSIF i = j THEN dbms_output.put_line('- A ['||lv_verse(j)||']'||CHR(10)); /* Last verse. */ ELSE dbms_output.put_line('and a ['||lv_verse(j)||']'||CHR(10)); END IF; END IF; END LOOP; ELSE CONTINUE; END IF; END LOOP; END; / |
As always, I hope it helps you solve problems in the real world.
MySQL Join Tutorial
Some believe the most important part of SQL is the ability to query data. Queries typically retrieve data by joining many tables together into useful result sets. This tutorial takes the position that visibility into the data helps those new to SQL understand how joins work. To that end, the queries use Common Table Expressions (CTEs) instead of tables.
The default behavior of a JOIN without a qualifying keyword is not simple because it may return:
- A CROSS JOIN (or Cartesian product) when there is no ON or USING subclause, or
- An INNER JOIN when you use an ON or USING subclause.
The following query uses JOIN without a qualifier and without an ON or USING subclause. It also uses two copies of a single CTE, which is more or less a derived table: the result of a subquery held in memory. This demonstrates the key reason for table aliases: you can put two copies of the same table in memory under different identifiers, or labels.
WITH alpha AS
 (SELECT 'A' AS letter, 130 AS amount
  UNION
  SELECT 'B' AS letter, 150 AS amount
  UNION
  SELECT 'C' AS letter, 321 AS amount)
SELECT * FROM alpha a JOIN alpha b;
It returns a Cartesian product:
+--------+--------+--------+--------+
| letter | amount | letter | amount |
+--------+--------+--------+--------+
| A      |    130 | A      |    130 |
| B      |    150 | A      |    130 |
| C      |    321 | A      |    130 |
| A      |    130 | B      |    150 |
| B      |    150 | B      |    150 |
| C      |    321 | B      |    150 |
| A      |    130 | C      |    321 |
| B      |    150 | C      |    321 |
| C      |    321 | C      |    321 |
+--------+--------+--------+--------+
9 rows in set (0.00 sec)
By adding an ON clause to line 8, the default JOIN keyword returns an INNER JOIN result.
1  WITH alpha AS
2   (SELECT 'A' AS letter, 130 AS amount
3    UNION
4    SELECT 'B' AS letter, 150 AS amount
5    UNION
6    SELECT 'C' AS letter, 321 AS amount)
7  SELECT * FROM alpha a JOIN alpha b
8  ON a.letter = b.letter;
It displays results, like:
+--------+--------+--------+--------+
| letter | amount | letter | amount |
+--------+--------+--------+--------+
| A      |    130 | A      |    130 |
| B      |    150 | B      |    150 |
| C      |    321 | C      |    321 |
+--------+--------+--------+--------+
3 rows in set (0.00 sec)
The next example uses two CTEs. One uses the letters 'A', 'B', 'C', and 'D' and the other uses the letters 'A', 'B', 'C', and 'E'. The letter 'D' only exists in the alpha derived table and the letter 'E' only exists in the beta derived table. The amount column values differ for their respective letters in the two CTEs.
The basic query below the comma-delimited CTEs joins the alpha and beta derived tables with an INNER JOIN, using an ON clause based on the letter column values found in both the alpha and beta CTEs.
1   WITH alpha AS
2    (SELECT 'A' AS letter, 130 AS amount
3     UNION
4     SELECT 'B' AS letter, 150 AS amount
5     UNION
6     SELECT 'C' AS letter, 321 AS amount
7     UNION
8     SELECT 'D' AS letter, 783 AS amount)
9   , beta AS
10   (SELECT 'A' AS letter, 387 AS amount
11    UNION
12    SELECT 'B' AS letter, 268 AS amount
13    UNION
14    SELECT 'C' AS letter, 532 AS amount
15    UNION
16    SELECT 'E' AS letter, 391 AS amount)
17  SELECT * FROM alpha a INNER JOIN beta b
18  ON a.letter = b.letter;
The INNER JOIN returns only those rows in the alpha and beta CTEs where the letter column values match:
+--------+--------+--------+--------+
| letter | amount | letter | amount |
+--------+--------+--------+--------+
| A      |    130 | A      |    387 |
| B      |    150 | B      |    268 |
| C      |    321 | C      |    532 |
+--------+--------+--------+--------+
3 rows in set (0.01 sec)
If you change line 17 from an INNER JOIN to a LEFT JOIN, you return all the rows from the alpha CTE and only those rows from the beta CTE that have a matching letter column value. The new line 17 for a LEFT JOIN is:
17  SELECT * FROM alpha a LEFT JOIN beta b
It returns the three matching rows plus the one non-matching row from the alpha CTE, which is on the left side of the LEFT JOIN operator. You should note that a left outer join puts null values into the beta CTE's columns where there is no matching row for the 'D' letter found in the alpha CTE.
The results are shown below:
+--------+--------+--------+--------+
| letter | amount | letter | amount |
+--------+--------+--------+--------+
| A      |    130 | A      |    387 |
| B      |    150 | B      |    268 |
| C      |    321 | C      |    532 |
| D      |    783 | NULL   |   NULL |
+--------+--------+--------+--------+
4 rows in set (0.01 sec)
If you change line 17 from a LEFT JOIN to a RIGHT JOIN, you return all the rows from the beta CTE and only those rows from the alpha CTE that have a matching letter column value. The new line 17 for a RIGHT JOIN is:
17  SELECT * FROM alpha a RIGHT JOIN beta b
It returns the following result set:
+--------+--------+--------+--------+
| letter | amount | letter | amount |
+--------+--------+--------+--------+
| A      |    130 | A      |    387 |
| B      |    150 | B      |    268 |
| C      |    321 | C      |    532 |
| NULL   |   NULL | E      |    391 |
+--------+--------+--------+--------+
4 rows in set (0.00 sec)
MySQL does not support a FULL JOIN operation, but you can mimic a full join by combining a LEFT JOIN and a RIGHT JOIN with the UNION operator. The UNION operator performs a unique sort operation, which reduces the two copies of the matching rows returned by the left and right join operations to a unique set.
This is the way to write the equivalent of a full join:
WITH alpha AS
 (SELECT 'A' AS letter, 130 AS amount
  UNION
  SELECT 'B' AS letter, 150 AS amount
  UNION
  SELECT 'C' AS letter, 321 AS amount
  UNION
  SELECT 'D' AS letter, 783 AS amount)
, beta AS
 (SELECT 'A' AS letter, 387 AS amount
  UNION
  SELECT 'B' AS letter, 268 AS amount
  UNION
  SELECT 'C' AS letter, 532 AS amount
  UNION
  SELECT 'E' AS letter, 391 AS amount)
SELECT * FROM alpha LEFT JOIN beta
ON alpha.letter = beta.letter
UNION
SELECT * FROM alpha RIGHT JOIN beta
ON alpha.letter = beta.letter;
It returns one copy of the matching rows, and the non-matching rows from both the alpha and beta CTEs:
+--------+--------+--------+--------+
| letter | amount | letter | amount |
+--------+--------+--------+--------+
| A      |    130 | A      |    387 |
| B      |    150 | B      |    268 |
| C      |    321 | C      |    532 |
| D      |    783 | NULL   |   NULL |
| NULL   |   NULL | E      |    391 |
+--------+--------+--------+--------+
5 rows in set (0.00 sec)
A NATURAL JOIN of these CTEs would return no rows because it works by implicitly discovering the columns with matching names in both CTEs and then joining on all of them. While the letter column matches rows between the CTEs, the amount column doesn't hold any matches. The combination of the letter and amount columns must match for the NATURAL JOIN operation to return any rows.
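Here's a sketch of that behavior using shortened versions of the same two CTEs; it should return an empty set because no row has identical letter and amount values in both CTEs:

WITH alpha AS
 (SELECT 'A' AS letter, 130 AS amount
  UNION
  SELECT 'B' AS letter, 150 AS amount)
, beta AS
 (SELECT 'A' AS letter, 387 AS amount
  UNION
  SELECT 'B' AS letter, 268 AS amount)
SELECT * FROM alpha NATURAL JOIN beta;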
You also have the ability to override the cost-based optimizer and force a left-to-right join order by using the STRAIGHT_JOIN keyword. As always, I hope this helps those looking for a solution with an explanation.
MySQL Membership
MySQL membership conditions are covered in the MySQL 8 documentation, in section 13.2.11.3, Subqueries with ANY, IN, or SOME. The IN and =ANY operators both perform equality matches, with one twist: the IN operator works with a set of values or a subquery, but the =ANY operator only works with a subquery.
I created the numbers, letters, and words tables for this example. They hold the following values, respectively:
- The numbers table holds the values 1, 2, 3, 4, and 5
- The letters table holds the values 'a', 'b', 'c', 'd', and 'e'
- The words table holds the values 'Captain America', 'Iron Man', 'Thor', and 'Ant-Man'
The following examples show the IN and =ANY membership comparison operators.
The IN membership operator: the first example shows you how to use the IN operator with a set of values and the second example shows you how to use the IN operator with a subquery:
SELECT 'True Statement' AS result
WHERE  'a' IN ('a','b','c','d');
SELECT 'True Statement' AS result
WHERE  'a' IN (SELECT letter FROM letters);
The previous evaluations are case-insensitive membership comparisons against a set of values and against the return set from a subquery. You can make a case-sensitive membership comparison by putting the BINARY keyword before one of the strings in the comparison. The BINARY keyword converts the string to a binary string and forces a binary string comparison of the two strings (MySQL 8.0 Reference Manual: 12.8.1, String Comparison Operators and Functions).
SELECT 'True Statement' AS result
WHERE  BINARY 'a' IN (SELECT letter FROM letters);
The =ANY membership operator raises an error when you try to use it with a set of values:
SELECT 'True Statement' AS result
WHERE  'a' =ANY ('a','b','c','d');
It returns the following error because the ANY, SOME, and ALL membership operators only work with subqueries; they all fail when you try to use them with sets:
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''a','b','c','d')' at line 2 |
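By contrast, here's a sketch of =ANY used the way it's intended, with a subquery; it should return 'True Statement' because 'a' matches a row in the letters table:

SELECT 'True Statement' AS result
WHERE  'a' =ANY (SELECT letter FROM letters);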
The following comparison operators work with the ANY, SOME, and ALL membership operators:
- >
- >=
- <
- <=
- <>
- !=
While the ANY and SOME membership operators work alike, the ALL operator works differently. The ANY and SOME membership operators act like an OR logical operator across the subquery's rows in the WHERE clause, while ALL acts like an AND.
For example, the following <SOME comparison (SOME is a synonym for ANY) works because the left operand is smaller than the largest element returned by the subquery.
SELECT 'True Statement' AS result
WHERE  2 <SOME (SELECT digit FROM numbers);
A <=ANY or <=SOME comparison still works when the left operand is equal to the highest digit in the subquery's result set (a sketch follows the next example). The <ALL comparison only works when the left operand is lower than the smallest element returned by the subquery.
SELECT 'True Statement' AS result
WHERE  0 <ALL (SELECT digit FROM numbers);
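And here's the <=ANY sketch promised above; with the numbers table holding 1 through 5, the comparison should succeed because the left operand equals the largest element:

SELECT 'True Statement' AS result
WHERE  5 <=ANY (SELECT digit FROM numbers);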
Here's a set of SQL commands to set up the test cases for the membership operators:
DROP TABLE IF EXISTS letters;
CREATE TABLE letters (letter VARCHAR(10));

DROP TABLE IF EXISTS numbers;
CREATE TABLE numbers (digit int);

DROP TABLE IF EXISTS words;
CREATE TABLE words (word VARCHAR(20));

INSERT INTO letters VALUES ('a'),('b'),('c'),('d'),('e');
INSERT INTO numbers VALUES (1),(2),(3),(4),(5);
INSERT INTO words VALUES ('Captain America'),('Thor'),('Iron Man'),('Ant-Man');
As always, I hope this helps those looking for more examples and good solutions.
MySQL macOS Docker
While you can download MySQL as a DMG package, a number of users would prefer to install it as a Docker instance. You won’t find the macOS downloads on the same web site as other downloads. You can use the following macOS download site.
After installing Docker on your macOS, you can pull a copy of the current MySQL Server with the following command:
docker pull mysql/mysql-server |
You should create a mysql directory inside your ~/Documents directory with this command:
mkdir ~/Documents/mysql |
Then, you should use the cd command to change into the ~/Documents/mysql directory and run this command:
pwd |
It should return the following directory:
/Users/<user_name>/Documents/mysql |
Use the /Users/<user_name>/Documents/mysql path as the <path_to_folder> value in this command:
docker run --name=mysql1 --volume=<path_to_folder>:/var/lib/mysql -p 33060:3306/tcp -d mysql/mysql-server |
The --name option value is mysql1, and it becomes the container name. Docker mounts the volume in the ~/Documents/mysql folder. All data from the Docker container under the /var/lib/mysql directory will persist in this directory, so the directory will still contain the database when the container is shut down.
The docker run command maps the localhost's 33060 port to the 3306 port on the Docker container, so you will use the 33060 port to connect to the Docker instance of MySQL. macOS raises a dialog box asking for permission to access the directory; you need to allow Docker to write to the ~/Documents/mysql directory.
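If you also install the mysql client on your host macOS, a connection from the host side would look something like the sketch below; it assumes the client is on your PATH and that you connect as a user allowed to connect remotely, like the student user created later in this post:

mysql -h 127.0.0.1 -P 33060 -ustudent -p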
You can verify that the Docker container is running with the following command:
docker ps |
It should return:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 142b5c491cd8 mysql/mysql-server "/entrypoint.sh mysq…" 7 minutes ago Up 6 minutes (healthy) 33060-33061/tcp, 0.0.0.0:33060->3306/tcp mysql1 |
You can get the MySQL generated root password with this Docker command:
docker logs mysql1 2>&1 | grep GENERATED |
It returns something like the following:
[Entrypoint] GENERATED ROOT PASSWORD: vop#3GNYqK3nC@S@N3haf3nox5E |
Use the following Docker command to connect to the Docker container:
docker exec -it mysql1 /bin/bash |
It launches a Bash shell inside the Docker container:
bash-4.2# |
Start the mysql Command-Line Interface (CLI):
mysql -uroot -p |
You are then prompted for a password:
Enter password: |
After successfully entering the password, you’ll see the following:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 55
Server version: 8.0.22

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
Unless you want to remember that hugely complex root password, you should consider changing it to something simple like 'cangetin' with the following command:
ALTER USER 'root'@'localhost' IDENTIFIED WITH MYSQL_NATIVE_PASSWORD BY 'cangetin'; |
Next, you should check for the installed databases with this command:
show databases; |
It will return:
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.01 sec)
After exiting mysql, you can see the contents of the root user's directory with this list command:
ls -al |
It should return:
total 84 drwxr-xr-x 1 root root 4096 Jan 12 03:41 . drwxr-xr-x 1 root root 4096 Jan 12 03:41 .. -rwxr-xr-x 1 root root 0 Jan 12 03:41 .dockerenv lrwxrwxrwx 1 root root 7 Oct 12 22:06 bin -> usr/bin dr-xr-xr-x 2 root root 4096 Apr 11 2018 boot drwxr-xr-x 5 root root 340 Jan 12 03:41 dev drwxr-xr-x 2 root root 4096 Oct 19 05:47 docker-entrypoint-initdb.d -rwxr-xr-x 1 root root 7496 Oct 19 05:37 entrypoint.sh drwxr-xr-x 1 root root 4096 Jan 12 03:41 etc -rw-r--r-- 1 root root 86 Jan 12 03:41 healthcheck.cnf -rwxr-xr-x 1 root root 1073 Oct 19 05:37 healthcheck.sh drwxr-xr-x 2 root root 4096 Apr 11 2018 home lrwxrwxrwx 1 root root 7 Oct 12 22:06 lib -> usr/lib lrwxrwxrwx 1 root root 9 Oct 12 22:06 lib64 -> usr/lib64 drwxr-xr-x 2 root root 4096 Apr 11 2018 media drwxr-xr-x 2 root root 4096 Apr 11 2018 mnt -rw-r--r-- 1 root root 0 Jan 12 03:41 mysql-init-complete drwxr-xr-x 2 root root 4096 Apr 11 2018 opt dr-xr-xr-x 127 root root 0 Jan 12 03:41 proc dr-xr-x--- 1 root root 4096 Jan 12 04:21 root drwxr-xr-x 1 root root 4096 Oct 19 05:47 run lrwxrwxrwx 1 root root 8 Oct 12 22:06 sbin -> usr/sbin drwxr-xr-x 2 root root 4096 Apr 11 2018 srv dr-xr-xr-x 13 root root 0 Jan 12 03:41 sys drwxrwxrwt 1 root root 4096 Jan 12 03:41 tmp drwxr-xr-x 1 root root 4096 Oct 12 22:06 usr drwxr-xr-x 1 root root 4096 Oct 12 22:06 var |
At this point, you have to make a choice about how you will access the MySQL database. You have a couple options:
- Create an individual student user that can access the MySQL Server as a micro-service, which would only be a MySQL user connecting through MySQL Workbench. At least, that's the only connection option unless you likewise install the mysql client on your host macOS. The mysql client lets you connect from the host operating system through the Command-Line Interface (CLI).
- Create a local student user account inside the Docker container. It will have access to the container file system and mimic the behavior of a non-root user on a server.
Let's create both for this demonstration. Reconnect as the root user and issue the following two commands:
CREATE USER 'student'@'localhost' IDENTIFIED WITH MYSQL_NATIVE_PASSWORD BY 'student';
CREATE USER 'student'@'%.%.%.%'   IDENTIFIED WITH MYSQL_NATIVE_PASSWORD BY 'student';
The first version of the student user lets you access the database from inside the Docker container. The second version of the student user lets you access the database from MySQL Workbench deployed on your base macOS.
You can add a sakila database and grant all privileges on it to the student user with the following commands as the root user:
CREATE DATABASE sakila;
GRANT ALL ON sakila.* TO 'student'@'localhost';
GRANT ALL ON sakila.* TO 'student'@'%.%.%.%';
You need to get the sakila database from the Internet within the Docker container. Exit the mysql client with the following command:
quit; |
As the root user, install the wget and tar Linux utilities with this command:
yum install -y wget tar |
As the student user, you can use the wget command to grab a copy of the sakila database and store it locally. Use the cd command to get to your ${HOME} directory, like:
cd |
Use this syntax to get a copy of the sakila database:
wget http://downloads.mysql.com/docs/sakila-db.tar.gz |
Use the ls command to verify the download, then run the following set of Linux commands from the Linux CLI:
tar -xzf sakila-db.tar.gz
cd sakila-db
Run the following two commands from the sakila-db directory:
mysql -ustudent -p < sakila-schema.sql
mysql -ustudent -p < sakila-data.sql
Or, you can connect to the MySQL client as the student user and run them there:
source sakila-schema.sql
source sakila-data.sql
You create the non-root student user for the Docker container from the macOS host operating system, which means you need to quit the mysql client (quit;) and exit the root user's session (exit) with the Docker container.
At the terminal in your macOS, issue the following Docker command to create a student account in the mysql1 container:
docker exec mysql1 bash -c "useradd -u 501 -g mysql -G users \
  -d /home/student -s /bin/bash -c 'Student' -n student"
Now, you can connect as the student user to the mysql1 container with the following Docker command:
docker exec -it --user student mysql1 bash |
The first time you connect, you will be at the / (root) directory. Use the following cd command to go to the student user's home directory:
cd |
Then, type the following command to make the student user's home directory the default. You need this command because vim isn't installed in the default Docker container, so you can't interactively edit files; instead, it appends the necessary Bash shell command to the end of the .bashrc file.
echo 'cd ${HOME}' >> .bashrc |
With this change, the student user will always start from its home directory the next time you connect to the mysql1 container. You can use scp to move files into the student user's home (/home/student) directory. However, you can create a quick test.sql file like this:
echo "select user();" > test.sql |
Connect to the mysql CLI as the student user:
mysql -ustudent -p |
Call your test.sql file from the Linux CLI, like:
mysql -ustudent -p < test.sql |
Or, you can run the test.sql program as follows from the MySQL command line:
source test.sql |
It will return:
+-------------------+
| user()            |
+-------------------+
| student@localhost |
+-------------------+
1 row in set (0.00 sec)
That's the basic setup of the MySQL Docker container on macOS. You can do much more once you've configured it like this. For example, you can add vim to your library repository as the root user with the following command:
yum install -y vim |
It just takes a minute or a bit more. Adding vim opens up so much flexibility for you inside the Docker container that it's impossible for me to resist. 😉
PL/pgSQL OUT Mode
A friend asked me a question about using an OUT mode parameter versus INOUT mode parameters in functions. He had formed the opinion that they didn't work in PostgreSQL's PL/pgSQL.
Unfortunately, there's not a lot of material written about how to use the OUT mode parameter in functions. I thought an article showing the standard example, with a call to the function, might help. The standard example function from the PostgreSQL documentation is:
CREATE OR REPLACE FUNCTION sales_tax( IN  amount real
                                    , OUT tax    real )
AS $$
BEGIN
  /* Calculate the tax at 6%. */
  tax := amount * 0.06;
END;
$$ LANGUAGE plpgsql;
The RETURNS clause is optional, but here's how you can include it. The following example works exactly like the former:
CREATE OR REPLACE FUNCTION sales_tax( IN  amount real
                                    , OUT tax    real )
RETURNS real AS $$
BEGIN
  /* Calculate the tax at 6%. */
  tax := amount * 0.06;
END;
$$ LANGUAGE plpgsql;
You call the PostgreSQL sales_tax() function like this:
SELECT 'Sales Tax ['|| sales_tax(200) ||']' AS "Return Value"; |
It should return the following:
  Return Value
----------------
 Sales Tax [12]
(1 row)
You can also call it in an inline code block (e.g., what Oracle documentation calls an anonymous block), like:
DO $$
DECLARE
  /* Declare a local variable. */
  tax_paid real := 0;
BEGIN
  /* Query the results into a local variable. */
  SELECT sales_tax(100) INTO tax_paid;

  /* Print the taxes paid. */
  RAISE NOTICE 'Tax Paid [%]', tax_paid;
END;
$$;
You can replace the sales_tax function with its OUT mode tax parameter with the following classic sales_tax function, which adds a tax_rate parameter.
CREATE OR REPLACE FUNCTION sales_tax( IN amount   real
                                    , IN tax_rate real )
RETURNS real AS $$
DECLARE
  /* Declare a local variable. */
  tax real;
BEGIN
  /* Calculate the tax at the given rate. */
  tax := amount * (tax_rate / 100);

  /* Return the tax. */
  RETURN tax;
END;
$$ LANGUAGE plpgsql;
Let's return to the original approach with the OUT parameter. Then, let's expand the list of parameters to include an INOUT mode state variable, like:
CREATE OR REPLACE FUNCTION sales_tax( IN    amount   real
                                    , IN    tax_rate real
                                    , INOUT state    VARCHAR(14)
                                    , OUT   tax      real )
RETURNS RECORD AS $$
BEGIN
  /* Calculate the tax at the given rate. */
  tax := amount * (tax_rate / 100);
END;
$$ LANGUAGE plpgsql;
You don't need to include the RETURNS RECORD phrase because PL/pgSQL implements a very mature adapter pattern and adjusts the return type automatically to the parameter list. On the other hand, many beginning programmers and support staff won't know that, which is why I recommend you include it for clarity.
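For completeness, here's the same function with the optional RETURNS RECORD phrase omitted; this is only a sketch, but because the OUT and INOUT parameters already imply the record return type, it should behave identically:

CREATE OR REPLACE FUNCTION sales_tax( IN    amount   real
                                    , IN    tax_rate real
                                    , INOUT state    VARCHAR(14)
                                    , OUT   tax      real )
AS $$
BEGIN
  /* Calculate the tax at the given rate. */
  tax := amount * (tax_rate / 100);
END;
$$ LANGUAGE plpgsql;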
You can call this in a query with a column alias, like:
SELECT 'Sales Tax ['|| sales_tax(100,8.25,'California') ||']' AS "Return Value"; |
It will return a tuple:
         Return Value
-------------------------------
 Sales Tax [(California,8.25)]
(1 row)
You can implement it inside an inline block by adding a local variable of the RECORD data type, like:
DO $$
DECLARE
  /* Declare local variables. */
  state    varchar(14) := 'California';
  tax_paid real        := 0;
  result   RECORD;
BEGIN
  /* Query the results into a local variable. */
  SELECT sales_tax(100,8.25,state) INTO result;

  /* Print the taxes paid. */
  RAISE NOTICE 'Tax Paid [%]', result;
END;
$$;
It prints the following:
NOTICE: Tax Paid [("(California,8.25)")] |
You can actually return the individual members of the tuple by putting the function call inside the FROM clause, like:
SELECT * FROM sales_tax(100,8.25,'California'); |
It now returns the members of the tuple in separate columns:
   state    | tax
------------+------
 California | 8.25
(1 row)
Alternatively, you can call it from inside an inline block, like:
DO $$
DECLARE
  /* Declare local variables. */
  state    varchar(14) := 'California';
  tax_paid real        := 0;
BEGIN
  /* Query the results into local variables. */
  SELECT * INTO state, tax_paid
  FROM   sales_tax(100,8.25,state);

  /* Print the taxes paid. */
  RAISE NOTICE 'Tax Paid [%] [%]', state, tax_paid;
END;
$$;
It prints:
NOTICE: Tax Paid [California] [8.25] |
Now, let's rewrite the function into a traditional function with all IN mode variables that returns a RECORD structure with additional values. Just one quick caveat (the big but): you can only assign values to a dynamically constructed RECORD structure by using the SELECT-INTO or FOR statements. Below is the refactored sales_tax() function:
1   CREATE OR REPLACE FUNCTION sales_tax( IN subtotal REAL
2                                       , IN tax_rate REAL
3                                       , IN state    VARCHAR(14))
4   RETURNS RECORD AS
5   $$
6   DECLARE
7     /* Declare local variables. */
8     tax        REAL;
9     tax_record RECORD;
10  BEGIN
11    /* Calculate the tax at the given rate. */
12    tax := subtotal * (tax_rate / 100);
13
14    /* Assign state and tax to the record. */
15
16    SELECT state, tax INTO tax_record;
17
18    /* Return the record. */
19    RETURN tax_record;
20  END;
21  $$ LANGUAGE plpgsql;
It returns the same set of values as the earlier four-parameter version, but you only need three IN-only mode variables to get the result. Other than the parameter lists, the biggest change is the assignment line, which is required in the explicit, traditional function that has only IN mode parameters:
16    SELECT state, tax INTO tax_record;
Given that you can return any RECORD structure you want, why use INOUT and OUT mode parameters? Don't you lose clarity about what your stored function does? Or, at least, don't you make understanding the program logic more difficult when you use INOUT and OUT mode variables? The only benefit appears to come when you shift your variables from the SELECT clause to the INTO clause.
Hopefully, this shows folks how to use the OUT mode parameter and how closely related it is to a classic function.
MySQL Self-Join
I'm switching to MySQL and leveraging Alan Beaulieu's Learning SQL as a supporting reference for my Database Design and Development course. While reviewing Alan's Chapter 5: Querying Multiple Tables, I found his coverage of self-joins minimal.
In fact, he adds a prequel_film_id column to the film table in the sakila database and then a single row to demonstrate a minimal self-join query. I wanted to show the students how to view a series of rows interconnected by a self-join, like the following:
SELECT   f.title AS film
,        fp.title AS prequel
FROM     film f INNER JOIN film fp
ON       f.prequel_id = fp.film_id
WHERE    f.series_name = 'Harry Potter'
ORDER BY f.series_number;
It returns the following result set:
+----------------------------------------------+----------------------------------------------+ | film | prequel | +----------------------------------------------+----------------------------------------------+ | Harry Potter and the Chamber of Secrets | Harry Potter and the Sorcerer's Stone | | Harry Potter and the Prisoner of Azkaban | Harry Potter and the Chamber of Secrets | | Harry Potter and the Goblet of Fire | Harry Potter and the Prisoner of Azkaban | | Harry Potter and the Order of the Phoenix | Harry Potter and the Goblet of Fire | | Harry Potter and the Half Blood Prince | Harry Potter and the Order of the Phoenix | | Harry Potter and the Deathly Hallows: Part 1 | Harry Potter and the Half Blood Prince | | Harry Potter and the Deathly Hallows: Part 2 | Harry Potter and the Deathly Hallows: Part 1 | +----------------------------------------------+----------------------------------------------+ 7 rows in set (0.00 sec) |
Then, I thought about what questions the students might ask. For example, why doesn't the query return the first film, which doesn't have a prequel? So, I took the self-join to the next level to display the first film with no prequel, like this:
SELECT   f.title AS film
,        IFNULL(
           CASE
             WHEN NOT f.film_id = fp.film_id
             AND      f.prequel_id = fp.film_id THEN fp.title
           END,'None') AS prequel
FROM     film f LEFT JOIN film fp
ON       f.prequel_id = fp.film_id
WHERE    f.series_name = 'Harry Potter'
ORDER BY f.series_number;
The CASE operator in the SELECT list filters the result set by eliminating rows erroneously returned. Without the CASE filter, the query would return the original Harry Potter and the Sorcerer's Stone film matched against a NULL and all of the other sequels. The CASE operator effectively limits the result set for the LEFT JOIN to only the following data:
+----------------------------------------------+----------------------------------------------+
| film                                         | prequel                                      |
+----------------------------------------------+----------------------------------------------+
| Harry Potter and the Sorcerer's Stone        | NULL                                         |
+----------------------------------------------+----------------------------------------------+
The IFNULL() built-in function lets you replace the NULL value returned as the prequel's title value; it substitutes the 'None' string literal for the NULL value. The query returns the following result set:
+----------------------------------------------+----------------------------------------------+ | film | prequel | +----------------------------------------------+----------------------------------------------+ | Harry Potter and the Sorcerer's Stone | None | | Harry Potter and the Chamber of Secrets | Harry Potter and the Sorcerer's Stone | | Harry Potter and the Prisoner of Azkaban | Harry Potter and the Chamber of Secrets | | Harry Potter and the Goblet of Fire | Harry Potter and the Prisoner of Azkaban | | Harry Potter and the Order of the Phoenix | Harry Potter and the Goblet of Fire | | Harry Potter and the Half Blood Prince | Harry Potter and the Order of the Phoenix | | Harry Potter and the Deathly Hallows: Part 1 | Harry Potter and the Half Blood Prince | | Harry Potter and the Deathly Hallows: Part 2 | Harry Potter and the Deathly Hallows: Part 1 | +----------------------------------------------+----------------------------------------------+ 8 rows in set (0.01 sec) |
Alan's modification of the sakila.film table had the following two related design flaws:
- It didn't provide a way to guarantee the ordering of films with prequels, because relational databases don't guarantee ordered result sets unless you use an ORDER BY clause, which typically requires a column to order by.
- It didn't provide a way to isolate a series of films.
I modified the film table differently by adding the series_name, series_number, and prequel_id columns. The series_name column lets you group results, and the series_number column lets you order by a preserved sequence that you store as part of the data. The prequel_id column lets you connect to the prequel film, much like the backward pointer of a doubly linked list.
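As an aside, because prequel_id behaves like a backward pointer, you can also walk the whole chain from the last film back to the first with a recursive CTE. This is only a sketch and assumes the modified film table populated by the script below:

WITH RECURSIVE series AS
 (SELECT   f.film_id, f.title, f.prequel_id
  FROM     film f
  WHERE    f.series_name = 'Harry Potter'
  AND      f.series_number = 8
  UNION ALL
  SELECT   fp.film_id, fp.title, fp.prequel_id
  FROM     film fp INNER JOIN series s
  ON       fp.film_id = s.prequel_id)
SELECT title FROM series;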
The new sakila.film table is:
+----------------------+---------------------------------------------------------------------+------+-----+-------------------+-----------------------------------------------+ | Field | Type | Null | Key | Default | Extra | +----------------------+---------------------------------------------------------------------+------+-----+-------------------+-----------------------------------------------+ | film_id | smallint unsigned | NO | PRI | NULL | auto_increment | | title | varchar(255) | NO | MUL | NULL | | | description | text | YES | | NULL | | | release_year | year | YES | | NULL | | | language_id | tinyint unsigned | NO | MUL | NULL | | | original_language_id | tinyint unsigned | YES | MUL | NULL | | | rental_duration | tinyint unsigned | NO | | 3 | | | rental_rate | decimal(4,2) | NO | | 4.99 | | | length | smallint unsigned | YES | | NULL | | | replacement_cost | decimal(5,2) | NO | | 19.99 | | | rating | enum('G','PG','PG-13','R','NC-17') | YES | | G | | | special_features | set('Trailers','Commentaries','Deleted Scenes','Behind the Scenes') | YES | | NULL | | | last_update | timestamp | NO | | CURRENT_TIMESTAMP | DEFAULT_GENERATED on update CURRENT_TIMESTAMP | | series_name | varchar(20) | YES | | NULL | | | series_number | int unsigned | YES | | NULL | | | prequel | int unsigned | YES | | NULL | | +----------------------+---------------------------------------------------------------------+------+-----+-------------------+-----------------------------------------------+ 16 rows in set (0.21 sec) |
After adding the three new columns, I inserted eight rows for the original Harry Potter films. You can use the following script in the MySQL client (mysql) to add the columns and insert the data to test the preceding queries:
-- Use sakila database.
USE sakila;

-- Add the series_name, series_number, and prequel_id columns to the sakila.film table.
ALTER TABLE film
  ADD (series_name varchar(20)),
  ADD (series_number int unsigned),
  ADD (prequel_id int unsigned);

-- Set a session variable to hold the prior film's primary key value.
SET @sv_film_id = 0;

-- Insert 1st Harry Potter film in the sakila.film table with a classic VALUES clause.
INSERT INTO film
( title, description, release_year, language_id, original_language_id
, rental_duration, rental_rate, length, replacement_cost, rating
, special_features, last_update, series_name, series_number, prequel_id )
VALUES
('Harry Potter and the Sorcerer''s Stone'
,'A film about a young boy who on his eleventh birthday discovers, he is the orphaned boy of two powerful wizards and has unique magical powers.'
, 2001, 1, NULL, 3, 0.99, 152, 19.99, 'PG', 'Trailers'
,'2001-11-04', 'Harry Potter', 1, NULL);

-- Assign the last generated primary key value to the session variable.
SET @sv_film_id := last_insert_id();

-- Insert 2nd film in the sakila.film table with a classic VALUES clause.
INSERT INTO film
( title, description, release_year, language_id, original_language_id
, rental_duration, rental_rate, length, replacement_cost, rating
, special_features, last_update, series_name, series_number, prequel_id )
VALUES
('Harry Potter and the Chamber of Secrets'
,'A film where Harry returning to Hogwarts, still famous and a hero, when strange things start to happen ... people are turning to stone and no-one knows what, or who, is doing it.'
, 2002, 1, NULL, 3, 0.99, 160, 19.99, 'PG', 'Trailers'
,'2002-11-15', 'Harry Potter', 2, @sv_film_id);

-- Assign the last generated primary key value to the session variable.
SET @sv_film_id := last_insert_id();

-- Insert 3rd film in the sakila.film table with a classic VALUES clause.
INSERT INTO film
( title, description, release_year, language_id, original_language_id
, rental_duration, rental_rate, length, replacement_cost, rating
, special_features, last_update, series_name, series_number, prequel_id )
VALUES
('Harry Potter and the Prisoner of Azkaban'
,'A film where Harry, Ron, and Hermione return for their third year at Hogwarts and are forced to face escaped prisoner, Sirius Black.'
, 2004, 1, NULL, 3, 0.99, 141, 19.99, 'PG', 'Trailers'
,'2004-06-04', 'Harry Potter', 3, @sv_film_id);

-- Assign the last generated primary key value to the session variable.
SET @sv_film_id := last_insert_id();

-- Insert 4th film in the sakila.film table with a classic VALUES clause.
INSERT INTO film
( title, description, release_year, language_id, original_language_id
, rental_duration, rental_rate, length, replacement_cost, rating
, special_features, last_update, series_name, series_number, prequel_id )
VALUES
('Harry Potter and the Goblet of Fire'
,'A film where Harry Potter''s name emerges from the Goblet of Fire, and he becomes a competitor in a grueling battle for glory among three wizarding schools - the Triwizard Tournament.'
, 2005, 1, NULL, 3, 0.99, 157, 19.99, 'PG', 'Trailers'
,'2005-11-18', 'Harry Potter', 4, @sv_film_id);

-- Assign the last generated primary key value to the session variable.
SET @sv_film_id := last_insert_id();

-- Insert 5th film in the sakila.film table with a classic VALUES clause.
INSERT INTO film
( title, description, release_year, language_id, original_language_id
, rental_duration, rental_rate, length, replacement_cost, rating
, special_features, last_update, series_name, series_number, prequel_id )
VALUES
('Harry Potter and the Order of the Phoenix'
,'A film where Lord Voldemort has returned, but the Ministry of Magic is doing everything it can to keep the wizarding world from knowing the truth.'
, 2007, 1, NULL, 3, 0.99, 138, 19.99, 'PG-13', 'Trailers'
,'2007-07-12', 'Harry Potter', 5, @sv_film_id);

-- Assign the last generated primary key value to the session variable.
SET @sv_film_id := last_insert_id();

-- Insert 6th film in the sakila.film table with a classic VALUES clause.
INSERT INTO film
( title, description, release_year, language_id, original_language_id
, rental_duration, rental_rate, length, replacement_cost, rating
, special_features, last_update, series_name, series_number, prequel_id )
VALUES
('Harry Potter and the Half Blood Prince'
,'A film where Voldemort is tightening his grip on Hogwarts and it is no longer the safe haven it once was. Harry and Dumbledore work to find the key to unlock the Dark Lord''s defenses.'
, 2009, 1, NULL, 3, 0.99, 153, 19.99, 'PG', 'Trailers'
,'2009-07-15', 'Harry Potter', 6, @sv_film_id);

-- Assign the last generated primary key value to the session variable.
SET @sv_film_id := last_insert_id();

-- Insert 7th film in the sakila.film table with a classic VALUES clause.
INSERT INTO film
( title, description, release_year, language_id, original_language_id
, rental_duration, rental_rate, length, replacement_cost, rating
, special_features, last_update, series_name, series_number, prequel_id )
VALUES
('Harry Potter and the Deathly Hallows: Part 1'
,'A film where Harry, Ron and Hermione set out on their perilous mission to track down and destroy the Horcruxes - the keys to Voldemort''s immortality.'
, 2010, 1, NULL, 3, 0.99, 146, 19.99, 'PG-13', 'Trailers'
,'2010-11-19', 'Harry Potter', 7, @sv_film_id);

-- Assign the last generated primary key value to the session variable.
SET @sv_film_id := last_insert_id();

-- Insert 8th film in the sakila.film table with a classic VALUES clause.
INSERT INTO film
( title, description, release_year, language_id, original_language_id
, rental_duration, rental_rate, length, replacement_cost, rating
, special_features, last_update, series_name, series_number, prequel_id )
VALUES
('Harry Potter and the Deathly Hallows: Part 2'
,'A film where Harry, Ron and Hermione set out on their perilous mission to track down and destroy the Horcruxes - the keys to Voldemort''s immortality.'
, 2011, 1, NULL, 3, 0.99, 130, 19.99, 'PG-13', 'Trailers'
,'2011-07-15', 'Harry Potter', 8, @sv_film_id);
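Before reverting anything, you can sanity check the new rows with a self join on the prequel_id column. This is only a minimal sketch of such a query, using the columns added above:

-- Walk the series: each film with the title of its prequel (NULL for the first film).
SELECT   f.series_number
,        f.title
,        p.title AS prequel_title
FROM     film f LEFT JOIN film p
ON       f.prequel_id = p.film_id
WHERE    f.series_name = 'Harry Potter'
ORDER BY f.series_number;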
You can put the following commands into a SQL script file to revert the sakila.film
table to its base configuration:
DELETE FROM film WHERE film_id > 1000;
ALTER TABLE film DROP COLUMN series_name;
ALTER TABLE film DROP COLUMN series_number;
ALTER TABLE film DROP COLUMN prequel_id;
ALTER TABLE film AUTO_INCREMENT = 1000;
As always, I hope this helps those looking for how to solve a new problem.
MySQL Client in 8.0.21+
Having recently installed a fresh copy of MySQL 8.0.21 on Windows, I took careful note of parsing changes in the MySQL Shell. It was tedious to lose multiple-statement processing, which the MySQL Client still supports and which MySQL Workbench supports because it uses the MySQL Client.
It was frustrating when I subsequently discovered that the MySQL Shell took away the ability to write log files by removing the TEE
and NOTEE
commands. I suspected that, because MySQL Workbench still uses the MySQL Client, the client would still be in the installed code tree. In fact, the mysql.exe
client is in this directory:
C:\Program Files\MySQL\MySQL Server 8.0\bin
So, I immediately created a batch file to put the MySQL Client into my %PATH%
environment variable when needed. I used this time-tested DOS command:
copy con mysqlclient.bat
set PATH=%PATH%;C:\Program Files\MySQL\MySQL Server 8.0\bin;.
^Z
Now you can call the MySQL client with the following syntax if you have a student
user and studentdb
database:
mysql -ustudent -p -Dstudentdb
This way, I can just call it in a PowerShell utility before running migration script files that should generate log files for review. It also leaves the MySQL Client out of general use.
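For illustration only, a typical logged session might look like the following; the log and script file names are placeholders rather than files from this post:

C:\> mysqlclient.bat
C:\> mysql -ustudent -p -Dstudentdb
mysql> tee migration_review.log
mysql> source apply_migration.sql
mysql> notee

The tee and notee client commands start and stop writing the log file, and the source command runs the script inside the logged session.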
Conditional Updates
I’m switching labs next term after more than a decade with a more comprehensive lab set, and I’m hoping the new exercises build the students’ core SQL skill set; next term will show whether the change is successful. I’ve opted for Alan Beaulieu’s Learning SQL: Generate, Manipulate, and Retrieve Data, 3rd Edition, because it’s a great book and uses the MySQL database.
One exercise that the students will lose is a data migration from a badly designed common_lookup
table to a well designed common_lookup
table. The starting point is shown below on the left and the fixed version is on the right.
There are several problems with the original common_lookup
table’s design. The first problem is that the common_lookup_context
column does not uniquely identify a location within the data model for at least one list of lookup values. While it uses table names generally, it has no way to support two or more lists within the same table. It also uses a 'MULTIPLE'
string for a list of values that supports two tables. The two tables supported by the 'MULTIPLE' string are the address
and telephone
tables.
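A quick way to see the flaw is to list each context with its lookup types; here is a minimal diagnostic query, using only columns named in this post:

-- Show every context and its lookup types; the 'MULTIPLE' rows expose the design flaw.
SELECT   common_lookup_context
,        common_lookup_type
FROM     common_lookup
ORDER BY common_lookup_context
,        common_lookup_type;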
The lab instructions have the students add the following three columns to the table (a sketch of the ALTER TABLE statement follows the list):
common_lookup_table
common_lookup_column
common_lookup_code
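Here is a minimal sketch of that change; the column data types are my assumptions rather than the lab’s specification:

-- Add the three new columns (data types are assumptions).
ALTER TABLE common_lookup
  ADD (common_lookup_table  varchar(30)),
  ADD (common_lookup_column varchar(30)),
  ADD (common_lookup_code   varchar(5));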
Together the combination of the common_lookup_table
and common_lookup_column
columns creates a non-unique super key. The super key identifies micro subtables. The combination of the common_lookup_table
, common_lookup_column
, and common_lookup_type
columns creates a unique natural key that defines all possible values for a lookup list based on a column in a table.
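If you want the database to enforce that natural key, a unique index along the following lines would do it; the index name here is just a placeholder:

-- Enforce the natural key for each lookup list (index name is a placeholder).
CREATE UNIQUE INDEX common_lookup_nk
ON common_lookup (common_lookup_table, common_lookup_column, common_lookup_type);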
The lab asked the students to use the existing data, string literal values, and some simple rules to populate the new common_lookup_table
and common_lookup_column
columns with data. The rules or steps were:
- Migrate valid table names from the
common_lookup_context
column to thecommon_lookup_table
column. - Migrate a literal
'ADDRESS'
value into thecommon_lookup_table
column when thecommon_lookup_context
column holds a'MULTIPLE'
string value. - Migrate valid table names from the
common_lookup_context
column to thecommon_lookup_column
column by appending a'_TYPE'
string literal to thecommon_lookup_context
column values, except for those three rows that have a 'VISA_CARD', 'MASTER_CARD', or 'DISCOVER_CARD' type. The three exempted rows should update the common_lookup_column
column with a'CREDIT_CARD_TYPE'
string literal. - Migrate a literal
'ADDRESS_TYPE'
value into thecommon_lookup_column
column when thecommon_lookup_context
column holds a'MULTIPLE'
string value. - After these changes, insert two new rows in the
common_lookup
table. They should contain'TELEPHONE'
and'TELEPHONE_TYPE'
string literal values. One of the rows should contain'HOME'
for thecommon_lookup_type
column value and the other'WORK'
for the same column. - Then, the students were asked to update the foreign key column value in the
telephone_type
column of thetelephone
table.
It was a tremendous learning experience for those who did it because there were so many steps required to migrate the structure and data. Years ago, I would offer students half their final grade if they could complete the first four steps in a single UPDATE
statement. Occasionally, I had students complete it. We worked through the problem with a small group of students today in one of my two weekly tutoring sessions. I thought it might be nice to document the solution, which uses CASE
operators in the SET
clause of the UPDATE
statement.
UPDATE common_lookup
SET    common_lookup_table =
         CASE
           WHEN NOT common_lookup_context = 'MULTIPLE'
           THEN common_lookup_context
           ELSE 'ADDRESS'
         END
,      common_lookup_column =
         CASE
           WHEN common_lookup_table = 'MEMBER'
           AND  common_lookup_type IN ('VISA_CARD','MASTER_CARD','DISCOVER_CARD')
           THEN 'CREDIT_CARD_TYPE'
           WHEN NOT common_lookup_context = 'MULTIPLE'
           THEN CONCAT(common_lookup_context,'_TYPE')
           ELSE 'ADDRESS_TYPE'
         END;
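Step #5 isn’t part of that UPDATE statement; it only requires inserting the two telephone rows. A minimal sketch follows, using just the columns named in this post; a real common_lookup table may have additional required columns, such as audit columns:

-- Step #5 sketch: add the two telephone lookup rows (only columns named in this post).
INSERT INTO common_lookup
( common_lookup_table
, common_lookup_column
, common_lookup_type )
VALUES
('TELEPHONE','TELEPHONE_TYPE','HOME')
,('TELEPHONE','TELEPHONE_TYPE','WORK');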
As a rule, students would solve Step #6, which migrates the foreign key values of the telephone
table’s telephone_type
column to the new rows inserted into the common_lookup
table. Most would accomplish that step with two UPDATE
statements. Very few could see how to create a single UPDATE
statement for both conditions, migrating from a now obsolete foreign key value, which pointed to the address table’s rows in the common_lookup table, to a valid foreign key value that points to the telephone table’s rows in the common_lookup table:
UPDATE telephone
SET    telephone_type =
         CASE
           WHEN telephone_type = (SELECT common_lookup_id
                                  FROM   common_lookup
                                  WHERE  common_lookup_table = 'ADDRESS'
                                  AND    common_lookup_type = 'HOME')
           THEN (SELECT common_lookup_id
                 FROM   common_lookup
                 WHERE  common_lookup_table = 'TELEPHONE'
                 AND    common_lookup_type = 'HOME')
           ELSE (SELECT common_lookup_id
                 FROM   common_lookup
                 WHERE  common_lookup_table = 'TELEPHONE'
                 AND    common_lookup_type = 'WORK')
         END
WHERE  telephone_type IN (SELECT common_lookup_id
                          FROM   common_lookup
                          WHERE  common_lookup_table = 'ADDRESS'
                          AND    common_lookup_type IN ('HOME','WORK'));
As always, I hope this helps those looking for new ideas in SQL.