Archive for the ‘Unix’ Category
MySQL Transaction Scope
ACID transactions are a basic feature of SQL's individual Data Manipulation Language (DML) commands, like the INSERT, UPDATE, and DELETE statements. Transactions across two or more tables are a natural extension of the ACID compliance features provided by DML commands. However, they require a structured programming approach, like a stored procedure, or an API implemented in an imperative language.
Surprisingly, transaction management wasn't covered well in Alan Beaulieu's Learning SQL because he only provided pseudo code logic. While I thought troubleshooting some broken MySQL SQL/PSM logic would be a good learning experience for students, it wasn't. So, I wrote this sample code to show how to achieve an all-or-nothing transaction across four tables.
The code for this example covers the tasks necessary to effect transaction management:
- You must disable autocommit in the scope of the session.
- You must use an imperative programming language like SQL/PSM or Python, et cetera.
- You must identify an error thrown in a series of DML statements, roll back all completed work, and exit the scope of the program unit.
This SQL defines the four tables:
/* Drop and create four tables. */
DROP TABLE IF EXISTS one, two, three, four;

CREATE TABLE one   ( id int primary key auto_increment, msg varchar(10));
CREATE TABLE two   ( id int primary key auto_increment, msg varchar(10));
CREATE TABLE three ( id int primary key auto_increment, msg varchar(10));
CREATE TABLE four  ( id int primary key auto_increment, msg varchar(10));
Unfortunately, there’s no way to simply transaction management from the MySQL Command-Line Interface (CLI) because you need to build the logic that manages success and failure. It requires that you create a procedure using MySQL’s SQL/PSM (Persistent Stored Module) or another imperative programming language. You might think why can’t you just write an anonymous block program, like you can do in other stored procedural languages. The answer is simple. You can’t write anonymous blocks in MySQL’s SQL/PSM because they adhere to ANSI SQL-2003 (or more accurately ISO/IEC 9075-4:2003).
The following code block:
- Conditionally drops the locking() procedure.
- Resets the default semicolon (;) delimiter to a double dollar sign ($$), which lets you use semicolons as statement and block terminators inside the procedure.
- Declares a locking() procedure that:
  - Sets a parameter list with four IN-mode parameters.
  - Declares an EXIT handler that undoes any writes before an error in a sequence of DML commands, like INSERT, UPDATE, or DELETE statements, and then aborts completion of the rest of the procedure. (MySQL 13.6.7.2 DECLARE … HANDLER Statement)
  - Disables autocommit in the scope of the session.
  - Starts a transaction context and inserts data into four tables as a single transaction. The EXIT handler takes over when one of the INSERT statements fails with a 1406 error code, which represents data too long for a column's width.
  - Commits the work when all elements of the procedure complete.
/* Conditionally drop procedure. */
DROP PROCEDURE IF EXISTS locking;

/* Set delimiter to $$ to allow ; inside the procedure. */
DELIMITER $$

/* Create a transaction procedure. */
CREATE PROCEDURE locking(IN pv_one    varchar(10)
                        ,IN pv_two    varchar(10)
                        ,IN pv_three  varchar(10)
                        ,IN pv_four   varchar(10))
BEGIN

  /* Declare an EXIT Handler when a string is too long
     for a column. Undo all prior writes with a
     ROLLBACK statement. */
  DECLARE EXIT HANDLER FOR 1406
    BEGIN
      ROLLBACK;
    END;

  /* Disable autocommit. */
  SET AUTOCOMMIT=0;

  /* Start transaction scope. */
  START TRANSACTION;

  /* A series of INSERT statements. */
  INSERT INTO one   (msg) VALUES (pv_one);
  INSERT INTO two   (msg) VALUES (pv_two);
  INSERT INTO three (msg) VALUES (pv_three);
  INSERT INTO four  (msg) VALUES (pv_four);

  /* Commit transaction set. */
  COMMIT;

END;
$$

/* Reset delimiter to ; for SQL statements. */
DELIMITER ;
The next block tests the locking() procedure. The first and third calls are successful, but the second one fails because the third parameter is too long for the msg column in the three table. The error triggers the EXIT handler in the locking() procedure.
/* Call locking procedure. */
CALL locking('Donald','Goofy','Mickey','Pluto');
CALL locking('Squirrel','Chipmunk','Monkey business','Raccoon');
CALL locking('Curly','Larry','Moe','Shemp');
The query block below:
/* Query the results in all four tables. */
SELECT * FROM one;
SELECT * FROM two;
SELECT * FROM three;
SELECT * FROM four;
Returns the following, which shows only the first and third test cases succeed:
+----+--------+
| id | msg    |
+----+--------+
|  1 | Donald |
|  2 | Curly  |
+----+--------+
2 rows in set (0.01 sec)

+----+-------+
| id | msg   |
+----+-------+
|  1 | Goofy |
|  2 | Larry |
+----+-------+
2 rows in set (0.00 sec)

+----+--------+
| id | msg    |
+----+--------+
|  1 | Mickey |
|  2 | Moe    |
+----+--------+
2 rows in set (0.00 sec)

+----+-------+
| id | msg   |
+----+-------+
|  1 | Pluto |
|  2 | Shemp |
+----+-------+
2 rows in set (0.00 sec)
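If you ever need to drive the same all-or-nothing unit of work from a shell script rather than a stored procedure, here's a minimal Bash sketch. The credentials and database name are placeholders rather than part of this example, and the sketch relies on two default behaviors: the mysql client stops a non-interactive batch at the first error, and the server rolls back an uncommitted transaction when the session disconnects.

#!/usr/bin/bash
# Hypothetical credentials and database; adjust them to your environment.
mysql -ustudent -pstudent studentdb <<'SQL'
SET AUTOCOMMIT=0;
START TRANSACTION;
INSERT INTO one   (msg) VALUES ('Squirrel');
INSERT INTO two   (msg) VALUES ('Chipmunk');
INSERT INTO three (msg) VALUES ('Monkey business');
INSERT INTO four  (msg) VALUES ('Raccoon');
COMMIT;
SQL

# A non-zero exit status means the batch aborted before the COMMIT,
# so the server discarded the uncommitted work on disconnect.
if [ $? -ne 0 ]; then
  echo "Transaction failed and was rolled back." 1>&2
fi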
As always, I hope this helps those trying to write transactions across multiple tables.
PL/SQL Inheritance Failure
PL/SQL is a great programming language as far as it goes but it lacks true type inheritance for its collections. While you can create an object type and subtype, you can’t work with collections of those types the same way. PL/SQL object type inheritance, unlike the Java class hierarchy and parallel array class hierarchy, only supports a class hierarchy. Effectively, that means:
- You can pass a subtype as a call parameter, or argument, to a parent data type in a function, procedure, or method signature, but
- You can’t pass a collection of a subtype as a call parameter, or argument, to a collection of parent type in a function, procedure, or method signature.
The limitation occurs because collections have their own data type, which is fixed when you create them. Worse yet, because Oracle has never seen fit to unify its two underlying code trees (23 years and counting since Oracle 8i), you have two types of collections using two distinct C/C++ libraries. You define collections of Attribute Data Types (ADTs) when you create a collection of a standard scalar data type, like NUMBER, VARCHAR2, or DATE. You define collections of User-Defined Data Types (UDTs) when you create a collection of a SQL UDT or PL/SQL-only RECORD data type. The former uses one C/C++ library and the latter another.
Oracle makes the differences between Java and PL/SQL even more complex because it treats collections known as tables (really lists in most programming languages) differently than varrays, or arrays. You create a TABLE collection, or list, when you create a table of a scalar or UDT data type. There are two options when you create these collection types:
- You create an empty collection with a no element constructor, which means you’ll need to allocate memory before assigning element values later in your program.
- You create a populated collection with a comma-delimited list of elements.
Both approaches give you a list of elements with a densely populated index. A "densely populated index" is Oracle's jargon for a 1-based sequence of integers without any gaps (e.g., 1, 2, 3, …). The initial construction works the same way whether you create a TABLE or VARRAY collection type. Unfortunately, after you've built the collection, the behaviors change. If you use Oracle's Collection API to delete one or more items from a TABLE collection type, you create gaps in the index's sequence of values. That means you must use special logic when navigating across a TABLE collection type to ensure it doesn't fail when encountering a gap in the numeric sequence.
For example, here’s a FOR-LOOP
without the logic to vouchsafe a uninterrupted set of sequence values incrementing by a counter of 1 element at a time:
 1  DECLARE
 2    /* Create a local table collection. */
 3    TYPE list IS TABLE OF VARCHAR2(10);
 4
 5    /* Statically allocate memory and assign values to elements. */
 6    lv_list  LIST := list('Moe','Shemp','Larry','Curly');
 7  BEGIN
 8    /* Remove the second element, Shemp, from the collection
 9       of variable length strings. */
10    lv_list.DELETE(2);
11
12    /* Loop through the target with a for loop,
13       which depends on densely populated
14       index values. */
15    FOR i IN 1..lv_list.COUNT LOOP
16      dbms_output.put_line('['||lv_list(i)||']');
17    END LOOP;
18  END;
19  /
The program fails when it tries to read the second element of the table collection, which was previously removed. It raises the following error message after printing the first element of the table collection:
[Moe]
DECLARE
*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at line 16
Conveniently, Oracle’s Collection API provides an EXISTS
method that we can use to check for the presence of an index’s value. Modifying line 16 by wrapping it in an IF-statement fixes one problem but identifies another:
15    FOR i IN 1..lv_list.COUNT LOOP
16      IF lv_list.EXISTS(i) THEN
17        dbms_output.put_line('['||lv_list(i)||']');
18      END IF;
19    END LOOP;
The program no longer fails on a missing index value, or index gap, but it returns fewer lines of output than you might expect.
That’s because the Oracle Collection API’s COUNT
method returns the number of elements currently allocated in memory not the number of original elements. We learn that when we deleted the second element, Oracle deleted the memory allocated for it as well. This is the type of behavior you might expect for a singly linked list. It prints:
[Moe]
[Larry]
One more change is required to count past the gap and up to the highest index value. On line 15, change the COUNT method call to the LAST method call, which returns the highest index value.
15    FOR i IN 1..lv_list.LAST LOOP
16      IF lv_list.EXISTS(i) THEN
17        dbms_output.put_line('['||lv_list(i)||']');
18      END IF;
19    END LOOP;
It now prints the three stooges we would expect to see:
[Moe]
[Larry]
[Curly]
Realistically, a FOR-LOOP is not the best control structure for navigating a collection. You should use a WHILE-LOOP and treat the incrementing value as an iterator rather than a sequence index value. An iterator doesn't worry about gaps in the sequence; it simply moves to the next element in the singly linked list. Here's an example that uses the iterator approach with a WHILE-LOOP:
 1  DECLARE
 2    /* Create a local table collection. */
 3    TYPE list IS TABLE OF VARCHAR2(10);
 4
 5    /* Statically allocate memory and assign values to elements. */
 6    lv_list  LIST := list('Moe','Shemp','Larry','Curly');
 7
 8    /* Declare a current index variable. */
 9    CURRENT  NUMBER;
10  BEGIN
11    /* Remove the second element, Shemp, from the collection
12       of variable length strings. */
13    lv_list.DELETE(2);
14
15    /* Loop through the target with a while loop, which doesn't
16       depend on densely populated index values by setting the
17       starting index value and increment as if with an
18       iterator. */
19
20    CURRENT := lv_list.FIRST;
21    WHILE NOT (CURRENT > lv_list.LAST) LOOP
22      dbms_output.put_line('['||lv_list(CURRENT)||']');
23      CURRENT := lv_list.NEXT(CURRENT);
24    END LOOP;
25  END;
26  /
The iterator approach prints the elements as:
[Moe]
[Larry]
[Curly]
You can reverse the process with the following changes to lines 20-24:
20    CURRENT := lv_list.LAST;
21    WHILE NOT (CURRENT < lv_list.FIRST) LOOP
22      dbms_output.put_line('['||lv_list(CURRENT)||']');
23      CURRENT := lv_list.PRIOR(CURRENT);
24    END LOOP;
It prints the list backwards:
[Curly]
[Larry]
[Moe]
After covering the issues with sparsely populated table collections (those with gaps in the sequence of index values), let's examine how you must work around PL/SQL's lack of a parallel array class hierarchy. The solution lies in combining two programming concepts:
- A function to pack the sparsely populated table collection into a densely populated one, and
- A package with overloaded functions that pack different table collections.
To develop the test case, let’s use an ADT collection because it’s the simplest to work with. The following creates a table collection of a thirty character long scalar string:
CREATE OR REPLACE
  TYPE list IS TABLE OF VARCHAR2(30);
/
The following pack function takes a table collection of the thirty character long scalar string, evaluates the collection for missing elements, and packs the existing elements into a densely populated list:
CREATE OR REPLACE
  FUNCTION pack
  ( pv_list  LIST ) RETURN list IS

  /* Declare a new list. */
  lv_new  LIST := list();

BEGIN

  /* Read, check, and pack an old list into a new one. */
  FOR i IN 1..pv_list.LAST LOOP
    IF pv_list.EXISTS(i) THEN
      lv_new.EXTEND;
      lv_new(lv_new.COUNT) := pv_list(i);
    END IF;
  END LOOP;
  RETURN lv_new;
END;
/
This anonymous block tests the pack function:
DECLARE

  /* Declare a list value. */
  lv_test  LIST := list('Moe','Shemp','Larry','Curly');

BEGIN

  /* Remove one element in the middle. */
  lv_test.DELETE(2);

  /* Pack the list of elements into a sequence of values. */
  lv_test := pack(lv_test);

  /* Print the list of elements from the packed list. */
  FOR i IN 1..lv_test.COUNT LOOP
    dbms_output.put_line('['||lv_test(i)||']');
  END LOOP;
END;
/
It prints the expected three string values:
[Moe]
[Larry]
[Curly]
Now, let’s expand the example to build an overloaded package. The first step requires building a base_t
object type and a table collection of the object type, like:
CREATE OR REPLACE
  TYPE base_t IS OBJECT
  ( oid  NUMBER )
  INSTANTIABLE NOT FINAL;
/

CREATE OR REPLACE
  TYPE base_list IS TABLE OF base_t;
/
Next, you create a book_t subtype of the base_t object type and a book_list table collection of the book_t subtype, like:
CREATE OR REPLACE
  TYPE book_t UNDER base_t
  ( title  VARCHAR2(30)
  , COST   NUMBER);
/

CREATE OR REPLACE
  TYPE book_list IS TABLE OF book_t;
/
We can test the base_t and book_t default constructors with the following SQL*Plus formatting commands and SQL query:
COL oid   FORMAT 999
COL title FORMAT A20
COL COST  FORMAT 99.99
SELECT *
FROM   TABLE(book_list(book_t(1,'Neuromancer',15.30)
                      ,book_t(2,'Count Zero',7.99)
                      ,book_t(3,'Mona Lisa Overdrive',7.99)
                      ,book_t(4,'Burning Chrome',8.89)));
It prints the following output:
 OID TITLE                  COST
---- -------------------- ------
   1 Neuromancer           15.30
   2 Count Zero             7.99
   3 Mona Lisa Overdrive    7.99
   4 Burning Chrome         8.89
The following is an overloaded package specification:
CREATE OR REPLACE
  PACKAGE packer IS

  /* A simple ADT list of strings. */
  FUNCTION pack
  ( pv_list  LIST ) RETURN list;

  /* A UDT list of base objects. */
  FUNCTION pack
  ( pv_list  BASE_LIST ) RETURN base_list;

  /* A UDT list of subtype objects. */
  FUNCTION pack
  ( pv_list  BOOK_LIST ) RETURN book_list;

END;
/
After you create the package specification, you need to provide the implementation. This is typical in any programming language that supports Interface Description Language (IDL). A package body provides the implementation for the package specification. The package body follows:
CREATE OR REPLACE
  PACKAGE BODY packer IS

  /* A simple ADT list of strings. */
  FUNCTION pack
  ( pv_list  LIST ) RETURN list IS

    /* Declare a new list. */
    lv_new  LIST := list();

  BEGIN

    /* Read, check, and pack an old list into a new one. */
    FOR i IN 1..pv_list.LAST LOOP
      IF pv_list.EXISTS(i) THEN
        lv_new.EXTEND;
        lv_new(lv_new.COUNT) := pv_list(i);
      END IF;
    END LOOP;
    RETURN lv_new;
  END pack;

  /* A UDT list of base objects. */
  FUNCTION pack
  ( pv_list  BASE_LIST ) RETURN base_list IS

    /* Declare a new list. */
    lv_new  BASE_LIST := base_list();

  BEGIN

    /* Read, check, and pack an old list into a new one. */
    FOR i IN 1..pv_list.LAST LOOP
      IF pv_list.EXISTS(i) THEN
        lv_new.EXTEND;
        lv_new(lv_new.COUNT) := pv_list(i);
      END IF;
    END LOOP;
    RETURN lv_new;
  END pack;

  /* A UDT list of subtype objects. */
  FUNCTION pack
  ( pv_list  BOOK_LIST ) RETURN book_list IS

    /* Declare a new list. */
    lv_new  BOOK_LIST := book_list();

  BEGIN

    /* Read, check, and pack an old list into a new one. */
    FOR i IN 1..pv_list.LAST LOOP
      IF pv_list.EXISTS(i) THEN
        lv_new.EXTEND;
        lv_new(lv_new.COUNT) := pv_list(i);
      END IF;
    END LOOP;
    RETURN lv_new;
  END pack;

END packer;
/
The test case for the base_list collection type is:
DECLARE
  lv_test  BASE_LIST := base_list(base_t(1),base_t(2)
                                 ,base_t(3),base_t(4));
BEGIN

  /* Remove one element in the middle. */
  lv_test.DELETE(2);

  /* Pack the list of elements into a sequence of values. */
  lv_test := packer.pack(lv_test);

  /* Print the list of elements from the packed list. */
  FOR i IN 1..lv_test.LAST LOOP
    dbms_output.put_line('['||lv_test(i).oid||']');
  END LOOP;
END;
/
It prints the following output:
[1]
[3]
[4]
The test case for the book_list collection type is:
DECLARE
  lv_test  BOOK_LIST := book_list(book_t(1,'Neuromancer',15.30)
                                 ,book_t(2,'Count Zero',7.99)
                                 ,book_t(3,'Mona Lisa Overdrive',7.99)
                                 ,book_t(4,'Burning Chrome',8.89));
BEGIN

  /* Remove one element in the middle. */
  lv_test.DELETE(2);

  /* Pack the list of elements into a sequence of values. */
  lv_test := packer.pack(lv_test);

  /* Print the list of elements from the packed list. */
  FOR i IN 1..lv_test.LAST LOOP
    dbms_output.put_line(
       '['||lv_test(i).oid||']'
     ||'['||lv_test(i).title||']'
     ||'['||lv_test(i).COST||']');
  END LOOP;
END;
/
It prints the following output:
[1][Neuromancer][15.3]
[3][Mona Lisa Overdrive][7.99]
[4][Burning Chrome][8.89]
In conclusion, you would not have to write overloaded functions for every list if PL/SQL supported a parallel array class hierarchy alongside its class hierarchy, like Java does. Unfortunately, it doesn't and likely won't in the future. With the code above, you can pack table collections as a safety measure when they're passed as parameters to other functions, procedures, or methods.
As always, I hope this helps those looking for a solution.
Wrap Oracle’s tnsping
If you’ve worked with the Oracle database a while, you probably noticed that some utilities write to stdout
for both standard output and what should be standard error (stderr
). One of those commands is the tnsping
utility.
You can wrap the tnsping command to send the TNS-03505 error to stderr with the following code. I put Bash functions like these in a library.sh script, which I can source when automating tasks.
#!/usr/bin/bash

tnsping()
{
  if [ ! -z ${1} ]; then
    # Capture the last line of the tnsping output.
    stdout=`$ORACLE_HOME/bin/tnsping ${1} | tail -1`

    # Check whether the last line starts with the TNS-03505 error code,
    # and write the error to stderr or echo the service name on success.
    if [[ `echo ${stdout} | cut -c1-9` = 'TNS-03505' ]]; then
      python -c 'import sys; sys.stderr.write(sys.argv[1] + "\n")' "${stdout}"
    else
      echo "${1}"
    fi
  fi
}
You should notice that the script uses a Python call to write the error message to standard error (stderr), but you can do the same redirection in the Bash shell with the following:
#!/usr/bin/bash

tnsping()
{
  if [ ! -z ${1} ]; then
    # Capture the last line of the tnsping output.
    stdout=`$ORACLE_HOME/bin/tnsping ${1} | tail -1`

    # Check whether the last line starts with the TNS-03505 error code,
    # and redirect the error to stderr or echo the service name on success.
    if [[ `echo ${stdout} | cut -c1-9` = 'TNS-03505' ]]; then
      echo ${stdout} 1>&2
    else
      echo "${1}"
    fi
  fi
}
Interactively, we can now test a non-existent service name like wrong with this syntax:
tnsping wrong
It’ll print the standard error to console, like:
TNS-03505: Failed to resolve name
or, you can suppress standard error (stderr) by redirecting it to the traditional black hole, like:
tnsping wrong 2>/dev/null
After redirecting standard error (stderr), you simply receive nothing back. That lets you evaluate in another script whether or not the utility raises an error.
In an automated Bash shell script, you use the source command to put the Bash function in scope, like this:
source library.sh
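For example, here's a minimal sketch of a calling script that uses the sourced wrapper; the orcl service name is just a placeholder for whatever your tnsnames.ora file defines.

#!/usr/bin/bash

# Put the tnsping() wrapper function in scope.
source library.sh

# Call the wrapper, discard stderr, and capture stdout.
service=`tnsping orcl 2>/dev/null`

# An empty result means the service name didn't resolve.
if [ ! -z "${service}" ]; then
  echo "Resolved service: ${service}"
else
  echo "TNS lookup failed." 1>&2
fi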
As always, I hope this helps those looking for a solution.
Node.js & MySQL
These are my notes for creating a small Node.js application that queries a MySQL database. The post will show you how to:
- Configure your Node.js development directory.
- Build a small application to test a MySQL connection.
- Build a small application that connects to the MySQL database and queries data.
This blog post assumes you’ve performed a global install of Node.js on a Linux server. If you’re unfamiliar with how to perform a global Node.js installation, I cover how to do it in this earlier blog post.
Before you write the Node.js application, you need to set up a db developer directory. Then, create a node_modules symbolic link in the db directory that points to the /usr/local/lib/node_modules directory. You can use the following command from the db directory:
ln -s /usr/local/lib/node_modules `pwd`/node_modules
After creating the node_modules symbolic link, you need to run the following two npm commands:
npm init --y
npm install --save mysql
The first command sets up a generic package.json file, and the second adds the mysql package and its supporting packages to the package.json file. These two steps configure the Node.js side of these examples. Both example programs below require that you create the student user with a native password, like so:
CREATE USER 'student'@'localhost' IDENTIFIED WITH 'mysql_native_password' BY 'student';
GRANT ALL PRIVILEGES ON studentdb.* TO 'student'@'localhost';
FLUSH PRIVILEGES;
The following example shows you how to check a connection to the MySQL database:
const mysql = require('mysql')
const connection = mysql.createConnection({
   host: 'localhost',
   user: 'student',
   password: 'student',
   database: 'studentdb'
})

connection.connect((err) => {
  if (err) throw err
  else {
    console.log('Connected to MySQL Server!\n')
    console.log('User configured with mysql_native_password.\n')
    console.log('Press Ctrl-C to terminate ...')
  }
})
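Assuming you saved the test program as connect.js (a file name I made up for this example), you run it from the db directory like so:

node connect.js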
You extend the previous example by adding a query component and returning the query result value to the console's log (leveraging the w3schools Node.js and MySQL tutorial example):
const mysql = require('mysql')
const connection = mysql.createConnection({
   host: 'localhost',
   user: 'student',
   password: 'student',
   database: 'studentdb'
})

connection.connect((err) => {
  if (err) throw err
  else {
    console.log('Connected to MySQL Server!\n')
    connection.query('SELECT DISTINCT item_title FROM item', function (err, result) {
      if (err) throw err
      else console.log(result)
      console.log('Press Ctrl-C to terminate ...')
    })
  }
})
It should display the following:
Connected to MySQL Server!

[ RowDataPacket { item_title: 'The Hunt for Red October' },
  RowDataPacket { item_title: 'Star Wars I' },
  RowDataPacket { item_title: 'Star Wars II' },
  RowDataPacket { item_title: 'Star Wars III' },
  RowDataPacket { item_title: 'The Chronicles of Narnia' },
  RowDataPacket { item_title: 'RoboCop' },
  RowDataPacket { item_title: 'Pirates of the Caribbean' },
  RowDataPacket { item_title: 'MarioKart' },
  RowDataPacket { item_title: 'Splinter Cell' },
  RowDataPacket { item_title: 'Need for Speed' },
  RowDataPacket { item_title: 'The DaVinci Code' },
  RowDataPacket { item_title: 'Cars' },
  RowDataPacket { item_title: 'Beau Geste' },
  RowDataPacket { item_title: 'I Remember Mama' },
  RowDataPacket { item_title: 'Tora! Tora! Tora!' },
  RowDataPacket { item_title: 'A Man for All Seasons' },
  RowDataPacket { item_title: 'Hook' },
  RowDataPacket { item_title: 'Around the World in 80 Days' },
  RowDataPacket { item_title: 'Harry Potter and the Sorcerer\'s Stone' },
  RowDataPacket { item_title: 'Camelot' },
  RowDataPacket { item_title: 'Casino Royale' },
  RowDataPacket { item_title: 'Die Another Day' },
  RowDataPacket { item_title: 'Golden Eye' },
  RowDataPacket { item_title: 'Tomorrow Never Dies' },
  RowDataPacket { item_title: 'The World Is Not Enough' },
  RowDataPacket { item_title: 'Brave Heart' },
  RowDataPacket { item_title: 'Christmas Carol' },
  RowDataPacket { item_title: 'Scrooge' },
  RowDataPacket { item_title: 'Clear and Present Danger' },
  RowDataPacket { item_title: 'Harry Potter and the Chamber of Secrets' },
  RowDataPacket { item_title: 'Harry Potter and the Prisoner of Azkaban' },
  RowDataPacket { item_title: 'Harry Potter and the Goblet of Fire' },
  RowDataPacket { item_title: 'Harry Potter and the Order of the Phoenix' },
  RowDataPacket { item_title: 'Harry Potter and the Half Blood Prince' },
  RowDataPacket { item_title: 'Harry Potter and the Deathly Hallows, Part 1' },
  RowDataPacket { item_title: 'Harry Potter and the Deathly Hallows, Part 2' },
  RowDataPacket { item_title: 'Tron' },
  RowDataPacket { item_title: 'The Avengers' },
  RowDataPacket { item_title: 'Thor: The Dark World' } ]
Press Ctrl-C to terminate ...
As always, I hope this helps those looking to learn how to query MySQL from Node.js.
Recursive bash function
While teaching a class on the Linux Command-Line Interface (CLI), the book gave an example of generating a list of random US telephone numbers into a file. The book uses the RANDOM function to generate segments of the telephone number, and then the grep command to identify malformed telephone numbers.
My students wanted me to explain why the numbers were malformed. I had to explain that the RANDOM function returns a random integer between 0 and 32767, so it may produce a 1- to 5-digit value. That means you may get a 1-digit or 2-digit number when you request a 3-digit random number, or a 1- to 3-digit number when you request a 4-digit random number.
The author’s example is:
for i in {1..10}; do
  echo "(${RANDOM:0:3}) ${RANDOM:0:3}-${RANDOM:0:4}" >> list.txt
done
They asked if there was a way to write a shell script that guaranteed random but well-formed US telephone numbers. I said yes; however, you need to write a recursive bash shell function and assign the result to a global variable set in the shell script.

They seemed doubtful, so I wrote it for them. Here's the script if you're interested in learning more about bash shell scripting. While I implemented it with a bash array, that's optional.
#!/usr/bin/bash

# ============================================================
#  Name:   telephone.sh
#  Author: Michael McLaughlin
#  Date:   05-May-2020
# ------------------------------------------------------------
#  Purpose: Demonstrate how to generate random telephone
#           numbers. The RANDOM function returns a random
#           number between 0 and 32767, and while you can
#           easily shave off an extra digit, guaranteeing a
#           value above 100 is impossible without logic.
# ============================================================

targetLength()
{
  # Declare variable in function-level scope.
  randomString=''

  # Check the number of parameters to process.
  if [[ ${#} = 2 ]]; then

    # Assign value to function-level and local variables.
    randomString=${1}
    formatLength=${2}

    # Get the length of the telephone number as integer.
    length=`echo -n ${randomString} | wc -c`

    # Calculate any shortfall.
    short=$((${formatLength}-${length}))

    # Check if the telephone number is too short.
    if [[ ${short} > 0 ]]; then
      randomString=`echo "${randomString}${RANDOM:0:${short}}"`
    fi
  fi

  # Check if the combination of random numbers equals the target length
  # and assign the value to the global variable, or repeat processing
  # by making a recursive function call.
  if [[ `echo -n ${randomString} | wc -c` = ${formatLength} ]]; then
    result=${randomString}
  else
    targetLength ${randomString} ${formatLength}
  fi
}

# Declare global variable to support targetLength().
result=''

# Declare an array of strings.
declare -A telephone_parts

# Generate one hundred random telephone numbers.
for i in {1..100}; do

  # Create random three digit area code.
  targetLength ${RANDOM:0:3} 3
  telephone_parts[1]=${result}

  # Create random three digit prefix code.
  targetLength ${RANDOM:0:3} 3
  telephone_parts[2]=${result}

  # Create random four digit number code.
  targetLength ${RANDOM:0:4} 4
  telephone_parts[3]=${result}

  # Print the telephone numbers.
  echo "[${i}] (${telephone_parts[1]}) ${telephone_parts[2]}-${telephone_parts[3]}"
done
For reference, a recursive function call isn't required here. It could be done more effectively with the following while loop:
targetLength()
{
  # Declare variable in function-level scope.
  randomString=''
  short=1

  # Check the number of parameters to process.
  if [[ ${#} = 2 ]]; then

    # Assign value to function-level and local variables.
    randomString=${1}
    formatLength=${2}

    # Check if the telephone number is too short.
    while [[ ${short} > 0 ]]; do

      # Get the length of the telephone number as integer.
      length=`echo -n ${randomString} | wc -c`

      # Calculate any shortfall.
      short=$((${formatLength}-${length}))

      # Assign new value to randomString.
      randomString=`echo "${randomString}${RANDOM:0:${short}}"`
    done

    # Assign randomString to global result variable.
    result=${randomString}
  fi
}
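As an aside, if all you need is a fixed-width numeric segment, you can avoid the length check entirely by constraining RANDOM with modulo arithmetic and zero-padding the result with printf. This sketch isn't from the original lesson; it only illustrates the simpler arithmetic approach:

# Generate a well-formed telephone number without any length checks.
areaCode=$(printf "%03d" $((RANDOM % 1000)))
prefix=$(printf "%03d" $((RANDOM % 1000)))
line=$(printf "%04d" $((RANDOM % 10000)))
echo "(${areaCode}) ${prefix}-${line}"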
As always, I hope this helps those who want to learn or solve a problem.
MySQL Linux to Windows
My students want to transfer their course setup MySQL files from the Linux VM that I provide to Microsoft Windows 10. This post explains how, because I found a couple of small errors in the posts they would most likely find by searching Google.
The first step is required because I didn't assign a name or domain to the Fedora VM, which allows it to run as localhost on any student machine. In tandem, I didn't assign a static IP address but opted for dynamic IP assignment. That means the first step to securely copy the files requires you to find the assigned IP address. You can do that with the following Linux command:
ifconfig -a | grep 'inet[[:blank:]]' | head -1 | cut -c 14-30
It would return something like:
192.168.147.198
After you have discovered the IP address, you need to download PuTTY from its web site because it includes the pscp (PuTTY Secure Copy) utility. I recommend you click on the MSI (Microsoft Installer Package) file and install it on your Windows machine. As a rule, you should accept the default location, which is C:\Program Files\PuTTY.
While you could alter your system-level %PATH% environment variable after you install the software, I recommend you only include it in the %PATH% within the scope of a Command (cmd) shell. Navigate to your Windows Start menu and enter cmd in the search field. It should launch the Command Prompt terminal emulator.
In the terminal, type the following command to add the PuTTY directory to your path (yes, it's case sensitive):
SET PATH=%PATH%;%ProgramFiles%\PuTTY
Now, you can securely copy the files and directory structure from Linux to Windows with the following command (where you replace the traditional server name with the dynamically assigned IP address). You should also be in the target directory where you want the files and directories copied:
C:\Data\cit225>pscp -r student@192.168.147.198:/home/student/Data/cit225/mysql .
After entering the password for the student on the Linux VM, you should see the following copy over:
Raiders2.png              | 99 kB  |  99.5 kB/s | ETA: 00:00:00 | 100%
LordOfTheRings3.png       | 119 kB | 119.8 kB/s | ETA: 00:00:00 | 100%
HarryPotter4.png          | 103 kB | 103.9 kB/s | ETA: 00:00:00 | 100%
Raiders1.png              | 92 kB  |  92.4 kB/s | ETA: 00:00:00 | 100%
Raiders3.png              | 123 kB | 123.9 kB/s | ETA: 00:00:00 | 100%
LordOfTheRings2.png       | 111 kB | 111.7 kB/s | ETA: 00:00:00 | 100%
LordOfTheRings1.png       | 103 kB | 104.0 kB/s | ETA: 00:00:00 | 100%
HarryPotter2.png          | 118 kB | 118.7 kB/s | ETA: 00:00:00 | 100%
HarryPotter7.png          | 150 kB | 150.2 kB/s | ETA: 00:00:00 | 100%
HarryPotter3.png          | 106 kB | 106.1 kB/s | ETA: 00:00:00 | 100%
HarryPotter5.png          | 82 kB  |  82.5 kB/s | ETA: 00:00:00 | 100%
HarryPotter6.png          | 129 kB | 129.9 kB/s | ETA: 00:00:00 | 100%
HarryPotter1.png          | 118 kB | 118.8 kB/s | ETA: 00:00:00 | 100%
HarryPotter8.png          | 150 kB | 150.9 kB/s | ETA: 00:00:00 | 100%
HarryPotter8.txt          | 8 kB   |   8.5 kB/s | ETA: 00:00:00 | 100%
HarryPotter3.txt          | 5 kB   |   5.8 kB/s | ETA: 00:00:00 | 100%
HarryPotter5.txt          | 7 kB   |   7.9 kB/s | ETA: 00:00:00 | 100%
HarryPotter1.txt          | 6 kB   |   6.6 kB/s | ETA: 00:00:00 | 100%
HarryPotter2.txt          | 7 kB   |   7.8 kB/s | ETA: 00:00:00 | 100%
Raiders3.txt              | 5 kB   |   5.6 kB/s | ETA: 00:00:00 | 100%
HarryPotter4.txt          | 7 kB   |   7.5 kB/s | ETA: 00:00:00 | 100%
HarryPotter7.txt          | 5 kB   |   5.4 kB/s | ETA: 00:00:00 | 100%
HarryPotter6.txt          | 7 kB   |   7.4 kB/s | ETA: 00:00:00 | 100%
LOTRFellowship.txt        | 4 kB   |   5.0 kB/s | ETA: 00:00:00 | 100%
apply_store_base.sql      | 1 kB   |   1.6 kB/s | ETA: 00:00:00 | 100%
query_membership.sql      | 0 kB   |   0.3 kB/s | ETA: 00:00:00 | 100%
apply_mysql_lab1.sql      | 1 kB   |   1.9 kB/s | ETA: 00:00:00 | 100%
configure_mysql_web.sql   | 37 kB  |  37.1 kB/s | ETA: 00:00:00 | 100%
seed_mysql_store_ri2.sql  | 58 kB  |  58.5 kB/s | ETA: 00:00:00 | 100%
cleanup_mysql_store.sql   | 5 kB   |   5.4 kB/s | ETA: 00:00:00 | 100%
create_mysql_store_ri2.sq | 21 kB  |  21.1 kB/s | ETA: 00:00:00 | 100%
My students will need to repeat this step to transfer all of the sample PHP files that demonstrate web application patterns. They also need to inspect individual files to ensure any path referencing commands are manually converted to their new Windows equivalent.
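If there are many files to fix, a small script can handle most of that conversion. This is only a sketch with made-up paths; run it on the Linux VM before copying (or in a Unix-like shell on Windows, such as Git Bash), and adjust the search and replacement strings to match your own layout:

# Rewrite Linux path literals in the copied SQL scripts to a
# Windows-style path; both paths here are examples only.
for f in *.sql; do
  sed -i.bak 's|/home/student/Data/cit225|C:/Data/cit225|g' "${f}"
done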
They can move the physical data files as the root superuser with the following pscp command, provided you haven't stored the files somewhere other than the default location:
C:\Data\temp>pscp -r root@192.168.147.198:/var/lib/mysql .
As always, I hope this helps those trying to sort things out.
Quick C How-to add .h
Somebody wanted a quick example of using a user-defined header file for a constant value. While I think there are a lot of examples on the Internet already, here's a quick one.
You can define a header file (global.h in this example) in the local directory with the value of pi, like this:
// A global header file for constants.
double pi = 3.1415926535;
Then, you can write a little circle.c program like so:
#include <stdio.h>
#include "global.h"

int main()
{
  int r;
  double area;

  // Get the radius.
  printf("Enter a radius in inches: ");
  scanf("%i",&r);

  // Calculate the area of a circle.
  area = pi * (r * r);

  // Print the radius and circle area.
  printf("Area of a %d inch circle is %lf square inches!\n",r,area);

  return(0);
}
The local global.h file is enclosed in double quotes rather than angle brackets (< >). You can compile circle.c like this on Linux, provided both files are in the same directory:
gcc -o circle circle.c
According to the gcc documentation, the search priority for #include <> on a "standard Unix system" is as follows:
/usr/local/include
libdir/gcc/target/version/include
/usr/target/include
/usr/include
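If you keep headers in a directory other than the current one, you can add that directory to the search path with gcc's -I option; the directory name here is only a placeholder:

gcc -I/path/to/headers -o circle circle.c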
After ensuring the circle file is executable (chmod it if necessary), you can run it like so:
./circle
You’ll see more or less the following:
Enter a radius in inches: 3
Area of a 3 inch circle is 28.274334 square inches!
As always, I hope the example helps those looking for a starting point.
DBeaver for PostgreSQL
I’m migrating my database classes from the Oracle database to the PostgreSQL database. Using the Oracle Express Edition has always required a virtualized image because students use Windows and Mac OS. Also, the university doesn’t like my use of a virtualized image. Virtualization imposes incremental cost on students to have high end laptops.
The available Docker images don’t typically support the Oracle Express Edition. That means there are licensing implications tied to Oracle.
As a committee, we decided to use PostgreSQL as our new database platform. We opted for PostgreSQL over MySQL because it supports arrays and stored procedures. PostgreSQL's PL/pgSQL is a solid implementation of stored procedures. While MySQL supports a stored procedure language, it effectively discourages using it.
We use the sqlplus, mysql, or psql Command-Line Interface (CLI) as the primary interface because the CLI supports production-oriented deployment and DevOps practices. The open source DBeaver project provides a clean native GUI management console for PostgreSQL. DBeaver also supports other databases, which pgAdmin doesn't.
You click the Download button on the DBeaver home page. It supports Windows, Mac OS, and Linux distributions. I downloaded the Linux RPM package to my Downloads directory. You can install the RPM with the following syntax:
rpm -ivh dbeaver-ce-6.2.3-stable.x86_64.rpm
It doesn’t check for dependencies. The installation is quick. You can verify that it’s installed with the following command:
which dbeaver
It installs here:
/usr/bin/dbeaver
If you failed to install the PostgreSQL JAR file, you’ll see the following error message:
It will look like the following on Linux:
You should have a user entry in the pg_hba.conf file, like the following:
host    all    all    127.0.0.1/32    trust
As always, I hope this helps those solving problems.
Postgres Drop Structures
While building my PostgreSQL environment for the class, I had to write a couple utilities. They do the following:
- Drops all the tables from a schema.
- Drops all the sequences from a schema that aren't tied to an _id column with a SERIAL data type.
- Drops all the functions and procedures (qualified as routines) from a schema.
- Drops all the triggers from a schema.
The following gives you the code for all four files: drop_tables.sql, drop_sequences.sql, drop_routines.sql, and drop_triggers.sql.
- The drop_tables.sql script
- The drop_sequences.sql script
- The drop_routines.sql script
- The drop_triggers.sql script

The four scripts appear below in that order.
/* Verify all tables present. */
SELECT   table_name
FROM     information_schema.tables
WHERE    table_catalog = current_setting('videodb.catalog_name')
AND      table_schema = 'public';

DO $$
DECLARE
  /* Declare an indefinite length string and record variable. */
  sql  VARCHAR;
  row  RECORD;

  /* Declare a cursor. */
  table_cursor CURSOR FOR
    SELECT table_name
    FROM   information_schema.tables
    WHERE  table_catalog = current_setting('videodb.catalog_name')
    AND    table_schema = 'public';
BEGIN
  /* Open the cursor. */
  OPEN table_cursor;
  LOOP
    /* Fetch table names. */
    FETCH table_cursor INTO row;

    /* Exit when no more records are found. */
    EXIT WHEN NOT FOUND;

    /* Concatenate together a DDL to drop the table with prejudice. */
    sql := 'DROP TABLE IF EXISTS '||row.table_name||' CASCADE';

    /* Execute the DDL statement. */
    EXECUTE sql;
  END LOOP;

  /* Close the cursor. */
  CLOSE table_cursor;
END;
$$;

/* Verify all tables are dropped. */
SELECT   table_name
FROM     information_schema.tables
WHERE    table_catalog = current_setting('videodb.catalog_name')
AND      table_schema = 'public';
/* Verify all sequences present. */
SELECT   sequence_name
FROM     information_schema.sequences
WHERE    sequence_catalog = current_setting('videodb.catalog_name')
AND      sequence_schema = 'public';

DO $$
DECLARE
  /* Declare an indefinite length string and record variable. */
  sql  VARCHAR;
  row  RECORD;

  /* Declare a cursor. */
  sequence_cursor CURSOR FOR
    SELECT sequence_name
    FROM   information_schema.sequences
    WHERE  sequence_catalog = current_setting('videodb.catalog_name')
    AND    sequence_schema = 'public';
BEGIN
  /* Open the cursor. */
  OPEN sequence_cursor;
  LOOP
    /* Fetch sequence names. */
    FETCH sequence_cursor INTO row;

    /* Exit when no more records are found. */
    EXIT WHEN NOT FOUND;

    /* Concatenate together a DDL to drop the sequence with prejudice. */
    sql := 'DROP SEQUENCE IF EXISTS '||row.sequence_name;

    /* Execute the DDL statement. */
    EXECUTE sql;
  END LOOP;

  /* Close the cursor. */
  CLOSE sequence_cursor;
END;
$$;

/* Verify all sequences are dropped. */
SELECT   sequence_name
FROM     information_schema.sequences
WHERE    sequence_catalog = current_setting('videodb.catalog_name')
AND      sequence_schema = 'public';
/* Verify all routines present. */
SELECT   routine_name
,        routine_type
FROM     information_schema.routines
WHERE    specific_catalog = current_setting('videodb.catalog_name')
AND      specific_schema = 'public';

DO $$
DECLARE
  /* Declare an indefinite length string and record variable. */
  sql  VARCHAR;
  row  RECORD;

  /* Declare a cursor. */
  routine_cursor CURSOR FOR
    SELECT routine_name
    ,      routine_type
    FROM   information_schema.routines
    WHERE  specific_catalog = current_setting('videodb.catalog_name')
    AND    routine_schema = 'public';
BEGIN
  /* Open the cursor. */
  OPEN routine_cursor;
  LOOP
    /* Fetch routine names. */
    FETCH routine_cursor INTO row;

    /* Exit when no more records are found. */
    EXIT WHEN NOT FOUND;

    /* Concatenate together a DDL to drop the routine with prejudice. */
    sql := 'DROP '||row.routine_type||' IF EXISTS '||row.routine_name;

    /* Execute the DDL statement. */
    EXECUTE sql;
  END LOOP;

  /* Close the cursor. */
  CLOSE routine_cursor;
END;
$$;

/* Verify all routines are dropped. */
SELECT   routine_name
,        routine_type
FROM     information_schema.routines
WHERE    specific_catalog = 'videodb'
AND      specific_schema = 'public';
/* Verify all triggers present. */
SELECT   trigger_name
FROM     information_schema.triggers
WHERE    trigger_catalog = current_setting('videodb.catalog_name')
AND      trigger_schema = 'public';

DO $$
DECLARE
  /* Declare an indefinite length string and record variable. */
  sql  VARCHAR;
  row  RECORD;

  /* Declare a cursor. */
  trigger_cursor CURSOR FOR
    SELECT trigger_name
    ,      event_object_table
    FROM   information_schema.triggers
    WHERE  trigger_catalog = current_setting('videodb.catalog_name')
    AND    trigger_schema = 'public';
BEGIN
  /* Open the cursor. */
  OPEN trigger_cursor;
  LOOP
    /* Fetch trigger names. */
    FETCH trigger_cursor INTO row;

    /* Exit when no more records are found. */
    EXIT WHEN NOT FOUND;

    /* Concatenate together a DDL to drop the trigger with prejudice;
       PostgreSQL requires the ON table clause. */
    sql := 'DROP TRIGGER IF EXISTS '||row.trigger_name||' ON '||row.event_object_table;

    /* Execute the DDL statement. */
    EXECUTE sql;
  END LOOP;

  /* Close the cursor. */
  CLOSE trigger_cursor;
END;
$$;

/* Verify all triggers are dropped. */
SELECT   trigger_name
FROM     information_schema.triggers
WHERE    trigger_catalog = current_setting('videodb.catalog_name')
AND      trigger_schema = 'public';
You can create a cleanup_catalog.sql script to call all four in sequence, like the following:
\i /home/student/Data/cit225/postgres/lib/utility/drop_tables.sql
\i /home/student/Data/cit225/postgres/lib/utility/drop_sequences.sql
\i /home/student/Data/cit225/postgres/lib/utility/drop_routines.sql
\i /home/student/Data/cit225/postgres/lib/utility/drop_triggers.sql
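You can also run the wrapper non-interactively from the shell with psql's -f option. The user, database, and path below mirror the earlier examples and assume you saved cleanup_catalog.sql alongside the four drop scripts; adjust them to your environment:

psql -U student -d videodb -f /home/student/Data/cit225/postgres/lib/utility/cleanup_catalog.sql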
The nice thing about this approach is that you won’t see any notices when tables, sequences, routines, or triggers aren’t found. It’s a clean approach to cleaning the schema for a testing environment.
Postgres 11 Video DB
Installing PostgreSQL 11 on Fedora 30 requires an update to my previous instructions for creating a sandboxed user. A sandboxed user can only access a non-data-dictionary database with a password. In the real world, rather than on a personal test instance, you would also configure users with networking in mind. However, this post only shows you how to connect from the local server.
This post builds on my PostgreSQL Installation blog post and shows you how to create a tablespace, database, role, and user. It also shows you how to change the default configuration for how users connect to the database.
The following steps create a tablespace, database, role, and user:
- Create a physical directory
The directory for the data dictionary has changed with PostgreSQL 11. You can find it with the following command:
postgres=# show data_directory;
This will return the following:
     data_directory
------------------------
 /var/lib/pgsql/11/data
(1 row)
You need to create a physical videoDB subdirectory in the /var/lib/pgsql/11 directory. You can use the following syntax from the /var/lib/pgsql/11 directory to create the videoDB subdirectory:
mkdir videoDB
- Create Tablespace
You can create a video_db tablespace with the following syntax:
CREATE TABLESPACE video_db
  OWNER postgres
  LOCATION '/var/lib/pgsql/11/videoDB';
This will return the following:
CREATE TABLESPACE
You can query whether you successfully created the video_db tablespace with the following:
SELECT * FROM pg_tablespace;
It should return the following:
  spcname   | spcowner | spcacl | spcoptions
------------+----------+--------+------------
 pg_default |       10 |        |
 pg_global  |       10 |        |
 video_db   |       10 |        |
(3 rows)
- Create a Database
You can create a videodb database with the following syntax:
CREATE DATABASE videodb
  WITH OWNER = postgres
       ENCODING = 'UTF8'
       TABLESPACE = video_db
       LC_COLLATE = 'en_US.UTF-8'
       LC_CTYPE = 'en_US.UTF-8'
       CONNECTION LIMIT = -1;
You can verify the creation of the videodb database with the following command:
postgres# \l
It should show you a display like the following:
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 videodb   | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres         +
           |          |          |             |             | postgres=CTc/postgres+
           |          |          |             |             | dba=CTc/postgres
(4 rows)
Then, you can assign a comment to the database with the following syntax:
COMMENT ON DATABASE videodb IS 'Video Database';
- Create a Role, Grant, and User
In this section you create a dba role, grant privileges on the videodb database to that role, and create a user with the role, using the following three statements. There are three steps in this section.
- The first step creates a dba role:

  CREATE ROLE dba WITH SUPERUSER;

- The second step grants all privileges on the videodb database to the dba role:

  GRANT ALL PRIVILEGES ON DATABASE videodb TO dba;

- The third step creates a student user with the dba role:

  CREATE USER student
    WITH ROLE dba
         ENCRYPTED PASSWORD 'student';
- It is possible that you may (and should if this is a new instance you are building) encounter an error when you try to connect as a sandboxed user. The syntax to connect as the student user is:
psql -d videodb -U student -W
You may encounter this error:
psql: FATAL: Peer authentication failed for user "student"
You can fix this in PostgreSQL 11 by changing the user access parameters in the pg_hba.conf configuration file. The file is found in the /var/lib/pgsql/11/data directory, and you need to edit it as the root or postgres user. The default entry is:
  # TYPE  DATABASE        USER            ADDRESS                 METHOD
  # "local" is for Unix domain socket connections only
  local   all             all                                     peer
You should replace it with the following:
  # TYPE  DATABASE        USER            ADDRESS                 METHOD
  # "local" is for Unix domain socket connections only
  local   postgres        all                                     peer
  local   videodb         student                                 md5
After you save those changes in the pg_hba.conf file, you need to restart the PostgreSQL (postgresql-11) service. You can do that with the following command as the root user:

  service postgresql-11 restart
- Connect to the videodb database as the student user
Once the postgresql-11 service is restarted, you can connect with the sandboxed student user using this syntax:
psql -d videodb -U student -W
If you did everything correctly, you should see the following after providing the student password for the student user:
psql (11.4, server 11.5)
Type "help" for help.

videodb=>
After connecting to the videodb database, you can query the current database, like so:
SELECT current_database();
It should return the following:
 current_database
------------------
 videodb
(1 row)
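From the shell, you can also confirm which user and database you're connected to in one step with psql's -c option; the credentials match the examples above and the standard current_user function should report student:

psql -d videodb -U student -W -c "SELECT current_user, current_database();"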
This has shown you how to create a videodb database, a dba role, and a student user.