Archive for the ‘Unix’ Category
MySQL 5-Table Procedure
A student wanted a better example of writing a MySQL Persistent Stored Module (PSM) that maintains transactional scope across a couple of tables. Here's the one I wrote about ten years ago to build the MySQL Video Store model. It looks like I neglected to put it out there before, so here it is for reference.
-- Conditionally drop procedure if it exists.
DROP PROCEDURE IF EXISTS contact_insert;

-- Reset the delimiter so that a semicolon can be used as a statement and block terminator.
DELIMITER $$

SELECT 'CREATE PROCEDURE contact_insert' AS "Statement";
CREATE PROCEDURE contact_insert
( pv_member_type        CHAR(12)
, pv_account_number     CHAR(19)
, pv_credit_card_number CHAR(19)
, pv_credit_card_type   CHAR(12)
, pv_first_name         CHAR(20)
, pv_middle_name        CHAR(20)
, pv_last_name          CHAR(20)
, pv_contact_type       CHAR(12)
, pv_address_type       CHAR(12)
, pv_city               CHAR(30)
, pv_state_province     CHAR(30)
, pv_postal_code        CHAR(20)
, pv_street_address     CHAR(30)
, pv_telephone_type     CHAR(12)
, pv_country_code       CHAR(3)
, pv_area_code          CHAR(6)
, pv_telephone_number   CHAR(10)) MODIFIES SQL DATA
BEGIN

  /* Declare variables to manipulate auto generated sequence values. */
  DECLARE member_id          int unsigned;
  DECLARE contact_id         int unsigned;
  DECLARE address_id         int unsigned;
  DECLARE street_address_id  int unsigned;
  DECLARE telephone_id       int unsigned;

  /* Declare local constants for who-audit columns. */
  DECLARE lv_created_by        int unsigned DEFAULT 1001;
  DECLARE lv_creation_date     DATE DEFAULT UTC_DATE();
  DECLARE lv_last_updated_by   int unsigned DEFAULT 1001;
  DECLARE lv_last_update_date  DATE DEFAULT UTC_DATE();

  /* Declare a locally scoped variable. */
  DECLARE duplicate_key INT DEFAULT 0;

  /* Declare a duplicate key handler. */
  DECLARE CONTINUE HANDLER FOR 1062 SET duplicate_key = 1;

  /* Start the transaction context. */
  START TRANSACTION;

  /* Create a SAVEPOINT as a recovery point. */
  SAVEPOINT all_or_none;

  /* Insert into the member table, the first table in sequence based on
     inheritance of primary keys by foreign keys. */
  INSERT INTO member
  ( member_type
  , account_number
  , credit_card_number
  , credit_card_type
  , created_by
  , creation_date
  , last_updated_by
  , last_update_date )
  VALUES
  ((SELECT common_lookup_id
    FROM   common_lookup
    WHERE  common_lookup_context = 'MEMBER'
    AND    common_lookup_type = pv_member_type)
  , pv_account_number
  , pv_credit_card_number
  ,(SELECT common_lookup_id
    FROM   common_lookup
    WHERE  common_lookup_context = 'MEMBER'
    AND    common_lookup_type = pv_credit_card_type)
  , lv_created_by
  , lv_creation_date
  , lv_last_updated_by
  , lv_last_update_date );

  /* Preserve the sequence by a table related variable name. */
  SET member_id = last_insert_id();

  /* Insert into the contact table, the next table in the foreign key
     dependency chain. */
  INSERT INTO contact
  VALUES
  ( null
  , member_id
  ,(SELECT common_lookup_id
    FROM   common_lookup
    WHERE  common_lookup_context = 'CONTACT'
    AND    common_lookup_type = pv_contact_type)
  , pv_first_name
  , pv_middle_name
  , pv_last_name
  , lv_created_by
  , lv_creation_date
  , lv_last_updated_by
  , lv_last_update_date );

  /* Preserve the sequence by a table related variable name. */
  SET contact_id = last_insert_id();

  /* Insert into the address table, which depends on the contact table. */
  INSERT INTO address
  VALUES
  ( null
  , last_insert_id()
  ,(SELECT common_lookup_id
    FROM   common_lookup
    WHERE  common_lookup_context = 'MULTIPLE'
    AND    common_lookup_type = pv_address_type)
  , pv_city
  , pv_state_province
  , pv_postal_code
  , lv_created_by
  , lv_creation_date
  , lv_last_updated_by
  , lv_last_update_date );

  /* Preserve the sequence by a table related variable name. */
  SET address_id = last_insert_id();

  /* Insert into the street_address table, which depends on the address table. */
  INSERT INTO street_address
  VALUES
  ( null
  , last_insert_id()
  , pv_street_address
  , lv_created_by
  , lv_creation_date
  , lv_last_updated_by
  , lv_last_update_date );

  /* Insert into the telephone table, which depends on the contact and
     address tables. */
  INSERT INTO telephone
  VALUES
  ( null
  , contact_id
  , address_id
  ,(SELECT common_lookup_id
    FROM   common_lookup
    WHERE  common_lookup_context = 'MULTIPLE'
    AND    common_lookup_type = pv_telephone_type)
  , pv_country_code
  , pv_area_code
  , pv_telephone_number
  , lv_created_by
  , lv_creation_date
  , lv_last_updated_by
  , lv_last_update_date );

  /* This acts as an exception handling block. */
  IF duplicate_key = 1 THEN

    /* This undoes all DML statements to this point in the procedure. */
    ROLLBACK TO SAVEPOINT all_or_none;

  END IF;

  /* This commits the write when successful and is harmless otherwise. */
  COMMIT;

END;
$$

-- Reset the standard delimiter to let the semicolon work as an execution command.
DELIMITER ;
You can then call the procedure, like:
SELECT 'CALL contact_insert() PROCEDURE 5 times' AS "Statement";
CALL contact_insert('INDIVIDUAL','R11-514-34','1111-1111-1111-1111','VISA_CARD','Goeffrey','Ward','Clinton','CUSTOMER','HOME','Provo','Utah','84606','118 South 9th East','HOME','011','801','423-1234');
CALL contact_insert('INDIVIDUAL','R11-514-35','1111-2222-1111-1111','VISA_CARD','Wendy',null,'Moss','CUSTOMER','HOME','Provo','Utah','84606','1218 South 10th East','HOME','011','801','423-1234');
CALL contact_insert('INDIVIDUAL','R11-514-36','1111-1111-2222-1111','VISA_CARD','Simon','Jonah','Gretelz','CUSTOMER','HOME','Provo','Utah','84606','2118 South 7th East','HOME','011','801','423-1234');
CALL contact_insert('INDIVIDUAL','R11-514-37','1111-1111-1111-2222','MASTER_CARD','Elizabeth','Jane','Royal','CUSTOMER','HOME','Provo','Utah','84606','2228 South 14th East','HOME','011','801','423-1234');
CALL contact_insert('INDIVIDUAL','R11-514-38','1111-1111-3333-1111','VISA_CARD','Brian','Nathan','Smith','CUSTOMER','HOME','Spanish Fork','Utah','84606','333 North 2nd East','HOME','011','801','423-1234');
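If you want a quick sanity check after the calls, a join across the two parent tables should return five rows. This is only a sketch: it assumes the member and contact tables join through a member_id surrogate key column, which the procedure's INSERT statements imply but don't list explicitly.

-- Assumes member_id is the primary key of member and a foreign key in contact.
SELECT   m.account_number
,        c.first_name
,        c.last_name
FROM     member m INNER JOIN contact c
ON       m.member_id = c.member_id;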
I hope this code-complete approach helps those looking to learn how to write MySQL PSMs.
PostgreSQL Arrays
This post shows the basic arrays of a set of integers and strings before showing you how to create nested tables of data in PostgreSQL. By the way, they're not called nested tables in PostgreSQL as they are in Oracle, but they perform like their Oracle cousins.
Let’s create a table with an auto-incrementing column and two arrays, one array of integers and another of strings:
-- Conditionally drop the demo table.
DROP TABLE IF EXISTS demo;

-- Create the test table.
CREATE TABLE demo
( demo_id      serial
, demo_number  integer[5]
, demo_string  varchar(5)[7]);
You can insert test values like this:
INSERT INTO demo
( demo_number
, demo_string )
VALUES
( array[1,2,3,4,5]
, array['One','Two','Three','Four','Five','Six','Seven']);
Then, you can query them with this unnest function, like:
SELECT unnest(demo_number) AS numbers
,      unnest(demo_string) AS strings
FROM   demo;
It returns:
 numbers | strings
---------+---------
       1 | One
       2 | Two
       3 | Three
       4 | Four
       5 | Five
         | Six
         | Seven
(7 rows)
You may note that the two arrays are asymmetrical; PostgreSQL pads the shorter array's column with null values. It only becomes an issue when you navigate the result in a PL/pgSQL cursor or an imperative programming language, like Python.
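If you want that padding behavior to be explicit, the multi-argument form of unnest in the FROM clause (available since PostgreSQL 9.4, as I recall) expands the arrays in parallel and pads the shorter one with nulls. A minimal sketch against the demo table above:

SELECT t.num
,      t.str
FROM   demo d
,      unnest(d.demo_number, d.demo_string) AS t(num, str);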
Now, let’s do something more interesting like work with a composite user-defined type, like the player structure. You would create the composite user-defined type with this syntax:
-- Conditionally drop the player type.
DROP TYPE IF EXISTS player;

-- Create the player type.
CREATE TYPE player AS
( player_no        integer
, player_name      varchar(24)
, player_position  varchar(14)
, ab               integer
, r                integer
, h                integer
, bb               integer
, rbi              integer );
You can create a world_series table that includes a players column that uses an array of the player type, like:
-- Conditionally drop the world_series table.
DROP TABLE IF EXISTS world_series;

-- Create the world_series table.
CREATE TABLE world_series
( world_series_id  serial
, team             varchar(24)
, players          player[30]
, game_no          integer
, year             integer );
If you're familiar with the Oracle Database, you'd have to specify a nested table in the syntax. Fortunately, PostgreSQL doesn't require that.
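For comparison, here's a hedged sketch of the Oracle equivalent; the player_tab collection type and the players_nt storage table are hypothetical names, and the STORE AS clause is the extra piece Oracle requires:

-- Oracle syntax sketch; player_tab and players_nt are hypothetical names.
CREATE TYPE player_tab IS TABLE OF player;
/
CREATE TABLE world_series
( world_series_id  NUMBER
, team             VARCHAR2(24)
, players          player_tab
, game_no          NUMBER
, year             NUMBER )
NESTED TABLE players STORE AS players_nt;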
Insert a row with an array of two players with the following statement:
INSERT INTO world_series
( team
, players
, game_no
, year )
VALUES
('San Francisco Giants'
, array[(24,'Willie Mayes','Center Fielder',5,0,1,0,0)::player
       ,(5,'Tom Haller','Catcher',4,1,2,0,2)::player]
, 4
, 1962 );
You can append to the array with the following syntax. A former student and I have a disagreement on whether this is shown in the PostgreSQL 8.15.4 Modifying Arrays documentation. I believe it can be inferred from the document and he doesn't believe so. Anyway, here's how you add an element to an existing array in a table with the UPDATE statement:
UPDATE world_series
SET    players = (SELECT array_append(players,(7,'Henry Kuenn','Right Fielder',3,0,0,1,0)::player)
                  FROM   world_series)
WHERE  team = 'San Francisco Giants'
AND    year = 1962
AND    game_no = 4;
Like Oracle's nested tables, PostgreSQL's arrays of composite user-defined types require writing a PL/pgSQL function. I'll try to add one of those shortly in another blog entry to show you how to edit and replace entries in stored arrays of composite user-defined types.
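Until then, here's a minimal sketch of what such a function could look like. It assumes you match array entries by player_no and replace the whole player element when the numbers match:

CREATE OR REPLACE FUNCTION replace_player
( pv_team     VARCHAR
, pv_year     INTEGER
, pv_game_no  INTEGER
, pv_player   PLAYER ) RETURNS INTEGER AS
$$
DECLARE
  /* Declare a local copy of the stored array. */
  lv_players  PLAYER[];
BEGIN
  /* Fetch the stored array into the local variable. */
  SELECT players INTO lv_players
  FROM   world_series
  WHERE  team = pv_team AND year = pv_year AND game_no = pv_game_no;

  /* Replace any element with a matching player number. */
  FOR i IN 1..ARRAY_LENGTH(lv_players,1) LOOP
    IF (lv_players[i]).player_no = pv_player.player_no THEN
      lv_players[i] := pv_player;
    END IF;
  END LOOP;

  /* Write the modified array back to the table. */
  UPDATE world_series
  SET    players = lv_players
  WHERE  team = pv_team AND year = pv_year AND game_no = pv_game_no;

  RETURN 0;
END;
$$ LANGUAGE plpgsql;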
You can query the unnested rows and get a return set like a Python tuple with the following query:
SELECT unnest(players) AS player_list
FROM   world_series
WHERE  team = 'San Francisco Giants'
AND    year = 1962
AND    game_no = 4;
It returns the three rows from the players array:
                  player_list
------------------------------------------------
 (24,"Willie Mayes","Center Fielder",5,0,1,0,0)
 (5,"Tom Haller",Catcher,4,1,2,0,2)
 (7,"Henry Kuenn","Right Fielder",3,0,0,1,0)
(3 rows)
It returns the data set in entry order. If we step outside of the standard 8.15 Arrays PostgreSQL documentation, you can do much more with arrays (or nested tables). The balance of this example demonstrates syntax that helps you get more useful results from arrays in PostgreSQL.
You can use a Common Table Expression (CTE) to get the columnar display of the player composite user-defined type. This type of solution is beyond the standard documentation, like:
WITH list AS
 (SELECT unnest(players) AS row_result
  FROM   world_series
  WHERE  team = 'San Francisco Giants'
  AND    year = 1962
  AND    game_no = 4)
SELECT (row_result).player_name
,      (row_result).player_no
,      (row_result).player_position
FROM   list;
If you’re unfamiliar with accessing composite user-defined types, I wrote a post on that 7 years ago. You can find the older blog entry PostgreSQL Composites on my blog.
It returns only the three requested columns of the player composite user-defined type:
 player_name  | player_no | player_position
--------------+-----------+-----------------
 Willie Mayes |        24 | Center Fielder
 Tom Haller   |         5 | Catcher
 Henry Kuenn  |         7 | Right Fielder
(3 rows)
You should note that the data is presented in an entry-ordered manner when unnested alone in the SELECT-list. That behavior changes when the SELECT-list includes non-array data.
The easiest way to display data from the non-array and array columns is to list them inside the SELECT-list of the CTE, like:
WITH list AS
 (SELECT game_no AS game
  ,      year
  ,      unnest(players) AS row_result
  FROM   world_series
  WHERE  team = 'San Francisco Giants'
  AND    year = 1962
  AND    game_no = 4)
SELECT game
,      year
,      (row_result).player_name
,      (row_result).player_no
,      (row_result).player_position
FROM   list;
It returns an ordered set of unnested rows when you include non-array columns, like:
 game | year | player_name  | player_no | player_position
------+------+--------------+-----------+-----------------
    4 | 1962 | Henry Kuenn  |         7 | Right Fielder
    4 | 1962 | Tom Haller   |         5 | Catcher
    4 | 1962 | Willie Mayes |        24 | Center Fielder
(3 rows)
While you can join the world_series table to the unnested array rows (returned as a derived table), it's a bad idea. The mechanics require you to return the primary key column in the same SELECT-list of the CTE. Then, you join the CTE list to the world_series table by using the world_series_id primary key.
However, there is no advantage to an inner join approach, and it imposes unnecessary processing on the database server. The odd rationale that I heard when I noticed somebody using a CTE-to-base-table join was: "That's necessary so they could use column aliases for the non-array columns." That's not true, because you can use the aliases inside the CTE, as shown above where game is an alias for the game_no column.
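For completeness, here's a sketch of the join version I'm arguing against; it works, but the extra join through the world_series_id primary key buys you nothing the CTE-only version doesn't already provide:

WITH list AS
 (SELECT world_series_id
  ,      unnest(players) AS row_result
  FROM   world_series)
SELECT ws.game_no AS game
,      ws.year
,      (l.row_result).player_name
FROM   world_series ws INNER JOIN list l
ON     ws.world_series_id = l.world_series_id
WHERE  ws.team = 'San Francisco Giants'
AND    ws.year = 1962
AND    ws.game_no = 4;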
As always, I hope this helps those looking to solve a problem in PostgreSQL.
Setting SQL_MODE
In MySQL, the @@sql_mode parameter should generally include ONLY_FULL_GROUP_BY. If it doesn't include it and you don't have the ability to change the database parameters, you can set it for your session with a MySQL PSM (Persistent Stored Module), like:
Create the set_full_group_by procedure:
-- Drop procedure conditionally on whether it exists already.
DROP PROCEDURE IF EXISTS set_full_group_by;

-- Reset delimiter to allow semicolons to terminate statements.
DELIMITER $$

-- Create a procedure to verify and set connection parameter.
CREATE PROCEDURE set_full_group_by()
  LANGUAGE SQL
  NOT DETERMINISTIC
  SQL SECURITY DEFINER
  COMMENT 'Set connection parameter when not set.'
BEGIN

  /* Check whether full group by is set in the connection and
     if unset, set it in the scope of the connection. */
  IF NOT EXISTS
    (SELECT NULL
     WHERE  REGEXP_LIKE(@@SQL_MODE,'ONLY_FULL_GROUP_BY'))
  THEN
    SET SQL_MODE=(SELECT CONCAT(@@sql_mode,',ONLY_FULL_GROUP_BY'));
  END IF;
END;
$$

-- Reset the default delimiter.
DELIMITER ;
Run the following SQL command before you attempt the exercises in the same session scope:
CALL set_full_group_by();
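You can confirm the procedure worked by querying the parameter in the same session:

SELECT @@sql_mode;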
As always, I hope this helps those looking for a solution. Naturally, you can simply run the SET statement from the IF block above on its own.
Drop Overloaded Routine
In October 2019, I wrote a post with anonymous block programs to drop tables, sequences, routines, and triggers. Two weeks later, I wrote another post to drop all overloaded routines. However, I recognized the other day that I should have written a function that lets you target which function or procedure you want to drop.
The older code only let you drop all of your functions or procedures. That was overkill when you’re working on new functions or procedures.
This post provides a utility for those writing functions and procedures in the public schema of any database in a PostgreSQL installation; it drops targeted functions or procedures from that schema.
The code follows below:
CREATE OR REPLACE FUNCTION drop_routine
( IN pv_routine_name  VARCHAR(64)
, IN pv_routine_type  VARCHAR(64))
RETURNS INTEGER AS
$$
DECLARE
  /* Declare the current catalog. */
  lv_local_catalog  VARCHAR(64) := current_database();

  /* Declare return type variable. */
  lv_retval  INTEGER := 1;

  /* Manage data dictionary case mechanics:
     ======================================
     routine_name is always in lowercase.
     routine_type is always in uppercase. */
  lv_routine_name  VARCHAR(64) := LOWER(pv_routine_name);
  lv_routine_type  VARCHAR(64) := UPPER(pv_routine_type);

  /* Declare an indefinite length string for SQL statement. */
  sql  VARCHAR;

  /* Declare variables to manage cursor return values. */
  row  RECORD;
  arg  VARCHAR;

  /* Declare parameter list. */
  list  VARCHAR;

  /* Declare a routine cursor. */
  routine_cursor CURSOR( cv_routine_name  VARCHAR
                       , cv_routine_type  VARCHAR ) FOR
    SELECT r.routine_name
    ,      r.specific_name
    ,      r.routine_type
    FROM   information_schema.routines r
    WHERE  r.specific_catalog = current_database()
    AND    r.routine_schema = 'public'
    AND    r.routine_type = cv_routine_type
    AND    r.routine_name = cv_routine_name;

  /* Declare a parameter cursor. */
  parameter_cursor CURSOR( cv_specific_name  VARCHAR ) FOR
    SELECT args.data_type
    FROM   information_schema.parameters args
    WHERE  args.specific_catalog = current_database()
    AND    args.specific_schema = 'public'
    AND    args.specific_name = cv_specific_name;

BEGIN

  /* Open the cursor. */
  OPEN routine_cursor(lv_routine_name, lv_routine_type);
  <<row_loop>>
  LOOP
    /* Fetch routine names. */
    FETCH routine_cursor INTO row;

    /* Exit when no more records are found. */
    EXIT row_loop WHEN NOT FOUND;

    /* Initialize parameter list. */
    list := '(';

    /* Open the parameter cursor. */
    OPEN parameter_cursor(row.specific_name::varchar);
    <<parameter_loop>>
    LOOP
      FETCH parameter_cursor INTO arg;

      /* Exit the parameter loop. */
      EXIT parameter_loop WHEN NOT FOUND;

      /* Add parameter and delimit more than one parameter with a comma. */
      IF LENGTH(list) > 1 THEN
        list := CONCAT(list,',',arg);
      ELSE
        list := CONCAT(list,arg);
      END IF;
    END LOOP;

    /* Close the parameter list. */
    list := CONCAT(list,')');

    /* Close the parameter cursor. */
    CLOSE parameter_cursor;

    /* Concatenate together a DDL to drop the routine with prejudice. */
    sql := 'DROP '||row.routine_type||' IF EXISTS '||row.routine_name||list;

    /* Execute the DDL statement. */
    EXECUTE sql;

    /* Assign success flag of 0. */
    lv_retval := 0;
  END LOOP;

  /* Close the routine_cursor. */
  CLOSE routine_cursor;

  /* Return the output text variable. */
  RETURN lv_retval;
END
$$ LANGUAGE plpgsql;
If you now create a series of overloaded hello functions, like:
CREATE OR REPLACE FUNCTION hello()
RETURNS text AS
$$
DECLARE
  output  VARCHAR;
BEGIN
  SELECT 'Hello World!' INTO output;
  RETURN output;
END
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION hello(whom text)
RETURNS text AS
$$
DECLARE
  output  VARCHAR;
BEGIN
  SELECT CONCAT('Hello ',whom,'!') INTO output;
  RETURN output;
END
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION hello(id int, whom text)
RETURNS text AS
$$
DECLARE
  output  VARCHAR;
BEGIN
  SELECT CONCAT('[',id,'] Hello ',whom,'!') INTO output;
  RETURN output;
END
$$ LANGUAGE plpgsql;
After you create the overloaded functions, you can query their status from the information_schema.routines table in the data dictionary:
SELECT routine_name
,      specific_name
,      routine_type
FROM   information_schema.routines
WHERE  specific_catalog = current_setting('videodb.catalog_name')
AND    routine_schema = 'public'
AND    routine_name = 'hello';
Which shows you the three versions of the hello function:
 routine_name | specific_name | routine_type
--------------+---------------+--------------
 hello        | hello_18100   | FUNCTION
 hello        | hello_18101   | FUNCTION
 hello        | hello_18102   | FUNCTION
(3 rows)
You can drop all versions of the hello functions by calling the drop_routine function:
SELECT CASE
         WHEN drop_routine('hello','function') = 0 THEN 'Success'
         ELSE 'Failure'
       END AS drop_routine;
It returns the following:
 drop_routine
--------------
 Success
(1 row)
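You can confirm the drop by re-running the earlier information_schema.routines query, which should now return an empty set:

SELECT routine_name
FROM   information_schema.routines
WHERE  specific_catalog = current_setting('videodb.catalog_name')
AND    routine_schema = 'public'
AND    routine_name = 'hello';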
As always, I hope this helps those looking for how to routinely test new functions and procedures.
PostgreSQL CLI Error
Problems get reported to me all the time on installations for my students; this one was interesting. They got an error complaining about missing version information in the libpq.so.5 library.
psql: /usr/pgsql-11/lib/libpq.so.5: no version information available (required by psql)
psql: /usr/pgsql-11/lib/libpq.so.5: no version information available (required by psql)
could not change directory to "/root": Permission denied
psql (11.7, server 11.8)
Type "help" for help.

postgres=#
It appeared to be a mismatch of libraries, but it's not that. For reference, this was a Fedora instance. I ran the rpm utility:
rpm -qa | grep postgres
It returned:
postgresql11-libs-11.8-1PGDG.f30.x86_64
postgresql-11.7-2.fc30.x86_64
postgresql-server-11.7-2.fc30.x86_64
Then, I had them run the rpm utility again looking for the Python driver for PostgreSQL:
rpm -qa | grep psycopg2
It returned:
python3-psycopg2-2.7.7-1.fc30.x86_64
Then, it was easy to explain. The Python psycopg2 library depends on both the PostgreSQL 11.7 and 11.8 libraries, and the libpq.so.5 library is missing version information. You must ignore the error, which is really only a warning message, when you want to work with Fedora, PostgreSQL 11, and Python 3.
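If you want to see the version skew for yourself, psycopg2 exposes its own version and the compile-time libpq version as module attributes; treat the attribute names in this quick sketch as my recollection of the psycopg2 module rather than gospel:

# Print the driver version and the libpq version psycopg2 was built against.
# The libpq number is encoded as major*10000 + minor (e.g., 110008 is 11.8).
import psycopg2

print(psycopg2.__version__)
print(psycopg2.__libpq_version__)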
MySQL WITH Clause
When I went over my example of using the WITH clause to solve how to use a series of literal values in data sets, some students got it right away and some didn’t. The original post showed how to solve a problem where one value in the data set is returned in the SELECT-list and two values are used as the minimum and maximum values with a BETWEEN operator. It used three approaches with literal values:
- A list of Python dictionaries that require you to filter the return set from the database through a range loop and if statement that mimics a SQL BETWEEN operator.
- A WITH clause that accepts the literals as bind variables to filter the query results inside the query.
- A table design that holds the literal values that an analyst might use for reporting.
It was the last example that required elaboration. I explained you might build a web form that uses a table, and the table could allow a data analyst to enter parameter sets. That way the analyst could submit a flag value to use one or another set of values. I threw out the idea on the whiteboard of introducing a report column to the prior post’s level table. The student went off to try it.
Two problems occurred. The first was in the design of the new table and the second was how to properly use the MySQL Python driver.
Below is a formal table design that supports this extension of the first blog post as a list of parameter values. It uses a report column as a super key to return a set of possible values. One value shows in the SELECT-list, and the other two values deploy as the minimum and maximum values in a BETWEEN operator. It is seeded with two sets of values: the Summary report level with three rows and the Detail report level with five rows.
-- Conditionally drop the levels table.
DROP TABLE IF EXISTS levels;

-- Create the levels list.
CREATE TABLE levels
( level      VARCHAR(16)
, report     ENUM('Summary','Detail')
, min_roles  INT
, max_roles  INT );

-- Insert values into the list table.
INSERT INTO levels
( level, report, min_roles, max_roles )
VALUES
 ('Hollywood Star','Summary', 30, 99999)
,('Prolific Actor','Summary', 20, 29)
,('Newcomer','Summary', 1, 19)
,('Hollywood Star','Detail', 30, 99999)
,('Prolific Actor','Detail', 20, 29)
,('Regular Actor','Detail', 10, 19)
,('Actor','Detail', 5, 9)
,('Newcomer','Detail', 1, 4);
The foregoing table design uses an ENUM type because the set of report formats is small and fixed. If you use the table to support multiple reports, you should add a second super key column like report_type. The report_type column would let you use the table to support a series of different report parameter lists.
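A hedged sketch of that change, with report_type as a hypothetical column name:

-- report_type is a hypothetical second super key column.
ALTER TABLE levels
  ADD COLUMN report_type VARCHAR(16);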
While the student used a %s placeholder inside the query, they created a runtime error when trying to pass the single bind variable into the query. The student misunderstood how to convert the report input parameter into a tuple, which shows up when the student calls the Python MySQL driver, like this:
59 | cursor.execute(query, (report))
The student’s code generated the following error stack:
Traceback (most recent call last):
  File "./python-with-clause.py", line 59, in <module>
    cursor.execute(query,(report))
  File "/usr/lib/python3.7/site-packages/mysql/connector/cursor_cext.py", line 248, in execute
    prepared = self._cnx.prepare_for_mysql(params)
  File "/usr/lib/python3.7/site-packages/mysql/connector/connection_cext.py", line 632, in prepare_for_mysql
    raise ValueError("Could not process parameters")
ValueError: Could not process parameters
The ValueError should indicate to the developer that they’ve used a wrong data type in the call to the method:
cursor.execute(<class 'str'>,<class 'tuple'>)
This clearly was a misunderstanding of how to make a single string into a tuple in Python, which is best illustrated inside an interactive Python shell, like:
>>> # Define a variable.
>>> x = 'Detail'
>>> # An incorrect attempt to make a string a tuple because
>>> # parentheses without a comma are only grouping.
>>> y = (x)
>>> # Check type of y after assignment.
>>> print(type(y))
<class 'str'>
>>> # tuple() returns a tuple, but of the string's characters,
>>> # which isn't what a single bind variable requires.
>>> y = tuple(x)
>>> # Check type of y after assignment.
>>> print(type(y))
<class 'tuple'>
>>> # The correct way: a trailing comma makes a one-element tuple.
>>> z = (x,)
>>> # Check type of z after assignment.
>>> print(type(z))
<class 'tuple'>
So, the fix to line 59 was quite simple:
59 | cursor.execute(query, (report,))
The student started with a copy of a Python program that I provided. I fixed the argument handling and added some comments. The line 59 reference above maps to the cursor.execute() call in the following program.
# Import the libraries.
import sys
import mysql.connector
from mysql.connector import errorcode

# Capture argument list.
fullCmdArguments = sys.argv

# Assign argument list to variable.
argumentList = fullCmdArguments[1:]

# Define a standard report variable.
report = "Summary"

# Check and process argument list.
# ============================================================
#  If there is exactly one argument, override the default
#  report value with the argument.
# ============================================================
if (len(argumentList) == 1):
  # Override the default report value.
  if (isinstance(report,str)):
    report = argumentList[0]

# Attempt the query.
# ============================================================
#  Use a try-catch block to manage the connection.
# ============================================================
try:
  # Open connection.
  cnx = mysql.connector.connect(user='student', password='student',
                                host='127.0.0.1', database='sakila')

  # Create cursor.
  cursor = cnx.cursor()

  # Set the query statement.
  query = ("WITH actors AS "
           "(SELECT   a.actor_id "
           " ,        a.first_name "
           " ,        a.last_name "
           " ,        COUNT(fa.actor_id) AS num_roles "
           " FROM     actor a INNER JOIN film_actor fa "
           " ON       a.actor_id = fa.actor_id "
           " GROUP BY a.actor_id "
           " ,        a.first_name "
           " ,        a.last_name ) "
           " SELECT   a.first_name "
           " ,        a.last_name "
           " ,        l.level "
           " ,        a.num_roles "
           " FROM     actors a CROSS JOIN levels l "
           " WHERE    a.num_roles BETWEEN l.min_roles AND l.max_roles "
           " AND      l.report = %s "
           " ORDER BY a.last_name "
           " ,        a.first_name")

  # Execute cursor.
  cursor.execute(query,(report,))

  # Display the rows returned by the query.
  for (first_name, last_name, level, num_roles) in cursor:
    print('{0} {1} is a {2} with {3} films.'.format(
            first_name.title(), last_name.title(), level.title(), num_roles))

  # Close cursor.
  cursor.close()

# ------------------------------------------------------------
# Handle exception and close connection.
except mysql.connector.Error as e:
  if e.errno == errorcode.ER_ACCESS_DENIED_ERROR:
    print("Something is wrong with your user name or password")
  elif e.errno == errorcode.ER_BAD_DB_ERROR:
    print("Database does not exist")
  else:
    print("Error code:", e.errno)         # error number
    print("SQLSTATE value:", e.sqlstate)  # SQLSTATE value
    print("Error message:", e.msg)        # error message

# Close the connection when the try block completes.
else:
  cnx.close()
A Linux shell program like the following (provided the shell script and Python program share the same name except for the extension) can run the Python program with or without a parameter. It works without a parameter because the Python program sets a default value for the report variable.
# Switch the file extension and run the python program.
file=${0/%sh/py}
python3 ${file} "${@}"
You call the shell script like this:
./python-with-clause.sh Detail
As always, I hope this helps those looking for a solution.
Linux sqlplus wrapper
Here's a quick way to ensure you can use the up arrow and navigation keys when using the sqlplus command-line interface: just add the following function to your .bashrc file.
sqlplus ()
{
  # Find a fully qualified rlwrap path, suppressing any error.
  path=`which rlwrap 2>/dev/null`
  file=''

  # Strip the directory path from the rlwrap file name.
  if [ -n "${path}" ]; then
    file=${path##/*/}
  fi

  # Wrap sqlplus with rlwrap when the rlwrap utility is installed.
  if [ -n "${file}" ] && [[ ${file} = "rlwrap" ]]; then
    rlwrap sqlplus "${@}"
  else
    echo "Command-line history unavailable: Install the rlwrap package."
    $ORACLE_HOME/bin/sqlplus "${@}"
  fi
}
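After you add the function, reload your shell configuration and call sqlplus as you normally would; the connect string below is only a placeholder:

# Reload the .bashrc file in the current shell.
source ~/.bashrc

# Call sqlplus through the wrapper function (placeholder credentials).
sqlplus student/student@xe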
As always, I hope this helps those looking for solutions.
Waking up the Network
Interesting problems seem to plague me from time to time. The current problem isn't yet solved, but I'm working on it. After a sleep cycle, IPv6 networking isn't starting, and the browser reports:
Try:

- Checking the network cables, modem, and router
- Reconnecting to Wi-Fi

ERR_INTERNET_DISCONNECTED
In the broken Fedora 30 VM, I checked the status with the nmcli tool:
sudo nmcli general status
It returned:
STATE   CONNECTIVITY  WIFI-HW  WIFI     WWAN-HW  WWAN
asleep  none          enabled  enabled  enabled  enabled
The STATE should return connected and the CONNECTIVITY should return full. Unfortunately, that's not the case.
There was little surprise that the next check:
sudo nmcli device
Returned the following:
DEVICE      TYPE      STATE      CONNECTION
virbr0      bridge    unmanaged  --
ens33       ethernet  unmanaged  --
lo          loopback  unmanaged  --
virbr0-nic  tun       unmanaged  --
In a working instance, it should return:
DEVICE      TYPE      STATE      CONNECTION
ens33       ethernet  connected  ens33
virbr0      bridge    connected  virbr0
lo          loopback  unmanaged  --
virbr0-nic  tun       unmanaged  --
I’m currently troubleshooting what failed by leveraging an article on How to Configure Network Connection Using ‘nmcli’ Tool and the Gnome nmcli documentation. Naturally, when I get it fixed, I’ll finish this article.
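In the meantime, here are two hedged guesses worth trying; both are standard NetworkManager controls, though I haven't yet confirmed either one fixes this particular wake-from-sleep bug:

# Turn NetworkManager's overall networking state back on (clears "asleep").
sudo nmcli networking on

# Or, restart the NetworkManager service outright.
sudo systemctl restart NetworkManager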
Title Case Anyone?
Sometimes life is too surreal. Like when somebody says, "How do you get title case in an Oracle database?" That's when you know three things about the individual, while suppressing laughter: they're not very experienced with SQL, likely lazy, and don't read the documentation.
I had a little fun with somebody today by taking them down a small rat-hole. “Oh, gosh … ” I said, “… let’s write a function for that.” Here’s the joke function, like:
CREATE OR REPLACE FUNCTION title_case
( string VARCHAR2 ) RETURN VARCHAR2 IS
BEGIN
  /* Change upper case to title case. */
  RETURN UPPER(SUBSTR(string,1,1))
  ||     LOWER(SUBSTR(string,2,LENGTH(string)));
END title_case;
/
Then, we tested it with a query from the pseudo dual table:
SELECT title_case('incredible') AS "Proper Name"
FROM   dual;
It returned:
Proper Name
-----------
Incredible
Then, I said “Oh, that’s not his proper name in the Pixar World.” It should be: Mr. Incredible. Let’s try that:
SELECT title_case('mr. incredible') AS "Proper Name"
FROM   dual;
It returned:
Proper Name
--------------
Mr. incredible
Then, I said: “That’s not what we want at all. Should we rewrite our function or simply use the INITCAP built-in function?” Then, I wrote:
SELECT INITCAP('mr. incredible') AS "Proper Name"
FROM   dual;
It returned:
Proper Name
--------------
Mr. Incredible
Well, needless to say my acquaintance got it immediately and said “I should have Googled it or read the documentation.” I concurred with his observation.
Just sharing a cute day’s event that made me laugh and cry at the same time because there are too many who say SQL isn’t worth learning.
MongoDB Two Step
Sometimes a bit of humor helps with a new topic. Creating a database in MongoDB is a two-step process, like the Texas Two-Step (a nickname for a country/western dance in common time). The Texas Two-Step is a one-two, one-two shuffle, which is like the two-step process for creating a MongoDB database.
While databases in MongoDB are a multiuser sandbox like a relational database, you can't simply create them and grant them privileges. You must first USE the database and then put a collection (a.k.a. the equivalent of a table in MySQL) in it. This blog post shows you how to create a MongoDB play database with an actors collection. It shows you the commands to create the database with one collection in it and how to verify its existence.
You connect through the mongo shell, like:
mongo
The first step requires you to connect to the play database even though it doesn't exist. You do that by typing:

> use play
You can verify you're in the play database by typing the db command, which the mongo shell treats as an expression. It returns play as the name of the current database.
However, when you call the shell show dbs helper:
> show dbs
It displays:
admin   0.000GB
config  0.000GB
local   0.000GB
You also should note that the play database still doesn't exist when you run the JavaScript equivalent to the shell show dbs helper:
> db.getMongo().getDBs()
It returns the following list of databases, which excludes the not-yet-created play database:
{ "databases" : [ { "name" : "admin", "sizeOnDisk" : 32768, "empty" : false }, { "name" : "config", "sizeOnDisk" : 61440, "empty" : false }, { "name" : "local", "sizeOnDisk" : 81920, "empty" : false } ], "totalSize" : 176128, "ok" : 1 } |
You create an actors collection (a.k.a. the equivalent of a relational table) with the following syntax, where db maps to the play database that you're using. Note that you must insert at least one document to begin a document collection. While you can use the insert method, you should use either the newer insertOne() or insertMany() methods.
> db.actors.insertOne({"actor" : {"first_name" : "Chris", "last_name" : "Pine"}, "age" : 40 })
A new call to the shell show dbs helper:
> show dbs
displays your new play database:
admin   0.000GB
config  0.000GB
local   0.000GB
play    0.000GB
You can add two more documents (a.k.a. rows in a relational database) with the insertMany() method:
> db.actors.insertMany([{"actor" : {"first_name" : "Chris", "last_name" : "Evans"}, "age" : 39 }
                       ,{"actor" : {"first_name" : "Chris", "last_name" : "Pratt"}, "age" : 41 }])
A quick word to the JavaScript novices out there: don't forget the square brackets ([{},{},...]) in the insertMany() method call, or you'll get an error like this:
2021-04-12T16:20:13.237-0600 E QUERY    [js] TypeError: documents.map is not a function :
DBCollection.prototype.insertMany@src/mongo/shell/crud_api.js:295:1
@(shell):1:1
You call the shell show collections helper or the db.getMongo().getCollectionNames() JavaScript function to display the collections in the play database. The show collections helper displays a list of collections, and db.getMongo().getCollectionNames() displays a JavaScript array of collections.
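For example, the helper should list the one collection created above:

> show collections
actors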
If you're like me, MongoDB's bare db prompt is a bit risky because you could do something in the wrong database. So, I put the following function into my .mongorc.js file (a.k.a. the MongoDB resource file):
prompt = function()
{
  var dbName = db;
  return dbName + "> "
}
It ensures you will see the current database name to the left of the prompt (“>”), like:
play>
You can query the documents from the actors collection with the following:
play> db.actors.find().pretty()
It returns:
{ "_id" : ObjectId("6074c692813c5a85db9cc9df"), "actor" : { "first_name" : "Chris", "last_name" : "Pine" }, "age" : 40 } { "_id" : ObjectId("6074c7ea813c5a85db9cc9e0"), "actor" : { "first_name" : "Chris", "last_name" : "Evans" }, "age" : 39 } { "_id" : ObjectId("6074c7ea813c5a85db9cc9e1"), "actor" : { "first_name" : "Chris", "last_name" : "Pratt" }, "age" : 41 } |
As always, I hope this helps those looking for a solution.