Oracle EBS Forms
Somebody wanted to know how to discover the difference between the customized and generic sets of forms in an Oracle EBS solution without manually cataloging them. That’s simple: create a vanilla instance from the installation media, designate the customized instance as production, and then create database links to them as @vanilla and @production respectively.
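For anybody who wants to see the mechanics, a minimal sketch of the two database links follows. The TNS aliases and the apps password are placeholders; substitute your own connect strings and credentials:

-- Placeholder connect strings and credentials; substitute your own.
CREATE DATABASE LINK vanilla
  CONNECT TO apps IDENTIFIED BY apps_password
  USING 'VANILLA';

CREATE DATABASE LINK production
  CONNECT TO apps IDENTIFIED BY apps_password
  USING 'PRODUCTION';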
After doing that, here is a query that returns only the customized forms:
COL application_short_name FORMAT A20 HEADING "Application|Short Name"
COL form_name              FORMAT A20 HEADING "Form Name"
COL basepath               FORMAT A12 HEADING "Basepath|Product|Top"
SET PAGESIZE 9999

SELECT   a.application_short_name
,        f.form_name
,        a.basepath
FROM     fnd_form@production f INNER JOIN fnd_application@production a
ON       f.application_id = a.application_id
WHERE    NOT EXISTS
         (SELECT NULL
          FROM   fnd_form@vanilla vf
          WHERE  f.application_id = vf.application_id
          AND    f.form_id = vf.form_id
          AND    f.form_name = vf.form_name)
ORDER BY form_name;
As always, I hope this helps.
Read CSV with Python
In 2009, I showed an example of how to use the MySQL LOAD DATA INFILE command. Last year, I updated the details to reset the secure_file_priv variable required by the LOAD DATA INFILE command, but you can avoid that approach with a simple Python 3 program like the one in this example. You can also use MySQL Shell’s new parallel table import feature, introduced in 8.0.17, as noted in a comment on this blog post.
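For comparison, the server-side approach that this replaces looks roughly like the following sketch, assuming the avenger table and avenger.csv file built below. The file path is only illustrative and must sit inside the directory named by secure_file_priv:

-- Illustrative path; the file must live under the secure_file_priv directory.
LOAD DATA INFILE '/var/lib/mysql-files/avenger.csv'
INTO TABLE avenger
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
( first_name, last_name, avenger_name );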
The example requires creating an avenger table, creating an avenger.csv file, writing a readWriteFile.py Python script, running the readWriteFile.py script, and running a query that validates the insertion of the avenger.csv file’s data into the avenger table. Here is the complete code in five steps, using the sakila demonstration database:
- Create the avenger table with the create_avenger.sql script:
-- Conditionally drop the avenger table.
DROP TABLE IF EXISTS avenger;

-- Create the avenger table.
CREATE TABLE avenger
( avenger_id    int unsigned PRIMARY KEY AUTO_INCREMENT
, first_name    varchar(20)
, last_name     varchar(20)
, avenger_name  varchar(20))
ENGINE=InnoDB
AUTO_INCREMENT=1001
DEFAULT CHARSET=utf8mb4
COLLATE=utf8mb4_0900_ai_ci;
- Create the avenger.csv file with the following data:
Anthony,Stark,Iron Man
Thor,Odinson,God of Thunder
Steven,Rogers,Captain America
Bruce,Banner,Hulk
Clinton,Barton,Hawkeye
Natasha,Romanoff,Black Widow
Peter,Parker,Spiderman
Steven,Strange,Dr. Strange
Scott,Lange,Ant-man
Hope,van Dyne,Wasp
- Create the readWriteFile.py Python 3 script:
# Import libraries.
import csv
import mysql.connector
from mysql.connector import errorcode
from csv import reader

# Attempt the statement.
# ============================================================
#  Use a try-catch block to manage the connection.
# ============================================================
try:
  # Open connection.
  cnx = mysql.connector.connect( user='student'
                               , password='student'
                               , host='127.0.0.1'
                               , database='sakila')
  # Create cursor.
  cursor = cnx.cursor()

  # Open file in read mode and pass the file object to reader.
  with open('avenger.csv', 'r') as read_obj:
    csv_reader = reader(read_obj)

    # Declare the dynamic statement.
    stmt = ("INSERT INTO avenger "
            "(first_name, last_name, avenger_name) "
            "VALUES "
            "(%s, %s, %s)")

    # Iterate over each row in the csv using the reader object.
    for row in csv_reader:
      cursor.execute(stmt, row)

    # Commit the writes.
    cnx.commit()

    # Close the cursor.
    cursor.close()

# Handle exception and close connection.
except mysql.connector.Error as e:
  if e.errno == errorcode.ER_ACCESS_DENIED_ERROR:
    print("Something is wrong with your user name or password")
  elif e.errno == errorcode.ER_BAD_DB_ERROR:
    print("Database does not exist")
  else:
    print("Error code:", e.errno)        # error number
    print("SQLSTATE value:", e.sqlstate) # SQLSTATE value
    print("Error message:", e.msg)       # error message

# Close the connection when the try block completes.
else:
  cnx.close()
- Run the readWriteFile.py file:
python3 readWriteFile.py
- Query the avenger table:
SELECT * FROM avenger;
It returns:
+------------+------------+-----------+-----------------+
| avenger_id | first_name | last_name | avenger_name    |
+------------+------------+-----------+-----------------+
|       1001 | Anthony    | Stark     | Iron Man        |
|       1002 | Thor       | Odinson   | God of Thunder  |
|       1003 | Steven     | Rogers    | Captain America |
|       1004 | Bruce      | Banner    | Hulk            |
|       1005 | Clinton    | Barton    | Hawkeye         |
|       1006 | Natasha    | Romanoff  | Black Widow     |
|       1007 | Peter      | Parker    | Spiderman       |
|       1008 | Steven     | Strange   | Dr. Strange     |
|       1009 | Scott      | Lange     | Ant-man         |
|       1010 | Hope       | van Dyne  | Wasp            |
+------------+------------+-----------+-----------------+
10 rows in set (0.00 sec)
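As a side note, mysql-connector-python also supports bulk binds. The following is only a sketch of how the row-by-row loop above could collapse into a single executemany() call; it assumes the same avenger.csv file, avenger table, and connection values:

# Minimal sketch: bulk insert the CSV rows with executemany().
import mysql.connector
from csv import reader

# Open connection and cursor with the same credentials as above.
cnx = mysql.connector.connect(user='student', password='student',
                              host='127.0.0.1', database='sakila')
cursor = cnx.cursor()

# Reuse the same parameterized INSERT statement.
stmt = ("INSERT INTO avenger "
        "(first_name, last_name, avenger_name) "
        "VALUES (%s, %s, %s)")

# Read all rows and pass them as a sequence of parameter tuples.
with open('avenger.csv', 'r') as read_obj:
  rows = [tuple(row) for row in reader(read_obj)]

cursor.executemany(stmt, rows)
cnx.commit()
cursor.close()
cnx.close()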
MySQL Query Performance
Working through our chapter on MySQL views, I wrote a query two ways to introduce the idea of SQL tuning. That’s one of the final topics before we introduce JSON types.
I gave the students this query against the Sakila sample database after explaining how to use the EXPLAIN syntax. The query uses only inner joins, which as a rule of thumb are generally faster and more efficient than correlated subqueries.
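As a refresher, you produce the tabular plan by prefixing a SELECT statement with the EXPLAIN keyword, like this trivial sketch against the country table:

-- Any SELECT statement can be prefixed with EXPLAIN; this small
-- example returns a one-row plan for a primary key lookup.
EXPLAIN
SELECT c.country
FROM   country c
WHERE  c.country_id = 87;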
SELECT   ctry.country AS country_name
,        SUM(p.amount) AS tot_payments
FROM     city c INNER JOIN address a
ON       c.city_id = a.city_id INNER JOIN customer cus
ON       a.address_id = cus.address_id INNER JOIN payment p
ON       cus.customer_id = p.customer_id INNER JOIN country ctry
ON       c.country_id = ctry.country_id
GROUP BY ctry.country;
It generated the following tabular explain plan output:
+----+-------------+-------+------------+--------+---------------------------+--------------------+---------+------------------------+------+----------+------------------------------+
| id | select_type | table | partitions | type   | possible_keys             | key                | key_len | ref                    | rows | filtered | Extra                        |
+----+-------------+-------+------------+--------+---------------------------+--------------------+---------+------------------------+------+----------+------------------------------+
|  1 | SIMPLE      | cus   | NULL       | index  | PRIMARY,idx_fk_address_id | idx_fk_address_id  | 2       | NULL                   |  599 |   100.00 | Using index; Using temporary |
|  1 | SIMPLE      | a     | NULL       | eq_ref | PRIMARY,idx_fk_city_id    | PRIMARY            | 2       | sakila.cus.address_id  |    1 |   100.00 | NULL                         |
|  1 | SIMPLE      | c     | NULL       | eq_ref | PRIMARY,idx_fk_country_id | PRIMARY            | 2       | sakila.a.city_id       |    1 |   100.00 | NULL                         |
|  1 | SIMPLE      | ctry  | NULL       | eq_ref | PRIMARY                   | PRIMARY            | 2       | sakila.c.country_id    |    1 |   100.00 | NULL                         |
|  1 | SIMPLE      | p     | NULL       | ref    | idx_fk_customer_id        | idx_fk_customer_id | 2       | sakila.cus.customer_id |   26 |   100.00 | NULL                         |
+----+-------------+-------+------------+--------+---------------------------+--------------------+---------+------------------------+------+----------+------------------------------+
5 rows in set, 1 warning (0.02 sec)
Then, I used MySQL Workbench to generate the following visual explain plan:
Then, I compared it against a refactored version of the query that uses a correlated subquery in the SELECT-list. The example comes from Appendix B in Learning SQL, 3rd Edition by Alan Beaulieu.
SELECT   ctry.country
,        (SELECT SUM(p.amount)
          FROM   city c INNER JOIN address a
          ON     c.city_id = a.city_id INNER JOIN customer cus
          ON     a.address_id = cus.address_id INNER JOIN payment p
          ON     cus.customer_id = p.customer_id
          WHERE  c.country_id = ctry.country_id) AS tot_payments
FROM     country ctry;
It generated the following tabular explain plan output:
+----+--------------------+-------+------------+------+---------------------------+--------------------+---------+------------------------+------+----------+-------------+
| id | select_type        | table | partitions | type | possible_keys             | key                | key_len | ref                    | rows | filtered | Extra       |
+----+--------------------+-------+------------+------+---------------------------+--------------------+---------+------------------------+------+----------+-------------+
|  1 | PRIMARY            | ctry  | NULL       | ALL  | NULL                      | NULL               | NULL    | NULL                   |  109 |   100.00 | NULL        |
|  2 | DEPENDENT SUBQUERY | c     | NULL       | ref  | PRIMARY,idx_fk_country_id | idx_fk_country_id  | 2       | sakila.ctry.country_id |    5 |   100.00 | Using index |
|  2 | DEPENDENT SUBQUERY | a     | NULL       | ref  | PRIMARY,idx_fk_city_id    | idx_fk_city_id     | 2       | sakila.c.city_id       |    1 |   100.00 | Using index |
|  2 | DEPENDENT SUBQUERY | cus   | NULL       | ref  | PRIMARY,idx_fk_address_id | idx_fk_address_id  | 2       | sakila.a.address_id    |    1 |   100.00 | Using index |
|  2 | DEPENDENT SUBQUERY | p     | NULL       | ref  | idx_fk_customer_id        | idx_fk_customer_id | 2       | sakila.cus.customer_id |   26 |   100.00 | NULL        |
+----+--------------------+-------+------------+------+---------------------------+--------------------+---------+------------------------+------+----------+-------------+
5 rows in set, 2 warnings (0.00 sec)
and, MySQL Workbench generated the following visual explain plan:
The tabular explain plan identifies the better performing query to an experienced eye, but the visual explain plan works better for those new to SQL tuning.
The second query performs better because it reads the least data by leveraging the indexes most effectively. As always, I hope these examples help those looking to learn more about MySQL.
MySQL DropIndexIfExists
This is in reply to a question about how to conditionally drop an index on a table in MySQL. It appears the syntax doesn’t exist; then again, maybe it does and I missed it, in which case I’m sure somebody will let me know. Either way, I simply use a dropIndexIfExists stored procedure for this type of database maintenance.
Below is my dropIndexIfExists stored procedure:
-- Conditionally drop the procedure.
DROP PROCEDURE IF EXISTS dropIndexIfExists;

-- Change the default semicolon delimiter to write a PSM
-- (Persistent Stored Module) or stored procedure.
DELIMITER $$

-- Create the procedure.
CREATE PROCEDURE dropIndexIfExists
( pv_table_name  VARCHAR(64)
, pv_index_name  VARCHAR(64))
BEGIN

  /* Declare a local variable for the SQL statement. */
  DECLARE stmt VARCHAR(1024);

  /* Set a session variable with two parameter markers. */
  SET @SQL := CONCAT('ALTER TABLE ',pv_table_name,' DROP INDEX ',pv_index_name);

  /* Check if the constraint exists. */
  IF EXISTS (SELECT NULL
             FROM   information_schema.statistics s
             WHERE  s.index_schema = database()
             AND    s.table_name = pv_table_name
             AND    s.index_name = pv_index_name)
  THEN

    /* Dynamically allocate and run the statement. */
    PREPARE stmt FROM @SQL;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;

  END IF;

END;
$$

-- Reset the default semicolon delimiter.
DELIMITER ;
You call the procedure like this:
CALL dropIndexIfExists('payment','idx_payment01');
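If you want to confirm the index is really gone, a quick check against the data dictionary works. This is only a sketch that reuses the same payment table and index name:

-- Returns no rows when the index has been dropped.
SELECT s.table_name
,      s.index_name
FROM   information_schema.statistics s
WHERE  s.index_schema = database()
AND    s.table_name = 'payment'
AND    s.index_name = 'idx_payment01';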
As always, I hope this helps those looking for a solution.
MySQL 8+ Catalog
I was working through some tutorials for my students and noticed that there was a change in how a WHERE clause must be written against the information_schema.table_constraints table. The change might have been made in an earlier release; I actually hadn’t checked it since 2014, when I wrote this early post on capturing MySQL Foreign Keys.
Previously, you could use the following case-insensitive WHERE clause:
WHERE tc.constraint_type = 'foreign key'
Now, you must use a case-sensitive WHERE clause:
WHERE tc.constraint_type = 'FOREIGN KEY'
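For context, here is a sketch of the kind of foreign key query the clause belongs to; the column list is trimmed to keep it short:

-- List foreign key constraints in the current schema.
SELECT   tc.table_name
,        tc.constraint_name
FROM     information_schema.table_constraints tc
WHERE    tc.constraint_schema = database()
AND      tc.constraint_type = 'FOREIGN KEY'
ORDER BY tc.table_name;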
I’d love to know why, but I can’t seem to find a note on the change. As always, I hope this helps those looking for an answer.
MySQL WITH Clause
When I went over my example of using the WITH clause to embed a series of literal values in a data set, some students got it right away and some didn’t. The original post showed how to solve a problem where one value in the data set is returned in the SELECT-list and two values are used as the minimum and maximum values with a BETWEEN operator. It used three approaches with literal values:
- A list of Python dictionaries that require you to filter the return set from the database through a range loop and if statement that mimics a SQL BETWEEN operator.
- A WITH clause that accepts the literals as bind variables to filter the query results inside the query.
- A table design that holds the literal values that an analyst might use for reporting.
It was the last example that required elaboration. I explained that you might build a web form backed by a table that lets a data analyst enter parameter sets. That way, the analyst could submit a flag value to choose one or another set of values. I threw out the idea on the whiteboard of introducing a report column to the prior post’s level table, and a student went off to try it.
Two problems occurred. The first was in the design of the new table and the second was how to properly use the MySQL Python driver.
Below is a formal table design that supports this extension of the first blog post as a list of parameter values. It uses a report column as a super key to return a set of possible values. One value shows in the SELECT-list and the other two values deploy as the minimum and maximum values in a BETWEEN operator. It is seeded with two parameter sets: the Summary report level has three rows and the Detail report level has five.
-- Conditionally drop the levels table.
DROP TABLE IF EXISTS levels;

-- Create the levels list.
CREATE TABLE levels
( level      VARCHAR(16)
, report     ENUM('Summary','Detail')
, min_roles  INT
, max_roles  INT );

-- Insert values into the list table.
INSERT INTO levels
( level, report, min_roles, max_roles )
VALUES
 ('Hollywood Star','Summary', 30, 99999)
,('Prolific Actor','Summary', 20, 29)
,('Newcommer','Summary', 1, 19)
,('Hollywood Star','Detail', 30, 99999)
,('Prolific Actor','Detail', 20, 29)
,('Regular Actor','Detail', 10, 19)
,('Actor','Detail', 5, 9)
,('Newcommer','Detail', 1, 4);
The foregoing table design uses an ENUM type because reporting parameter sets are typically fewer than 64 possibilities. If you use the table to support multiple reports, you could add a second super key column, like a report_type column, which would let the one table support a series of different report parameter lists.
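A sketch of that extension might look like the following. The report_type column, the report_levels table name, and their values are hypothetical, added only to illustrate the idea; the rest of this post keeps the two-column report design above:

-- Hypothetical extension: a report_type super key column lets one
-- table hold parameter lists for several different reports.
CREATE TABLE report_levels
( level        VARCHAR(16)
, report_type  VARCHAR(30)
, report       ENUM('Summary','Detail')
, min_roles    INT
, max_roles    INT );

INSERT INTO report_levels
( level, report_type, report, min_roles, max_roles )
VALUES
 ('Hollywood Star','Actor Ranking','Summary', 30, 99999)
,('Prolific Actor','Actor Ranking','Summary', 20, 29)
,('Newcommer','Actor Ranking','Summary', 1, 19);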
While the student used a %s placeholder inside the query, they hit a runtime error when trying to pass the single bind variable into the query. The student misunderstood how to convert the report input parameter into a tuple, which shows up in the call to the Python MySQL driver, like this:
59   cursor.execute(query, (report))
The student’s code generated the following error stack:
Traceback (most recent call last):
  File "./python-with-clause.py", line 59, in <module>
    cursor.execute(query,(report))
  File "/usr/lib/python3.7/site-packages/mysql/connector/cursor_cext.py", line 248, in execute
    prepared = self._cnx.prepare_for_mysql(params)
  File "/usr/lib/python3.7/site-packages/mysql/connector/connection_cext.py", line 632, in prepare_for_mysql
    raise ValueError("Could not process parameters")
ValueError: Could not process parameters
The ValueError should indicate to the developer that they’ve used a wrong data type in the call to the method:
cursor.execute(<class 'str'>,<class 'tuple'>) |
This was clearly a misunderstanding of how to turn a single string into a one-element tuple. The behavior is easiest to illustrate inside an interactive Python shell, like this:
>>> # Define a variable.
>>> x = 'Detail'
>>> # An incorrect attempt to make a string a tuple.
>>> y = (x)
>>> # Check type of y after assignment.
>>> print(type(y))
<class 'str'>
>>> # tuple() yields a tuple type, but it splits the string into
>>> # single-character elements, so it isn't what a parameter list needs.
>>> y = tuple(x)
>>> # Check type of y after assignment.
>>> print(type(y))
<class 'tuple'>
>>> # The correct approach adds a trailing comma inside the parentheses.
>>> z = (x,)
>>> # Check type of z after assignment.
>>> print(type(z))
<class 'tuple'>
So, the fix to line 59 was quite simple:
59   cursor.execute(query, (report,))
The student started with a copy of a Python program that I provided. I fixed the argument handling and added some comments. The line 59 reference above maps to the cursor.execute(query, (report,)) call near the end of this program.
# Import the library.
import sys
import mysql.connector
from mysql.connector import errorcode

# Capture argument list.
fullCmdArguments = sys.argv

# Assign argument list to variable.
argumentList = fullCmdArguments[1:]

# Define a standard report variable.
report = "Summary"

# Check and process argument list.
# ============================================================
#  If there are less than two arguments provide default values.
#  Else enumerate and convert strings to dates.
# ============================================================
if (len(argumentList) == 1):
  # Set a default start date.
  if (isinstance(report,str)):
    report = argumentList[0]

# Attempt the query.
# ============================================================
#  Use a try-catch block to manage the connection.
# ============================================================
try:
  # Open connection.
  cnx = mysql.connector.connect(user='student', password='student',
                                host='127.0.0.1', database='sakila')
  # Create cursor.
  cursor = cnx.cursor()

  # Set the query statement.
  query = ("WITH actors AS "
           "(SELECT a.actor_id "
           " , a.first_name "
           " , a.last_name "
           " , COUNT(fa.actor_id) AS num_roles "
           " FROM actor a INNER JOIN film_actor fa "
           " ON a.actor_id = fa.actor_id "
           " GROUP BY a.actor_id "
           " , a.first_name "
           " , a.last_name ) "
           " SELECT a.first_name "
           " , a.last_name "
           " , l.level "
           " , a.num_roles "
           " FROM actors a CROSS JOIN levels l "
           " WHERE a.num_roles BETWEEN l.min_roles AND l.max_roles "
           " AND l.report = %s "
           " ORDER BY a.last_name "
           " , a.first_name")

  # Execute cursor.
  cursor.execute(query,(report,))

  # Display the rows returned by the query.
  for (first_name, last_name, level, num_roles) in cursor:
    print('{0} {1} is a {2} with {3} films.'.format(
           first_name.title()
          , last_name.title()
          , level.title()
          , num_roles))

  # Close cursor.
  cursor.close()

# ------------------------------------------------------------
# Handle exception and close connection.
except mysql.connector.Error as e:
  if e.errno == errorcode.ER_ACCESS_DENIED_ERROR:
    print("Something is wrong with your user name or password")
  elif e.errno == errorcode.ER_BAD_DB_ERROR:
    print("Database does not exist")
  else:
    print("Error code:", e.errno)        # error number
    print("SQLSTATE value:", e.sqlstate) # SQLSTATE value
    print("Error message:", e.msg)       # error message

# Close the connection when the try block completes.
else:
  cnx.close()
A Linux shell program like the following (provided the shell script and Python program share the same base name) can run the Python program with or without a parameter. It works without a parameter because the Python program sets a default value for the report variable.
# Switch the file extension and run the python program.
file=${0/%sh/py}
python3 ${file} "${@}"
You call the shell script like this:
./python-with-clause.sh Detail
As always, I hope this helps those looking for a solution.
Linux sqlplus wrapper
Here’s a quick way to ensure you can use the up-arrow and navigation keys in the sqlplus command-line interface. Just add the following function to your .bashrc file.
sqlplus ()
{
  # Find the rlwrap utility when it's installed.
  path=`which rlwrap 2>/dev/null`
  file=''

  # Strip the directory portion of the path.
  if [ -n "${path}" ]; then
    file=${path##/*/}
  fi

  # Wrap sqlplus with rlwrap when it's available.
  if [ -n "${file}" ] && [[ ${file} = "rlwrap" ]]; then
    rlwrap sqlplus "${@}"
  else
    echo "Command-line history unavailable: Install the rlwrap package."
    $ORACLE_HOME/bin/sqlplus "${@}"
  fi
}
As always, I hope this helps those looking for solutions.
Waking up the Network
Interesting problems seem to plague me from time to time. The current problem isn’t solved yet, but I’m working on it. After a sleep cycle, IPv6 networking isn’t starting, and the browser only offers its standard advice:
Try:
- Checking the network cables, modem, and router
- Reconnecting to Wi-Fi
ERR_INTERNET_DISCONNECTED
In the broken Fedora 30 VM, I checked the status with the nmcli tool:
sudo nmcli general status
It returned:
STATE   CONNECTIVITY  WIFI-HW  WIFI     WWAN-HW  WWAN
asleep  none          enabled  enabled  enabled  enabled
The STATE column should return connected and the CONNECTIVITY column should return full. Unfortunately, that’s not the case.
There was little surprise when the next check:
sudo nmcli device
returned the following:
DEVICE      TYPE      STATE      CONNECTION
virbr0      bridge    unmanaged  --
ens33       ethernet  unmanaged  --
lo          loopback  unmanaged  --
virbr0-nic  tun       unmanaged  --
In a working instance, it should return:
DEVICE      TYPE      STATE      CONNECTION
ens33       ethernet  connected  ens33
virbr0      bridge    connected  virbr0
lo          loopback  unmanaged  --
virbr0-nic  tun       unmanaged  --
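While I haven’t confirmed this as the fix for my virtual machine, the asleep state usually means NetworkManager still believes the system is suspended, so one hedged first step is simply switching networking back on and rechecking the status:

# Bring NetworkManager out of the asleep state, then recheck it.
sudo nmcli networking on
sudo nmcli general status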
I’m currently troubleshooting what failed by leveraging an article on How to Configure Network Connection Using ‘nmcli’ Tool and the Gnome nmcli documentation. Naturally, when I get it fixed, I’ll finish this article.
MySQL with CTEs
To show my class the usefulness of Common Table Expressions (CTEs), I created three examples with Python. They extend an exercise in Chapter 9 on subqueries from Learning SQL by Alan Beaulieu. All of the examples work with the sakila sample database.
These bullets describe the examples:
- Uses local variables with a range for loop and an if statement that evaluate them to add an element to the derived table (or query result set) returned from MySQL.
- Uses a CTE with substitution variables from the Python program, which eliminates the need to evaluate and add an element to the query result set because the query does that.
- Uses a table to hold the variables necessary to evaluate and add the element to the query result set.
This is the first Python program:
# Import the library.
import sys
import mysql.connector
from mysql.connector import errorcode

# Declare a list of dictionaries.
dict = [{'level':'Hollywood Star','min_roles':30,'max_roles':99999}
       ,{'level':'Prolific Actor','min_roles':20,'max_roles':29}
       ,{'level':'Newcomer','min_roles':1,'max_roles':19}]

# Attempt the query.
# ============================================================
#  Use a try-catch block to manage the connection.
# ============================================================
try:
  # Open connection.
  cnx = mysql.connector.connect(user='student', password='student',
                                host='127.0.0.1', database='sakila')
  # Create cursor.
  cursor = cnx.cursor()

  # Set the query statement.
  query = ("SELECT a.actor_id "
           ", a.first_name "
           ", a.last_name "
           ", COUNT(fa.actor_id) AS films "
           "FROM actor a INNER JOIN film_actor fa "
           "ON a.actor_id = fa.actor_id "
           "GROUP BY a.actor_id "
           ", a.first_name "
           ", a.last_name "
           "ORDER BY a.last_name "
           ", a.first_name")

  # Execute cursor.
  cursor.execute(query)

  # Display the rows returned by the query.
  for (actor_id, first_name, last_name, films) in cursor:
    for i in range(len(dict)):
      if films >= dict[i]["min_roles"] and films <= dict[i]["max_roles"]:
        print('{0} {1} is a {2} with {3} films.'.format(
               first_name.title()
              , last_name.title()
              , dict[i]["level"]
              , films))

  # Close cursor.
  cursor.close()

# ------------------------------------------------------------
# Handle exception and close connection.
except mysql.connector.Error as e:
  if e.errno == errorcode.ER_ACCESS_DENIED_ERROR:
    print("Something is wrong with your user name or password")
  elif e.errno == errorcode.ER_BAD_DB_ERROR:
    print("Database does not exist")
  else:
    print("Error code:", e.errno)        # error number
    print("SQLSTATE value:", e.sqlstate) # SQLSTATE value
    print("Error message:", e.msg)       # error message

# Close the connection when the try block completes.
else:
  cnx.close()
The Python dictionary near the top of the program, and the range for loop and if statement nested inside the cursor loop, can be eliminated by putting the literal values in a Common Table Expression (CTE). That’s because a CROSS JOIN matches all rows in the CTE against the base table before filtering them.
The match of all rows in the CTE against the base table effectively replaces the range for loop in the original code. The WHERE clause replaces the if statement in the original code.
Another optimization for readability of the final query puts the grouped query into a CTE as well. That way the final query simply demonstrates the filtering process.
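Before looking at the parameterized Python version, it may help to see the idea as plain SQL with the literal values hard-coded in a levels CTE; this sketch runs as-is against the sakila database:

-- The levels CTE replaces the Python dictionary; the CROSS JOIN and
-- WHERE clause replace the range for loop and if statement.
WITH actors AS
 (SELECT   a.actor_id
  ,        a.first_name
  ,        a.last_name
  ,        COUNT(fa.actor_id) AS num_roles
  FROM     actor a INNER JOIN film_actor fa
  ON       a.actor_id = fa.actor_id
  GROUP BY a.actor_id
  ,        a.first_name
  ,        a.last_name)
, levels AS
 (SELECT 'Hollywood Star' AS level, 30 AS min_roles, 99999 AS max_roles
  UNION ALL
  SELECT 'Prolific Actor', 20, 29
  UNION ALL
  SELECT 'Newcomer', 1, 19)
SELECT   a.first_name
,        a.last_name
,        l.level
,        a.num_roles
FROM     actors a CROSS JOIN levels l
WHERE    a.num_roles BETWEEN l.min_roles AND l.max_roles
ORDER BY a.last_name
,        a.first_name;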
The second Python program converts the Python dictionary to a list of lists and concatenates the lists into a single param sequence for the bind variables:
# Import the library.
import sys
import mysql.connector
from mysql.connector import errorcode

# Declare a list of lists.
list = [['Hollywood Star',30,99999]
       ,['Prolific Actor',20,29]
       ,['Newcomer',1,19]]

# Declare a tuple of the set of lists.
param = (list[0] + list[1] + list[2])

# Attempt the query.
# ============================================================
#  Use a try-catch block to manage the connection.
# ============================================================
try:
  # Open connection.
  cnx = mysql.connector.connect(user='student', password='student',
                                host='127.0.0.1', database='sakila')
  # Create cursor.
  cursor = cnx.cursor()

  # Set the query statement.
  query = ("WITH actors AS "
           "(SELECT a.actor_id "
           " , a.first_name "
           " , a.last_name "
           " , COUNT(fa.actor_id) AS num_roles "
           " FROM actor a INNER JOIN film_actor fa "
           " ON a.actor_id = fa.actor_id "
           " GROUP BY a.actor_id "
           " , a.first_name "
           " , a.last_name ) "
           " , levels AS "
           "(SELECT %s AS level "
           " , %s AS min_roles "
           " , %s AS max_roles "
           " UNION ALL "
           " SELECT %s AS level "
           " , %s AS min_roles "
           " , %s AS max_roles "
           " UNION ALL "
           " SELECT %s AS level "
           " , %s AS min_roles "
           " , %s AS max_roles) "
           " SELECT a.first_name "
           " , a.last_name "
           " , l.level "
           " , a.num_roles "
           " FROM actors a CROSS JOIN levels l "
           " WHERE a.num_roles BETWEEN l.min_roles AND l.max_roles "
           " ORDER BY a.last_name "
           " , a.first_name")

  # Execute cursor.
  cursor.execute(query, param)

  # Display the rows returned by the query.
  for (first_name, last_name, level, num_roles) in cursor:
    print('{0} {1} is a {2} with {3} films.'.format(
           first_name.title()
          , last_name.title()
          , level.title()
          , num_roles))

  # Close cursor.
  cursor.close()

# ------------------------------------------------------------
# Handle exception and close connection.
except mysql.connector.Error as e:
  if e.errno == errorcode.ER_ACCESS_DENIED_ERROR:
    print("Something is wrong with your user name or password")
  elif e.errno == errorcode.ER_BAD_DB_ERROR:
    print("Database does not exist")
  else:
    print("Error code:", e.errno)        # error number
    print("SQLSTATE value:", e.sqlstate) # SQLSTATE value
    print("Error message:", e.msg)       # error message

# Close the connection when the try block completes.
else:
  cnx.close()
The third Python program requires some SQL setup: you should run this script inside the sakila database first. It basically takes the variables out of the code and stores them in a table, which is more likely what you would do to keep ever-changing range values like these maintainable in a real application. It leaves the aggregation process inside a CTE and simplifies the final query.
-- Conditionally drop the levels table.
DROP TABLE IF EXISTS levels;

-- Create the levels list.
CREATE TABLE levels
( level      VARCHAR(16)
, min_roles  INT
, max_roles  INT );

-- Insert values into the list table.
INSERT INTO levels
( level, min_roles, max_roles )
VALUES
 ('Hollywood Star', 30, 99999)
,('Prolific Actor', 20, 29)
,('Newcommer', 1, 19);
After seeding the data in the levels table, you can test the query natively in MySQL, like this:
 1  -- Query the data.
 2  WITH actors AS
 3   (SELECT   a.actor_id
 4    ,        a.first_name
 5    ,        a.last_name
 6    ,        COUNT(*) AS num_roles
 7    FROM     actor a INNER JOIN film_actor fa
 8    ON       a.actor_id = fa.actor_id
 9    GROUP BY actor_id)
10  SELECT   a.first_name
11  ,        a.last_name
12  ,        l.level
13  ,        a.num_roles
14  FROM     actors a CROSS JOIN levels l
15  WHERE    a.num_roles BETWEEN l.min_roles AND l.max_roles
16  ORDER BY a.last_name
17  ,        a.first_name;
There’s also a syntax that makes this type of query appear to be an INNER JOIN when it’s actually a filtered CROSS JOIN. If you adopt that syntax, you would rewrite lines 14 and 15:
14  FROM     actors a INNER JOIN levels l
15  WHERE    a.num_roles BETWEEN l.min_roles AND l.max_roles;
Then, you can run this version without the second CTE element:
# Import the library.
import sys
import mysql.connector
from mysql.connector import errorcode

# Attempt the query.
# ============================================================
#  Use a try-catch block to manage the connection.
# ============================================================
try:
  # Open connection.
  cnx = mysql.connector.connect(user='student', password='student',
                                host='127.0.0.1', database='sakila')
  # Create cursor.
  cursor = cnx.cursor()

  # Set the query statement.
  query = ("WITH actors AS "
           "(SELECT a.actor_id "
           " , a.first_name "
           " , a.last_name "
           " , COUNT(fa.actor_id) AS num_roles "
           " FROM actor a INNER JOIN film_actor fa "
           " ON a.actor_id = fa.actor_id "
           " GROUP BY a.actor_id "
           " , a.first_name "
           " , a.last_name ) "
           " SELECT a.first_name "
           " , a.last_name "
           " , l.level "
           " , a.num_roles "
           " FROM actors a CROSS JOIN levels l "
           " WHERE a.num_roles BETWEEN l.min_roles AND l.max_roles "
           " ORDER BY a.last_name "
           " , a.first_name")

  # Execute cursor.
  cursor.execute(query)

  # Display the rows returned by the query.
  for (first_name, last_name, level, num_roles) in cursor:
    print('{0} {1} is a {2} with {3} films.'.format(
           first_name.title()
          , last_name.title()
          , level.title()
          , num_roles))

  # Close cursor.
  cursor.close()

# ------------------------------------------------------------
# Handle exception and close connection.
except mysql.connector.Error as e:
  if e.errno == errorcode.ER_ACCESS_DENIED_ERROR:
    print("Something is wrong with your user name or password")
  elif e.errno == errorcode.ER_BAD_DB_ERROR:
    print("Database does not exist")
  else:
    print("Error code:", e.errno)        # error number
    print("SQLSTATE value:", e.sqlstate) # SQLSTATE value
    print("Error message:", e.msg)       # error message

# Close the connection when the try block completes.
else:
  cnx.close()
As always, I hope this helps those trying to understand how CTEs can solve problems that would otherwise be coded in external imperative languages like Python.
MySQL and JavaScript
Sometimes students truly amaze me with their curiosity. The question was simple from their perspective while we were discussing MySQL’s built-in string functions. How would you do something like this JavaScript logic without using literals or session variables?
// Declare a string and substring.
var myStr = 'Get me from the string.'
var mySubstr = 'me'

// Assign the substring to a variable by rescuing it from the larger string.
var rescued = myStr.substring(myStr.indexOf(mySubstr), myStr.indexOf(mySubstr) + mySubstr.length)

// Print the result.
print(rescued)
I tested the JavaScript with the MongoDB shell, like this:
mongo --nodb --norc < parsing.js
returning:
MongoDB shell version v4.0.20
me
bye
They thought the question would show SQL’s limits as a problem-solving and programming language because they didn’t see how MySQL could assign a variable for evaluation in the built-in functions.
They were surprised when I showed them that they could. Since they disallowed session variables, I built a sample table and inserted the string value into a text column before writing a query with a Common Table Expression (CTE).
The MySQL steps are:
-- Stable testing scenario with table values requires a test table.
DROP TABLE IF EXISTS sample;
CREATE TABLE sample
( text  VARCHAR(100) );

-- Insert the literal string into the testing table.
INSERT INTO sample
( text )
VALUES
('Get me from the string.');

-- Test using a WITH clause to place a variable in context for use
-- in the query, relying on the fact that a Cartesian set of one
-- column and row becomes a new column in all rows of the other
-- table's set.
WITH struct AS
(SELECT 'me' AS result)
SELECT SUBSTR(text, INSTR(text, struct.result), LENGTH(struct.result)) AS rescued
FROM   sample CROSS JOIN struct;
It returns the following:
+---------+
| rescued |
+---------+
| me      |
+---------+
1 row in set (0.00 sec)
“Wow, SQL works like a programming language,” was the class’s response. It’s like anything else in technology: new stuff isn’t as cool as old stuff until you learn how to use it.