Archive for the ‘MySQL Developer’ tag
MySQL on AlmaLinux
After installing AlmaLinux in a VMware VM on my MacBook Pro (Intel Chip), and updating the packages with the following command:
sudo dnf upgrade --refresh -y
MySQL was first on my installation and configuration list. Here are the commands to install and configure it on AlmaLinux.
Install the MySQL server package and its dependencies:
sudo dnf install mysql mysql-server -y
Install the MySQL service utilities with the initscripts package, using the following command:
sudo yum install -y initscripts
Start the MySQL daemon with the following command:
sudo service mysqld start
Connect and verify the root user can connect to the database. At this point, you’ve not set the root user’s password and should use the following syntax:
mysql -uroot
It should connect and display:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 10
Server version: 8.0.30 Source distribution

Copyright (c) 2000, 2022, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
Exit MySQL and run the following command:
mysql_secure_installation
It’ll run you through the following prompts, which you may change to suit your installation. My choices suit a trivial student database isolated inside a VM.
Securing the MySQL server deployment.

Connecting to MySQL using a blank password.

VALIDATE PASSWORD COMPONENT can be used to test passwords
and improve security. It checks the strength of password
and allows the users to set only those passwords which are
secure enough. Would you like to setup VALIDATE PASSWORD component?

Press y|Y for Yes, any other key for No: n
Please set the password for root here.

New password:

Re-enter new password:

By default, a MySQL installation has an anonymous user,
allowing anyone to log into MySQL without having to have
a user account created for them. This is intended only for
testing, and to make the installation go a bit smoother.
You should remove them before moving into a production
environment.

Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
Success.

Normally, root should only be allowed to connect from
'localhost'. This ensures that someone cannot guess at
the root password from the network.

Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y
Success.

By default, MySQL comes with a database named 'test' that
anyone can access. This is also intended only for testing,
and should be removed before moving into a production
environment.

Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y
 - Dropping test database...
Success.

 - Removing privileges on test database...
Success.

Reloading the privilege tables will ensure that all changes
made so far will take effect immediately.

Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y
Success.

All done!
The next step requires setting up the sample sakila database and the studentdb database. The syntax has changed from prior releases. Here are the steps:
- Grant the root user the privilege to grant to others, which root does not have by default. You use the following syntax as the MySQL root user:
mysql> GRANT ALL ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
- Download the sakila database, which is available from the MySQL site’s example databases page. Click on the sakila database’s TGZ download.
When you extract the sakila TGZ file, it creates a sakila-db folder in the /home/student/Downloads directory. Copy the sakila-db folder into the /home/student/Data/sakila directory. Then, change to the /home/student/Data/sakila/sakila-db directory, connect to mysql as the root user, and run the following commands:
mysql> SOURCE /home/student/Data/sakila/sakila-db/sakila-schema.sql
mysql> SOURCE /home/student/Data/sakila/sakila-db/sakila-data.sql
- Create the studentdb database with the following command as the MySQL root user:
mysql> CREATE DATABASE studentdb;
- Create the student user with a clear-text password and grant the user full privileges on the sakila and studentdb databases:
mysql> CREATE USER 'student'@'localhost' IDENTIFIED WITH mysql_native_password BY 'student';
mysql> GRANT ALL ON studentdb.* TO 'student'@'localhost';
mysql> GRANT ALL ON sakila.* TO 'student'@'localhost';
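You can optionally confirm the grants took effect; a quick sanity check, nothing more:

mysql> SHOW GRANTS FOR 'student'@'localhost';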
You can now connect to a sandboxed sakila database with the student user’s credentials, like:
mysql -ustudent -p -Dsakila
or, you can now connect to a sandboxed studentdb database with the student user’s credentials, like:
mysql -ustudent -p -Dstudentdb
As always, I hope the code and step-by-step instructions help others get things done more quickly.
MySQL Workbench Keys
As I teach students how to create tables in MySQL Workbench, it’s always important to review the meaning of the checkbox keys. Then, I need to remind them that every table requires a natural key from our prior discussion on normalization. I explain that a natural key is a compound candidate key (made up of two or more column values), and that it naturally defines uniqueness for each row in a table.
Then, we discuss surrogate keys, which are typically ID column keys. I explain that surrogate keys are driven by sequences in the database. While a number of databases disclose the names of sequences, MySQL treats the sequence as an attribute of the table. In Object-Oriented Analysis and Design (OOAD), that makes the sequence a member of the table by composition rather than aggregation. Surrogate keys are also unique in the table but should never be used to determine uniqueness like the natural key. Surrogate keys are also candidate keys, in the way a VIN uniquely identifies a vehicle.
In a well-designed table you always have two candidate keys: one describes the unique row and the other assigns a number to it. While you can perform joins by using either candidate key, you should always use the surrogate key for join statements. This means you elect, or choose, the surrogate candidate key as the primary key. Then, you build a unique index for the natural key, which lets you query any unique row with human-decipherable words.
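As a hedged illustration of those two candidate keys (the vehicle table and its columns are my own example, not from MySQL Workbench):

-- Illustrative only: a surrogate key elected as the primary key, plus a
-- uniquely indexed natural key for human-decipherable queries.
CREATE TABLE vehicle
( vehicle_id  INT UNSIGNED PRIMARY KEY AUTO_INCREMENT  -- surrogate key
, vin         VARCHAR(17)  NOT NULL                    -- natural key
, UNIQUE INDEX vehicle_uq (vin));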
The column attribute table for MySQL Workbench is:
Key | Meaning |
---|---|
PK | Designates a primary key column. |
NN | Designates a not-null column constraint. |
UQ | Designates a column contains a unique value for every row. |
BIN | Designates a VARCHAR data type column so that its values are stored in a case-sensitive fashion. You can’t apply this constraint to other data types. |
UN | Designates a column contains an unsigned numeric data type. The possible values run from 0 to the maximum number of the data type, like integer, float, or double. The value 0 isn’t used when you also select the PK and AI check boxes, because the column then automatically increments from 1 toward the maximum value of the data type. |
ZF | Designates a zero-fill column, which pads the front of any number data type with zeros until all space is consumed, acting like a left-pad function with zeros. |
AI | Designates AUTO_INCREMENT and should only be checked for a surrogate primary key value. |
All surrogate key columns should check the PK, NN, UN, and AI checkboxes. The default behavior checks only the PK and NN checkboxes and leaves the UN and AI boxes unchecked, so click UN and AI yourself for all surrogate key columns. The AI checkbox enables AUTO_INCREMENT behavior. The UN checkbox ensures you have the maximum number of integers before you would need to migrate the table to a double-precision number. More or less, this is what I wrote in MySQL Workbench Data Modeling & Development as the primary product guide in 2013, and what you find in the MySQL Workbench Manual 8.1.10.2 Columns Tab section.
Active tables grow quickly, and using a signed int means you run out of rows more quickly. This is an important design consideration because using a signed int adds a maintenance task later. The maintenance task will require changing the data type of all dependent foreign key columns before changing the primary key column’s data type. Assuming your design uses referential integrity constraints, implemented as foreign keys, you will need to take the steps below (a sketch in SQL follows the list):
- Remove any foreign key constraints before changing the referenced primary key and dependent foreign key column’s data types.
- Change the primary and foreign key column’s data types.
- Add back foreign key constraints after changing the referenced primary key and dependent foreign key column’s data types.
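Here’s a minimal sketch of those three steps, assuming a hypothetical parent and child table pair and a constraint name of my own invention:

-- Step 1: drop the dependent foreign key constraint.
ALTER TABLE child DROP FOREIGN KEY child_parent_fk;

-- Step 2: change both key columns' data types, e.g. INT to INT UNSIGNED.
ALTER TABLE parent MODIFY parent_id INT UNSIGNED NOT NULL AUTO_INCREMENT;
ALTER TABLE child  MODIFY parent_id INT UNSIGNED NOT NULL;

-- Step 3: add the foreign key constraint back.
ALTER TABLE child ADD CONSTRAINT child_parent_fk
  FOREIGN KEY (parent_id) REFERENCES parent (parent_id);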
While fixing a less optimal design is a relatively simple scripting exercise for most data engineers, you can avoid this maintenance task altogether by implementing all surrogate primary key columns and foreign key columns with an unsigned int as their initial data type.
The following small ERD displays a multi-language lookup table, which is preferable to a monolingual enum data type:
A design uses a lookup table when there are known lists of selections to make. There are known lists that occur in most if not all business applications. Maintaining that list of values is an application setup task and requires the development team to build an entry and update form to input and maintain the lists.
Some MySQL examples demonstrate these types of lists by using the MySQL enum data type. However, the enum type doesn’t support multilingual implementations, isn’t readily portable to other relational databases, and has a number of limitations.
A lookup table is a better solution than an enum data type. It typically follows this pattern (a sketch of the table follows the list):
- Identify the target table and column where a list is useful. Use the table_name and column_name columns as a super key to identify the location where the list belongs.
- Identify a unique type identifier for the list. Store the unique type value in the type column of the lookup table.
- Use a lang column to enable multilingual lists.
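A minimal sketch of that pattern; the column sizes and the index name are my assumptions, not fixed requirements:

-- A sketch of the lookup table described above.
CREATE TABLE lookup
( lookup_id    INT UNSIGNED PRIMARY KEY AUTO_INCREMENT
, table_name   VARCHAR(64)  NOT NULL
, column_name  VARCHAR(64)  NOT NULL
, type         VARCHAR(30)  NOT NULL
, lang         VARCHAR(30)  NOT NULL
, meaning      VARCHAR(30)  NOT NULL
, UNIQUE INDEX lookup_uq (table_name, column_name, type, lang));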
The combination of the table_name, column_name, type, and lang columns lets you identify unique sets. You can find a monolingual implementation in two older blog posts.
The column view of the lookup table shows the appropriate design checkboxes:
While most foreign keys use copies of surrogate keys, there are instances when you copy the natural key value from another table rather than the surrogate key. This is done when your application will frequently query the dependent lookup table without a join to the lang table, which means the foreign key should be a human-friendly value that works as a super key.
A super key is a column or set of columns that uniquely identifies a row in the scope of a relation. For this example, the lang column identifies rows that belong to a language in a multilingual data model. Belonging to a language is the relation between the lookup and language table. It is also a key when filtering rows with a specific lang value from the lookup table.
You navigate to the foreign key tab to create a lookup_fk foreign key constraint, like:
With this type of foreign key constraint, you copy the lang value from the language table when inserting the lookup table values. Then, your HTML forms can use the lookup table’s meaning column in any of the supported languages, like:
SELECT lookup_id
,      type
,      meaning
FROM   lookup
WHERE  table_name = 'some_table_name'
AND    column_name = 'some_column_name'
AND    lang = 'some_lang_name';
The type column value isn’t used in the WHERE clause to filter the data set because it is unique within the relation of the table_name, column_name, and lang column values. It is always non-unique when you exclude the lang column value, and potentially non-unique for another combination of the table_name and column_name column values.
As a rule, most foreign key references are to the lookup table’s surrogate primary key because the meaning column’s value is too long to copy into the referencing table or subject to change in the base or translated languages. Small values, with intrinsic meaning, are stored in a code column in many implementations, like the lang column. Those typically follow the same implementation rule as the lang column and are copied into the referencing table.
If I’ve left any questions unanswered, let me know. Otherwise, I hope this helps qualify a best design practice.
MySQL Query from JSON
One of my students asked how you could get JSON data out in tabular format. I said they should look at Øystein Grøvlen’s JSON_TABLE – Best of Both Worlds blog post from 2018. Unfortunately, the student wanted another example with the Video Store model that we use in class.
For clarity, all path definitions start with a $ followed by other selectors:
- A period followed by a name, such as $.website
- [N] where N is the position in a zero-indexed array
- The .* wildcard evaluates the values of all members of an object
- The [*] wildcard evaluates all members of an array
- The prefix and suffix wildcard, **, evaluates to all paths that begin with the named prefix and end with the named suffix
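Hedged one-off examples of each selector, using literal JSON strings of my own rather than the Video Store model:

-- A named member, an array position, an object wildcard, and a path wildcard.
SELECT JSON_EXTRACT('{"website": "example.com"}', '$.website');   -- "example.com"
SELECT JSON_EXTRACT('["a","b","c"]', '$[1]');                     -- "b"
SELECT JSON_EXTRACT('{"a": {"x": 1}, "b": {"x": 2}}', '$.*');     -- [{"x": 1}, {"x": 2}]
SELECT JSON_EXTRACT('{"a": {"x": 1}, "b": {"x": 2}}', '$**.x');   -- [1, 2]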
So, here’s a quick supplement to what’s already there. It assumes you created an example table based on my prior blog post that looks like this:
+----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| id | struct                                                                                                                                                                                          |
+----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|  1 | {"contact": [{"last_name": "Winn", "first_name": "Randi"}, {"last_name": "Winn", "first_name": "Brian"}], "account_number": "US00001"}                                                         |
|  2 | {"contact": [{"last_name": "Vizquel", "first_name": "Oscar"}, {"last_name": "Vizquel", "first_name": "Doreen"}], "account_number": "US00002"}                                                  |
|  3 | {"contact": [{"last_name": "Sweeney", "first_name": "Meaghan"}, {"last_name": "Sweeney", "first_name": "Matthew"}, {"last_name": "Sweeney", "first_name": "Ian"}], "account_number": "US00003"} |
|  4 | {"contact": [{"last_name": "Clinton", "first_name": "Goeffrey"}], "account_number": "US00004"}                                                                                                  |
|  5 | {"contact": [{"last_name": "Moss", "first_name": "Wendy"}], "account_number": "US00005"}                                                                                                        |
|  6 | {"contact": [{"last_name": "Gretelz", "first_name": "Simon"}], "account_number": "US00006"}                                                                                                     |
|  7 | {"contact": [{"last_name": "Royal", "first_name": "Elizabeth"}], "account_number": "US00007"}                                                                                                   |
|  8 | {"contact": [{"last_name": "Smith", "first_name": "Brian"}], "account_number": "US00008"}                                                                                                       |
|  9 | {"contact": [{"last_name": "Potter", "first_name": "Harry"}, {"last_name": "Potter", "first_name": "Ginny"}, {"last_name": "Potter", "first_name": "Lily"}], "account_number": "US00011"}       |
+----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
9 rows in set (0.01 sec)
You can query the account_number key value like this:
SELECT id
,      JSON_EXTRACT(struct, "$.account_number") AS account_no
FROM   example;
It returns:
+----+------------+
| id | account_no |
+----+------------+
|  1 | "US00001"  |
|  2 | "US00002"  |
|  3 | "US00003"  |
|  4 | "US00004"  |
|  5 | "US00005"  |
|  6 | "US00006"  |
|  7 | "US00007"  |
|  8 | "US00008"  |
|  9 | "US00011"  |
+----+------------+
9 rows in set (0.00 sec)
You use the JSON_TABLE function to get the embedded array elements of first and last name, like:
SELECT id
,      contact.*
FROM   example CROSS JOIN
       JSON_TABLE(
         struct
       ,"$.contact[*]"
       COLUMNS(
         lname JSON PATH "$.last_name"
       , fname JSON PATH "$.first_name")) AS contact;
It returns:
+----+-----------+-------------+
| id | lname     | fname       |
+----+-----------+-------------+
|  1 | "Winn"    | "Randi"     |
|  1 | "Winn"    | "Brian"     |
|  2 | "Vizquel" | "Oscar"     |
|  2 | "Vizquel" | "Doreen"    |
|  3 | "Sweeney" | "Meaghan"   |
|  3 | "Sweeney" | "Matthew"   |
|  3 | "Sweeney" | "Ian"       |
|  4 | "Clinton" | "Goeffrey"  |
|  5 | "Moss"    | "Wendy"     |
|  6 | "Gretelz" | "Simon"     |
|  7 | "Royal"   | "Elizabeth" |
|  8 | "Smith"   | "Brian"     |
|  9 | "Potter"  | "Harry"     |
|  9 | "Potter"  | "Ginny"     |
|  9 | "Potter"  | "Lily"      |
+----+-----------+-------------+
15 rows in set (0.00 sec)
You can combine both approaches, as shown below.
SELECT id
,      JSON_EXTRACT(struct, "$.account_number") AS account_no
,      contact.*
FROM   example CROSS JOIN
       JSON_TABLE(
         struct
       ,"$.contact[*]"
       COLUMNS(
         lname JSON PATH "$.last_name"
       , fname JSON PATH "$.first_name")) AS contact;
It returns:
+----+------------+-----------+-------------+
| id | account_no | lname     | fname       |
+----+------------+-----------+-------------+
|  1 | "US00001"  | "Winn"    | "Randi"     |
|  1 | "US00001"  | "Winn"    | "Brian"     |
|  2 | "US00002"  | "Vizquel" | "Oscar"     |
|  2 | "US00002"  | "Vizquel" | "Doreen"    |
|  3 | "US00003"  | "Sweeney" | "Meaghan"   |
|  3 | "US00003"  | "Sweeney" | "Matthew"   |
|  3 | "US00003"  | "Sweeney" | "Ian"       |
|  4 | "US00004"  | "Clinton" | "Goeffrey"  |
|  5 | "US00005"  | "Moss"    | "Wendy"     |
|  6 | "US00006"  | "Gretelz" | "Simon"     |
|  7 | "US00007"  | "Royal"   | "Elizabeth" |
|  8 | "US00008"  | "Smith"   | "Brian"     |
|  9 | "US00011"  | "Potter"  | "Harry"     |
|  9 | "US00011"  | "Potter"  | "Ginny"     |
|  9 | "US00011"  | "Potter"  | "Lily"      |
+----+------------+-----------+-------------+
15 rows in set (0.00 sec)
Lastly, if you want to get rid of the enclosing double quotes you can do the following:
WITH raw AS
 (SELECT id
  ,      JSON_EXTRACT(struct, "$.account_number") AS account_no
  ,      contact.*
  FROM   example CROSS JOIN
         JSON_TABLE(
           struct
         ,"$.contact[*]"
         COLUMNS(
           lname JSON PATH "$.last_name"
         , fname JSON PATH "$.first_name")) AS contact)
SELECT id
,      REGEXP_REPLACE(account_no,'"','') AS account_no
,      REGEXP_REPLACE(lname,'"','') AS lname
,      REGEXP_REPLACE(fname,'"','') AS fname
FROM   raw;
It’s also possible to use the JSON_UNQUOTE function to clean up the double quotes, as sketched next.
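A hedged equivalent of the first extraction query, swapping JSON_UNQUOTE for the regular expression:

-- JSON_UNQUOTE strips the enclosing double quotes without a regex.
SELECT id
,      JSON_UNQUOTE(JSON_EXTRACT(struct, "$.account_number")) AS account_no
FROM   example;

I hope this helps those extracting JSON data into tabular result sets.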
MySQL Backslashes
Yesterday, I wrote a blog post that showed you how to write a query returning a JSON structure for a 1:many relationship. The relationship was between the member and contact table. It returns one account_number from the member table and a list of first_name and last_name columns from the contact table in a JSON structure.
One of my students asked why I chose to strip the backslashes with Python, and my reply was that the SQL was already complex for most blog readers. The student then asked how you would do it in SQL. OK, that’s a fair question, for two reasons. First, you don’t need to do it in your local programs because it’ll run faster on the server. Second, if you strip the backslashes you can insert the result into a standard JSON column. This blog post will show you how to do both.
You would use three REGEXP_REPLACE function calls, like:
SELECT REGEXP_REPLACE(
         REGEXP_REPLACE(
           REGEXP_REPLACE(
             JSON_OBJECT(
                'account_number', account_number
               ,'contact', CONCAT('['
                           , GROUP_CONCAT(
                               JSON_OBJECT('first_name',first_name
                              ,'last_name',last_name ) SEPARATOR ',')
                           ,']')
             )
           ,'\\\\','')
         ,'"\\\[','\\\[')
       ,'\\\]"','\\\]') AS json_result
FROM   member m INNER JOIN contact c
ON     m.member_id = c.member_id
GROUP BY m.account_number;
It returns the following:
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| {"contact": [{"last_name": "Winn", "first_name": "Randi"},{"last_name": "Winn", "first_name": "Brian"}], "account_number": "US00001"}                                                          |
| {"contact": [{"last_name": "Vizquel", "first_name": "Oscar"},{"last_name": "Vizquel", "first_name": "Doreen"}], "account_number": "US00002"}                                                   |
| {"contact": [{"last_name": "Sweeney", "first_name": "Meaghan"},{"last_name": "Sweeney", "first_name": "Matthew"},{"last_name": "Sweeney", "first_name": "Ian"}], "account_number": "US00003"}  |
| {"contact": [{"last_name": "Clinton", "first_name": "Goeffrey"}], "account_number": "US00004"}                                                                                                 |
| {"contact": [{"last_name": "Moss", "first_name": "Wendy"}], "account_number": "US00005"}                                                                                                       |
| {"contact": [{"last_name": "Gretelz", "first_name": "Simon"}], "account_number": "US00006"}                                                                                                    |
| {"contact": [{"last_name": "Royal", "first_name": "Elizabeth"}], "account_number": "US00007"}                                                                                                  |
| {"contact": [{"last_name": "Smith", "first_name": "Brian"}], "account_number": "US00008"}                                                                                                      |
| {"contact": [{"last_name": "Potter", "first_name": "Harry"},{"last_name": "Potter", "first_name": "Ginny"},{"last_name": "Potter", "first_name": "Lily"}], "account_number": "US00011"}        |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
9 rows in set (0.00 sec)
Let’s create a table with a JSON structure with the following script:
/* Drop table if it exists. */
DROP TABLE IF EXISTS example;

/* Create an example table. */
CREATE TABLE example
( id      int unsigned auto_increment
, struct  json
, PRIMARY KEY (id));
Now, we can embed the query inside an INSERT statement:
INSERT INTO example
( struct )
(SELECT REGEXP_REPLACE(
          REGEXP_REPLACE(
            REGEXP_REPLACE(
              JSON_OBJECT(
                 'account_number', account_number
                ,'contact', CONCAT('['
                            , GROUP_CONCAT(
                                JSON_OBJECT('first_name',first_name
                               ,'last_name',last_name ) SEPARATOR ',')
                            ,']')
              )
            ,'\\\\','')
          ,'"\\\[','\\\[')
        ,'\\\]"','\\\]') AS json_result
 FROM   member m INNER JOIN contact c
 ON     m.member_id = c.member_id
 GROUP BY m.account_number);
A query of the example table, like:
SELECT * FROM example;
Returns:
+----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| id | struct                                                                                                                                                                                          |
+----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|  1 | {"contact": [{"last_name": "Winn", "first_name": "Randi"}, {"last_name": "Winn", "first_name": "Brian"}], "account_number": "US00001"}                                                         |
|  2 | {"contact": [{"last_name": "Vizquel", "first_name": "Oscar"}, {"last_name": "Vizquel", "first_name": "Doreen"}], "account_number": "US00002"}                                                  |
|  3 | {"contact": [{"last_name": "Sweeney", "first_name": "Meaghan"}, {"last_name": "Sweeney", "first_name": "Matthew"}, {"last_name": "Sweeney", "first_name": "Ian"}], "account_number": "US00003"} |
|  4 | {"contact": [{"last_name": "Clinton", "first_name": "Goeffrey"}], "account_number": "US00004"}                                                                                                  |
|  5 | {"contact": [{"last_name": "Moss", "first_name": "Wendy"}], "account_number": "US00005"}                                                                                                        |
|  6 | {"contact": [{"last_name": "Gretelz", "first_name": "Simon"}], "account_number": "US00006"}                                                                                                     |
|  7 | {"contact": [{"last_name": "Royal", "first_name": "Elizabeth"}], "account_number": "US00007"}                                                                                                   |
|  8 | {"contact": [{"last_name": "Smith", "first_name": "Brian"}], "account_number": "US00008"}                                                                                                       |
|  9 | {"contact": [{"last_name": "Potter", "first_name": "Harry"}, {"last_name": "Potter", "first_name": "Ginny"}, {"last_name": "Potter", "first_name": "Lily"}], "account_number": "US00011"}       |
+----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
9 rows in set (0.00 sec)
I hope this answers the question about whether you can use SQL to remove the backslashes from the original result set and how you can insert the JSON result set into a JSON data type.
MySQL JSON Tricks
Are they really tricks, or simply basic techniques combined to create a solution? Before writing these mechanics for using native MySQL to create a compound JSON object, let me point out that the easiest way to get one is to use the MySQL Node.js library, as shown recently in my “Is SQL Programming” blog post.
Moving data from a relational model output to a JSON structure isn’t as simple as a delimited list of columns in a SQL query. Let’s look at it in stages based on the MySQL Server manual’s 12.18.2 Functions That Create JSON Values section.
Here’s how you return a single row as a JSON object, which is quite straightforward:
SELECT JSON_OBJECT('first_name', c.first_name, 'last_name', c.last_name) AS json_result
FROM   contact c
WHERE  first_name = 'Harry'
AND    last_name = 'Potter';
It returns:
+------------------------------------------------+
| json_result                                    |
+------------------------------------------------+
| {"last_name": "Potter", "first_name": "Harry"} |
+------------------------------------------------+
1 row in set (0.00 sec)
With a GROUP_CONCAT function, let’s capture a JSON array of all three Potter family members:
SELECT CONCAT('['
       , GROUP_CONCAT(
           JSON_OBJECT('first_name',first_name
          ,'last_name',last_name ) SEPARATOR ',')
       ,']') AS json_result
FROM   contact c
WHERE  c.last_name = 'Potter';
It returns an array of JSON objects:
+-----------------------------------------------------------------------------------------------------------------------------------------------+
| [{"last_name": "Potter", "first_name": "Harry"},{"last_name": "Potter", "first_name": "Ginny"},{"last_name": "Potter", "first_name": "Lily"}]  |
+-----------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.01 sec)
Next, let’s put a 1:many relationship between the member and contact table into a JSON structure with a single account number and an array of contacts. It requires a second call to the JSON_OBJECT function and the addition of a GROUP BY clause in the query.
SELECT JSON_OBJECT(
          'account_number', account_number
         ,'contact', CONCAT('['
                     , GROUP_CONCAT(
                         JSON_OBJECT('first_name',first_name
                        ,'last_name',last_name ) SEPARATOR ',')
                     ,']')
       ) AS json_result
FROM   member m INNER JOIN contact c
ON     m.member_id = c.member_id
WHERE  c.last_name = 'Potter'
GROUP BY m.account_number;
It returns the following string with an annoying set of backslashes. It also inverts the column order, which appears unavoidable but it shouldn’t matter because the order of name-value pairs in JSON is immaterial.
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| {"contact": "[{\"last_name\": \"Potter\", \"first_name\": \"Harry\"},{\"last_name\": \"Potter\", \"first_name\": \"Ginny\"},{\"last_name\": \"Potter\", \"first_name\": \"Lily\"}]", "account_number": "US00011"} |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
The following quick little Python code cleans up the JSON string by removing the backslashes and extraneous quotes around the array of contacts.
# Import the library.
import mysql.connector
from mysql.connector import errorcode

try:
  # Open connection.
  cnx = mysql.connector.connect(user='student', password='student',
                                host='127.0.0.1', database='studentdb')
  # Create cursor.
  cursor = cnx.cursor()

  # Set the query statement.
  query = ("SELECT JSON_OBJECT( "
           "'account_number', m.account_number "
           ",'contact', CONCAT('[' "
           " , GROUP_CONCAT( "
           " JSON_OBJECT('first_name', c.first_name "
           " ,'last_name', c.last_name ) SEPARATOR ',') "
           " ,']')) AS json_result "
           "FROM contact c INNER JOIN member m "
           "ON c.member_id = m.member_id "
           "WHERE c.last_name = %s "
           "GROUP BY account_number")

  # Execute cursor.
  cursor.execute(query, ["Potter"])

  # Display the column returned by the query stripped of backslashes and
  # extraneous quotes.
  for (row) in cursor:
    for column in range(len(row)):
      print(row[column].replace("\\", "").replace("\"[", "[").replace("]\"", "]"))

  # Close cursor.
  cursor.close()

# ------------------------------------------------------------
# Handle exception and close connection.
except mysql.connector.Error as e:
  if e.errno == errorcode.ER_ACCESS_DENIED_ERROR:
    print("Something is wrong with your user name or password")
  elif e.errno == errorcode.ER_BAD_DB_ERROR:
    print("Database does not exist")
  else:
    print("Error code:", e.errno)         # error number
    print("SQLSTATE value:", e.sqlstate)  # SQLSTATE value
    print("Error message:", e.msg)        # error message

# Close the connection when the try block completes.
else:
  cnx.close()
It returns:
{"contact": [{"last_name": "Potter", "first_name": "Harry"},{"last_name": "Potter", "first_name": "Ginny"},{"last_name": "Potter", "first_name": "Lily"}], "account_number": "US00011"} |
I hope this helps exhibit less well known MySQL syntax. Check this post to see how to insert a result set without Python as an intermediary.
Is SQL Programming
Is SQL, or Structured Query Language, a programming language? That’s a great question! A question that many answer with emphasis: “No, SQL is not a programming language!” There are some who answer yes; and they usually qualify that answer with something like: “SQL is a programming language designed to communicate with relational databases.”
It strikes me that those saying “yes” are saying that SQL is only a collection of interface methods to read from and write to a database engine. Those saying SQL is not a programming language often qualify that a programming language must have conditional logic and iterative structures, which don’t exist in SQL.
There’s a third group that are fence sitters. They decline to say whether SQL is a programming language, but they also say individuals who only write SQL aren’t programmers. That’s a bit harsh from my perspective.
Before determining whether SQL is a programming language, let’s define one. A programming language is a collection of lexical units, or building blocks, that build program units. Lexical units are typically organized as delimiters, identifiers, literals, and comments:
- Delimiters include single or double quotes to identify strings and operators that let you assign and compare values.
- Identifiers are reserved words, keywords, predefined identifiers (like data type names), user-defined variables, subroutines, or types.
- Literals are typically numbers and strings, where some strings qualify as dates because they implement a default format mask to convert strings to dates or date-times.
- Comments are simply delimited text that the program ignores but the programmer uses.
That means a programming language must let you define a variable, assign a value to a variable, iterate across a set of values, and make conditional statements. SQL meets these four conditions, but as a set-programming language it qualifies all variables as lists of tuples. It is possible, though, to have variables with zero to many elements and one to many members in any given tuple. That means you can assign a literal value to a one-element list with a single-member tuple, like you would a string or integer to a variable of that type.
As Kris Köhntopp commented, computer science defines a programming language as Turing complete. As his comment qualifies and the Wikipedia page explains: “Turing completeness in declarative SQL is implemented through recursive common table expressions. Unsurprisingly, procedural extensions to SQL (PLSQL, etc.) are also Turing-complete.” While PostgreSQL introduces recursive query syntax through CTEs, it recently added the search and cycle feature in PostgreSQL 14. The recursive query feature has existed in the Oracle database since Oracle 8, but their documentation calls them hierarchical queries. I wrote a quick tutorial on hierarchical queries in 2008.
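Here’s a minimal sketch of the recursive CTE feature that the Turing-completeness argument rests on, in MySQL 8 syntax:

-- Counts from 1 to 5 by referencing itself in the UNION ALL branch.
WITH RECURSIVE counter (n) AS
 (SELECT 1
  UNION ALL
  SELECT n + 1 FROM counter WHERE n < 5)
SELECT n FROM counter;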
For clarity, define and declare are two words that give grief to some newbies, so let’s qualify what they mean. Declare means to give a variable a name and data type. Define means to declare a variable and assign it a value. Another word for assigning a variable is initializing it. Unassigned variables are automatically assigned a default value or a null, depending on the programming language.
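For what it’s worth, MySQL’s stored-program dialect makes the distinction explicit; a minimal sketch (the procedure name and values are mine, and you’d run it with a non-default DELIMITER in the mysql client):

-- Illustrative only: declare versus define inside a stored program.
CREATE PROCEDURE hello()
BEGIN
  DECLARE display VARCHAR(20);                         -- declared: name and type only
  DECLARE input   VARCHAR(20) DEFAULT 'Hello World!';  -- defined: declared and assigned
  SET display := input;                                -- assignment after declaration
  SELECT display;
END;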
Let’s first declare a local variable, assign a value to it, and display it. The following example uses Node.js to define the input variable, assign the input variable to the display variable, and then print the display variable to console. Node.js requires that you assign an empty string to the display variable to define it as a string; otherwise its type would be undefined, which is common behavior in dynamically typed languages.
/* Declare the display variable as a string. */
var display = ""

/* Define the input variable. */
var input = "Hello World!"

/* Assign the input variable contents to the display variable. */
display = input

/* Print the display variable contents to console. */
console.log(display)
It prints:
Hello World!
Let’s write the same type of program in MySQL. As with Node.js, there are implementation differences. The biggest difference in MySQL or other relational databases occurs because SQL is a declarative set-based language. That means every variable is a collection of a record structure. You can only mimic a scalar or primitive data type variable by creating a record structure with a single member.
In the case below, there are four processing steps:
- The ‘Hello World!’ literal value is assigned to an input variable.
- The SELECT-list (or comma-delimited set of values in the SELECT clause) is assigned like a tuple to the struct collection variable by treating the query of the literal value as an expression.
- The FROM clause returns the struct collection as the data set or as a derived table.
- The topmost SELECT clause evaluates the struct collection row-by-row, like a loop, and assigns the input member to a display variable.
The query is:
SELECT struct.input AS display
FROM  (SELECT 'Hello World!' AS input) struct;
Since the struct collection contains only one element, it displays the original literal value one time, like:
+--------------+
| display      |
+--------------+
| Hello World! |
+--------------+
1 row in set (0.00 sec)
Let’s update the SQL syntax to the more readable, ANSI 1999 and forward, syntax with a Common Table Expression (CTE). CTEs are implemented by the WITH clause.
WITH struct AS
 (SELECT 'Hello World!' AS input)
SELECT struct.input AS display
FROM   struct;
The best thing about CTEs is that they run one time and are subsequently available anywhere in your query, subqueries, or correlated subqueries. In short, there’s never an excuse to write the same subquery twice in one query.
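A tiny sketch of that reuse, with illustrative values of my own:

-- The total CTE runs once and is referenced twice in the FROM clause.
WITH total AS
 (SELECT 10 AS amount)
SELECT t1.amount + t2.amount AS doubled
FROM   total t1 CROSS JOIN total t2;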
Let’s look at loops and if-statements. Having established that we can assign a literal to a variable, re-assign the value from one variable to another, and then display the new variable, let’s assign a set of literal values to an array variable. As before, let’s use Node.js to structure the initial problem.
The program now assigns an array of strings to the input variable, uses a for-loop to read the values from the input array, and uses an if-statement with a regular expression evaluation. The if-statement determines which of the array values meets the condition by using a negating logical expression. That’s because the search() function returns 0 or greater when the needle value is found in the string and -1 when not found. After validating that the needle variable value is found in an input string, the input value is assigned to the display variable.
/* Declare the display variable as a string. */
var display = ""

/* Declare a lookup variable. */
var needle = "Goodbye"

/* Define the input variable as an array of strings. */
var input = ["Hello World!"
            ,"Goodbye, Cruel World!"
            ,"Good morning, too early ..."]

/* Read through an array and assign the value that meets
 * the condition to the display variable. */
for (i = 0; i < input.length; i++)
  if (!(input[i].search(needle) < 0))
    display = input[i]

/* Print the display variable contents to console. */
console.log(display)
Then, it prints the display value:
Goodbye, Cruel World!
To replicate the coding approach in a query, there must be two CTEs. The needle CTE assigns a literal value of ‘goodbye’ to a one-element collection of a single-member tuple variable. The struct CTE creates a collection of strings by using the UNION ALL operator to append three unique tuples instead of the one tuple found in the earlier example.
The needle CTE returns a one-element collection of a single-member tuple variable. The struct CTE returns a three-element collection of a single-member tuple, which mimics an array of strings. The needle and struct CTEs return distinct variables with different data types. A cross join operation between the two CTEs puts their results together into the same context. It returns a Cartesian product that:
- Adds a single-row tuple to each row of the query’s result set or derived table.
- Adds multiple tuples to each row of the query’s result set or derived table by creating copies of each row (following Cartesian set theory, which multiplies rows and adds columns).
In this case, the Cartesian join adds a one-element needle CTE value to each element, or row, returned by the multiple-element struct CTE and produces the following derived table:
+-----------------------------+---------+
| display                     | lookup  |
+-----------------------------+---------+
| Hello World!                | goodbye |
| Goodbye, cruel world!       | goodbye |
| Good morning, too early ... | goodbye |
+-----------------------------+---------+
3 rows in set (0.00 sec)
The following query reads through the CTE collection like a loop and filters out any invalid input values. It uses the MySQL REGEXP_LIKE function in the WHERE clause, which acts as a conditional or if-statement.
WITH needle AS
 (SELECT 'goodbye' AS lookup)
, struct AS
 (SELECT 'Hello World!' AS input
  UNION ALL
  SELECT 'Goodbye, cruel world!' AS input
  UNION ALL
  SELECT 'Good morning, too early ...' AS input)
SELECT struct.input AS display
FROM   struct CROSS JOIN needle
WHERE  REGEXP_LIKE(struct.input, CONCAT('^.*', needle.lookup, '.*$'), 'i');
It returns the one display value that meets the criteria:
+-----------------------+
| display               |
+-----------------------+
| Goodbye, cruel world! |
+-----------------------+
1 row in set (0.00 sec)
The comparisons of the imperative programming approach in Node.js and the declarative programming approach in SQL should have established that SQL has all the elements of a programming language. That is, SQL has variable declaration and assignment, and both iterative and conditional statements. SQL also has different styles for implementing variable declaration, and the examples covered subqueries and CTEs with cross joins placing variables in a common scope.
Comparative Approaches:
Next, let’s examine a problem that a programmer might encounter when they think SQL only queries single rows, or inserts, updates, or deletes them. That view of SQL often comes with a limited perspective on how to write queries. Developers at this skill level typically write only basic queries, which may include inner and outer joins and some aggregation statements.
Let’s assume the following for this programming assignment:
- A sale table as your data source, and
- A requirement to display the type, number, pre-tax sale amount, and percentage by type.
The sale table definition:
+------------+--------------+------+-----+---------+----------------+
| Field      | Type         | Null | Key | Default | Extra          |
+------------+--------------+------+-----+---------+----------------+
| sale_id    | int unsigned | NO   | PRI | NULL    | auto_increment |
| item_desc  | varchar(20)  | YES  |     | NULL    |                |
| unit_price | decimal(8,2) | YES  |     | NULL    |                |
| serial_no  | varchar(10)  | YES  |     | NULL    |                |
+------------+--------------+------+-----+---------+----------------+
A basic Node.js program may contain a SQL query that returns the item_desc and unit_price columns while counting the number of serial_no rows and summing the unit_price amounts (that assumes no discount sales; after all, it’s Apple). That type of query leaves calculating the total amount of sales and percentage by type to the Node.js program.
const mysql = require('mysql')
const connection = mysql.createConnection({
   host: 'localhost',
   user: 'student',
   password: 'student',
   database: 'studentdb'
})

connection.connect((err) => {
  if (err)
    throw err
  else {
    console.log('Connected to MySQL Server!\n')
    connection.query("SELECT s.item_desc " +
                     ",      s.unit_price " +
                     ",      COUNT(s.serial_no) AS quantity_sold " +
                     ",      SUM(s.unit_price) AS sales " +
                     "FROM   sale s " +
                     "GROUP BY s.item_desc " +
                     ",      s.unit_price", function (err, result) {
      if (err)
        throw err
      else {
        // Prints the index value in the RowDataPacket.
        console.log(result)
        connection.end()
      }})}
})
This program would return a JSON structure, like:
[ RowDataPacket {
    item_desc: 'MacBook Pro 16',
    unit_price: 2499,
    quantity_sold: 16,
    sales: 39984 },
  ...
  RowDataPacket {
    item_desc: 'MacBook Air M1',
    unit_price: 999,
    quantity_sold: 22,
    sales: 21978 } ]
While the remaining JavaScript code isn’t difficult to write, it’s unnecessary effort if the developer knows SQL well enough to program in it. The developer could simply re-write the query like the following and return the percentage-by-type value in the base JSON structure.
WITH sales AS
 (SELECT SUM(unit_price) AS total FROM sale)
SELECT   s.item_desc
,        s.unit_price
,        COUNT(s.serial_no) AS quantity_sold
,        SUM(s.unit_price) AS sales
,        CONCAT(FORMAT((s.unit_price * COUNT(s.serial_no)) / sales.total * 100, 2), '%') AS percentage
FROM     sale s CROSS JOIN sales
GROUP BY s.item_desc
,        s.unit_price
,        sales.total;
The query uses the sales CTE to calculate and define a tuple with the total sales and adds a derived column calculating the percentage by type of device. It’s probably important to note that aggregation rules require you to add the sales.total CTE tuple to the GROUP BY clause.
The new query returns this JSON list:
[ RowDataPacket {
    item_desc: 'MacBook Pro 16',
    unit_price: 2499,
    quantity_sold: 16,
    sales: 39984,
    percentage: '17.70%' },
  ...
  RowDataPacket {
    item_desc: 'MacBook Air M1',
    unit_price: 999,
    quantity_sold: 22,
    sales: 21978,
    percentage: '9.73%' } ]
The developer would get a complete JSON list when the new query replaces the old. It also would eliminate the need to write additional JavaScript to calculate the percentage by type of device.
Conclusions:
Leveraging the programming power of SQL is frequently possible in many frontend and backend programming solutions, yet that power is infrequently found in those solutions. That leaves me to ask: “Is it possible that the almost systemic failure to leverage the programming capabilities of SQL is a result of biases by instructors and mentors toward their own limited skill sets?” That might well be true if their instructors and mentors held the belief that: “No, SQL is not a programming language!”
Candidly, folks that write SQL at the programming level almost always have concurrent mastery in two or more imperative programming languages. They’re probably the ones who say, “SQL is a programming language designed to communicate with relational databases.”
Who are those pesky fence sitters? You remember those, don’t you? They’re the ones who declined to take a position on whether SQL is a programming language. Are they the developers who are still learning, those without an entrenched, preconceived, or learned bias? Or do they wonder if SQL is Turing complete?
Node.js MySQL Error
While I blogged about how to set up Node.js and MySQL almost two years ago, it was interesting when a student ran into a problem. The student said they’d configured the environment but were unable to use Node.js to access MySQL.
The error is caused by this import statement:
const mysql = require('mysql')
The student got the following error, which simply says that they hadn’t installed the Node.js package for the MySQL driver.
internal/modules/cjs/loader.js:638
    throw err;
    ^

Error: Cannot find module 'mysql'
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:636:15)
    at Function.Module._load (internal/modules/cjs/loader.js:562:25)
    at Module.require (internal/modules/cjs/loader.js:692:17)
    at require (internal/modules/cjs/helpers.js:25:18)
    at Object.<anonymous> (/home/student/Data/cit325/oracle-s/lib/Oracle12cPLSQLCode/Introduction/query.js:4:15)
    at Module._compile (internal/modules/cjs/loader.js:778:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)
    at Module.load (internal/modules/cjs/loader.js:653:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:593:12)
    at Function.Module._load (internal/modules/cjs/loader.js:585:3)
I explained they could fix the problem with the following two Node.js Package Manager (NPM) commands:
npm init --y
npm install --save mysql
The student was able to retest the code with success. The issue was simply that Node.js couldn’t find the NPM MySQL module.
MySQL Partitioned Tables
Learning Outcomes
- Learn about List Partitioning.
- Learn about Range Partitioning.
- Learn about Columns Partitioning.
- Learn about Hash Partitioning.
- Learn about Key Partitioning.
- Learn about Subpartitioning.
Lesson Material
MySQL supports partitioning of tables. It supports range, list, hash, and key partitioning. Range partitioning lets you partition based on column values that fall within given ranges. List partitioning lets you partition based on columns matching one of a set of discrete values. Hash partitioning lets you partition based on the return value from a user-defined expression that resolves to an integer (MySQL doesn’t allow stored SQL/PSM functions in partitioning expressions). Key partitioning performs like hash partitioning, but it lets a user select one or more columns from the set of columns in a table; a hash manages the selection process for you. A hash is a method of organizing keys to types of data, and hashes speed access to read and change data in tables.
Each of the following subsections discusses one of the supported forms of partitioning in MySQL. Naturally, there are differences between other databases and MySQL’s implementation.
List Partitioning
A MySQL list partition works by identifying a column that contains an integer value, the franchise_number in the following example. Partitioning clauses follow the list of columns and constraints and require a partitioning key to be in the primary key or indexed.
The following list partition works with literal numeric values. MySQL uses the IN keyword for list partitions. Note that there’s no primary key designated and an index is on the auto-incrementing surrogate key column. A complete example is provided to avoid confusion on how to index the partitioning key:
CREATE TABLE franchise
( franchise_id      INT UNSIGNED AUTO_INCREMENT
, franchise_number  INT UNSIGNED
, franchise_name    VARCHAR(20)
, city              VARCHAR(20)
, state             VARCHAR(20)
, INDEX idx (franchise_id))
PARTITION BY LIST(franchise_number)
( PARTITION offshore VALUES IN (49,50)
, PARTITION west     VALUES IN (34,45,48)
, PARTITION desert   VALUES IN (46,47)
, PARTITION rockies  VALUES IN (38,41,42,44));
The inclusion of a PRIMARY KEY constraint on the franchise_id column would trigger an ERROR 1503 when the partitioning key isn’t the primary key. The reason for the error message is that a primary key implicitly creates a unique index, and that index would conflict with the partitioning by list instruction. The use of a non-unique idx index on the franchise_id column is required when you want to partition on a non-primary key column.
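As a hedged illustration (the table is abbreviated, and the error text is quoted from memory of MySQL 8, so it may vary by version), this is what raises the error:

-- Fails: the primary key omits franchise_number, the partitioning key.
CREATE TABLE franchise
( franchise_id      INT UNSIGNED PRIMARY KEY AUTO_INCREMENT
, franchise_number  INT UNSIGNED )
PARTITION BY LIST(franchise_number)
( PARTITION offshore VALUES IN (49,50)
, PARTITION west     VALUES IN (34,45,48));
-- ERROR 1503 (HY000): A PRIMARY KEY must include all columns in the
-- table's partitioning function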
Range Partitioning
Range partitioning works only with an integer value or an expression that resolves to an integer against the primary key column. The limitation of the integer drives the necessity of choosing an integer column for range partitioning. You can’t define a range-partitioned table with a PRIMARY KEY constraint unless the primary key becomes your partitioning key, like the one below.
CREATE TABLE ordering
( ordering_id    INT UNSIGNED AUTO_INCREMENT
, item_id        INT UNSIGNED
, rental_amount  DECIMAL(15,2)
, rental_date    DATE
, INDEX idx (ordering_id))
PARTITION BY RANGE(item_id)
( PARTITION jan2011 VALUES LESS THAN (10000)
, PARTITION feb2011 VALUES LESS THAN (20000)
, PARTITION mar2011 VALUES LESS THAN (30000));
Range partitioning is best suited to large tables that you want to break into smaller pieces based on the integer column. You can also use deterministic built-in functions that return integers, like YEAR() or TO_DAYS(), in the partitioning expression instead of the numeric literals shown; stored functions aren’t allowed there. Few other options are available in MySQL.
Columns Partitioning
Columns partitioning is a new variant of range and list partitioning. It is included in MySQL 5.5 and forward. Both range and list partitioning work on an integer-based column (using TINYINT, SMALLINT, MEDIUMINT, INT [alias INTEGER], and BIGINT). Columns partitioning extends those models by expanding the possible data types for the partitioning column to include CHAR, VARCHAR, BINARY, and VARBINARY string data types, and DATE, DATETIME, or TIMESTAMP data types. You still can’t use other number data types such as DECIMAL and FLOAT. The TIMESTAMP data type is also available only in range partitions with the caveat that you use a UNIX_TIMESTAMP function, according to MySQL Bug 42849.
Hash Partitioning
Hash partitions ensure an even distribution of rows across a predetermined number of partitions. It is probably the easiest way to partition a table quickly to test the result of partitioning on a large table. You should base hash partitions on a surrogate or natural primary key.
The following provides a modified example of the ordering table:
CREATE TABLE ordering
( ordering_id    INT UNSIGNED PRIMARY KEY AUTO_INCREMENT
, item_id        INT UNSIGNED
, rental_amount  DECIMAL(15,2)
, rental_date    DATE)
PARTITION BY HASH(ordering_id)
PARTITIONS 8;
This is the partitioning type that benefits from a PRIMARY KEY constraint because it automatically creates a unique index that the hash can use. A non-unique index like the one in the list partitioning example doesn’t work for a hash partition.
Key Partitioning
Key partitioning is valuable because you can partition on columns that aren’t integers. It performs along the lines of hash partitioning, except the MySQL Server uses its own hashing expression.
CREATE TABLE orders_list
( order_list_id     INT UNSIGNED AUTO_INCREMENT
, customer_surname  VARCHAR(30)
, store_id          INT UNSIGNED
, salesperson_id    INT UNSIGNED
, order_date        DATE
, INDEX idx (order_list_id))
PARTITION BY KEY (order_date)
PARTITIONS 8;
This is the only alternative when you want to partition by date ranges. Like the hash partition, it’s easy to deploy. The only consideration is the number of slices that you want to make of the data in the table.
Subpartitioning
The concept of subpartitioning is also known as composite partitioning. You can subpartition range or list partitions with a hash, linear hash, or linear key.
A slight change to the previously created ordering table is required to demonstrate composite partitioning: we’ll add a store_id column to the table definition. The following is an example of a range partition subpartitioned by a hash:
CREATE TABLE ordering
( ordering_id    INT UNSIGNED AUTO_INCREMENT
, item_id        INT UNSIGNED
, store_id       INT UNSIGNED
, rental_amount  DECIMAL(15,2)
, rental_date    DATE
, INDEX idx (ordering_id))
PARTITION BY RANGE(item_id)
SUBPARTITION BY HASH(store_id) SUBPARTITIONS 4
( PARTITION jan2011 VALUES LESS THAN (10000)
, PARTITION feb2011 VALUES LESS THAN (20000)
, PARTITION mar2011 VALUES LESS THAN (30000));
Composite partitioning is non-trivial and might require some experimentation to achieve optimal results. Plan on making a few tests of different scenarios before you deploy a solution.
MySQL Windows DSN
Almost a Ripley’s Believe It or Not. A former data science student told me that his new IT department set up a Windows component that let him connect his Excel spreadsheets to their production MySQL database without a password. Intrigued, I asked if it was a MySQL Connector/ODBC Data Source Configuration, or DSN (Data Source Name).
He wasn’t sure, so I asked him to connect to PowerShell and run the following command:
Get-Item -Path Registry::HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\MySQL
It returned something like this (substituting output from one of my test systems):
Hive: HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI

Name  Property
----  --------
MySQL Driver      : C:\Program Files\MySQL\Connector ODBC 8.0\myodbc8w.dll
      DESCRIPTION : MySQL ODBC Connector
      SERVER      : localhost
      UID         : student
      PWD         : student
      DATABASE    : studentdb
      PORT        : 3306
The student was stunned and concerned he was compromising his employer’s system security. I suggested he share the information with his IT department so they could provide a different approach for his access to the production database. His IT department immediately agreed. Unfortunately, he’s bummed he can’t simply access the data through Excel.
I told him they were welcome to use the MySQL Connect Dialog PowerShell solution that I wrote. It creates a minimal MySQL DSN and requires a manual password entry through the PowerShell Dialog box. I also suggested that they look into the PowerShell Excel Module.
I also suggested they develop a query-only copy of the production database, or shift access to a data warehouse. Needless to say, it wasn’t a large corporation.
As always, I hope this helps others.
MySQL CSV Output
Saturday, I posted how to use a Microsoft ODBC DSN to connect to MySQL. Somebody didn’t like the fact that the PowerShell program failed to write a *.csv file to disk, because the program used the Write-Host command to write the content of the query to the console.
I thought that approach was better as an example. However, it appears it wasn’t, because not everybody knows simple redirection. The original program can transfer the console output to a file, like:
powershell .\MySQLODBC.ps1 > output.csv
So, the first thing you need to do is add a parameter list, like:
param (
  [Parameter(Mandatory)][string]$fileName
)
Anyway, it’s trivial to demonstrate how to modify the PowerShell program to write to disk. You should also create a virtual PowerShell drive before writing the file. That’s because you can then change the physical directory anytime you want with minimal changes to the rest of your code’s file references.
You can create a PowerShell virtual drive with the following command:
New-PSDrive -Name test -PSProvider FileSystem -Description 'Test area' `
            -Root C:\Data\cit225\mysql\test
but, it will write the following to console:
Name           Used (GB)     Free (GB) Provider      Root                        CurrentLocation
----           ---------     --------- --------      ----                        ---------------
test                0.00         28.74 FileSystem    C:\Data\cit225\mysql\test
You can suppress the console output with Microsoft’s version of redirection to the void (> /dev/null), which pipes (|) the standard out (stdout) to Out-Null, like:
New-PSDrive -Name test -PSProvider FileSystem -Description 'Test area' `
            -Root C:\Data\cit225\mysql\test | Out-Null
Since the program may run before an output file has been created, or after it’s been created and removed, you need to check whether the file exists before attempting to remove it. PowerShell provides the Test-Path command to check for the existence of a file and the Remove-Item command to remove a file, like:
if (Test-Path test:$fileName) {
  Remove-Item -Path test:$fileName }
Then, you simply replace the Write-Host call in the other program with the Add-Content command:
Add-Content -Value $output -Path test:$fileName
Now, the PowerShell script file writes the MySQL query’s output to an output.csv file. You can call the MySQLContact.ps1 script file with the following syntax:
powershell MySQLContact.ps1 output.csv
In case these changes don’t make sense outside the scope of the full script, here is the rewritten script:
# Define parameter list for mandatory file name.
param (
  [Parameter(Mandatory)][string]$fileName
)

# Define a PowerShell Virtual Drive.
New-PSDrive -Name test -PSProvider FileSystem -Description 'Test area' `
            -Root C:\Data\cit225\mysql\test | Out-Null

# Remove the file only when it exists.
if (Test-Path test:$fileName) {
  Remove-Item -Path test:$fileName }

# Define a ODBC DSN connection string.
$ConnectionString = 'DSN=MySQLODBC2'

# Define a MySQL Command Object for a non-query.
$Connection = New-Object System.Data.Odbc.OdbcConnection;
$Connection.ConnectionString = $ConnectionString

# Attempt connection.
try {
  $Connection.Open()

  # Create a SQL command.
  $Command = $Connection.CreateCommand();
  $Command.CommandText = "SELECT last_name " +
                         ",      first_name " +
                         "FROM   contact " +
                         "ORDER BY 1, 2";

  # Attempt to read SQL command.
  try {
    $row = $Command.ExecuteReader();

    # Read while records are found.
    while ($row.Read()) {

      # Initialize output for each row.
      $output = ""

      # Navigate across all columns (only two in this example).
      for ($column = 0; $column -lt $row.FieldCount; $column += 1) {

        # Mechanic for comma-delimit between last and first name.
        if ($output.length -eq 0) {
          $output += $row[$column] }
        else {
          $output += ", " + $row[$column] }
      }

      # Write the output from the database to a file.
      Add-Content -Value $output -Path test:$fileName
    }
  } catch {
    Write-Error "Message: $($_.Exception.Message)"
    Write-Error "StackTrace: $($_.Exception.StackTrace)"
    Write-Error "LoaderExceptions: $($_.Exception.LoaderExceptions)"
  } finally {
    # Close the reader.
    $row.Close()
  }
} catch {
  Write-Error "Message: $($_.Exception.Message)"
  Write-Error "StackTrace: $($_.Exception.StackTrace)"
  Write-Error "LoaderExceptions: $($_.Exception.LoaderExceptions)"
} finally {
  $Connection.Close()
}
While I understand you might want to go to this level of effort if you were building a formal cmdlet, I’m not convinced it’s worth the effort in an ordinary PowerShell script. However, I don’t like to leave a question unanswered.