MacLochlainns Weblog

Michael McLaughlin's Technical Blog


Archive for the ‘MySQL 8’ Category

AlmaLinux MySQL Workbench

without comments

AlmaLinux doesn’t natively support MySQL Workbench, but these notes will help you install it. The great news is that MySQL Workbench works perfectly once you’ve installed all the dependent libraries.

Disclaimer of sorts:

AlmaLinux is an open-source, community-driven project that intends to fill the gap left by the demise of the CentOS stable release. AlmaLinux is a 1:1 binary compatible fork of RHEL® 9 and it is built by the AlmaLinux OS Foundation as a standalone, completely free OS. The AlmaLinux OS Foundation will support future RHEL® releases by updating AlmaLinux. Ongoing development efforts are governed by the members of the community.

You can download MySQL Workbench from the following website:

https://dev.mysql.com/downloads/workbench

When you open this page, select the Red Hat Enterprise Linux 9 / Oracle Linux 9 (x86, 64-bit), RPM Package from the dropdown menu. Then, click the Download button. You may be prompted for your credentials or to create new credentials, but you can skip that by clicking on the No thanks, just start my download link.

When the download completes, open a terminal session as the student user. Navigate to the Downloads directory with the following command:

cd $HOME/Downloads

List the files in the $HOME/Downloads directory and you should see:

mysql-workbench-community-8.0.31-1.el9.x86_64.rpm

As the sudoer user or root, run the following command (naturally, exclude sudo if you’re the root user):

sudo dnf install -y mysql-workbench-community-8.0.31-1.el9.x86_64.rpm

It will most likely fail with an error message like this:

Last metadata expiration check: 2:50:04 ago on Thu 17 Nov 2022 09:33:15 AM EST.
Error: 
 Problem: conflicting requests
  - nothing provides gtkmm30-devel needed by mysql-workbench-community-8.0.31-1.el9.src
  - nothing provides libzip-devel needed by mysql-workbench-community-8.0.31-1.el9.src
  - nothing provides proj-devel needed by mysql-workbench-community-8.0.31-1.el9.src
  - nothing provides swig >= 3.0 needed by mysql-workbench-community-8.0.31-1.el9.src
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

AlmaLinux doesn’t install these prerequisite packages. You’ll need to resolve these dependencies by installing them, in the right order and groups, before you can install the MySQL Workbench package.

You can discover missing packages at the pkgs.org website. You need to resolve all four prerequisites before installing MySQL Workbench.
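As a possible shortcut, note that most of the RPM URLs below point at AlmaLinux’s CRB (“CodeReady Builder”) repository, and the proj-devel package comes from EPEL. Enabling those repositories first may let dnf resolve much of the dependency chain for you; this is a sketch, and your repository names may differ:

sudo dnf config-manager --set-enabled crb
sudo dnf install -y epel-release

Either way, the manual steps below show every package involved.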

  1. Let’s start with the gtkmm30-devel package, which has eight separate dependencies. Assuming you’re still in your $HOME/Downloads directory, you can run the following command to get the gtkmm30-devel for AlmaLinux 9:

    wget https://repo.almalinux.org/almalinux/9/CRB/x86_64/os/Packages/gtkmm30-devel-3.24.5-1.el9.x86_64.rpm

    It downloads the following package:

    gtkmm30-devel-3.24.5-1.el9.x86_64.rpm

    If you attempt to run it, the gtkmm30-devel package raises the following errors:

    sudo dnf install -y gtkmm30-devel-3.24.5-1.el9.x86_64.rpm
    Last metadata expiration check: 0:41:13 ago on Thu 17 Nov 2022 02:39:59 PM EST.
    Error: 
     Problem: conflicting requests
      - nothing provides pkgconfig(atkmm-1.6) >= 2.24.2 needed by gtkmm30-devel-3.24.5-1.el9.x86_64
      - nothing provides pkgconfig(cairomm-1.0) >= 1.12.0 needed by gtkmm30-devel-3.24.5-1.el9.x86_64
      - nothing provides pkgconfig(giomm-2.4) >= 2.54.0 needed by gtkmm30-devel-3.24.5-1.el9.x86_64
      - nothing provides pkgconfig(pangomm-1.4) >= 1.12.0 needed by gtkmm30-devel-3.24.5-1.el9.x86_64
    (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

    While you only get four errors, there are more packages required. You need to use the wget utility to download these packages. I would recommend you create a temporary gtkmm30 subdirectory inside your $HOME/Downloads directory and change to that directory before downloading these files.

    wget https://repo.almalinux.org/almalinux/9/CRB/x86_64/os/Packages/atkmm-devel-2.28.2-2.el9.x86_64.rpm
    wget https://repo.almalinux.org/almalinux/9/CRB/x86_64/os/Packages/cairomm-devel-1.14.2-10.el9.x86_64.rpm
    wget https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/gdk-pixbuf2-devel-2.42.6-2.el9.x86_64.rpm
    wget https://repo.almalinux.org/almalinux/9/CRB/x86_64/os/Packages/glibmm24-devel-2.66.1-1.el9.x86_64.rpm
    wget https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/gtk3-devel-3.24.31-2.el9.x86_64.rpm
    wget https://repo.almalinux.org/almalinux/9/CRB/x86_64/os/Packages/pangomm-devel-2.46.1-1.el9.x86_64.rpm
    wget https://repo.almalinux.org/almalinux/9/CRB/x86_64/os/Packages/libsigc++20-devel-2.10.7-2.el9.x86_64.rpm

    You need to run these as a set of prerequisites, so from your gtkmm30 subdirectory use the following dnf command as the sudoer user:

    sudo dnf install -y *.rpm


    Now return to your $HOME/Downloads directory and run the following command. You’ll notice that it installs and upgrades many more packages than you might expect.

    sudo dnf install -y gtkmm30-devel-3.24.5-1.el9.x86_64.rpm


    All that done, and you’ve only resolved the first of four dependencies.

  2. Next, tackle the libzip-devel package, which has a couple of dependencies. Assuming you’re still in your $HOME/Downloads directory, you can run the following commands to get the libzip-devel and its prerequisite packages for AlmaLinux 9:

    wget https://repo.almalinux.org/almalinux/9/CRB/x86_64/os/Packages/libzip-devel-1.7.3-7.el9.x86_64.rpm
    wget https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/cmake-filesystem-3.20.2-7.el9.x86_64.rpm
    wget https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/libzip-1.7.3-7.el9.x86_64.rpm

    You can run the prerequisites with the following command:

    sudo dnf install -y cmake*.rpm libzip-1.7.3*.rpm

    Now, you can run the libzip-devel package with this syntax:

    sudo dnf install -y libzip-devel*.rpm

    Having resolved the two dependencies, you can install the compression development kit. This completes the second step.

  3. Next, you need to apply the proj-devel package for AlmaLinux 9:

    wget https://download-ib01.fedoraproject.org/pub/epel/9/Everything/x86_64/Packages/p/proj-devel-8.2.0-1.el9.x86_64.rpm

    Now, you can run the proj-devel package with this syntax:

    sudo dnf install -y proj-devel-8.2.0-1.el9.x86_64.rpm
  4. Next, you need to apply the swig packages for AlmaLinux 9:

    wget https://repo.almalinux.org/almalinux/9/CRB/x86_64/os/Packages/swig-4.0.2-8.el9.x86_64.rpm
    wget https://repo.almalinux.org/almalinux/9/CRB/x86_64/os/Packages/swig-doc-4.0.2-8.el9.noarch.rpm
    wget https://repo.almalinux.org/almalinux/9/CRB/x86_64/os/Packages/swig-gdb-4.0.2-8.el9.x86_64.rpm
    sudo dnf install -y swig*.rpm
  5. Finally, you need to apply the mysql-workbench-community package for AlmaLinux 9. The download instructions were provided above. You apply the package with the following command.

    sudo dnf install -y mysql-workbench-community-8.0.31-1.el9.x86_64.rpm

After applying the dependent and mysql-workbench-community packages, you can launch MySQL Workbench by clicking the Activities symbol in the upper left-hand corner. That displays the nine-dot Show Applications icon. Click the Show Applications icon and choose the MySQL Workbench icon to launch MySQL Workbench.
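If you prefer the command line, the RPM should also install a mysql-workbench binary (an assumption you can verify with which mysql-workbench), so you can launch it from a terminal in the background:

mysql-workbench &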

You’ll be prompted with the following dialog. Just click the Don’t show this message again checkbox and the OK button to launch MySQL Workbench.

As always, I hope this helps those looking to solve a real problem.

Written by maclochlainn

November 20th, 2022 at 11:31 pm

AlmaLinux MySQL+Perl

without comments

A quick primer on Perl programs connecting to the MySQL database. It’s another set of coding examples for the AlmaLinux instance that I’m building for students. This one demonstrates basic Perl programs, connecting to MySQL, returning data sets by reference and position, dynamic queries, and input parameters to dynamic queries.
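These examples assume the Perl DBI and MySQL driver modules are already installed. On a fresh AlmaLinux instance, something like the following should pull them in (the package names are an assumption and may vary by repository):

sudo dnf install -y perl-DBI perl-DBD-MySQL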

  1. Naturally, a hello.pl is a great place to start:

    #!/usr/bin/perl
     
    # Hello World program.
    print "Hello World!\n";

    After setting the permissions to -rwxr-xr-x. with this command:

    chmod 755 hello.pl

    You call it like this from the Command-Line Interface (CLI):

    ./hello.pl

    It prints:

    Hello World!
  2. Next, a connect.pl program lets us test the Perl::DBI connection to the MySQL database.

    #!/usr/bin/perl
     
    # Import libraries.
    use strict;
    use warnings;
    use v5.10;     # for say() function
    use DBI;
     
    # Print with say() function message.
    say "Perl MySQL Connect Attempt.";
     
    # MySQL data source name should have a valid database as the
    # third argument; this uses the sakila demo database.
    my $dsn = "DBI:mysql:sakila";
     
    # Local variables to build the connection string.
    my $username = "student";
    my $password = "student";
     
    # Set arguments for MySQL database error management.
    my %attr = ( PrintError=>0,  # turn off error reporting via warn()
                 RaiseError=>1);   # turn on error reporting via die()           
     
    # Create a connection with a data source name, username, and password.
    my $dbh  = DBI->connect($dsn,$username,$password, \%attr);
     
    # Print with say() function valid connection message.
    say "Connected to the MySQL database.";

    After setting the permissions to -rwxr-xr-x., you call it with this CLI command:

    ./connect.pl

    It prints:

    Perl MySQL Connect Attempt.
    Connected to the MySQL database.
  3. After connecting to the database, let’s query a couple columns by reference notation in a static.pl program. This one just returns the result of the MySQL version() and database() functions.

    #!/usr/bin/perl
     
    # Import libraries.
    use strict;
    use warnings;
    use v5.10;     # for say() function
    use DBI;
     
    # Print with say() function message.
    say "Perl MySQL Connect Attempt.";
     
    # MySQL data source name must have a valid database as the
    # third argument; this uses the sakila demo database.
    my $dsn = "DBI:mysql:sakila";
     
    # Local variables to build the connection string.
    my $username = "student";
    my $password = "student";
     
    # Set arguments for MySQL database error management.
    my %attr = ( PrintError=>0,  # turn off error reporting via warn()
                 RaiseError=>1); # turn on error reporting via die()           
     
    # Create a connection with a data source name, username, and password.
    my $dbh  = DBI->connect($dsn,$username,$password, \%attr);
     
    # Create a static SQL statement or query.
    my $sth = $dbh->prepare("SELECT version() AS version \
    	                 ,      database() AS db_name");
     
    # Execute the static statement.
    $sth->execute() or die "Execution failed: $dbh->errstr()";
     
    # Read data and print by reference.
    print "----------------------------------------\n";
    while (my $ref = $sth->fetchrow_hashref()) {
      print "MySQL Version:  $ref->{'version'}\nMySQL Database: $ref->{'db_name'}\n";
    }
    print "----------------------------------------\n";
     
    # Close the statement.
    $sth->finish;
     
    # Disconnect from database connection.
    $dbh->disconnect();
     
    # Print with say() function valid connection message.
    say "Connected to the MySQL database.";

    After setting the permissions to -rwxr-xr-x., you call it with this CLI command:

    ./static.pl

    It prints:

    Perl MySQL Connect Attempt.
    ----------------------------------------
    MySQL Version:  8.0.30
    MySQL Database: sakila
    ----------------------------------------
    Connected to the MySQL database.
  4. After connecting to the database and returning values by reference notation, let’s return the values as an array of rows in a columns.pl program. This one just returns data from the film table of the sakila database. It is a static query because all the values are contained inside the SQL statement.

    #!/usr/bin/perl
     
    # Import libraries.
    use strict;
    use warnings;
    use v5.10;     # for say() function
    use DBI;
     
    # Print with say() function message.
    say "Perl MySQL Connect Attempt.";
     
    # MySQL data source name must have a valid database as the
    # third argument; this uses the sakila demo database.
    my $dsn = "DBI:mysql:sakila";
     
    # Local variables to build the connection string.
    my $username = "student";
    my $password = "student";
     
    # Set arguments for MySQL database error management.
    my %attr = ( PrintError=>0,  # turn off error reporting via warn()
                 RaiseError=>1); # turn on error reporting via die()           
     
    # Create a connection with a data source name, username, and password.
    my $dbh  = DBI->connect($dsn,$username,$password, \%attr);
     
    # Create a static SQL statement or query.
    my $sth = $dbh->prepare("SELECT title         \
    	                 ,      release_year  \
    			 ,      rating        \
    			 FROM   film          \
    			 WHERE  title LIKE 'roc%'");
     
    # Execute the static statement.
    $sth->execute() or die "Execution failed: $dbh->errstr()";
     
    # Read data and print by comma-delimited row position.
    print "----------------------------------------\n";
    while (my @row = $sth->fetchrow_array()) {
      print join(", ", @row), "\n";
    }
    print "----------------------------------------\n";
     
    # Close the statement.
    $sth->finish;
     
    # Disconnect from database connection.
    $dbh->disconnect();
     
    # Print with say() function valid connection message.
    say "Connected to the MySQL database.";

    After setting the permissions to -rwxr-xr-x., you call it with this CLI command:

    ./columns.pl

    It prints:

    Perl MySQL Connect Attempt.
    ----------------------------------------
    ROCK INSTINCT, 2006, G
    ROCKETEER MOTHER, 2006, PG-13
    ROCKY WAR, 2006, PG-13
    ----------------------------------------
    Connected to the MySQL database.
  5. After connecting to the database and returning values by reference notation, let’s return the values as an array of rows in a dynamic.pl program. This one just returns data from the film table of the sakila database. It is a dynamic query because a string is passed to the execute method, and that value is bound to a ? placeholder in the SQL statement.

    #!/usr/bin/perl
     
    # Import libraries.
    use strict;
    use warnings;
    use v5.10;     # for say() function
    use DBI;
     
    # Print with say() function message.
    say "Perl MySQL Connect Attempt.";
     
    # MySQL data source name must have a valid database as the
    # third argument; this uses the sakila demo database.
    my $dsn = "DBI:mysql:sakila";
     
    # Local variables to build the connection string.
    my $username = "student";
    my $password = "student";
     
    # Set arguments for MySQL database error management.
    my %attr = ( PrintError=>0,  # turn off error reporting via warn()
                 RaiseError=>1); # turn on error reporting via die()           
     
    # Create a connection with a data source name, username, and password.
    my $dbh  = DBI->connect($dsn,$username,$password, \%attr);
     
    # Create a dynamic SQL statement or query.
    my $sth = $dbh->prepare("SELECT title         \
    	                 ,      release_year  \
    			 ,      rating        \
    			 FROM   film          \
    			 WHERE  title LIKE CONCAT(?,'%')");
     
    # Execute the dynamic statement by providing an input parameter.
    $sth->execute('roc') or die "Execution failed: $dbh->errstr()";
     
    # Read data and print by comma-delimited row position.
    print "----------------------------------------\n";
    while (my @row = $sth->fetchrow_array()) {
      print join(", ", @row), "\n";
    }
    print "----------------------------------------\n";
     
    # Close the statement.
    $sth->finish;
     
    # Disconnect from database connection.
    $dbh->disconnect();
     
    # Print with say() function valid connection message.
    say "Connected to the MySQL database.";

    After setting the permissions to -rwxr-xr-x., you call it with this CLI command:

    ./dynamic.pl

    It prints:

    Perl MySQL Connect Attempt.
    ----------------------------------------
    ROCK INSTINCT, 2006, G
    ROCKETEER MOTHER, 2006, PG-13
    ROCKY WAR, 2006, PG-13
    ----------------------------------------
    Connected to the MySQL database.
  6. After connecting to the database and returning values by reference notation, let’s return the values as an array of rows in an input.pl program. This one just returns data from the film table of the sakila database. It is a dynamic query because an input parameter is passed to a local variable, and the local variable is bound to a ? placeholder in the SQL statement.

    #!/usr/bin/perl
     
    # Import libraries.
    use strict;
    use warnings;
    use v5.10;     # for say() function
    use DBI;
     
    # Get the index value of the last argument in the
    # argument list.
    my $argc = $#ARGV;
     
    # Accept first argument value as parameter.
    my $param = $ARGV[$argc];
     
    # Verify variable value assigned.
    if (not defined $param) {
      die "Need parameter value.\n";
    }
     
    # Print with say() function message.
    say "Perl MySQL Connect Attempt.";
     
    # MySQL data source name must have a valid database as the
    # third argument; this uses the sakila demo database.
    my $dsn = "DBI:mysql:sakila";
     
    # Local variables to build the connection string.
    my $username = "student";
    my $password = "student";
     
    # Set arguments for MySQL database error management.
    my %attr = ( PrintError=>0,  # turn off error reporting via warn()
                 RaiseError=>1); # turn on error reporting via die()           
     
    # Create a connection with a data source name, username, and password.
    my $dbh  = DBI->connect($dsn,$username,$password, \%attr);
     
    # Create a dynamic SQL statement or query.
    my $sth = $dbh->prepare("SELECT title         \
    	                 ,      release_year  \
    			 ,      rating        \
    			 FROM   film          \
    			 WHERE  title LIKE CONCAT(?,'%')");
     
    # Execute the dynamic statement with the input parameter.
    $sth->execute($param) or die "Execution failed: $dbh->errstr()";
     
    # Read data and print by comma-delimited row position.
    print "----------------------------------------\n";
    while (my @row = $sth->fetchrow_array()) {
      print join(", ", @row), "\n";
    }
    print "----------------------------------------\n";
     
    # Close the statement.
    $sth->finish;
     
    # Disconnect from database connection.
    $dbh->disconnect();
     
    # Print with say() function valid connection message.
    say "Connected to the MySQL database.";

    After setting the permissions to -rwxr-xr-x., you call it with this CLI command:

    ./input.pl ta

    It prints:

    Perl MySQL Connect Attempt.
    ----------------------------------------
    TADPOLE PARK, 2006, PG
    TALENTED HOMICIDE, 2006, PG
    TARZAN VIDEOTAPE, 2006, PG-13
    TAXI KICK, 2006, PG-13
    ----------------------------------------
    Connected to the MySQL database.

I think these examples cover most of the basic elements of writing Perl against the MySQL database. If I missed something you think would be useful, please advise. As always, I hope this helps those working with the MySQL and Perl products.

Written by maclochlainn

November 17th, 2022 at 12:01 am

AlmaLinux LAMP

without comments


After installing and configuring MySQL 8.0.30, I installed the Apache Web Server, PHP and the MySQLi packages. Here are the step-by-step instructions after installing and configuring the MySQL Server and provisioning a student user and the sakila and studentdb databases (blog for those steps). After installing the major components, I completed the HTTPS configuration steps for Apache 2.

The installation steps are:

  1. Install the Apache packages as the sudoer user with this command:

    sudo dnf install -y httpd
  2. Enable Apache as the sudoer user with this command:

    sudo chkconfig httpd on

    This returns the following completion message:

    Note: Forwarding request to 'systemctl enable httpd.service'.
    Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.

    A quick Linux syntax note in the event you want to confirm the link or link target later. You can use the following syntax as a sudoer user to find the link:

    ls `find /etc -type l | grep httpd.service 2>/dev/null`

    and the following syntax as a sudoer user to find the link’s target:

    readlink `find /etc -type l | grep httpd.service 2>/dev/null`
  3. Unless you reboot the operating system, you still need to start the Apache service as the sudoer user with this command:

    sudo apachectl start
  4. At this point, you need to check the firewall settings because Apache can’t even read localhost yet. If you’re new to these firewall commands, you should consider reviewing Korbin Brown’s tutorial. As the sudoer user, check the Apache available services with this command:

    sudo firewall-cmd --zone=public --list-services

    It should return:

    cockpit dhcpv6-client ssh

    Add the following services and ports with these commands, then reload the firewall so the permanent rules take effect:

    sudo firewall-cmd --zone=public --add-port 80/tcp --permanent
    sudo firewall-cmd --zone=public --add-port 443/tcp --permanent
    sudo firewall-cmd --zone=public --add-port 8080/tcp --permanent
    sudo firewall-cmd --zone=public --add-service=http --permanent
    sudo firewall-cmd --zone=public --add-service=https --permanent
    sudo firewall-cmd --reload

    Check the open ports with the following command:

    sudo firewall-cmd --zone=public --list-ports

    It should return:

    80/tcp 443/tcp 8080/tcp

    Check the open services with the following command:

    sudo firewall-cmd --zone=public --list-services

    It should return:

    cockpit dhcpv6-client http https ssh
  5. Create the hello.htm file in the /var/www/html directory as the root user:

    <html>
    <body>
    Hello World!
    </body>
    </html>

    Restart the Apache service as the sudoer user:

    sudo apachectl restart

    Then, you can launch the Firefox browser and type the following:

    localhost/hello.htm

    It should print “Hello World!” in the browser.

  6. Install the php package as the sudoer user with the following command:

    sudo dnf install -y php

    Create the info.php file in the /var/www/html directory as the root user:

    <?php
      phpinfo();
    ?>

    Restart the Apache service as the sudoer user:

    sudo apachectl restart

    Then, you can launch the Firefox browser and type the following:

    localhost/info.php

    It should return the following in the browser.

  7. Install the php-mysqli package as the sudoer user with the following command:

    sudo dnf install -y php-mysqli

    Create the mysqli_check.php file in the /var/www/html directory as the root user:

    <html>
    <header>
    <title>Static Query Object Sample</title>
    </header>
    <body>
    <?php
      if (!function_exists('mysqli_init') && !extension_loaded('mysqli')) {
        print 'mysqli not installed.'; }
      else {
        print 'mysqli installed.'; }
      if (!function_exists('pdo_init') && !extension_loaded('pdo')) {
        print '<p>pdo not installed.</p>'; }
      else {
        print '<p>pdo installed.</p>'; }
    ?>
    </body>
    </html>

    Restart the Apache service as the sudoer user:

    sudo apachectl restart

    Then, you can launch the Firefox browser and type the following:

    localhost/mysqli_check.php

    It should print the following in the browser.

    mysqli installed.
     
    pdo installed.
  8. Check if the mod_ssl module is installed with the following command:

    rpm -qa | grep mod_ssl

    Assuming it’s not installed, you install it like this:

    sudo dnf install -y mod_ssl

    Recheck after installing mod_ssl with the following command:

    rpm -qa | grep mod_ssl

    It should print:

    mod_ssl-2.4.51-7.el9_0.x86_64
  9. AlmaLinux and Apache require you to resolve the ServerName values and the public and private keys. Run this command on AlmaLinux to begin verifying and configuring the ServerName values and the public and private keys:

    httpd -M | grep ssl

    Assuming a new installation where MySQL and Apache were just configured, you should get the following message:

    AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using localhost.localdomain. Set the 'ServerName' directive globally to suppress this message
     ssl_module (shared)

    Recheck the failure for more detail with this command:

    sudo systemctl status httpd.service -l --no-pager

    It should print:

    ● httpd.service - The Apache HTTP Server
         Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
        Drop-In: /usr/lib/systemd/system/httpd.service.d
                 └─php-fpm.conf
         Active: active (running) since Sun 2022-11-13 22:39:07 EST; 1h 37min ago
           Docs: man:httpd.service(8)
       Main PID: 1351 (httpd)
         Status: "Total requests: 0; Idle/Busy workers 100/0;Requests/sec: 0; Bytes served/sec:   0 B/sec"
          Tasks: 213 (limit: 23280)
         Memory: 43.1M
            CPU: 2.733s
         CGroup: /system.slice/httpd.service
                 ├─1351 /usr/sbin/httpd -DFOREGROUND
                 ├─1443 /usr/sbin/httpd -DFOREGROUND
                 ├─1452 /usr/sbin/httpd -DFOREGROUND
                 ├─1456 /usr/sbin/httpd -DFOREGROUND
                 └─1459 /usr/sbin/httpd -DFOREGROUND
     
    Nov 13 22:39:06 localhost.localdomain systemd[1]: Starting The Apache HTTP Server...
    Nov 13 22:39:07 localhost.localdomain httpd[1351]: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using localhost.localdomain. Set the 'ServerName' directive globally to suppress this message
    Nov 13 22:39:07 localhost.localdomain systemd[1]: Started The Apache HTTP Server.
    Nov 13 22:39:07 localhost.localdomain httpd[1351]: Server configured, listening on: port 80

    The next set of steps fixes the ServerName values.

    • Generically, on Linux you need to find the files to modify. You can use the following command from within the /etc directory to find the configuration files that include ServerName in them. Their values will be preceded by a # symbol because they’re comments by default.

      find /etc -type f | xargs grep -i ServerName

      It should return the following:

      ./httpd/conf.d/ssl.conf:#ServerName www.example.com:443
      ./httpd/conf/httpd.conf:# ServerName gives the name and port that the server uses to identify itself.
      ./httpd/conf/httpd.conf:#ServerName www.example.com:80
      ./dnsmasq.conf:# tftp_servername (the third option to dhcp-boot) and in that
    • Add the following line to the ssl.conf file as the root user:

      ServerName localhost:443
    • Add the following line to the httpd.conf file as the root user:

      ServerName localhost:443
    • After adding the two values, restart Apache with the following command:

      sudo apachectl restart
    • Rerun the systemctl command to get the status of the httpd service with this command:

      sudo systemctl status httpd.service -l --no-pager

      It should print:

      ● httpd.service - The Apache HTTP Server
           Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
          Drop-In: /usr/lib/systemd/system/httpd.service.d
                   └─php-fpm.conf
           Active: active (running) since Mon 2022-11-14 00:37:03 EST; 3min 23s ago
             Docs: man:httpd.service(8)
         Main PID: 53596 (httpd)
           Status: "Total requests: 0; Idle/Busy workers 100/0;Requests/sec: 0; Bytes served/sec:   0 B/sec"
            Tasks: 213 (limit: 23280)
           Memory: 34.0M
              CPU: 183ms
           CGroup: /system.slice/httpd.service
                   ├─53596 /usr/sbin/httpd -DFOREGROUND
                   ├─53597 /usr/sbin/httpd -DFOREGROUND
                   ├─53598 /usr/sbin/httpd -DFOREGROUND
                   ├─53599 /usr/sbin/httpd -DFOREGROUND
                   └─53600 /usr/sbin/httpd -DFOREGROUND
       
      Nov 14 00:37:03 localhost.localdomain systemd[1]: Starting The Apache HTTP Server...
      Nov 14 00:37:03 localhost.localdomain systemd[1]: Started The Apache HTTP Server.
      Nov 14 00:37:03 localhost.localdomain httpd[53596]: Server configured, listening on: port 443, port 80
  10. Your next step requires setting up an SSL Certificate. Consistent with the design to build a standalone test system that uses a DHCP assigned IP address to resolve a localhost server name, you require the following two tasks to create an openssl self-signed certificate.

    • On the new instance, you create a private subdirectory with this command:

      sudo mkdir /etc/ssl/private

    • Then, you can build a self-signed certificate with this command:
      sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/apache-selfsigned.key -out /etc/ssl/certs/apache-selfsigned.crt

      The openssl command will prompt you for these values to create a private key (a non-interactive variant follows the prompt listing):

      You are about to be asked to enter information that will be incorporated
      into your certificate request.
      What you are about to enter is what is called a Distinguished Name or a DN.
      There are quite a few fields but you can leave some blank
      For some fields there will be a default value,
      If you enter '.', the field will be left blank.
      -----
      Country Name (2 letter code) [XX]:
      State or Province Name (full name) []:
      Locality Name (eg, city) [Default City]:
      Organization Name (eg, company) [Default Company Ltd]:
      Organizational Unit Name (eg, section) []:
      Common Name (eg, your name or your server's hostname) []:
      Email Address []:
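      If you’d rather skip the interactive prompts, openssl req accepts the subject on the command line with the -subj option; here’s a minimal sketch for this localhost setup:

      sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
          -keyout /etc/ssl/private/apache-selfsigned.key \
          -out /etc/ssl/certs/apache-selfsigned.crt \
          -subj "/CN=localhost"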
  11. Your last step requires three tasks to configure Apache to use SSL.

    • You need to create the following sites-available directory with the following command as the root user:

      mkdir /etc/httpd/sites-available

    • Add the following localhost.conf file to the /etc/httpd/sites-available directory:
      <VirtualHost *:443>
         ServerName localhost
         DocumentRoot /var/www/html
       
         SSLEngine on
         SSLCertificateFile /etc/ssl/certs/apache-selfsigned.crt
         SSLCertificateKeyFile /etc/ssl/private/apache-selfsigned.key
      </VirtualHost>

    • Restart Apache with the following command:

      sudo apachectl restart
  12. After configuring everything, let’s test our self-signed HTTPS skunkworks. Launch the default Firefox browser and enter the following URL, which uses the mysqli_check.php file from step #7:

    https://localhost/mysqli_check.php

    It will raise a warning message about a potential security risk, which is caused by our self-signed certificate. Click the Advanced… button and you will see the option to Accept the Risk and Continue. If you want to use the self-signed and contained AlmaLinux LAMP stack for developer testing, accept the risk.

    Having assumed the risk, the browser displays the confirmation output from the mysqli_check.php page.

As always, I hope this helps those looking to install MySQL, PHP, on AlmaLinux.

Written by maclochlainn

October 30th, 2022 at 11:16 pm

AlmaLinux MySQL+Python

without comments

After installing and configuring MySQL 8.0.30, I installed the Python connector. During that process on AlmaLinux, there were several changes since I last installed Python’s mysql module. Here are the step-by-step instructions after installing and configuring MySQL Server (blog for those steps).

Using the MySQL Connector/Python X DevAPI Reference, you must install the pip utility before you install the library. You install the pip utility as a sudoer user with the following command:

sudo yum install -y pip

Then, using the pip utility as a sudoer user install the mysql-connector-python module with the following command:

sudo pip install mysql-connector-python

Please note that this type of library installation can cause problems when you maintain multiple testing environments. If you plan to maintain multiple testing environments, please install this library in a virtual environment.
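For example, here is a minimal sketch of an isolated install (the mysql-env directory name is arbitrary):

python3 -m venv mysql-env
source mysql-env/bin/activate
pip install mysql-connector-python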

You create a bare-bones Python connection test program, like connect.py below:

# Import the library.
import mysql.connector
from mysql.connector import errorcode
 
try:
  # Open connection.
  cnx = mysql.connector.connect(user='student',
                                password='student',
                                host='127.0.0.1',
                                database='studentdb')
 
  # Print the value.
  print("Database connection resolved.")
 
# Handle exception and close connection.
except mysql.connector.Error as e:
  if e.errno == errorcode.ER_ACCESS_DENIED_ERROR:
    print("Something is wrong with your user name or password")
  elif e.errno == errorcode.ER_BAD_DB_ERROR:
    print("Database does not exist")
  else:
    print(e)
 
# Close the connection when the try block completes.
else:
  cnx.close()

You test the connect.py program with this command:

python connect.py

It’ll return the following provided you provisioned the student user and studentdb database:

Database connection resolved.

If you’d like to run it without calling the python executable, you can add the following as the first line:

#!/usr/bin/python

and, change the default file permissions from

-rw-rw-r--. 1 student student 717 Oct 30 13:57 connect.py

with the chmod command to

-rwxr-xr-x. 1 student student 717 Oct 30 13:57 connect.py
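The command looks like this:

chmod 755 connect.py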

These instructions should set you up to develop Python code against your AlmaLinux MySQL studentdb database. You use this syntax, assuming a default $PATH environment variable that excludes the present working directory.

./connect.py

As always, I hope this helps those trying to get a complete solution.

Written by maclochlainn

October 30th, 2022 at 12:37 pm

MySQL on AlmaLinux

with 2 comments

After installing AlmaLinux in a VMware VM on my MacBook Pro (Intel Chip), and updating the packages with the following command:

sudo dnf upgrade --refresh -y

MySQL was first on my installation and configuration list. Here are the commands to install and configure it on AlmaLinux.

Install the MySQL Server packages and dependents:

sudo dnf install mysql mysql-server -y

Install the MySQL service utilities with the initscripts package, using the following command:

sudo yum install -y initscripts

Start the MySQL daemon with the following command:

sudo service mysqld start

Connect and verify the root user can connect to the database. At this point, you’ve not set the root user’s password and should use the following syntax:

mysql -uroot

It should connect and display:

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 10
Server version: 8.0.30 Source distribution
 
Copyright (c) 2000, 2022, Oracle and/or its affiliates.
 
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
 
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
 
mysql>

Exit MySQL and run the following command:

mysql_secure_installation

It’ll run you through the following prompts, which you may change to suit your installation. My choices suit a trivial student database isolated inside a VM.

Securing the MySQL server deployment.
 
Connecting to MySQL using a blank password.
 
VALIDATE PASSWORD COMPONENT can be used to test passwords
and improve security. It checks the strength of password
and allows the users to set only those passwords which are
secure enough. Would you like to setup VALIDATE PASSWORD component?
 
Press y|Y for Yes, any other key for No: n
Please set the password for root here.
 
New password: 
 
Re-enter new password: 
By default, a MySQL installation has an anonymous user,
allowing anyone to log into MySQL without having to have
a user account created for them. This is intended only for
testing, and to make the installation go a bit smoother.
You should remove them before moving into a production
environment.
 
Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
Success.
 
 
Normally, root should only be allowed to connect from
'localhost'. This ensures that someone cannot guess at
the root password from the network.
 
Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y
Success.
 
By default, MySQL comes with a database named 'test' that
anyone can access. This is also intended only for testing,
and should be removed before moving into a production
environment.
 
Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y
 - Dropping test database...
Success.
 
 - Removing privileges on test database...
Success.
 
Reloading the privilege tables will ensure that all changes
made so far will take effect immediately.
 
Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y
Success.
 
All done!

The next step requires setting up the sample sakila and studentdb databases. The syntax has changed from prior releases. Here are the new steps:

  1. Grant the root user the privilege to grant to others, which root does not have by default. You use the following syntax as the MySQL root user:

    mysql> GRANT ALL ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
  2. Download the sakila database, which is available on the MySQL documentation site’s example databases page. Click on the sakila database’s TGZ download.

    When you expand the sakila TGZ file it creates a sakila-db folder in the /home/student/Downloads directory. Copy the sakila-db folder into the /home/student/Data/sakila directory. Then, change to the /home/student/Data/sakila/sakila-db directory, connect to mysql as the root user, and run the following commands:

    mysql> SOURCE /home/student/Data/sakila/sakila-db/sakila-schema.sql
    mysql> SOURCE /home/student/Data/sakila/sakila-db/sakila-data.sql
  3. Create the studentdb database with the following command as the MySQL root user:

    mysql> CREATE DATABASE studentdb;
  4. Create the user with a clear English password and grant the user student full privileges on the sakila and studentdb databases:

    mysql> CREATE USER 'student'@'localhost' IDENTIFIED WITH mysql_native_password BY 'student';
    mysql> GRANT ALL ON studentdb.* TO 'student'@'localhost';
    mysql> GRANT ALL ON sakila.* TO 'student'@'localhost';
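As a quick sanity check, the stock sakila data set should report 1,000 films; a query like this confirms the load worked:

mysql> SELECT COUNT(*) FROM sakila.film;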

You can now connect to a sandboxed sakila database with the student user’s credentials, like:

mysql -ustudent -p -Dsakila

or, you can now connect to a sandboxed studentdb database with the student user’s credentials, like:

mysql -ustudent -p -Dstudentdb

As always, I hope code-complete and step-by-step instructions help others get things done more quickly.

Written by maclochlainn

October 27th, 2022 at 12:10 am

MySQL Workbench Keys

without comments

As I teach students how to create tables in MySQL Workbench, it’s always important to review the meaning of the checkbox keys. Then, I need to remind them that every table requires a natural key from our prior discussion on normalization. I explain that a natural key is a compound candidate key (made up of two or more column values), and that it naturally defines uniqueness for each row in a table.

Then, we discuss surrogate keys, which are typically ID column keys. I explain that surrogate keys are driven by sequences in the database. While a number of databases disclose the name of sequences, MySQL treats the sequence as an attribute of the table. In Object-Oriented Analysis and Design (OOAD), that makes the sequence a member of the table by composition rather than aggregation. Surrogate keys are also unique in the table but should never be used to determine uniqueness like the natural key. Surrogate keys are also candidate keys, like a VIN number uniquely identifies a vehicle.

In a well-designed table you always have two candidate keys: one describes the unique row and the other assigns a number to it. While you can perform joins by using either candidate key, you should always use the surrogate key for join statements. This means you elect, or choose, the surrogate candidate key as the primary key. Then, you build a unique index for the natural key, which lets you query any unique row with human-decipherable words, as the sketch below shows.
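Here is a minimal sketch of that pattern in MySQL DDL; the employee table and its columns are hypothetical stand-ins:

/* Surrogate key as the primary key, natural key enforced by a unique index. */
CREATE TABLE employee
( employee_id  INT UNSIGNED NOT NULL AUTO_INCREMENT
, first_name   VARCHAR(20)  NOT NULL
, last_name    VARCHAR(20)  NOT NULL
, hire_date    DATE         NOT NULL
, PRIMARY KEY (employee_id)
, UNIQUE INDEX employee_uq (first_name, last_name, hire_date));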

The column attribute table for MySQL Workbench is:

PK: Designates a primary key column.
NN: Designates a not-null column constraint.
UQ: Designates a column that contains a unique value for every row.
BIN: Designates a VARCHAR data type column whose values are stored in a case-sensitive fashion. You can’t apply this constraint to other data types.
UN: Designates a column that contains an unsigned numeric data type. The possible values are 0 to the maximum number of the data type, like integer, float, or double. The value 0 isn’t possible when you also select the PK and AI checkboxes, which ensure the column automatically increments to the maximum value of the column.
ZF: Designates zero fill, which populates zeros in front of any number data type until all space is consumed; it acts like a left-pad function with zeros.
AI: Designates AUTO_INCREMENT and should only be checked for a surrogate primary key value.

All surrogate key columns should have the PK, NN, UN, and AI checkboxes checked. The default behavior checks only the PK and NN checkboxes and leaves the UN and AI boxes unchecked. You should also click the UN checkbox along with the AI checkbox for all surrogate key columns. The AI checkbox enables AUTO_INCREMENT behavior. The UN checkbox ensures you have the maximum number of integers before you would migrate the table to a double-precision number. More or less, this is what I wrote in MySQL Workbench Data Modeling & Development as the primary product guide in 2013, and what you find in the MySQL Workbench Manual 8.1.10.2 Columns Tab section.

Active tables grow quickly, and using a signed int means you run out of rows more quickly. This is an important design consideration because converting to an unsigned int later adds a maintenance task. The maintenance task requires changing the data type of all dependent foreign key columns before changing the primary key column’s data type. Assuming your design uses referential integrity constraints, implemented as foreign keys, you will need to:

  • Remove any foreign key constraints before changing the referenced primary key and dependent foreign key column’s data types.
  • Change the primary and foreign key column’s data types.
  • Add back foreign key constraints after changing the referenced primary key and dependent foreign key column’s data types.

While fixing a less optimal design is a relatively simple scripting exercise for most data engineers, you can avoid this maintenance task altogether: implement all surrogate primary key columns and foreign key columns with the unsigned int as their initial data type. A sketch of the maintenance task follows.
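This sketch assumes a hypothetical parent table with one dependent child table; the table, column, and constraint names are stand-ins:

/* Drop the dependent foreign key constraint first. */
ALTER TABLE child DROP FOREIGN KEY child_fk;

/* Change the referenced primary key and dependent foreign key column data types. */
ALTER TABLE parent MODIFY parent_id INT UNSIGNED NOT NULL AUTO_INCREMENT;
ALTER TABLE child  MODIFY parent_id INT UNSIGNED NOT NULL;

/* Add back the foreign key constraint. */
ALTER TABLE child
  ADD CONSTRAINT child_fk FOREIGN KEY (parent_id) REFERENCES parent (parent_id);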

The following small ERD displays a multi-language lookup table, which is preferable to a monolinguistic enum data type:

A design uses a lookup table when there are known lists of selections to make. There are known lists that occur in most if not all business applications. Maintaining that list of values is an application setup task and requires the development team to build an entry and update form to input and maintain the lists.

Some MySQL examples demonstrate these types of lists by using the MySQL enum data type. However, the MySQL enum type doesn’t support multilingual implementations, isn’t readily portable to other relational databases, and has a number of limitations.

A lookup table is the better solution to using an enum data type. It typically follows this pattern:

  • Identify the target table and column where a list is useful. Use the table_name and column_name columns as a super key to identify the location where the list belongs.
  • Identify a unique type identifier for the list. Store the unique type value in the type column of the lookup table.
  • Use a lang column to enable multilingual lists.

The combination of the table_name, column_name, type, and lang columns lets you identify unique sets. You can find a monolingual implementation in two older blog posts on this site.

The column view of the lookup table shows the appropriate design checkboxes, which the following DDL sketch approximates.
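This is a minimal sketch of the lookup table implied by the pattern and the query below; the column sizes are assumptions:

CREATE TABLE lookup
( lookup_id    INT UNSIGNED NOT NULL AUTO_INCREMENT
, table_name   VARCHAR(64)  NOT NULL
, column_name  VARCHAR(64)  NOT NULL
, type         VARCHAR(30)  NOT NULL
, lang         VARCHAR(30)  NOT NULL
, meaning      VARCHAR(30)  NOT NULL
, PRIMARY KEY (lookup_id)
, UNIQUE INDEX lookup_uq (table_name, column_name, type, lang));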

While most foreign keys use copies of surrogate keys, there are instances when you copy the natural key value from another table rather than the surrogate key. This is done when your application will frequently query the dependent lookup table without a join to the language table, which means the foreign key value should be a human-friendly foreign key value that works as a super key.

A super key is a column or set of columns that uniquely identifies a row in the scope of a relation. For this example, the lang column identifies rows that belong to a language in a multilingual data model. Belonging to a language is the relation between the lookup and language tables. It is also a key when filtering rows with a specific lang value from the lookup table.

You navigate to the foreign key tab to create a lookup_fk foreign key constraint, which the SQL sketch below approximates.
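In SQL terms, and assuming the language table has a unique lang column, the constraint would look something like this:

ALTER TABLE lookup
  ADD CONSTRAINT lookup_fk FOREIGN KEY (lang) REFERENCES language (lang);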

With this type of foreign key constraint, you copy the lang value from the language table when inserting the lookup table values. Then, your HTML forms can use the lookup table’s meaning column in any of the supported languages, like:

SELECT lookup_id
,      type
,      meaning
FROM   lookup
WHERE  table_name = 'some_table_name'
AND    column_name = 'some_column_name'
AND    lang = 'some_lang_name';

The type column value isn’t used in the WHERE clause to filter the data set because it is unique within the relation of the table_name, column_name, and lang column values. It is always non-unique when you exclude the lang column value, and potentially non-unique for another combination of the table_name and column_name column values.

As a rule, most foreign key references are to the lookup table’s surrogate primary key because the meaning column’s value is too long to copy into the referencing table or subject to change in the base or translated languages. Small values, with intrinsic meaning, are stored in a code column in many implementations, like the lang column. Those typically follow the same implementation rule as the lang column and are copied into the referencing table.

If I’ve left questions, let me know. Otherwise, I hope this helps qualify a best design practice.

Written by maclochlainn

October 16th, 2022 at 5:53 pm

MySQL Query from JSON

with one comment

One of my students asked how you could get JSON data out in tabular format. I said they should look at Øystein Grøvlen’s JSON_TABLE – Best of Both Worlds blog post from 2018. Unfortunately, the student wanted another example with the Video Store model that we use in class.

For clarity, all path definitions start with a $ followed by other selectors:

  • A period followed by a name, such as $.website
  • [N] where N is the position in a zero-indexed array
  • The .* wildcard evaluates all members of an object
  • The [*] wildcard evaluates all members of an array
  • The prefix and suffix wildcard, **, evaluates to all paths that begin with the named prefix and end with the named suffix

So, here’s a quick supplement to what’s already there. It assumes you created an example table based on my prior blog post that looks like this:

+----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| id | struct                                                                                                                                                                                          |
+----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|  1 | {"contact": [{"last_name": "Winn", "first_name": "Randi"}, {"last_name": "Winn", "first_name": "Brian"}], "account_number": "US00001"}                                                          |
|  2 | {"contact": [{"last_name": "Vizquel", "first_name": "Oscar"}, {"last_name": "Vizquel", "first_name": "Doreen"}], "account_number": "US00002"}                                                   |
|  3 | {"contact": [{"last_name": "Sweeney", "first_name": "Meaghan"}, {"last_name": "Sweeney", "first_name": "Matthew"}, {"last_name": "Sweeney", "first_name": "Ian"}], "account_number": "US00003"} |
|  4 | {"contact": [{"last_name": "Clinton", "first_name": "Goeffrey"}], "account_number": "US00004"}                                                                                                  |
|  5 | {"contact": [{"last_name": "Moss", "first_name": "Wendy"}], "account_number": "US00005"}                                                                                                        |
|  6 | {"contact": [{"last_name": "Gretelz", "first_name": "Simon"}], "account_number": "US00006"}                                                                                                     |
|  7 | {"contact": [{"last_name": "Royal", "first_name": "Elizabeth"}], "account_number": "US00007"}                                                                                                   |
|  8 | {"contact": [{"last_name": "Smith", "first_name": "Brian"}], "account_number": "US00008"}                                                                                                       |
|  9 | {"contact": [{"last_name": "Potter", "first_name": "Harry"}, {"last_name": "Potter", "first_name": "Ginny"}, {"last_name": "Potter", "first_name": "Lily"}], "account_number": "US00011"}       |
+----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
9 rows in set (0.01 sec)
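For instance, combining two of the selectors above, the path $.contact[0].first_name grabs the first contact’s first name; against the table above, a query like this should return "Randi" (with the enclosing double quotes) for id 1:

SELECT JSON_EXTRACT(struct, "$.contact[0].first_name") AS first_contact
FROM   example
WHERE  id = 1;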

You can query the account_number key value like this:

SELECT id
,      JSON_EXTRACT(struct, "$.account_number") AS account_no
FROM   example;

It returns:

+----+------------+
| id | account_no |
+----+------------+
|  1 | "US00001"  |
|  2 | "US00002"  |
|  3 | "US00003"  |
|  4 | "US00004"  |
|  5 | "US00005"  |
|  6 | "US00006"  |
|  7 | "US00007"  |
|  8 | "US00008"  |
|  9 | "US00011"  |
+----+------------+
9 rows in set (0.00 sec)

You use the JSON_TABLE function to get the embedded array elements of first and last name, like:

SELECT id
,      contact.*
FROM   example CROSS JOIN
       JSON_TABLE(
           struct
         ,"$.contact[*]"
         COLUMNS(
             lname JSON PATH "$.last_name"
           , fname JSON PATH "$.first_name")) AS contact;

It returns:

+----+-----------+-------------+
| id | lname     | fname       |
+----+-----------+-------------+
|  1 | "Winn"    | "Randi"     |
|  1 | "Winn"    | "Brian"     |
|  2 | "Vizquel" | "Oscar"     |
|  2 | "Vizquel" | "Doreen"    |
|  3 | "Sweeney" | "Meaghan"   |
|  3 | "Sweeney" | "Matthew"   |
|  3 | "Sweeney" | "Ian"       |
|  4 | "Clinton" | "Goeffrey"  |
|  5 | "Moss"    | "Wendy"     |
|  6 | "Gretelz" | "Simon"     |
|  7 | "Royal"   | "Elizabeth" |
|  8 | "Smith"   | "Brian"     |
|  9 | "Potter"  | "Harry"     |
|  9 | "Potter"  | "Ginny"     |
|  9 | "Potter"  | "Lily"      |
+----+-----------+-------------+
15 rows in set (0.00 sec)

You can combine both approaches, as shown below.

SELECT id
,      JSON_EXTRACT(struct, "$.account_number") AS account_no
,      contact.*
FROM   example CROSS JOIN
       JSON_TABLE(
           struct
         ,"$.contact[*]"
         COLUMNS(
             lname JSON PATH "$.last_name"
           , fname JSON PATH "$.first_name")) AS contact;

It returns:

+----+------------+-----------+-------------+
| id | account_no | lname     | fname       |
+----+------------+-----------+-------------+
|  1 | "US00001"  | "Winn"    | "Randi"     |
|  1 | "US00001"  | "Winn"    | "Brian"     |
|  2 | "US00002"  | "Vizquel" | "Oscar"     |
|  2 | "US00002"  | "Vizquel" | "Doreen"    |
|  3 | "US00003"  | "Sweeney" | "Meaghan"   |
|  3 | "US00003"  | "Sweeney" | "Matthew"   |
|  3 | "US00003"  | "Sweeney" | "Ian"       |
|  4 | "US00004"  | "Clinton" | "Goeffrey"  |
|  5 | "US00005"  | "Moss"    | "Wendy"     |
|  6 | "US00006"  | "Gretelz" | "Simon"     |
|  7 | "US00007"  | "Royal"   | "Elizabeth" |
|  8 | "US00008"  | "Smith"   | "Brian"     |
|  9 | "US00011"  | "Potter"  | "Harry"     |
|  9 | "US00011"  | "Potter"  | "Ginny"     |
|  9 | "US00011"  | "Potter"  | "Lily"      |
+----+------------+-----------+-------------+
15 rows in set (0.00 sec)

Lastly, if you want to get rid of the enclosing double quotes you can do the following:

WITH raw AS
(SELECT id
 ,      JSON_EXTRACT(struct, "$.account_number") AS account_no
 ,      contact.*
 FROM   example CROSS JOIN
        JSON_TABLE(
            struct
          ,"$.contact[*]"
          COLUMNS(
              lname JSON PATH "$.last_name"
            , fname JSON PATH "$.first_name")) AS contact)
SELECT  id
,       REGEXP_REPLACE(account_no,'"','') AS account_no
,       REGEXP_REPLACE(lname,'"','') AS lname
,       REGEXP_REPLACE(fname,'"','') AS fname
FROM    raw;

It’s also possible to use the JSON_UNQUOTE function to clean up the double quotes, as the sketch below shows.
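For example, this minimal variation of the first query should return the account numbers without the enclosing double quotes:

SELECT id
,      JSON_UNQUOTE(JSON_EXTRACT(struct, "$.account_number")) AS account_no
FROM   example;

I hope this helps those extracting JSON data into tabular result sets.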

Written by maclochlainn

June 26th, 2022 at 12:38 am

MySQL Backslashes

without comments

Yesterday, I wrote a blog post that showed you how to write a query returning a JSON structure for a 1:many relationship. The relationship was between the member and contact table. It returns one account_number from the member table and a list of first_name and last_name columns from the contact table in a JSON structure.

One of my students asked why I chose to strip the backslashes with Python, and my reply was that the SQL was already complex for most blog readers. The student then asked how you would do it in SQL. OK, that’s a fair question for two reasons. First, you don’t need to do it in your local programs because it’ll run faster on the server. Second, if you strip the backslashes you can insert the result into a standard JSON column. This blog post will show you how to do both.

You would use three REGEXP_REPLACE function calls, like:

SELECT   REGEXP_REPLACE(
           REGEXP_REPLACE(
             REGEXP_REPLACE(
               JSON_OBJECT(
                  'account_number', account_number
                 ,'contact', CONCAT('['
                               , GROUP_CONCAT(
                                    JSON_OBJECT('first_name',first_name
                                   ,'last_name',last_name ) SEPARATOR ',')
                               ,']')
               )
               ,'\\\\','')
             ,'"\\\[','\\\[')
           ,'\\\]"','\\\]') AS json_result 
FROM     member m INNER JOIN contact c
ON       m.member_id = c.member_id
GROUP BY m.account_number;

It returns the following:

+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| {"contact": [{"last_name": "Winn", "first_name": "Randi"},{"last_name": "Winn", "first_name": "Brian"}], "account_number": "US00001"}                                                         |
| {"contact": [{"last_name": "Vizquel", "first_name": "Oscar"},{"last_name": "Vizquel", "first_name": "Doreen"}], "account_number": "US00002"}                                                  |
| {"contact": [{"last_name": "Sweeney", "first_name": "Meaghan"},{"last_name": "Sweeney", "first_name": "Matthew"},{"last_name": "Sweeney", "first_name": "Ian"}], "account_number": "US00003"} |
| {"contact": [{"last_name": "Clinton", "first_name": "Goeffrey"}], "account_number": "US00004"}                                                                                                |
| {"contact": [{"last_name": "Moss", "first_name": "Wendy"}], "account_number": "US00005"}                                                                                                      |
| {"contact": [{"last_name": "Gretelz", "first_name": "Simon"}], "account_number": "US00006"}                                                                                                   |
| {"contact": [{"last_name": "Royal", "first_name": "Elizabeth"}], "account_number": "US00007"}                                                                                                 |
| {"contact": [{"last_name": "Smith", "first_name": "Brian"}], "account_number": "US00008"}                                                                                                     |
| {"contact": [{"last_name": "Potter", "first_name": "Harry"},{"last_name": "Potter", "first_name": "Ginny"},{"last_name": "Potter", "first_name": "Lily"}], "account_number": "US00011"}       |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
9 rows in set (0.00 sec)

Let’s create a table with a JSON column by using the following script:

/* Drop table if it exists. */
DROP TABLE IF EXISTS example;
 
/* Create an example table. */
CREATE TABLE example
( id      int unsigned  auto_increment
, struct  json 
, PRIMARY KEY (id));

Now, we can embed the query inside an INSERT statement:

INSERT
INTO   example
( struct )
(SELECT   REGEXP_REPLACE(
            REGEXP_REPLACE(
              REGEXP_REPLACE(
                JSON_OBJECT(
                   'account_number', account_number
                  ,'contact', CONCAT('['
                                , GROUP_CONCAT(
                                     JSON_OBJECT('first_name',first_name
                                    ,'last_name',last_name ) SEPARATOR ',')
                                ,']')
                )
                ,'\\\\','')
              ,'"\\\[','\\\[')
            ,'\\\]"','\\\]') AS json_result 
FROM     member m INNER JOIN contact c
ON       m.member_id = c.member_id
GROUP BY m.account_number);

A query of the example table, like:

SELECT * FROM example;

Returns:

+----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| id | struct                                                                                                                                                                                          |
+----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|  1 | {"contact": [{"last_name": "Winn", "first_name": "Randi"}, {"last_name": "Winn", "first_name": "Brian"}], "account_number": "US00001"}                                                          |
|  2 | {"contact": [{"last_name": "Vizquel", "first_name": "Oscar"}, {"last_name": "Vizquel", "first_name": "Doreen"}], "account_number": "US00002"}                                                   |
|  3 | {"contact": [{"last_name": "Sweeney", "first_name": "Meaghan"}, {"last_name": "Sweeney", "first_name": "Matthew"}, {"last_name": "Sweeney", "first_name": "Ian"}], "account_number": "US00003"} |
|  4 | {"contact": [{"last_name": "Clinton", "first_name": "Goeffrey"}], "account_number": "US00004"}                                                                                                  |
|  5 | {"contact": [{"last_name": "Moss", "first_name": "Wendy"}], "account_number": "US00005"}                                                                                                        |
|  6 | {"contact": [{"last_name": "Gretelz", "first_name": "Simon"}], "account_number": "US00006"}                                                                                                     |
|  7 | {"contact": [{"last_name": "Royal", "first_name": "Elizabeth"}], "account_number": "US00007"}                                                                                                   |
|  8 | {"contact": [{"last_name": "Smith", "first_name": "Brian"}], "account_number": "US00008"}                                                                                                       |
|  9 | {"contact": [{"last_name": "Potter", "first_name": "Harry"}, {"last_name": "Potter", "first_name": "Ginny"}, {"last_name": "Potter", "first_name": "Lily"}], "account_number": "US00011"}       |
+----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
9 rows in set (0.00 sec)
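Since the struct column is a native JSON data type, you can also query it with MySQL’s JSON operators. For example, this quick sanity check uses the ->> unquoting extraction operator to return a plain-text account number:

SELECT id
,      struct->>'$.account_number' AS account_no
FROM   example
WHERE  struct->>'$.account_number' = 'US00011';

It returns the matching row with the account number as an unquoted string.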

I hope this answers the question about whether you can use SQL to remove the backslashes from the original result set, and how you can insert the JSON result set into a JSON data type.

Written by maclochlainn

June 23rd, 2022 at 7:41 pm

MySQL JSON Tricks

without comments

Are they really tricks, or simply basic techniques combined to create a solution? Before writing these mechanics for using native MySQL to create a compound JSON object, let me point out that the easiest way to get one is to use the MySQL Node.js library, as shown recently in my “Is SQL Programming” blog post.

Moving data from a relational model output to a JSON structure isn’t as simple as a delimited list of columns in a SQL query. Let’s look at it in stages, based on section 12.18.2, Functions That Create JSON Values, of the MySQL Server documentation.

Here’s how you return a single row as a JSON object, which is quite straightforward:

SELECT JSON_OBJECT('first_name',c.first_name,'last_name',c.last_name) AS json_result
FROM   contact c
WHERE  first_name = 'Harry'
AND    last_name = 'Potter';

It returns:

+------------------------------------------------+
| json_result                                    |
+------------------------------------------------+
| {"last_name": "Potter", "first_name": "Harry"} |
+------------------------------------------------+
1 row in set (0.00 sec)

With a GROUP_CONCAT function, let’s capture a JSON array of all three Potter family members:

SELECT CONCAT('['
             , GROUP_CONCAT(
                 JSON_OBJECT('first_name',first_name
                            ,'last_name',last_name ) SEPARATOR ',')
             ,']') AS json_result 
FROM   contact c
WHERE  c.last_name = 'Potter';

It returns an array of JSON objects:

+-----------------------------------------------------------------------------------------------------------------------------------------------+
| [{"last_name": "Potter", "first_name": "Harry"},{"last_name": "Potter", "first_name": "Ginny"},{"last_name": "Potter", "first_name": "Lily"}] |
+-----------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.01 sec)

Next, let’s put a 1:many relationship between the member and contact tables into a JSON structure with a single account number and an array of contacts. It requires a second call to the JSON_OBJECT function and the addition of a GROUP BY clause in the query.

SELECT   JSON_OBJECT(
            'account_number', account_number
           ,'contact', CONCAT('['
                         , GROUP_CONCAT(
                              JSON_OBJECT('first_name',first_name
                             ,'last_name',last_name ) SEPARATOR ',')
                             ,']')
         ) AS json_result 
FROM     member m INNER JOIN contact c
ON       m.member_id = c.member_id
WHERE    c.last_name = 'Potter'
GROUP BY m.account_number;

It returns the following string with an annoying set of backslashes. It also inverts the column order, which appears unavoidable, but that shouldn’t matter because the order of name-value pairs in JSON is immaterial.

+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| {"contact": "[{\"last_name\": \"Potter\", \"first_name\": \"Harry\"},{\"last_name\": \"Potter\", \"first_name\": \"Ginny\"},{\"last_name\": \"Potter\", \"first_name\": \"Lily\"}]", "account_number": "US00011"} |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

The following quick little Python code cleans up the JSON string by removing the backslashes and extraneous quotes around the array of contacts.

# Import the library.
import mysql.connector
from mysql.connector import errorcode
 
try:
  # Open connection.
  cnx = mysql.connector.connect(user='student', password='student',
                                host='127.0.0.1',
                                database='studentdb')
  # Create cursor.
  cursor = cnx.cursor()
 
  # Set the query statement.
  query = ("SELECT JSON_OBJECT( "
           "'account_number', m.account_number "
           ",'contact', CONCAT('[' "
           "              , GROUP_CONCAT( "
           "                   JSON_OBJECT('first_name', c.first_name "
           "                  ,'last_name', c.last_name ) SEPARATOR ',') "
           "                  ,']')) AS json_result "
           "FROM   contact c INNER JOIN member m "
           "ON     c.member_id = m.member_id "
           "WHERE  c.last_name = %s "
           "GROUP BY account_number")
 
  # Execute cursor.
  cursor.execute(query,["Potter"])
 
  # Display the column returned by the query stripped of backslashes and
  # extraneous quotes.
  for (row) in cursor:
    for column in range(len(row)):
      print(row[column].replace("\\","").replace("\"[","[").replace("]\"","]"))
 
  # Close cursor.
  cursor.close()
 
# ------------------------------------------------------------
# Handle exception and close connection.
except mysql.connector.Error as e:
  if e.errno == errorcode.ER_ACCESS_DENIED_ERROR:
    print("Something is wrong with your user name or password")
  elif e.errno == errorcode.ER_BAD_DB_ERROR:
    print("Database does not exist")
  else:
    print("Error code:", e.errno)        # error number
    print("SQLSTATE value:", e.sqlstate) # SQLSTATE value
    print("Error message:", e.msg)       # error message
 
# Close the connection when the try block completes.
else:
  cnx.close()

It returns:

{"contact": [{"last_name": "Potter", "first_name": "Harry"},{"last_name": "Potter", "first_name": "Ginny"},{"last_name": "Potter", "first_name": "Lily"}], "account_number": "US00011"}

I hope this helps exhibit some less well-known MySQL syntax. Check this post to see how to insert a result set without Python as an intermediary.

Written by maclochlainn

June 23rd, 2022 at 12:46 am

Is SQL Programming

with 4 comments


Is SQL, or Structured Query Language, a programming language? That’s a great question! A question that many answer with emphasis: “No, SQL is not a programming language!” There are some who answer yes, and they usually qualify that answer with something like: “SQL is a programming language designed to communicate with relational databases.”

It strikes me that those saying “yes” are saying that SQL is only a collection of interface methods to read from and write to a database engine. Those saying SQL is not a programming language often qualify that a programming language must have conditional logic and iterative structures, which don’t exist in SQL.

There’s a third group of fence sitters. They decline to say whether SQL is a programming language, but they also say individuals who only write SQL aren’t programmers. That’s a bit harsh from my perspective.

Before determining whether SQL is a programming language, let’s define a programming language: a collection of lexical units, or building blocks, that build program units. Lexical units are typically organized as delimiters, identifiers, literals, and comments, each of which appears in the short query after this list:

  • Delimiters include single or double quotes to identify strings and operators that let you assign and compare values.
  • Identifiers are reserved words, keywords, predefined identifiers (like data type names), user-defined variables, subroutines, or types.
  • Literals are typically numbers and strings, where some strings qualify as dates because they implement a default format mask to convert strings to dates or date-times.
  • Comments are simply delimited text that the program ignores but the programmer uses.
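Here’s a tiny query that touches all four lexical unit types; it’s a contrived sketch, nothing more:

/* Comments are delimited text that the parser ignores. */
SELECT 'Hello World!' AS greeting   /* string literal, AS keyword, identifier */
,      21 * 2         AS answer     /* numeric literals and an arithmetic operator */
FROM   dual                         /* predefined identifier */
WHERE  1 = 1;                       /* comparison operator, a delimiter */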

That means a programming language must let you define a variable, assign a value to a variable, iterate across a set of values, and make conditional statements. SQL meets these four conditions, but, as a set-based programming language, it qualifies all variables as lists of tuples. It is possible, though, to have variables with zero to many elements and one to many members in any given tuple. That means you can assign a literal value to a one-element list with a single-member tuple, just as you would assign a string or integer to a variable of that type.

As Kris Köhntopp commented, computer science defines a programming language as Turing complete. As his comment qualifies and the Wikipedia page explains: “Turing completeness in declarative SQL is implemented through recursive common table expressions. Unsurprisingly, procedural extensions to SQL (PLSQL, etc.) are also Turing-complete.” While PostgreSQL introduced recursive query syntax through CTEs, it only recently added the search and cycle feature in PostgreSQL 14. The recursive query feature has existed in the Oracle database since Oracle 8, but their documentation calls them hierarchical queries. I wrote a quick tutorial on hierarchical queries in 2008.
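MySQL 8 supports that recursive CTE syntax through the WITH RECURSIVE clause. Here’s a small, contrived counter to illustrate the iteration:

WITH RECURSIVE counter (n) AS
 (SELECT 1
  UNION ALL
  SELECT n + 1
  FROM   counter
  WHERE  n < 5)
SELECT n FROM counter;

It returns the integers 1 through 5 as five rows.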

For clarity, define and declare are two words that give grief to some newbies. Let’s qualify what they mean. Declare means to give a variable a name and a data type. Define means to declare a variable and assign it a value. Another word for assigning a variable is initializing it. Unassigned variables are automatically assigned a default value or a null, depending on the programming language.
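In MySQL stored programs, that distinction looks like the following contrived sketch, where a bare DECLARE only declares a variable while a DEFAULT clause, or a later SET statement, defines it:

/* DECLARE is only legal inside a stored program, so wrap
   the sketch in a procedure. */
DROP PROCEDURE IF EXISTS declare_define;
DELIMITER $$
CREATE PROCEDURE declare_define()
BEGIN
  /* Declared only: a name and data type, defaulting to null. */
  DECLARE display VARCHAR(20);

  /* Declared and defined: assigned an initial value. */
  DECLARE input VARCHAR(20) DEFAULT 'Hello World!';

  /* Initialize the declared variable and display it. */
  SET display := input;
  SELECT display;
END$$
DELIMITER ;
CALL declare_define();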

Let’s first declare a local variable, assign a value to it, and display it. The following example uses Node.js to define the input variable, assign the input variable to the display variable, and then print the display variable to the console. Node.js requires that you assign an empty string to the display variable to define it as a string; otherwise, its type would be undefined, which is common behavior in dynamically typed languages.

/* Declare the display variable as a string. */
var display = ""
 
/* Define the input variable. */
var input = "Hello World!"
 
/* Assign the input variable contents to the display variable. */
display = input
 
/* Print the display variable contents to console. */
console.log(display)

It prints:

Hello World!

Let’s write the same type of program in MySQL. Like the Node.js example, there are implementation differences. The biggest difference in MySQL, or other relational databases, occurs because SQL is a declarative, set-based language. That means every variable is a collection of a record structure. You can only mimic a scalar or primitive data type variable by creating a record structure with a single member.

In the case below, there are four processing steps:

  • The ‘Hello World!’ literal value is assigned to an input variable.
  • The SELECT-list (or comma-delimited set of values in the SELECT clause) is assigned like a tuple to the struct collection variable by treating the query of the literal value as an expression.
  • The FROM clause returns the struct collection as the data set or as a derived table.
  • The topmost SELECT clause evaluates the struct collection row-by-row, like a loop, and assigns the input member to a display variable.

The query is:

SELECT  struct.input AS display
FROM   (SELECT 'Hello World!' AS input) struct;

Since the struct collection contains only one element, it displays the original literal value one time, like:

+--------------+
| display      |
+--------------+
| Hello World! |
+--------------+
1 row in set (0.00 sec)

Let’s update the SQL syntax to the more readable, ANSI 1999 and forward, syntax with a Common Table Expression (CTE). CTEs are implemented by the WITH clause.

WITH struct AS
 (SELECT 'Hello World!' AS input)
SELECT struct.input AS display
FROM   struct;

The best thing about CTEs is that they run one time and their values are subsequently available anywhere in your query, subqueries, or correlated subqueries. In short, there’s never an excuse to write the same subquery twice in one query, as the contrived sketch below shows.
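Here, the struct CTE is defined once and referenced twice in the FROM clause:

WITH struct AS
 (SELECT 'Hello World!' AS input)
SELECT s1.input AS first_copy
,      s2.input AS second_copy
FROM   struct s1 CROSS JOIN struct s2;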

Let’s look at loops and if-statements. Having established that we can assign a literal to a variable, re-assign the value from one variable to another, and then display the new variable, let’s assign a set of literal values to an array variable. As before, let’s use Node.js to structure the initial problem.

The program now assigns an array of strings to the input variable, uses a for loop to read the values from the input array, and uses an if-statement with a regular expression evaluation. The if-statement determines which of the array values meets the condition by using a negating logical expression. That’s because the search() function returns a value of 0 or greater when the needle value is found in the string and -1 when it isn’t. After validating that the needle variable value is found in an input string, the input value is assigned to the display variable.

/* Declare the display variable as a string. */
var display = ""
 
/* Declare a lookup variable. */
var needle = "Goodbye"
 
/* Define the input variable as an array of strings. */
var input = ["Hello World!"
            ,"Goodbye, Cruel World!"
            ,"Good morning, too early ..."]
 
/* Read through an array and assign the value that meets
 * the condition to the display variable. */
for (i = 0; i < input.length; i++)
  if (!(input[i].search(needle) < 0))
    display = input[i]
 
/* Print the display variable contents to console. */
console.log(display)

Then, it prints the display value:

Goodbye, Cruel World!

To replicate the coding approach in a query, there must be two CTEs. The needle CTE assigns a literal value of ‘goodbye’ to a one-element collection of a single-member tuple variable. The struct CTE creates a collection of strings by using the UNION ALL operator to append three unique tuples instead of the one tuple found in the earlier example.

The needle CTE returns a one-element collection of a single-member tuple variable. The struct CTE returns a three-element collection of a single-member tuple, which mimics an array of strings. The needle and struct CTEs return distinct variables with different data types. A cross join operation between the two CTEs puts their results together into the same context. It returns a Cartesian product that:

  • Adds a single-row tuple to each row of the query’s result set or derived table.
  • Adds multiple tuples to each row of the query’s result set or derived table by creating copies of each row (following Cartesian set theory, which multiplies rows and adds columns).

In this case, the Cartesian join adds the one-element needle CTE value to each element, or row, returned by the multiple-element struct CTE and produces the following derived table:

+-----------------------------+---------+
| display                     | lookup  |
+-----------------------------+---------+
| Hello World!                | goodbye |
| Goodbye, cruel world!       | goodbye |
| Good morning, too early ... | goodbye |
+-----------------------------+---------+
3 rows in set (0.00 sec)
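For reference, the derived table above is just the result of cross joining the same two CTEs without the filtering WHERE clause:

WITH needle AS
 (SELECT 'goodbye' AS lookup)
, struct AS
 (SELECT 'Hello World!' AS input
  UNION ALL
  SELECT 'Goodbye, cruel world!' AS input
  UNION ALL
  SELECT 'Good morning, too early ...' AS input)
SELECT struct.input AS display
,      needle.lookup
FROM   struct CROSS JOIN needle;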

The following query reads through the CTE collection like a loop and filters out any invalid input values. It uses the MySQL REGEXP_LIKE function in the WHERE clause, which acts as a conditional, or if-statement.

WITH needle AS
 (SELECT 'goodbye' AS lookup)
, struct AS
 (SELECT 'Hello World!' AS input
  UNION ALL
  SELECT 'Goodbye, cruel world!' AS input
  UNION ALL
  SELECT 'Good morning, too early ...' AS input)
 SELECT struct.input AS display
 FROM   struct CROSS JOIN needle
 WHERE  REGEXP_LIKE(struct.input, CONCAT('^.*',needle.lookup,'.*$'),'i');

It returns the one display value that meets the criteria:

+-----------------------+
| display               |
+-----------------------+
| Goodbye, cruel world! |
+-----------------------+
1 row in set (0.00 sec)

The comparison of the imperative programming approach in Node.js and the declarative programming approach in SQL should have established that SQL has all the elements of a programming language. That is, SQL has variable declaration and assignment and both iterative and conditional statements. SQL also has different styles for implementing variable declaration, and the examples covered subqueries and CTEs with cross joins placing variables in a common scope.

Comparative Approaches:

Next, let’s examine a problem that a programmer might encounter when they think SQL can only query single rows or insert, update, or delete them. That view of SQL often limits how developers write queries. Developers at this skill level typically write only basic queries, which may include inner and outer joins and some aggregation statements.

Let’s assume the following for this programming assignment:

  • A sale table as your data source, and
  • A requirement to display the type, number, pre-tax sale amount, and percentage by type.

The sale table definition:

+------------+--------------+------+-----+---------+----------------+
| Field      | Type         | Null | Key | Default | Extra          |
+------------+--------------+------+-----+---------+----------------+
| sale_id    | int unsigned | NO   | PRI | NULL    | auto_increment |
| item_desc  | varchar(20)  | YES  |     | NULL    |                |
| unit_price | decimal(8,2) | YES  |     | NULL    |                |
| serial_no  | varchar(10)  | YES  |     | NULL    |                |
+------------+--------------+------+-----+---------+----------------+
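If you want to mock up the data source, a minimal CREATE TABLE statement matching that definition might look like this (reconstructed from the describe output above, so treat it as a sketch):

/* Drop table if it exists. */
DROP TABLE IF EXISTS sale;

/* Create a sale table. */
CREATE TABLE sale
( sale_id     int unsigned  auto_increment
, item_desc   varchar(20)
, unit_price  decimal(8,2)
, serial_no   varchar(10)
, PRIMARY KEY (sale_id));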

A basic Node.js program may contain a SQL query that returns the item_desc and unit_price columns while counting the number of serial_no rows and summing the unit_price amounts (which assumes no discounted sales; after all, it’s Apple). That type of query leaves calculating the total sales amount and the percentage by type to the Node.js program.

const mysql = require('mysql') 
const connection = mysql.createConnection({ 
   host: 'localhost', 
   user: 'student', 
   password: 'student', 
   database: 'studentdb' 
}) 
 
connection.connect((err) => { 
 if (err) 
   throw err 
 else { 
   console.log('Connected to MySQL Server!\n') 
   connection.query("SELECT   s.item_desc " +
                    ",        s.unit_price " +
                    ",        COUNT(s.serial_no) AS quantity_sold " +
                    ",        SUM(s.unit_price) AS sales " +
                    "FROM     sale s " +
                    "GROUP BY s.item_desc " +
                    ",        s.unit_price", function (err, result) { 
     if (err) 
       throw err 
     else { 
       // Prints the index value in the RowDataPacket. 
       console.log(result)
       connection.end()
   }})} 
})

This program would return a JSON structure, like:

[ RowDataPacket {
    item_desc: 'MacBook Pro 16',
    unit_price: 2499,
    quantity_sold: 16,
    sales: 39984 },
  ...
  RowDataPacket {
    item_desc: 'MacBook Air M1',
    unit_price: 999,
    quantity_sold: 22,
    sales: 21978 } ]

While the remaining JavaScript code isn’t difficult to write, it’s unnecessary effort if the developer knows SQL well enough to program in it. The developer could simply rewrite the query like the following and return the percentage-by-type value in the base JSON structure.

WITH sales AS
 (SELECT SUM(unit_price) AS total
  FROM   sale)
SELECT   s.item_desc
,        s.unit_price
,        COUNT(s.serial_no) AS quantity_sold
,        SUM(s.unit_price) AS sales
,        CONCAT(FORMAT((s.unit_price * COUNT(s.serial_no))/sales.total * 100,2),'%') AS percentage
FROM     sale s CROSS JOIN sales
GROUP BY s.item_desc
,        s.unit_price
,        sales.total;

The query uses the sales CTE to calculate and define a tuple with the total sales, and it adds a derived column calculating the percentage by type of device. It’s probably important to note that aggregation rules require you to add the sales.total CTE tuple to the GROUP BY clause.

The new query returns this JSON list:

[ RowDataPacket {
    item_desc: 'MacBook Pro 16',
    unit_price: 2499,
    quantity_sold: 16,
    sales: 39984,
    percentage: '17.70%' },
  ...
  RowDataPacket {
    item_desc: 'MacBook Air M1',
    unit_price: 999,
    quantity_sold: 22,
    sales: 21978,
    percentage: '9.73%' } ]

The developer would get a complete JSON list when the new query replaces the old. It also would eliminate the need to write additional JavaScript to calculate the percentage by type of device.

Conclusions:

Leveraging the programming power of SQL is frequently possible in many frontend and backend programming solutions. However, the programming power of SQL is infrequently found in those solutions. That leads me to ask: “Is it possible that the almost systemic failure to leverage the programming capabilities of SQL is a result of biases that instructors and mentors pass along from their own limited skill sets?” That might well be true if their instructors and mentors held the belief that: “No, SQL is not a programming language!”

Candidly, folks that write SQL at the programming level almost always have concurrent mastery in two or more imperative programming languages. They’re probably the ones who say, “SQL is a programming language designed to communicate with relational databases.”

Who are those pesky fence sitters? You remember them, don’t you? They’re the ones who declined to take a position on whether SQL is a programming language. Are they the developers who are still learning, those without an entrenched, preconceived, or learned bias? Or do they simply wonder whether SQL is Turing complete?

Written by maclochlainn

June 12th, 2022 at 7:36 pm