MacLochlainn's Weblog

Michael McLaughlin's Technical Blog


SQL Statement Management


It’s very difficult to explain how SQL works to students who are new to relational databases. Many parts seem intuitive, while others confuse and confound.

For beginners, the idea that a SQL statement is simply a text string that you must dispatch to a SQL statement processing engine is new. That’s because they use an Integrated Development Environment (IDE) that hides, or abstracts, the complexity of how SQL executes.

I start my core SQL class by demonstrating how to run a text literal query without a FROM clause in MySQL Workbench, like this:

SELECT 'Hello World!' AS "Output";

After writing the query, I highlight everything except the semicolon and click the lightning bolt that dispatches the static string to the SQL statement engine. They see the 'Hello World!' string in the result grid.

Then, I launch a mysql Monitor session and write the query with a semicolon to dispatch the SQL static string to the SQL statement engine:

SELECT 'Hello World!' AS "Output";

and, with a \g to dispatch the SQL static string to the SQL statement engine:

SELECT 'Hello World!' AS "Output"\g

Both queries return the same output, as shown below:

+--------------+
| Output       |
+--------------+
| Hello World! |
+--------------+
1 row in set (0.00 sec)

Rewriting the query with a \G to dispatch the SQL static string to the SQL statement engine:

SELECT 'Hello World!' AS "Output"\G

This query returns the following vertically formatted output:

*************************** 1. row ***************************
Output: Hello World!
1 row in set (0.00 sec)

The next step removes MySQL Workbench and the MySQL Monitor from the demonstration. Without either of those tools, a Python program can demonstrate how to run a static SQL string.

The query is now a string literal stored in a query.sql file. The Python program reads the query.sql file, dispatches the embedded query, and displays the query results.

This is the query.sql file:

SELECT 'Hello World!' AS "output";

This is the query.py file:

#!/usr/bin/python
 
# Import libraries.
import sys
import mysql.connector
from mysql.connector import errorcode
 
# ============================================================
#  Use a try-catch block to read and parse a query from a
#  file found in the same local directory as the Python
#  program.
# ============================================================
try:
  file = open('query.sql','r')
  query = file.read().replace('\n',' ').replace(';','')
  file.close()
 
except IOError:
  print("Could not read file: query.sql")
 
# ============================================================
#  Attempt connection in a try-catch block.
# ============================================================
# --------------------------------------------------------
#  Open connection, bind variable in query and format
#  query output before closing the cursor.
# --------------------------------------------------------
try:
  # Open connection.
  cnx = mysql.connector.connect(user='student', password='student',
                                host='127.0.0.1',
                                database='studentdb')
 
  # Create cursor.
  cursor = cnx.cursor()
 
  # Execute the query with the cursor.
  cursor.execute(query)
 
  # Display the rows returned by the query.
  for row in cursor:
    print(row[0])
 
  # Close cursor.
  cursor.close()
 
# --------------------------------------------------------
#  Handle MySQL exception 
# --------------------------------------------------------
except mysql.connector.Error as e:
  if e.errno == errorcode.ER_ACCESS_DENIED_ERROR:
    print("Something is wrong with your user name or password")
  elif e.errno == errorcode.ER_BAD_DB_ERROR:
    print("Database does not exist")
  else:
    print("Error code:", e.errno)        # error number
    print("SQLSTATE value:", e.sqlstate) # SQLSTATE value
    print("Error message:", e.msg)       # error message
 
# --------------------------------------------------------
#  Close connection after try-catch completes.
# --------------------------------------------------------
# Close the connection when the try block completes.
else:
  cnx.close()

In Linux or Unix, run the program from the directory where both the query.sql and query.py files are located (query.py needs the execute permission):

./query.py

It returns:

Hello World!

These examples demonstrate that a query without variable substitution is only a static string. In all the cases, the static SQL strings are dispatched to the SQL engine by a terminator, like a semicolon, or through a driver library call that executes the static SQL string.
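
By contrast, a query with variable substitution passes the SQL text and its values separately. Here is a minimal sketch with MySQL Connector/Python, assuming the cursor from the program above; the %s placeholder is bound at execution time:

# A parameterized query: the driver binds the value at execution
# time, so the SQL text itself remains a static string.
query = "SELECT %s AS output"
cursor.execute(query, ("Hello World!",))
 
# Display the row returned by the query.
for row in cursor:
  print(row[0])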

Written by maclochlainn

October 20th, 2024 at 1:38 pm

Troubleshoot Oracle Errors


It’s always a bit difficult to trap errors in SQL*Developer when you’re running scripts that do multiple things. As old as it is, using the SQL*Plus utility and spooling to log files is generally the fastest way to localize errors across multiple elements of scripts. Unfortunately, you must break up your components into local components, like when you create a type, procedure, function, or package.

This is part of my solution to leverage in-depth testing of the Oracle Database 23ai Free container from an Ubuntu native platform. A prior post shows you how to set up the Oracle Client for Ubuntu and connect to the Oracle Database 23ai Free container.

After you’ve done that, put the following oracle_errors Bash shell function into your testing context, or into your .bashrc file:

# Troubleshooting errors utility function.
oracle_errors ()
{
  #  Oracle Error prefixes qualify groups of error types, like
  #  this subset of error prefixes used in the Bash function.
  # ============================================================
  #  JMS - Java Messaging Errors
  #  JZN - JSON Errors
  #  KUP - External Table Access Errors
  #  LGI - File I/O Errors
  #  OCI - Oracle Call Interface Errors
  #  ORA - Oracle Database Errors
  #  PCC - Oracle Precompiler Errors
  #  PLS - Oracle PL/SQL Errors
  #  PLW - Oracle PL/SQL Warnings
  #  SP2 - Oracle SQL*Plus Errors
  #  SQL - SQL Library Errors
  #  TNS - SQL*Net (networking) Errors
  # ============================================================
 
  # Define an array of Oracle error prefixes.
  prefixes=("jms" "jzn" "kup" "lgi" "oci" "ora" "pcc" "pls" "plw" "sp2" "sql" "tns")
 
  # Prepend the -e for the grep utility to use regular expression pattern matching; and
  # use the ^ before the Oracle error prefixes to avoid returning lines that merely
  # contain a prefix inside another word, like the word lookup contains the prefix kup.
  for str in ${prefixes[@]}; do
    patterns+=" -e ^${str}"
  done
 
  # Display output from a SQL*Plus show errors command written to a log file when
  # a procedure, function, object type, or package body fails to compile. These
  # patterns match the warning message and the reported line numbers.
  patterns+=" -e ^warning"
  patterns+=" -e ^[0-9]/[0-9]"
 
  # Assign any file filter to the ext variable.
  ext=${1}
 
  # Assign the extension or simply use a wildcard for all files.
  if [ ! -z ${ext} ]; then
    ext="*.${ext}"
  else
    ext="*"
  fi
 
  # Assign the number of qualifying files to a variable.
  fileNum=$(ls -l ${ext} 2>/dev/null | grep -v ^l | wc -l)
 
  # Evaluate the number of qualifying files and process.
  if [ ${fileNum} -eq "0" ]; then
    echo "[0] files exist."
  elif [ ${fileNum} -eq "1" ]; then
    fileName=$(ls ${ext})
    find `pwd` -type f | grep -in ${ext} ${patterns}  |
    while IFS='\n' read list; do
      echo "${fileName}:${list}"
    done
  else
    find `pwd` -type f | grep -in ${ext} ${patterns}  |
    while IFS='\n' read list; do
      echo "${list}"
    done
  fi
 
  # Clear ${patterns} variable.
  patterns=""
}

Now, let’s create a debug.txt test file to demonstrate how to use the oracle_errors function, like:

ORA-12704: character set mismatch
PLS-00124: name of exception expected for first arg in exception_init pragma
SP2-00200: Environment error
JMS-00402: Class not found
JZN-00001: End of input

You can navigate to your logging directory and call the oracle_errors function, like:

oracle_errors txt

It’ll return the following, which is the file name, line number, and error message:

debug.txt:1:ORA-12704: character set mismatch
debug.txt:2:PLS-00124: name of exception expected for first arg in exception_init pragma
debug.txt:3:SP2-00200: Environment error
debug.txt:4:JMS-00402: Class not found
debug.txt:5:JZN-00001: End of input

There are other Oracle error prefixes, but the ones I’ve selected are the more common errors for Java, JavaScript, PL/SQL, Python, and SQL testing. You can add other prefixes to the prefixes array if your use cases require them. Just a note for those new to Bash shell scripting: the braces in “${variable_name}” are required when you expand arrays.
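
For example, this short snippet shows the brace syntax with an illustrative two-element array:

# Braces are required when you expand array variables.
prefixes=("ora" "pls")
for str in "${prefixes[@]}"; do
  echo "Prefix: ${str}"
done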

For a more complete example, I created the following files for a trivial example of procedure overloading in PL/SQL:

  1. tables.sql – that creates two tables.
  2. spec.sql – that creates a package specification.
  3. body.sql – that implements a package specification.
  4. test.sql – that implements a test case using the package.
  5. integration.sql – that calls the scripts in proper order (see the sketch below).
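
A minimal sketch of what integration.sql might contain, assuming the four scripts sit in the same working directory:

-- Run the scripts in dependency order.
@tables.sql
@spec.sql
@body.sql
@test.sql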

The tables.sql, spec.sql, body.sql, and test.sql use the SQL*Plus spool command to write log files, like:

SPOOL spec.txt
 
-- Insert code here ...
 
SPOOL OFF

The body.sql file includes SQL*Plus list and show errors commands, like:

SPOOL body.txt
 
-- Insert code here ...
 
LIST
SHOW ERRORS
 
SPOOL OFF

The integration.sql script calls the tables.sql, spec.sql, body.sql, and test.sql in order. Corrupting the spec.sql file by adding a stray “x” to one of the parameter names causes a cascade of errors. After running the integration.sql file with the introduced error, the Bash oracle_errors function returns:

body.txt:2:Warning: Package Body created with compilation errors.
body.txt:148:4/13     PLS-00323: subprogram or cursor 'WARNER_BROTHER' is declared in a      
test.txt:4:ORA-06550: line 2, column 3: 
test.txt:5:PLS-00306: wrong number or types of arguments in call to 'WARNER_BROTHER' 
test.txt:6:ORA-06550: line 2, column 3:

I hope that helps those learning how to program and perform integration testing in an Oracle Database.

Written by maclochlainn

July 9th, 2024 at 4:37 pm

sqlplus on Ubuntu


With the release of Oracle Database 23c Free came the ability to update components of the container’s base operating system. Naturally, I took full advantage of that to build my development machine on an Ubuntu 22.04 VMware instance with a Docker implementation of the Oracle Database 23c Free container.

Unfortunately, there were changes from that release to the release of Oracle Database 23ai Free. Specifically, Oracle disallows direct patching of their published container’s native Unbreakable Linux 8. It appears the restriction lies in licensing but I haven’t been able to get a clear answer. Oracle’s instructions also shifted from using Docker to using Podman, which reduces the development platform to a limited type of Database as a Service (DaaS) environment. Moreover, that means it requires more skill to leverage the Oracle Database 23ai Free container as a real developer environment by installing and configuring Oracle’s Client software on the host Ubuntu operating system. Then, you must create a host of shared directories to the container to use external files or test external libraries.

While Oracle’s invocation of proprietary control of their native OS is annoying, it’s not nearly as onerous as Apple’s decision to not offer an Intel chip for their MacBook Pro machines. I’ve a hunch Oracle will relax access to their Oracle Database 23ai Free container in the future, but for now this article shows you how to get native SQL*Plus access working.

As to Apple, while I’ve fixed my older machines by upgrading my Intel-based MacBook Pro (i7) to native Ubuntu, it’s still annoying. Yes, Tim Cook, I’d rather run Ubuntu than sell back a wonderful piece of hardware on the cheap to Apple. I also did the same upgrade to my iMac 5K with 32 GB of RAM but swapped the cheap hybrid drive for a 2TB SSD.

Now to the technical content that lets you natively develop using Oracle’s SQL*Plus on Ubuntu against the Oracle Database 23ai Free container. While I love SQL*Developer, it has significant limits when testing large blocks of code, whereas good techniques with sqlplus and the Bash shell can simplify code development and integration testing.

Here are the steps to get sqlplus working on Ubuntu for your Oracle Database 23ai Free container:

  1. You need to download the following two zip files from the Oracle Instant Client Downloads for Linux x86-64 (64-bit) website, which assumes an Intel x86 chip architecture: the Basic package (instantclient-basic-linux.x64-23.4.0.24.05.zip) and the SQL*Plus package (instantclient-sqlplus-linux.x64-23.4.0.24.05.zip).

  2. Open a terminal as your default Ubuntu user and do the following to assume the root superuser responsibility:

    sudo sh

    As the root user, create the following directory for the Oracle Client software:

    mkdir /opt/oracle/instantclient_23_4

    As the root user, copy the previously downloaded files to the /opt/oracle directory (this assumes your default user is named student):

    cp ~student/Downloads/instantclient*.zip  /opt/oracle/.

    As the root user, change directory with the cd command to the /opt/oracle directory and verify with the ls -al command that you have the following two files:

    total 120968
    drwxr-xr-x 4 root root      4096 Jul  3 14:29 .
    drwxr-xr-x 6 root root      4096 Jul  3 09:09 ..
    drwxr-xr-x 4 root root      4096 Jul  3 10:11 instantclient_23_4
    -rw-r--r-- 1 root root 118377607 Jul  3 14:29 instantclient-basic-linux.x64-23.4.0.24.05.zip
    -rw-r--r-- 1 root root   5471693 Jul  3 14:29 instantclient-sqlplus-linux.x64-23.4.0.24.05.zip

    As the root user, unzip the two zip files in the following order with the unzip command:

    unzip instantclient-basic-linux.x64-23.4.0.24.05.zip

    and, then

    unzip instantclient-sqlplus-linux.x64-23.4.0.24.05.zip

  3. As the root user, run these two commands:

    sudo sh -c "echo /opt/oracle/instantclient_23_4 > \
    /etc/ld.so.conf.d/oracle-instantclient.conf"
    sudo ldconfig
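
    If you want to confirm that the dynamic linker can now find the client libraries, a quick check with the ldconfig utility (the -p flag lists the cached libraries):

    ldconfig -p | grep libclntsh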

    Next, you’ll test the installation. As the root user, run these three commands, which you’ll later add to your standard Ubuntu user’s .bashrc file:

    export ORACLE_HOME=/opt/oracle/instantclient_23_4
    export LD_LIBRARY_PATH=$ORACLE_HOME
    export PATH=$PATH:$ORACLE_HOME

    As the root user, you can now test whether you can start the Oracle SQL*Plus client with the following command:

    sqlplus /nolog

    It should connect and return this:

    SQL*Plus: Release 23.0.0.0.0 - Production on Wed Jul 3 10:12:33 2024
    Version 23.4.0.24.05
     
    Copyright (c) 1982, 2024, Oracle.  All rights reserved.
     
    SQL>

    If you get this type of error, either you didn’t install the Oracle instant client basic libraries or you installed an incompatible version:

    sqlplus: error while loading shared libraries: libclntsh.so.23.1: cannot open shared object file: No such file or directory

    If you got the error, you’ll typically need to revisit the installation of the Oracle Instant Client.

    Another type of error can occur if you get ahead of these instructions and try to connect to the Oracle Database 23ai Free container with syntax like this:

    SQL> connect c##student/student@free

    because you’ll most likely get an error like this:

    ERROR:
    ORA-12162: TNS:net service name is incorrectly specified
    Help: https://docs.oracle.com/error-help/db/ora-12162/

    The error occurs because you haven’t set up Oracle Net Services, which maps to the session layer (layer 5) of the OSI (Open Systems Interconnection) Model. In Oracle-speak, that means you haven’t set up a tnsnames.ora file, failed to put the tnsnames.ora file in the right place, or failed to set the $TNS_ADMIN environment variable correctly.

  4. While there are many ways to set up a tnsnames.ora file, the best way is to follow Oracle’s recommended approaches. In the Oracle client approach, you put the tnsnames.ora file in the $ORACLE_HOME/network/admin directory and use the $TNS_ADMIN environment variable to point to it. Unfortunately, that approach doesn’t work when you install only the Oracle client software unless you want to play with mount points. It’s easiest to create a hidden directory in your sandbox user’s home directory, which is student in this example.

    As the root user, use the mkdir command to create the .oracle directory in your student user’s home directory, and then give the student user ownership so it can create files there:

    mkdir /home/student/.oracle
    chown student:student /home/student/.oracle

    As the student user, navigate to the /home/student/.oracle directory and create the tnsnames.ora file with the following text:

    # tnsnames.ora Network Configuration FILE:
     
    FREE =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 127.0.0.1)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = FREE)
        )
      )
     
    LISTENER_FREE =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 127.0.0.1)(PORT = 1521))
     
    FREEPDB1 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 127.0.0.1)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = FREEPDB1)
        )
      )
     
    EXTPROC_CONNECTION_DATA =
      (DESCRIPTION =
         (ADDRESS_LIST =
           (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC_FOR_FREE))
         )
         (CONNECT_DATA =
           (SID = PLSExtProc)
           (PRESENTATION = RO)
         )
      )

    Exit the root user to your student user. As the student user, set the $TNS_ADMIN environment variable like:

    export TNS_ADMIN=$HOME/.oracle

    Assuming you’ve already created a container user, like c##student, connect to sqlplus with the following syntax:

    sqlplus c##student/student@free

    You should see the following when connecting to an Oracle 23c container:

    SQL*Plus: Release 23.0.0.0.0 - Production on Wed Jul 3 15:05:10 2024
    Version 23.4.0.24.05
     
    Copyright (c) 1982, 2024, Oracle.  All rights reserved.
     
    Last Successful login time: Wed Jul 03 2024 10:52:13 -06:00
     
    Connected to:
    Oracle Database 23c Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
    Version 23.3.0.23.09
     
    SQL>

    You should see the following when connecting to an Oracle 23ai container:

    SQL*Plus: Release 23.0.0.0.0 - Production on Sat Jul 20 11:05:08 2024
    Version 23.4.0.24.05
     
    Copyright (c) 1982, 2024, Oracle.  All rights reserved.
     
    Last Successful login time: Sat Jul 20 2024 10:41:38 -06:00
     
    Connected to:
    Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
    Version 23.4.0.24.05
     
    SQL>
  5. The last step adds all of the configuration settings into the .bashrc file. Before we do that, you may want to add the rlwrap utility so you can use the up arrow to navigate the sqlplus history. You install it as the root or sudo user on Ubuntu, like:

    apt install -y rlwrap

    If you want to review and remove packages that are no longer required, use the following command as the root user:

    apt autoremove

  6. The last step requires that you put the environment variables into the student user’s .bashrc shell script, and add a sqlplus function that takes advantage of the newly installed rlwrap library to read your prior history inside the SQL*Plus command line.

    You should edit the .bashrc file and add the following environment variables and sqlplus() function:

    # Configure Oracle Client software.
    export ORACLE_HOME=/opt/oracle/instantclient_23_4
    export LD_LIBRARY_PATH=$ORACLE_HOME
    export PATH=$PATH:$ORACLE_HOME
    export TNS_ADMIN=$HOME/.oracle
     
    # A user-defined function to wrap the sqlplus history.
    sqlplus () 
    {
        # Discover the fully qualified program name. 
        path=`which rlwrap 2>/dev/null`
        file=''
     
        # Parse the program name from the path.
        if [ -n "${path}" ]; then
            file=${path##/*/}
        fi;
     
        # Wrap sqlplus with rlwrap when the rlwrap binary is found.
        if [ -n "${file}" ] && [[ "${file}" = "rlwrap" ]]; then
            rlwrap $ORACLE_HOME/sqlplus "${@}"
        else
            echo "Command-line history unavailable: Install the rlwrap package."
            $ORACLE_HOME/sqlplus "${@}"
        fi
    }

    You should remember that when you access sqlplus from the Ubuntu environment, the TNS net service name is required. If you forget to include it, like this:

    sqlplus c##student/student

    You’ll get the following error:

    ERROR:
    ORA-12162: TNS:net service name is incorrectly specified
    Help: https://docs.oracle.com/error-help/db/ora-12162/

    The correct way is:

    sqlplus c##student/student@free

As always, I hope this helps those looking for a solution.

Written by maclochlainn

July 3rd, 2024 at 1:58 pm

Updating Nested ADTs


The first part of this series showed how you can leverage Oracle’s SQL syntax with UDT columns and collection columns. It would be nice if Oracle gave you some SQL to work with the elements of ADT collections, but it doesn’t. After all, that’s why you have this article.

While you could change the setup of the prior example table, it’s easier to create a new customer table. The new customer table drops the address column. There’s also a new pizza table. The pizza table includes an ingredient ADT collection column, which by design holds a unique set of ingredients for each pizza.

Realistically, ADT collections of numbers, characters, and dates have little value by themselves, because those values carry little meaning without context. A set of unique strings, however, can be useful for certain use cases.

You create the list ADT type with this syntax:

SQL> CREATE OR REPLACE
  2    TYPE list IS TABLE OF VARCHAR2(20);
  3  /

You create the customer and pizza tables, and customer_s and pizza_s sequences with the following syntax:

SQL> CREATE TABLE customer
  2  ( customer_id  NUMBER
  3  , first_name   VARCHAR2(20)
  4  , last_name    VARCHAR2(20)
  5  , CONSTRAINT pk_customer PRIMARY KEY (customer_id));
 
SQL> CREATE SEQUENCE customer_s;
 
SQL> CREATE TABLE pizza
  2  ( pizza_id     NUMBER
  3  , customer_id  NUMBER
  4  , pizza_size   VARCHAR2(10)
  5  , ingredients  LIST
  6  , CONSTRAINT pk_pizza PRIMARY KEY (pizza_id)
  7  , CONSTRAINT ck_pizza_size
  8    CHECK (pizza_size IN ('Mini','Small','Medium','Large','Very Large')))
  9  NESTED TABLE ingredients STORE AS ingredient_table;
 
SQL> CREATE SEQUENCE pizza_s;

The customer table only has scalar columns. The pizza table has the ingredient ADT collection column. Line 9 creates a nested ingredient_table for the ingredient ADT collection column.

There is a primary and foreign key relationship between the customer and pizza tables. That relationship between the tables requires that you insert rows into the customer table before you insert rows into the pizza table.

The sample script populates the customer table with characters from the Green Arrow television show, as follows:

  Customer
    ID # Last Name  First Name
-------- ---------- ----------
       1 Queen      Oliver
       2 Queen      Thea
       3 Queen      Moira
       4 Lance      Dinah
       5 Lance      Quentin
       6 Diggle     John
       7 Wilson     Slade
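
A hedged sketch of the INSERT statements that would produce those rows (the values come from the table above):

INSERT INTO customer VALUES (customer_s.NEXTVAL,'Oliver','Queen');
INSERT INTO customer VALUES (customer_s.NEXTVAL,'Thea','Queen');
INSERT INTO customer VALUES (customer_s.NEXTVAL,'Moira','Queen');
INSERT INTO customer VALUES (customer_s.NEXTVAL,'Dinah','Lance');
INSERT INTO customer VALUES (customer_s.NEXTVAL,'Quentin','Lance');
INSERT INTO customer VALUES (customer_s.NEXTVAL,'John','Diggle');
INSERT INTO customer VALUES (customer_s.NEXTVAL,'Slade','Wilson');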

Next, you can insert three rows into the pizza table. Each has different ingredients in the ingredient ADT column.

The following is the syntax for the INSERT statements:

SQL> INSERT INTO pizza
  2  VALUES
  3  ( pizza_s.NEXTVAL
  4  ,(SELECT c.customer_id FROM customer c
  5    WHERE  c.first_name = 'Quentin' AND c.last_name = 'Lance')
  6  ,'Large'
  7  , list('Cheese','Marinara Sauce','Sausage','Salami'));
 
SQL> INSERT INTO pizza
  2  VALUES
  3  ( pizza_s.NEXTVAL
  4  ,(SELECT c.customer_id FROM customer c
  5    WHERE  c.first_name = 'Thea' AND c.last_name = 'Queen')
  6  ,'Medium'
  7  , list('Cheese','Marinara Sauce','Canadian Bacon','Pineapple'));
 
SQL> INSERT INTO pizza
  2  VALUES
  3  ( pizza_s.NEXTVAL
  4  ,(SELECT c.customer_id FROM customer c
  5    WHERE  c.first_name = 'John' AND c.last_name = 'Diggle')
  6  ,'Small'
  7  , list('Cheese','BBQ Sauce','Chicken'));

Querying results from tables with nested ADT columns provides interesting results. An ordinary query, like this:

SQL> COL pizza_id     FORMAT 99999  HEADING "Pizza|ID #"
SQL> COL pizza_size   FORMAT A6     HEADING "Pizza|Size"
SQL> COL ingredients  FORMAT A64    HEADING "Ingredients"
SQL> SELECT pizza_id
  2  ,      pizza_size
  3  ,      ingredients
  4  FROM   pizza;

… returns the following results with a flattened object type:

Pizza Pizza
  ID # Size   Ingredients
------ ------ ----------------------------------------------------------------
     1 Large  LIST('Cheese', 'Marinara Sauce', 'Sausage', 'Salami')
     2 Medium LIST('Cheese', 'Marinara Sauce', 'Canadian Bacon', 'Pineapple')
     3 Small  LIST('Cheese', 'BBQ Sauce', 'Chicken')

If you use a CROSS JOIN, it multiplies each row by the number of items in the ADT collection column. The multiplication obscures the results.

The best solution for displaying results from an ADT collection requires that you serialize the results. The following serialize_set PL/SQL function creates a serialized comma separated list:

SQL> CREATE OR REPLACE
  2    FUNCTION serialize_set (pv_list LIST) RETURN VARCHAR2 IS
  3      /* Declare a return string as large as you need. */
  4      lv_comma_string  VARCHAR2(60);
  5    BEGIN
  6      /* Read list of values and serialize them in a string. */
  7      FOR i IN 1..pv_list.COUNT LOOP
  8        IF NOT i = pv_list.COUNT THEN
  9          lv_comma_string := lv_comma_string || pv_list(i) || ', ';
 10        ELSE
 11          lv_comma_string := lv_comma_string || pv_list(i);
 12        END IF;
 13      END LOOP;
 14      RETURN lv_comma_string;
 15    END serialize_set;
 16  /

You can now write a query that uses your PL/SQL function to format the ADT collection column values into a single row. The syntax for the query is:

SQL> SELECT pizza_id
  2  ,      pizza_size
  3  ,      serialize_set(ingredients) AS ingredients
  4  FROM   pizza;

It returns:

Pizza Pizza
  ID # Size   Ingredients
------ ------ -----------------------------------------------------------
     1 Large  Cheese, Marinara Sauce, Sausage, Salami
     2 Medium Cheese, Marinara Sauce, Canadian Bacon, Pineapple
     3 Small  Cheese, BBQ Sauce, Chicken

At this point, you know how to create a table with an ADT collection column and how to insert values. The Oracle documentation says you can only replace the whole content of an ADT column in an UPDATE statement. That’s true in practice but not in principle.

The principle differs because you can write PL/SQL functions that add, change, or remove elements from the ADT collection and that work in an UPDATE statement. The trick is quite simple. You achieve it by:

  • Passing the current ADT collection as an IN-only mode parameter
  • Passing any new parameters when you add or change elements
  • Passing any old parameters when you change or remove elements

Now, you will learn how to create the add_elements, change_elements, and remove_elements PL/SQL functions. They let you use an UPDATE statement to add, change, or remove elements from an ADT collection column.

Adding ADT elements with an UPDATE statement

This section shows you how to add elements to an ADT collection column with an UPDATE statement. The add_elements PL/SQL function can add one or many elements to an ADT collection column. That’s possible because the new element or elements are passed to the function inside an ADT collection parameter.

The merit of this type of solution is that you only need one function to accomplish two tasks. The test cases show you how to pass one new element or a set of new elements.

An alternative solution would have you write two functions. One would accept a collection parameter and a variable length string, and the other would accept two collection parameters. Many developers might choose to do that because they would like to leverage overloading inside PL/SQL packages. You should ask yourself one question when you make the decision about your approach to this problem: Which is easier to maintain and use?

The following creates the add_elements PL/SQL function:

SQL> CREATE OR REPLACE
  2    FUNCTION add_elements
  3    ( pv_list     LIST
  4    , pv_element  LIST ) RETURN LIST IS
  5      /* Declare local return collection variable. */
  6      lv_list  LIST;
  7    BEGIN
  8      /* Check for instantiated collection and initialize when necessary. */
  9      IF pv_list IS NULL THEN
 10        lv_list := list();
 11      ELSE
 12        /* Assign parameter collection to local collection variable. */
 13        lv_list := pv_list;
 14        FOR i IN 1..pv_element.COUNT LOOP
 15          /* Check to avoid duplicates, allocate memory and assign value. */
 16          IF NOT list(pv_element(i)) SUBMULTISET OF lv_list THEN
 17            lv_list.EXTEND;
 18            lv_list(lv_list.COUNT) := pv_element(i);
 19          END IF;
 20        END LOOP;
 21      END IF;
 22  
 23      /* Return new collection. */
 24      RETURN lv_list;
 25    END add_elements;
 26  /

Lines 3 and 4 define the two parameters of the add_elements function as ADT collections. Line 4 also designates the return type of the function, which is the same ADT collection type.

Line 6 declares a local ADT collection variable. You need a local lv_list ADT collection variable because you want to accept two collections and merge them into the local ADT collection variable. Then, you return the local ADT collection variable as the function outcome.

Line 9 checks whether the pv_list parameter is null. Line 10 initializes the lv_list variable when it is null to avoid an uninitialized collection error when you try to assign values to it. Line 13 assigns an initialized ADT collection column’s value to the local lv_list variable. Line 14 starts a loop through the ADT collection you want to add to the ingredient column’s list of values.

Line 16 uses the SUBMULTISET operator to add elements only when they don’t already exist in the ingredient ADT collection column. Line 17 allocates memory space in the lv_list variable, and line 18 assigns a new element to it.

You could extend memory for the total count of elements but that would make the index assignment on line 18 more complex. Combining them increments the count of items and lets you use the count as the index value. Line 24 returns the local ADT collection and replaces the original ingredient column value.

The test case for the function should ensure that only unique values are assigned to the ingredient ADT collection column value. This can be done by a three-step test case. The test queries the values in the ADT collection column, updates them, and re-queries them.

The following query shows you the contents of the row:

SQL> SELECT pizza_id, pizza_size
  2  ,      serialize_set(ingredients) AS ingredients
  3  FROM   pizza
  4  WHERE  customer_id =
  5          ( SELECT customer_id FROM customer
  6            WHERE  first_name = 'Quentin' AND last_name = 'Lance' );

It returns:

Pizza Pizza
  ID # Size   Ingredients
------ ------ -----------------------------------------------------------
     1 Large  Cheese, Marinara Sauce, Sausage, Salami

You can update the ADT collection column’s values with the following UPDATE statement. It attempts to add Sausage and Italian Sausage to the list of values. The function should add only Italian Sausage because Sausage already exists in the list of values. When you re-query the row, you will see that the add_elements function added only the Italian Sausage element.

You would use the following UPDATE statement:

SQL> UPDATE pizza
  2  SET    ingredients =
  3           add_elements(ingredients,list('Italian Sausage','Sausage'))
  4  WHERE  customer_id =
  5          (SELECT customer_id FROM customer
  6           WHERE  first_name = 'Quentin' AND last_name = 'Lance');

Line 3 calls the add_elements PL/SQL function with the ingredient ADT collection column’s value as the first parameter. The second parameter is a dynamically created list of the elements. It contains the element or elements you want to add to the ingredient column’s values.

Re-querying the row, you should see that the UPDATE statement added only the Italian Sausage element to the row. You should see the following output:

Pizza Pizza 
  ID # Size   Ingredients
------ ------ -----------------------------------------------------------
     1 Large  Cheese, Marinara Sauce, Sausage, Salami, Italian Sausage

As you can see, the call to the add_elements function adds only Italian Sausage to the list of values in the ingredient column. A comma-delimited list of single-quote-delimited strings lets you add multiple elements; you add one element by making it the only item in the list constructor call.
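
For instance, a minimal single-element call would look like this ('Mushroom' is an illustrative value):

UPDATE pizza
SET    ingredients = add_elements(ingredients, list('Mushroom'))
WHERE  pizza_id = 1;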

Updating ADT elements with an UPDATE statement

This section shows you how to change elements in an ADT collection column with an UPDATE statement. The change_elements PL/SQL function can change one to many elements in an ADT collection column. That’s possible because the element or elements to change are passed to the function inside a UDT collection parameter.

Unlike the add_elements function, the change_elements function requires an ADT collection parameter and a UDT collection parameter. Each element of the UDT collection holds an old and a new value.

The alternative approach would require you to synchronize two ADT collection value sets: one would hold all the old values, the other all the new values, and both would need to stay in mirrored positional order.

You define a pair UDT object type such as the following:

SQL> CREATE OR REPLACE
  2    TYPE pair IS OBJECT
  3    ( old  VARCHAR2(20)
  4    , NEW  VARCHAR2(20));
  5  /

Next, you define a change UDT collection type:

SQL> CREATE OR REPLACE
  2    TYPE change IS TABLE OF pair;
  3  /

You define the change_elements function as shown below:

SQL> CREATE OR REPLACE
  2    FUNCTION change_elements
  3    ( pv_list     LIST
  4    , pv_element  CHANGE ) RETURN LIST IS
  5     /* Declare local return collection variable. */
  6     lv_list  LIST;
  7    BEGIN
  8     /* Check for instantiated collection and initialize when necessary. */
  9     IF pv_list IS NULL THEN
 10       lv_list := list();
 11     ELSE
 12       /* Assign parameter collection to local collection variable. */
 13       lv_list := pv_list;
 14       FOR i IN 1..pv_element.COUNT LOOP
 15         /* Check to avoid duplicates, allocate memory and assign value. */
 16         IF NOT list(pv_element(i).old) SUBMULTISET OF lv_list THEN
 17           lv_list.EXTEND;
 18           lv_list(lv_list.COUNT) := pv_element(i).NEW;
 19         END IF;
 20       END LOOP;
 21     END IF;
 22  
 23     /* Return new collection. */
 24     RETURN lv_list;
 25    END change_elements;
 26  /

Lines 3 and 4 define the two parameters of the change_elements function. The first pv_list parameter uses the list ADT collection type, which matches the ingredient column’s data type. Line 4 defines a parameter that uses the change UDT collection type, which is a collection of the pair UDT type.

Line 6 declares a local ADT collection variable, just as in the add_elements function. The lv_list variable also serves the same purpose as it does in the add_elements function.

Line 9 checks whether the pv_list parameter is null. Line 10 initializes the lv_list variable when it is null to avoid an uninitialized collection error when you try to assign values to it. Line 13 assigns an initialized ADT collection column’s value to the local lv_list variable. Line 14 starts a loop through the UDT collection of old and new value pairs.

Line 16 uses the SUBMULTISET operator to check whether the old element already exists in the ingredient ADT collection column. When it doesn’t, line 17 allocates memory space in the lv_list variable, and line 18 assigns the new element to it.

The change_elements function couples the memory allocation with the assignment of new values. Line 24 returns the local ADT collection and replaces the original ingredient column value.

The test case shows you how to pass one old and one new element or a set of old and new elements. The initial query shows you the data before the update:

SQL> SELECT pizza_id, pizza_size
  2  ,          serialize_set(ingredients) AS ingredients
  3  FROM   pizza
  4  WHERE  customer_id =
  5           (SELECT customer_id FROM customer
  6            WHERE  first_name = 'Thea' AND last_name = 'Queen');

It returns:

Pizza Pizza
  ID # Size   Ingredients
------ ------ -----------------------------------------------------------
     2 Medium Cheese, Marinara Sauce, Canadian Bacon

You now update the row with the following query:

SQL> UPDATE pizza
  2  SET    ingredients =
  3           change_elements(ingredients
  4                          ,change(pair(old => 'Italian Sausage'
  5                                      ,NEW => 'Linguica')))
  6  WHERE  customer_id =
  7          ( SELECT customer_id FROM customer
  8            WHERE  first_name = 'Thea' AND last_name = 'Queen' );

When you re-query the row, it shows you the following:

Pizza Pizza
  ID # Size   Ingredients
------ ------ -----------------------------------------------------------
     2 Medium Cheese, Marinara Sauce, Canadian Bacon, Linguica

As you can see, the call to the change_elements function changes only Italian Sausage to Linguica in the list of values in the ingredient column, while a comma-delimited list of pair UDT values allows you to change multiple elements. You change one element by making it the only pair UDT in the change constructor call.

Removing ADT elements with an UPDATE statement

This section shows you how to remove elements from an ADT collection column with an UPDATE statement. The remove_elements PL/SQL function can remove one to many elements from an ADT collection column.

The remove_elements function works much like the add_elements function. It uses the same ADT collections as the add_elements function.

The code for the remove_elements function is:

SQL> CREATE OR REPLACE
  2    FUNCTION remove_elements
  3    ( pv_list      LIST
  4    , pv_elements  LIST ) RETURN LIST IS
  5      /* Declare local return collection variable. */
  6      lv_list      LIST;
  7    BEGIN
  8      /* Check for instantiation and element membership. */
  9      IF NOT (pv_list IS NULL OR pv_elements IS NULL) AND
 10             (pv_list.COUNT > 0 AND pv_elements.COUNT > 0) THEN
 11        /* Assign parameters to local variables. */
 12        lv_list := pv_list;
 13        /* Remove any elements from a collection. */
 14        FOR i IN 1..lv_list.COUNT LOOP
 15          FOR j IN 1..pv_elements.COUNT LOOP
 16            IF lv_list(i) = pv_elements(j) THEN
 17              lv_list.DELETE(i);
 18              EXIT;
 19            END IF;
 20          END LOOP;
 21        END LOOP;
 22      END IF;
 23  
 24      /* Return modified collection. */
 25      RETURN lv_list;
 26    END remove_elements;
 27  /

Lines 3, 4, and 6 work like the add_elements function. Lines 9 and 10 differ because they check for initialized collections that hold at least one element each. Line 12 mimics the behavior of line 13 in the add_elements function. Lines 14 through 16 implement a nested loop and a filtering IF statement. The IF statement checks for a valid element to remove from the ingredient ADT column’s list of values.

Line 17 removes an element from the list. Line 18 exits the inner loop to skip the evaluation of other non-matches. It’s possible to do this because the add_elements and change_elements functions ensure a unique list of string values in the ingredient ADT collection.

The test case for the remove_elements function works like the earlier tests. You query the row that you will update to check its values; for instance:

SQL> SELECT pizza_id, pizza_size
  2  ,      serialize_set(ingredients) AS ingredients
  3  FROM   pizza
  4  WHERE  customer_id =
  5          (SELECT customer_id FROM customer
  6           WHERE  first_name = 'Thea' AND last_name = 'Queen');

It should return:

Pizza Pizza
  ID # Size   Ingredients
------ ------ ----------------------------------------------------------------
     2 Medium Cheese, Marinara Sauce, Canadian Bacon, Linguica

You would remove an element from the ingredient ADT collection column with the following UPDATE statement:

SQL> UPDATE pizza
  2  SET    ingredients =
  3           remove_elements(ingredients,list('Canadian Bacon'))
  4  WHERE  customer_id =
  5          ( SELECT customer_id FROM customer
  6            WHERE  first_name = 'Thea' AND last_name = 'Queen' );

When you re-query the row, you should see that Canadian Bacon is no longer an element in the ingredient ADT collection column. Like this:

Pizza Pizza
  ID # Size   Ingredients
------ ------ ----------------------------------------------------------------
     2 Medium Cheese, Marinara Sauce, Linguica

This two-article series has shown you the differences between working with ADT and UDT collections. It has also shown you how to create PL/SQL functions that let you add, change, and remove elements from an ADT column inside an UPDATE statement.

The next step would be for you to put the serialize_set, add_elements, change_elements, and remove_elements functions into an adt package. That package would look like:

SQL> CREATE OR REPLACE
  2    PACKAGE adt IS
  3  
  4    FUNCTION add_elements
  5    ( pv_list     LIST
  6    , pv_element  LIST ) RETURN LIST;
  7  
  8    FUNCTION change_elements
  9    ( pv_list     LIST
 10    , pv_element  CHANGE ) RETURN LIST;
 11  
 12    FUNCTION remove_elements
 13    ( pv_list      LIST
 14    , pv_elements  LIST ) RETURN LIST;
 15  
 16    FUNCTION serialize_set
 17    (pv_list LIST) RETURN VARCHAR2;
 18  
 19  END adt;
 20  /

Beyond writing an ADT package to manage a list of variable length strings, you have the opportunity to extend behaviors further through overloading. Overloading lets you define functions that use the same name with different parameter lists.

For example, you could define LIST_D, LIST_N, and LIST_S as SQL ADTs that implement collections of dates, numbers, and strings respectively. Then, you would write three versions of the preceding four functions. Each set of functions would work with one of the type-specific ADTs, and provide you with a powerful utility package to add, change, remove, and serialize the values of date, number, and string ADTs.
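
A hedged sketch of that overloading approach follows; the list_d, list_n, and list_s type names and the truncated specification are illustrative assumptions:

CREATE OR REPLACE TYPE list_d IS TABLE OF DATE;
/
CREATE OR REPLACE TYPE list_n IS TABLE OF NUMBER;
/
CREATE OR REPLACE TYPE list_s IS TABLE OF VARCHAR2(20);
/
CREATE OR REPLACE PACKAGE adt IS

  /* Overloads resolve on the collection parameter's type. */
  FUNCTION serialize_set (pv_list LIST_D) RETURN VARCHAR2;
  FUNCTION serialize_set (pv_list LIST_N) RETURN VARCHAR2;
  FUNCTION serialize_set (pv_list LIST_S) RETURN VARCHAR2;

  /* Matching overloads for add_elements, change_elements, and
     remove_elements would follow the same pattern. */

END adt;
/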

When you put all the related functions into a package you simplify access and organize for reusability. That way you have all the tools you need inside a single adt package to write advanced UPDATE statements against ADT nested tables.

Written by maclochlainn

May 11th, 2024 at 4:13 pm

Updating Nested Tables


This two-part series covers how you update User-Defined Types (UDTs) and Attribute Data Types (ADTs). There are two varieties of UDTs: one is a column of a UDT object type, and the other is a UDT collection of a UDT object type.

You update nested UDT columns by leveraging the TABLE function. The TABLE function lets you create a result set and access a UDT object or collection column. You need to combine the TABLE function and a CROSS JOIN to update elements of a UDT collection column.

ADTs are collections of a scalar data type. Oracle’s scalar data types are DATE, NUMBER, CHAR, and VARCHAR2 (variable-length strings). ADTs are unique and, from some developers’ perspective, difficult to work with.

The first article in this series shows you how to work with a UDT object type column and a UDT collection type. The second article will show you how to work with an ADT collection type.

PL/SQL uses ADT collections all the time. PL/SQL also uses User-Defined Type (UDT) collections all the time. UDTs can be record or object types, or collections of records and objects. Record types are limited and only work inside a PL/SQL scope. Object types are less limited, and you can use them in a SQL or PL/SQL scope.

Object types come in two flavors. One acts as a typical record structure and has no methods and the other acts like an object type in any object-oriented programming language (OOPL). This article refers only to object types like typical record structures. That means when you read ADTs you should think of a SQL collection of a scalar data type, and when you read UDTs you should think of a SQL collection of an object type without methods.

You can create tables that hold nested tables. Nested tables can use a SQL ADT or UDT data type. Inserting data into nested tables is straightforward when you understand the syntax, but updating nested tables can be complex. The complexity exists because Oracle treats nested tables of ADTs differently than UDTs. My article series will show you how to simplify updating ADT columns.

That’s why it has two parts:

  • How you insert and update rows with UDT columns and collection columns
  • How you insert and update rows with ADT collection columns

If you’re asking yourself why there isn’t a section for deleting rows, that’s simple: you delete them the same way as you would any other row, using the DELETE statement.
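
For example, a minimal sketch of such a delete against the customer table created below (the name values are illustrative):

DELETE
FROM   customer
WHERE  first_name = 'Oliver'
AND    last_name = 'Queen';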

How you insert and update rows with UDT columns and collection columns

This section shows you how to create a table with a UDT column and a UDT collection column. It also shows you how to insert and update the embedded columns.

You insert into any ordinary UDT column by prefacing the data with a constructor name. A constructor name is the same as a UDT name. The following creates an address_type UDT that you will use inside a customer table:

SQL> CREATE OR REPLACE
  2    TYPE address_type IS OBJECT
  3    ( street  VARCHAR2(20)
  4    , city    VARCHAR2(30)
  5    , state   VARCHAR2(2)
  6    , zip     VARCHAR2(5));
  7  /

You should take note that the address_type UDT doesn’t have any methods. All object types without methods have a default constructor. The default constructor follows the same rules as tables in the database.

Create the sample customer table with an address column that uses the address_type UDT as its data type; for instance:

SQL> CREATE TABLE customer
  2  ( customer_id  NUMBER
  3  , first_name   VARCHAR2(20)
  4  , last_name    VARCHAR2(20)
  5  , address      ADDRESS_TYPE
  6  , CONSTRAINT pk_customer PRIMARY KEY (customer_id));

Line 5 defines the address column with the address_type UDT. You insert a row with an embedded address_type data record as follows:

SQL> INSERT
  2  INTO   customer
  3  VALUES
  4  ( customer_s.NEXTVAL
  5  ,'Oliver'
  6  ,'Queen'
  7  , address_type( street => '1 Park Place'
  8                , city   => 'Starling City'
  9                , state  => 'NY'
 10                , zip    => '10001'));

Lines 7 through 10 include the constructor call to the address_type UDT. The address_type constructor uses named notation rather than positional notation. You should always try to use named notation for object type constructor calls.

Updating an element of a UDT object structure is straightforward, because you simply refer to the column and a member of the UDT object structure. The syntax for that type of UPDATE statement follows:

SQL> UPDATE customer c
  2  SET    c.address.state = 'NJ'
  3  WHERE  c.first_name = 'Oliver'
  4  AND    c.last_name = 'Queen';

The address_type UDT works for an object structure but not for a UDT collection. You need to add an attribute to differentiate between elements of the nested collection. You can redefine the address_type UDT as follows:

SQL> CREATE OR REPLACE
  2    TYPE address_type IS OBJECT
  3    ( status  VARCHAR2(8)
  4    , street  VARCHAR2(20)
  5    , city    VARCHAR2(30)
  6    , state   VARCHAR2(2)
  7    , zip     VARCHAR2(5));
  8  /

After creating the UDT object type, you need to create an address_table UDT collection of the address_type UDT object type. You use the following syntax to create the SQL collection:

SQL> CREATE OR REPLACE
  2    TYPE address_table IS TABLE OF address_type;
  3  /

Having both the UDT object and collection types, you can drop and create the customer table with the following syntax:

SQL> CREATE TABLE customer
  2  ( customer_id  NUMBER
  3  , first_name   VARCHAR2(20)
  4  , last_name    VARCHAR2(20)
  5  , address      ADDRESS_TABLE
  6  , CONSTRAINT pk_customer PRIMARY KEY (customer_id))
  7  NESTED TABLE address STORE AS address_tab;

Line 5 defines the address column as a UDT collection. Line 7 instructs how to store the UDT collection as a nested table. You designate the address column as the nested table and store it as an address_tab table. You can access the nested table only through its container, which is the customer table.

You can insert rows into the customer table with the following syntax. This example stores a single row with two elements of the address_type in the nested table:

SQL> INSERT
  2  INTO   customer
  3  VALUES
  4  ( customer_s.NEXTVAL
  5  ,'Oliver'
  6  ,'Queen'
  7  , address_table(
  8        address_type( status   => 'Obsolete'
  9                    , street => '1 Park Place'
 10                    , city => 'Starling City'
 11                    , state => 'NY'
 12                    , zip => '10001')
 13      , address_type( status   => 'Current'
 14                    , street => '1 Dockland Street'
 15                    , city => 'Starling City'
 16                    , state => 'NY'
 17                    , zip => '10001')));

Lines 7 through 17 have two constructor calls for the address_type UDT object type inside the address_table UDT collection. After you insert an address_table UDT collection, you can query an element by using the SQL built-in TABLE function and a CROSS JOIN. The TABLE function returns a SQL result set. The CROSS JOIN lets you create a cross product that you can filter inside the WHERE clause.

A CROSS JOIN between two tables or a table and result set from a nested table matches every row in the customer table with every row in the nested table. A best practice would include a WHERE clause that filters the nested table to a single row in the result set.

The syntax for such a query is complex, and follows below:

SQL> COL first_name  FORMAT A8  HEADING "First|Name"
SQL> COL last_name   FORMAT A8  HEADING "Last|Name"
SQL> COL street      FORMAT A20 HEADING "Street"
SQL> COL city        FORMAT A14 HEADING "City"
SQL> COL state       FORMAT A5  HEADING "State"
SQL> SELECT c.first_name
  2  ,      c.last_name
  3  ,      a.street
  4  ,      a.city
  5  ,      a.state
  6  FROM   customer c CROSS JOIN TABLE(c.address) a
  7  WHERE  a.status = 'Current';

As mentioned, the TABLE function on line 6 translates the UDT collection into a SQL result set, which acts as a temporary table. The alias a becomes the name of the temporary table. Lines 3, 4, 5, and 7 all reference the temporary table.

The query should return the following for the customer and their current address value:

First    Last
Name     Name     Street               City           State
-------- -------- -------------------- -------------- -----
Oliver   Queen    1 Dockland Street    Starling City  NY

Oracle thought through the fact that you should be able to update UDT collections. The same TABLE function lets you update elements in the nested table. You can update the elements in nested UDT tables provided you create a unique key, such as a natural key or primary key. Oracle’s syntax doesn’t support constraints on nested tables, which means you need to implement the key by design and protect it by carefully controlling inserts and updates to the nested table.

You can update the state value of the current address with the following UPDATE statement:

SQL> UPDATE TABLE(SELECT c.address
  2               FROM   customer c
  3               WHERE  c.first_name = 'Oliver'
  4               AND    c.last_name = 'Queen') a
  5  SET    a.state = 'NJ'
  6  WHERE  a.status = 'Current';

Line 5 sets the current state value in the address_table UDT nested table. Line 6 filters the nested table to the current address element. You need to ensure that any UDT object type holds a member attribute, or set of member attributes, with a unique value. That’s because you need a way to find a unique element within a UDT collection. If you re-query the table, you should see the change inside the nested table.

Oracle does not provide equivalent syntax for such a change in an ADT collection type. The second article in this series shows you how to implement PL/SQL functions to solve that problem.

Written by maclochlainn

May 9th, 2024 at 9:38 pm

Oracle23ai Ubuntu Install


What to do with a Late 2015 iMac with an i7 Quad CPU running at 3.4 GHz, 32 GB of RAM, a 5K display, and an almost worn out hybrid 1 TB hard disk? You could sell it to Apple for pennies, but why enrich them? I opted to upgrade it with an OWC kit that had a 2 TB SSD. Then, I installed Ubuntu 22.04 and built a DaaS (Database as a Service) machine with Oracle Database 23ai in a Docker container, and MySQL 8 and PostgreSQL 14 natively.

I’ve posted on installing MySQL 8 and PostgreSQL 14 on Ubuntu before, when I repurposed my late 2014 MacBook Pro. This post covers the installation of Docker and Oracle Database 23ai.

Install Docker

Contrary to some published instructions, you should do the following as a sudoer user:

sudo apt install -y docker.io

Install all dependency packages using the following command:

sudo snap install docker

You should see the following:

docker 20.10.24 from Canonical✓ installed

You can verify the Docker install with the following command:

sudo docker --version

It should show something like this:

Docker version 24.0.5, build 24.0.5-0ubuntu1~22.04.1

You can check the pulled images with the following command, but at this point there should be none.

sudo docker images

At this point, a docker group already exists but you need to add your user to the docker group with the following command:

sudo usermod -aG docker $USER
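
After adding yourself to the docker group, log out and back in (or run the newgrp docker command) so the new membership takes effect. Then, a quick check that you can run Docker without sudo:

docker ps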

Using the Docker commands:

  • To follow a container’s log output, use the -f parameter with the docker logs command.
  • Docker commands, like docker inspect, emit JSON by default; use the jq utility to extract individual keys.
  • In your Dockerfile, there are quite a few instructions where commands may be specified, like RUN, CMD, and ENTRYPOINT.
  • Writing to mounted volumes can be more effective than copying files while an image is being built.
  • Docker lends itself to aliases for its lengthier commands, which makes them easier to set up and run. These alias values are stored in the ~/.bashrc or ~/.bash_aliases files.
  • Docker provides commands, like docker system prune, to remove unused objects from the installation.
  • Docker favors cached layers built from instructions that have not changed. Therefore, you can save build time by arranging the Dockerfile so the instructions least likely to change appear at the top and those most susceptible to change appear at the end.

Install Oracle Database 23ai Free in a Docker container

Use the following command to pull and install the Oracle Database 23ai container:

sudo docker run --name oracle23ai -p 1521:1521 -p 5500:5500 -e ORACLE_PWD=cangetin container-registry.oracle.com/database/free:latest

After installing the Oracle Database 23ai Free container, you can access it as the root user by default with this syntax:

docker exec -it -u root oracle23ai bash

At the root prompt, you can connect to the system schema with the following command:

sqlplus system/cangetin@FREE

You should see the following:

SQL*Plus: Release 23.0.0.0.0 - Production on Thu May 9 03:56:57 2024
Version 23.4.0.24.05
 
Copyright (c) 1982, 2024, Oracle.  All rights reserved.
 
Last Successful login time: Wed Apr 24 2024 21:23:00 +00:00
 
Connected to:
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.4.0.24.05
 
SQL>

Create a c##student sandbox user:

After you create and provision the Oracle Database 23ai Free container, you can create a c##student sandboxed user with the following three-step process.

  1. Create a c##student Oracle user account with the following command as the system user:

    CREATE USER c##student IDENTIFIED BY student
    DEFAULT TABLESPACE users QUOTA 200M ON users
    TEMPORARY TABLESPACE temp;

  2. Grant necessary privileges to the newly created c##student user:

    GRANT CREATE CLUSTER, CREATE INDEXTYPE, CREATE OPERATOR
    ,     CREATE PROCEDURE, CREATE SEQUENCE, CREATE SESSION
    ,     CREATE TABLE, CREATE TRIGGER, CREATE TYPE
    ,     CREATE VIEW TO c##student;

  3. Connect as the sandboxed user with the following syntax (the c## prefix designates a common user account, as qualified in Oracle Database 12c forward):

    SQL> CONNECT c##student/student@FREE

    or, disconnect and reconnect with this syntax:

    sqlplus c##student/student@FREE

Set Docker Oracle 23ai to start always

Assuming that your container’s name is oracle23ai, as qualified above, you can run the following command to make the Docker container restart automatically:

docker update --restart=always `docker ps -aqf "name=oracle23ai"`

The docker command inside the backquotes uses the Docker instance’s name to return the Docker container_id value, which can also be seen when you run the following command:

docker ps

which returns:

CONTAINER ID   IMAGE                                                COMMAND                  CREATED       STATUS                    PORTS                                                                                  NAMES
b211f494e692   container-registry.oracle.com/database/free:latest   "/bin/bash -c $ORACL…"   13 days ago   Up 18 minutes (healthy)   0.0.0.0:1521->1521/tcp, :::1521->1521/tcp, 0.0.0.0:5500->5500/tcp, :::5500->5500/tcp   oracle23ai

The Docker container_id value is required when you perform a Docker update operation.
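
You can confirm the new restart policy took effect with a docker inspect command, which should print always:

docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' oracle23ai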

Configuring your Docker Oracle 23ai environment

Unless you like memorizing the Docker command line, you may want to automate connecting as the root user or as a sandboxed user. The root user typically has more power than you need to perform ordinary development and use-case testing tasks.

A sandboxed user has narrow access and can’t start and stop the database instance or perform Oracle Database 23ai administration. In this segment, you’ll learn how to create a couple of local Bash functions to simplify your use of the Oracle Database 23ai container, and how to extend the configuration of Oracle’s Docker container:

  • Adding a student user to the Docker container and configuring it to access the Oracle Database 23ai locally from within the Docker container using a direct sqlplus connection.
  • Configuring the Docker container to support external files and leverage a shared directory with your base operating system.

Automating Docker instance connections:

The following shows you how to add a local Bash function to automate access to the Docker container from the Linux command-line. You put the following Bash function in your base Linux operating system’s user .bashrc file:

  1. Create the following Bash function:

    # User defined function to launch Oracle 23 ai container
    # as the root user.
    admin () 
    {
        # Discover the fully qualified program name. 
        path=`which docker 2>/dev/null`
        file=''
     
        # Parse the program name from the path.
        if [ -n "${path}" ]; then
            file=${path##*/}
        fi
     
        # Launch the container shell when the docker client is found.
        if [ -n "${file}" ] && [[ "${file}" = "docker" ]]; then
            python3 -c "import subprocess; subprocess.run(['docker exec -it --user root oracle23ai bash'], shell=True)" 
        else
            echo "Docker is unavailable: Install the docker package."
        fi
    }

  2. After you source the .bashrc file, or simply open a new terminal session as your user (which re-sources the .bashrc file), you can access the oracle23ai Docker instance with this command:

    admin

    It will display a new prompt with the root user and the Docker container_id value, like:

    [root@b211f494e692 oracle]#

    You can exit the Docker container by typing exit at the Linux command line. If you’re curious what version of Linux you’re running inside the Docker instance, you can’t use the uname command because it reports the kernel of the hosting Linux distribution (distro), which the container shares. You must use the following when inside the Docker instance:

    cat /etc/os-release

    or, outside the Docker instance you can use the following docker command:

    docker exec oracle23ai cat /etc/os-release

    Either way, for an Oracle Database 23ai container, it should return:

    NAME="Oracle Linux Server"
    VERSION="8.9"
    ID="ol"
    ID_LIKE="fedora"
    VARIANT="Server"
    VARIANT_ID="server"
    VERSION_ID="8.9"
    PLATFORM_ID="platform:el8"
    PRETTY_NAME="Oracle Linux Server 8.9"
    ANSI_COLOR="0;31"
    CPE_NAME="cpe:/o:oracle:linux:8:9:server"
    HOME_URL="https://linux.oracle.com/"
    BUG_REPORT_URL="https://github.com/oracle/oracle-linux"
     
    ORACLE_BUGZILLA_PRODUCT="Oracle Linux 8"
    ORACLE_BUGZILLA_PRODUCT_VERSION=8.9
    ORACLE_SUPPORT_PRODUCT="Oracle Linux"
    ORACLE_SUPPORT_PRODUCT_VERSION=8.9

    Unfortunately, Oracle appears to have blocked updates to the Oracle Unbreakable Linux 8 instance inside the container, which makes native SQL*Plus use more difficult. That’s because you’ll need to install the Oracle SQL*Plus client in the hosting operating system.

    I’ve written a separate blog post that instructs you on how to install and use Oracle SQL*Plus client on Ubuntu.
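
    If you also want one-step access as the sandboxed user, a second Bash function works. The following is a minimal sketch (student is a hypothetical function name, and it assumes the sqlplus executable is on the default user’s PATH inside the container):

    # User defined function to launch SQL*Plus in the Oracle 23ai
    # container as the c##student sandboxed user.
    student ()
    {
        docker exec -it oracle23ai sqlplus c##student/student@FREE
    }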

Install SQL Developer in the base Linux operating system

The first steps are installing the Java Runtime Environment and Java Development Kit, and then downloading, installing and configuring SQL Developer. These are the required steps:

  1. Install the Java Runtime Environment:

    sudo apt install default-jre

  2. Install the Java Development Kit:

    sudo apt install -y default-jdk

  3. Download SQL Developer from here, and then install it into the /opt directory on your local Ubuntu instance:

    Use the following command to unzip the SQL Developer files into the /opt directory:

    sudo unzip ~/Downloads/sqldeveloper-23.1.0.097.1607-no-jre.zip -d /opt
  4. Create the following /usr/local/bin/sqldeveloper symbolic link:

    sudo ln -s /opt/sqldeveloper/sqldeveloper.sh /usr/local/bin/sqldeveloper
  5. Edit the /opt/sqldeveloper/sqldeveloper.sh file by replacing the following line:

    cd "`dirname $0`"/sqldeveloper/bin && bash sqldeveloper $*

    with this version:

    /opt/sqldeveloper/sqldeveloper/bin/sqldeveloper $*
  6. Now, you can launch SQL Developer from any location on your local Ubuntu operating system, like:

    sqldeveloper
  7. You can now connect as the system user through SQL Developer to the Oracle Database 23ai Free Docker instance. Use system as the username, cangetin as the password, localhost as the hostname, 1521 as the port, and FREE as the service name.

  8. You can also create a Desktop shortcut by creating the sqldeveloper.desktop file in the /usr/share/applications directory. The SQL Developer icon is provided in the sqldeveloper base directory.

    You should create the following sqldeveloper.desktop file to use a Desktop shortcut:

    [Desktop Entry]
    Name=Oracle SQL Developer
    Comment=SQL Developer from Oracle
    GenericName=SQL Tool
    Exec=/usr/local/bin/sqldeveloper
    Icon=/opt/sqldeveloper/icon.png
    Type=Application
    StartupNotify=true
    Categories=Utility;Oracle;Development;SQL;
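
    If the shortcut doesn’t appear in your launcher right away, you can sanity-check the file with the desktop-file-validate utility from the desktop-file-utils package (an optional check, assuming that package is installed):

    desktop-file-validate /usr/share/applications/sqldeveloper.desktop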

As always, I hope this helps those trying to accomplish this task.

Written by maclochlainn

May 8th, 2024 at 10:12 pm

Learning SQL Exercise

without comments

I’ve been using Alan Beaulieu’s Learning SQL to teach my SQL Development class with MySQL 8. It’s a great book overall, but Chapter 12 lacks a complete exercise. Here’s all that the author provides to the reader, which is inadequate for most readers to work through the concept of a transaction.

Exercise 12-1

Generate a unit of work to transfer $50 from account 123 to account 789. You will need to insert two rows into the transaction table and update two rows in the account table. Use the following table definitions/data:

                      Account:
account_id     avail_balance    last_activity_date
-----------    --------------   ------------------
       123               450    2019-07-10 20:53:27
       789               125    2019-06-22 15:18:35
 
                      Transaction:
txn_id    txn_date      account_id    txn_type_cd    amount
------    ----------    ----------    -----------    ------
  1001    2019-05-15           123    C                 500
  1002    2019-06-01           789    C                  75

Use txn_type_cd = 'C' to indicate a credit (addition), and use txn_type_cd = 'D' to indicate a debit (subtraction).

New Exercise 12-1

The problem with the exercise description is that the sakila database, which is used for most of the book, doesn’t have transaction or account tables. Nor are there any instructions about general accounting practices or principles. These missing components make it hard for students to understand how to build the transaction.

The first thing the exercise’s problem definition should qualify is how to create the account and transaction tables, like:

  1. Create the account table, like this, with an initial auto-incrementing value of 1001:

    -- +--------------------+--------------+------+-----+---------+----------------+
    -- | Field              | Type         | Null | Key | Default | Extra          |
    -- +--------------------+--------------+------+-----+---------+----------------+
    -- | account_id         | int unsigned | NO   | PRI | NULL    | auto_increment |
    -- | avail_balance      | double       | NO   |     | NULL    |                |
    -- | last_activity_date | datetime     | NO   |     | NULL    |                |
    -- +--------------------+--------------+------+-----+---------+----------------+
  2. Create the transaction table, like this, with an initial auto-incrementing value of 1001:

    -- +----------------+--------------+------+-----+---------+----------------+
    -- | Field          | Type         | Null | Key | Default | Extra          |
    -- +----------------+--------------+------+-----+---------+----------------+
    -- | txn_id         | int unsigned | NO   | PRI | NULL    | auto_increment |
    -- | txn_date       | datetime     | YES  |     | NULL    |                |
    -- | account_id     | int unsigned | YES  |     | NULL    |                |
    -- | txn_type_cd    | varchar(1)   | NO   |     | NULL    |                |
    -- | amount         | double       | YES  |     | NULL    |                |
    -- +----------------+--------------+------+-----+---------+----------------+

Checking accounts are liabilities to banks, which means you credit a liability account to increase its value and debit it to decrease its value. You should insert the initial rows into the account table with a zero avail_balance. Then, make these initial deposits:

  1. Credit the transaction table with an account_id column value of 123, a $500 amount, and a txn_type_cd column value of 'C'.
  2. Credit the transaction table with an account_id column value of 789, a $75 amount, and a txn_type_cd column value of 'C'.

Write an update statement to set the avail_balance column values equal to the aggregate sum of the transaction table’s rows, which treats credit transactions (those with a 'C' in the txn_type_cd column) as positive numbers and debit transactions (those with a 'D' in the txn_type_cd column) as negative numbers.

Generate a unit of work to transfer $50 from account 123 to account 789. You will need to insert two rows into the transaction table and update two rows in the account table:

  1. Debit the transaction table with an account_id column value of 123, a $50 amount, and a txn_type_cd column value of 'D'.
  2. Credit the transaction table with an account_id column value of 789, a $50 amount, and a txn_type_cd column value of 'C'.

Apply the prior update statement to set the avail_balance column values equal to the aggregate sum of the transaction table’s rows, again treating credit transactions as positive numbers and debit transactions as negative numbers.

Here’s the solution to the problem:

-- +--------------------+--------------+------+-----+---------+----------------+
-- | Field              | Type         | Null | Key | Default | Extra          |
-- +--------------------+--------------+------+-----+---------+----------------+
-- | account_id         | int unsigned | NO   | PRI | NULL    | auto_increment |
-- | avail_balance      | double       | NO   |     | NULL    |                |
-- | last_activity_date | datetime     | NO   |     | NULL    |                |
-- +--------------------+--------------+------+-----+---------+----------------+
 
DROP TABLE IF EXISTS account, transaction;
 
CREATE TABLE account
( account_id          int unsigned PRIMARY KEY AUTO_INCREMENT
, avail_balance       double       NOT NULL
, last_activity_date  datetime     NOT NULL )
 ENGINE=InnoDB 
 AUTO_INCREMENT=1001 
 DEFAULT CHARSET=utf8mb4 
 COLLATE=utf8mb4_0900_ai_ci;
 
-- +----------------+--------------+------+-----+---------+----------------+
-- | Field          | Type         | Null | Key | Default | Extra          |
-- +----------------+--------------+------+-----+---------+----------------+
-- | txn_id         | int unsigned | NO   | PRI | NULL    | auto_increment |
-- | txn_date       | datetime     | YES  |     | NULL    |                |
-- | account_id     | int unsigned | YES  |     | NULL    |                |
-- | txn_type_cd    | varchar(1)   | NO   |     | NULL    |                |
-- | amount         | double       | YES  |     | NULL    |                |
-- +----------------+--------------+------+-----+---------+----------------+
 
CREATE TABLE transaction
( txn_id         int unsigned  PRIMARY KEY AUTO_INCREMENT
, txn_date       datetime      NOT NULL
, account_id     int unsigned  NOT NULL
, txn_type_cd    varchar(1)
, amount         double
, CONSTRAINT transaction_fk1 FOREIGN KEY (account_id)
 REFERENCES account(account_id))
 ENGINE=InnoDB
 AUTO_INCREMENT=1001
 DEFAULT CHARSET=utf8mb4
 COLLATE=utf8mb4_0900_ai_ci;
 
-- Insert initial accounts.
INSERT INTO account
( account_id
, avail_balance
, last_activity_date )
VALUES
( 123
, 0
,'2019-07-10 20:53:27');
 
INSERT INTO account
( account_id
, avail_balance
, last_activity_date )
VALUES
( 789
, 0
,'2019-06-22 15:18:35');
 
-- Insert initial deposits.
INSERT INTO transaction
( txn_date
, account_id
, txn_type_cd
, amount )
VALUES
( CAST(NOW() AS DATE)
, 123
,'C'
, 500 );
 
INSERT INTO transaction
( txn_date
, account_id
, txn_type_cd
, amount )
VALUES
( CAST(NOW() AS DATE)
, 789
,'C'
, 75 );
 
UPDATE account a
SET    a.avail_balance = 
 (SELECT  SUM(
            CASE
              WHEN t.txn_type_cd = 'C' THEN amount
              WHEN t.txn_type_cd = 'D' THEN amount * -1
            END) AS amount
 FROM     transaction t
 WHERE    t.account_id = a.account_id
 AND      t.account_id IN (123,789)
 GROUP BY t.account_id);
 
SELECT * FROM account;
SELECT * FROM transaction;
 
-- Insert the transfer debit and credit.
INSERT INTO transaction
( txn_date
, account_id
, txn_type_cd
, amount )
VALUES
( CAST(NOW() AS DATE)
, 123
,'D'
, 50 );
 
INSERT INTO transaction
( txn_date
, account_id
, txn_type_cd
, amount )
VALUES
( CAST(NOW() AS DATE)
, 789
,'C'
, 50 );
 
UPDATE account a
SET    a.avail_balance = 
 (SELECT  SUM(
            CASE
              WHEN t.txn_type_cd = 'C' THEN amount
              WHEN t.txn_type_cd = 'D' THEN amount * -1
            END) AS amount
 FROM     transaction t
 WHERE    t.account_id = a.account_id
 AND      t.account_id IN (123,789)
 GROUP BY t.account_id);
 
SELECT * FROM account;
SELECT * FROM transaction;

The results are:

+------------+---------------+---------------------+
| account_id | avail_balance | last_activity_date  |
+------------+---------------+---------------------+
|        123 |           450 | 2019-07-10 20:53:27 |
|        789 |           125 | 2019-06-22 15:18:35 |
+------------+---------------+---------------------+
2 rows in set (0.00 sec)
 
+--------+---------------------+------------+-------------+--------+
| txn_id | txn_date            | account_id | txn_type_cd | amount |
+--------+---------------------+------------+-------------+--------+
|   1001 | 2024-04-01 00:00:00 |        123 | C           |    500 |
|   1002 | 2024-04-01 00:00:00 |        789 | C           |     75 |
|   1003 | 2024-04-01 00:00:00 |        123 | D           |     50 |
|   1004 | 2024-04-01 00:00:00 |        789 | C           |     50 |
+--------+---------------------+------------+-------------+--------+
4 rows in set (0.00 sec)
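
Since the exercise calls the transfer a unit of work, you can also wrap the two transfer inserts and the balance update in an explicit transaction so they succeed or fail together. Here’s a minimal sketch that pipes the statements through the mysql client; it assumes the student user and studentdb database used elsewhere on this blog:

mysql -ustudent -pstudent studentdb <<'SQL'
START TRANSACTION;
INSERT INTO transaction
( txn_date, account_id, txn_type_cd, amount )
VALUES
( CAST(NOW() AS DATE), 123, 'D', 50 );
INSERT INTO transaction
( txn_date, account_id, txn_type_cd, amount )
VALUES
( CAST(NOW() AS DATE), 789, 'C', 50 );
UPDATE account a
SET    a.avail_balance =
 (SELECT   SUM(CASE
                 WHEN t.txn_type_cd = 'C' THEN t.amount
                 WHEN t.txn_type_cd = 'D' THEN t.amount * -1
               END)
  FROM     transaction t
  WHERE    t.account_id = a.account_id
  GROUP BY t.account_id)
WHERE  a.account_id IN (123,789);
COMMIT;
SQL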

As always, I hope this helps those trying to understand how transactions work in MySQL.

Written by maclochlainn

April 1st, 2024 at 12:32 am

Ubuntu Pro Upgrade?

without comments

There wasn’t a choice when I updated the Ubuntu instance; I was compelled to upgrade to Ubuntu Pro. According to the upgrade prompt, I have five free installations. You can read more about Ubuntu Pro on this web page, and find their pricing schedule on this page.

Written by maclochlainn

March 13th, 2024 at 9:04 pm

MongoDB on Ubuntu

without comments

This post shows how to install, configure, and use MongoDB with JavaScript programs. You need to complete each section in the order provided (based on a Cherry Servers post).

Step #1: MongoDB Installation

Install the prerequisite packages with the following command:

sudo apt install -y software-properties-common gnupg apt-transport-https ca-certificates

Import the public key for MongoDB on your system using the curl command:

curl -fsSL https://pgp.mongodb.com/server-7.0.asc |  sudo gpg -o /usr/share/keyrings/mongodb-server-7.0.gpg --dearmor

Add MongoDB 7.0 APT repository to the /etc/apt/sources.list.d directory:

echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-7.0.list

Reload the local package index, which refreshes the local repositories and makes Ubuntu aware of the newly added MongoDB repository:

sudo apt update

Install the mongodb-org meta-package:

sudo apt install -y mongodb-org

Verify the installed version of MongoDB with this command:

mongod --version

It should display:

db version v7.0.6
Build Info: {
    "version": "7.0.6",
    "gitVersion": "66cdc1f28172cb33ff68263050d73d4ade73b9a4",
    "openSSLVersion": "OpenSSL 3.0.2 15 Mar 2022",
    "modules": [],
    "allocator": "tcmalloc",
    "environment": {
        "distmod": "ubuntu2204",
        "distarch": "x86_64",
        "target_arch": "x86_64"
    }
}

Step #2: Start MongoDB Service & Shell

You can verify that the installed mongod service is disabled after the initial installation with this command:

sudo systemctl status mongod

It should display:

○ mongod.service - MongoDB Database Server
     Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)
     Active: inactive (dead)
       Docs: https://docs.mongodb.org/manual

Exit the output display from the systemctl utility by typing q, which quits the less pager that systemctl uses for long output.

You can start the MongoDB service with this command:

sudo systemctl start mongod

Then, check the MongoDB service:

sudo systemctl status mongod

It displays:

● mongod.service - MongoDB Database Server
     Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)
     Active: active (running) since Thu 2024-03-07 16:38:17 MST; 2s ago
       Docs: https://docs.mongodb.org/manual
   Main PID: 33795 (mongod)
     Memory: 79.2M
        CPU: 706ms
     CGroup: /system.slice/mongod.service
             └─33795 /usr/bin/mongod --config /etc/mongod.conf
Mar 07 16:38:17 student-virtual-machine systemd[1]: Started MongoDB Database Server.
Mar 07 16:38:17 student-virtual-machine mongod[33795]: {"t":{"$date":"2024-03-07T23:38:17.642Z"},"s">

You can confirm that the database is up and running by checking if the server is listening on its default port, which is port 27017. Run the ss command to check the port number.

sudo ss -pnltu | grep 27017

It will display:

tcp   LISTEN 0      4096       127.0.0.1:27017      0.0.0.0:*    users:(("mongod",pid=33795,fd=14))

You can enable the mongod service at startup with the following command:

sudo systemctl enable mongod

It should display the following confirmation:

Created symlink /etc/systemd/system/multi-user.target.wants/mongod.service → /lib/systemd/system/mongod.service.

Now, start the MongoDB Shell (mongosh) by typing either the explicit or implicit MongoDB Shell command. The explicit one uses the port and database path, which are unnecessary when you’ve successfully started the mongod service. (Please note that at the time of writing this blog post there is erroneous, or obsolete, content on the MongoDB Documentation Enable Access Control web page.)

Explicit connection:

mongosh  --port 27017 --db /var/lib/mongodb --help

This version of the command will display most of the options available in MongoDB but it will suppress warning messages.

$ mongosh [options] [db address] [file names (ending in .js or .mongodb)]
 
  Options:
 
    -h, --help                                 Show this usage information
    -f, --file [arg]                           Load the specified mongosh script
        --host [arg]                           Server to connect to
        --port [arg]                           Port to connect to
        --build-info                           Show build information
        --version                              Show version information
        --quiet                                Silence output from the shell during the connection process
        --shell                                Run the shell after executing files
        --nodb                                 Don't connect to mongod on startup - no 'db address' [arg] expected
        --norc                                 Will not run the '.mongoshrc.js' file on start up
        --eval [arg]                           Evaluate javascript
        --json[=canonical|relaxed]             Print result of --eval as Extended JSON, including errors
        --retryWrites[=true|false]             Automatically retry write operations upon transient network errors (Default: true)
 
  Authentication Options:
 
    -u, --username [arg]                       Username for authentication
    -p, --password [arg]                       Password for authentication
        --authenticationDatabase [arg]         User source (defaults to dbname)
        --authenticationMechanism [arg]        Authentication mechanism
        --awsIamSessionToken [arg]             AWS IAM Temporary Session Token ID
        --gssapiServiceName [arg]              Service name to use when authenticating using GSSAPI/Kerberos
        --sspiHostnameCanonicalization [arg]   Specify the SSPI hostname canonicalization (none or forward, available on Windows)
        --sspiRealmOverride [arg]              Specify the SSPI server realm (available on Windows)
 
  TLS Options:
 
        --tls                                  Use TLS for all connections
        --tlsCertificateKeyFile [arg]          PEM certificate/key file for TLS
        --tlsCertificateKeyFilePassword [arg]  Password for key in PEM file for TLS
        --tlsCAFile [arg]                      Certificate Authority file for TLS
        --tlsAllowInvalidHostnames             Allow connections to servers with non-matching hostnames
        --tlsAllowInvalidCertificates          Allow connections to servers with invalid certificates
        --tlsCertificateSelector [arg]         TLS Certificate in system store (Windows and macOS only)
        --tlsCRLFile [arg]                     Specifies the .pem file that contains the Certificate Revocation List
        --tlsDisabledProtocols [arg]           Comma separated list of TLS protocols to disable [TLS1_0,TLS1_1,TLS1_2]
        --tlsUseSystemCA                       Load the operating system trusted certificate list
        --tlsFIPSMode                          Enable the system TLS library's FIPS mode
 
  API version options:
 
        --apiVersion [arg]                     Specifies the API version to connect with
        --apiStrict                            Use strict API version mode
        --apiDeprecationErrors                 Fail deprecated commands for the specified API version
 
  FLE Options:
 
        --awsAccessKeyId [arg]                 AWS Access Key for FLE Amazon KMS
        --awsSecretAccessKey [arg]             AWS Secret Key for FLE Amazon KMS
        --awsSessionToken [arg]                Optional AWS Session Token ID
        --keyVaultNamespace [arg]              database.collection to store encrypted FLE parameters
        --kmsURL [arg]                         Test parameter to override the URL of the KMS endpoint
 
  DB Address Examples:
 
        foo                                    Foo database on local machine
        192.168.0.5/foo                        Foo database on 192.168.0.5 machine
        192.168.0.5:9999/foo                   Foo database on 192.168.0.5 machine on port 9999
        mongodb://192.168.0.5:9999/foo         Connection string URI can also be used
 
  File Names:
 
        A list of files to run. Files must end in .js and will exit after unless --shell is specified.
 
  Examples:
 
        Start mongosh using 'ships' database on specified connection string:
        $ mongosh mongodb://192.168.0.5:9999/ships
 
  For more information on usage: https://docs.mongodb.com/mongodb-shell.

Implicit connection:

mongosh

You should see the following message, along with any warning messages:

Current Mongosh Log ID:	65ea502a97f4c1e2b7e12af4
Connecting to:		mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.1.5
Using MongoDB:		7.0.6
Using Mongosh:		2.1.5
 
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
 
To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).
You can opt-out by running the disableTelemetry() command.
 
------
   The server generated these startup warnings when booting
   2024-03-07T16:38:17.818-07:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
   2024-03-07T16:38:18.350-07:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
   2024-03-07T16:38:18.350-07:00: vm.max_map_count is too low
------

You can opt out of the data collection by running the disableTelemetry() command from the Linux command line. Use the following command (a broader explanation is in the MongoDB Telemetry documentation):

mongosh --nodb --eval "disableTelemetry()"

It should return:

Current Mongosh Log ID:	65eab2df3e663bde3711fa2f
Using Mongosh:		2.1.5
 
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
 
Telemetry is now disabled.

You still have three warning messages to deal with at this point. You should fix the vm.max_map_count warning first. This is a Linux kernel issue. You can determine the current value of the vm.max_map_count value with this command:

cat /proc/sys/vm/max_map_count

It should return the system default value:

65530

You can change it at runtime with this command:

sudo sysctl -w vm.max_map_count=262144

However, you must restart the mongod service to see the change in the mongosh shell. A runtime change doesn’t survive a reboot, so the warning about the kernel parameter value being too low will return after you reboot your operating system unless you also make the setting permanent. You can restart your mongod service with this command:

sudo service mongod restart

You can make a change to the /etc/sysctl.conf file to ensure the parameter is set to the correct value each time the system reboots. Simply add the following line to your /etc/sysctl.conf file as the root user, or by prefacing a text editor of your choice (like vim or nano) with sudo:

# Adding vm.max_map_count to sysctl.conf defaults.
vm.max_map_count=262144
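
After saving the file, you can load the new value without rebooting by reloading the sysctl settings:

sudo sysctl -p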

At this point, you’ve eliminated two of the warning messages. The next step shows you how to enable Access Control. If you want to check the general server status, run the following command from the Linux Command-Line Interface (CLI):

mongosh --eval "db.serverStatus()" > server_status.log

You can inspect the log file, which should be slightly less than 2,000 lines of output with a MongoDB 7.0.6 installation. Running the command from the Linux CLI is generally the easiest way to capture the output from the db.serverStatus() function, which is too long to scroll through in the console.
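
Because the file is long, grep is the quickest way to pull out individual entries from the Linux CLI; for example, the following prints the lines around the uptime key, which db.serverStatus() returns:

grep -n -A 2 'uptime' server_status.log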

Step #3: MongoDB Enabling Access Control

Connect to the mongosh …

Step #4: MongoDB Installing Node.js and React.js

Install Node.js with the following command:

sudo apt install -y nodejs

You can check the Node.js version with this command:

node -v

v12.22.9

Install the Node.js package manager npm with the following command:

sudo apt install -y npm

You can check the npm version with this command:

npm -v

8.5.1
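
Before writing any MongoDB JavaScript programs, you can confirm the Node.js runtime works with a one-line script:

node -e 'console.log("Node.js", process.version)'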

As always, I hope this helps those looking for a concise and complete free answer.

Written by maclochlainn

March 7th, 2024 at 11:10 pm

Parametric Queries

without comments

In 2021, I wrote a MySQL example for my class on the usefulness of Common Table Expressions (CTEs). When discussing the original post, I would comment on how you could extend the last example to build a parametric reporting table.

Somebody finally asked for a concrete example. So, this post explains how to build a sample MySQL parametric query by leveraging a filtering cross join, and it tests the parameter use with a Python script.

You can build this in any database you prefer, but I used a studentdb database with the sakila sample database installed. I’ve granted privileges on both databases to the student user. The following SQL is required for the example:

-- Conditionally drop the levels table.
DROP TABLE IF EXISTS levels;
 
-- Create the levels list.
CREATE TABLE levels
( level_id       int unsigned primary key auto_increment
, parameter_set  enum('Three','Five')
, description    varchar(20)
, min_roles      int
, max_roles      int );
 
-- Insert values into the list table.
INSERT INTO levels
( parameter_set
, description
, min_roles
, max_roles )
VALUES
 ('Three','Hollywood Star', 30, 99999)
,('Three','Prolific Actor', 20, 29)
 ,('Three','Newcomer',1,19)
 ,('Five','Newcomer',1,9)
,('Five','Junior Actor',10,19)
,('Five','Professional Actor',20,29)
,('Five','Major Actor',30,39)
,('Five','Hollywood Star',40,99999);

The sample lets you use the three- or five-level labels while filtering on any partial last_name value, as in the query below:

-- Query the data.
WITH actors AS
 (SELECT   a.actor_id
  ,        a.first_name
  ,        a.last_name
  ,        COUNT(*) AS num_roles
  FROM     sakila.actor a INNER JOIN sakila.film_actor fa
  ON       a.actor_id = fa.actor_id
  GROUP BY actor_id)
SELECT   CONCAT(a.last_name,', ',a.first_name) full_name
,        l.description
,        a.num_roles
FROM     actors a CROSS JOIN levels l
WHERE    a.num_roles BETWEEN l.min_roles AND l.max_roles
AND      l.parameter_set = 'Five'
AND      a.last_name LIKE CONCAT('H','%')
ORDER BY a.last_name
,        a.first_name;

It extends a concept exercise found in Chapter 9 on subqueries in Alan Beaulieu’s Learning SQL book.

This is the parametric Python program, which embeds the function locally (to make it easier for those who don’t write a lot of Python). You could set the PYTHONPATH to a relative src directory and import your function if you prefer.

#!/usr/bin/python
 
# Import the libraries.
import sys
import mysql.connector
from mysql.connector import errorcode
 
# ============================================================
 
# Define function to check and replace arguments.
def check_replace(argv):
 
  # Set defaults for incorrect parameter values.
  defaults = ("Three","_")
 
  # Declare empty list variables.
  inputs = []
  args = ()
 
  # Check whether or not parameters exist after file name.
  if isinstance(argv,list) and len(argv) != 0:
 
    # Check whether there are at least two parameters.
    if len(argv) >= 2:
 
      # Loop through available command-line arguments.
      for element in argv:
 
        # Check first of two parameter values and substitute
        # default value if input value is an invalid option.
        if len(inputs) == 0 and (element in ('Three','Five')) or \
           len(inputs) == 1 and (isinstance(element,str)):
          inputs.append(element)
        elif len(inputs) == 0:
          inputs.append(defaults[0])
        elif len(inputs) == 1:
          inputs.append(defaults[1])
 
      # Assign arguments to parameters.
      args = tuple(inputs)
 
    # Check whether only one parameter value exists.
    elif len(argv) == 1 and (argv[0] in ('Three','Five')):
      args = (argv[0],"_")
 
    # Substitute the default parameter set and a wildcard when
    # the single parameter value is invalid.
    else:
      args = (defaults[0],"_")
 
    # Substitute defaults when missing parameters.
  else:
    args = defaults
 
  # Return parameters as a tuple.
  return args
 
# ============================================================
 
# Assign command-line argument list to variable by removing
# the program file name.
# ============================================================
params = check_replace(sys.argv[1:])
# ============================================================
 
#  Attempt the query.
# ============================================================
#  Use a try-catch block to manage the connection.
# ============================================================
try:
  # Open connection.
  cnx = mysql.connector.connect(user='student', password='student',
                                host='127.0.0.1',
                                database='studentdb')
  # Create cursor.
  cursor = cnx.cursor()
 
  # Set the query statement.
  query = ("WITH actors AS "
           "(SELECT   a.first_name "
           " ,        a.last_name "
           " ,        COUNT(*) AS num_roles "
           " FROM     sakila.actor a INNER JOIN sakila.film_actor fa "
           " ON       a.actor_id = fa.actor_id "
           " GROUP BY a.first_name "
           " ,        a.last_name ) "
           " SELECT   CONCAT(a.last_name,', ',a.first_name) AS full_name "
           " ,        l.description "
           " ,        a.num_roles "
           " FROM     actors a CROSS JOIN levels l "
           " WHERE    a.num_roles BETWEEN l.min_roles AND l.max_roles "
           " AND      l.parameter_set = %s "
           " AND      a.last_name LIKE CONCAT(%s,'%') "
           " ORDER BY a.last_name "
           " ,        a.first_name")
 
  # Execute cursor.
  cursor.execute(query, params)
 
  # Display the rows returned by the query.
  for (full_name, description, num_roles) in cursor:
    print('{0} is a {1} with {2} films.'.format( full_name.title()
                                               , description.title()
                                               , num_roles))
 
  # Close cursor.
  cursor.close()
 
# ------------------------------------------------------------
# Handle exception and close connection.
except mysql.connector.Error as e:
  if e.errno == errorcode.ER_ACCESS_DENIED_ERROR:
    print("Something is wrong with your user name or password")
  elif e.errno == errorcode.ER_BAD_DB_ERROR:
    print("Database does not exist")
  else:
    print("Error code:", e.errno)        # error number
    print("SQLSTATE value:", e.sqlstate) # SQLSTATE value
    print("Error message:", e.msg)       # error message
 
# Close the connection when the try block completes.
else:
  cnx.close()
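
You can test the parameter handling from the command line. For example, assuming you saved the program as parametric.py (a hypothetical file name), the following runs the five-level report for last names beginning with the letter H:

python3 parametric.py Five H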

As always, I hope this helps those trying to understand how CTEs can solve problems that would otherwise be coded in external imperative languages like Python.

Written by maclochlainn

March 1st, 2024 at 12:30 am