MacLochlainn's Weblog

Michael McLaughlin's Technical Blog


Archive for the ‘Linux’ Category

Oracle External Table


Supporting my student labs requires the ability to use external tables. I didn’t have a chance to test external tables when I adopted Oracle’s 18c Docker installations. The following are the instructions for the macOS version, which work. Unfortunately, I couldn’t sort out a way to implement external tables in the Windows version of Oracle’s 18c Docker installation. If somebody knows the way to do that, feel free to drop me a note or a URL to where the information may be found.

These two blog posts explain Oracle’s external tables:

External Tables
External Tables with Preprocessing

These instructions build on my base macOS instructions, which you can find in this earlier Install, configure, and use an Oracle Docker Container blog post. You can find the existing Oracle virtual directories when you connect as the system user and query the dba_directories view. I used the following SQL*Plus formatting commands for the query:

SET PAGESIZE 99
COL directory_name FORMAT A24
COL directory_path FORMAT A54

and the following SQL query:

SELECT directory_name
,      directory_path
FROM   dba_directories;

It returns the following results:

DIRECTORY_NAME		 DIRECTORY_PATH
------------------------ ------------------------------------------------------
ORA_DBMS_FCP_LOGDIR	 /opt/oracle/product/18c/dbhomeXE/cfgtoollogs
SDO_DIR_WORK		 /ade/b/2794046351/oracle/work
SDO_DIR_ADMIN		 /ade/b/2794046351/oracle/md/admin
ORA_DBMS_FCP_ADMINDIR	 /opt/oracle/product/18c/dbhomeXE/rdbms/admin
XMLDIR			 /opt/oracle/product/18c/dbhomeXE/rdbms/xml
XSDDIR			 /opt/oracle/product/18c/dbhomeXE/rdbms/xml/schema
ORACLE_OCM_CONFIG_DIR2	 /opt/oracle/product/18c/dbhomeXE/ccr/state
ORACLE_OCM_CONFIG_DIR	 /opt/oracle/product/18c/dbhomeXE/ccr/state
OPATCH_INST_DIR 	 /opt/oracle/product/18c/dbhomeXE/OPatch
DATA_PUMP_DIR		 /opt/oracle/admin/XE/dpdump/
OPATCH_SCRIPT_DIR	 /opt/oracle/product/18c/dbhomeXE/QOpatch
OPATCH_LOG_DIR		 /opt/oracle/product/18c/dbhomeXE/rdbms/log
ORACLE_BASE		 /opt/oracle
ORACLE_HOME		 /opt/oracle/product/18c/dbhomeXE
 
14 rows selected.

In that prior post, you will find instructions for creating an environment file. The following instructions leverage the $ORACLE_BASE environment variable, which points to the /opt/oracle directory in the Docker environment.
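For reference, here's a minimal sketch of what such an environment file might contain; the ORACLE_SID value is an assumption based on the XE paths shown above:

export ORACLE_SID=XE
export ORACLE_BASE=/opt/oracle
export ORACLE_HOME=/opt/oracle/product/18c/dbhomeXE
export PATH=$PATH:$ORACLE_HOME/bin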

Here are the steps to complete the external file setup, including how to position the physical comma-separated value (CSV) files in the available but otherwise hidden Docker directories. Hidden might be the wrong word choice, but they're not visible from the host macOS operating system. You have to connect to the Docker instance as the root user.

The following Docker command, used in the prior blog post, connects as the root user:

docker exec -it videodb bash

The instructions include steps both before and after connecting to the Docker instance:

  1. Put the transaction_upload.csv and transaction_upload2.csv files into the following macOS host directory:

    /Users/<installuser>/oracle/student

    which maps to the following internal Docker directory:

    /home/student
  2. As the connected root user, change directory to the $ORACLE_BASE (/opt/oracle) directory with the following command:

    cd $ORACLE_BASE

  3. As the connected root user, make an upload directory inside the $ORACLE_BASE (/opt/oracle) directory with the following command:

    mkdir upload

    As the root user, change the ownership of the upload directory from root as the owner and primary group to oracle as the owner and oinstall as the primary group with the following command:

    chown oracle:oinstall upload

  4. As the connected root user, copy the transaction_upload.csv and transaction_upload2.csv files from the /home/student directory to the $ORACLE_BASE (/opt/oracle) directory with the following command:

    cp ~student/transaction_upload*.csv $ORACLE_BASE/upload/.

  5. As the connected root user, change from the $ORACLE_BASE (/opt/oracle) directory to the upload subdirectory with the following command:

    cd upload

    As the root user, change the ownership of the transaction_upload.csv and transaction_upload2.csv files from root as the owner and primary group to oracle as the owner and oinstall as the primary group with the following command:

    chown oracle:oinstall transaction_upload*.csv

  6. As the connected Oracle system user, create the upload virtual directory with the following command:

    CREATE DIRECTORY upload AS '/opt/oracle/upload';

    As the connected Oracle system user, grant read and write privileges to the Oracle c##student user with the following command:

    GRANT READ, WRITE ON DIRECTORY upload TO c##student;
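    To confirm that the virtual directory and grant took effect, the c##student user can query the all_directories view, which lists the directories a user can access. This quick sanity check is my addition rather than part of the original steps:

    SELECT directory_name
    ,      directory_path
    FROM   all_directories
    WHERE  directory_name = 'UPLOAD';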

  7. As the connected Oracle c##student user, create the transaction_upload externally managed table with the following command:

    CREATE TABLE transaction_upload
    ( account_number          VARCHAR2(10)
    , first_name              VARCHAR2(20)
    , middle_name             VARCHAR2(20)
    , last_name               VARCHAR2(20)
    , check_out_date          DATE
    , return_date             DATE
    , rental_item_type        VARCHAR2(12)
    , transaction_type        VARCHAR2(14)
    , transaction_amount      NUMBER
    , transaction_date        DATE
    , item_id                 NUMBER
    , payment_method_type     VARCHAR2(14)
    , payment_account_number  VARCHAR2(19))
      ORGANIZATION EXTERNAL
      ( TYPE oracle_loader
        DEFAULT DIRECTORY upload
        ACCESS PARAMETERS
        ( RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
          BADFILE     'UPLOAD':'transaction_upload.bad'
          DISCARDFILE 'UPLOAD':'transaction_upload.dis'
          LOGFILE     'UPLOAD':'transaction_upload.log'
          FIELDS TERMINATED BY ','
          OPTIONALLY ENCLOSED BY "'"
          MISSING FIELD VALUES ARE NULL )
        LOCATION ('transaction_upload.csv'))
    REJECT LIMIT UNLIMITED;

    As the connected Oracle c##student user, query the transaction_upload table to verify that you can read the external file source through the virtual upload directory.

    COL  record  FORMAT 99,999  HEADING "Record|Count"
    SELECT TO_CHAR(COUNT(*),'99,999') AS record
    FROM   transaction_upload;

    It should return the following value:

    Record
    Count
    -------
     11,520

  8. As the connected Oracle c##student user, create the transaction_reversal externally managed table with the following command:

    CREATE TABLE transaction_reversal
    ( transaction_id          NUMBER
    , transaction_account     VARCHAR2(15)
    , transaction_type        VARCHAR2(30)
    , transaction_date        DATE
    , transaction_amount      NUMBER
    , rental_id               NUMBER
    , payment_method_type     NUMBER
    , payment_account_number  VARCHAR2(20)
    , created_by              NUMBER
    , creation_date           DATE
    , last_updated_by         NUMBER
    , last_update_date        DATE)
      ORGANIZATION EXTERNAL
      ( TYPE oracle_loader
        DEFAULT DIRECTORY upload
        ACCESS PARAMETERS
        ( RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
          BADFILE     'UPLOAD':'transaction_upload2.bad'
          DISCARDFILE 'UPLOAD':'transaction_upload2.dis'
          LOGFILE     'UPLOAD':'transaction_upload2.log'
          FIELDS TERMINATED BY ','
          OPTIONALLY ENCLOSED BY "'"
          MISSING FIELD VALUES ARE NULL )
        LOCATION ('transaction_upload2.csv'))
    REJECT LIMIT UNLIMITED;

    As the connected Oracle c##student user, query the transaction_reversal table to verify that you can read the external file source through the virtual upload directory.

    COL  record  FORMAT 99,999  HEADING "Record|Count"
    SELECT TO_CHAR(COUNT(*),'99,999') AS record
    FROM   transaction_reversal;

    It should return the following value:

    Record
    Count
    -------
      1,170

Unfortunately, the file permissions on the Windows version of the Oracle Docker 18c installation make it more difficult to install.

Written by maclochlainn

November 14th, 2020 at 10:57 pm

Installing PL/Python Extension


Although PL/Python is an untrusted language inside PostgreSQL, I installed it to test some of its features. First, check whether PL/Python is installed by attempting to create a PL/Python function:

CREATE FUNCTION pima(a integer, b integer)
RETURNS integer AS
$$
if a > b:
  return a
return b
$$ LANGUAGE plpython3u;

It should raise an error like the one below because the language isn't installed yet. PL/Python is untrusted because it runs with the privileges of the database server's operating system user rather than a restricted user's privileges. More or less, with PL/Python you can access the entire database. This makes PL/Python more of a threat than a tool beyond experimentation in a test database.

ERROR:  language "plpython3u" does not exist
HINT:  Use CREATE EXTENSION to load the language into the database.
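For context, once the language is installed, calling the function is an ordinary query, like:

SELECT pima(2, 3);

It returns 3 because the b parameter holds the greater value.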

A quick query as the postgres user tells you whether or not PL/Python is installed:

SELECT * FROM pg_language;

It returns the following:

 lanname  | lanowner | lanispl | lanpltrusted | lanplcallfoid | laninline | lanvalidator | lanacl 
----------+----------+---------+--------------+---------------+-----------+--------------+--------
 internal |       10 | f       | f            |             0 |         0 |         2246 | 
 c        |       10 | f       | f            |             0 |         0 |         2247 | 
 sql      |       10 | f       | t            |             0 |         0 |         2248 | 
 plpgsql  |       10 | t       | t            |         14088 |     14089 |        14090 | 
(4 rows)

I attempted to add PL/Python with the following command:

CREATE EXTENSION plpython3u;

It raised the following error:

ERROR:  could not open extension control file "/usr/pgsql-11/share/extension/plpython3u.control": No such file or directory

From what I could find on the web, the error occurs because a Python 3 package is missing. I installed the missing postgresql-plpython3 package as the root superuser, which you also could install as a member of the sudoer list. The following shows how to install it as a sudoer member:

sudo yum install -y postgresql-plpython3

You should see something close to the following console output:

Last metadata expiration check: 0:43:53 ago on Fri 06 Nov 2020 10:42:28 AM MST.
Dependencies resolved.
============================================================================================================
 Package                           Architecture        Version                   Repository            Size
============================================================================================================
Installing:
 postgresql-plpython3              x86_64              11.7-2.fc30               updates               86 k
Installing dependencies:
 postgresql-server                 x86_64              11.7-2.fc30               updates              5.3 M
 
Transaction Summary
============================================================================================================
Install  2 Packages
 
Total download size: 5.3 M
Installed size: 23 M
Downloading Packages:
(1/2): postgresql-plpython3-11.7-2.fc30.x86_64.rpm                          218 kB/s |  86 kB     00:00    
(2/2): postgresql-server-11.7-2.fc30.x86_64.rpm                             3.0 MB/s | 5.3 MB     00:01    
------------------------------------------------------------------------------------------------------------
Total                                                                       2.1 MB/s | 5.3 MB     00:02     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                    1/1 
  Running scriptlet: postgresql-server-11.7-2.fc30.x86_64                                               1/2 
  Installing       : postgresql-server-11.7-2.fc30.x86_64                                               1/2 
warning: /var/lib/pgsql/.bash_profile created as /var/lib/pgsql/.bash_profile.rpmnew
 
  Running scriptlet: postgresql-server-11.7-2.fc30.x86_64                                               1/2 
  Installing       : postgresql-plpython3-11.7-2.fc30.x86_64                                            2/2 
  Running scriptlet: postgresql-plpython3-11.7-2.fc30.x86_64                                            2/2 
  Verifying        : postgresql-plpython3-11.7-2.fc30.x86_64                                            1/2 
  Verifying        : postgresql-server-11.7-2.fc30.x86_64                                               2/2 
 
Installed:
  postgresql-plpython3-11.7-2.fc30.x86_64                postgresql-server-11.7-2.fc30.x86_64               
 
Complete!

While I thought this might fix the problem, it didn’t and raised the following error:

ERROR:  could not open extension control file "/usr/pgsql-11/share/extension/plpython3u.control": No such file or directory

I found the plpython3u.control file on GitHub and, after comparing it against the plpgsql.control file, put the following plpython3u.control file into the /usr/pgsql-11/share/extension directory. At this point, I began wondering why it looks in the /usr/pgsql-11/share/extension directory instead of a /usr/plpython3u/share/extension directory (which doesn't exist).

# plpython3u extension
comment = 'PL/Python3U untrusted procedural language'
default_version = '1.0'
module_pathname = '$libdir/plpython3'
relocatable = false
schema = pg_catalog
superuser = true

I retried creating the plpython3u extension:

CREATE EXTENSION plpython3u;

It raised the following error:

ERROR:  extension "plpython3u" has no installation script nor update path for version "1.0"

At this point, I could find no further help on the Internet. I did notice that there were these two *.sql files in the /usr/pgsql-11/share/extension directory:

  • plpgsql--1.0.sql
  • plpgsql--unpackaged--1.0.sql

I found this plpython3u--1.0.sql file on GitHub:

/* src/pl/plpython/plpython3u--1.0.sql */
 
CREATE FUNCTION plpython3_call_handler() RETURNS language_handler
  LANGUAGE c AS 'MODULE_PATHNAME';
 
CREATE FUNCTION plpython3_inline_handler(internal) RETURNS void
  STRICT LANGUAGE c AS 'MODULE_PATHNAME';
 
CREATE FUNCTION plpython3_validator(oid) RETURNS void
  STRICT LANGUAGE c AS 'MODULE_PATHNAME';
 
CREATE LANGUAGE plpython3u
  HANDLER plpython3_call_handler
  INLINE plpython3_inline_handler
  VALIDATOR plpython3_validator;
 
COMMENT ON LANGUAGE plpython3u IS 'PL/Python3U untrusted procedural language';

I retried creating the plpython3u extension as a member of the sudoer list and got a new error:

ERROR:  permission denied to create extension "plpython3u"
HINT:  Must be superuser to create this extension.

I retried creating the plpython3u extension as the postgres user, who is the owning user:

CREATE EXTENSION plpython3u;

It raised the following error:

ERROR:  could not access file "$libdir/plpython3": No such file or directory

While I intend to finish this, that's as far as I got. From what I've read, finishing appears to require recompiling or reconfiguring items that would destabilize what I have working at the moment. Finishing this will need to wait until I build another test environment from scratch. If you catch this post and know the remaining steps, I invite you to add them in the comments.

Written by maclochlainn

November 8th, 2020 at 1:54 pm

Conditional Updates


After more than a decade with a more comprehensive lab set, I'm switching labs next term, and I'm hoping the new exercises build the students' core SQL skill set. Next term, I hope to see whether the change is successful. I've opted for Alan Beaulieu's Learning SQL: Generate, Manipulate, and Retrieve Data, 3rd Edition, because it's a great book and uses the MySQL database.

One exercise that the students will lose is a data migration exercise from a badly designed common_lookup table to a well-designed common_lookup table. The starting point is shown below on the left and the fixed version is on the right.

[Figure: the original common_lookup table design (left) and the redesigned table (right).]

There are several problems with the original common_lookup table's design. The first problem is that the common_lookup_context column does not uniquely identify a location within the data model for at least one list of lookup values. While it generally uses table names, it has no way to support two or more lists within the same table. It also uses a 'MULTIPLE' string for a list of values that supports two tables. The two tables supported by the 'MULTIPLE' string are the address and telephone tables.

The lab instructions have the students add the following three columns to the table:

  • common_lookup_table
  • common_lookup_column
  • common_lookup_code

Together, the combination of the common_lookup_table and common_lookup_column columns creates a non-unique superkey. That superkey identifies micro subtables. The combination of the common_lookup_table, common_lookup_column, and common_lookup_type columns creates a unique natural key that defines all possible values for a lookup list based on a column in a table.
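A hypothetical MySQL sketch of that structural change looks like this; the VARCHAR sizes are assumptions rather than the lab's actual definitions:

ALTER TABLE common_lookup
  ADD common_lookup_table  VARCHAR(30)
, ADD common_lookup_column VARCHAR(30)
, ADD common_lookup_code   VARCHAR(5);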

The lab asked the students to use the existing data, string literal values, and some simple rules to populate the new common_lookup_table and common_lookup_column columns with data. The rules or steps were:

  1. Migrate valid table names from the common_lookup_context column to the common_lookup_table column.
  2. Migrate a literal 'ADDRESS' value into the common_lookup_table column when the common_lookup_context column holds a 'MULTIPLE' string value.
  3. Migrate valid table names from the common_lookup_context column to the common_lookup_column column by appending a '_TYPE' string literal to the common_lookup_context column values, except for those three rows that have a 'VISA_CARD', 'MASTER_CARD', or 'DISCOVER_CARD' value. The three exempted rows should update the common_lookup_column column with a 'CREDIT_CARD_TYPE' string literal.
  4. Migrate a literal 'ADDRESS_TYPE' value into the common_lookup_column column when the common_lookup_context column holds a 'MULTIPLE' string value.
  5. After these changes, insert two new rows in the common_lookup table. They should contain 'TELEPHONE' and 'TELEPHONE_TYPE' string literal values. One of the rows should contain 'HOME' for the common_lookup_type column value and the other 'WORK' for the same column.
  6. Then, update the foreign key values in the telephone_type column of the telephone table.

It was a tremendous learning experience for those who did it because there were so many steps required to migrate the structure and data. Years ago, I would offer students half their final grade if they could complete the first four steps in a single UPDATE statement. Occasionally, I had students complete it. We worked through the problem with a small group of students today in one of my two weekly tutoring sessions. I thought it might be nice to document the solution, which uses CASE operators in the SET clause of the UPDATE statement.

UPDATE common_lookup
SET    common_lookup_table =
         CASE
           WHEN NOT common_lookup_context = 'MULTIPLE' THEN
             common_lookup_context
           ELSE
             'ADDRESS'
         END
,      common_lookup_column =
         CASE
           WHEN common_lookup_table = 'MEMBER' AND
                common_lookup_type IN ('VISA_CARD','MASTER_CARD','DISCOVER_CARD') THEN
             'CREDIT_CARD_TYPE'
           WHEN NOT common_lookup_context = 'MULTIPLE' THEN
             CONCAT(common_lookup_context,'_TYPE')
           ELSE
             'ADDRESS_TYPE'
         END;

As a rule, students would solve Step #6, which migrates the foreign key values of the telephone table's telephone_type column to the new rows inserted into the common_lookup table. Most would accomplish that step with two UPDATE statements. Very few could see how to write a single UPDATE statement for both conditions, one that migrates from the now obsolete foreign key values pointing to the address table's rows in the common_lookup table to valid foreign key values pointing to the telephone table's rows in the common_lookup table:

UPDATE   telephone
SET      telephone_type =
           CASE
             WHEN telephone_type =
                    (SELECT common_lookup_id
                     FROM common_lookup
                     WHERE common_lookup_table = 'ADDRESS'
                     AND common_lookup_type = 'HOME') THEN
               (SELECT common_lookup_id
                FROM common_lookup
                WHERE common_lookup_table = 'TELEPHONE'
                AND common_lookup_type = 'HOME')
             ELSE
               (SELECT common_lookup_id
                FROM common_lookup
                WHERE common_lookup_table = 'TELEPHONE'
                AND common_lookup_type = 'WORK')
           END
WHERE    telephone_type IN
           (SELECT common_lookup_id
            FROM common_lookup
            WHERE common_lookup_table = 'ADDRESS'
            AND common_lookup_type IN ('HOME','WORK'));

As always, I hope this helps those looking for new ideas in SQL.

Written by maclochlainn

November 5th, 2020 at 11:03 pm

Quick Python Flask


Here's a quick example of installing and running a sample hello.py file with Flask, based on the Command Line Interface (CLI) documentation on the Flask website. The first thing you need to do is install the flask module with the pip3 utility on Fedora Linux (or most other platforms):

pip3 install --user flask

You should see a successful log like this for the student user:

Requirement already satisfied: flask in /usr/local/lib64/python3.7/site-packages (1.1.2)
Requirement already satisfied: click>=5.1 in /usr/lib/python3.7/site-packages (from flask) (7.1.1)
Requirement already satisfied: itsdangerous>=0.24 in /usr/local/lib/python3.7/site-packages (from flask) (1.1.0)
Requirement already satisfied: Werkzeug>=0.15 in /usr/local/lib/python3.7/site-packages (from flask) (1.0.1)
Requirement already satisfied: Jinja2>=2.10.1 in /usr/lib/python3.7/site-packages (from flask) (2.10.1)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/lib64/python3.7/site-packages (from Jinja2>=2.10.1->flask) (1.1.1)

The smallest footprint hello.py program is:

# Import statement.
from flask import Flask
 
# Application definition.
app = Flask(__name__)
 
# Basic URI routing.
@app.route("/")
 
# Define a hello function.
def hello():
  return "Hello World!"
 
# Run the program.
if __name__ == "__main__":
  app.run()

You can run the program from the command line with two commands:

export FLASK_APP=hello.py
flask run

It will show you the following in the terminal session:

 * Serving Flask app "hello.py"
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)

Type the following URL in a local browser:

http://localhost:5000/

Flask will add the following to the console log:

127.0.0.1 - - [26/Oct/2020 00:37:49] "GET / HTTP/1.1" 200 -

You can jazz the hello.py program up with an argument list, like this modified example based on a comparison article on Python frameworks:

# Import libraries.
from flask import Flask, escape, request
 
# Define the application.
app = Flask(__name__)
 
# Define a base URI route and function.
@app.route('/')
def index():
  return "Index Page"
 
# Define an application URI route and function.
@app.route("/hello")
def hello():
  name = request.args.get("name","Simon")
  return f'Hello {escape(name)}!'
 
# Define an about URI route and function.
@app.route("/about")
def about():
  return "About Page"
 
# Run the file.
if __name__ == "__main__":
  app.run()

It prints 'Hello Simon!' in a web page when you visit the /hello URI; a query string like /hello?name=Alice prints 'Hello Alice!' instead. If you try to stop your Flask server with a Ctrl+Z instead of a Ctrl+C, the next time you go to start it you will receive an error message like:

OSError: [Errno 98] Address already in use

You’ll need to find the Linux process ID and kill the process with prejudice. You can find the process with the following Linux command:

sudo netstat -nlp | grep 5000
[sudo] password for student:

It prompts you for your sudoer password, and then returns a line like:

tcp        0      0 127.0.0.1:5000          0.0.0.0:*               LISTEN      76802/python3

You kill the process with the following Linux command:

kill -9 76802
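If the psmisc package is installed, the fuser utility can combine the lookup and kill steps, like:

fuser -k 5000/tcp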

As always, I hope this helps those looking for the missing pieces.

Written by maclochlainn

October 26th, 2020 at 12:54 am

MySQL Unicode Warning


It’s always interesting when I upgrade from one release to the next. I learn new things, and in the case of MySQL’s installation and maintenance I become more grateful for the great team of developers working to produce MySQL 8.

A warning that caught my eye in MySQL 8 (8.0.21) was this one on Unicode with the utf8 character code:

Warning (code 3719): 'utf8' is currently an alias for the character set UTF8MB3, but will be an alias for UTF8MB4 in a future release. Please consider using UTF8MB4 in order to be unambiguous.

Currently, utf8 is an alias for the deprecated utf8mb3 (a 3-byte character set). When the utf8mb3 character set is removed in a subsequent release, the utf8 alias will represent utf8mb4 (a 4-byte character set), which includes BMP and supplementary character support.

It's probably a good time to proactively provision disk space for an upgrade to utf8mb4 (a 4-byte character set). It's also probably time to use the utf8mb4 character set rather than the utf8 character set alias. The MySQL Server team wrote a blog entry on when to use which one. Naturally, all this will have a substantial impact on disk space allocated to the database.
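If you want to see where a server stands before converting, something like the following works; the item table in the ALTER statement is only an example:

SELECT @@character_set_server, @@collation_server;
 
ALTER TABLE item CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;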

Written by maclochlainn

September 29th, 2020 at 8:24 am

Wrap Oracle’s tnsping


If you've worked with the Oracle database a while, you've probably noticed that some utilities write to stdout for both standard output and what should be standard error (stderr). One of those commands is the tnsping utility.

You can wrap the tnsping command to send the TNS-03505 error to stderr with the following code. I put Bash functions like these in a library.sh script, which I can source when automating tasks.

#!/usr/bin/bash
 
tnsping()
{
  if [ ! -z "${1}" ]; then
    # Capture the last line of the tnsping output.
    stdout=`$ORACLE_HOME/bin/tnsping ${1} | tail -1`
 
    # Write a TNS-03505 error to stderr, or echo the service name on success.
    if [[ `echo ${stdout} | cut -c1-9` = 'TNS-03505' ]]; then
      python -c 'import os, sys; arg = sys.argv[1]; os.write(2, (arg + "\n").encode())' "${stdout}"
    else
      echo "${1}"
    fi
  fi
}

You should notice that the script uses a Python call to redirect the error message to standard error (stderr), but you can do the redirection natively in the Bash shell with the following:

#!/usr/bin/bash
 
tnsping()
{
  if [ ! -z "${1}" ]; then
    # Capture the last line of the tnsping output.
    stdout=`$ORACLE_HOME/bin/tnsping ${1} | tail -1`
 
    # Write a TNS-03505 error to stderr, or echo the service name on success.
    if [[ `echo ${stdout} | cut -c1-9` = 'TNS-03505' ]]; then
      echo "${stdout}" 1>&2
    else
      echo "${1}"
    fi
  fi
}

Interactively, we can now test a non-existent service name like wrong with this syntax:

tnsping wrong

It'll print the standard error to the console, like:

TNS-03505: Failed to resolve name

or, you can suppress standard error (stderr) by redirecting it to the traditional black hole, like:

tnsping wrong 2>/dev/null

After redirecting standard error (stderr), you simply receive nothing back. That lets you evaluate in another script whether or not the utility raises an error.

In an automated Bash shell script, you use the source command to put the Bash function in scope, like this:

source library.sh
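Then, the automating script can branch on whether the wrapped function echoes back the service name, as in this small sketch; the wrong service name is only an example:

#!/usr/bin/bash
 
# Put the tnsping() function in scope.
source library.sh
 
# The function echoes the service name on success and nothing on failure.
if [ "$(tnsping wrong 2>/dev/null)" = "wrong" ]; then
  echo "Service resolves."
else
  echo "Service does not resolve."
fi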

As always, I hope this helps those looking for a solution.

Written by maclochlainn

September 23rd, 2020 at 11:43 pm

SQL Developer JDK


In my classes, we use a VMware Linux install with SQL Developer. One of my students called me in a panic after an upgrade of packages when SQL Developer failed to launch. The student was astute enough to try running it from the command line, where it generates an error like:

 Oracle SQL Developer
 Copyright (c) 2005, 2018, Oracle and/or its affiliates. All rights reserved.
 
/opt/sqldeveloper/sqldeveloper/bin/../../ide/bin/launcher.sh: line 954: [: : integer expression expected
The JDK (/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-0.fc30.x86_64/) is not a valid JDK.
The JDK was specified by a SetJavaHome directive in a .conf file or by a --setjavahome option.
Type the full pathname of a JDK installation (or Ctrl-C to quit), the path will be stored in /home/student/.sqldeveloper/19.2.0/product.conf
 
Error:  Unable to get APP_JAVA_HOME input from stdin after 10 tries

The error is simple: the SQL Developer package update wiped out the SetJavaHome variable in the user's ~/.sqldeveloper/19.2.0/product.conf file. The fix takes three steps because it's very likely that the Java packages were also updated. Here's how to fix it:

  1. Navigate to the directory where you’ve installed the Java Virtual Machine (JVM) and find the current version of the JVM installed:

    cd /usr/lib/jvm
    ls java*

    It will return a set of files, like:

    java
    java-1.8.0
    java-1.8.0-openjdk
    java-1.8.0-openjdk-1.8.0.252.b09-0.fc30.x86_64
    java-openjdk
    jre
    jre-1.8.0
    jre-1.8.0-openjdk
    jre-1.8.0-openjdk-1.8.0.252.b09-0.fc30.x86_64
    jre-openjdk

  2. Navigate to your user’s product configuration file with this command:

    cd ~/.sqldeveloper/19.2.0
  3. Add the following lines to the product.conf file:

    # SetJavaHome /path/jdk
    SetJavaHome /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.252.b09-0.fc30.x86_64/

Now, you should be able to run it from the command line. The shortcut icon should also work if one was installed. Also, don't forget to update your $JAVA_HOME variable in the master Bash resource file or your local user's .bashrc file.
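For example, a .bashrc entry along these lines works; adjust the JDK path to match whichever version the earlier ls command returns:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.252.b09-0.fc30.x86_64
export PATH=$PATH:$JAVA_HOME/bin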

As always, I hope this helps those looking for a quick solution.

Written by maclochlainn

September 22nd, 2020 at 12:08 pm

JavaScript Streams?


A quick followup to my post on how to write a server-side shell component in JavaScript. Naturally, it's based on a question posed to me about the original article: "Why didn't you use JavaScript's streams instead of synchronized files?"

Moreover, the question asks why I wrote logic (lines 69 thru 105 of the original article) that wrote to local files rather than a stream. While they didn't provide an example, here's a rewritten solution that uses a stream.

  else {
    // Concatenate the RowDataPacket values returned by the query.
    for(let element in result) {
      data += result[element].item_title + ', ' + result[element].release_date + '\n'
    }
 
    // Write file when data string is not empty.
    if (data.length > 0) {
      buffer = Buffer.alloc(data.length, data)
 
      // Check for data from database query, then write and close the stream.
      if (path.length > 0) { 
        let writeStream = fs.createWriteStream(path)
        writeStream.write(buffer) 
        writeStream.end() 
      } 
      // Set standard out (stdout) and exit program.
      console.log(data)
      process.exit(0)
    }
    else {
      console.error('Query returned no rows.')
    }
  }

The Node.js stream replacement code has a significant problem when the target file can’t be read and written to by a Node.js application. This could be an ownership- or permission-driven problem coupled with the lazy file opening behavior of a stream in JavaScript. The lazy open is a hidden behavior of the createWriteStream() method, which actually calls the fs.open() method. It may raise the following type of error:

events.js:174 
     throw er; // Unhandled 'error' event 
     ^ 
 
Error: EACCES: permission denied, open 'output.csv' 
Emitted 'error' event at: 
   at lazyFs.open (internal/fs/streams.js:277:12) 
   at FSReqWrap.args [as oncomplete] (fs.js:140:20)

You can prevent this kind of unhandled exception by putting the following block right after you verify the target file name in the script. It ensures that your program raises a handled exception before your code tries to access a target file as a stream.

// Verify access to the file. 
try { 
  fs.accessSync(path, (fs.constants.R_OK && fs.constants.W_OK)) 
  access = true 
} 
catch { 
  console.error("Error accessing [%s] file.", path) 
}

Naturally, you also need to define the access variable at the top of your script. The preceding block sets the access variable to true when you have permissions to the file used by the stream. It also lets you replace the decision-making if-statement (line 76 of the prior example code) with the following statement, which effectively blocks any attempt to use a stream that would fail because of the lazy file opening process:

      if (access && (path.length > 0)) {

Adding the extra block does lengthen the program and change the line numbers. I hope referencing the original post's numbering for the modified decision-making if-statement still makes sense.

As always, I hope this helps those looking for a related solution.

Written by maclochlainn

August 4th, 2020 at 1:47 am

MySQL JSON Server


A student question: Does JavaScript make context switching for web-based applications obsolete? Wow! I asked what that meant. He said it means JavaScript replaces all other server-side programming languages, like PHP, C#, or Python. I asked the student why he believed that. His answer was that's what two interviewing managers told him.

I thought it would be interesting to put the idea to a test. Below is a Node.js script that acts as a utility to query the MySQL database with substitution variables in the query. It returns the MySQL query's results as a standard out (stdout) stream, supports three flag-and-value pairs as arguments, and optionally writes the query's results to a log file while still returning them as the stdout value. All errors are written to the standard error (stderr) stream.

The Node.js solution is completely portable between Windows and Linux. You can deploy it to either platform without any edits for Windows' case-insensitive Command-Line Interface (CLI). Clearly, Node.js offers a replacement for direct interaction with the .NET components in PowerShell. This appears to mean that basic Linux shell or PowerShell knowledge is all that's required to write and deploy JavaScript programs as server-side programming solutions. It means anything that you would have done with .NET you can do with JavaScript. Likewise, you can replace PHP, C#, Python, or Ruby server-side scripts with JavaScript programs.

// Declare constants. 
const fs = require('fs') 
const util = require('util') 
const express = require('express') 
const mysql = require('mysql') 
const connection = mysql.createConnection({ 
  host: 'localhost', 
  user: 'student', 
  password: 'student', 
  database: 'studentdb' 
}) 
 
// Declare local variables for case insensitive use. 
var data = '' 
var buffer = Buffer.alloc(0) 
var path = '' 
 
// Declare default query variables dates. 
var startDate = new Date('1980-01-01') 
var endDate = new Date() 
 
// Set default endDate value as tomorrow. 
endDate.setDate(new Date().getDate() + 1) 
 
// Define a regular expression for valid file names. 
var regexp = /^([0-9a-zA-Z]+|[0-9a-zA-Z]+\.+[0-9a-zA-Z]{3})$/ 
 
// Assign dynamic variables from arguments. 
var argv = process.argv.slice(2) 
 
// Check for paired values, evaluate and assign them to local variables. 
if ((argv.length % 2) == 0) { 
 for (let i = 0; i < argv.length; i += 2) { 
   // Assign a file name to write to the output path. 
   if ((argv[i].toLowerCase() == '-f') && (regexp.test(argv[i+1]))) { 
     // Prefix the file name with the present working directory for Windows or Linux. 
     if (process.platform == 'win32') 
       path = '.\\' + argv[i+1] 
     else 
       path = './' + argv[i+1] 
   } 
   // Assign a start date from the input string. 
   else if (argv[i].toLowerCase() == '-b') { 
     startDate = new Date(argv[i+1]) 
   } 
   // Assign an end date from the input string. 
   else if (argv[i].toLowerCase() == '-e') { 
     endDate = new Date(argv[i+1]) 
   } 
 } 
} 
else { 
 console.error('Arguments must be in pairs: flag and value.') 
} 
 
// Define and run MySQL query. 
connection.query("SELECT   i.item_title " + 
                 ",        date_format(i.release_date,'%d-%M-%Y') AS release_date " + 
                 "FROM     item i JOIN common_lookup cl " + 
                 "ON       i.item_type = cl.common_lookup_id " + 
                 "WHERE    cl.common_lookup_type = 'BLU-RAY' " + 
                 "AND      i.release_date BETWEEN ? AND ? " + 
                 "ORDER BY i.release_date" 
                ,[startDate, endDate], function (err, result) { 
 if (err) { 
   console.error('Query contains error ...') 
   console.error('-> ' + err) 
 } 
 else { 
   // Concatenate each RowDataPacket's values into the data string. 
   for(let element in result) { 
     data += result[element].item_title + ', ' + result[element].release_date + '\n' 
   } 
   // Write file when data string is not empty. 
   if (data.length > 0 ) { 
     buffer = Buffer.alloc(data.length,data) 
 
     // Check for a defined path before writing a file. 
     if (path.length > 0) { 
       // Open the file. 
       fs.open(path, 'w', function(err, fd) { 
         if (err) { 
           console.error('Could not open [' + path + '] file [' + err + ']') 
         } 
         else { 
           // Write the file. 
           fs.write(fd, buffer, 0, buffer.length, null, function(err) { 
             if (err) 
               console.error('Error writing [' + path + '] file [' + err + ']') 
             fs.close(fd, function() { 
               if (fs.existsSync(path)) { 
                 process.exit(0) 
               } 
             }) 
           }) 
         }   
       }) 
     } 
     // Set standard out (stdout). 
     console.log(data) 
   } 
   else { 
     console.error('Query returned no rows.') 
   } 
 } 
}) 
 
// Close MySQL connection. 
connection.end()

You can call this code with the default values, like:

node app.js

You can call this code with a user-defined file name and custom start and end date values, like:

node app.js -f output.csv -b '2001-01-01' -e '2004-12-31'

The latter command returns the following by querying my MySQL studentdb video store:

Star Wars II, 16-May-2002 
Harry Potter and the Chamber of Secrets, 28-May-2002 
Harry Potter and the Sorcerer's Stone, 28-May-2002 
Die Another Day, 03-June-2003 
Harry Potter and the Prisoner of Azkaban, 23-October-2004

As always, I hope this helps somebody trying to sort it out.

Written by maclochlainn

August 1st, 2020 at 1:05 am

Express.js & MySQL


Sometimes, you just have to chuckle. A couple folks felt that I didn't give enough information in my post showing how to configure a small Node.js application that could access a MySQL database. Specifically, they wanted me to explain the following:

  1. How to configure Express.js and MySQL development in a single Node.js application.
  2. How to convert the list of RowDataPacket objects into elements of data, which is really just simple JavaScript knowledge.
  3. How to bind variables into the query.

Like the other blog post, this one assumes you’ve performed a global install of Node.js on a Linux server. If you’re unfamiliar with how to perform a global Node.js installation, I cover how to do it in this earlier blog post.

Before you write the Node.js application, you need to set up a db developer directory. A global install of Node.js means you need to create a node_modules symbolic link to the /usr/local/lib/node_modules directory in the db directory (in Linux). You can use the following Linux command from within the db directory to create the appropriate symbolic link:

ln -s /usr/local/lib/node_modules `pwd`/node_modules

or, assuming you have a /home/some_user/db directory:

ln -s /usr/local/lib/node_modules /home/some_user/db/node_modules

After creating the node_modules symbolic link, you need to run the following two npm commands. Please note that the second command holds the secret sauce for generating a package.json file that supports Express.js and the MySQL driver:

npm init --y
sudo npm install --save express mysql

Then, you need to replace the package.json file with the contents of the package-lock.json file from your last npm command.
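For reference, a minimal package.json that declares both dependencies might look like this; the version numbers are illustrative rather than prescriptive:

{
  "name": "db",
  "version": "1.0.0",
  "main": "app.js",
  "dependencies": {
    "express": "^4.17.1",
    "mysql": "^2.18.1"
  }
}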

Here’s a small sample program that uses Express.js, converts the RowDataPackets collection, and binds local variables into the query:

// Require libraries.
const express = require('express') 
const mysql = require('mysql')
 
// Create a mysql connection. 
const connection = mysql.createConnection({ 
  host: 'localhost', 
  user: 'student', 
  password: 'student', 
  database: 'studentdb' 
}) 
 
// Declare two local variables.
const start_date = '2001-01-01' 
const end_date = '2003-12-31' 
 
// Connect and display results in the console log.
connection.connect((err) => { 
if (err) 
  throw err 
else { 
  console.log('Connected to MySQL Server!\n') 
  connection.query("SELECT   i.item_title " + 
                   ",        date_format(i.release_date,'%d-%M-%Y') AS release_date " + 
                   "FROM     item i JOIN common_lookup cl " + 
                   "ON       i.item_type = cl.common_lookup_id " + 
                   "WHERE    cl.common_lookup_type = 'BLU-RAY' " + 
                   "AND      i.release_date BETWEEN ? AND ? " + 
                   "ORDER BY i.release_date" 
                  ,[start_date, end_date], function (err, result) { 
    if (err) 
      throw err 
    else { 
      // Print the values from each RowDataPacket. 
      for(let element in result) { 
        console.log(result[element].item_title + ', ' + result[element].release_date) 
      } 
      console.log('') 
      console.log('Press Ctrl-C to terminate ...') 
    } 
  }) 
} 
})

The two question marks in the query string act as placeholders for bind variables. The collection holding the start_date and end_date local variables is then passed as the second argument to the query() function.

Rather than define individual variables, you can pass them as a collection directly. For example, you can replace the two date declarations with this single line:

const dates = ['2001-01-01','2003-12-31']

Then, you can pass dates as the second argument to the query() function, like this:

                  ,dates, function (err, result) {

The for loop shows you how to convert a collection of RowDataPacket objects into elements of data. It assigns the index value to the element variable, which lets you address a single RowDataPacket object instance. The dot (".") notation lets you use the name in a name-value pair to reference its value.

It displays the following:

Connected to MySQL Server! 
 
Star Wars II, 16-May-2002 
Harry Potter and the Chamber of Secrets, 28-May-2002 
Harry Potter and the Sorcerer's Stone, 28-May-2002 
Die Another Day, 03-June-2003 
 
Press Ctrl-C to terminate ...

Naturally, I’ll get around to writing something up that shows how to leverage MySQL data into a dynamic form with Handlebars at some point in the near future.

As always, I hope this helps those looking for a solution.

Written by maclochlainn

July 28th, 2020 at 1:48 pm