Archive for the ‘DevOps’ Category
Oracle External Table
Supporting my student labs requires the ability to use external tables. I didn’t have a chance to test external tables when I adopted Oracle’s 18c Docker installations. The following instructions cover the macOS version, which works. Unfortunately, I couldn’t sort out a way to implement external tables in the Windows version of Oracle’s 18c Docker installation. If somebody knows how to do that, feel free to drop me a note or a URL where the information can be found.
These two blog posts explain Oracle’s external tables:

- External Tables
- External Tables with Preprocessing
These instructions build on my base macOS instructions, which you can find in this earlier Install, configure, and use an Oracle Docker Container blog post. You can find the existing Oracle virtual directories when you connect as the system user and query the dba_directories view. I used the following SQL*Plus formatting commands for the query:
```sql
SET PAGESIZE 99
COL directory_name FORMAT A24
COL directory_path FORMAT A54
```
and the following SQL query:
```sql
SELECT directory_name
,      directory_path
FROM   dba_directories;
```
It returns the following results:
```
DIRECTORY_NAME           DIRECTORY_PATH
------------------------ ------------------------------------------------------
ORA_DBMS_FCP_LOGDIR      /opt/oracle/product/18c/dbhomeXE/cfgtoollogs
SDO_DIR_WORK             /ade/b/2794046351/oracle/work
SDO_DIR_ADMIN            /ade/b/2794046351/oracle/md/admin
ORA_DBMS_FCP_ADMINDIR    /opt/oracle/product/18c/dbhomeXE/rdbms/admin
XMLDIR                   /opt/oracle/product/18c/dbhomeXE/rdbms/xml
XSDDIR                   /opt/oracle/product/18c/dbhomeXE/rdbms/xml/schema
ORACLE_OCM_CONFIG_DIR2   /opt/oracle/product/18c/dbhomeXE/ccr/state
ORACLE_OCM_CONFIG_DIR    /opt/oracle/product/18c/dbhomeXE/ccr/state
OPATCH_INST_DIR          /opt/oracle/product/18c/dbhomeXE/OPatch
DATA_PUMP_DIR            /opt/oracle/admin/XE/dpdump/
OPATCH_SCRIPT_DIR        /opt/oracle/product/18c/dbhomeXE/QOpatch
OPATCH_LOG_DIR           /opt/oracle/product/18c/dbhomeXE/rdbms/log
ORACLE_BASE              /opt/oracle
ORACLE_HOME              /opt/oracle/product/18c/dbhomeXE

14 rows selected.
```
In that prior post, you will find instructions for creating an environment file. The following instructions leverage the $ORACLE_BASE environment variable, which points to the /opt/oracle directory in the Docker environment.
Here are the steps to complete the external file setup, including how to position the physical comma-separated value (CSV) files in the available but otherwise hidden Docker directories. Hidden might be the wrong word choice, but they’re not visible from the host macOS operating system; you have to connect to the Docker instance as the root user to see them.
The following Docker command, used in the prior blog post, connects as the root user:

```bash
docker exec -it videodb bash
```
The instructions cover steps both before and after connecting to the Docker instance:
- Put the transaction_upload.csv and transaction_upload2.csv files into the following macOS host directory:

  /Users/&lt;installuser&gt;/oracle/student

  which maps to the following internal Docker directory:

  /home/student
- As the connected root user, change directory to the $ORACLE_BASE (/opt/oracle) directory with the following command:

  ```bash
  cd $ORACLE_BASE
  ```
- As the connected root user, make an upload directory inside the $ORACLE_BASE (/opt/oracle) directory with the following command:

  ```bash
  mkdir upload
  ```
- As the connected root user, change the ownership of the upload directory from root (owner and primary group) to oracle (owner) and oinstall (primary group) with the following command:

  ```bash
  chown oracle:oinstall upload
  ```
- As the connected root user, copy the transaction_upload.csv and transaction_upload2.csv files from the /home/student directory to the new $ORACLE_BASE/upload (/opt/oracle/upload) directory with the following command:

  ```bash
  cp ~student/transaction_upload*.csv $ORACLE_BASE/upload/.
  ```
- As the connected root user, change from the $ORACLE_BASE (/opt/oracle) directory to the upload subdirectory with the following command:

  ```bash
  cd upload
  ```
- As the connected root user, change the ownership of the transaction_upload.csv and transaction_upload2.csv files from root (owner and primary group) to oracle (owner) and oinstall (primary group) with the following command:

  ```bash
  chown oracle:oinstall transaction_upload*.csv
  ```
- As the connected Oracle system user, create the upload virtual directory with the following command:

  ```sql
  CREATE DIRECTORY upload AS '/opt/oracle/upload';
  ```
- As the connected Oracle system user, grant read and write privileges on the upload virtual directory to the c##student user with the following command:

  ```sql
  GRANT READ, WRITE ON DIRECTORY upload TO c##student;
  ```
- As the connected Oracle c##student user, create the transaction_upload externally managed table with the following command:

  ```sql
  CREATE TABLE transaction_upload
  ( account_number          VARCHAR2(10)
  , first_name              VARCHAR2(20)
  , middle_name             VARCHAR2(20)
  , last_name               VARCHAR2(20)
  , check_out_date          DATE
  , return_date             DATE
  , rental_item_type        VARCHAR2(12)
  , transaction_type        VARCHAR2(14)
  , transaction_amount      NUMBER
  , transaction_date        DATE
  , item_id                 NUMBER
  , payment_method_type     VARCHAR2(14)
  , payment_account_number  VARCHAR2(19))
    ORGANIZATION EXTERNAL
    ( TYPE oracle_loader
      DEFAULT DIRECTORY upload
      ACCESS PARAMETERS
      ( RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
        BADFILE     'UPLOAD':'transaction_upload.bad'
        DISCARDFILE 'UPLOAD':'transaction_upload.dis'
        LOGFILE     'UPLOAD':'transaction_upload.log'
        FIELDS TERMINATED BY ','
        OPTIONALLY ENCLOSED BY "'"
        MISSING FIELD VALUES ARE NULL )
      LOCATION ('transaction_upload.csv'))
  REJECT LIMIT UNLIMITED;
  ```
- As the connected Oracle c##student user, query the transaction_upload table to verify that you can read the external file source through the virtual upload directory:

  ```sql
  COL record FORMAT 99,999 HEADING "Record|Count"
  SELECT TO_CHAR(COUNT(*),'99,999') AS record
  FROM   transaction_upload;
  ```
  It should return the following value:

  ```
  Record
  Count
  -------
  11,520
  ```
- As the connected Oracle c##student user, create the transaction_reversal externally managed table with the following command:

  ```sql
  CREATE TABLE transaction_reversal
  ( transaction_id          NUMBER
  , transaction_account     VARCHAR2(15)
  , transaction_type        VARCHAR2(30)
  , transaction_date        DATE
  , transaction_amount      NUMBER
  , rental_id               NUMBER
  , payment_method_type     NUMBER
  , payment_account_number  VARCHAR2(20)
  , created_by              NUMBER
  , creation_date           DATE
  , last_updated_by         NUMBER
  , last_update_date        DATE)
    ORGANIZATION EXTERNAL
    ( TYPE oracle_loader
      DEFAULT DIRECTORY upload
      ACCESS PARAMETERS
      ( RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
        BADFILE     'UPLOAD':'transaction_upload2.bad'
        DISCARDFILE 'UPLOAD':'transaction_upload2.dis'
        LOGFILE     'UPLOAD':'transaction_upload2.log'
        FIELDS TERMINATED BY ','
        OPTIONALLY ENCLOSED BY "'"
        MISSING FIELD VALUES ARE NULL )
      LOCATION ('transaction_upload2.csv'))
  REJECT LIMIT UNLIMITED;
  ```
- As the connected Oracle c##student user, query the transaction_reversal table to verify that you can read the external file source through the virtual upload directory:

  ```sql
  COL record FORMAT 99,999 HEADING "Record|Count"
  SELECT TO_CHAR(COUNT(*),'99,999') AS record
  FROM   transaction_reversal;
  ```
  It should return the following value:

  ```
  Record
  Count
  -------
  1,170
  ```
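Once both queries work, you may want the rows inside an ordinary table, since oracle_loader external tables are read-only from SQL. Here is a minimal sketch of that step; the transaction_staging table name is my own placeholder, not part of the lab:

```sql
-- Copy the external rows into an ordinary heap table; the name is illustrative.
CREATE TABLE transaction_staging AS
SELECT * FROM transaction_upload;
```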
Unfortunately, the file permissions on the Windows version of the Oracle 18c Docker installation make this setup more difficult to implement.
Java & MySQL 8.0.19
It’s the in-between term time, and we’re all stuck at home. I decided to update the image for my Fedora 30 virtual machine. I had a workaround for the update issue that I encountered last October in Bug #96969, but it was no longer required with the current version. However, after updating from MySQL 8.0.17 to MySQL 8.0.19, I found that my Java connection example failed.
The $CLASSPATH value was correct:

```
/usr/share/java/mysql-connector-java.jar:.
```
The first error that I got was that my reference to the MySQL JDBC driver was incorrect. The error message is quite clear:
```
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class
is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the
SPI and manual loading of the driver class is generally unnecessary.
Cannot connect to database server:
The server time zone value 'MDT' is unrecognized or represents more than one
time zone. You must configure either the server or JDBC driver (via the
'serverTimezone' configuration property) to use a more specifc time zone value
if you want to utilize time zone support.
```
I changed the MySQL Driver reference as instructed by the error message:
```java
// Create instance of MySQLDriver.
Class.forName("com.mysql.cj.jdbc.Driver").newInstance();
conn = DriverManager.getConnection(url, username, password);
```
After the change, I got the following error while retesting my little MySQL Java driver connection test program. Initially, I thought this required a change in the Java environment but that wasn’t it.
```
Cannot connect to database server:
The server time zone value 'MDT' is unrecognized or represents more than one
time zone. You must configure either the server or JDBC driver (via the
'serverTimezone' configuration property) to use a more specifc time zone value
if you want to utilize time zone support.
```
It required me to add the following line to my /etc/my.cnf configuration file, which synchronizes the database’s timezone with the operating system.
```ini
# Synchronize the MySQL clock with the computer system clock.
default-time-zone='+00:00'
```
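If you prefer not to edit /etc/my.cnf, MySQL 8 should let you persist the equivalent setting from a SQL session. This is a minimal sketch, assuming you are connected as a user with the SYSTEM_VARIABLES_ADMIN (or SUPER) privilege:

```sql
-- Set the global time zone offset and persist it across restarts (MySQL 8+).
SET PERSIST time_zone = '+00:00';
```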
With the time zone set, running my MySQL Driver connection test program worked like a charm. It returns the following:

```
Database connection established
MySQLDriver Version [8.0.19]
Database connection terminated
```
Here’s the MySQL Connector Java code if you’d like to use the MySQLDriver.java file:

```java
// Import classes.
import java.sql.*;

/* You can't include the following on Linux without raising an exception. */
// import com.mysql.jdbc.Driver;

public class MySQLDriver {
  public MySQLDriver() {
    /* Declare variables that require explicit assignments because
       they're addressed in the finally block. */
    Connection conn = null;
    Statement stmt = null;
    ResultSet rset = null;

    /* Declare other variables. */
    String url;
    String username = "student";
    String password = "student";
    String database = "studentdb";
    String hostname = "localhost";
    String port = "3306";
    String sql;

    /* Attempt a connection. */
    try {
      // Set URL.
      url = "jdbc:mysql://" + hostname + ":" + port + "/" + database;

      // Create instance of MySQLDriver.
      Class.forName("com.mysql.cj.jdbc.Driver").newInstance();
      conn = DriverManager.getConnection(url, username, password);

      // Query the version of the database.
      sql = "SELECT version()";
      stmt = conn.createStatement();
      rset = stmt.executeQuery(sql);
      System.out.println("Database connection established");

      // Read row returns for one column.
      while (rset.next()) {
        System.out.println("MySQLDriver Version [" + rset.getString(1) + "]");
      }
    }
    catch (SQLException e) {
      System.err.println("Cannot connect to database server:");
      System.out.println(e.getMessage());
    }
    catch (ClassNotFoundException e) {
      System.err.println("Cannot find MySQL driver class:");
      System.out.println(e.getMessage());
    }
    catch (InstantiationException e) {
      System.err.println("Cannot instantiate class:");
      System.out.println(e.getMessage());
    }
    catch (IllegalAccessException e) {
      System.err.println("Illegal access exception:");
      System.out.println(e.getMessage());
    }
    finally {
      if (conn != null) {
        try {
          rset.close();
          stmt.close();
          conn.close();
          System.out.println("Database connection terminated");
        }
        catch (Exception e) { /* ignore close errors */ }
      }
    }
  }

  /* Unit test. */
  public static void main(String[] args) {
    new MySQLDriver();
  }
}
```
As always, I hope this helps those who encounter similar problems.
PostgreSQL Write JSON File
I’ve been working through issues to support ETL transformations from JSON with Python. The first step was extracting the data from a PostgreSQL table into a JSON file. The syntax wasn’t exactly clear, and the PostgreSQL Tutorial was a bit misleading, but I worked it out. My initial sample deals with writing the data from the item table of my videodb database to an item.json file in the /tmp directory.
The query retrieves the data but casts the creation_date and last_update_date columns from the timestamp data type to the date data type, as follows:
```sql
SELECT row_to_json(
         (SELECT ROW
          FROM (SELECT item_id
                ,      item_barcode
                ,      item_type
                ,      item_title
                ,      item_subtitle
                ,      item_rating
                ,      item_release_date
                ,      created_by
                ,      creation_date::DATE
                ,      last_updated_by
                ,      last_update_date::DATE) ROW)) AS data
FROM   item;
```
The internal SELECT-list creates the name elements of the name:value pairs in the JSON structure. Naturally, it excludes the type-casting ::date portion of the query from the name elements. The row_to_json() function maps the name list to the value results in each row and returns a result set.
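If the nested (SELECT ROW FROM (...) ROW) pattern looks opaque, the same naming behavior shows up in a smaller form: row_to_json() takes its JSON keys from the column names of the record you hand it. A trimmed-down sketch against the same item table, using only two columns:

```sql
-- The subquery alias i supplies the record; its column names become JSON keys.
SELECT row_to_json(i) AS data
FROM  (SELECT item_id, item_title FROM item) i;
```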
The full query returns the following 21 JSON objects:
```
{"item_id":1001,"item_barcode":"9736-05640-4","item_type":1016,"item_title":"The Hunt for Red October","item_subtitle":"Special Collector's Edition","item_rating":"PG","item_release_date":"1990-03-02","created_by":1001,"creation_date":"2019-12-13","last_updated_by":1001,"last_update_date":"2019-12-13"}
{"item_id":1002,"item_barcode":"24543-02392","item_type":1016,"item_title":"Star Wars I","item_subtitle":"Phantom Menace","item_rating":"PG","item_release_date":"1999-05-04","created_by":1001,"creation_date":"2019-12-13","last_updated_by":1001,"last_update_date":"2019-12-13"}
{"item_id":1003,"item_barcode":"24543-5615","item_type":1015,"item_title":"Star Wars II","item_subtitle":"Attack of the Clones","item_rating":"PG","item_release_date":"2002-05-16","created_by":1001,"creation_date":"2019-12-13","last_updated_by":1001,"last_update_date":"2019-12-13"}
{"item_id":1004,"item_barcode":"24543-05539","item_type":1016,"item_title":"Star Wars II","item_subtitle":"Attack of the Clones","item_rating":"PG","item_release_date":"2002-05-16","created_by":1001,"creation_date":"2019-12-13","last_updated_by":1001,"last_update_date":"2019-12-13"}
{"item_id":1005,"item_barcode":"24543-20309","item_type":1016,"item_title":"Star Wars III","item_subtitle":"Revenge of the Sith","item_rating":"PG13","item_release_date":"2005-05-19","created_by":1001,"creation_date":"2019-12-13","last_updated_by":1001,"last_update_date":"2019-12-13"}
{"item_id":1006,"item_barcode":"86936-70380","item_type":1016,"item_title":"The Chronicles of Narnia","item_subtitle":"The Lion, the Witch and the Wardrobe","item_rating":"PG","item_release_date":"2002-05-16","created_by":1001,"creation_date":"2019-12-13","last_updated_by":1001,"last_update_date":"2019-12-13"}
{"item_id":1007,"item_barcode":"91493-06475","item_type":1019,"item_title":"RoboCop","item_subtitle":"","item_rating":"Mature","item_release_date":"2003-07-24","created_by":1001,"creation_date":"2019-12-13","last_updated_by":1001,"last_update_date":"2019-12-13"}
{"item_id":1008,"item_barcode":"93155-11810","item_type":1019,"item_title":"Pirates of the Caribbean","item_subtitle":"","item_rating":"Teen","item_release_date":"2003-06-30","created_by":1001,"creation_date":"2019-12-13","last_updated_by":1001,"last_update_date":"2019-12-13"}
{"item_id":1009,"item_barcode":"12725-00173","item_type":1019,"item_title":"The Chronicles of Narnia","item_subtitle":"The Lion, the Witch and the Wardrobe","item_rating":"Everyone","item_release_date":"2003-06-30","created_by":1001,"creation_date":"2019-12-13","last_updated_by":1001,"last_update_date":"2019-12-13"}
{"item_id":1010,"item_barcode":"45496-96128","item_type":1017,"item_title":"MarioKart","item_subtitle":"Double Dash","item_rating":"Everyone","item_release_date":"2003-11-17","created_by":1001,"creation_date":"2019-12-13","last_updated_by":1001,"last_update_date":"2019-12-13"}
{"item_id":1011,"item_barcode":"08888-32214","item_type":1018,"item_title":"Splinter Cell","item_subtitle":"Chaos Theory","item_rating":"Teen","item_release_date":"2003-04-08","created_by":1001,"creation_date":"2019-12-13","last_updated_by":1001,"last_update_date":"2019-12-13"}
{"item_id":1012,"item_barcode":"14633-14821","item_type":1018,"item_title":"Need for Speed","item_subtitle":"Most Wanted","item_rating":"Everyone","item_release_date":"2004-11-15","created_by":1001,"creation_date":"2019-12-13","last_updated_by":1001,"last_update_date":"2019-12-13"}
{"item_id":1013,"item_barcode":"10425-29944","item_type":1019,"item_title":"The DaVinci Code","item_subtitle":"","item_rating":"Teen","item_release_date":"2006-05-19","created_by":1001,"creation_date":"2019-12-13","last_updated_by":1001,"last_update_date":"2019-12-13"}
{"item_id":1014,"item_barcode":"52919-52057","item_type":1019,"item_title":"Cars","item_subtitle":"","item_rating":"Everyone","item_release_date":"2006-04-28","created_by":1001,"creation_date":"2019-12-13","last_updated_by":1001,"last_update_date":"2019-12-13"}
{"item_id":1015,"item_barcode":"9689-80547-3","item_type":1020,"item_title":"Beau Geste","item_subtitle":"","item_rating":"PG","item_release_date":"1992-03-01","created_by":1001,"creation_date":"2019-12-13","last_updated_by":1001,"last_update_date":"2019-12-13"}
{"item_id":1016,"item_barcode":"53939-64103","item_type":1020,"item_title":"I Remember Mama","item_subtitle":"","item_rating":"NR","item_release_date":"1998-01-05","created_by":1001,"creation_date":"2019-12-13","last_updated_by":1001,"last_update_date":"2019-12-13"}
{"item_id":1017,"item_barcode":"24543-01292","item_type":1020,"item_title":"Tora! Tora! Tora!","item_subtitle":"The Attack on Pearl Harbor","item_rating":"G","item_release_date":"1999-11-02","created_by":1001,"creation_date":"2019-12-13","last_updated_by":1001,"last_update_date":"2019-12-13"}
{"item_id":1018,"item_barcode":"43396-60047","item_type":1020,"item_title":"A Man for All Seasons","item_subtitle":"","item_rating":"G","item_release_date":"1994-06-28","created_by":1001,"creation_date":"2019-12-13","last_updated_by":1001,"last_update_date":"2019-12-13"}
{"item_id":1019,"item_barcode":"43396-70603","item_type":1020,"item_title":"Hook","item_subtitle":"","item_rating":"PG","item_release_date":"1991-12-11","created_by":1001,"creation_date":"2019-12-13","last_updated_by":1001,"last_update_date":"2019-12-13"}
{"item_id":1020,"item_barcode":"85391-13213","item_type":1020,"item_title":"Around the World in 80 Days","item_subtitle":"","item_rating":"G","item_release_date":"1992-12-04","created_by":1001,"creation_date":"2019-12-13","last_updated_by":1001,"last_update_date":"2019-12-13"}
{"item_id":1021,"item_barcode":"85391-10843","item_type":1020,"item_title":"Camelot","item_subtitle":"","item_rating":"G","item_release_date":"1998-05-15","created_by":1001,"creation_date":"2019-12-13","last_updated_by":1001,"last_update_date":"2019-12-13"}
```
Moving on to the next step, you enclose the query in the COPY command, like this:
```sql
COPY (SELECT row_to_json(
               (SELECT ROW
                FROM (SELECT item_id
                      ,      item_barcode
                      ,      item_type
                      ,      item_title
                      ,      item_subtitle
                      ,      item_rating
                      ,      item_release_date
                      ,      created_by
                      ,      creation_date::DATE
                      ,      last_updated_by
                      ,      last_update_date::DATE) ROW)) AS data
      FROM item) TO '/tmp/sample.json';
```
When you run it, it will fail with the following error message:
```
ERROR:  must be superuser or a member of the pg_write_server_files role to COPY to a file
HINT:  Anyone can COPY to stdout or from stdin. psql's \copy command also works for anyone.
```
You can fix this problem by connecting as the postgres user and granting the privilege to the student user:

```sql
GRANT pg_write_server_files TO student;
```
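Alternatively, the HINT above points at a client-side route that avoids the server-side grant entirely: psql's \copy writes the file with the client's operating-system privileges. A minimal sketch with a trimmed column list (the full row_to_json() query from above should work the same way; note that \copy must stay on one line):

```sql
-- Runs inside psql; the file is written by the client, not the server.
\copy (SELECT row_to_json(i) FROM (SELECT item_id, item_title FROM item) i) TO '/tmp/sample.json'
```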
With the grant in place, the COPY statement runs successfully as the student user and creates the sample.json file in the /tmp directory. As always, I hope this helps those looking for a solution.
Java and Postgres
I wanted to get Java working with PostgreSQL to test some GUI interfaces on Linux. Figuring out the necessary JAR file for the JDBC driver was my first hurdle. I found it was the postgresql-42.2.5.jar file.
You can download it with the following command line:

```bash
wget https://jdbc.postgresql.org/download/postgresql-42.2.5.jar
```
I downloaded it to a Java directory off the /home/student directory. Then, I added the following CLASSPATH to a local java.env environment file:

```bash
#!/usr/bin/bash

# Set the Java CLASSPATH environment variable to include the JDBC jar file.
export CLASSPATH=/home/student/Java/postgresql-42.2.5.jar:.
```
I sourced the java.env file and wrote the following JavaTest.java program:
```java
/* Import classes. */
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.logging.Level;
import java.util.logging.Logger;

public class JavaTest {
  public static void main(String[] args) {
    /* Set default strings for the connection. */
    String url = "jdbc:postgresql://localhost:5432/videodb";
    String user = "student";
    String password = "student";

    /* Try the connection and statement. */
    try {
      Connection conn = DriverManager.getConnection(url, user, password);
      Statement st = conn.createStatement();
      ResultSet rs = st.executeQuery("SELECT VERSION()");
      if (rs.next()) {
        System.out.println(rs.getString(1));
      }

      /* Close the result set and statement. */
      rs.close();
      st.close();
    }
    catch (SQLException e) {
      Logger logger = Logger.getLogger(JavaTest.class.getName());
      logger.log(Level.SEVERE, e.getMessage(), e);
    }
  }
}
```
I compiled the JavaTest.java program and tested it. It failed with the following error:

```
FATAL: Ident authentication failed for user - Unable to connect to PostgreSQL
```
The failure occurred because I hadn’t allowed the connection in PostgreSQL’s pg_hba.conf file. I changed the following line in my pg_hba.conf file:

```
host    all    all    127.0.0.1/32    ident
```
to
```
host    all    all    127.0.0.1/32    trust
```
Then, I restarted the postgresql-11 service, like this:

```bash
systemctl restart postgresql-11.service
```
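As an aside, trust turns authentication off entirely for matching connections, which is fine for a throwaway sandbox but little else. A hedged alternative, assuming the student user has a password set in the database, is password-based authentication on the same line:

```
host    all    all    127.0.0.1/32    md5
```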
The JavaTest program ran successfully and returned:

```
PostgreSQL 11.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 9.1.1 20190503 (Red Hat 9.1.1-1), 64-bit
```
You can extend the logic to output a comma-separated value file by leveraging the JDBC help page, like this:
```java
/* Import classes. */
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.logging.Level;
import java.util.logging.Logger;

/* Create a class with only a static method for runtime testing. */
public class JavaReturnSet {
  public static void main(String[] args) {
    /* Set default strings for the connection. */
    String url = "jdbc:postgresql://localhost:5432/videodb";
    String user = "student";
    String password = "student";

    /* Try the connection and statement. */
    try {
      /* Set connection, statement, and result set. */
      Connection conn = DriverManager.getConnection(url, user, password);
      Statement st = conn.createStatement();

      /* Use the + to concatenate lines for query clarity. */
      ResultSet rs = st.executeQuery("SELECT m.account_number\n" +
                                     ",      CONCAT(c.last_name,', ',c.first_name)\n" +
                                     "FROM   member m JOIN contact c\n" +
                                     "ON     m.member_id = c.member_id");

      /* Get query metadata for subsequent management of results. */
      ResultSetMetaData rsmd = rs.getMetaData();
      String line;
      while (rs.next()) {
        /* Initialize the line output for each row. */
        line = "";

        /* Process the columns. */
        for (int i = 0; i < rsmd.getColumnCount(); i++) {
          /* Type code 12 is java.sql.Types.VARCHAR; quote string columns. */
          if (rsmd.getColumnType(i + 1) == 12) {
            line = line + "\"" + rs.getString(i + 1) + "\"";
          }
          else {
            line = line + rs.getInt(i + 1);
          }

          /* Put a comma between output columns. */
          if (i < rsmd.getColumnCount() - 1) {
            line = line + ",";
          }
        }
        System.out.println(line);
      }

      /* Close the result set and statement. */
      rs.close();
      st.close();
    }
    catch (SQLException e) {
      Logger logger = Logger.getLogger(JavaReturnSet.class.getName());
      logger.log(Level.SEVERE, e.getMessage(), e);
    }
  }
}
```
As always, I hope this helps those looking for a clean solution.