Archive for the ‘bash’ Category
Troubleshoot Oracle Errors
It's always a bit difficult to trap errors in SQL Developer when you're running scripts that do multiple things. As old as it is, using the SQL*Plus utility and spooling to log files is generally the fastest way to localize errors across multiple elements of a script. Unfortunately, you must break your scripts into individual components, like separate files when you create a type, procedure, function, or package.
This is part of my solution to leverage in-depth testing of the Oracle Database 23ai Free container from a native Ubuntu platform. You can find a prior post that shows you how to set up the Oracle Client on Ubuntu and connect to the Oracle Database 23ai Free container.
After you’ve done that, put the following oracle_errors Bash shell function into your testing context, or into your .bashrc file:
# Troubleshooting errors utility function.
oracle_errors () {
  # Oracle error prefixes qualify groups of error types, like
  # this subset of error prefixes used in the Bash function.
  # ============================================================
  #   JMS - Java Messaging Errors
  #   JZN - JSON Errors
  #   KUP - External Table Access Errors
  #   LGI - File I/O Errors
  #   OCI - Oracle Call Interface Errors
  #   ORA - Oracle Database Errors
  #   PCC - Oracle Precompiler Errors
  #   PLS - Oracle PL/SQL Errors
  #   PLW - Oracle PL/SQL Warnings
  #   SP2 - Oracle SQL*Plus Errors
  #   SQL - SQL Library Errors
  #   TNS - SQL*Net (networking) Errors
  # ============================================================

  # Define an array of Oracle error prefixes.
  prefixes=("jms" "jzn" "kup" "lgi" "oci" "ora" "pcc" "pls" "plw" "sp2" "sql" "tns")

  # Prepend the -e for the grep utility to use regular expression pattern matching; and
  # use the ^ before the Oracle error prefixes to avoid returning lines that may
  # contain the prefix in a comment, like the word lookup contains the prefix kup.
  for str in ${prefixes[@]}; do
    patterns+=" -e ^${str}"
  done

  # Display output from a SQL*Plus show errors command written to a log file when
  # a procedure, function, object type, or package body fails to compile. This
  # prints the warning message followed by the line number displayed.
  patterns+=" -e ^warning"
  patterns+=" -e ^[0-9]/[0-9]"

  # Assign any file filter to the ext variable.
  ext=${1}

  # Assign the extension or simply use a wildcard for all files.
  if [ ! -z ${ext} ]; then
    ext="*.${ext}"
  else
    ext="*"
  fi

  # Assign the number of qualifying files to a variable.
  fileNum=$(ls -l ${ext} 2>/dev/null | grep -v ^l | wc -l)

  # Evaluate the number of qualifying files and process.
  if [ ${fileNum} -eq "0" ]; then
    echo "[0] files exist."
  elif [ ${fileNum} -eq "1" ]; then
    fileName=$(ls ${ext})
    find `pwd` -type f | grep -in ${ext} ${patterns} | while IFS='\n' read list; do
      echo "${fileName}:${list}"
    done
  else
    find `pwd` -type f | grep -in ${ext} ${patterns} | while IFS='\n' read list; do
      echo "${list}"
    done
  fi

  # Clear the ${patterns} variable.
  patterns=""
}
Now, let's create a debug.txt test file to demonstrate how to use the oracle_errors function, like:
ORA-12704: character set mismatch
PLS-00124: name of exception expected for first arg in exception_init pragma
SP2-00200: Environment error
JMS-00402: Class not found
JZN-00001: End of input
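If you want to create the same test file from the Bash prompt, a here-document works; this is just one way to do it:

cat > debug.txt <<'EOF'
ORA-12704: character set mismatch
PLS-00124: name of exception expected for first arg in exception_init pragma
SP2-00200: Environment error
JMS-00402: Class not found
JZN-00001: End of input
EOF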
You can navigate to your logging directory and call the oracle_errors function, like:
oracle_errors txt
It'll return the following, which is the file name, line number, and error code:
debug.txt:1:ORA-12704: character set mismatch
debug.txt:2:PLS-00124: name of exception expected for first arg in exception_init pragma
debug.txt:3:SP2-00200: Environment error
debug.txt:4:JMS-00402: Class not found
debug.txt:5:JZN-00001: End of input
There are other Oracle error prefixes, but the ones I've selected are the most common errors for Java, JavaScript, PL/SQL, Python, and SQL testing. You can add other prefixes to the prefixes array if your use cases require them. Just a note for those new to Bash shell scripting: the ${variable_name} syntax is required when expanding arrays.
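For example, if you also needed to catch Oracle Text errors, you could extend the array with the drg prefix (shown here only as an illustration) and the existing loop picks it up automatically:

# Extend the array with any extra prefixes your logs contain; drg is only an illustration.
prefixes=("jms" "jzn" "kup" "lgi" "oci" "ora" "pcc" "pls" "plw" "sp2" "sql" "tns" "drg")

# The braces in ${prefixes[@]} are required when expanding the array.
for str in ${prefixes[@]}; do
  patterns+=" -e ^${str}"
done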
For a more complete demonstration, I created the following files for a trivial example of procedure overloading in PL/SQL:
- tables.sql – that creates two tables.
- spec.sql – that creates a package specification.
- body.sql – that implements a package specification.
- test.sql – that implements a test case using the package.
- integration.sql – that calls the scripts in the proper order.
The tables.sql, spec.sql, body.sql, and test.sql use the SQL*Plus spool command to write log files, like:
SPOOL spec.txt
-- Insert code here ...
SPOOL OFF
The body.sql file includes SQL*Plus list and show errors commands, like:
SPOOL body.txt
-- Insert code here ...
LIST
SHOW ERRORS
SPOOL OFF
The integration.sql script calls the tables.sql, spec.sql, body.sql, and test.sql in order. Corrupting the spec.sql file by adding a stray “x” to one of the parameter names causes a cascade of errors. After running the integration.sql file with the introduced error, the Bash oracle_errors function returns:
body.txt:2:Warning: Package Body created with compilation errors.
body.txt:148:4/13     PLS-00323: subprogram or cursor 'WARNER_BROTHER' is declared in a
test.txt:4:ORA-06550: line 2, column 3:
test.txt:5:PLS-00306: wrong number or types of arguments in call to 'WARNER_BROTHER'
test.txt:6:ORA-06550: line 2, column 3:
I hope that helps those learning how to program and perform integration testing in an Oracle Database.
Bash Debug Function
My students working in Linux would have a series of labs to negotiate and I’d have them log the activities of their Oracle SQL scripts. Many of them would suffer quite a bit because they didn’t know how to find the errors in the log files.
I wrote this Bash function for them to put in their .bashrc files. It searches all the .txt files for errors and organizes them by log file, line number, and descriptive error message.
errors () {
  label="File Name:Line Number:Error Code"
  list=`ls ./*.$1 | wc -l`
  if [[ ${list} -eq 1 ]]; then
    echo ${label}
    echo "----------------------------------------"
    filename=`ls *.txt`
    echo ${filename}:`find . -type f | grep -in *.txt -e ora\- -e pls\- -e sp2\-`
  else
    if [[ ${list} -gt 1 ]]; then
      echo ${label}
      echo "----------------------------------------"
      find . -type f | grep --color=auto -in *.txt -e ora\- -e pls\- -e sp2\-
    fi
  fi
}
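Assuming the lab logs are written with a .txt extension, they would call it from the logging directory like this:

errors txt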
I hope it helps others now too.
Docker on macOS
I finally got on the current release of macOS, Monterey, and found that my tedious Docker error still existed. When the computer boots, I get a Docker fatal error message.
Open a Terminal session and issue the following command:
killall Docker
Then, restart Docker and everything is fine.
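If the error shows up on every boot, you can wrap both steps in a small Bash function for your .bashrc; this is only a sketch and assumes Docker Desktop lives in /Applications/Docker.app:

docker_restart () {
  # Stop any stuck Docker Desktop processes.
  killall Docker 2>/dev/null

  # Relaunch Docker Desktop.
  open -a Docker
}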
Wrap Oracle’s tnsping
If you've worked with the Oracle database a while, you probably noticed that some utilities write to stdout for both standard output and what should be standard error (stderr). One of those commands is the tnsping utility.
You can wrap the tnsping command to send the TNS-03505 error to stderr with the following code. I put Bash functions like these in a library.sh script, which I can source when automating tasks.
#!/usr/bin/bash

tnsping() {
  if [ ! -z ${1} ]; then
    # Set default return value.
    stdout=`$ORACLE_HOME/bin/tnsping ${1} | tail -1`

    # Check stdout to return 0 for success and 1 for failure.
    if [[ `echo ${stdout} | cut -c1-9` = 'TNS-03505' ]]; then
      python -c 'import os, sys; arg = sys.argv[1]; os.write(2,arg + "\n")' "${stdout}"
    else
      echo "${1}"
    fi
  fi
}
You should notice that the script uses a Python call to redirect the error message to standard error (stderr), but you can do the redirection natively in the Bash shell with the following:
#!/usr/bin/bash

tnsping() {
  if [ ! -z ${1} ]; then
    # Set default return value.
    stdout=`$ORACLE_HOME/bin/tnsping ${1} | tail -1`

    # Check stdout to return 0 for success and 1 for failure.
    if [[ `echo ${stdout} | cut -c1-9` = 'TNS-03505' ]]; then
      echo ${stdout} 1>&2
    else
      echo "${1}"
    fi
  fi
}
Interactively, we can now test a non-existent service name like wrong with this syntax:
tnsping wrong
It’ll print the standard error to console, like:
TNS-03505: Failed to resolve name
or, you can suppress standard error (stderr) by redirecting it to the traditional black hole, like:
tnsping wrong 2>/dev/null
After redirecting standard error (stderr), you simply receive nothing back. That lets you evaluate in another script whether or not the utility raises an error.
In an automated Bash shell script, you use the source command to put the Bash function in scope, like this:
source library.sh
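Once the function is in scope, a calling script can branch on whether anything came back on stdout; here's a minimal sketch that assumes a hypothetical orcl service name:

# Capture stdout only; the TNS-03505 message goes to stderr and is discarded.
service=$(tnsping orcl 2>/dev/null)

# An empty result means the service name didn't resolve.
if [ -z "${service}" ]; then
  echo "Service name did not resolve."
else
  echo "Service ${service} resolved."
fi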
As always, I hope this helps those looking for a solution.
MySQL JSON Server
A student question: Does JavaScript make context switching for web-based applications obsolete? Wow! I asked what that meant. He said, it means JavaScript replaces all other server-side programming languages, like PHP, C#, or Python. I asked the student why he believed that. His answer was that’s what two interviewing managers told him.
I thought it would be interesting to put the idea to a test. Below is a Node.js script that acts as a utility to query the MySQL database with substitution variables in the query. It returns the MySQL query's results as a standard output (stdout) stream. It also supports three flag-and-value pairs as arguments, and optionally writes the results of the MySQL query to a log file while still returning the result as the stdout value. All errors are written to the standard error (stderr) stream.
The Node.js solution is completely portable between Windows and Linux. You can deploy it to either platform without any edits for the Windows case-insensitive Command-Line Interface (CLI). Clearly, Node.js offers a replacement for direct interaction with .NET components in PowerShell. This appears to mean basic Linux shell or PowerShell knowledge is all that's required to write and deploy JavaScript programs as server-side programming solutions. It means anything that you would have done with .NET you can do with JavaScript. Likewise, you can replace PHP, C#, Python, or Ruby server-side scripts with JavaScript programs.
// Declare constants.
const fs = require('fs')
const util = require('util')
const express = require('express')
const mysql = require('mysql')
const connection = mysql.createConnection({
  host: 'localhost',
  user: 'student',
  password: 'student',
  database: 'studentdb' })

// Declare local variables for case insensitive use.
var data = ''
var buffer = Buffer.alloc(0)
var path = ''

// Declare default query date variables.
var startDate = new Date('1980-01-01')
var endDate = new Date()

// Set default endDate value as tomorrow.
endDate.setDate(new Date().getDate() + 1)

// Define a regular expression for valid file names.
var regexp = /^([0-9a-zA-Z]+|[0-9a-zA-Z]+\.+[0-9a-zA-Z]{3})$/

// Assign dynamic variables from arguments.
var argv = process.argv.slice(2)

// Check for paired values, evaluate and assign them to local variables.
if ((argv.length % 2) == 0) {
  for (let i = 0; i < argv.length; i += 2) {
    // Assign a file name to write to the output path.
    if ((argv[i].toLowerCase() == '-f') && (regexp.test(argv[i+1]))) {
      // Assign the present working directory for Windows or Linux.
      if (process.platform == 'win32')
        path = '.\\' + argv[i+1]
      else
        path = './' + argv[i+1]
    }
    // Assign a start date from the input string.
    else if (argv[i].toLowerCase() == '-b') {
      startDate = new Date(argv[i+1])
    }
    // Assign an end date from the input string.
    else if (argv[i].toLowerCase() == '-e') {
      endDate = new Date(argv[i+1])
    }
  }
}
else {
  console.error('Arguments must be in pairs: flag and value.')
}

// Define and run the MySQL query.
connection.query(
  "SELECT i.item_title " +
  ",      date_format(i.release_date,'%d-%M-%Y') AS release_date " +
  "FROM   item i JOIN common_lookup cl " +
  "ON     i.item_type = cl.common_lookup_id " +
  "WHERE  cl.common_lookup_type = 'BLU-RAY' " +
  "AND    i.release_date BETWEEN ? AND ? " +
  "ORDER BY i.release_date"
 ,[startDate, endDate], function (err, result) {
  if (err) {
    console.error('Query contains error ...')
    console.error('-> ' + err)
  }
  else {
    // Print the index value in the RowDataPacket.
    for (let element in result) {
      data += result[element].item_title + ', ' + result[element].release_date + '\n'
    }

    // Write a file when the data string is not empty.
    if (data.length > 0) {
      buffer = Buffer.alloc(data.length, data)

      // Check for a defined path before writing a file.
      if (path.length > 0) {
        // Open the file.
        fs.open(path, 'w', function(err, fd) {
          if (err) {
            console.error('Could not open [' + path + '] file [' + err + ']')
          }
          else {
            // Write the file.
            fs.write(fd, buffer, 0, buffer.length, null, function(err) {
              if (err) console.error('Error writing [' + path + '] file [' + err + ']')
              fs.close(fd, function() {
                if (fs.existsSync(path)) {
                  process.exit(0)
                }
              })
            })
          }
        })
      }

      // Set standard out (stdout).
      console.log(data)
    }
    else {
      console.error('Query returned no rows.')
    }
  }
})

// Close the MySQL connection.
connection.end()
You can call this code with the default values, like:
node app.js
You can call this code with a user-defined file name and custom start and end date values, like:
node app.js -f output.csv -b '2001-01-01' -e '2004-12-31'
The latter command returns the following by querying my MySQL studentdb video store database:
Star Wars II, 16-May-2002
Harry Potter and the Chamber of Secrets, 28-May-2002
Harry Potter and the Sorcerer's Stone, 28-May-2002
Die Another Day, 03-June-2003
Harry Potter and the Prisoner of Azkaban, 23-October-2004
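Because the script keeps results on stdout and errors on stderr, a calling Bash script can capture the two streams separately; here's a minimal sketch, where query_errors.log is just an illustrative file name:

#!/usr/bin/bash

# Capture the query results and route any errors to a log file.
results=$(node app.js -b '2001-01-01' -e '2004-12-31' 2>query_errors.log)

# Print the rows, or report that nothing came back.
if [ -z "${results}" ]; then
  echo "No rows returned; check query_errors.log."
else
  echo "${results}"
fi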
As always, I hope this helps somebody trying to sort it out.
Recursive bash function
While I was teaching a class on the Linux Command-Line Interface (CLI), the book gave an example of generating a list of random US telephone numbers into a file. The book uses the RANDOM function to generate segments of the telephone number, and then the grep command to identify malformed telephone numbers.
My students wanted me to explain why the numbers were malformed. I had to explain that the RANDOM function returns a random integer between 0 and 32,767, which means it may return a 1- to 5-digit number. As a result, you may get a 1- or 2-digit value when you request a 3-digit random number, or a 1- to 3-digit value when you request a 4-digit random number.
The author’s example is:
for i in {1..10}; do
  echo "(${RANDOM:0:3}) ${RANDOM:0:3}-${RANDOM:0:4}" >> list.txt
done
They asked if there was a way to write a shell script that guaranteed random but well-formed US telephone numbers. I said yes, however, you need to write a recursive bash shell function and assign the result to a global variable set in the shell script.
They seemed doubtful, so I wrote it for them. Here's the script if you're interested in learning more about Bash shell scripting. While I implemented it with a Bash array, that's optional.
#!/usr/bin/bash

# ============================================================
#  Name:   telephone.sh
#  Author: Michael McLaughlin
#  Date:   05-May-2020
# ------------------------------------------------------------
#  Purpose: Demonstrate how to generate random telephone
#           numbers. The RANDOM function returns a random
#           number between 0 and 32767; and while you can
#           easily shave off an extra digit, guaranteeing a
#           value above 100 is impossible without logic.
# ============================================================

targetLength() {
  # Declare variable in function-level scope.
  randomString=''

  # Check the number of parameters to process.
  if [[ ${#} = 2 ]]; then
    # Assign value to function-level and local variables.
    randomString=${1}
    formatLength=${2}

    # Get the length of the telephone number as an integer.
    length=`echo -n ${randomString} | wc -c`

    # Calculate any shortfall.
    short=$((${formatLength}-${length}))

    # Check if the telephone number is too short.
    if [[ ${short} > 0 ]]; then
      randomString=`echo "${randomString}${RANDOM:0:${short}}"`
    fi
  fi

  # Check if the combination of random numbers equals the target length
  # and assign the value to the global variable, or repeat processing
  # by making a recursive function call.
  if [[ `echo -n ${randomString} | wc -c` = ${formatLength} ]]; then
    result=${randomString}
  else
    targetLength ${randomString} ${formatLength}
  fi
}

# Declare global variable to support targetLength().
result=''

# Declare an array of strings.
declare -A telephone_parts

# Generate one hundred random telephone numbers.
for i in {1..100}; do
  # Create a random three digit area code.
  targetLength ${RANDOM:0:3} 3
  telephone_parts[1]=${result}

  # Create a random three digit prefix code.
  targetLength ${RANDOM:0:3} 3
  telephone_parts[2]=${result}

  # Create a random four digit number code.
  targetLength ${RANDOM:0:4} 4
  telephone_parts[3]=${result}

  # Print the telephone numbers.
  echo "[${i}] (${telephone_parts[1]}) ${telephone_parts[2]}-${telephone_parts[3]}"
done
For reference, a recursive function call isn't required here. It could be done more effectively with the following while loop:
targetLength() {
  # Declare variables in function-level scope.
  randomString=''
  short=1

  # Check the number of parameters to process.
  if [[ ${#} = 2 ]]; then
    # Assign value to function-level and local variables.
    randomString=${1}
    formatLength=${2}

    # Check if the telephone number is too short.
    while [[ ${short} > 0 ]]; do
      # Get the length of the telephone number as an integer.
      length=`echo -n ${randomString} | wc -c`

      # Calculate any shortfall.
      short=$((${formatLength}-${length}))

      # Assign new value to randomString.
      randomString=`echo "${randomString}${RANDOM:0:${short}}"`
    done

    # Assign randomString to the global result variable.
    result=${randomString}
  fi
}
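Either version is called the same way: pass a seed string and a target length, then read the global result variable, like:

# Generate a well-formed four digit segment.
targetLength ${RANDOM:0:4} 4
echo "Four digit segment: ${result}"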
As always, I hope this helps those who want to learn or solve a problem.
Wrap Oracle SQL*Plus
One of the key problems with Oracle's deployment is that you cannot use the up-arrow key to navigate the sqlplus command-line history. Here's a little Bash shell function that you can put in your .bashrc file. It requires you to have your system administrator install the rlwrap package, which wraps the sqlplus command line with a history.
You should also set the $ORACLE_HOME environment variable before you put this function in your .bashrc file.
sqlplus () {
  # Discover the fully qualified program name.
  path=`which rlwrap 2>/dev/null`
  file=''

  # Parse the program name from the path.
  if [ -n ${path} ]; then
    file=${path##/*/}
  fi

  # Wrap when there is a file and it is rlwrap.
  if [ -n ${file} ] && [[ ${file} = "rlwrap" ]]; then
    rlwrap sqlplus "${@}"
  else
    echo "Command-line history unavailable: Install the rlwrap package."
    $ORACLE_HOME/bin/sqlplus "${@}"
  fi
}
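After sourcing your .bashrc, you call the wrapper exactly as you would call the native client; the student connection below is only an example:

# Reload the shell configuration and connect; the up-arrow now recalls history.
source ~/.bashrc
sqlplus student/student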
If you port this shell script to an environment where rlwrap is not installed, it simply prints the error message and advises you to install the rlwrap package.
As always, I hope this helps those looking for a solution.
Oracle Error Bash f(x)
My students always struggle initially with basic Linux skills. I wrote a little function for their .bashrc file to help them avoid the frustration. It finds and displays all errors by file name, line number, and error message for a collection of log files in a single directory (or folder).
errors() {
  # Determine if any log files exist and check for errors.
  label="File Name:Line Number:Error Code"
  list=`ls ./*.$1 | wc -l`

  if [[ ${list} -eq 1 ]]; then
    echo ${label}
    echo "--------------------------------------------------"
    filename=`ls *.txt`
    echo ${filename}:`find . -type f | grep -in *.txt -e ora\- -e pls\- -e sp2\-`
  elif [[ ${list} -gt 1 ]]; then
    echo ${label}
    echo "--------------------------------------------------"
    find . -type f | grep -in *.txt -e ora\- -e pls\- -e sp2\-
  fi
}
Let's say you name your log files with a file extension of .txt; then you would call the function like this:
errors txt
It would return output like the following:
common_lookup_lab.txt:229:ORA-02275: such a referential constraint already exists in the table
common_lookup_lab.txt:239:ORA-02275: such a referential constraint already exists in the table
As always, I hope this helps those looking for a solution.
Find files with errors
My students wanted a quick solution on how to find the log files that contain errors. That's a simple line of code in Linux if you want any Oracle errors that start with ORA-:
find $HOME/lab2 -type f | xargs grep -i ora\-
It takes only a moment more to look for errors starting with ORA- or PLS-, like:
find $HOME/lab2 -type f | xargs grep -i -e ora\- -e pls\-
The latter might return something like this:
contact_lab.txt:ORA-00904: "MEMBER_LAB_ID": invalid identifier
contact_lab.txt:ORA-00942: table or view does not exist
contact_lab.txt:ORA-00942: table or view does not exist
member_lab.txt:ORA-02264: name already used by an existing constraint
member_lab.txt:ORA-00955: name is already used by an existing object
You can improve the error identification by adding the -n option to report line numbers, like:
find $HOME/lab2 -type f | xargs grep -in -e ora\- -e pls\-
The latter might return something like this when there are two or more files:
contact_lab.txt:76:ORA-00904: "MEMBER_LAB_ID": invalid identifier
contact_lab.txt:150:ORA-00942: table or view does not exist
contact_lab.txt:157:ORA-00942: table or view does not exist
member_lab.txt:75:ORA-02264: name already used by an existing constraint
member_lab.txt:149:ORA-00955: name is already used by an existing object
Unfortunately, the command raises an error when there aren't any files with a qualified extension. It also fails to prepend the file name when there's only one qualified file. As a result of these deficiencies, I've written the following Bash shell script. I've opted to call it .findErrors.bashrc and deploy it in the user's $HOME directory.
#!/bin/bash

# Assign any file filter to the ext variable.
ext=${1}

# Assign the extension or simply use a wildcard for all files.
if [ ! -z ${ext} ]; then
  ext="*.${ext}"
else
  ext="*"
fi

# Assign the number of qualifying files to a variable.
fileNum=$(ls -l ${ext} 2>/dev/null | grep -v ^l | wc -l)

# Evaluate the number of qualifying files and process.
if [ ${fileNum} -eq "0" ]; then
  echo "[0] files exist."
elif [ ${fileNum} -eq "1" ]; then
  fileName=$(ls ${ext})
  find `pwd` -type f | grep -in ${ext} -e ora\- -e pls\- | while IFS='\n' read list; do
    echo "${fileName}:${list}"
  done
else
  find `pwd` -type f | grep -in ${ext} -e ora\- -e pls\- | while IFS='\n' read list; do
    echo "${list}"
  done
fi
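Since it's a standalone script rather than a function, you can run it from the directory that holds your log files; here's a sketch assuming .txt log files:

# Search the .txt log files in the current directory for ORA- and PLS- errors.
bash $HOME/.findErrors.bashrc txt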
You can call the script with or without a file extension to identify errors beginning with ORA- or PLS- in your log files. As always, I hope this helps those looking for a solution.
Preprocessing External Tables
A question that comes up now and again is whether there is a way in Oracle Database 11g Express Edition to mimic some behavior of the Oracle Standard or Enterprise editions. Many of these questions arise because developers want to migrate a behavior they've implemented in Java to the Express Edition. Sometimes the answer is no, but many times the answer is yes. The yes answers come with a how.
This article answers the question: "How can I read an operating system's file directory without an embedded Java Virtual Machine (JVM)?" These developers have read or implemented logic like that found in my earlier "Using DBMS_JAVA to Read External Files" article. The answer is simple. You need to use a preprocessing script inside an external table. That's what you will learn in this article, but if you're not familiar with external tables you should read my other "External Tables" article first.
External tables let you access plain text files with SQL*Loader or Oracle’s proprietary Data Pump files. You typically create external tables with Oracle Data Pump when you’re moving large data sets between database instances.
External tables use Oracle's virtual directories. An Oracle virtual directory is an internal reference in the data dictionary. A virtual directory maps a unique directory name to a physical directory on the local operating system. Virtual directories were simple before Oracle Database 12c gave us the multitenant architecture. In a multitenant database there are two types of virtual directories. One services the schemas of the Container Database (CDB), and it's in the CDB's SYS schema. The other services the schemas of a Pluggable Database (PDB), and it's in the ADMIN schema for the PDB.
You can create a CDB virtual directory as the SYSTEM user with the following syntax in Windows:
SQL> CREATE DIRECTORY upload AS 'C:\Data\Upload';
or, like this in Linux or Unix:
SQL> CREATE DIRECTORY upload AS '/u01/app/oracle';
There are some subtle differences between these two statements. Windows directories or folders start with a logical drive letter, like C:\, D:\, and so forth. Linux and Unix directories start with a mount point, like /u01.
As you can read in the "External Tables" article, you need to change the ownership of external files and directories to the oracle user and the oracle user's default dba group. Likewise, you should change the permissions of the containing directory to 755 (the owner has read, write, and execute privileges; the group and others have read and execute privileges).
The balance of this article is broken into two pieces: configuring a working external table with preprocessing, and troubleshooting data cartridge errors.
External Tables with Preprocessing Example
There are four database steps to creating this example. The first database step requires that you create three virtual directories. The syntax for the three statements is:
SQL> CREATE DIRECTORY upload AS '/u01/app/oracle/upload';
SQL> CREATE DIRECTORY log AS '/u01/app/oracle/log';
SQL> CREATE DIRECTORY preproc AS '/u01/app/oracle/preproc';
The upload directory hosts the files you want to discover for upload. The log directory hosts the log files for the external tables. The preproc directory hosts the executable program, which generates a list of files currently in the upload directory.
Before or after creating the virtual directories, you should create the physical directories in the Linux operating system. The virtual directories can only point to something when it actually exists; they work like Oracle's synonyms that point to other objects in the database. The physical files need to be in a directory tree that is navigable by the oracle user, and the oracle user and its default dba group need to own them.
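For example, you could create the three physical directories in one command; this sketch assumes the /u01/app/oracle base path used above:

# Create the upload, log, and preproc directories.
mkdir -p /u01/app/oracle/{upload,log,preproc}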
You can use the following command to change ownership when you're the root user:
# chown -R oracle:dba /u01/app/oracle
The second database step requires that you grant privileges on the virtual directories to the student user. You can do that with the following syntax:
SQL> GRANT READ ON DIRECTORY upload TO student;
SQL> GRANT READ, WRITE ON DIRECTORY log TO student;
SQL> GRANT READ, EXECUTE ON DIRECTORY preproc TO student;
The upload directory requires read-only privileges. The log directory requires read and write privileges. The read privilege lets it find files and the write privilege lets it append to log files when they already exist. The preproc directory requires read and execute privileges. The read privilege is the same as that explained earlier. The execute privilege lets you run the preprocessing program file.
The third database step requires creating an external table with preprocessing. The following script creates the sample table:
SQL> CREATE TABLE directory_list
  2  ( file_name VARCHAR2(60))
  3  ORGANIZATION EXTERNAL
  4  ( TYPE oracle_loader
  5    DEFAULT DIRECTORY preproc
  6    ACCESS PARAMETERS
  7    ( RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
  8      PREPROCESSOR preproc:'list2dir.sh'
  9      BADFILE 'LOG':'dir.bad'
 10      DISCARDFILE 'LOG':'dir.dis'
 11      LOGFILE 'LOG':'dir.log'
 12      FIELDS TERMINATED BY ','
 13      OPTIONALLY ENCLOSED BY "'"
 14      MISSING FIELD VALUES ARE NULL)
 15    LOCATION ('list2dir.sh'))
 16  REJECT LIMIT UNLIMITED;
Line 5 designates the default directory as preproc because the location of the executable file should be in the preproc directory. Line 8 designates that there is a preprocessing step, and it identifies the virtual directory and physical file name inside single quotes. Line 15 identifies the source file for the external table, which is an executable program.
Next, you need to create the bash file to get and return a directory list. Before you write that file, you need to understand that preprocessing script files don't inherit a $PATH environment variable from Oracle.
That means you might have tried to create a simple bash shell command like the following in a list2dir.sh file:
ls /u01/app/oracle/upload | find . -type f | ls *csv | sed -e 's/\.\///'
When you test this file by calling it from SQL, like this:
SQL> SELECT * FROM directory_list;
It raises the following exception stack:
SELECT * FROM directory_list
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-29400: data cartridge error
KUP-04095: preprocessor command /u01/app/oracle/preprocess/list2dir.sh
encountered error "/u01/app/oracle/preprocess/list2dir.sh: line 1:
ls: No such file or directory
The reason isn’t immediately clear to some developers. The significant error is:
ls: No such file or directory
The error message indicates that a call through Oracle's OCI call interface cannot find the location of the ls program. That occurs because there is no $PATH variable set with a list of values that points to the /usr/bin directory where you find the ls program. You need to prepend /usr/bin before the ls, find, and sed programs.
/usr/bin/ls /u01/app/oracle/upload | /usr/bin/find . -type f | /usr/bin/ls *csv | /usr/bin/sed -e 's/\.\///'
Create a list2dir.sh file in the /u01/app/oracle/preproc directory with the preceding command line. Then, make sure oracle is the owner with a primary dba group and the privileges are 755 on the file. The command to set the privileges is:
# chmod -R 755 /u01/app/oracle/preproc
Having completed that Linux operating system step, you should put some files in the upload directory. You can create empty files with the touch command at the Linux command line for this example.
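For example, the following touch commands create the three empty files that appear in the query results below:

# Create empty test files in the upload directory.
touch /u01/app/oracle/upload/character.csv
touch /u01/app/oracle/upload/transaction_upload.csv
touch /u01/app/oracle/upload/transaction_upload2.csv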
The fourth database step lets you query the external table, which runs the preprocessing program and returns its results as values in the table:
SQL> SELECT * FROM directory_list;
It should return something like this:
FILE_NAME
------------------------------
character.csv
transaction_upload2.csv
transaction_upload.csv
As always, this is written to help those solve problems.