Archive for the ‘Oracle DBA’ tag
Selective Aggregation
Learning Outcomes
- Learn how to combine CASE operators and aggregation functions.
- Learn how to selectively aggregate values.
- Learn how to use SQL to format report output.
Selective aggregation is the combination of the CASE operator and aggregation functions. An aggregation function counts, sums, or averages the numbers that it finds; when you embed the result of a CASE operator inside an aggregation function, you get a selective result. The selectivity is determined by the WHEN clauses of the CASE operator, which work more or less like an IF statement in an imperative programming language.
The prototype for selective aggregation is illustrated with a SUM function below:
SELECT SUM(CASE
             WHEN left_operand = right_operand THEN result
             WHEN left_operand > right_operand THEN result
             WHEN left_operand IN (set of comma-delimited values) THEN result
             WHEN left_operand IN (query of results) THEN result
             ELSE alt_result
           END) AS selective_aggregate
FROM some_table;
A small example lets you see how selective aggregation works. You create a PAYMENT table and a PAYMENT_S sequence for this example, as follows:
-- Create a PAYMENT table.
CREATE TABLE payment
( payment_id      NUMBER
, payment_date    DATE          CONSTRAINT nn_payment_1 NOT NULL
, payment_amount  NUMBER(20,2)  CONSTRAINT nn_payment_2 NOT NULL
, CONSTRAINT pk_payment PRIMARY KEY (payment_id));

-- Create a PAYMENT_S sequence.
CREATE SEQUENCE payment_s;
After you create the table and sequence, you should insert some data. You can match the values below or choose your own values. You should just insert values for a bunch of rows.
You can populate the data with the following anonymous PL/SQL block, which creates 10,000 random rows in the payment table. Please note that you will get different payment dates and amounts each time you run the script.
DECLARE
  -- Create local collection data types.
  TYPE pmtval IS TABLE OF NUMBER(20,2);
  TYPE smonth IS TABLE OF VARCHAR2(3);

  -- Create variable to hold the list of payments.
  payments  PMTVAL := pmtval();

  -- Declare month arrays.
  short_month  SMONTH := smonth('JAN','FEB','MAR','APR','MAY','JUN'
                               ,'JUL','AUG','SEP','OCT','NOV','DEC');

  -- Declare variable values.
  month      VARCHAR2(3);
  year       NUMBER := '2019';
  pmt_date   DATE;
  tpmt_date  VARCHAR2(11);

  -- Declare default number of random payments.
  payment_number  NUMBER := 10000;
BEGIN
  -- Populate payment list.
  FOR i IN 1..payment_number LOOP
    payments.EXTEND;
    SELECT ROUND(dbms_random.value() * 1000,0) || '.'
        || ROUND(dbms_random.value() * 100,0)
    INTO   payments(payments.COUNT)
    FROM   dual;
  END LOOP;

  -- Create and populate payment date and amount.
  FOR i IN 1..payment_number LOOP
    -- Assign random month value.
    month := short_month(dbms_random.value(1,short_month.COUNT));

    -- Assign random day of the month value and assemble random date.
    IF month IN ('JAN','MAR','MAY','JUL','AUG','OCT','DEC') THEN
      pmt_date := ROUND(dbms_random.value(1,31),0) || '-' || month || '-' || year;
    ELSIF month IN ('APR','JUN','SEP','NOV') THEN
      pmt_date := ROUND(dbms_random.value(1,30),0) || '-' || month || '-' || year;
    ELSE
      pmt_date := ROUND(dbms_random.value(1,28),0) || '-' || month || '-' || year;
    END IF;

    -- Insert values into the PAYMENT table.
    INSERT INTO payment
    ( payment_id, payment_date, payment_amount )
    VALUES
    ( payment_s.NEXTVAL, pmt_date, payments(i));
  END LOOP;

  -- Commit the writes.
  COMMIT;
END;
/
After inserting 10,000 rows, you can get an unformatted total with the following query:
-- Query total amount.
SELECT SUM(payment_amount) AS payment_total
FROM   payment;

It outputs the following:

PAYMENT_TOTAL
-------------
   5011091.75

You can nest the result inside the TO_CHAR function to format the output, like

-- Query total formatted amount.
SELECT TO_CHAR(SUM(payment_amount),'999,999,999.00') AS payment_total
FROM   payment;

It outputs the following:

PAYMENT_TOTAL
---------------
   5,011,091.75
Somebody may suggest that you use the PIVOT clause to rotate the data into a summary by month, but the PIVOT clause has limits here. The pivoting key is the numeric month number, so the result columns can only be labeled with those numeric values.
-- Pivoted summaries by numeric monthly value.
SELECT *
FROM  (SELECT EXTRACT(MONTH FROM payment_date) payment_month
       ,      payment_amount
       FROM   payment)
PIVOT (SUM(payment_amount)
       FOR payment_month IN (1,2,3,4,5,6,7,8,9,10,11,12));

It outputs the following:

         1          2          3          4          5          6          7          8          9         10         11         12
---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
 245896.55  430552.36  443742.63  457860.27  470467.18  466370.71  415158.28  439898.72  458998.09  461378.56  474499.22  246269.18
You can use selective aggregation to get the results by a character label, like
SELECT SUM(
         CASE
           WHEN EXTRACT(MONTH FROM payment_date) = 1 AND
                EXTRACT(YEAR FROM payment_date) = 2019 THEN payment_amount
         END) AS "JAN"
,      SUM(
         CASE
           WHEN EXTRACT(MONTH FROM payment_date) = 2 AND
                EXTRACT(YEAR FROM payment_date) = 2019 THEN payment_amount
         END) AS "FEB"
,      SUM(
         CASE
           WHEN EXTRACT(MONTH FROM payment_date) = 3 AND
                EXTRACT(YEAR FROM payment_date) = 2019 THEN payment_amount
         END) AS "MAR"
,      SUM(
         CASE
           WHEN EXTRACT(MONTH FROM payment_date) IN (1,2,3) AND
                EXTRACT(YEAR FROM payment_date) = 2019 THEN payment_amount
         END) AS "1FQ"
,      SUM(
         CASE
           WHEN EXTRACT(MONTH FROM payment_date) = 4 AND
                EXTRACT(YEAR FROM payment_date) = 2019 THEN payment_amount
         END) AS "APR"
FROM   payment;

It outputs the following:

       JAN        FEB        MAR        1FQ        APR
---------- ---------- ---------- ---------- ----------
 245896.55  430552.36  443742.63 1120191.54  457860.27
You can format the output with a combination of the TO_CHAR and LPAD functions. The TO_CHAR function lets you apply a formatting mask, complete with commas and two mandatory digits to the right of the decimal point, and the LPAD function pads the result to a fixed width. The reformatted query looks like
COL JAN FORMAT A13 HEADING "Jan"
COL FEB FORMAT A13 HEADING "Feb"
COL MAR FORMAT A13 HEADING "Mar"
COL 1FQ FORMAT A13 HEADING "1FQ"
COL APR FORMAT A13 HEADING "Apr"
SELECT LPAD(TO_CHAR(SUM(
         CASE
           WHEN EXTRACT(MONTH FROM payment_date) = 1 AND
                EXTRACT(YEAR FROM payment_date) = 2019 THEN payment_amount
         END),'9,999,999.00'),13,' ') AS "JAN"
,      LPAD(TO_CHAR(SUM(
         CASE
           WHEN EXTRACT(MONTH FROM payment_date) = 2 AND
                EXTRACT(YEAR FROM payment_date) = 2019 THEN payment_amount
         END),'9,999,999.00'),13,' ') AS "FEB"
,      LPAD(TO_CHAR(SUM(
         CASE
           WHEN EXTRACT(MONTH FROM payment_date) = 3 AND
                EXTRACT(YEAR FROM payment_date) = 2019 THEN payment_amount
         END),'9,999,999.00'),13,' ') AS "MAR"
,      LPAD(TO_CHAR(SUM(
         CASE
           WHEN EXTRACT(MONTH FROM payment_date) IN (1,2,3) AND
                EXTRACT(YEAR FROM payment_date) = 2019 THEN payment_amount
         END),'9,999,999.00'),13,' ') AS "1FQ"
,      LPAD(TO_CHAR(SUM(
         CASE
           WHEN EXTRACT(MONTH FROM payment_date) = 4 AND
                EXTRACT(YEAR FROM payment_date) = 2019 THEN payment_amount
         END),'9,999,999.00'),13,' ') AS "APR"
FROM   payment;

It displays the formatted output:

Jan           Feb           Mar           1FQ           Apr
------------- ------------- ------------- ------------- -------------
   245,896.55    430,552.36    443,742.63  1,120,191.54    457,860.27
INSERT Statement
Learning Outcomes
- Learn how to use positional- and named-notation in INSERT statements.
- Learn how to use the VALUES clause in INSERT statements.
- Learn how to use subqueries in INSERT statements.
The INSERT statement lets you enter data into tables and views in two ways: via an INSERT statement with a VALUES clause and via an INSERT statement with a query. The VALUES clause takes a list of literal values (strings, numbers, and dates represented as strings), expression values (return values from functions), or variable values.
Query values are results from SELECT statements that are subqueries (covered earlier in this appendix). INSERT statements work with scalar, single-row, and multiple-row subqueries. The list of columns in the VALUES clause or in the SELECT clause of a query (a SELECT list) must map to the positional list of columns that defines the table, which is found in the data dictionary or catalog. As an alternative to that positional list, you can provide a named list of those columns. The named list overrides the positional (or default) order from the data catalog and must include at least all mandatory columns in the table definition. Mandatory columns are those that are not null constrained.
Oracle databases differ from other databases in how they implement the INSERT statement. Oracle doesn't support multiple-row inserts with a VALUES clause, but it does support the default and override signatures qualified in the ANSI SQL standards, and it also provides a multiple-table INSERT statement. This section covers how you enter data with an INSERT statement that is based on a VALUES clause or on a subquery, and it also covers multiple-table INSERT statements.
The INSERT statement has one significant limitation: its default signature. The default signature is the list of columns that defines the table in the data catalog. The list is defined by the position and data type of columns. The CREATE statement defines the initial default signature, and the ALTER statement can change the number, data types, or ordering of columns in the default signature.
The default prototype for an INSERT statement allows for an optional column list that overrides the default list of columns. When you provide the column list, you implement named notation, which is the right way to do it; relying on the insertion order of the columns is a bad idea. An INSERT statement without a list of column names is a positional-notation statement. Positional notation is risky because somebody can alter the column order, and previously written INSERT statements will then break or put data in the wrong columns.
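To make the difference concrete, here is a small sketch that uses a hypothetical contact_note table created only for the illustration; it is not part of the book's sample schema.

-- Positional notation: the VALUES list must match the column order
-- recorded in the data catalog, so it breaks if that order changes.
CREATE TABLE contact_note
( note_id    NUMBER
, note_text  VARCHAR2(100)
, noted_on   DATE );

INSERT INTO contact_note
VALUES ( 1, 'Called customer', SYSDATE );

-- Named notation: the override column list documents the intent and
-- keeps working even if columns are reordered or optional columns
-- are added later.
INSERT INTO contact_note
( note_id, noted_on, note_text )
VALUES ( 2, SYSDATE, 'Sent follow-up email' );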
Like methods in OOPLs, an INSERT statement without the optional column list constructs an instance (or row) of the table using the default constructor. The override constructor for a row is defined by any INSERT statement when you provide an optional column list. That’s because it overrides the default constructor.
The generic prototype for an INSERT statement is confusing when it tries to capture both the VALUES clause and the result set from a query. Therefore, I’ve opted to provide two generic prototypes.
Insert by value
The first uses the VALUES clause:
INSERT INTO table_name
[( column1, column2, column3, ...)]
VALUES
( value1, value2, value3, ...);
Notice that the prototype for an INSERT statement with the result set from a query doesn’t use the VALUES clause at all. A parsing error occurs when the VALUES clause and query both occur in an INSERT statement.
The second prototype uses a query and excludes the VALUES clause. The subquery may return one to many rows of data. The operative rule is that all columns in the query return the same number of rows of data, because query results should be rectangles—rectangles made up of one to many rows of columns.
Insert by subquery
Here’s the prototype for an INSERT statement that uses a subquery:
INSERT INTO table_name
[( column1, column2, column3, ...)]
( SELECT value1, value2, value3, ...
  FROM   table_name
  WHERE  ...);
A query, or SELECT statement, returns a SELECT list. The SELECT list is the list of columns, and it’s evaluated by position and data type. The SELECT list must match the definition of the table or the override signature provided.
Default signatures present a risk of data corruption through insertion anomalies, which occur when you enter bad data in tables. Mistakes transposing or misplacing values can occur more frequently with a default signature, because the underlying table structure can change. As a best practice, always use named notation by providing the optional list of values; this should help you avoid putting the right data in the wrong place.
The following subsections provide examples that use the default and override syntax for INSERT statements in Oracle databases. The subsections also cover multiple-table INSERT statements and a RETURNING INTO clause, which is an extension of the ANSI SQL standard. Oracle uses the RETURNING INTO clause to manage large objects, to return autogenerated identity column values, and to support some of the features of Oracle’s dynamic SQL. Note that Oracle also supports a bulk INSERT statement, which requires knowledge of PL/SQL.
Insert by Values
An INSERT statement with a VALUES clause can insert only one row at a time in an Oracle database. Other databases, like Microsoft SQL Server and MySQL, let you insert a comma-delimited set of row values inside the VALUES clause. Oracle adheres to the ANSI standard, which supports single-row inserts with a VALUES clause and multiple-row inserts with a subquery.
Inserting with the VALUES clause is the most common type of INSERT statement, and it's most useful for single-row inserts.
You typically use this type of INSERT statement when working with data entered through end-user web forms. In some cases, users can enter more than one row of data using a form, which occurs, for example, when a user places a meal order in a restaurant and the meal and drink are treated as order items. The restaurant order-entry system would enter a single row in the order table and two rows in the order_item table (one for the meal and the other for the drink). PL/SQL programmers usually handle the insertion of related rows like these inside a loop structure, often with dynamic INSERT statements written as NDS (Native Dynamic SQL) statements, as sketched below.
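A minimal sketch of that pattern follows. The orders and order_items tables, their sequences, and the collection of item names are all hypothetical stand-ins for whatever the web form submits; a static INSERT would work just as well, but the child rows use an NDS statement to mirror the dynamic approach described above.

DECLARE
  /* Hypothetical list of items from one order form submission. */
  TYPE item_list IS TABLE OF VARCHAR2(20);
  lv_items     ITEM_LIST := item_list('Cheeseburger','Lemonade');
  lv_order_id  NUMBER;
BEGIN
  /* Insert the single parent row and capture its new key. */
  INSERT INTO orders
  ( order_id, ordered_on )
  VALUES
  ( orders_s.NEXTVAL, SYSDATE )
  RETURNING order_id INTO lv_order_id;

  /* Insert one child row per ordered item with an NDS statement. */
  FOR i IN 1..lv_items.COUNT LOOP
    EXECUTE IMMEDIATE
      'INSERT INTO order_items ( order_item_id, order_id, item_name ) '
   || 'VALUES ( order_items_s.NEXTVAL, :order_id, :item_name )'
    USING lv_order_id, lv_items(i);
  END LOOP;

  COMMIT;
END;
/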
Oracle supports only a single-row insert through the VALUES clause. Multiple-row inserts require an INSERT statement from a query.
The VALUES clause of an INSERT statement accepts scalar values, such as strings, numbers, and dates. It also accepts calls to arrays, lists, or user-defined object types, which are called flattened objects. Oracle supports VARRAY as arrays and nested tables as lists. They can both contain elements of a scalar data type or user-defined object type.
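As a hedged sketch of the flattened-object case, the following creates a hypothetical nested table type and a table that uses it as a column, then supplies the collection through its constructor in the VALUES clause:

-- A nested table type of phone numbers and a table that stores it.
CREATE OR REPLACE
  TYPE phone_list IS TABLE OF VARCHAR2(20);
/

CREATE TABLE contact_phone
( contact_id  NUMBER
, phones      PHONE_LIST )
NESTED TABLE phones STORE AS contact_phone_tab;

-- The VALUES clause accepts a call to the collection's constructor.
INSERT INTO contact_phone
( contact_id, phones )
VALUES
( 1, phone_list('801-555-1234','801-555-9876') );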
The following sections discuss how you use the VALUES clause with scalar data types, how you convert various data types, and how you use the VALUES clause with nested tables and user-defined object data types.
Inserting Scalar Data Types
This section shows you how to INSERT scalar values into tables.
The basic syntax for an INSERT statement with a VALUES clause can include an optional override signature between the table name and VALUES keyword. With an override signature, you designate the column names and the order of entry for the VALUES clause elements. Without an override signature, the INSERT signature checks the definition of the table in the database catalog. The positional order of the column in the data catalog defines the positional, or default, signature for the INSERT statement. As shown previously, you can discover the structure of a table in Oracle with the DESCRIBE command issued at the SQL*Plus command line:
DESCRIBE table_name
You’ll see the following after describing the rental table in SQL*Plus:
Name                                  Null?    Type
------------------------------------- -------- --------
RENTAL_ID                             NOT NULL NUMBER
CUSTOMER_ID                           NOT NULL NUMBER
CHECK_OUT_DATE                        NOT NULL DATE
RETURN_DATE                                    DATE
CREATED_BY                            NOT NULL NUMBER
CREATION_DATE                         NOT NULL DATE
LAST_UPDATED_BY                       NOT NULL NUMBER
LAST_UPDATE_DATE                      NOT NULL DATE
The rental_id column is a surrogate key, or an artificial numbering sequence. The combination of the customer_id and check_out_date columns serves as a natural key because a DATE data type is a date-time value. If it were only a date, the customer would be limited to a single entry for each day, and limiting customer rentals to one per day isn’t a good business model.
The basic INSERT statement would require that you look up the next sequence value before using it. You should also look up the surrogate key column value that maps to the row where your unique customer is stored in the contact table. For this example, assume the following facts:
- Next sequence value is 1086
- Customer’s surrogate key value is 1009
- Current date-time is represented by the value from the SYSDATE function
- Return date is the fifth date from today
- User adding and updating the row has a primary (surrogate) key value of 1
- Creation and last update date are the value returned from the SYSDATE function.
An INSERT statement must include a list of values that match the positional data types of the database catalog, or it must use an override signature for all mandatory columns.
You can now write the following INSERT statement, which relies on the default signature:
SQL> INSERT INTO rental
  2  VALUES
  3  ( 1086
  4  , 1009
  5  , SYSDATE
  6  , TRUNC(SYSDATE + 5)
  7  , 1
  8  , SYSDATE
  9  , 1
 10  , SYSDATE);
If you weren’t using SYSDATE for the date-time value on line 5, you could manually enter a date-time with the following Oracle proprietary syntax:
  5  , TO_DATE('15-APR-2011 12:53:01','DD-MON-YYYY HH24:MI:SS')

The TO_DATE function is an Oracle-specific function. The generic conversion function would be the CAST function. The problem with a CAST function by itself is that it can't handle a format mask other than the database defaults ('DD-MON-RR' or 'DD-MON-YYYY'). For example, consider this syntax:

  5  , CAST('15-APR-2011 12:53:02' AS DATE)

It raises the following error:

  5  , CAST('15-APR-2011 12:53:02' AS DATE) FROM dual
       *
ERROR at line 1:
ORA-01830: date format picture ends before converting entire input string

You actually need to double cast this type of format mask when you want to store it as a DATE data type. The working syntax casts the date-time string as a TIMESTAMP data type before recasting the TIMESTAMP to a DATE, like

  5  , CAST(CAST('15-APR-2011 12:53:02' AS TIMESTAMP) AS DATE)
Before you could write the preceding INSERT statement, you would need to run some queries to find the values. You would secure the next value from a rental_s1 sequence in an Oracle database with the following command:
SQL> SELECT rental_s1.NEXTVAL FROM dual;

This assumes two things, because sequences are separate objects from tables: first, that the values in a table's surrogate key column always come from that table's dedicated sequence; and second, that a sequence value is inserted only once into the table as a primary key value.
In place of a query that finds the next sequence value, you would simply use a call against the .nextval pseudocolumn in the VALUES clause. You would replace line 3 with this:

  3  ( rental_s1.NEXTVAL

The .nextval pseudocolumn initializes (instantiates) a sequence in the current session. After a call to a sequence with the .nextval pseudocolumn, you can also call back the most recent sequence value with the .currval pseudocolumn.
Assuming the following query returns a single row, you can use the contact_id value as the customer_id value in the rental table:

SQL> SELECT contact_id
  2  FROM   contact
  3  WHERE  last_name = 'Potter'
  4  AND    first_name = 'Harry';
Taking three steps like this is unnecessary, however, because you can call the next sequence value and find the valid customer_id value inside the VALUES clause of the INSERT statement. The following INSERT statement uses an override signature and calls the next sequence value on line 11. It also looks up the correct customer_id value with a scalar subquery on lines 12 through 15.
SQL> INSERT INTO rental
  2  ( rental_id
  3  , customer_id
  4  , check_out_date
  5  , return_date
  6  , created_by
  7  , creation_date
  8  , last_updated_by
  9  , last_update_date )
 10  VALUES
 11  ( rental_s1.NEXTVAL
 12  ,(SELECT contact_id
 13    FROM   contact
 14    WHERE  last_name = 'Potter'
 15    AND    first_name = 'Harry')
 16  , SYSDATE
 17  , TRUNC(SYSDATE + 5)
 18  , 1
 19  , SYSDATE
 20  , 3
 21  , SYSDATE);
When a subquery returns two or more rows because the conditions in the WHERE clause failed to find and return a unique row, the INSERT statement would fail with the following message:
,(SELECT contact_id
 *
ERROR at line 3:
ORA-01427: single-row subquery returns more than one row
In fact, the statement could fail when there are two or more "Harry Potter" names in the data set, because three columns make up the natural key of the contact table. The third column is the member_id, and all three should be qualified inside the scalar subquery to guarantee that it returns only one row of data, as sketched below.
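A hedged sketch of that fully qualified scalar subquery follows; the member_id value is a hypothetical one:

,(SELECT contact_id
  FROM   contact
  WHERE  last_name  = 'Potter'
  AND    first_name = 'Harry'
  AND    member_id  = 1001)  -- hypothetical member_id value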
Handling Oracle’s Large Objects
This section shows you how to INSERT large object values into tables.
Oracle’s large objects present a small problem when they’re not null constrained in the table definition. You must insert empty object containers or references when you perform an INSERT statement.
Assume, for example, that you have the following three large object columns in a table:
Name                            Null?    Type
------------------------------- -------- -----------------------
ITEM_DESC                       NOT NULL CLOB
ITEM_ICON                       NOT NULL BLOB
ITEM_PHOTO                               BINARY FILE LOB
The item_desc column uses a CLOB (Character Large Object) data type, and it is a required column; it could hold a lengthy description of a movie, for example. The item_icon column uses a BLOB (Binary Large Object) data type, and it is also a required column. It could hold a graphic image. The item_photo column uses a binary file (an externally managed file) but is fortunately null allowed or an optional column in any INSERT statement. It can hold a null or a reference to an external graphic image.
Oracle provides two functions that let you enter an empty large object, and they are:
EMPTY_BLOB()
EMPTY_CLOB()

Although you could insert a null value in the item_photo column, you can also enter a reference to an Oracle database virtual directory file. Here's the syntax to enter a valid BFILE name with the BFILENAME function call:

 10  , BFILENAME('VIRTUAL_DIRECTORY_NAME', 'file_name.png')
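Putting the pieces together, here is a hedged sketch of a complete INSERT statement; it assumes the item table holds only an item_id column plus the three large-object columns described above, an item_s sequence, and a virtual directory named ITEM_FILES:

INSERT INTO item
( item_id, item_desc, item_icon, item_photo )
VALUES
( item_s.NEXTVAL
, EMPTY_CLOB()
, EMPTY_BLOB()
, BFILENAME('ITEM_FILES','item_photo.png') );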
You can insert a large character or binary stream into BLOB and CLOB data types by using the stored procedures and functions available in the dbms_lob package. Chapter 13 covers the dbms_lob package.
You can use an empty_clob function or a string literal up to 32,767 bytes long in a VALUES clause. You must use the dbms_lob package when you insert a string that is longer than 32,767 bytes. That also changes the nature of the INSERT statement and requires that you append the RETURNING INTO clause. Here’s the prototype for this Oracle proprietary syntax:
INSERT INTO some_table
[( column1, column2, column3, ...)]
VALUES
( value1, value2, value3, ...)
RETURNING column1 INTO local_variable;
The local_variable is a variable in a procedural programming language, such as PL/SQL. It lets you write a character stream into the returned CLOB column locator or a binary stream into the returned BLOB column locator, as sketched below.
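A minimal PL/SQL sketch of that pattern, reusing the hypothetical item table and item_s sequence from the previous example, looks like this:

DECLARE
  /* Local variable that receives the CLOB locator. */
  lv_desc  CLOB;
  lv_text  VARCHAR2(200) := 'Imagine a description longer than a string literal allows.';
BEGIN
  /* Insert an empty CLOB and return its locator into the local variable. */
  INSERT INTO item
  ( item_id, item_desc, item_icon )
  VALUES
  ( item_s.NEXTVAL, EMPTY_CLOB(), EMPTY_BLOB() )
  RETURNING item_desc INTO lv_desc;

  /* Stream the character data into the CLOB through its locator. */
  dbms_lob.writeappend(lv_desc, LENGTH(lv_text), lv_text);
  COMMIT;
END;
/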
Capturing the Last Sequence Value
This section shows you how to INSERT a new sequence in a parent table and a copy of that new sequence as a foreign key value in a child table.
Sometimes you insert into a series of tables in the scope of a transaction. In this scenario, one table gets the new sequence value (with a call to sequence_name.nextval) and enters it as the surrogate primary key, and another table needs a copy of that primary key to enter into a foreign key column. While scalar subqueries can solve this problem, Oracle provides the .currval pseudocolumn for this purpose.
The steps to demonstrate this behavior require a parent table and a child table. The parent table is defined as follows:
Name                                  Null?    Type
------------------------------------- -------- --------------
PARENT_ID                             NOT NULL NUMBER
PARENT_NAME                                    VARCHAR2(10)
The parent_id column is the primary key for the parent table. You include the parent_id column in the child table. In the child table, the parent_id column holds a copy of a valid primary key column value as a foreign key to the parent table.
Name                                  Null?    Type
------------------------------------- -------- --------------
CHILD_ID                              NOT NULL NUMBER
PARENT_ID                                      NUMBER
PARENT_NAME                                    VARCHAR2(10)
After creating the two tables, you can manage inserts into them with the .nextval and .currval pseudocolumns. The sequence calls with the .nextval pseudocolumn insert primary key values, and the sequence calls with the .currval pseudocolumn insert foreign key values.
You would perform these two INSERT statements as a group:
SQL> INSERT INTO parent
  2  VALUES
  3  ( parent_s1.NEXTVAL
  4  ,'One Parent');

SQL> INSERT INTO child
  2  VALUES
  3  ( child_s1.NEXTVAL
  4  , parent_s1.CURRVAL
  5  ,'One Child');
The .currval pseudocolumn for any sequence fetches the value placed in memory by a call to the .nextval pseudocolumn. Any attempt to call the .currval pseudocolumn before the .nextval pseudocolumn raises an ORA-02289 exception. The text message for that error says the sequence doesn't exist, which actually means that it doesn't exist in the scope of the current session. Line 4 of the insert into the child table depends on line 3 of the insert into the parent table.
You can use comments in INSERT statements to map to columns in the table. For example, the following shows the technique for the child table from the preceding example:
SQL> INSERT INTO child
  2  VALUES
  3  ( child_s1.NEXTVAL   -- CHILD_ID
  4  , parent_s1.CURRVAL  -- PARENT_ID
  5  ,'One Child')        -- CHILD_NAME
  6  /
Comments on the lines of the VALUES clause identify the columns where the values are inserted. A semicolon doesn’t execute this statement, because a trailing comment would trigger a runtime exception. You must use the semicolon or forward slash on the line below the last VALUES element to include the last comment.
Insert by Subquery Results
An INSERT statement with a subquery can insert one to many rows of data into any table provided the SELECT-list of the subquery matches the data dictionary definition of the table or the named-notation list provided by the INSERT statement. An INSERT statement with a subquery cannot have a VALUES keyword in it, or it raises an error.
The generic prototype for an INSERT statement with a subquery follows the pattern of the INSERT-by-value prototype with one exception: it excludes the VALUES keyword and replaces the comma-delimited list of values with the SELECT list of a subquery. If you want to rely on the positional definition of the table, exclude the optional comma-delimited list of column names. That optional column list is necessary when you want to insert columns in a different order or exclude optional columns.
The generic prototype is:
INSERT INTO table_name [( column1, column2, column3, ...)] ( SELECT value1, value2, value3, ... FROM table_name WHERE ...); |
The subquery, or SELECT statement, must return a SELECT-list that maps to the column definition in the data dictionary or the optional comma-delimited column list.
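As a hedged example, the following copies rows from the payment table created earlier in this archive into a hypothetical payment_archive table that shares the same column definitions:

INSERT INTO payment_archive
( payment_id, payment_date, payment_amount )
( SELECT payment_id
  ,      payment_date
  ,      payment_amount
  FROM   payment
  WHERE  EXTRACT(YEAR FROM payment_date) = 2019 );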
Oracle Container User
After you create and provision the Oracle Database 21c Express Edition (XE), you can create a c##student container user with the following two-step process.
- Create a c##student Oracle user account with the following command:
CREATE USER c##student IDENTIFIED BY student DEFAULT TABLESPACE users QUOTA 200M ON users TEMPORARY TABLESPACE temp;
- Grant necessary privileges to the newly created c##student user:
GRANT CREATE CLUSTER, CREATE INDEXTYPE, CREATE OPERATOR , CREATE PROCEDURE, CREATE SEQUENCE, CREATE SESSION , CREATE TABLE, CREATE TRIGGER, CREATE TYPE , CREATE VIEW TO c##student;
As always, I hope this helps those looking for how to do something that’s less than clear because everybody uses tools.
Tiny SQL Developer
The first time you launch SQL Developer, you may see a very small, or tiny, display on the screen. With some high-resolution screens the text is unreadable, and unless you manually configure the sqldeveloper shortcut, you generally can't use it. On my virtualized environment on a 27″ screen, the interface was far too small to work with.
As an Administrator user, right-click the SQLDeveloper shortcut, choose Properties, and click the Compatibility tab. By default, the Compatibility Mode box is unchecked and Windows 8 is displayed in the grayed-out select list.
Check the Compatibility Mode box, and the select list will no longer be grayed out. Click the select list and choose Windows 7.
After that change, click the Change high DPI settings button, which opens another dialog box.
Check the Override high DPI scaling behavior check box. That enables the grayed-out Scaling performed by select box; click it and choose the System option.
Click the OK button on the nested SQLDeveloper Properties dialog box, then click the Apply button and the OK button on the SQLDeveloper Properties dialog. You will see a workable SQL Developer interface when you launch the program through your modified shortcut.
Protocol adapter error
One of the errors that defeats a lot of new users who install the Oracle Database on the Windows operating system is a two-step event. The first step occurs when you try to connect to the database and it raises the following error:
SQL*Plus: Release 18.0.0.0.0 - Production on Fri Jan 7 21:00:42 2022
Version 18.4.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

ERROR:
ORA-12541: TNS:no listener
The second step may occur after you get the “no listener” error when you try to start the Oracle listener and it fails to start. The Oracle listener control command is:
lsnrctl start

It returns the following error:

LSNRCTL for 64-bit Windows: Version 18.0.0.0.0 - Production on 07-JAN-2022 21:02:20

Copyright (c) 1991, 2018, Oracle.  All rights reserved.

Starting tnslsnr: please wait...

Unable to OpenSCManager: err=5
TNS-12560: TNS:protocol adapter error
TNS-00530: Protocol adapter error
The problem generally lies in two configuration files: the listener.ora and tnsnames.ora files. It typically occurs when the developer fails to define localhost in the Windows operating system hosts configuration file. The chain of events that causes these errors can be avoided when the user puts the following two lines:
127.0.0.1       localhost
::1             localhost

in the following hosts file:

C:\Windows\system32\drivers\etc\hosts
You can typically avoid these errors when you configure the hosts configuration file correctly before installing the Oracle Database. That's because the Oracle database installation will then use the localhost keyword instead of the current, and typically DHCP-assigned, IP address.
The loss of connectivity errors typically occur when the IP address changes after the installation. DHCP IP addresses often change as machines disconnect and reconnect to a network.
You can fix a DHCP IP installation of an Oracle database by editing the listener.ora and tnsnames.ora files. You replace the IP addresses with the localhost keyword.
The listener.ora and tnsnames.ora files look like the following for an Oracle Database 21c Express Edition (provided you installed it in a C:\app\username directory):
listener.ora
# listener.ora Network Configuration File: C:\app\username\product\21.0.0\dbhomeXE\NETWORK\ADMIN\listener.ora
# Generated by Oracle configuration tools.

DEFAULT_SERVICE_LISTENER = XE

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = CLRExtProc)
      (ORACLE_HOME = C:\app\username\product\21.0.0\dbhomeXE)
      (PROGRAM = extproc)
      (ENVS = "EXTPROC_DLLS=ONLY:C:\app\username\product\21.0.0\dbhomeXE\bin\oraclr21.dll")
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )
tnsnames.ora
# tnsnames.ora Network Configuration File: C:\app\mclaughlinm\product\21.0.0\dbhomeXE\NETWORK\ADMIN\tnsnames.ora
# Generated by Oracle configuration tools.

XE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = XE)
    )
  )

LISTENER_XE =
  (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))

ORACLR_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
    (CONNECT_DATA =
      (SID = CLRExtProc)
      (PRESENTATION = RO)
    )
  )
As always, I hope this helps those looking for a solution to something that can take more time than it should to fix.
Oracle EBS Forms
Somebody wanted to know how to discover the difference between a customized and a generic set of forms in an Oracle EBS solution without manually cataloging them. That's simple: create a vanilla instance from the media, designate the customized instance as production, and create database links for them respectively as @vanilla and @production.
After doing that, here's a query that will return only the customized forms:
COL application_short_name FORMAT A20 HEADING "Application|Short Name"
COL form_name              FORMAT A20 HEADING "Form Name"
COL basepath               FORMAT A12 HEADING "Basepath|Product|Top"
SET PAGESIZE 9999
SELECT a.application_short_name
,      f.form_name
,      a.basepath
FROM   fnd_form@production f INNER JOIN fnd_application@production a
ON     f.application_id = a.application_id
WHERE  NOT EXISTS
         (SELECT NULL
          FROM   fnd_form@vanilla vf
          WHERE  f.application_id = vf.application_id
          AND    f.form_id = vf.form_id
          AND    f.form_name = vf.form_name)
ORDER BY form_name;
As always, I hope this helps.
Linux sqlplus wrapper
Here's a quick way to ensure you can use the up arrow and other navigation keys when using the sqlplus command-line interface: wrap sqlplus with the rlwrap utility. You can just add the following function to your .bashrc file.
sqlplus ()
{
  # Find the rlwrap utility, if it's installed, and strip its path.
  path=`which rlwrap 2>/dev/null`
  file=''
  if [ -n "${path}" ]; then
    file=${path##/*/}
  fi

  # Wrap sqlplus with rlwrap when it's available; otherwise warn and
  # call sqlplus directly.
  if [ -n "${file}" ] && [[ ${file} = "rlwrap" ]]; then
    rlwrap sqlplus "${@}"
  else
    echo "Command-line history unavailable: Install the rlwrap package."
    $ORACLE_HOME/bin/sqlplus "${@}"
  fi
}
As always, I hope this helps those looking for solutions.
Oracle’s Sparse Lists
Oracle's PL/SQL programming language is really quite nice. I've written 8 books on it and still have fun coding in it. One nasty little detail about Oracle's lists, introduced in Oracle 8 as PL/SQL Tables according to their documentation, is that they rely on sequential numeric indexes. Unfortunately, Oracle lists support a DELETE method, which can create gaps in those sequential indexes.
Oracle calls a sequence without gaps densely populated and a sequence with gaps sparsely populated. Gaps can cause problems when PL/SQL code inadvertently removes elements at the beginning, end, or somewhere in the middle of a list. That's because a program can then pass the sparsely populated list as a parameter to another stored function or procedure where the developer may traverse the list in a for loop. When the index sequence has gaps, that traversal may raise an exception like this:
DECLARE
*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at line 20
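A minimal sketch of the kind of loop that raises that error follows; it assumes the list collection type created below and indexes the collection by a simple counter even though the first element has been deleted:

DECLARE
  lv_strings  LIST := list('one','two','three');
BEGIN
  lv_strings.DELETE(1);                    -- create a gap at index 1
  FOR i IN 1..lv_strings.COUNT LOOP        -- COUNT is now 2, but index 1 is gone
    dbms_output.put_line(lv_strings(i));   -- raises ORA-01403 when i = 1
  END LOOP;
END;
/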
Oracle’s myriad built-in libraries don’t offer a function to compact a sparsely populated list into a densely populated list. This post provides a compact stored procedure that converts a sparsely populated list to a densely populated list.
The first step to using the compact stored procedure requires that you create an object type in SQL, like this list of 20-character strings:
DROP TYPE list;

CREATE OR REPLACE
  TYPE list IS TABLE OF VARCHAR2(20);
/
Now, you can implement the compact stored procedure by passing the user-defined type as its sole parameter.
CREATE OR REPLACE PROCEDURE compact
( sparse IN OUT LIST ) IS

  /* Declare local variables. */
  iterator  NUMBER;  -- Leave iterator as null.

  /* Declare new list. */
  dense     LIST := list();

BEGIN

  /* Initialize the iterator with the starting value, which is necessary
     because the first element of the original list could have been deleted
     in earlier operations. Setting the initial iterator value to the first
     numeric index value ensures you start at the lowest available index
     value. */
  iterator := sparse.FIRST;

  /* Convert sparsely populated list to densely populated. */
  WHILE (iterator <= sparse.LAST) LOOP
    dense.EXTEND;
    dense(dense.COUNT) := sparse(iterator);
    iterator := sparse.NEXT(iterator);
  END LOOP;

  /* Replace the input parameter with the compacted list. */
  sparse := dense;

END;
/
Before we test the compact stored procedure, let's create a deleteElement stored procedure for our testing:
CREATE OR REPLACE PROCEDURE deleteElement
( sparse   IN OUT LIST
, element  IN     NUMBER ) IS
BEGIN

  /* Delete a value. */
  sparse.DELETE(element);

END;
/
Now, let’s use an anonymous block to test compacting a sparsely populated list into a densely populated list. The test program will remove the first, last, and one element in the middle before printing the sparsely populated list’s index and string values. This test will show you gaps in the remaining non-sequential index values.
After you see the gaps, the test program compacts the remaining list values into a new densely populated list. It then prints the new index values with the data values.
DECLARE

  /* Declare a seven-item list. */
  lv_strings  LIST := list('one','two','three','four','five','six','seven');

BEGIN

  /* Check size of list. */
  dbms_output.put_line('Print initial list size: ['||lv_strings.COUNT||']');
  dbms_output.put_line('===================================');

  /* Delete a value. */
  deleteElement(lv_strings,lv_strings.FIRST);
  deleteElement(lv_strings,3);
  deleteElement(lv_strings,lv_strings.LAST);

  /* Check size of list. */
  dbms_output.put_line('Print modified list size: ['||lv_strings.COUNT||']');
  dbms_output.put_line('Print max index and size: ['||lv_strings.LAST||']['||lv_strings.COUNT||']');
  dbms_output.put_line('===================================');
  FOR i IN 1..lv_strings.LAST LOOP
    IF lv_strings.EXISTS(i) THEN
      dbms_output.put_line('List list index and item: ['||i||']['||lv_strings(i)||']');
    END IF;
  END LOOP;

  /* Call a procedure by passing the current sparse collection;
     the procedure returns a dense collection. */
  dbms_output.put_line('===================================');
  dbms_output.put_line('Compacting list.');
  compact(lv_strings);
  dbms_output.put_line('===================================');

  /* Print the new maximum index value and list size. */
  dbms_output.put_line('Print new index and size: ['||lv_strings.LAST||']['||lv_strings.COUNT||']');
  dbms_output.put_line('===================================');
  FOR i IN 1..lv_strings.COUNT LOOP
    dbms_output.put_line('List list index and item: ['||i||']['||lv_strings(i)||']');
  END LOOP;
  dbms_output.put_line('===================================');

END;
/
It produces output, like:
Print initial list size: [7]
===================================
Print modified list size: [4]
Print max index and size: [6][4]
===================================
List list index and item: [2][two]
List list index and item: [4][four]
List list index and item: [5][five]
List list index and item: [6][six]
===================================
Compacting list.
===================================
Print new index and size: [4][4]
===================================
List list index and item: [1][two]
List list index and item: [2][four]
List list index and item: [3][five]
List list index and item: [4][six]
===================================
You can extend this concept by creating user-defined types with multiple attributes, which are essentially lists of tuples (to draw on Pythonic lingo), as sketched below.
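A hedged sketch of that idea follows; the pair object type and pair_list collection are hypothetical, and the compact procedure's pattern carries over by simply declaring its parameter with the new collection type:

-- An object type with two attributes and a collection of that type.
CREATE OR REPLACE
  TYPE pair IS OBJECT
  ( name    VARCHAR2(20)
  , amount  NUMBER );
/

CREATE OR REPLACE
  TYPE pair_list IS TABLE OF pair;
/

-- Each element is now a tuple of a name and an amount.
DECLARE
  lv_pairs  PAIR_LIST := pair_list(pair('one',1.00),pair('two',2.00));
BEGIN
  dbms_output.put_line('['||lv_pairs(2).name||']['||lv_pairs(2).amount||']');
END;
/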
Design Database Triggers
Designing and implementing database triggers is always interesting and sometimes not easy. I believe most of the difficulty comes from not implementing the triggers in a way that lets you perform single use case testing. For example, a trigger typically fires as a result of an INSERT, UPDATE, or DELETE statement. That means you can’t test the trigger’s logic independently from the SQL statement.
This post shows you how to implement an Oracle Database trigger that ensures a last_name field always has a hyphen when it is composed of two surnames. It also shows you how to build debugging directly into the trigger with Oracle’s conditional compilation logic (covered in my Oracle Database 12c PL/SQL Programming book on pages 170-171) while writing the debug comments to a debug logging table.
The example works through the design in stages. To begin the process, you need to define a zeta table and zeta_s sequence (no magic in the table or sequence names).
-- Create the zeta demo table.
CREATE TABLE zeta
( zeta_id    NUMBER
, last_name  VARCHAR2(30));

-- Create the zeta_s demo sequence.
CREATE SEQUENCE zeta_s;
Next, you write a basic on insert row-level (or, row-by-row) trigger. The following white_space trigger only fires when the last_name column value contains a whitespace between two components of a last name.
The code follows below:
/*
|| Create an on insert trigger to implement the desired
|| logic, which replaces a whitespace between two portions
|| of a last_name column.
*/
CREATE OR REPLACE TRIGGER white_space
  BEFORE INSERT
  ON zeta
  FOR EACH ROW
  WHEN (REGEXP_LIKE(NEW.last_name,' '))
BEGIN
  :NEW.last_name := REGEXP_REPLACE(:NEW.last_name,' ','-',1,1);
END white_space;
/
You can now test the white_space trigger with these two INSERT statements:
-- Two test insert statements.
INSERT INTO zeta
( zeta_id, last_name )
VALUES
( zeta_s.NEXTVAL, 'Baron-Schwartz' );

INSERT INTO zeta
( zeta_id, last_name )
VALUES
( zeta_s.NEXTVAL, 'Zeta Jones' );

After running the two INSERT statements, you can query the last_name from the zeta table and verify that there's always a hyphen between the two components of the last name, like:

SELECT * FROM zeta;

It should display:

   ZETA_ID LAST_NAME
---------- ------------------------------
         1 Baron-Schwartz
         2 Zeta-Jones
However, the business logic is violated when you run an UPDATE statement, like:
-- Update data and break the business rule.
UPDATE zeta
SET    last_name = 'Zeta Jones'
WHERE  last_name = 'Zeta-Jones';

A fresh query like

SELECT * FROM zeta;

should display the following, which shows that the UPDATE statement was allowed to put in a non-conforming last name value:

   ZETA_ID LAST_NAME
---------- ------------------------------
         1 Baron-Schwartz
         2 Zeta Jones
You need to expand the role of your white_space trigger to prevent this undesired outcome by enabling it to fire on an insert or update event. You do that by adding OR UPDATE on line 8 below. The modified white_space trigger for both SQL events is:
 1  /*
 2  || Create an on insert or update trigger to implement the
 3  || desired logic, which replaces a whitespace between two
 4  || portions of a last_name column.
 5  */
 6  CREATE OR REPLACE TRIGGER white_space
 7    BEFORE INSERT
 8    OR UPDATE ON zeta
 9    FOR EACH ROW
10    WHEN (REGEXP_LIKE(NEW.last_name,' '))
11  BEGIN
12    :NEW.last_name := REGEXP_REPLACE(:NEW.last_name,' ','-',1,1);
13  END white_space;
14  /
Having made the change on line 8 above, you can now retest the white_space trigger with the following UPDATE statement. Note that its WHERE clause looks for the value with a whitespace, because the earlier UPDATE statement ran while the INSERT-only white_space trigger was in place and its change was allowed.
UPDATE zeta
SET    last_name = 'Zeta-Jones'
WHERE  last_name = 'Zeta Jones';

Re-query the zeta table:

SELECT * FROM zeta;

It should display the following values that meet the business rule:

   ZETA_ID LAST_NAME
---------- ------------------------------
         1 Baron-Schwartz
         2 Zeta-Jones
The modified white_space trigger doesn't let us capture debug information, and it doesn't let us see whether the SQL event is an INSERT or UPDATE statement. It also fails to differentiate between outcomes from an INSERT and an UPDATE event.
You can fix this by:
- Creating a debug_log table that captures debugging information.
- Creating a debug_procedure to format diagnostic strings.
- Using the Data Manipulation Language (DML) event functions (covered in my Oracle Database 12c PL/SQL Programming book's Table 12-3 on page 533) to track whether the event is an INSERT or UPDATE statement.
The three steps to make the trigger capable of different outcomes and debugging are:
- The following creates a debug_log table:
-- Create the debug_log table. CREATE TABLE debug_log ( message VARCHAR2(78));
- The following creates a debug procedure:
-- Create a debug logging procedure.
CREATE OR REPLACE PROCEDURE debug
( event         VARCHAR2 := 'Unknown'
, location      VARCHAR2
, column_value  VARCHAR2 ) IS

  /* Local message variable. */
  lv_message  VARCHAR2(78);

  /* Set procedure as an autonomous transaction. */
  PRAGMA AUTONOMOUS_TRANSACTION;

BEGIN

  /* Build, insert, and commit message in log. */
  lv_message := event || ' event at ' || location
             || ' on column [' || column_value || ']';
  INSERT INTO debug_log ( message ) VALUES ( lv_message );
  COMMIT;

END;
/
- The following creates a replacement white_space trigger equipped with event tracking and conditional compilation debug calls to the debug_log table:
You actually need to change the session before compiling this trigger with the following command so that the conditional compilation instructions work:
ALTER SESSION SET PLSQL_CCFLAGS = 'DEBUG:1';
Then, create the white_space trigger from the following code:
-- Create the white_space trigger with conditional debug calls.
CREATE OR REPLACE TRIGGER white_space
  BEFORE INSERT OR UPDATE ON zeta
  FOR EACH ROW
  WHEN (REGEXP_LIKE(NEW.last_name,' '))
DECLARE

  lv_event  VARCHAR2(9);

BEGIN

  /* Conditional debugging. */
  $IF $$DEBUG = 1 $THEN
    debug( location     => 'before IF statement'
         , column_value => ':new.last_name' );
  $END

  IF INSERTING THEN
    lv_event := 'Inserting';

    /* Conditional debugging. */
    $IF $$DEBUG = 1 $THEN
      debug( event        => lv_event
           , location     => 'after IF statement'
           , column_value => ':new.last_name' );
    $END

    :NEW.last_name := REGEXP_REPLACE(:NEW.last_name,' ','-',1,1);

  ELSIF UPDATING THEN
    lv_event := 'Updating';

    /* Conditional debugging. */
    $IF $$DEBUG = 1 $THEN
      debug( event        => lv_event
           , location     => 'after ELSIF statement'
           , column_value => ':new.last_name' );
    $END

    RAISE_APPLICATION_ERROR(-20001,'Whitespace replaced with hyphen.');
  END IF;

  /* Conditional debugging. */
  $IF $$DEBUG = 1 $THEN
    debug( location     => 'after END IF statement'
         , column_value => ':new.last_name' );
  $END

END white_space;
/
A new test case for the modified white_space trigger uses an INSERT and UPDATE statement, like:
INSERT INTO zeta
( zeta_id, last_name )
VALUES
( zeta_s.NEXTVAL, 'Pinkett Smith' );

UPDATE zeta
SET    last_name = 'Pinkett Smith'
WHERE  last_name = 'Pinkett-Smith';

The UPDATE statement violates the business rule, and the new white_space trigger throws an error when an attempt is made to update the last_name with two names separated by a whitespace. The UPDATE statement raises the following error stack:

UPDATE zeta
       *
ERROR at line 1:
ORA-20001: Whitespace replaced with hyphen.
ORA-06512: at "STUDENT.WHITE_SPACE", line 31
ORA-04088: error during execution of trigger 'STUDENT.WHITE_SPACE'

Re-query the zeta table:

SELECT * FROM zeta;

It should display the following values that meet the business rule. The new third row in the table came from the INSERT statement in the test case.

   ZETA_ID LAST_NAME
---------- ------------------------------
         1 Baron-Schwartz
         2 Zeta-Jones
         3 Pinkett-Smith
Unfortunately, there's a lot of debugging clutter in the white_space trigger. The other downside is that it requires testing through INSERT and UPDATE statements rather than a simple anonymous block. You can fix that by doing two things:
- Move the body of the trigger into an autonomous zeta_function.
- Put a logic router in the trigger with a call to the autonomous zeta_function.
Here’s the script to create the zeta_function:
CREATE OR REPLACE FUNCTION zeta_function
( column_value  VARCHAR2
, event         VARCHAR2 ) RETURN VARCHAR2 IS

  /* Return value. */
  lv_retval  VARCHAR2(30) := column_value;

  /* Set function as an autonomous transaction. */
  PRAGMA AUTONOMOUS_TRANSACTION;

BEGIN

  /* Conditional debugging. */
  $IF $$DEBUG = 1 $THEN
    debug( location     => 'before IF statement'
         , column_value => ':new.column_value' );
  $END

  /* Check if event is INSERT statement. */
  IF event = 'INSERTING' THEN

    /* Conditional debugging. */
    $IF $$DEBUG = 1 $THEN
      debug( event        => INITCAP(event)
           , location     => 'after IF statement'
           , column_value => ':new.column_value' );
    $END

    /* Replace a whitespace with a hyphen. */
    lv_retval := REGEXP_REPLACE(column_value,' ','-',1,1);

  /* Check if event is UPDATE statement. */
  ELSIF event = 'UPDATING' THEN

    /* Conditional debugging. */
    $IF $$DEBUG = 1 $THEN
      debug( event        => INITCAP(event)
           , location     => 'after ELSIF statement'
           , column_value => ':new.column_value' );
    $END

    /* Raise error to state policy allows no changes. */
    RAISE_APPLICATION_ERROR(-20001,'Whitespace replaced with hyphen.');
  END IF;

  /* Conditional debugging. */
  $IF $$DEBUG = 1 $THEN
    debug( location     => 'after END IF statement'
         , column_value => ':new.column_value' );
  $END

  /* Return modified column for insert or original column for update. */
  RETURN lv_retval;

END zeta_function;
/
The refactored white_space trigger follows:
CREATE OR REPLACE TRIGGER white_space
  BEFORE INSERT OR UPDATE ON zeta
  FOR EACH ROW
  WHEN (REGEXP_LIKE(NEW.last_name,' '))
DECLARE

  lv_event  VARCHAR2(9);

BEGIN

  /* Set evaluation event. */
  IF INSERTING THEN
    lv_event := 'INSERTING';
  ELSIF UPDATING THEN
    lv_event := 'UPDATING';
  END IF;

  /*
  || Assign the result of the formatted string to the
  || new last_name value.
  */
  :NEW.last_name := zeta_function( event        => lv_event
                                 , column_value => :NEW.last_name);

END white_space;
/
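Because the business logic now lives in the zeta_function, you can also unit test it from a simple anonymous block without touching the zeta table. A minimal sketch of such a test, assuming the function and debug procedure above are compiled, looks like this:

DECLARE
  lv_result  VARCHAR2(30);
BEGIN
  /* Exercise the INSERT path: returns the hyphenated value. */
  lv_result := zeta_function( column_value => 'Zeta Jones'
                            , event        => 'INSERTING' );
  dbms_output.put_line('Inserting result: ['||lv_result||']');

  /* Exercise the UPDATE path: raises ORA-20001. */
  BEGIN
    lv_result := zeta_function( column_value => 'Zeta Jones'
                              , event        => 'UPDATING' );
  EXCEPTION
    WHEN OTHERS THEN
      dbms_output.put_line('Updating result: ['||SQLERRM||']');
  END;
END;
/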
A new test case for the modified white_space trigger uses an INSERT and UPDATE statement with some new values.
INSERT INTO zeta
( zeta_id, last_name )
VALUES
( zeta_s.NEXTVAL, 'Day Lewis' );

UPDATE zeta
SET    last_name = 'Day Lewis'
WHERE  last_name = 'Day-Lewis';

The UPDATE statement continues to violate the business rule, and the modified white_space trigger throws a different error stack. The new error stack includes the zeta_function because that's where you throw the error. It is caught and re-thrown by the white_space trigger.

UPDATE zeta
       *
ERROR at line 1:
ORA-20001: Whitespace replaced with hyphen.
ORA-06512: at "STUDENT.ZETA_FUNCTION", line 47
ORA-06512: at "STUDENT.WHITE_SPACE", line 13
ORA-04088: error during execution of trigger 'STUDENT.WHITE_SPACE'

Re-query the zeta table:

SELECT * FROM zeta;

It should display the following values that meet the business rule. The new fourth row in the table came from the INSERT statement in the test case.

   ZETA_ID LAST_NAME
---------- ------------------------------
         1 Baron-Schwartz
         2 Zeta-Jones
         3 Pinkett-Smith
         4 Day-Lewis
Now, you can query the debug_log table and see the debug messages that you captured from testing the INSERT and UPDATE statements. You get three messages from the INSERT statement test and only two from the UPDATE statement test.

MESSAGE
------------------------------------------------------------------
Unknown event at before IF statement on column [:new.last_name]
Inserting event at after IF statement on column [:new.last_name]
Unknown event at after END IF statement on column [:new.last_name]
Unknown event at before IF statement on column [:new.last_name]
Updating event at after ELSIF statement on column [:new.last_name]
As always, I hope this helps people see new ways to solve problems.
SQL*Plus Tutorial
SQL Interactive and Batch Processing
SQL*Plus provides an interactive and batch processing environment that dispatches commands to the SQL and PL/SQL engines. You can work either in the interactive SQL*Plus command-line interface (CLI) or in Oracle SQL Developer through a Java-based GUI. This section explains how to use these two primary interfaces to the SQL and PL/SQL engines. There are many other commercial products from other vendors that let you work with Oracle, but coverage of those products is beyond the scope of this book.
SQL*Plus Command-Line Interface
SQL*Plus is the client software for Oracle that runs SQL statements and anonymous block PL/SQL statements in an interactive and batch development environment. The statements are organized in the order that you generally encounter them as you start working with SQL*Plus or the MySQL Monitor.
Connecting to and Disconnecting from SQL*Plus
After installing the Oracle Database on the Linux OS, you access SQL*Plus from the command line. This works when the operating system finds the sqlplus executable in its path environment variable ($PATH on Linux). Linux installations require that you configure the $PATH (and $ORACLE_HOME) environment variables yourself before the command works.
When sqlplus is in the path environment variable, you can access it by typing the following:

sqlplus some_username/some_password
The preceding connect string may use IPC or the network to connect to the Oracle database. You can connect through the network by specifying a valid net service name, like this:
sqlplus some_username/some_password@some_net_service_name
While this works, and many people use it, you should simply enter your user name and let the database prompt you for the password. That way, it’s not displayed as clear text.
To avoid displaying your password, you should connect in the following way, which uses IPC:
sqlplus some_username

Or you can connect using the network layer by using a net service name like this:

sqlplus some_username@some_tns_alias
You’ll then see a password prompt. As you type your password, it is masked from prying eyes. The password also won’t be visible in the window of the command session.
The problem with either of these approaches is that you've disclosed your user account name at the operating system level. No matter how carefully you've host-hardened your operating system, there's no reason to disclose unnecessary details. The recommended best practice for connecting at the command line is to use /nolog, like this:

sqlplus /nolog
After you’re connected as an authenticated user, you can switch to work as another user by using the following syntax, which discloses your password to the screen but not the session window:
SQL> CONNECT some_otheruser/some_password

Or you can connect through a net service name, like

SQL> CONNECT some_otheruser/some_password@net_service_name

Alternatively, you can connect with or without a net service name to avoid displaying your password:

SQL> CONNECT some_otheruser
As with the preceding initial authorization example, you are prompted for the password. Entering it in this way also protects it from prying eyes.
If you try to run sqlplus and it fails with a message that the sqlplus executable can't be found, you must correct that issue. Check whether $ORACLE_HOME/bin is found in the $PATH environment variable. Like PATH, ORACLE_HOME is also an operating system environment variable; it should point to where you installed the Oracle database.
You can use the following commands to check the contents of your path environment variable. Instructions for setting these are in the Oracle Database Installation Guide for your platform and release:
Linux or Unix:
echo $PATH

When you've connected to SQL*Plus, you will see the SQL> prompt, like:

SQL>
Working in the SQL*Plus Environment
Unlike other SQL environments, the SQL*Plus environment isn't limited simply to running SQL statements. Originally, it was written as a SQL report writer, which is why SQL*Plus contains a number of features that make it friendlier and more useful. (That's also why SQL*Plus was originally known as the Advanced Friendly Interface, or AFI.) These features include a set of well-designed formatting extensions that let you format and aggregate result set data. SQL*Plus also lets you interactively edit files from the command line.
This section explains how you can dynamically configure your environment to suit your needs for each connection, configure SQL*Plus to remember settings for every connection, discover features through the interactive help menus, and shell out of or exit the SQL*Plus environment.
Configuring SQL*Plus Environment You can configure your SQL*Plus environment in two ways. One requires that you configure it each time that you start a session (dynamically). The other requires that you configure the glogin.sql
file, which is the first thing that runs after a user authenticates and establishes a connection with the database. The caveat to modifying the glogin.sql
file is that any changes become universal for all users of the Oracle Database installation. Also, only the owner of the Oracle account can make these changes.
Dynamically Configuring SQL*Plus—
Every connection to SQL*Plus is configurable. Some developers choose to put these instructions inside their script files, while others prefer to type them as they go. Putting them in the script files means you have to know what options you have first. The SQL*Plus SHOW command lets you find all of them with the keyword ALL, like this:
SHOW ALL |
The SQL*Plus SHOW
command also lets you see the status of a given environment variable.
The following command displays the default value for the FEEDBACK
environment variable:
SHOW FEEDBACK |
It returns the default value unless you’ve altered the default by configuring it in the glogin.sql
file. The oracle user has the rights to make any desired changes in this file, but they apply to all users who connect to the database.
The default value for FEEDBACK
is
FEEDBACK ON for 6 or more rows |
By default, an Oracle database shows the number of rows touched by a SQL command only when six or more rows are affected. If you also want to show feedback when five or fewer rows are affected, the following syntax resets the environment variable:
SET FEEDBACK ON |
SQL*Plus then reports the number of rows affected by every SQL statement, even when the count is zero or one.
Setting these environment variables inside script files allows you to designate runtime behaviors, but you should also reset them to the default at the conclusion of the script. When they’re not reset at the end of a script, they can confuse a user expecting the default behaviors.
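As a minimal sketch of that pattern, a script might lower the feedback threshold at the top (so every statement reports its row count) and restore the default at the bottom; the UPDATE against the payment table from earlier in this article is only a placeholder statement.

-- Report row counts for every statement in this script.
SET FEEDBACK ON

-- Placeholder work; any DML or query goes here.
UPDATE payment SET payment_amount = payment_amount WHERE ROWNUM <= 3;

-- Restore the default threshold of six rows before the script ends.
SET FEEDBACK 6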
Configuring the Default SQL*Plus Environment File—The glogin.sql file is where you define override values for the environment variables. You might want to put many things beyond environment variable values into your glogin.sql
configuration file. The most common is a setting for the default editor in Linux or Unix, because it’s undefined out of the box. You can set the default editor to the vi text editor in Linux by adding the following line to the glogin.sql
file:
DEFINE _EDITOR=vi |
The DEFINE
keyword has two specialized uses in SQL*Plus. One lets you define substitution variables (sometimes called user variables) that act as session-level variables. The other lets you enable or disable the ampersand (&
) symbol as a substitution variable operator. It is enabled by default because the DEFINE
environment variable is ON
by default. You disable the specialized role by setting DEFINE
to OFF. SQL*Plus treats the ampersand (&
) as an ordinary text character when DEFINE
is OFF
. You can find more on this use of the DEFINE
environment variable in the “When to Disable Substitution Variables” sidebar later in this appendix.
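As a small illustration of that behavior (the string literal is just an example), turning DEFINE off lets an ampersand pass through as ordinary text:

-- With DEFINE ON (the default), the &D below would trigger a substitution prompt.
SET DEFINE OFF
SELECT 'R&D budget' AS line_item FROM dual;
-- Re-enable substitution variables afterward.
SET DEFINE ON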
Substitution variables are placeholder variables in SQL statements or session-level variables in script files. They are placeholder variables when you precede them with one ampersand (&
) and are session-level variables when you precede them with two ampersands. As placeholders, they are discarded after a single use. Including two ampersands (&&
) makes the assigned value of a substitution variable reusable. You can set a session-level variable either with the DEFINE
command, as shown previously with the _EDITOR
variable, or by using a double ampersand (&&
), as in the following:
SELECT '&&BART' FROM dual; |
With two ampersands, the query prompts the user for a value and stores that value in the BART session-level variable. A single ampersand would simply prompt for a value, use it once, and discard it. Assuming you enter “Cartoon Character” as the response to the preceding query, you can see the stored value by querying it with a single or double ampersand:
SELECT '&BART' AS "Session Variable" FROM dual; |
This displays the following:
Session Variable
-----------------
Cartoon Character |
Or you can use the DEFINE command like this:
DEFINE BART |
This displays the following:
DEFINE BART = "Cartoon Character" (CHAR) |
The scope of the session variable lasts throughout the connection unless you undefine it with the following command:
UNDEFINE BART |
Although you can define substitution variables, you reference them by preceding their names with an ampersand; a single ampersand reads the contents of a substitution variable even when it's set as a session-level variable. User variable names can contain letters, underscores, or numbers in any order. Several user variables are reserved for use by Oracle Database; these all start with an underscore, as is the case with the _EDITOR variable. Any reference to these variables is case-insensitive.
SQL*Plus checks the contents of the _EDITOR
user variable when you type the EDIT
command, often abbreviated as ED
. The EDIT
command launches the executable stored in the _EDITOR
user variable. The Windows version of Oracle Database comes preconfigured with Notepad as the default editor. It finds the Notepad utility because it’s in a directory found in the operating system path variable. If you choose another editor, you need to ensure that the executable is in your default path environment.
The DEFINE
command also lets you display the contents of all session-level variables. There is no all option for the DEFINE
command, as there is for the SHOW
command. You simply type DEFINE
without any arguments to get a list of the default values:
DEFINE _DATE               = "09-AUG-18" (CHAR)
DEFINE _CONNECT_IDENTIFIER = "XE" (CHAR)
DEFINE _USER               = "STUDENT" (CHAR)
DEFINE _PRIVILEGE          = "" (CHAR)
DEFINE _SQLPLUS_RELEASE    = "1102000200" (CHAR)
DEFINE _EDITOR             = "vim" (CHAR)
DEFINE _O_VERSION          = "Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production" (CHAR)
DEFINE _O_RELEASE          = "1102000200" (CHAR) |
The preceding user variables are set by Oracle when you're connected as a container or pluggable database user (here, the student pluggable database user on the XE instance). When you start SQL*Plus with the /nolog option, the DEFINE command displays a shorter list, as shown next:
DEFINE _DATE               = "09-AUG-18" (CHAR)
DEFINE _CONNECT_IDENTIFIER = "" (CHAR)
DEFINE _USER               = "" (CHAR)
DEFINE _PRIVILEGE          = "" (CHAR)
DEFINE _SQLPLUS_RELEASE    = "1102000200" (CHAR)
DEFINE _EDITOR             = "vim" (CHAR) |
The _O_VERSION and _O_RELEASE lines appear only while you're connected to an Oracle database as a user, which is why they're missing from the /nolog listing. As previously explained, you can define the contents of other substitution variables.
Although substitution variables have many uses, their primary purpose is to support the SQL*Plus environment. For example, you can use them to reset the SQL> prompt
. You can reset the default SQL*Plus prompt by using two predefined session-level variables, like this:
SET sqlprompt "'SQL:'_user at _connect_identifier>" |
This would change the default prompt to look like this when the _user
name is system and the _connect_identifier
is orcl
:
SQL: SYSTEM at orcl> |
This type of prompt takes more space, but it shows you your current user and schema at a glance. It’s a handy prompt to help you avoid making changes in the wrong schema or instance, which occurs too often in daily practice.
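When you want the plain prompt back, the following restores the default:

SET SQLPROMPT "SQL> "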
Using Interactive Help in the SQL*Plus Environment SQL*Plus also provides an interactive help console that contains an index of help commands. You can find the index of commands by typing the following in SQL*Plus:
SQL> help INDEX |
It displays the following:
Enter Help [topic] for help.

 @             COPY         PAUSE                    SHUTDOWN
 @@            DEFINE       PRINT                    SPOOL
 /             DEL          PROMPT                   SQLPLUS
 ACCEPT        DESCRIBE     QUIT                     START
 APPEND        DISCONNECT   RECOVER                  STARTUP
 ARCHIVE LOG   EDIT         REMARK                   STORE
 ATTRIBUTE     EXECUTE      REPFOOTER                TIMING
 BREAK         EXIT         REPHEADER                TTITLE
 BTITLE        GET          RESERVED WORDS (SQL)     UNDEFINE
 CHANGE        HELP         RESERVED WORDS (PL/SQL)  VARIABLE
 CLEAR         HOST         RUN                      WHENEVER OSERROR
 COLUMN        INPUT        SAVE                     WHENEVER SQLERROR
 COMPUTE       LIST         SET                      XQUERY
 CONNECT       PASSWORD     SHOW |
You can discover more about the commands by typing help with one of the index keywords. The following demonstrates the STORE command, which saves the current SQL*Plus environment settings to a script file:
SQL> help store |
It displays the following:
STORE
-----

 Saves attributes of the current SQL*Plus environment in a script.

 STORE {SET} file_name[.ext] [CRE[ATE] | REP[LACE] | APP[END]] |
This is one way to capture your current environment settings in a file so you can restore them later. The SAVE command, which writes the current SQL statement to a file, is covered shortly in this appendix. You might want to take a peek at the “Writing SQL*Plus Log Files” section later in this appendix if you're experimenting with capturing the results of the HELP utility by spooling the information to a log file.
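A minimal sketch of that round trip follows; the file name my_settings.sql is only an assumption:

-- Capture the current SQL*Plus settings in a script file.
STORE SET my_settings.sql CREATE

-- ... change FEEDBACK, VERIFY, or other settings during the session ...

-- Restore the captured settings by running the stored script.
@my_settings.sql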
As discussed, the duration of any SQL*Plus environment variable is from the beginning to the end of any session. Define environment variables in the glogin.sql file when you want them to be available in all SQL*Plus sessions.
Shelling Out of the SQL*Plus Environment When you don't want to exit an interactive SQL*Plus session, you can temporarily leave it (known as shelling out) to run operating system commands. The HOST command lets you do that, like so:
SQL> HOST |
Anything that you do inside this operating system session, other than changes you make to files, is lost when you leave it and return to the SQL*Plus session. The most frequent things developers do in a shelled-out session are listing files and renaming files. Sometimes, developers make small modifications to files, exit the subshell session, and rerun the file from SQL*Plus.
You exit the operating system shell environment and return to SQL*Plus by typing EXIT
.
An alternative to shelling out is to run a single operating system command from SQL*Plus. For example, you can type the following in Windows to see the contents of the directory from which you entered SQL*Plus:
SQL> HOST dir |
Linux works with the HOST
command, too. In Linux, you also have the option of a shorthand version of the HOST
command—the exclamation mark (!
). You use it like this:
SQL> ! ls -al |
The difference between the !
and HOST
commands is that you can’t use substitution variables with !
.
Exiting SQL*Plus Environment You use QUIT
or EXIT
to exit a session in the SQL*Plus program. Either command ends a SQL*Plus session and releases any session variables.
The next sections show you how to write, save, edit, rerun, abort, call, run, and pass parameters to SQL statements. Then you'll learn how to call PL/SQL programs and write SQL*Plus log files.
Writing SQL Statements with SQL*Plus
A simple and direct way to demonstrate how to write SQL statements in SQL*Plus is to write a short query. Queries use the SELECT
keyword to list columns from a table and use the FROM
keyword to designate a table or set of tables. The following query selects a string literal value (“Hello World!”
) from thin air with the help of the pseudo table dual. The dual pseudo table is a structure that lets you query one or more columns of data without accessing a table, view, or stored program. Oracle lets you select any type of column except a large object (LOB) from the dual table. The dual table returns only one row of data.
SELECT 'Hello World!' FROM dual; |
Notice that Oracle requires single quotation marks as delimiters of string literal values. Any attempt to substitute double quotation marks raises an ORA-00904
error message, which means you’ve attempted to use an invalid identifier. For example, you’d generate the following error if you used double quotes around the string literal in the original statement:
SELECT "Hello World!" FROM dual * ERROR AT line 1: ORA-00904: "Hello World!": invalid identifier |
If you’re coming from the MySQL world to work in Oracle databases, this may seem a bit provincial. MySQL works with either single or double quotes as string delimiters, but Oracle doesn’t. No quote delimiters are required for numeric literals.
SQL*Plus places a query or other SQL statement in a special buffer when you run it. Sometimes you may want to save these queries in files. The next section shows you how to do that.
Saving SQL Statements with SQL*Plus
Sometimes you’ll want to save a SQL statement in a file. That’s actually a perfect activity for the SAVE
or STORE
command (rather than spooling a log file). Using the SAVE
or STORE
command lets you save your current statement to a file. Capturing these ad hoc SQL statements is generally important—after all, SQL statements ultimately get bundled into rerunnable script files before they ever move into production systems.
Use the following syntax to save a statement as a runnable file:
SAVE some_new_file_name.SQL |
If the file already exists, you can save the file with this syntax:
SAVE some_new_file_name.SQL REPLACE |
Editing SQL Statements with SQL*Plus
You can edit your current SQL statements from within SQL*Plus by using EDIT
. SQL*Plus preconfigures itself to launch Notepad when you type EDIT
or the shorthand ED
in any Windows installation of Oracle Database.
Although the EDIT
command points to Notepad when you’re working in Windows, it isn’t configured by default in Linux or Unix. You have to set the editor for SQL*Plus when running on Linux or Unix. Refer to the “Working in the SQL*Plus Environment” section earlier in the appendix for details about setting up the editor.
Assuming you’ve configured the editor, you can edit the last SQL statement by typing EDIT
like this (or you can use ED
):
SQL> EDIT |
The temporary contents of any SQL statement are stored in the afiedt.buf file by default. After you edit the file, you can save the modified statement into the buffer and rerun the statement. Alternatively, you can save the SQL statement as another file.
Rerunning SQL*Plus SQL Statements from the Buffer
After you edit a SQL statement, SQL*Plus automatically lists it for you and enables you to rerun it. Use a forward slash (/) to run the last SQL statement from the buffer. The semicolon at the end of your original SQL statement isn't stored in the buffer; the forward slash takes its place as the execution command. If you add the semicolon back when you edit the SQL statement, you'll see something like the following, with the semicolon at the end of the last line of the statement:
SQL> EDIT
Wrote file afiedt.buf

  1* SELECT 'Hello World!' AS statement FROM dual; |
A forward slash can’t rerun this from the buffer because the semicolon is an illegal character. You would get an error like this:
SQL> /
SELECT 'Hello World!' AS statement FROM dual;
                                            *
ERROR at line 1:
ORA-00911: invalid character |
To fix this error, you should re-edit the buffer contents and remove the semicolon. The forward slash would then run the statement.
Some SQL statements have so many lines that they don’t fit on a single page in your terminal or shell session. In these cases, you can use the LIST
command (or simply a lowercase l or uppercase L) to see only a portion of the current statement from the buffer. The LIST
command by itself reads the buffer contents and displays them with line numbers at the SQL prompt.
If you’re working with a long PL/SQL block or SQL statement, you can inspect ranges of line numbers with the following syntax:
SQL> LIST 23 32 |
This echoes back to the console the inclusive set of lines from the buffer, if they exist. SQL*Plus also has a built-in line editor (commands such as CHANGE, APPEND, INPUT, and DEL) that changes the buffer one line at a time, but it's cumbersome and limited, so you should simply edit the SQL statement in a text editor.
Aborting Entry of SQL Statements in SQL*Plus
When you’re working at the command line, you can’t just point the mouse to the prior line and correct an error; instead, if your statement has an error, your must either abort the statement or run it and wait for it to fail. SQL*Plus lets you abort statements with errors.
To abort a SQL statement that you’re writing interactively, press ENTER, type a period (.
) as the first character on the new line, and then press ENTER
again. This aborts the statement but leaves it in the active buffer in case you want to edit it.
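A short sketch of what that looks like at the prompt (the statement is arbitrary):

SQL> SELECT 'Hello World!' AS statement
  2  FROM dual
  3  .
SQL>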
After aborting a SQL statement, you can use the instructions in the previous “Editing SQL Statements with SQL*Plus” section to edit the statement with the EDIT (ED) command, if that's easier than retyping the whole thing.
Calling and Running SQL*Plus Script Files
Script (or batch) files are composed of related SQL statements and are the primary tool for implementing new software and patching old software. You use script files when you run installation or update programs in test, stage, and production environments. Quality assurance departments want script files to ensure code integrity during predeployment testing. If errors are found in a script file, it's fixed in a new version. The final version of the script file is the one that a DBA runs when installing or upgrading an application or database system.
A script is rerunnable only if it can manage preexisting conditions in the production database without raising errors. You must eliminate all errors because administrators might not be able to judge which errors can be safely ignored. This means the script must perform conditional drops of tables and data migration processes.
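A common way to meet that requirement is to wrap each DROP in an anonymous PL/SQL block that ignores only the "table does not exist" error; the following sketch assumes the payment table created earlier in this article.

-- Conditionally drop the table so the script can run repeatedly.
BEGIN
  EXECUTE IMMEDIATE 'DROP TABLE payment';
EXCEPTION
  WHEN OTHERS THEN
    -- Ignore ORA-00942 (table or view does not exist); re-raise anything else.
    IF SQLCODE != -942 THEN
      RAISE;
    END IF;
END;
/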
Assuming you have a file named create_data.sql
in a /home/student/Data
directory, you can run it with the @
(at) command in SQL*Plus. This script can be run from within SQL*Plus with either a relative filename or an absolute filename. A relative filename contains no path element because it assumes the present working path. An absolute filename requires a fully qualified path (also known as a canonical path) and filename.
The relative filename syntax depends on starting SQL*Plus from the directory where you have saved the script file. Here’s the syntax to run the create_data.sql
file:
@create_data.sql |
Although the relative filename is easy to use, it limits you to starting SQL*Plus from a specific directory, which is not always possible. The absolute filename syntax works regardless of where you start SQL*Plus. Here’s an example for Linux:
@/home/student/Data/create_data.sql |
The @
command is also synonymous with the SQL*Plus START
command. This means you can also run a script file based on its relative filename like this:
START create_data.sql |
The @
command reads the script file into the active buffer and then runs the script file. You use two @@
symbols when you call one script file from another script file that exists in the same directory. Combining the @@
symbols instructs SQL*Plus to look in the directory specified by the command that ran the calling script. This means that a call such as the following runs a subordinate script file from the same directory:
@@some_subordinate.sql |
If you need to run scripts delivered by Oracle and they reside in the ORACLE_HOME
, you can use a handy shortcut: the question mark (?
). The question mark maps to the ORACLE_HOME
. This means you can run a library script from the /rdbms subdirectory of the ORACLE_HOME
with this syntax in Linux:
?/rdbms/somescript.sql |
The shortcuts and relative path syntax are attractive during development but should be avoided in production. Using fully qualified paths from a fixed environment variable such as the $ORACLE_HOME in Linux is generally the best approach.
Passing Parameters to SQL*Plus Script Files
Understanding how to write and run static SQL statements or script files is important, but understanding how to write and run SQL statements or script files that can solve dynamic problems is even more important. To write dynamic scripts, you use substitution variables, which act like placeholders in SQL statements or scripts. As mentioned earlier, SQL*Plus supports two modes of processing: interactive mode and call mode.
Interactive Mode Parameter Passing When you call a script that contains substitution variables, SQL*Plus prompts for values that you want to assign to the substitution variables. The standard prompt is the name of the substitution variable, but you can alter that behavior by using the ACCEPT
SQL*Plus command.
For example, assume that you want to write a script that looks for a table with a name that’s some partial string, but you know that the search string will change. A static SQL statement wouldn’t work, but a dynamic one would. The following dynamic script enables you to query the database catalog for any table based on only the starting part of the table name. The placeholder variable is designated using an ampersand (&
) or two. Using a single ampersand instructs SQL*Plus to make the substitution at runtime and forget the value immediately after the substitution. Using two ampersands (&&
) instructs SQL*Plus to make the substitution, store the variable as a session-level variable, and undefine the substitution variable.
SQL> SELECT table_name
  2  ,      column_id
  3  ,      column_name
  4  FROM   user_tab_columns
  5  WHERE  table_name LIKE UPPER('&input')||'%'; |
The UPPER
function on line 5 promotes the input to uppercase letters because Oracle stores all metadata in uppercase and performs case-sensitive comparisons of strings by default. The query prompts as follows when run:
Enter value for input: it |
When you press ENTER
, it shows the substitution of the value for the placeholder, like so:
old   5: WHERE  table_name LIKE UPPER('&input')||'%'
new   5: WHERE  table_name LIKE UPPER('it')||'%' |
At least this is the default behavior. The behavior depends on the value of the SQL*Plus VERIFY
environment variable, which is set to ON by default. You can suppress that behavior by setting the value of VERIFY
to OFF
:
SET VERIFY OFF |
You can also configure the default prompt by using SQL*Plus formatting commands, like so:
ACCEPT input CHAR PROMPT 'Enter the beginning part of the table name:' |
This syntax acts like a double ampersand assignment and places the input substitution in memory as a session-level variable.
You can also format output through SQL*Plus. The COL[UMN]
command qualifies the column name, the FORMAT
command sets formatting to either numeric or alphanumeric string formatting, and the HEADING
command lets you replace the column name with a reporting header. The following is an example of formatting for the preceding query:
SQL> COLUMN table_name  FORMAT A20  HEADING "Table Name"
SQL> COLUMN column_id   FORMAT 9990 HEADING "Column|ID"
SQL> COLUMN column_name FORMAT A20  HEADING "Column Name" |
The table_name
column and column_name
column now display the first 20 characters before wrapping to the next line because they are set to an alphanumeric size of 20 characters. The column_id column now displays up to four digits, and the 9990 mask prints a 0 when the value is less than 1. In practice that never happens here, because a surrogate key value can't be less than 1; the mask simply guarantees that at least one digit always prints. The column headers for the table_name and column_name columns print in title case with an intervening space, while the column_id column prints “Column” on one line and “ID” on the next.
Batch Mode Parameter Passing Batch mode operations typically involve a script file that contains more than a single SQL statement. The following example uses a file that contains a single SQL statement because it successfully shows the concept and conserves space.
The trick to batch submission is the -s option flag, or the silent option. Script files that run from the command line with this option flag are batch programs (those using the SQL*Plus call mode). The silent option suppresses the SQL*Plus banner and interactive prompts, so the script runs much like statements submitted through the JDBC or ODBC APIs. Batch programs must include a QUIT
or EXIT
statement at the end of the file or they will hang in SQL*Plus. This technique lets you create a file that can run from an operating system script file, also commonly known as a shell script.
The following sample.sql file shows how you would pass a parameter to a dynamic SQL statement embedded in a script file:
-- Disable echoing substitution.
SET VERIFY OFF

-- Open log file.
SPOOL demo.txt

-- Query data based on an externally set parameter.
SELECT   table_name
,        column_id
,        column_name
FROM     user_tab_columns
WHERE    table_name LIKE UPPER('&1')||'%';

-- Close log file.
SPOOL OFF

-- End session connection.
QUIT; |
You would call the program from a batch file in Windows or a shell script in Linux. The syntax includes the user name and password, which presents a security risk. Provided you've secured your local server and you routinely purge your command history, you would call the sample.sql script from the present working directory like this, passing the search string (it, in this example) as the first positional parameter:
sqlplus -s student/student @sample.sql it |
You can also pass the user name and password as connection parameters, which is illustrated in the following sample:
SET VERIFY OFF
SPOOL demo.txt
CONNECT &1/&2
SELECT USER FROM dual;
SPOOL OFF
QUIT; |
The script depends on the /nolog
option to start SQL*Plus without connecting to a schema.
You would call it like this, providing the user name and password:
sqlplus -s /nolog @sample.sql student student |
As mentioned, there are risks to disclosing user names and passwords, because anything typed on the command line can be recovered from shell history logs. Therefore, you should use the /nolog approach or operating system authentication when you want to run scripts like these.
Calling PL/SQL Programs
PL/SQL provides capabilities that don’t exist in SQL that are required by some database-centric applications. PL/SQL programs are stored programs that run inside a separate engine from the SQL statement engine. Their principal role is to group SQL statements and procedural logic to support transaction scopes across multiple SQL statements.
PL/SQL supports two types of block programs: anonymous blocks and named blocks. Anonymous blocks can be stored only as trigger bodies, while named blocks are stand-alone functions or procedures. PL/SQL also supports packages, which are groups of related functions and procedures. Packages support function and procedure overloading and provide many of the key utilities for Oracle databases. Oracle also supports object types and object bodies with the PL/SQL language. Object types support MEMBER and STATIC functions and procedures.
Functions and procedures support pass-by-value and pass-by-reference methods available in other procedural programming languages. Functions return a value when they’re placed as right operands in an assignment and as calling parameters to other functions or procedures. Procedures don’t return a value or reference as a right operand and can’t be used as calling parameters to other functions or procedures.
Sometimes you’ll want to output diagnostic information to your console or formatted output from small PL/SQL programs to log files. This is easy to do in Oracle Database because PL/SQL supports anonymous block program units.
Before you can receive output from a PL/SQL block, you must open the buffer that separates the SQL*Plus environment from the PL/SQL engine. You do so with the following SQL*Plus command:
SET SERVEROUTPUT ON SIZE UNLIMITED |
You enable the buffer stream for display to the console by changing the status of the SERVEROUTPUT
environment variable to ON. Although you can set the SIZE
parameter to any value, the legacy parameter limit of 1 million bytes no longer exists. That limit made sense in earlier releases because of physical machine limits governing console speed and network bandwidth. Today, there’s really no reason to constrain the output size, and you should always use UNLIMITED
when you open the buffer.
The following subsections show how to call the various types of PL/SQL programs. Whether the programs are yours or built-ins provided by Oracle, much of the logic that supports the features of an Oracle database relies on stored programs.
Executing an Anonymous Block Program The following example demonstrates a traditional “Hello World!” program in an anonymous PL/SQL block. It uses a specialized stored program known as a package. Packages contain data types, shared variables, cursors, functions, and procedures. You use the package name, a dot (the component selector), and a function or procedure name when you call package components.
You print “Hello World!”
with the following anonymous block program unit:
SQL> BEGIN
  2    DBMS_OUTPUT.PUT_LINE('Hello World!');
  3  END;
  4  / |
PL/SQL is a strongly typed language that uses declarative blocks rather than the curly braces you may know best from C, C#, C++, Java, Perl, or PHP. The execution block starts with the BEGIN
keyword and ends with an EXCEPTION
or END
keyword. Since the preceding sample program doesn’t employ an exception block, the END keyword ends the program. All statements and blocks in PL/SQL end with a semicolon. The forward slash on line 4 executes the anonymous block program because the last semicolon ends the execution block. The program prints “Hello World!” to the console, provided you opened the buffer by enabling the SQL*Plus SERVEROUTPUT
environment variable.
Anonymous block programs are very useful when you need one-time procedural processing and plan to execute it in the scope of a single batch or script file. Displaying results from the internals of the PL/SQL block is straightforward, as discussed earlier in this section: enable the SERVEROUTPUT environment variable.
Setting a Session Variable Inside PL/SQL Oracle databases also support session variables, which are not the same as session-level substitution variables. Session variables act like global variables in the scope and duration of your connection, as do session-level substitution variables, but the former differ from substitution variables in two ways. Substitution variables are limited to a string data type, while session variables may have any of the following data types: BINARY_DOUBLE
, BINARY_FLOAT
, CHAR
, CLOB
, NCHAR
, NCLOB
, NUMBER
, NVARCHAR2
, REFCURSOR
, or VARCHAR2
. Session variables, more commonly referred to as bind variables, can’t be assigned a value in SQL*Plus or SQL scope. You must assign values to session variables in an anonymous PL/SQL block.
Session variables, like session-level substitution variables, are very useful because you can share them across SQL statements. You must define session variables with the VARIABLE keyword, which gives them a name and data type but not a value. As an example, you can define a bind variable as a 20-character-length string like so:
VARIABLE whom VARCHAR2(20) |
You can assign a session variable with an anonymous PL/SQL block or a CALL to a stored function. Inside the anonymous block, you reference the variable with a colon preceding the variable name. The colon points to a session-level scope that is external to its local block scope:
BEGIN
  :whom := 'Sam';
END;
/ |
After assigning a value to the session variable, you can query it in a SQL statement or reuse it in another PL/SQL anonymous block program. The following query from the dual pseudo table concatenates string literals before and after the session variable:
SELECT 'Play it again, ' || :whom || '!' FROM dual; |
The colon appears in SQL statements, too. Both the anonymous block and the SQL statement run in their own execution scopes, much like subshells in operating system shell scripting, and the bind variable is what they share. The query prints the following:
Play it again, Sam! |
The dual pseudo table is limited to a single row but can return one to many columns. You can display up to 1,000 columns, which matches the maximum number of columns allowed for a table in the Oracle Database.
Executing a Named Block Program Stored functions and procedures are known as named blocks, whether they’re stand-alone programs or part of a package. You can call a named function into a session variable or return the value in a query. Procedures are different because you execute them in the scope of a session or block and they have no return value (procedures are like functions that return a void data type).
The following is a “Hello World!” function that takes no parameters:
SQL> CREATE OR REPLACE FUNCTION hello_function RETURN VARCHAR2 IS
  2  BEGIN
  3    RETURN 'Hello World!';
  4  END hello_function;
  5  / |
A query of the function uses the dual
pseudo table, like so:
SELECT hello_function FROM dual; |
When you call a function that has no defined parameters in a query, you can omit the parentheses traditionally associated with no-argument function calls. However, if you use the SQL*Plus CALL syntax, you must provide the opening and closing parentheses or you raise an ORA-06576 error message. Assuming that the return value of the function will be assigned to a bind variable, you need to define the session variable before calling the function and capturing its value in that variable.
The following defines a session variable as a 12-character, variable-length string:
VARIABLE my_output VARCHAR2(12) |
The following statement calls the function and puts the result in the session variable :my_output
. Preceding the session variable with a colon is required to make it accessible from SQL statements or anonymous PL/SQL blocks.
CALL hello_function INTO :my_output; |
The lack of parentheses causes this statement to fail and raises an ORA-06576 error message.
Adding the parentheses to the CALL statement makes it work:
CALL hello_function() INTO :my_output; |
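After the call completes, you can display the bind variable's contents with the SQL*Plus PRINT command (listed in the help index earlier):

PRINT my_output

It prints the Hello World! string under a MY_OUTPUT heading.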
Procedures work differently and are run by the EXECUTE
command. The following defines a stored procedure that echoes out the string "Hello World!"
Procedures are easier to work with from SQL*Plus because you don’t need to define session variables to capture output. All you do is enable the SQL*Plus SERVEROUTPUT
environment variable.
SQL> CREATE OR REPLACE PROCEDURE hello_procedure IS
  2  BEGIN
  3    dbms_output.put_line('Hello World!');
  4  END hello_procedure;
  5  / |
You can execute the procedure successfully like so:
EXECUTE hello_procedure; |
Or you can execute the procedure with parentheses, like so:
EXECUTE hello_procedure(); |
You should see "Hello World!"
using either form. If it isn’t displayed, enable the SQL*Plus SERVEROUTPUT
environment variable. Remember that nothing returns to the console without enabling the SERVEROUTPUT
environment variable.
All the examples dealing with calls to PL/SQL named blocks use a pass-by-value method, which means that values enter the program units, are consumed, and other values are returned.
Writing SQL*Plus Log Files
When you’re testing the idea of how a query should work and want to capture one that did work, you can write it directly to a file. You can also capture all the activity of a long script by writing it to a log file. You can write log files in either of two ways: capture only the feedback messages, such as “four rows updated,” or capture the statement executed and then the feedback message. The output of the latter method are called verbose log files.
You can write verbose log files by leveraging the SQL*Plus ECHO
environment variable in SQL*Plus. You enable it with this command:
SET ECHO ON |
Enabling ECHO makes SQL*Plus display each command from a script as it is dispatched to the server. This lets your log file show each statement immediately before the feedback from its execution.
You open a log file with the following command:
SPOOL /home/student/Data/somefile.txt |
This logs all output from the script to the file /home/student/Data/somefile.txt
until the SPOOL OFF
command runs in the session. The output file’s extension is not required but defaults to .lst
when not provided explicitly. As an extension, .lst
doesn’t map to a default application in Windows or Linux environments. It’s a convention to use some file extension that maps to an editor as a text file.
You can append to an existing file with the following syntax:
SPOOL /home/student/Data/somefile.txt APPEND |
Both of the foregoing syntax examples use an absolute filename. You use a relative filename when you omit the qualified path, in which case the file is written to the directory where you launched sqlplus. That directory is called the present working directory or, by some old csh (C shell) folks, the current working directory.
You close a log file with the following command:
SPOOL OFF |
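Putting the pieces together, a minimal verbose-logging sketch for a script file looks like this; the path and query are only placeholders.

-- Echo each command from the script into the log along with its feedback.
SET ECHO ON

-- Open the log file (assumed path).
SPOOL /home/student/Data/somefile.txt

SELECT 'Hello World!' FROM dual;

-- Close the log file so the output is flushed and complete.
SPOOL OFF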
The log file isn't complete until you close the buffer stream. Only one open buffer stream can exist in any session, which means you can write to only one log file at a time from a given session. Therefore, you should spool from the topmost script file and avoid spooling in script files that are called by other script files, because competing spools make triaging errors among the program units more complex.
A pragmatic approach to development requires that you log the work performed. Without a log, you can't verify what actually ran, which puts the integrity of your data and processes at risk.