MacLochlainns Weblog

Michael McLaughlin's Technical Blog


Archive for the ‘sql’ Category

MySQL Partitioned Tables

with one comment

Learning Outcomes

  • Learn about List Partitioning.
  • Learn about Range Partitioning.
  • Learn about Columns Partitioning.
  • Learn about Hash Partitioning.
  • Learn about Key Partitioning.
  • Learn about Subpartitioning.

Lesson Material

MySQL supports partitioning of tables. It supports range, list, hash, and key partitioning. Range partitioning lets you partition based on column values that fall within given ranges. List partitioning lets you partition based on columns matching one of a set of discrete values. Hash partitioning lets you partition based on the return value of a user-defined expression, which is a SQL expression over one or more columns that yields an integer (stored SQL/PSM functions aren't allowed in partitioning expressions). Key partitioning performs like hash partitioning, but it lets you select one or more columns from the set of columns in a table and the server supplies the hashing function for you. A hash is a method of mapping key values to storage buckets, and hashing speeds up reading and changing data in tables.

Each of the following subsections discusses one of the supported forms of partitioning in MySQL. Naturally, there are differences between other databases and MySQL’s implementation.

List Partitioning

A MySQL list partition works by identifying a column that contains an integer value, the franchise_number in the following example. Partitioning clauses follow the list of columns and constraints and require a partitioning key to be in the primary key or indexed.

The following list partition works with literal numeric values. MySQL uses the IN keyword for list partitions. Note that there’s no primary key designated and an index is on the auto-incrementing surrogate key column. A complete example is provided to avoid confusion on how to index the partitioning key:

CREATE TABLE franchise
( franchise_id     INT UNSIGNED AUTO_INCREMENT
, franchise_number INT UNSIGNED
, franchise_name   VARCHAR(20)
, city             VARCHAR(20)
, state            VARCHAR(20)
, index idx (franchise_id))
PARTITION BY LIST(franchise_number)
( PARTITION offshore VALUES IN (49,50)
, PARTITION west VALUES IN (34,45,48)
, PARTITION desert VALUES IN (46,47)
, PARTITION rockies VALUES IN (38,41,42,44));

The inclusion of a PRIMARY KEY constraint on the franchise_id column would trigger an ERROR 1503 because the partitioning key isn't part of the primary key. The reason for the error message is that a primary key implicitly creates a unique index, and every unique index must include the partitioning column, which conflicts with the PARTITION BY LIST clause. A non-unique idx index on the franchise_id column is therefore required when you want to partition on a column outside the primary key.
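If you want to verify where rows land after loading data, two hedged checks are possible (the second assumes MySQL 5.6 or later, which added explicit partition selection):

SELECT   partition_name
,        table_rows
FROM     information_schema.partitions
WHERE    table_name = 'franchise';

SELECT   franchise_name
,        state
FROM     franchise PARTITION (west);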

Range Partitioning

Range partitioning works only with an integer column or an expression that resolves to an integer. That limitation drives the necessity of choosing an integer column for range partitioning. You also can't define a range-partitioned table with a PRIMARY KEY constraint unless the partitioning key is part of the primary key, so the example below uses a non-unique index, just like the list partitioning example.

CREATE TABLE ordering
( ordering_id     INT UNSIGNED AUTO_INCREMENT
, item_id         INT UNSIGNED
, rental_amount   DECIMAL(15,2)
, rental_date     DATE
, index idx (ordering_id))
PARTITION BY RANGE(item_id)
( PARTITION jan2011 VALUES LESS THAN (10000)
, PARTITION feb2011 VALUES LESS THAN (20000)
, PARTITION mar2011 VALUES LESS THAN (30000));

Range partitioning is best suited to large tables that you want to break into smaller pieces based on an integer column. You can also use an expression built from MySQL's permitted built-in functions as the partitioning key instead of a bare column, but stored functions aren't allowed in partitioning expressions, and few other options are available in MySQL.
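Range partitions also need maintenance as new key values arrive. A hedged sketch of how you might extend the ordering table above (assuming the new boundaries sit above the existing ones):

ALTER TABLE ordering ADD PARTITION
( PARTITION apr2011 VALUES LESS THAN (40000));

ALTER TABLE ordering ADD PARTITION
( PARTITION overflow VALUES LESS THAN MAXVALUE);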

Columns Partitioning

Columns partitioning is a new variant of range and list partitioning. It is included in MySQL 5.5 and forward. Both range and list partitioning work on an integer-based column (using TINYINT, SMALLINT, MEDIUMINT, INT [alias INTEGER], and BIGINT). Columns partitioning extends those models by expanding the possible data types for the partitioning column to include CHAR, VARCHAR, BINARY, and VARBINARY string data types, and DATE, DATETIME, or TIMESTAMP data types. You still can’t use other number data types such as DECIMAL and FLOAT. The TIMESTAMP data type is also available only in range partitions with the caveat that you use a UNIX_TIMESTAMP function, according to MySQL Bug 42849.
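A hedged sketch of a range columns partition on a DATE column, reusing the shape of the ordering table (the table name and boundaries are illustrative):

CREATE TABLE ordering_by_date
( ordering_id     INT UNSIGNED AUTO_INCREMENT
, item_id         INT UNSIGNED
, rental_amount   DECIMAL(15,2)
, rental_date     DATE
, index idx (ordering_id))
PARTITION BY RANGE COLUMNS (rental_date)
( PARTITION jan2011 VALUES LESS THAN ('2011-02-01')
, PARTITION feb2011 VALUES LESS THAN ('2011-03-01')
, PARTITION mar2011 VALUES LESS THAN ('2011-04-01'));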

Hash Partitioning

Hash partitions ensure an even distribution of rows across a predetermined number of partitions. It is probably the easiest way to partition a table quickly to test the result of partitioning on a large table. You should base hash partitions on a surrogate or natural primary key.

The following provides a modified example of the ordering table:

CREATE TABLE ordering
( ordering_id       INT UNSIGNED PRIMARY KEY AUTO_INCREMENT
, item_id           INT UNSIGNED
, rental_amount     DECIMAL(15,2)
, rental_date       DATE)
PARTITION BY HASH(ordering_id) PARTITIONS 8;

This is the partitioning type that benefits most from a PRIMARY KEY constraint because the primary key's implicit unique index supports the hash. A non-unique index like the one in the list partitioning example doesn't give the hash the same guarantee of evenly distributed, unique key values.
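You can confirm that the hash prunes partitions with EXPLAIN; MySQL 5.7 and later include a partitions column in the default output (older releases need EXPLAIN PARTITIONS), and an equality predicate on ordering_id should resolve to a single partition:

EXPLAIN
SELECT   *
FROM     ordering
WHERE    ordering_id = 1001;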

Key Partitioning

Key partitioning is valuable because you can partition on columns that aren't integers. It performs along the lines of hash partitioning, except the MySQL server supplies its own hashing function.

CREATE TABLE orders_list
( order_list_id     INT UNSIGNED AUTO_INCREMENT
, customer_surname  VARCHAR(30)
, store_id          INT UNSIGNED
, salesperson_id    INT UNSIGNED
, order_date        DATE
, index idx (order_list_id))
PARTITION BY KEY (order_date) PARTITIONS 8;

Before columns partitioning, this was the only alternative when you wanted to partition on a date column. Like the hash partition, it's easy to deploy; the only consideration is the number of slices that you want to make of the data in the table.

Subpartitioning

The concept of subpartitioning is also known as composite partitioning. You can subpartition range or list partitions with a hash, linear hash, or linear key.

A slight change to the previously created ordering table is required to demonstrate composite partitioning: we’ll add a store_id column to the table definition. The following is an example of a range partition subpartitioned by a hash:

CREATE TABLE ordering
( ordering_id      INT UNSIGNED AUTO_INCREMENT
, item_id          INT UNSIGNED
, store_id         INT UNSIGNED
, rental_amount    DECIMAL(15,2)
, rental_date      DATE
, index idx (ordering_id))
PARTITION BY RANGE(item_id)
  SUBPARTITION BY HASH(store_id) SUBPARTITIONS 4
( PARTITION jan2011 VALUES LESS THAN (10000)
, PARTITION feb2011 VALUES LESS THAN (20000)
, PARTITION mar2011 VALUES LESS THAN (30000));

Composite partitioning is non-trivial and might require some experimentation to achieve optimal results. Plan on making a few tests of different scenarios before you deploy a solution.
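You can confirm the composite layout from the data dictionary, which is a quick sanity test before you commit to a design:

SELECT   partition_name
,        subpartition_name
,        table_rows
FROM     information_schema.partitions
WHERE    table_name = 'ordering';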

Written by maclochlainn

May 30th, 2022 at 10:08 pm

Oracle Partitioned Tables

without comments

Learning Outcomes

  • Learn about List Partitioning.
  • Learn about Range Partitioning.
  • Learn about Hash Partitioning.
  • Learn about Composite Partitioning.

Lesson Material

Partitioning is the process of breaking up one data source into a series of smaller data sources. Partitioned tables are faster to access and transact against. Partitioning becomes necessary as the amount of data in a table grows because it speeds up finding rows and inserting, updating, or deleting them.

Oracle Database 21c supports four types of table partitioning: list, range, hash, and composite partitioning.

List Partitioning

A list partition works by identifying a column that contains a value, such as a STATE column in an ADDRESS table. Partitioning clauses follow the list of columns and constraints.

A list partition could use a STATE column, like the following (the complete example is avoided to conserve space, and the three dots represent the balance of partitions not shown):

CREATE TABLE franchise
( franchise_id    NUMBER CONSTRAINT pk_franchise PRIMARY KEY
, franchise_name  VARCHAR(20)
, city            VARCHAR(20)
, state           VARCHAR(20))
PARTITION BY LIST(state)
( PARTITION offshore VALUES('Alaska', 'Hawaii')
, PARTITION west VALUES('California', 'Oregon', 'Washington')
, PARTITION desert VALUES ('Arizona','New Mexico')
, PARTITION rockies VALUES ('Colorado', 'Idaho', 'Montana', 'Wyoming')
, ... );

This can be used with other values such as ZIP codes with great effect, but the maintenance of list partitioning can be considered costly. Cost occurs when the list of values changes over time. Infrequent change means low cost, while frequent change means high costs. In the latter case, you should consider other partitioning strategies. Although an Oracle database supports partitioning on a variable-length string, MySQL performs list partitioning only on integer columns.
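One way to lower that maintenance cost in Oracle is a DEFAULT partition that catches any state value you haven't listed; this hedged snippet only changes the partition clause of the earlier franchise example:

PARTITION BY LIST(state)
( PARTITION offshore VALUES ('Alaska', 'Hawaii')
, PARTITION west VALUES ('California', 'Oregon', 'Washington')
, PARTITION other VALUES (DEFAULT));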

Range Partitioning

Range partitioning is very helpful on any column that contains a continuous metric, such as dates or times. It works by stating a minimum set that is less than a certain value, and then a group of sets of higher values until you reach the topmost set of values. This type of partition helps you improve performance by letting you search ranges rather than complete data sets. Range partitioning is also available in MySQL.

A range example based on dates could look like this:

PARTITION BY RANGE(rental_date)
( PARTITION rental_jan2011
  VALUES LESS THAN (TO_DATE('31-JAN-11','DD-MON-YY'))
, PARTITION rental_feb2011
  VALUES LESS THAN (TO_DATE('28-FEB-11','DD-MON-YY'))
, PARTITION rental_mar2011
  VALUES LESS THAN (TO_DATE('31-MAR-11','DD-MON-YY'))
, ... );

The problem with this type of partitioning, however, is that the new months require constant management. Many North American businesses simply add partitions for all months in the year as an annual maintenance task during the holidays in November or December. Companies that opt for bigger range increments reap search and access benefits from range partitioning, while minimizing ongoing maintenance expenses.
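Oracle Database 11g and later can also automate that maintenance with interval partitioning, which creates new monthly partitions on demand. A hedged sketch of the clause, assuming the same rental_date column:

PARTITION BY RANGE (rental_date)
  INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
( PARTITION rental_jan2011
  VALUES LESS THAN (TO_DATE('01-FEB-11','DD-MON-YY')));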

Hash Partitioning

Hash partitioning is much easier to implement than list or range partitioning. Many DBAs favor it because it avoids the manual maintenance of list and range partitioning. Oracle Database 21c documentation recommends that you implement a hash for the following reasons:

  • There is no concrete knowledge about how much data maps to a partitioning range.
  • The sizes of partitions are unknown at the outset and difficult to balance as data is added to the database.
  • A range partition might cluster data in an ineffective way.

This next statement creates eight partitions and stores them respectively in one of the eight tablespaces. The hash partition manages nodes and attempts to balance the distribution of rows across the nodes.

PARTITION BY HASH(store)
PARTITIONS 8
STORE IN (tablespace1, tablespace2, tablespace3, tablespace4
         ,tablespace5, tablespace6, tablespace7, tablespace8);

As you can imagine the maintenance for this type of partitioning is low. Some DBAs choose this method to get an initial sizing before adopting a list or range partitioning plan. Maximizing the physical resources of the machine ultimately rests with the DBAs who manage the system. Developers need to stand ready to assist DBAs with analysis and syntax support.

Composite Partitioning

Composite partitioning requires a partition and subpartition. The composites are combinations of two types of partitioning—typically, list and range partitioning, or range and hash composite partitioning. Which of these you should choose depends on a few considerations. List and range composite partitioning is done for historical information and is well suited for data warehouses. This method lets you partition on unordered or unrelated column values.

This type of composite partition uses the range as the partition and the list as the subpartition, like the following:

PARTITION BY RANGE (rental_date)
 SUBPARTITION BY LIST (state)
 (PARTITION FQ1_2011 VALUES LESS THAN (TO_DATE('1-APR-2011','DD-MON-YYYY'))
  (SUBPARTITION offshore VALUES('Alaska', 'Hawaii')
  , SUBPARTITION west VALUES('California', 'Oregon', 'Washington')
  , SUBPARTITION desert VALUES ('Arizona','New Mexico')
  , SUBPARTITION rockies VALUES ('Colorado', 'Idaho', 'Montana', 'Wyoming')
  , ... )
,(PARTITION FQ2_2011 VALUES LESS THAN (TO_DATE('1-JUL-2011','DD-MON-YYYY'))
  (SUBPARTITION offshore VALUES('Alaska', 'Hawaii')
  , SUBPARTITION west VALUES('California', 'Oregon', 'Washington')
  , SUBPARTITION desert VALUES ('Arizona','New Mexico')
  , SUBPARTITION rockies VALUES ('Colorado', 'Idaho', 'Montana', 'Wyoming')
  , ... )
, ... )

Range and hash composite partitioning is done for historical information when you also need to stripe data. Striping is the process of creating an attribute in a table that acts as a natural subtype or separator of data. Users typically view data sets of one subtype, which means organizing the data by stripes (subtypes) can speed access based on user access patterns.

Range is typically the partition and the hash is the subpartition in this composite partitioning schema. The syntax for this type of partition is shown next:

PARTITION BY RANGE (rental_date)
 SUBPARTITION BY HASH(store)
  SUBPARTITIONS 8 STORE IN (tablespace1, tablespace2, tablespace3
                           ,tablespace4, tablespace5, tablespace6
                           ,tablespace7, tablespace8)
   ( PARTITION rental_jan2011
     VALUES LESS THAN (TO_DATE('31-JAN-11','DD-MON-YY'))
   , PARTITION rental_feb2011
     VALUES LESS THAN (TO_DATE('28-FEB-11','DD-MON-YY'))
   , PARTITION rental_mar2011
     VALUES LESS THAN (TO_DATE('31-MAR-11','DD-MON-YY'))
   , ... )

Written by maclochlainn

May 30th, 2022 at 9:38 pm

PL/SQL Table Function

without comments

An Oracle example was requested as a comparison against the quick tutorial I wrote on how to do this in PostgreSQL’s PL/pgSQL. Unfortunately, there are many more moving parts to deliver this type of solution in Oracle’s PL/SQL.

The function is the same and simple. It returns the list of conquistadors who were originally German by filtering on the lang column in the table; for example, you use 'de' for German. The additional moving parts are the required User-Defined Types (UDTs): one is a record structure and the other is a list (or, in Oracle parlance, a table).

The drops are unconditional, and as such they trigger errors the first time they're run, but including PL/SQL blocks to make them conditional would have made the code much larger. It's already larger because Oracle doesn't support comma-delimited row lists in a single VALUES clause.
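If you prefer to avoid those first-run errors, a hedged conditional drop looks like the following block, which simply ignores the ORA-00942 error when the table doesn't exist:

BEGIN
  EXECUTE IMMEDIATE 'DROP TABLE conquistador';
EXCEPTION
  WHEN OTHERS THEN
    IF SQLCODE != -942 THEN
      RAISE;
    END IF;
END;
/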

I'll stage this with the same conquistador table used in the last post. Then, connect to SQL*Plus and run the following script file:

/* Drop the conquistador table. */
DROP TABLE conquistador;
 
/* Create the conquistador table. */
CREATE TABLE conquistador
( conquistador_id   NUMBER
, conquistador      VARCHAR(30)
, actual_name       VARCHAR(30)
, nationality       VARCHAR(30)
, lang              VARCHAR(2));
 
/* Drop the conquistador sequence. */
DROP SEQUENCE conquistador_seq;
 
/* Create the conquistador_seq with a 1001 start value. */
CREATE SEQUENCE conquistador_seq START WITH 1001;
 
/* Insert 9 rows into the table. */
INSERT INTO conquistador
( conquistador_id, conquistador, actual_name, nationality, lang )
VALUES
(conquistador_seq.NEXTVAL,'Juan de Fuca','Ioánnis Fokás','Greek','el');
 
INSERT INTO conquistador
( conquistador_id, conquistador, actual_name, nationality, lang )
VALUES 
(conquistador_seq.NEXTVAL,'Nicolás de Federmán','Nikolaus Federmann','German','de');
 
INSERT INTO conquistador
( conquistador_id, conquistador, actual_name, nationality, lang )
VALUES
(conquistador_seq.NEXTVAL,'Sebastián Caboto','Sebastiano Caboto','Venetian','it');
 
INSERT INTO conquistador
( conquistador_id, conquistador, actual_name, nationality, lang )
VALUES
(conquistador_seq.NEXTVAL,'Jorge de la Espira','Georg von Speyer','German','de');
 
INSERT INTO conquistador
( conquistador_id, conquistador, actual_name, nationality, lang )
VALUES
(conquistador_seq.NEXTVAL,'Eusebio Francisco Kino','Eusebius Franz Kühn','Italian','it');
 
INSERT INTO conquistador
( conquistador_id, conquistador, actual_name, nationality, lang )
VALUES
(conquistador_seq.NEXTVAL,'Wenceslao Linck','Wenceslaus Linck','Bohemian','cs');
 
INSERT INTO conquistador
( conquistador_id, conquistador, actual_name, nationality, lang )
VALUES
(conquistador_seq.NEXTVAL,'Fernando Consag','Ferdinand Konšcak','Croatian','sr');
 
INSERT INTO conquistador
( conquistador_id, conquistador, actual_name, nationality, lang )
VALUES
(conquistador_seq.NEXTVAL,'Américo Vespucio','Amerigo Vespucci','Italian','it');
 
INSERT INTO conquistador
( conquistador_id, conquistador, actual_name, nationality, lang )
VALUES
(conquistador_seq.NEXTVAL,'Alejo García','Aleixo Garcia','Portuguese','pt');

While unnecessary in PL/pgSQL, you must create User-Defined Types (UDTs) in Oracle to write a table function. A local procedure also helps because it bundles allocating memory for the list with assigning values to it. These are the UDTs required:

/* Drop the dependency before the dependent type. */
DROP TYPE conquistador_table;
DROP TYPE conquistador_struct;
 
/* Create the UDT for a record structure accessible in SQL. */
CREATE OR REPLACE
  TYPE conquistador_struct IS OBJECT
  ( conquistador      VARCHAR(30)
  , actual_name       VARCHAR(30)
  , nationality       VARCHAR(30));
/
 
/* Create a list of the UDT. */
CREATE OR REPLACE
  TYPE conquistador_table IS TABLE OF conquistador_struct;
/

Drop any existing function or procedure of the same name before you try to build it. Oracle’s OR REPLACE fails when you try to use it for a function when there is already a procedure using the same name, and vice versa.

/* Drop the function to avoid any conflict with a procedure of
   the same name. */
DROP FUNCTION getConquistador;

Now, you can build another script file to create the getConquistador function, like:

/* Create the function. */
CREATE OR REPLACE
  FUNCTION getConquistador
  (pv_lang IN VARCHAR) RETURN conquistador_table IS
 
  /* Declare a return variable. */
  lv_retval  CONQUISTADOR_TABLE := conquistador_table();
 
  /* Declare a dynamic cursor. */
  CURSOR get_conquistador
  ( cv_lang  VARCHAR2 ) IS
    SELECT c.conquistador
    ,      c.actual_name
    ,      c.nationality
    FROM   conquistador c
    WHERE  c.lang = cv_lang;
 
  /* Local procedure to add to the collection. */
  PROCEDURE ADD
  ( pv_input  CONQUISTADOR_STRUCT ) IS
  BEGIN
    lv_retval.EXTEND;
    lv_retval(lv_retval.COUNT) := pv_input;
  END ADD;
 
BEGIN
  /* Read through the cursor and assign to the UDT table. */
  FOR i IN get_conquistador(pv_lang) LOOP
    add(conquistador_struct( i.conquistador
                           , i.actual_name
                           , i.nationality ));
  END LOOP;
 
  /* Return collection. */
  RETURN lv_retval;
END;
/

While there is some white space for readability, the Oracle version is basically twice as long as the PL/pgSQL version. It also requires you to add UDTs to the data dictionary to make it work. PL/pgSQL doesn't let you reference separate type definitions for this; instead, it requires you to enumerate the column definitions in the function itself.

Then, you can test it with the following syntax. The TABLE function is required to convert the list to a SQL consumable result set:

COL conquistador  FORMAT A21
COL actual_name   FORMAT A21
COL nationality   FORMAT A12
SELECT * FROM TABLE(getConquistador('de'));

It will return the following:

CONQUISTADOR          ACTUAL_NAME           NATIONALITY
--------------------- --------------------- ------------
Nicolás de Federmán   Nikolaus Federmann    German
Jorge de la Espira    Georg von Speyer      German
 
2 rows selected.

As always, I hope this helps with a technique that’s useful.

Written by maclochlainn

May 28th, 2022 at 6:47 pm

PostgreSQL Unicode

with 3 comments

It seems unavoidable to use Windows. Each time I'm compelled to run tests on the platform, I find new errors. For example, the Windows console doesn't default to UTF-8, and as a result there's a mismatch when you want to use Unicode in PostgreSQL.

For example, you can change the Active Console Code Page with the chcp (change code page) command to match the one PostgreSQL expects, like:

chcp 1252

It lets you avoid this warning message:

Password for user postgres:
psql (14.1)
WARNING: Console code page (437) differs from Windows code page (1252)
         8-bit characters might not work correctly. See psql reference
         page "Notes for Windows users" for details.
Type "help" for help.
 
postgres=#

However, it won’t avoid display issues with real Unicode values. For example, let’s use a small international table like the following:

/* Conditionally drop the conquistador table. */
DROP TABLE IF EXISTS conquistador;
 
/* Create the conquistador table. */
CREATE TABLE conquistador
( conquistador_id   SERIAL
, conquistador      VARCHAR(30)
, actual_name       VARCHAR(30)
, nationality       VARCHAR(30)
, lang              VARCHAR(2));
 
/* Insert some conquistadors into the table. */
INSERT INTO conquistador
( conquistador
, actual_name
, nationality
, lang )
VALUES
 ('Juan de Fuca','Ioánnis Fokás','Greek','el')
,('Nicolás de Federmán','Nikolaus Federmann','German','de')
,('Sebastián Caboto','Sebastiano Caboto','Venetian','it')
,('Jorge de la Espira','Georg von Speyer','German','de')
,('Eusebio Francisco Kino','Eusebius Franz Kühn','Italian','it')
,('Wenceslao Linck','Wenceslaus Linck','Bohemian','cs')
,('Fernando Consag','Ferdinand Konšcak','Croatian','sr')
,('Américo Vespucio','Amerigo Vespucci','Italian','it')
,('Alejo García','Aleixo Garcia','Portuguese','pt');
 
/* Query the values from the conquistador table. */
SELECT * FROM conquistador;

When you call the script to load it, like:

\i testScript.sql

It’ll display the following, which you can check against the strings in the VALUES clause above. There are encoding issues on lines 1, 2, 3, 5, 7, and 8 below.

 conquistador_id |      conquistador      |     actual_name      | nationality | lang
-----------------+------------------------+----------------------+-------------+------
               1 | Juan de Fuca           | Ioánnis Fokás      | Greek       | el
               2 | Nicolás de Federmán  | Nikolaus Federmann   | German      | de
               3 | Sebastián Caboto      | Sebastiano Caboto    | Venetian    | it
               4 | Jorge de la Espira     | Georg von Speyer     | German      | de
               5 | Eusebio Francisco Kino | Eusebius Franz Kühn | Italian     | it
               6 | Wenceslao Linck        | Wenceslaus Linck     | Bohemian    | cs
               7 | Fernando Consag        | Ferdinand Konšcak   | Croatian    | sr
               8 | Américo Vespucio      | Amerigo Vespucci     | Italian     | it
               9 | Alejo García           | Aleixo Garcia        | Portuguese  | pt
(9 rows)

If you're like me, that's annoying. The problem is that the console's default code page sends byte values into PostgreSQL that don't match the database's Unicode encoding, and those codes are read back with unintended values from other character encoding sets.

While you can't set the Windows system-wide code page to 65001 without causing problems, you can set the Active Console Code Page for the scope of a command-line session before running the script.

The chcp command lets you set the console to UTF-8 (code page 65001), like:

chcp 65001

Now, rerun the script and psql will display the correct characters, with some spacing irregularities. The spacing isn't what matters when you call the table from another programming language through the ODBC layer, because the data will be returned in a proper Unicode (UTF-8) stream.

 conquistador_id |      conquistador      |     actual_name      | nationality | lang
-----------------+------------------------+----------------------+-------------+------
               1 | Juan de Fuca           | Ioánnis Fokás        | Greek       | el
               2 | Nicolás de Federmán    | Nikolaus Federmann   | German      | de
               3 | Sebastián Caboto       | Sebastiano Caboto    | Venetian    | it
               4 | Jorge de la Espira     | Georg von Speyer     | German      | de
               5 | Eusebio Francisco Kino | Eusebius Franz Kühn  | Italian     | it
               6 | Wenceslao Linck        | Wenceslaus Linck     | Bohemian    | cs
               7 | Fernando Consag        | Ferdinand Konšcak    | Croatian    | sr
               8 | Américo Vespucio       | Amerigo Vespucci     | Italian     | it
               9 | Alejo García           | Aleixo Garcia        | Portuguese  | pt
(9 rows)
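If you want to confirm what the session negotiated, you can check both encodings from psql; with chcp 65001 and a database created as UTF8, both should report UTF8:

SHOW server_encoding;
SHOW client_encoding;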

This is similar to an error I encountered testing MySQL Workbench's ability to export SQL Server databases 10 years ago. I thought a solution that coerces correct Unicode data insertion might help those who are surprised by the behavior.

PL/SQL List Function

without comments

Students wanted to see how to write PL/SQL functions that accept, process, and return lists of values. I thought it would be cool to also demonstrate coupling of loop behaviors and wrote the example using the 12-Days of Christmas lyrics.

The twelve_days function accepts two different collections. One is an Attribute Data Type (ADT) and the other a User-Defined Type (UDT). An ADT is based on a scalar data type, and a UDT is based on an object type. Object types are basically data structures, and they support both positional and named notation for variable assignments.

The twelve_days function returns a list of strings, which is an ADT collection of the VARCHAR2 data type. Creating the ADT types is easy and takes a single step each, like:

/* Create a days collection type. */
CREATE OR REPLACE
  TYPE days IS TABLE OF VARCHAR2(8);
/
 
/* Create a song collection type. */
CREATE OR REPLACE
  TYPE song IS TABLE OF VARCHAR2(36);
/

Creating the UDT is more complex and requires two steps. You need to create the UDT object type, or structure, and then the list based on the UDT object type, like:

/* Create a lyric object type. */
CREATE OR REPLACE
  TYPE lyric IS OBJECT
  ( DAY   VARCHAR2(8)
  , gift  VARCHAR2(24));
/
 
/* Create a lyrics object type. */
CREATE OR REPLACE
  TYPE lyrics IS TABLE OF LYRIC;
/

Now, you can create the twelve_days function that uses these ADT and UDT types, like:

CREATE OR REPLACE
  FUNCTION twelve_days
  ( pv_days   DAYS
  , pv_gifts  LYRICS ) RETURN song IS
 
  /* Initialize the collection of lyrics. */
  lv_retval  SONG := song();
 
  /* Local procedure to add to the song. */
  PROCEDURE ADD
  ( pv_input  VARCHAR2 ) IS
  BEGIN
    lv_retval.EXTEND;
    lv_retval(lv_retval.COUNT) := pv_input;
  END ADD;
 
BEGIN
  /* Read forward through the days. */
  FOR i IN 1..pv_days.COUNT LOOP
    ADD('On the ' || pv_days(i) || ' day of Christmas');
    ADD('my true love sent to me:');
 
    /* Read backward through the lyrics based on the ascending value of the day. */
    FOR j IN REVERSE 1..i LOOP
      IF i = 1 THEN
        ADD('-'||'A'||' '||pv_gifts(j).gift);
      ELSE
        ADD('-'||pv_gifts(j).DAY||' '||pv_gifts(j).gift);
      END IF;
    END LOOP;
 
    /* A line break by verse. */
    ADD(CHR(13));
  END LOOP;
 
  /* Return the song's lyrics. */
  RETURN lv_retval;
END;
/

You may notice the local add procedure on lines 10 through 15 of the listing. It lets you perform the two tasks required to populate an element of a SQL collection in one line in the main body of the twelve_days function.

The add procedure first uses the EXTEND function to allocate space before assigning the input value to the newly allocated element in the list. Next, you can call the function inside the following SQL query:

SELECT column_value AS "12-Days of Christmas"
FROM   TABLE(twelve_days(days('first','second','third','fourth'
                             ,'fifth','sixth','seventh','eighth'
                             ,'ninth','tenth','eleventh','twelfth')
                        ,lyrics(lyric(DAY => 'and a', gift => 'Partridge in a pear tree')
                               ,lyric(DAY => 'Two',   gift => 'Turtle doves')
                               ,lyric(DAY => 'Three', gift => 'French hens')
                               ,lyric(DAY => 'Four',  gift => 'Calling birds')
                               ,lyric(DAY => 'Five',  gift => 'Golden rings' )
                               ,lyric(DAY => 'Six',   gift => 'Geese a laying')
                               ,lyric(DAY => 'Seven', gift => 'Swans a swimming')
                               ,lyric(DAY => 'Eight', gift => 'Maids a milking')
                               ,lyric(DAY => 'Nine',  gift => 'Ladies dancing')
                               ,lyric(DAY => 'Ten',   gift => 'Lords a leaping')
                               ,lyric(DAY => 'Eleven',gift => 'Pipers piping')
                               ,lyric(DAY => 'Twelve',gift => 'Drummers drumming'))));

It will print:

12-Days of Christmas
------------------------------------
On the first day of Christmas
my true love sent to me:
-A Partridge in a pear tree
 
On the second day of Christmas
my true love sent to me:
-Two Turtle doves
-and a Partridge in a pear tree
 
On the third day of Christmas
my true love sent to me:
-Three French hens
-Two Turtle doves
-and a Partridge in a pear tree
 
... redacted for space ...
 
On the twelfth day of Christmas
my true love sent to me:
-Twelve Drummers drumming
-Eleven Pipers piping
-Ten Lords a leaping
-Nine Ladies dancing
-Eight Maids a milking
-Seven Swans a swimming
-Six Geese a laying
-Five Golden rings
-Four Calling birds
-Three French hens
-Two Turtle doves
-and a Partridge in a pear tree

As always, I hope the example helps those looking for a solution to this type of problem.

Written by maclochlainn

May 13th, 2022 at 12:57 am

MySQL RegExp Default

with 4 comments

We had an interesting set of questions regarding the REGEXP comparison operator in MySQL today in both sections of Database Design and Development. They wanted to know the default behavior.

For example, we built a little movie table so that we didn’t change their default sakila example database. The movie table was like this:

CREATE TABLE movie
( movie_id     int unsigned primary key auto_increment
, movie_title  varchar(60)) auto_increment=1001;

Then, I inserted the following rows:

INSERT INTO movie 
( movie_title )
VALUES
 ('The King and I')
,('I')
,('The I Inside')
,('I am Legend');

Querying all results with this query:

SELECT * FROM movie;

It returns the following results:

+----------+----------------+
| movie_id | movie_title    |
+----------+----------------+
|     1001 | The King and I |
|     1002 | I              |
|     1003 | The I Inside   |
|     1004 | I am Legend    |
+----------+----------------+
4 rows in set (0.00 sec)

The following REGEXP returns all the rows because it looks for a case insensitive “I” anywhere in the string.

SELECT movie_title
FROM   movie
WHERE  movie_title REGEXP 'I';

The implicit regular expression is actually:

WHERE  movie_title REGEXP '^.*I.*$';

It looks for zero-to-many of any character before and after the “I“. You can get any string beginning with an “I” with the “^I“; and any string ending with an “I” with the “I$“. Interestingly, the “I.+$” should only match strings with one or more characters after the “I“, but it returns:

+----------------+
| movie_title    |
+----------------+
| The King and I |
| The I Inside   |
| I am Legend    |
+----------------+
3 rows in set (0.00 sec)

This caught me by surprise because I was lazy. As pointed out in the comment, it only appears to substitute a “.*“, or zero-to-many evaluation, for the “.+” because it's a case-insensitive search. There's another lowercase “i” in “The King and I“, and that means the regular expression returns true because that “i” has one or more following characters. If we convert it to a case-sensitive comparison with the keyword binary, it works as expected because it ignores the lowercase “i“.

WHERE  binary movie_title REGEXP '^.*I.*$';
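Putting it together, a hedged case-sensitive version of the earlier pattern looks like this; it should exclude “The King and I” and “I” because neither has a character after an uppercase “I”:

SELECT movie_title
FROM   movie
WHERE  binary movie_title REGEXP 'I.+$';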

This builds on my 10-year-old post on Regular Expressions. As always, I hope these notes help others discover features and behaviors of the MySQL database, and, Bill, thanks for catching my error.

Written by maclochlainn

April 29th, 2022 at 11:50 pm

Oracle ODBC DSN

without comments

As I move forward with trying to build an easy-to-use framework for data analysts who use multiple database backends and work on the Windows OS, here's a complete script that lets you run any query stored in a file and return a CSV file. It assumes that you opted to put the user ID and password in the Windows ODBC DSN, and it only provides the ODBC DSN name to make the connection to the ODBC library and database.

# A local function for verbose reporting.
function Get-Message ($param, $value = $null) {
  if (!($value)) {
    Write-Host "Evaluate swtich    [" $param "]" } 	  
  else {
    Write-Host "Evaluate parameter [" $param "] and [" $value "]" } 
}
 
# Read SQLStatement file and minimally parse it.
function Get-SQLStatement ($sqlStatement) {
  # Set localvariable for return string value.
  $statement = ""
 
  # Read a file line-by-line.
  foreach ($line in Get-Content $sqlStatement) {
    # Use regular expression to replace multiple whitespace.
    $line = $line -replace '\s+', ' '
 
    # Add a whitespace to avoid joining keywords from different lines;
    # and remove trailing semicolons which are unneeded.
    if (!($line.endswith(";"))) {
      $statement += $line + " " }
    else {
      $statement += $line.trimend(";") }
  }
  # Returned minimally parsed statement.
  return $statement
}
 
# Set default type of SQL statement value to a query.
$stmt = "select"
 
# Set a variable to hold a SQL statement from a file.
$query = ""
 
# Set default values for SQL input and output files.
$outFile = "output.csv"
$sqlFile = "query.sql"
 
# Set default path to: %USERPROFILE%\AppData\Local\Temp folder, but use
# the tilde (~) in lieu of the %USERPROFILE% environment variable value.
$path = "~\AppData\Local\Temp"
 
# Set a verbose switch.
$verbose = $false
 
# Wrap the Parameter call to avoid a type casting warning.
try {
  param (
    [Parameter(Mandatory)][hashtable]$args
  )
}
catch {}
 
# Check for switches and parameters with arguments.
for ($i = 0; $i -lt $args.count; $i += 1) {
  if (($args[$i].startswith("-")) -and ($args[$i + 1].startswith("-"))) {
    if ($args[$i] = "-v") {
      $verbose = $true }
      # Print to verbose console.
    if ($verbose) { Get-Message $args[$i] }}
  elseif ($args[$i].startswith("-")) {
    # Print to verbose console.
    if ($verbose) { Get-Message $args[$i] $args[$i + 1] }
 
    # Evaluate and take action on parameters and values.
    if ($args[$i] -eq "-o") {
      $outfile = $args[$i + 1] }
    elseif ($args[$i] -eq "-q") {
      $sqlFile = $args[$i + 1] }
    elseif ($args[$i] -eq "-p") {
      $path = $args[$i + 1] }
  }
}
 
# Set a PowerShell Virtual Drive.
New-PSDrive -Name folder -PSProvider FileSystem -Description 'Folder Location' `
            -Root $path | Out-Null
 
# Remove the file only when it exists.
if (Test-Path folder:$outFile) {
  Remove-Item -Path folder:$outFile }
 
# Read SQL file into minimally parsed string.
if (Test-Path folder:$sqlFile) {
  $query = Get-SQLStatement $sqlFile }
 
# Set an ODBC DSN connection string.
$ConnectionString = 'DSN=OracleGeneric'
 
# Set an Oracle Command Object for a query.
$Connection = New-Object System.Data.Odbc.OdbcConnection;
$Connection.ConnectionString = $ConnectionString
 
# Attempt connection.
try {
  $Connection.Open()
 
  # Create a SQL command.
  $Command = $Connection.CreateCommand();
  $Command.CommandText = $query;
 
  # Attempt to read SQL command.
  try {
    $row = $Command.ExecuteReader();
 
    # Read while records are found.
    while ($row.Read()) {
      # Initialize output for each row.
      $output = ""
 
      # Navigate across all columns (only two in this example).
      for ($column = 0; $column -lt $row.FieldCount; $column += 1) {
        # Mechanic for comma-delimit between last and first name.  
        if ($output.length -eq 0) { 
          $output += $row[$column] }
        else {
          $output += ", " + $row[$column] }
      }
      # Write the output from the database to a file.
      Add-Content -Value $output -Path folder:$outFile
    }
  } catch {
    Write-Error "Message: $($_.Exception.Message)"
    Write-Error "StackTrace: $($_.Exception.StackTrace)"
    Write-Error "LoaderExceptions: $($_.Exception.LoaderExceptions)"
  } finally {
    # Close the reader.
    $row.Close() }
} catch {
  Write-Error "Message: $($_.Exception.Message)"
  Write-Error "StackTrace: $($_.Exception.StackTrace)"
  Write-Error "LoaderExceptions: $($_.Exception.LoaderExceptions)"
} finally {
  $Connection.Close() }

You can use a command-line call like this:

powershell ./OracleContact.ps1 -v -o output.csv -q script.sql -p .

It produces the following verbose output to the console:

Evaluate switch    [ -v ]
Evaluate parameter [ -o ] and [ output.csv ]
Evaluate parameter [ -q ] and [ script.sql ]
Evaluate parameter [ -p ] and [ . ]

You can suppress printing to the console by eliminating the -v switch from the parameter list.
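For reference, the script file only needs a plain query. The reader loop above writes two columns per row, so a hypothetical script.sql might contain something like this (the contact table and column names are illustrative):

SELECT   last_name
,        first_name
FROM     contact
ORDER BY last_name, first_name;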

As always, I hope this helps those looking for a solution to less tedious interactions with the Oracle database.

Transaction Management

without comments

Learning Outcomes

  • Learn how to use Multiversion Concurrency Control (MVCC).
  • Learn how to manage ACID-compliant transactions.
  • Learn how to use:

    • SAVEPOINT Statement
    • COMMIT Statement
    • ROLLBACK Statement

Lesson Material

Transaction Management involves two key components. One is Multiversion Concurrency Control (MVCC), so one user doesn't interfere with another user. The other is data transactions. Data transactions package SQL statements in the scope of an imperative language that uses Transaction Control Language (TCL) to extend ACID compliance from single SQL statements to groups of SQL statements.

Multiversion Concurrency Control (MVCC)

Multiversion Concurrency Control (MVCC) uses database snapshots to provide transactions with memory-persistent copies of the database. This means that users, via their SQL statements, interact with the in-memory copies of data rather than directly with physical data. MVCC systems isolate user transactions from each other and guarantee transaction integrity by preventing dirty transactions, writes to the data that shouldn’t happen and that make the data inconsistent. Oracle Database 12c prevents dirty writes by its MVCC and transaction model.

Transaction models depend on transactions, which are ACID-compliant blocks of code. Oracle Database 12c provides an MVCC architecture that guarantees that all changes to data are ACID-compliant, which ensures the integrity of concurrent operations on data—transactions.

ACID-compliant transactions meet four conditions:

  • Atomic: They complete or fail while undoing any partial changes.
  • Consistent: They change from one state to another the same way regardless of whether the change is made through parallel or serial actions.
  • Isolated: Partial changes are never seen by other users or processes in the concurrent system.
  • Durable: They are written to disk and made permanent when completed.

Oracle Database 12c manages ACID-compliant transactions by writing them to disk first, as redo log files only or as both redo log files and archive log files. Then it writes them to the database. This multiple-step process with logs ensures that no completed transaction held in the Oracle database's buffer cache (part of the instance memory) is lost. Log writes occur before the acknowledgement-of-transactions process occurs.

The smallest transaction in a database is a single SQL statement that inserts, updates, or deletes rows. SQL statements can also change values in one or more columns of a row in a table. Each SQL statement is by itself an ACID-compliant and MVCC-enabled transaction when managed by a transaction-capable database engine. The Oracle database is always a transaction-capable system. Transactions are typically a collection of SQL statements that work in close cooperation to accomplish a business objective. They’re often grouped into stored programs, which are functions, procedures, or triggers. Triggers are specialized programs that audit or protect data. They enforce business rules that prevent unauthorized changes to the data.

SQL statements and stored programs are foundational elements for development of business applications. They contain the interaction points between customers and the data and are collectively called the application programming interface (API) to the database. User forms (typically web forms today) access the API to interact with the data. In well-architected business application software, the API is the only interface that the form developer interacts with.

Database developers, such as you and I, create these code components to enforce business rules while providing options to form developers. In doing so, database developers must guard a few things at all cost. For example, some critical business logic and controls must prevent changes to the data in specific tables, even changes in API programs. That type of critical control is often written in database triggers. SQL statements are events that add, modify, or delete data. Triggers guarantee that API code cannot make certain additions, modifications, or deletions to critical resources, such as tables. Triggers can run before or after SQL statements. Their actions, like the SQL statements themselves, are temporary until the calling scope sends an instruction to commit the work performed.

A database trigger can intercept values before they're placed in a column, and it can ensure that only certain values can be inserted into or updated in a column. A trigger overrides an INSERT or UPDATE statement value that violates a business rule and then it either raises an error and aborts the transaction or changes the value before it can be inserted or updated into the table. Chapter 12 offers examples of both types of triggers in Oracle Database 12c.

MVCC determines how to manage transactions. MVCC guarantees how multiple users' SQL statements interact in an ACID compliant manner. The next two sections qualify how data transactions work and how MVCC locks and isolates partial results from data transactions.

Data Transaction

Data Manipulation Language (DML) commands are the SQL statements that transact against the data. They are principally the INSERT, UPDATE, and DELETE statements. The INSERT statement adds new rows in a table, the UPDATE statement modifies columns in existing rows, and the DELETE statement removes a row from a table.

The Oracle MERGE statement transacts against data by providing a conditional insert or update feature. The MERGE statement lets you add new rows when they don’t exist or change column values in rows that do exist.

Inserting data seldom encounters a conflict with other SQL statements because the values become a new row or rows in a table. Updates and deletes, on the other hand, can and do encounter conflicts with other UPDATE and DELETE statements. INSERT statements that encounter conflicts occur when columns in a new row match a preexisting row’s uniquely constrained columns. The insertion is disallowed because only one row can contain the unique column set.

These individual transactions have two phases in transactional databases such as Oracle. The first phase involves making a change that is visible only to the user in the current session. The user then has the option of committing the change, which makes it permanent, or rolling back the change, which undoes the transaction. Developers use Transaction Control Language (TCL) commands to confirm or cancel transactions. The COMMIT statement confirms or makes permanent any change, and the ROLLBACK statement cancels or undoes any change.

A generic transaction lifecycle for a two-table insert process implements a business rule that specifies that neither INSERT statement works unless they both work. Moreover, if the first INSERT statement fails, the second INSERT statement never runs; and if the second INSERT statement fails, the first INSERT statement is undone by a ROLLBACK statement to a SAVEPOINT.

After a failed transaction is unwritten, good development practice requires that you write the failed event(s) to an error log table. The write succeeds because it occurs after the ROLLBACK statement but before the COMMIT statement.
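A hedged sketch of that lifecycle in PL/SQL follows; the member, contact, and error_log tables and their sequences are illustrative names, not part of the earlier examples:

DECLARE
  lv_error  VARCHAR2(4000);
BEGIN
  /* Mark the starting point of the transaction. */
  SAVEPOINT starting_point;

  INSERT INTO member (member_id, member_name)
  VALUES (member_s.NEXTVAL, 'Sample Member');

  INSERT INTO contact (contact_id, member_id, contact_name)
  VALUES (contact_s.NEXTVAL, member_s.CURRVAL, 'Sample Contact');

  /* Make both inserts permanent together. */
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    /* Undo any partial work, then record the failure. */
    ROLLBACK TO starting_point;
    lv_error := SQLERRM;
    INSERT INTO error_log (error_message)
    VALUES (lv_error);
    COMMIT;
END;
/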

A SQL statement followed by a COMMIT statement is called a transaction process, or a two-phase commit (2PC) protocol. ACID-compliant transactions use a 2PC protocol to manage one SQL statement or collections of SQL statements. In a 2PC protocol model, the INSERT, UPDATE, MERGE, or DELETE DML statement starts the process and submits changes. These DML statements can also act as events that fire database triggers assigned to the table being changed.

Transactions become more complex when they include database triggers because triggers can inject an entire layer of logic within the transaction scope of a DML statement. For example, database triggers can do the following:

  • Run code that verifies, changes, or repudiates submitted changes
  • Record additional information after validation in other tables (they can't write to the table being changed, or, in database lexicon, the "mutating" table)
  • Throw exceptions to terminate a transaction when the values don’t meet business rules

As a general rule, triggers can’t contain a COMMIT or ROLLBACK statement because they run inside the transaction scope of a DML statement. Oracle databases give developers an alternative to this general rule because they support autonomous transactions. Autonomous transactions run outside the transaction scope of the triggering DML statement. They can contain a COMMIT statement and act independently of the calling scope statement. This means an autonomous trigger can commit a transaction when the calling transaction fails.
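A hedged sketch of an autonomous logging procedure shows the mechanics; the write_log procedure and log_table names are illustrative:

CREATE OR REPLACE PROCEDURE write_log
( pv_message  VARCHAR2 ) IS
  /* Run outside the caller's transaction scope. */
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO log_table (log_message)
  VALUES (pv_message);

  /* Commit only the autonomous insert. */
  COMMIT;
END write_log;
/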

As independent statements or collections of statements add, modify, and remove rows, one statement transacts against data only by locking rows: the SELECT statement. A SELECT statement typically doesn’t lock rows when it acts as a cursor in the scope of a stored program. A cursor is a data structure that contains rows of one-to-many columns in a stored program. This is also known as a list of record structures.

Cursors act like ordinary SQL queries, except they’re managed by procedure programs row by row. There are many examples of procedural programming languages. PL/SQL and SQL/PSM programming languages are procedural languages designed to run inside the database. C, C++, C#, Java, Perl, and PHP are procedural languages that interface with the database through well-defined interfaces, such as Java Database Connectivity (JDBC) and Open Database Connectivity (ODBC).

Cursors can query data two ways. One way locks the rows so that they can’t be changed until the cursor is closed; closing the cursor releases the lock. The other way doesn’t lock the rows, which allows them to be changed while the program is working with the data set from the cursor. The safest practice is to lock the rows when you open the cursor, and that should always be the case when you’re inserting, updating, or deleting rows that depend on the values in the cursor not changing until the transaction lifecycle of the program unit completes.

Loops use cursors to process data sets. That means the cursors are generally opened at or near the beginning of program units. Inside the loop the values from the cursor support one to many SQL statements for one to many tables.

Stored and external programs create their operational scope inside a database connection when they’re called by another program. External programs connect to a database and enjoy their own operational scope, known as a session scope. The session defines the programs’ operational scope. The operational scope of a stored program or external program defines the transaction scope. Inside the transaction scope, the programs interact with data in tables by inserting, updating, or deleting data until the operations complete successfully or encounter a critical failure. These stored program units commit changes when everything completes successfully, or they roll back changes when any critical instruction fails. Sometimes, the programs are written to roll back changes when any instruction fails.

In the Oracle Database, the most common clause to lock rows is the FOR UPDATE clause, which is appended to a SELECT statement. An Oracle database also supports a WAIT n seconds or NOWAIT option. The WAIT option is a blessing when you want to reply to an end user form's request and can't make the change quickly. Without this option, a change could hang around for a long time, which means virtually indefinitely to a user trying to run your application. The default behavior in an Oracle database is WAIT without a timeout, which means the session waits indefinitely.

You should avoid this default behavior when developing program units that interact with customers. The Oracle Database also supports a full table lock with the SQL LOCK TABLE command, but you would need to embed the command inside a stored or external program’s instruction set.
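A hedged example of the row-locking clause, reusing the ordering table from the partitioning posts, surrenders after five seconds instead of blocking indefinitely:

SELECT   ordering_id
,        rental_amount
FROM     ordering
WHERE    ordering_id = 1001
FOR UPDATE WAIT 5;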

Written by maclochlainn

April 5th, 2022 at 2:20 pm

Oracle Unit Test

without comments

A unit test script may contain SQL or PL/SQL statements or it may call another script file that contains SQL or PL/SQL statements. Moreover, a script file is a way to bundle several activities into a single file because most unit test programs typically run two or more instructions as unit tests.

Unconditional Script File

You can write a simple unit test like the example program provided in the Lab 1 Help Section, which includes conditional logic. However, you can write a simpler script that is unconditional and raises exceptions when preconditions do not exist.

The following script file creates a one table and a one_s sequence. The DROP TABLE and DROP SEQUENCE statements have the same precondition, which is that the table or sequence must previously exist.

-- Drop table one.
DROP TABLE one;
 
-- Create table one.
CREATE TABLE one
( one_id    NUMBER
, one_text  VARCHAR2(10));
 
-- Drop sequence one_s.
DROP SEQUENCE one_s;
 
-- Create sequence one_s.
CREATE SEQUENCE one_s;

After writing the script file, you can save it in the lab2 subdirectory as the unconditional.sql file. After you log in to the SQL*Plus environment from the lab2 subdirectory, you call the unconditional.sql script file with the following syntax:

@unconditional.sql

It will display the following output, which raises an exception when the one table or one_s sequence does not already exist in the schema or database:

DROP TABLE one
           *
ERROR at line 1:
ORA-00942: table or view does not exist
 
Table created.
 
DROP SEQUENCE one_s
              *
ERROR at line 1:
ORA-02289: sequence does not exist
 
Sequence created.

An unconditional script raises exceptions when a precondition of the statement does not exist. The precondition is not limited to objects, like the table or sequence; and the precondition may be specific data in one or several rows of one or several tables. You can avoid raising conditional errors by writing conditional scripts.

Conditional Script File

A conditional script file contains statements that check for a precondition before running a statement, which effectively promotes their embedded statements to a lambda function. The following logic recreates the logic of the unconditional.sql script file as a conditional script file:

-- Conditionally drop a table and sequence.
BEGIN
  FOR i IN (SELECT   object_name
            ,        object_type
            FROM     user_objects
            WHERE    object_name IN ('ONE','ONE_S')
            ORDER BY object_type ) LOOP
    IF i.object_type = 'TABLE' THEN
      EXECUTE IMMEDIATE 'DROP TABLE '||i.object_name||' CASCADE CONSTRAINTS';
    ELSE
      EXECUTE IMMEDIATE 'DROP SEQUENCE '||i.object_name;
    END IF;
  END LOOP;
END;
/
 
-- Create table one.
CREATE TABLE one
( one_id    NUMBER
, one_text  VARCHAR2(10));
 
-- Create sequence one_s.
CREATE SEQUENCE one_s;

You can save this script in the lab2 subdirectory as conditional.sql and then unit test it in SQL*Plus. You must manually drop the one table and one_s sequence before running the conditional.sql script to test the preconditions.

You will see that the conditional.sql script does not raise an exception when the one table or one_s sequence is missing. It should generate output to the console, like this:

PL/SQL procedure successfully completed.
 
Table created.
 
Sequence created.

As a rule, you should always write conditional script files. Unconditional script files throw meaningless errors, which may cause your good code to fail a deployment test that requires error-free code.

Written by maclochlainn

April 5th, 2022 at 1:59 pm

Selective Aggregation

without comments

Learning Outcomes

  • Learn how to combine CASE operators and aggregation functions.
  • Learn how to selectively aggregate values.
  • Learn how to use SQL to format report output.

Selective aggregation is the combination of the CASE operator and aggregation functions. An aggregation function counts, sums, or averages the numbers that it finds, and when you embed the result of a CASE operator inside an aggregation function you get a selective result. The selectivity is determined by the WHEN clauses of the CASE operator, which work more or less like an IF statement in an imperative programming language.

The prototype for selective aggregation is illustrated with a SUM function below:

SELECT   SUM(CASE
               WHEN left_operand = right_operand THEN result
               WHEN left_operand > right_operand THEN result
               WHEN left_operand IN (SET OF comma-delimited VALUES) THEN result
               WHEN left_operand IN (query OF results) THEN result
               ELSE alt_result
             END) AS selective_aggregate
FROM     some_table;

A small example lets you see how selective aggregation works. You create a PAYMENT table and PAYMENT_S sequence for this example, as follows:

-- Create a PAYMENT table.
CREATE TABLE payment
( payment_id     NUMBER
, payment_date   DATE	      CONSTRAINT nn_payment_1 NOT NULL
, payment_amount NUMBER(20,2) CONSTRAINT nn_payment_2 NOT NULL
, CONSTRAINT pk_payment PRIMARY KEY (payment_id));
 
-- Create a PAYMENT_S sequence.
CREATE SEQUENCE payment_s;

After you create the table and sequence, you should insert some data. You can match the totals below or choose your own values; you just need to insert values for a large number of rows.
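A hedged seeding block, which uses DBMS_RANDOM to spread 10,000 payments across 2019, looks like this (your totals will differ from the ones shown below):

BEGIN
  FOR i IN 1..10000 LOOP
    INSERT INTO payment
    ( payment_id
    , payment_date
    , payment_amount )
    VALUES
    ( payment_s.NEXTVAL
    , TO_DATE('01-JAN-2019','DD-MON-YYYY') + TRUNC(DBMS_RANDOM.VALUE(0,365))
    , ROUND(DBMS_RANDOM.VALUE(1,1000),2));
  END LOOP;
  COMMIT;
END;
/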

After inserting 10,000 rows, you can get an unformatted total with the following query:

-- Query total amount.
SELECT   SUM(payment_amount) AS payment_total
FROM     payment;

It outputs the following:

PAYMENT_TOTAL
-------------
   5011091.75

You can nest the result inside the TO_CHAR function to format the output, like:

-- Query total formatted amount.
SELECT   TO_CHAR(SUM(payment_amount),'999,999,999.00') AS payment_total
FROM     payment;

It outputs the following:

PAYMENT_TOTAL
---------------
   5,011,091.75

Somebody may suggest that you use a PIVOT function to rotate the data into a summary by month, but the PIVOT function has limits. The pivoting key must be numeric, and the column headings use only those numeric values.

-- Pivoted summaries by numeric monthly value.
SELECT   *
FROM    (SELECT EXTRACT(MONTH FROM payment_date) payment_month
         ,      payment_amount
         FROM   payment)
         PIVOT (SUM(payment_amount) FOR payment_month IN
                 (1,2,3,4,5,6,7,8,9,10,11,12));

It outputs the following:

         1          2          3          4          5          6          7          8          9         10         11         12
---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
 245896.55  430552.36  443742.63  457860.27  470467.18  466370.71  415158.28  439898.72  458998.09  461378.56  474499.22  246269.18

You can use selective aggregation to get the results by a character label, like:

SELECT   SUM(
           CASE
             WHEN EXTRACT(MONTH FROM payment_date) = 1
             AND  EXTRACT(YEAR FROM payment_date) = 2019  THEN payment_amount
           END) AS "JAN"
,        SUM(
           CASE
             WHEN EXTRACT(MONTH FROM payment_date) = 2
             AND  EXTRACT(YEAR FROM payment_date) = 2019  THEN payment_amount
           END) AS "FEB"
,        SUM(
           CASE
             WHEN EXTRACT(MONTH FROM payment_date) = 3
             AND  EXTRACT(YEAR FROM payment_date) = 2019  THEN payment_amount
           END) AS "MAR"
,        SUM(
           CASE
             WHEN EXTRACT(MONTH FROM payment_date) IN (1,2,3)
             AND  EXTRACT(YEAR FROM payment_date) = 2019 THEN payment_amount
           END) AS "1FQ"
,        SUM(
           CASE
             WHEN EXTRACT(MONTH FROM payment_date) = 4
             AND  EXTRACT(YEAR FROM payment_date) = 2019  THEN payment_amount
           END) AS "APR"
FROM     payment;

It outputs the following:

       JAN        FEB        MAR        1FQ        APR
---------- ---------- ---------- ---------- ----------
 245896.55  430552.36  443742.63 1120191.54  457860.27

You can format the output with a combination of the TO_CHAR and LPAD functions. The TO_CHAR function allows you to add a formatting mask, complete with commas and two mandatory digits to the right of the decimal point. The reformatted query looks like:

COL JAN FORMAT A13 HEADING "Jan"
COL FEB FORMAT A13 HEADING "Feb"
COL MAR FORMAT A13 HEADING "Mar"
COL 1FQ FORMAT A13 HEADING "1FQ"
COL APR FORMAT A13 HEADING "Apr"
SELECT   LPAD(TO_CHAR(SUM(
           CASE
             WHEN EXTRACT(MONTH FROM payment_date) = 1
             AND  EXTRACT(YEAR FROM payment_date) = 2019  THEN payment_amount
           END),'9,999,999.00'),13,' ') AS "JAN"
,        LPAD(TO_CHAR(SUM(
           CASE
             WHEN EXTRACT(MONTH FROM payment_date) = 2
             AND  EXTRACT(YEAR FROM payment_date) = 2019  THEN payment_amount
           END),'9,999,999.00'),13,' ') AS "FEB"
,        LPAD(TO_CHAR(SUM(
           CASE
             WHEN EXTRACT(MONTH FROM payment_date) = 3
             AND  EXTRACT(YEAR FROM payment_date) = 2019  THEN payment_amount
           END),'9,999,999.00'),13,' ') AS "MAR"
,        LPAD(TO_CHAR(SUM(
           CASE
             WHEN EXTRACT(MONTH FROM payment_date) IN (1,2,3)
             AND  EXTRACT(YEAR FROM payment_date) = 2019 THEN payment_amount
           END),'9,999,999.00'),13,' ') AS "1FQ"
,        LPAD(TO_CHAR(SUM(
           CASE
             WHEN EXTRACT(MONTH FROM payment_date) = 4
             AND  EXTRACT(YEAR FROM payment_date) = 2019  THEN payment_amount
           END),'9,999,999.00'),13,' ') AS "APR"
FROM     payment;

It displays the formatted output:

Jan           Feb           Mar           1FQ           Apr
------------- ------------- ------------- ------------- -------------
   245,896.55    430,552.36    443,742.63  1,120,191.54    457,860.27