Archive for July, 2017
Cassandra Query Language
After installing Cassandra and reading Cassandra: The Definitive Guide, it struck me that I should learn a bit more about the Cassandra Query Language (CQL). So, after I set up a single-node environment, I created a .bashcassandra environment file to connect as the student user to the Cassandra instance:
# Add the Java and JRE paths to the $PATH environments.
export set PATH=$PATH:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.45.x86_64:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.45.x86_64/jre

# Add the $JAVA_HOME and $JRE_HOME environment variables.
export set JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.45.x86_64/
export set JRE_HOME=/usr
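To pick these settings up in a login shell, you can source the file from the student user's .bashrc; a minimal sketch, assuming the file was saved as ~/.bashcassandra:

# Source the Cassandra environment file if it exists.
if [ -f "$HOME/.bashcassandra" ]; then
  . "$HOME/.bashcassandra"
fi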
Having started Cassandra as the cassandra user, I connected to the Cassandra Query Language Shell (cqlsh) to learn how to write CQL. You can find the basic structure of CQL on the Apache Cassandra website. I also discovered that CQL by itself can’t join tables; you need Apache SparkSQL, which became available in 2015, to perform joins against Cassandra.
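For example, a relational-style join isn’t valid CQL and fails with a syntax error; without SparkSQL you either denormalize the model or run separate queries and stitch the results together in the client. A small sketch against the member and contact tables created later in this post:

/* A join is not valid CQL, so something like the following raises a SyntaxException:
   SELECT * FROM contact c JOIN member m ON c.member_number = m.member_number; */

/* Instead, query each table separately and combine the results in the client. */
SELECT member_number, member_type
FROM   member
WHERE  member_number = 'SFO-12345';

/* Filtering contact rows on the non-key member_number column needs a secondary
   index or an explicit ALLOW FILTERING clause. */
SELECT contact_number, first_name, last_name
FROM   contact
WHERE  member_number = 'SFO-12345'
ALLOW FILTERING;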
I also learned that you can’t use a CREATE OR REPLACE command when you change certain aspects of User-Defined Functions (UDFs). You actually need to drop a UDF before you change its RETURNS NULL ON NULL INPUT clause to a CALLED ON NULL INPUT clause, or vice versa. You also can’t embed Java that connects to the database without using the cassandra-java-driver-2.0.2 driver.
You connect to cqlsh like this:

cqlsh
Here’s my script that creates a Cassandra keyspace, which is more or less a database. You use the USE command to connect to the keyspace or database, like you would in MySQL. You do not have sequences in Cassandra because they’re not a good fit for a distributed architecture. Cassandra does not support a native procedural extension like relational databases. You must create User-Defined Functions (UDFs) by embedding the logic in Java.
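Since there are no sequences, surrogate keys typically come from the uuid() function (or a timeuuid via now()) or from the application. A quick sketch with a hypothetical account table that is not part of the script below:

/* Hypothetical table keyed by a generated uuid instead of a sequence value. */
CREATE TABLE IF NOT EXISTS account
( account_id    UUID
, account_name  VARCHAR
, PRIMARY KEY ( account_id ));

/* The uuid() function stands in for a sequence-generated surrogate key. */
INSERT INTO account
( account_id, account_name )
VALUES
( uuid(), 'Sample Account' );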
This script does the following:
- Creates a keyspace
- Uses the keyspace
- Conditionally drops tables and functions
- Creates two tables
- Inserts data into the two tables
- Queries data from the tables
I also included a call to a UDF inside a query in two of the examples. One of the queries demonstrates how to return a JSON structure from a query. To simplify things and clarify the script’s behavior, the details are outlined below.
- The first segment of the script creates the keyspace, changes the scope to use the keyspace, conditionally drops tables, creates tables, and inserts values into the tables:
/* Create a keyspace in Cassandra, which is like a database
   in MySQL or a schema in Oracle. */
CREATE KEYSPACE IF NOT EXISTS student
  WITH REPLICATION = {
     'class':'SimpleStrategy'
    ,'replication_factor': 1 }
  AND DURABLE_WRITES = true;

/* Use the keyspace or connect to the database. */
USE student;

/* Drop the member table from the student keyspace. */
DROP TABLE IF EXISTS member;

/* Create a member table in the student keyspace. */
CREATE TABLE member
( member_number       VARCHAR
, member_type         VARCHAR
, credit_card_number  VARCHAR
, credit_card_type    VARCHAR
, PRIMARY KEY ( member_number ));

/* Conditionally drop the contact table from the student keyspace. */
DROP TABLE IF EXISTS contact;

/* Create a contact table in the student keyspace. */
CREATE TABLE contact
( contact_number  VARCHAR
, contact_type    VARCHAR
, first_name      VARCHAR
, middle_name     VARCHAR
, last_name       VARCHAR
, member_number   VARCHAR
, PRIMARY KEY ( contact_number ));

/* Insert a row into the member table. */
INSERT INTO member
( member_number, member_type, credit_card_number, credit_card_type )
VALUES
('SFO-12345','GROUP','2222-4444-5555-6666','VISA');

/* Insert a row into the contact table. */
INSERT INTO contact
( contact_number, contact_type, first_name, middle_name, last_name, member_number )
VALUES
('CUS_00001','FAMILY','Barry', NULL,'Allen','SFO-12345');

/* Insert a row into the contact table. */
INSERT INTO contact
( contact_number, contact_type, first_name, middle_name, last_name, member_number )
VALUES
('CUS_00002','FAMILY','Iris', NULL,'West-Allen','SFO-12345');

/* Insert a row into the member table. */
INSERT INTO member
( member_number, member_type, credit_card_number, credit_card_type )
VALUES
('SFO-12346','GROUP','3333-8888-9999-2222','VISA');

/* Insert a row into the contact table. */
INSERT INTO contact
( contact_number, contact_type, first_name, middle_name, last_name, member_number )
VALUES
('CUS_00003','FAMILY','Caitlin','Marie','Snow','SFO-12346');
- The following queries the member table:
/* Select all columns from the member table. */
SELECT * FROM member;
It returns the following:
member_number | credit_card_number  | credit_card_type | member_type
--------------+---------------------+------------------+-------------
    SFO-12345 | 2222-4444-5555-6666 |             VISA |       GROUP
    SFO-12346 | 3333-8888-9999-2222 |             VISA |       GROUP
- Create a concatenate User-Defined Function (UDF) for Cassandra. The first step requires you to edit the cassandra.yaml file, which you find in the /etc/cassandra/default.conf directory. There is a single parameter that you need to edit: the enable_user_defined_functions parameter. By default the parameter is set to false, and you need to enable it to create UDFs. After you make the edit, the cassandra.yaml file should look like this:
# If unset, all GC Pauses greater than gc_log_threshold_in_ms will log at
# INFO level
# UDFs (user defined functions) are disabled by default.
# As of Cassandra 3.0 there is a sandbox in place that should prevent execution of evil code.
enable_user_defined_functions: true
After you make the change, you can create your own UDF. The following UDF formats the first, middle, and last name so there’s only one whitespace between the first and last name when the middle name value is null.
This type of function must use a CALLED ON NULL INPUT clause in lieu of a RETURNS NULL ON NULL INPUT clause. The latter would force the function to return a null value if any one of the parameters were null.

/* Drop the concatenate function because a replace disallows changing a
   RETURNS NULL ON NULL INPUT with a CALLED ON NULL INPUT without raising
   an "89: InvalidRequest" exception. */
DROP FUNCTION concatenate;

/* Create a user-defined function to concatenate names. */
CREATE OR REPLACE FUNCTION concatenate
( first_name   VARCHAR
, middle_name  VARCHAR
, last_name    VARCHAR )
CALLED ON NULL INPUT
RETURNS VARCHAR
LANGUAGE java
AS $$
  /* Concatenate first and last names when middle name is null, and
     first, middle, and last names when middle name is not null. */
  String name;

  /* Check for null middle name. */
  if (middle_name == null) {
    name = first_name + " " + last_name; }
  else {
    name = first_name + " " + middle_name + " " + last_name; }
  return name;
$$;
- Query the values from the contact table with the UDF in the SELECT-list:

/* Query the contact information. */
SELECT member_number
,      contact_number
,      contact_type
,      concatenate(first_name, middle_name, last_name) AS full_name
FROM   contact;
It returns the following:
member_number | contact_number | contact_type | full_name
--------------+----------------+--------------+--------------------
    SFO-12345 |      CUS_00001 |       FAMILY |         Barry Allen
    SFO-12345 |      CUS_00002 |       FAMILY |     Iris West-Allen
    SFO-12346 |      CUS_00003 |       FAMILY |  Caitlin Marie Snow
- Query the values from the contact table with a JSON format:

/* Query the contact information and return in a JSON format. */
SELECT JSON contact_number
,      contact_type
,      concatenate(first_name, middle_name, last_name) AS full_name
FROM   contact;
It returns the following:
 [json]
-------------------------------------------------------------------------------------------------
 {"contact_number": "CUS_00001", "contact_type": "FAMILY", "full_name": "Barry Allen"}
 {"contact_number": "CUS_00002", "contact_type": "FAMILY", "full_name": "Iris West-Allen"}
 {"contact_number": "CUS_00003", "contact_type": "FAMILY", "full_name": "Caitlin Marie Snow"}
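CQL also accepts JSON on the way in, which is handy when the data already arrives as documents; an INSERT statement with the JSON keyword maps the document keys to column names. A small sketch that adds a hypothetical fourth member:

/* Insert a row into the member table from a JSON document. */
INSERT INTO member JSON '{"member_number": "SFO-12347", "member_type": "GROUP", "credit_card_number": "4444-1111-2222-3333", "credit_card_type": "VISA"}';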
You can call the script from a relative directory inside cqlsh, like this:

SOURCE 'cstudent.cql'
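You can also run the same script without opening an interactive session by passing it to cqlsh with the -f option from the operating system shell, like this:

# Run the CQL script non-interactively from the OS shell.
cqlsh -f cstudent.cql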
At the end of the day, the concept of adding and removing nodes is attractive. However, the lack of normal relational mechanics and the narrow set of supported CQL semantics leaves me with open questions. For example, is clustering without a coordinator really valuable enough to settle for eventual, or tunable, consistency with such a narrowly scoped query language?
As always, I hope this helps those looking for a quick how-to on Cassandra.
DevOps Handbook Review
DevOps: Is it a mindset or process? That’s a big question for managers because while you can’t manage a mindset, you can manage a process. DevOps is actually a framework of processes.
Some say DevOps is actually a lot like making a patchwork quilt. You need to design the patches before you figure out how to stitch them together. I believe this is true, and I base that on my experience as a release engineer at Oracle Corporation.
You need to adopt:
- An Agile system development life cycle that ensures you value outcome over process.
- A value stream that lets you track the efficiency and effectiveness of costs versus outcomes.
- A measurable development pipeline that tracks costs and outcomes.
- A management strategy that handles operational, tactical, and strategic goals.
Those are big goals to accomplish but how will you achieve them? I would suggest that you first try to understand the experiences of others who adopted a DevOps methodology.
I’d like to recommend The DevOps Handbook because it uses case studies to highlight the best practices in DevOps. The authors do use some engineering terms to describe things that could be simpler. For example, telemetry is a fancy word for plans, but it strikes me that they use the word for a purpose. The authors want to commit us to adopting a rigorous, planned route before we implement DevOps approaches.
While I may not choose all the words they did to convey these ideas and case studies, they are consistent in their approach. Along the way, they introduce the supporting case studies one by one. Moreover, they demonstrate approaches taken to solve types of problems and leave the integration of ideas to us.
Here’s a layout of the book since the table of contents is missing on Amazon.com’s website.
- Part 1: How DevOps applies lean principles to technology
- Chapter 1: Introduction
- Chapter 2: The Principles of Flow
- Chapter 3: The Principles of Feedback
- Chapter 4: The Principles of Continual Learning and Experimentation
- Part 2: Where to Start the DevOps Transformation
- Chapter 5: Selecting which Value Stream to Start With
- Chapter 6: Understanding the Work in Our Value Stream
- Chapter 7: How to Design Our Organization and Architecture
- Chapter 8: How to Get Great Outcomes by Integrating Operations into Daily Work
- Part 3: How to Implement Technical Practices for Continuous Delivery
- Chapter 9: Create the Foundations of the Deployment Pipeline
- Chapter 10: Enable Fast and Reliable Automated Testing
- Chapter 11: Enable and Practice Continuous Integration
- Chapter 12: Automate and Enable Low-Risk Releases
- Chapter 13: Architect for Low-Risk Releases
- Part 4: How to Implement Technical Practices for Fast and Continuous Feedback
- Chapter 14: Create Telemetry to Enable Seeing and Solving Problems
- Chapter 15: Analyze Telemetry to Better Anticipate Problems and Achieve Goals
- Chapter 16: Enable Feedback to Safely Deploy Code
- Chapter 17: Integrate Hypothesis-Driven Development and Testing
- Chapter 18: Create Review and Coordination Processes to Increase Quality of Work
- Part 5: How to Implement Feedback to Drive Sooner, Faster, and Cheaper Results
- Chapter 19: Enable and Inject Learning into Daily Work
- Chapter 20: Convert Local Discoveries into Global Improvements
- Chapter 21: Reserve Time to Create Organizational Learning and Improvement
- Part 6: How to Implement Feedback on Achieving Information Security Goals
- Chapter 22: Information Security as Everyone’s Job, Every Day
- Chapter 23: Protecting the Deployment Pipeline
I hope this helps those looking for a good reference on DevOps.
Install Cassandra on Fedora
It was quite interesting to discover that DataStax no longer provides the DataStax Community version of Apache Cassandra or the DataStax Distribution of Apache Cassandra. Needless to say, I was quite disappointed, because it makes it more difficult for beginning developers to get an opportunity to learn Cassandra.
I spent a good hour sorting through what was available and then figuring out the real requirements to install Apache Cassandra 3.11. These are the instructions.
Install Java and JRE as Prerequisites
Download the Java SE Runtime Environment package (jre-8u141-linux-x64.rpm). You should use the rpm utility to install the JRE package, like the following example:

rpm -ivh /home/student/Downloads/jre-8*.rpm
It should generate the following installation report:
Preparing...                          ################################# [100%]
	package jre1.8.0_141-1.8.0_141-fcs.x86_64 is already installed
sh-4.2# rpm -qa jre
sh-4.2# rpm -qf jre
error: file /jre: No such file or directory
sh-4.2# rpm -qa | grep jre
jre1.8.0_141-1.8.0_141-fcs.x86_64
sh-4.2# rpm -qa | grep jre | rpm -qi
rpm: no arguments given for query
sh-4.2# rpm -qi `rpm -qa | grep jre`
Name        : jre1.8.0_141
Version     : 1.8.0_141
Release     : fcs
Architecture: x86_64
Install Date: Mon 24 Jul 2017 11:09:58 PM PDT
Group       : Development/Tools
Size        : 139460427
License     : http://java.com/license
Signature   : (none)
Source RPM  : jre1.8.0_141-1.8.0_141-fcs.src.rpm
Build Date  : Wed 12 Jul 2017 04:47:52 AM PDT
Build Host  : jdk7-lin2-amd64
Relocations : /usr/java
Packager    : Java Software <jre-comments@java.sun.com>
Vendor      : Oracle Corporation
URL         : URL_REF
Summary     : Java Platform Standard Edition Runtime Environment
Description :
The Java Platform Standard Edition Runtime Environment (JRE) contains
everything necessary to run applets and applications designed for the
Java platform. This includes the Java virtual machine, plus the Java
platform classes and supporting files.

The JRE is freely redistributable, per the terms of the included license.
Confirm Java and JRE Installation
You can confirm the Java and JRE installation with the alternatives utility, using the --config option and the keyword java or jre.
sh-4.2# alternatives --config java
It should generate the following list when you check for the java library:
There are 3 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
*  1           /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79-2.5.5.0.fc20.x86_64/jre/bin/java
+  2           /usr/lib/jvm/jre-1.8.0-openjdk.x86_64/bin/java
   3           /usr/java/jre1.8.0_141/bin/java

Enter to keep the current selection[+], or type selection number:
It should generate the following list when you check for the javac library:
There are 2 programs which provide 'javac'.

  Selection    Command
-----------------------------------------------
*  1           /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79-2.5.5.0.fc20.x86_64/bin/javac
+  2           /usr/lib/jvm/java-1.8.0-openjdk.x86_64/bin/javac

Enter to keep the current selection[+], or type selection number:
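If you prefer to skip the interactive prompt, the alternatives utility also accepts a --set option; a sketch that points the java alternative at the Oracle JRE path shown in the listing above:

# Point the java alternative at the Oracle JRE non-interactively.
alternatives --set java /usr/java/jre1.8.0_141/bin/java

# Confirm which selection is now active.
alternatives --display java | head -3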
After installing the packages and selecting them as the designated alternatives (if you have more than one Java or JRE installed on your OS), you should create a configuration file for the root user. You should include the following to set your $PATH, $JAVA_HOME, and $JRE_HOME environment variables:
# Add the Java and JRE paths to the $PATH environments.
export set PATH=$PATH:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.45.x86_64:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.45.x86_64/jre

# Add the $JAVA_HOME and $JRE_HOME environment variables.
export set JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.45.x86_64/
export set JRE_HOME=/usr
Install Apache Cassandra
The yum utility is the best way to install Apache Cassandra. However, you will need to configure the /etc/yum.repos.d/cassandra.repo file before you attempt to install Cassandra 3.11 from the Apache organization, like this:
[cassandra]
name=Apache Cassandra
baseurl=https://www.apache.org/dist/cassandra/redhat/311x/
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://www.apache.org/dist/cassandra/KEYS
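Before running the install, it doesn’t hurt to confirm that yum actually sees the new repository; a quick check like the following, where the repository id comes from the [cassandra] block above:

# Verify the Apache Cassandra repository is enabled.
yum repolist enabled | grep -i cassandra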
After you’ve added the necessary yum configuration file and ensured you’re using both Java 1.8 and JRE 1.8, you can install Apache Cassandra with the following yum command as the root user or as a sudoer member with the sudo command:
yum install -y cassandra
If successful, you should see the following output:
Loaded plugins: langpacks, refresh-packagekit
cassandra/signature                                         |  819 B  00:00
cassandra/signature                                         | 2.9 kB  00:00 !!!
mysql-connectors-community                                  | 2.5 kB  00:00
mysql-tools-community                                       | 2.5 kB  00:00
mysql56-community                                           | 2.5 kB  00:00
http://yum.postgresql.org/9.3/fedora/fedora-20-x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
updates/20/x86_64/metalink                                  | 2.6 kB  00:00
Resolving Dependencies
--> Running transaction check
---> Package cassandra.noarch 0:3.11.0-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package          Arch          Version          Repository          Size
================================================================================
Installing:
 cassandra        noarch        3.11.0-1         cassandra           28 M

Transaction Summary
================================================================================
Install  1 Package

Total download size: 28 M
Installed size: 37 M
Downloading packages:
warning: /var/cache/yum/x86_64/20/cassandra/packages/cassandra-3.11.0-1.noarch.rpm: Header V4 RSA/SHA256 Signature, key ID fe4b2bda: NOKEY
Public key for cassandra-3.11.0-1.noarch.rpm is not installed
cassandra-3.11.0-1.noarch.rpm                               |  28 MB  00:07
Retrieving key from https://www.apache.org/dist/cassandra/KEYS
Importing GPG key 0xF2833C93:
 Userid     : "Eric Evans <eevans@sym-link.com>"
 Fingerprint: cec8 6bb4 a0ba 9d0f 9039 7cae f835 8fa2 f283 3c93
 From       : https://www.apache.org/dist/cassandra/KEYS
Importing GPG key 0x8D77295D:
 Userid     : "Eric Evans <eevans@sym-link.com>"
 Fingerprint: c496 5ee9 e301 5d19 2ccc f2b6 f758 ce31 8d77 295d
 From       : https://www.apache.org/dist/cassandra/KEYS
Importing GPG key 0x2B5C1B00:
 Userid     : "Sylvain Lebresne (pcmanus) <sylvain@datastax.com>"
 Fingerprint: 5aed 1bf3 78e9 a19d ade1 bcb3 4bd7 36a8 2b5c 1b00
 From       : https://www.apache.org/dist/cassandra/KEYS
Importing GPG key 0x0353B12C:
 Userid     : "T Jake Luciani <jake@apache.org>"
 Fingerprint: 514a 2ad6 31a5 7a16 dd00 47ec 749d 6eec 0353 b12c
 From       : https://www.apache.org/dist/cassandra/KEYS
Importing GPG key 0xFE4B2BDA:
 Userid     : "Michael Shuler <michael@pbandjelly.org>"
 Fingerprint: a26e 528b 271f 19b9 e5d8 e19e a278 b781 fe4b 2bda
 From       : https://www.apache.org/dist/cassandra/KEYS
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction (shutdown inhibited)
Warning: RPMDB altered outside of yum.
  Installing : cassandra-3.11.0-1.noarch                    1/1
  Verifying  : cassandra-3.11.0-1.noarch                    1/1

Installed:
  cassandra.noarch 0:3.11.0-1

Complete!
Starting Cassandra
You start Cassandra as the cassandra user. Before starting Cassandra, you need to create a .bashrc file for the cassandra user because one isn’t created by default, since you can’t log on to the Linux OS as the cassandra user. The home directory for the cassandra user is /var/lib/cassandra, and the owner of that directory is the root user.
As the root user, create the following .bashrc file for the cassandra user:
# Wrap cqlsh with rlwrap to edit prior lines with the
# up, down, left and right keys.
cqlsh()
{
  if [ "$RLWRAP" = "0" ]; then
    command cqlsh "$@"
  else
    rlwrap cqlsh "$@"
  fi
}

# Set vi as a command line editor.
set -o vi

# Add the Java and JRE paths to the $PATH environments.
export set PATH=$PATH:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.45.x86_64:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.45.x86_64/jre

# Add the $JAVA_HOME and $JRE_HOME environment variables.
export set JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.45.x86_64/
export set JRE_HOME=/usr
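Because you can’t log on as the cassandra user directly, one way to get a shell as that user is to override its login shell with su as the root user; a small sketch:

# As root, open a bash shell as the cassandra user despite its nologin shell.
su - cassandra -s /bin/bash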
You should start Cassandra in the background, like this:

cassandra
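Once the node finishes starting, you can verify it is healthy before connecting; nodetool reports UN (Up/Normal) for a working single-node cluster:

# Check that the single node is Up/Normal (UN) in the first column.
nodetool status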
Using Cassandra
You connect to cqlsh, like this:

cqlsh
You will see the following:
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.11.0 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh> HELP

Documented shell commands:
===========================
CAPTURE  CLS          COPY  DESCRIBE  EXPAND  LOGIN   SERIAL  SOURCE   UNICODE
CLEAR    CONSISTENCY  DESC  EXIT      HELP    PAGING  SHOW    TRACING

CQL help topics:
================
AGGREGATES               CREATE_KEYSPACE           DROP_TRIGGER      TEXT
ALTER_KEYSPACE           CREATE_MATERIALIZED_VIEW  DROP_TYPE         TIME
ALTER_MATERIALIZED_VIEW  CREATE_ROLE               DROP_USER         TIMESTAMP
ALTER_TABLE              CREATE_TABLE              FUNCTIONS         TRUNCATE
ALTER_TYPE               CREATE_TRIGGER            GRANT             TYPES
ALTER_USER               CREATE_TYPE               INSERT            UPDATE
APPLY                    CREATE_USER               INSERT_JSON       USE
ASCII                    DATE                      INT               UUID
BATCH                    DELETE                    JSON
BEGIN                    DROP_AGGREGATE            KEYWORDS
BLOB                     DROP_COLUMNFAMILY         LIST_PERMISSIONS
BOOLEAN                  DROP_FUNCTION             LIST_ROLES
COUNTER                  DROP_INDEX                LIST_USERS
CREATE_AGGREGATE         DROP_KEYSPACE             PERMISSIONS
CREATE_COLUMNFAMILY      DROP_MATERIALIZED_VIEW    REVOKE
CREATE_FUNCTION          DROP_ROLE                 SELECT
CREATE_INDEX             DROP_TABLE                SELECT_JSON
Upgrade APEX 4 to 5.1
This blog post shows you how to upgrade APEX version 4.0.2 on a default Oracle Database 11g XE instance to APEX 5.1.2. Oracle’s APEX upgrade document was just a bit too short and missed some details. This post is divided into five parts. The first part confirms your starting point. The second part downloads and positions the extracted software. The third part installs APEX 5. The fourth part configures APEX 5. The fifth part shows you how to access and get to the APEX 5 home page.
Confirm APEX 4 Installation
- Verify the database version by connecting as the system user through SQL*Plus and running the following query:

SELECT banner
FROM   v$version
WHERE  banner LIKE 'Oracle Database%';
It should return the following when you’re upgrading the Oracle Database 11g XE:
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
- Verify the APEX version by connecting as the system user through SQL*Plus and running the following query:

COLUMN version_no        FORMAT A16 HEADING "Version Number"
COLUMN api_compatibility FORMAT A16 HEADING "API|Compatibility"
COLUMN patch_applied     FORMAT A14 HEADING "Patch Applied"

SELECT * FROM apex_release;
It should return the following when you start with the base Oracle Database 11g XE:
                 API
Version Number   Compatibility    Patch Applied
---------------- ---------------- --------------
4.0.2.00.09      2010.05.13
- Verify the XML Database version by connecting as the system user through SQL*Plus and running the following query:

COLUMN comp_name FORMAT A20
COLUMN version   FORMAT A12
COLUMN status    FORMAT A8

SELECT comp_name
,      version
,      status
FROM   dba_registry
WHERE  comp_id = 'XDB';
It should return the following when you’re upgrading the Oracle Database 11g XE:
COMP_NAME            VERSION      STATUS
-------------------- ------------ --------
Oracle XML Database  11.2.0.2.0   VALID
- Verify the memory_target parameter of the instance by connecting as the system user through SQL*Plus and running the following command. It should be no smaller than 300 MB.

show parameter memory_target
It should return the following when you’re upgrading the Oracle Database 11g XE:
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
memory_target                        big integer 1G
- Oracle’s instructions qualify that APEX 5.1 installs into the APEX_050100 schema. Oracle creates the new schema with sysaux as the default tablespace and temp as the temporary tablespace. You can verify these as the system user through SQL*Plus by running the following two queries. The first one checks the tablespaces of the existing APEX_040000 schema and the second checks available space and auto extensibility.

COLUMN default_tablespace   FORMAT A22
COLUMN temporary_tablespace FORMAT A22

SELECT default_tablespace
,      temporary_tablespace
FROM   dba_users
WHERE  username = 'APEX_040000';
It should return the following when you’re upgrading the Oracle Database 11g XE:
DEFAULT_TABLESPACE     TEMPORARY_TABLESPACE
---------------------- ----------------------
SYSAUX                 TEMP
COLUMN tablespace_name FORMAT A10        HEADING "Tablespace Name"
COLUMN file_name       FORMAT A38        HEADING "File Name"
COLUMN available_space FORMAT 999,999.00 HEADING "Available|Space MB"
COLUMN autoextensible  FORMAT A10        HEADING "Auto|Extensible"

SELECT tablespace_name
,      file_name
,      ((maxbytes - bytes) / 1024) / 1024 AS available_space
,      autoextensible
FROM   dba_data_files
WHERE  tablespace_name IN ('SYSAUX','SYSTEM');
It should return the following when you’re upgrading the Oracle Database 11g XE:
Tablespace                                          Available  Auto
Name       File Name                                 Space MB  Extensible
---------- -------------------------------------- ----------- ----------
SYSAUX     /u01/app/oracle/oradata/XE/sysaux.dbf    32,027.98  YES
SYSTEM     /u01/app/oracle/oradata/XE/system.dbf       200.00  YES
Download APEX 5
- Download the APEX software from the Oracle web site. Assuming you download the software as the student user, you can save it in your Downloads directory. You should open a Terminal session and connect as the oracle user. If you’ve set up your instance correctly, you will need to first become the root user and then the oracle user. As the oracle user, you source the Oracle environment and copy the apex_x.x.x.zip file from the ~student/Downloads directory to the /u01/app/oracle directory.
- You copy the file from the student user’s Downloads directory with the following command:

cp /home/student/Downloads/apex_x.x.x.zip /u01/app/oracle
- You unzip the copied apex_x.x.x.zip file (version 5.1.2 in this example) with the following command, and it will create a new apex directory as a subdirectory of the /u01/app/oracle directory.

unzip apex_5.1.2.zip
Install APEX 5
- You should query the dba_users view to check the status of the apex_public_user and anonymous user accounts, like this:

COLUMN username       FORMAT A18 HEADING "User Name"
COLUMN account_status FORMAT A10 HEADING "Account|Status"

SELECT username
,      account_status
FROM   dba_users
WHERE  username IN ('APEX_PUBLIC_USER','ANONYMOUS');
It should return the following when you’re upgrading the Oracle Database 11g XE, though the ANONYMOUS account may show OPEN if you’ve previously unlocked it:
                   Account
User Name          Status
------------------ ----------
APEX_PUBLIC_USER   LOCKED
ANONYMOUS          LOCKED
- You can unlock the apex_public_user and anonymous accounts with the following statements:

ALTER USER apex_public_user ACCOUNT UNLOCK;
ALTER USER anonymous ACCOUNT UNLOCK;
- The installation uses the flows_files schema, which should be installed. You can verify the default and temporary tablespaces with the following query:

COLUMN default_tablespace   FORMAT A22
COLUMN temporary_tablespace FORMAT A22

SELECT default_tablespace
,      temporary_tablespace
FROM   dba_users
WHERE  username = 'FLOWS_FILES';
DEFAULT_TABLESPACE     TEMPORARY_TABLESPACE
---------------------- ----------------------
SYSAUX                 TEMP
- Open a Terminal session, connect as the oracle user, source the Oracle environment file, change your active directory to the /u01/app/oracle/apex directory, and open a SQL*Plus connection as the sys user. You need superuser privileges, which means you need to connect to the Oracle database with the “sys as sysdba” syntax.

sqlplus sys as sysdba
You can now install APEX 5.x.x by calling the following script with four parameters:
@apexins.sql SYSAUX SYSAUX TEMP /i/
It will take a couple minutes for the installation script to succeed. You will know that it is completed when you see the following message:
Thank you for installing Oracle Application Express 5.1.2.00.09

Oracle Application Express is installed in the APEX_050100 schema.

The structure of the link to the Application Express administration services is as follows:
http://host:port/pls/apex/apex_admin (Oracle HTTP Server with mod_plsql)
http://host:port/apex/apex_admin     (Oracle XML DB HTTP listener with the embedded PL/SQL gateway)
http://host:port/apex/apex_admin     (Oracle REST Data Services)

The structure of the link to the Application Express development interface is as follows:
http://host:port/pls/apex (Oracle HTTP Server with mod_plsql)
http://host:port/apex     (Oracle XML DB HTTP listener with the embedded PL/SQL gateway)
http://host:port/apex     (Oracle REST Data Services)

PL/SQL procedure successfully completed.

Disconnected from Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
- After installing APEX 5, you can re-verify the APEX version by connecting as the system user through SQL*Plus and running the following query:

COLUMN version_no        FORMAT A16 HEADING "Version Number"
COLUMN api_compatibility FORMAT A16 HEADING "API|Compatibility"
COLUMN patch_applied     FORMAT A14 HEADING "Patch Applied"

SELECT * FROM apex_release;
It should return the following after upgrading with APEX 5:
                 API
Version Number   Compatibility    Patch Applied
---------------- ---------------- --------------
5.1.2.00.09      2016.08.24       APPLIED
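For reference, the four positional parameters passed to apexins.sql above are, as I understand them, the APEX tablespace, the APEX files tablespace, the temporary tablespace, and the virtual directory for images:

-- @apexins.sql <tablespace_apex> <tablespace_files> <tablespace_temp> <images>
--   SYSAUX : tablespace for the APEX application schema
--   SYSAUX : tablespace for the APEX files user
--   TEMP   : temporary tablespace
--   /i/    : virtual directory for the APEX images
@apexins.sql SYSAUX SYSAUX TEMP /i/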
Configure APEX 5
- At this point, you need to set the internal password, which you can do by navigating to the /u01/app/oracle/apex directory. In that directory, you should open a SQL*Plus session as the sys user with the “sys as sysdba” privilege. Run the following script to set the APEX Administrator’s credentials:

@apxchpwd.sql
It will prompt you for parameters, like so:
================================================================================
This script can be used to change the password of an Application Express
instance administrator. If the user does not yet exist, a user record will be
created.
================================================================================
Enter the administrator's username [ADMIN] ADMIN
User "ADMIN" exists.
Enter ADMIN's email [ADMIN] mclaughlinm@byui.edu
Enter ADMIN's password []
Changed password of instance administrator ADMIN.
- The next task requires you to run the apex_epg_config.sql script with one directory parameter. You should be connected as the sys user with the “sys as sysdba” privilege:

@apex_epg_config.sql /u01/app/oracle
It will take a couple minutes to complete this script, and when it is complete it displays:
. Loading images directory: /u01/app/oracle/apex/images
- While the default port for XML DB is 8080, you should confirm it with this query:
SELECT dbms_xdb.gethttpport FROM dual;
It should return the following:
GETHTTPPORT
-----------
       8080
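If the query returns 0 instead, the embedded PL/SQL gateway isn’t listening; a sketch of setting the port as the sys user with the dbms_xdb package:

-- Enable the XML DB HTTP listener on port 8080 when gethttpport returns 0.
EXECUTE dbms_xdb.sethttpport(8080);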
Connect to and use APEX 5
- You can type the following URL into your local browser to get to the APEX 5 Administration page:
http://localhost:8080/apex/apex_admin
It should display the following login page. The password is the one you entered when you ran the apxchpwd.sql script in the configuration section of this post.
After you enter proper credentials, click the Sign in to Administration button to proceed. It should display the following APEX 5 home page.
You can now work in APEX 5 Administration and set up individual workspaces.
Cleanup APEX 4
Migrating functionality to APEX 5 is possible, but reworking the existing design in the context of new features is better. After you have migrated your applications and upgraded your production instance, you can drop the APEX_040000 user/schema and remove any APEX 4 workspaces. This segment shows you how to remove an APEX 4 workspace and drop the APEX_040000 user/schema.
- The following anonymous PL/SQL block will remove an APEX 4 workspace from a user schema. It’s designed for you to run it inside the target schema but you can change it to run it as the system user against multiple schemas.
DECLARE
  /* Cursor for all APEX 4 tables and sequences. */
  CURSOR c IS
    SELECT uo.object_type
    ,      uo.object_name
    FROM   user_objects uo
    WHERE  uo.object_name IN ('DEPT'
                             ,'APEX$_WS_WEBPG_SECTION_HISTORY'
                             ,'APEX$_ACL'
                             ,'APEX$_WS_WEBPG_SECTIONS'
                             ,'APEX$_WS_ROWS'
                             ,'EMP'
                             ,'APEX$_WS_FILES'
                             ,'APEX$_WS_TAGS'
                             ,'APEX$_WS_LINKS'
                             ,'APEX$_WS_NOTES'
                             ,'DEMO_USERS'
                             ,'DEMO_CUSTOMERS'
                             ,'DEMO_ORDERS'
                             ,'DEMO_PRODUCT_INFO'
                             ,'DEMO_ORDER_ITEMS'
                             ,'DEMO_STATES'
                             ,'APEX$_WS_HISTORY'
                             ,'DEMO_USERS_SEQ'
                             ,'DEMO_PROD_SEQ'
                             ,'DEMO_ORD_SEQ'
                             ,'DEMO_ORDER_ITEMS_SEQ'
                             ,'DEMO_CUST_SEQ'
                             ,'CUSTOM_HASH'
                             ,'CUSTOM_AUTH')
    ORDER BY uo.object_type DESC;
BEGIN
  FOR i IN c LOOP
    IF i.object_type = 'TABLE' THEN
      EXECUTE IMMEDIATE 'DROP '||i.object_type||' '||i.object_name||' CASCADE CONSTRAINTS';
    ELSE
      EXECUTE IMMEDIATE 'DROP '||i.object_type||' '||i.object_name;
    END IF;
  END LOOP;
END;
/
- You can then connect as the sys user with the “sys as sysdba” privilege and drop the APEX_040000 user/schema, like this:

DROP USER apex_040000 CASCADE;
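A quick query against dba_users confirms the schema is really gone; no rows should return:

-- Verify the APEX 4 schema no longer exists.
SELECT username
FROM   dba_users
WHERE  username LIKE 'APEX_04%';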
SQL Logic Overkill, again …
It’s interesting to watch people try to solve problems. For example, the student is required to use a scalar subquery in a SQL lab exercise that I wrote. It should be a simple fix. The problem is structured with an incorrect foreign key value in an external CSV file and the restriction that you cannot replace the value in the external CSV file. I hoped that students would see that the easiest option was to write a scalar subquery in the SELECT clause to replace the value found in the external file. There’s even a hint about how to use a scalar subquery.
Students who are new to SQL can take very interesting approaches to solve problems. The flexibility of SQL can lead them to solve problems in interesting ways. While the following solution worked to solve the problem, it’s wrong on two levels:
 1  INSERT INTO TRANSACTION
 2  (SELECT transaction_s1.NEXTVAL
 3   ,      tr.transaction_account
 4   ,      CASE
 5            WHEN NOT tr.transaction_type =
 6              (SELECT common_lookup_id
 7               FROM   common_lookup
 8               WHERE  common_lookup_table = 'TRANSACTION'
 9               AND    common_lookup_column = 'TRANSACTION_TYPE'
10               AND    common_lookup_type = 'CREDIT')
11            THEN cl.common_lookup_id
12          END AS transaction_type
13   ,      tr.transaction_date
14   ,      (tr.transaction_amount / 1.06) AS transaction_amount
15   ,      tr.rental_id
16   ,      tr.payment_method_type
17   ,      tr.payment_account_number
18   ,      tr.created_by
19   ,      tr.creation_date
20   ,      tr.last_updated_by
21   ,      tr.last_update_date
22   FROM   transaction_reversal tr CROSS JOIN common_lookup cl
23   WHERE  cl.common_lookup_table = 'TRANSACTION'
24   AND    cl.common_lookup_column = 'TRANSACTION_TYPE'
25   AND    cl.common_lookup_type = 'CREDIT');
The CASE statement on lines 4 through 12 substitutes a value only when the source value is not a match. That means if the source file is ever correct, a null value would become the transaction_type column value, which would make the statement fail because the transaction_type column is NOT NULL constrained in the target transaction table. Therefore, the logic of the student’s approach requires adding an ELSE clause to the CASE statement for the event that the source file is ever corrected. The modified CASE statement would be the following:
 4   ,      CASE
 5            WHEN NOT tr.transaction_type =
 6              (SELECT common_lookup_id
 7               FROM   common_lookup
 8               WHERE  common_lookup_table = 'TRANSACTION'
 9               AND    common_lookup_column = 'TRANSACTION_TYPE'
10               AND    common_lookup_type = 'CREDIT')
11            THEN cl.common_lookup_id
12            ELSE tr.transaction_type
13          END AS transaction_type
The second element of student thought at issue is the CROSS JOIN to the in-line view. It does one thing right and another wrong. It uses the unique key to identify a single row, which effectively adds all the columns for that one row to all rows returned from the external transaction_reversal table. The CROSS JOIN is a correct approach to adding values to a query when you need those columns for computations. The problem with this CROSS JOIN logic may not be immediately obvious when you write it in ANSI SQL 1992 syntax, but it should become obvious when you replace the inline view with a Common Table Expression (CTE) in ANSI SQL 1999 syntax, like this:
 1  INSERT INTO TRANSACTION
 2  (WITH cte AS
 3    (SELECT *
 4     FROM   common_lookup
 5     WHERE  common_lookup_table = 'TRANSACTION'
 6     AND    common_lookup_column = 'TRANSACTION_TYPE'
 7     AND    common_lookup_type = 'CREDIT')
 8   SELECT transaction_s1.NEXTVAL
 9   ,      tr.transaction_account
10   ,      cte.common_lookup_id AS transaction_type
11   ,      tr.transaction_date
12   ,      (tr.transaction_amount / 1.06) AS transaction_amount
13   ,      tr.rental_id
14   ,      tr.payment_method_type
15   ,      tr.payment_account_number
16   ,      tr.created_by
17   ,      tr.creation_date
18   ,      tr.last_updated_by
19   ,      tr.last_update_date
20   FROM   transaction_reversal tr CROSS JOIN cte);
Unfortunately, you would discover that Oracle Database 11g does not support the use of an ANSI SQL 1999 WITH clause as the source for an INSERT statement. Oracle Database 12c does support the use of the ANSI SQL 1999 WITH clause inside a subquery of an INSERT statement. That’s an “Oops!” for Oracle 11g because it means the Oracle database fails to meet the ANSI SQL 1999 compliance test. 😉 Great that they fixed it in Oracle 12c. While the nested query would work in Oracle as an ordinary query (outside of an INSERT statement), it raises the following error when you embed it in an INSERT statement:
ERROR at line 20:
ORA-32034: unsupported use of WITH clause
The WITH clause does highlight a key problem with the idea of a CROSS JOIN in this situation. You don’t need all the columns from the common_lookup table. You only need the common_lookup_id column. That makes the CROSS JOIN approach suboptimal even if it worked.
The complex logic in the original approach is wasted. That’s true because the common_lookup_id value can be supplied to each row as the value from a scalar subquery. The scalar query runs once and the result is placed in the return set for each row. You implement the scalar subquery in the SELECT clause, like this:
 1  INSERT INTO TRANSACTION
 2  (SELECT transaction_s1.NEXTVAL
 3   ,      tr.transaction_account
 4   ,      (SELECT common_lookup_id
 5           FROM   common_lookup
 6           WHERE  common_lookup_table = 'TRANSACTION'
 7           AND    common_lookup_column = 'TRANSACTION_TYPE'
 8           AND    common_lookup_type = 'CREDIT') AS transaction_type
 9   ,      tr.transaction_date
10   ,      (tr.transaction_amount / 1.06) AS transaction_amount
11   ,      tr.rental_id
12   ,      tr.payment_method_type
13   ,      tr.payment_account_number
14   ,      tr.created_by
15   ,      tr.creation_date
16   ,      tr.last_updated_by
17   ,      tr.last_update_date
18   FROM   transaction_reversal tr);
There really was no intent or logical outcome where the value from the original CASE statement would differ from the subquery’s common_lookup_id value. That fact makes adding an ELSE clause useless, and the solution viable though inefficient. Also, there was no need for the additional columns from the common_lookup table because they are unused. The subquery on lines 4 through 8 provides the optimal solution and improved efficiency.
Developers should ask themselves two questions when they write SQL:
- If my logic is so elegant why do I need it to be so elegant?
- Is there a simpler solution to provide the desired result set?
If there aren’t good answers to both questions, they should re-write it. I hope the examples answer questions and help folks solve problems.