DBMS_DICTIONARY_CHECK Package Back-Ported to 19c [19.22]

The DBMS_DICTIONARY_CHECK package has been back-ported from 23c to Oracle 19c as of the January 2024 RU (19.22). It is a read-only PL/SQL package that helps you identify Oracle Database dictionary inconsistencies. In a sense it is the evolution of the hcheck.sql script, which you had to download and install; that is no longer needed, and having this functionality available as a built-in package is a real positive.

To Run the Package to examine your database state:

SQL> set serveroutput on size unlimited
SQL> execute dbms_dictionary_check.full;

A detailed report will be generated.

To check “Critical” findings and their status:

SQL> execute dbms_dictionary_check.critical

The package flags three main areas as “Critical”; these should be in a “PASS” or valid state, otherwise they have a negative impact on the database and will require your attention and a fix.

To Run a FULL Repair for the reported findings:

SQL> set serveroutput on size unlimited
SQL> EXECUTE dbms_dictionary_check.full(repair=>TRUE);

For further reference documentation: https://docs.oracle.com/en/database/oracle/oracle-database/23/arpls/dbms-dictionary-check.html#GUID-BD44F6AB-437B-4351-9CAE-9190675FAD21

Oracle 23c Boolean Data Type

In the Oracle 23c release you can now have a BOOLEAN data type in a table column. To illustrate, I will create a dummy table and insert different values; note that NULL is accepted:

SQL> CREATE TABLE hr.example (id NUMBER, c1 BOOLEAN, c2 BOOL);

SQL> INSERT INTO hr.example VALUES (1, TRUE, NULL);

SQL> INSERT INTO hr.example VALUES (2, TRUE, 0);

SQL> INSERT INTO hr.example VALUES (3, TRUE, False);

SQL> INSERT INTO hr.example VALUES (4, 0, 1);

SQL> commit;

Querying the table and filtering on TRUE values:
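A minimal sketch of what such queries could look like, using the hr.example table created above (in 23c a BOOLEAN column can be compared to TRUE or used directly as a condition):

```sql
-- query all rows, including the row with the NULL boolean
SELECT * FROM hr.example;

-- filter on TRUE values: compare explicitly, or use the column directly
SELECT id FROM hr.example WHERE c1 = TRUE;
SELECT id FROM hr.example WHERE c1;
```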

more information can be found here:

https://docs.oracle.com/en/database/oracle/oracle-database/23/sqlrf/Data-Types.html#GUID-285FFCA8-390D-4FA9-9A51-47B84EF5F83A

Oracle Export DataPump in Cloud Autonomous Environment

Using your ADMIN account in an Oracle Autonomous Database environment, you have two methods to export data using export Data Pump:

Oracle SQL Developer

DBMS_DATAPUMP API

Here I am going to demonstrate the DBMS_DATAPUMP API method:

First, I will create a user-defined directory called “RAMBO”; this directory will be created automatically under the DBFS file system.
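A sketch of how such a directory could be created (the directory name RAMBO is from this post; in Autonomous Database the underlying path is managed for you under DBFS):

```sql
-- create a user-defined directory object; Autonomous Database creates
-- the corresponding path automatically in the DBFS file system
CREATE DIRECTORY rambo AS 'rambo';

-- verify the directory object is visible
SELECT directory_name, directory_path
FROM   dba_directories
WHERE  directory_name = 'RAMBO';
```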

After that, I will use DBMS_DATAPUMP API to run export datapump job:

set scan off
set serveroutput on
set escape off
whenever sqlerror exit
DECLARE
h1 number;
s varchar2(1000):=NULL;
errorvarchar varchar2(100):= 'ERROR';
tryGetStatus number := 0;
success_with_info EXCEPTION;
PRAGMA EXCEPTION_INIT(success_with_info, -31627);
begin
h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'FULL', job_name => 'EXP_SD1', version => 'COMPATIBLE');
tryGetStatus := 1;
dbms_datapump.set_parameter(handle => h1, name => 'COMPRESSION', value => 'ALL');
dbms_datapump.set_parallel(handle => h1, degree => 1);
dbms_datapump.add_file(handle => h1, filename => 'EXPDAT-'||to_char(sysdate,'hh24_mi_ss')||'.LOG', directory => 'RAMBO', filetype => 3);
dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 1);
dbms_datapump.add_file(handle => h1, filename => 'EXPDAT%U-'||to_char(sysdate,'hh24_mi_ss')||'.DMP', directory => 'RAMBO', filesize => '500M', filetype => 1);
dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC');
dbms_datapump.set_parameter(handle => h1, name => 'ESTIMATE', value => 'BLOCKS');
dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
dbms_datapump.detach(handle => h1);
errorvarchar := 'NO_ERROR';
EXCEPTION
WHEN OTHERS THEN
BEGIN
IF ((errorvarchar = 'ERROR') AND (tryGetStatus = 1)) THEN
DBMS_DATAPUMP.DETACH(h1);
END IF;
EXCEPTION
WHEN OTHERS THEN
NULL;
END;
RAISE;
END;
/

To list files (dump file and log file) generated:

SELECT * FROM DBMS_CLOUD.list_files('RAMBO');

Oracle 19c Installation on Red Hat Linux 9

You can install Oracle 19c on Red Hat Linux 9; while running the installer command, errors were thrown:

./runInstaller 

There are missing library packages that require installation in your OS:

dnf install -y libnsl
dnf install -y libnsl.i686
dnf install -y libnsl2
dnf install -y libnsl2.i686

I hope this helps…

ORA-12547: TNS:lost contact After Operating System Restart

My final post in 2023. A weird issue was faced where, after a Linux operating system reboot, a database system starts up in a defective way. When you attempt to connect through the Bequeath protocol:

ERROR:
ORA-12547: TNS:lost contact

After investigation, the solution:

cat /etc/oratab

The current entry was the following:

ORCLCDB:/opt/oracle/product/19c/dbhome_1/:Y

changed it to:

ORCLCDB:/opt/oracle/product/19c/dbhome_1:Y

Typos can cause serious problems… it took some time to figure out the problem.
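Since the root cause was a trailing slash in the ORACLE_HOME field, this kind of typo can be caught with a quick scripted check. A sketch, assuming the standard SID:ORACLE_HOME:Y|N oratab format (the ORATAB variable is only for illustration):

```shell
# flag oratab entries whose ORACLE_HOME field ends with a trailing slash
ORATAB=${ORATAB:-/etc/oratab}   # assumption: standard oratab location
awk -F: '!/^#/ && NF >= 2 && $2 ~ /\/$/ {print "trailing slash: " $0}' "$ORATAB"
```

Running this against the entry above would flag the ORCLCDB line until the trailing slash is removed.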

Oracle Database JAVA Patch Error: [javavm_refresh] Error 127

While patching the Oracle 19c database JAVA component, the following errors were thrown:

Patching component oracle.javavm.client, 19.0.0.0.0…
Make failed to invoke "/usr/bin/make -f ins_rdbms.mk javavm_refresh ORACLE_HOME=/opt/oracle/product/19c/dbhome_1 OPATCH_SESSION=apply"....'make: perl: Command not found
make: *** [javavm_refresh] Error 127

NApply was not able to restore the home. Please invoke the following scripts:

  • restore.[sh,bat]
  • make.txt (Unix only)
    to restore the ORACLE_HOME. They are located under
    "/opt/oracle/product/19c/dbhome_1/.patch_storage/NApply/2023-11-01_14-06-22PM"

UtilSession failed: Re-link fails on target "javavm_refresh".

Log file location: /opt/oracle/product/19c/dbhome_1/cfgtoollogs/opatch/opatch2023-11-01_14-06-22PM_1.log

OPatch failed with error code 73

The first thing to do is restore the Oracle database binaries to a consistent state:

cd /opt/oracle/product/19c/dbhome_1/.patch_storage/NApply/2023-11-01_14-06-22PM

./restore.sh

The second step is to add the Perl location, $ORACLE_HOME/perl/bin, to your $PATH environment variable and test it after reloading the profile:

export PATH=$ORACLE_HOME/perl/bin:$PATH

echo $PATH

env | grep perl

After that, try to apply the JAVA patch again using the OPatch utility; it should be successful!

Oracle Database 23c Hybrid Read-Only mode for pluggable databases (PDBs)

Prior to the Oracle 23c release you had two options for OPEN_MODE with pluggable databases: read/write (the default mode) or read-only.

With read-only mode, all types of accounts, local PDB accounts and common-user accounts alike, can perform only read operations on the pluggable database tables.

In Oracle 23c a new mode is introduced: hybrid read-only mode. Local PDB accounts can perform ONLY read operations against tables, while common-user accounts can perform read/write operations against tables in the pluggable database.

Let us check the traditional v$pdbs view to see the OPEN_MODE of my current 23c pluggable databases:

SQL> select name, open_mode from v$pdbs;

Also, let us explore the NEW database view V$CONTAINER_TOPOLOGY that includes the column IS_HYBRID_READ_ONLY:

SQL> select * from V$CONTAINER_TOPOLOGY;

-- will create a common user called C##DBA granted the DBA role

SQL> CREATE USER C##DBA IDENTIFIED BY test123 CONTAINER=ALL;

SQL> grant dba to C##DBA CONTAINER=ALL;

SQL> alter user C##DBA default role all;

-- will create a local user named dba1 in pluggable database PDB2

SQL> create user dba1 identified by test123;

SQL> grant dba to dba1;

SQL> alter user dba1 default role all;

Now, let us enable hybrid read-only mode:

sqlplus / as sysdba

SQL> ALTER PLUGGABLE DATABASE PDB2 CLOSE IMMEDIATE;

Pluggable database altered.

SQL> ALTER PLUGGABLE DATABASE PDB2 OPEN HYBRID READ ONLY;

Pluggable database altered.

Checking from the container level, the pluggable database PDB2 is now open in “Hybrid Read Only” mode.

Let us test the expected functionality:

As the local user I will be able to perform “SELECT” queries only; any data modification statements such as INSERT, UPDATE, DELETE will not work:

sqlplus "dba1/test123"@PDB2

SQL> select * from hr.table1;

FNAME                LANAME
-------------------- --------------------
EMAD                 AL-MOUSA
Tim                  Allen

SQL> insert into hr.table1 values ('Adam','Smith');
insert into hr.table1 values ('Adam','Smith')
               *
ERROR at line 1:
ORA-16000: Attempting to modify database or pluggable database that is open for
read-only access.

Now, let us try with the common user C##DBA; this account should be able to perform all kinds of operations (SELECT, INSERT, UPDATE, DELETE), unlike a “local” database account:

sqlplus "C##DBA/test123"@PDB2

SQL> select * from hr.table1;

FNAME                LANAME
-------------------- --------------------
EMAD                 AL-MOUSA
Tim                  Allen

SQL> insert into hr.table1 values ('Adam','Smith');

1 row created.

SQL> commit;

Commit complete.
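To take the PDB back out of hybrid read-only mode, it can simply be closed and reopened in normal read/write mode; a minimal sketch, run as SYSDBA in the CDB root:

```sql
-- revert PDB2 from hybrid read-only back to normal read/write
ALTER PLUGGABLE DATABASE PDB2 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE PDB2 OPEN READ WRITE;
```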

Someone might ask: nice feature, but what benefits does it provide?

  • It helps with “maintenance” activities where you want to restrict application accounts from performing any data modifications for some time, while only container-level DBAs can modify data.
  • It helps with patching, especially the datapatch step, where SQL code modification needs to be done with no locking in place.
  • It can help with forensics and investigation activities, limiting accounts at the pluggable database level from performing any data modification while analysis and investigation are in progress.

How To List/Find the PATH_PREFIX Parameter Value in a Pluggable Database

PATH_PREFIX is a parameter used while a pluggable database is being created, to ensure user-level directories are created under a certain pre-defined directory in the operating system. The parameter is optional and can only be specified during database creation.

How do you find the value of PATH_PREFIX for a pluggable database? You can find it using the CDB_PROPERTIES view:

sqlplus / as sysdba
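A sketch of the lookup query; PATH_PREFIX is the property name recorded in CDB_PROPERTIES:

```sql
-- show the PATH_PREFIX value for each container that has one set
SELECT con_id, property_name, property_value
FROM   cdb_properties
WHERE  property_name = 'PATH_PREFIX';
```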

The parameter was introduced in Oracle 12cR2 as a security enhancement in the CDB architecture.

sources:

https://docs.oracle.com/en/database/oracle/oracle-database/18/multi/overview-of-pdb-creation.html#GUID-7AC4C851-4E5E-4622-9B29-31D5A1C98426

extjob Oracle database binary permission and ownership after applying an Oracle database Release Update (RU)

Oracle has just released Oracle Critical Patch Update for July 2023 : https://www.oracle.com/security-alerts/cpujul2023.html

The interesting part: when you apply the RU (Release Update), implicit re-linking of the database binaries happens, which impacts the extjob binary executable's ownership and permissions.

So, after patching it is highly recommended that you run the root.sh script again (either through sudo or by asking your OS admin to run it).

After running root.sh, the executable is once again owned by root with 4750 permissions.

I hope this helps.

How To Apply Data Pump Recommended Proactive Patches

Data Pump bundle patches are special patches for the Data Pump component of the Oracle database and can be applied only “AFTER” the main database patch; the RU (Release Update) patch needs to already be applied on the Oracle DB system.

It is important to mention that this patch is considered a “non-binary online patch,” which means you can apply it while the database is up and running. However, you need to make sure no Data Pump jobs are currently running while applying the patch.

The benefits of applying this patch are many: for example, bug fixes for the Data Pump component and faster Data Pump operations.

First, check that there are no prerequisite conflicts:

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./

Then, go to the directory where the datapump patch is located:

cd /var/DATAPUMP_BUNDLE_PATCH_19.19.0.0.0/35261302

After that, apply the patch through the OPatch utility (use the latest version):

[oracle@oracle-19c]$ $ORACLE_HOME/OPatch/opatch apply
Oracle Interim Patch Installer version 12.2.0.1.37
Copyright (c) 2023, Oracle Corporation. All rights reserved.

Oracle Home : /opt/oracle/product/19c/dbhome_1
Central Inventory : /opt/oracle/oraInventory
from : /opt/oracle/product/19c/dbhome_1/oraInst.loc
OPatch version : 12.2.0.1.37
OUI version : 12.2.0.7.0
Log file location : /opt/oracle/product/19c/dbhome_1/cfgtoollogs/opatch/opatch2023-04-20_15-38-36PM_1.log

Verifying environment and performing prerequisite checks…
OPatch continues with these patches: 35261302

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.
Backing up files…
Applying interim patch '35261302' to OH '/opt/oracle/product/19c/dbhome_1'

Patching component oracle.rdbms, 19.0.0.0.0…

Patching component oracle.rdbms.dbscripts, 19.0.0.0.0…
Patch 35261302 successfully applied.
Log file location: /opt/oracle/product/19c/dbhome_1/cfgtoollogs/opatch/opatch2023-04-20_15-38-36PM_1.log

OPatch succeeded.

Then, go to the OPatch directory and run the datapatch Perl script:

cd $ORACLE_HOME/OPatch

./datapatch -verbose

Log file for this invocation: /opt/oracle/cfgtoollogs/sqlpatch/sqlpatch_4829_2023_04_21_11_13_37/sqlpatch_invocation.log

Connecting to database…OK
Gathering database info…done

Note: Datapatch will only apply or rollback SQL fixes for PDBs
that are in an open state, no patches will be applied to closed PDBs.
Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
(Doc ID 1585822.1)

Bootstrapping registry and package to current versions…done
Error: prereq checks failed!
verify_queryable_inventory returned ORA-20001: Latest xml inventory is not loaded into table
Prereq check failed, exiting without installing any patches.

Please refer to MOS Note 1609718.1 and/or the invocation log
/opt/oracle/cfgtoollogs/sqlpatch/sqlpatch_4829_2023_04_21_11_13_37/sqlpatch_invocation.log

And that's it; it is an online operation, without the need for a maintenance window (a database services shutdown activity).