Applying PSU results in ORA-22308: operation not allowed on evolved type

While applying an Oracle PSU patch on a 12cR1 database, the view dba_registry_sqlpatch showed the STATUS as "WITH ERRORS".
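You can see the patch status and the log file location with a quick query against the view (a sketch; the logfile column is present in the 12c version of the view):

SQL> SELECT patch_id, action, status, logfile FROM dba_registry_sqlpatch ORDER BY action_time;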

 

To fix this, you need to check the patch log file (you can find its location from dba_registry_sqlpatch or by using the DBMS_QOPATCH package). In my case, the error referred to a Database Vault type:

 

create or replace type dvsys.ku$_dv_realm_member_t as object
*
ERROR at line 1:
ORA-22308: operation not allowed on evolved type

 

To fix this, drop the evolved type with the VALIDATE option:

 

SQL> drop type dvsys.ku$_dv_realm_member_t validate;

 

Then, re-run datapatch:

 

cd $ORACLE_HOME/OPatch

 

./datapatch -verbose

Managing RMAN Configuration using DBMS_BACKUP_RESTORE Package

In this blog article, I will illustrate how to change the RMAN configuration through SQL using the DBMS_BACKUP_RESTORE package. This package is not well known and is undocumented.

 

Traditionally, we use the RMAN command-line interface as shown:

rman target /

RMAN> show all;


Let us now explore changing RMAN configuration through the package:

SQL> VARIABLE rman_config NUMBER;

SQL> EXECUTE :rman_config := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('BACKUP OPTIMIZATION','ON');


Checking through the RMAN interface utility confirms that the configuration change is reflected successfully.
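You can also verify it from SQL itself; the v$rman_configuration view lists the settings that have been changed from their defaults:

SQL> SELECT name, value FROM v$rman_configuration;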


 

What are the benefits?

This helps with the provisioning and automation of new databases, so they all have an identical RMAN configuration. It also lets you propagate RMAN configuration changes across your entire landscape.

Oracle PSU Error UtilSession failed: Lock file left by a different patch, OPatch will not try re-using the lock file

PROBLEM DESCRIPTION:

For some reason my session was kicked out and the patch process didn't complete. Then, when I tried to re-apply the patch, the following error was thrown:

UtilSession failed: Lock file left by a different patch, OPatch will not try re-using the lock file. 

OPatch failed with error code 73

SOLUTION:
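First, make sure no other OPatch session is actually still running (the lock exists to prevent concurrent patching), for example:

ps -ef | grep -i opatch

If nothing is running, remove the stale lock as shown below.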

Remove the patch lock file using the following UNIX command:

rm $ORACLE_HOME/.patch_storage/patch_locked

Then re-apply the patch.

Applying PSU results in "Following executables are active"

While patching an Oracle 12c database on a Linux environment, I faced the following error when applying the patch using the OPatch utility (although the database was shut down):

 

Verifying environment and performing prerequisite checks...

Prerequisite check "CheckActiveFilesAndExecutables" failed.

The details are:

Following executables are active :

/oracl/db11/product/12.1.0.2/lib/libclntsh.so.12.1

UtilSession failed: Prerequisite check "CheckActiveFilesAndExecutables" failed.

 

SOLUTION:

Find the process that is holding the library using the fuser command:

fuser -u /oracl/db11/product/12.1.0.2/lib/libclntsh.so.12.1

Then kill it using the kill -9 command.
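Alternatively, on Linux, fuser itself can terminate the holders: its -k option kills every process using the file (SIGKILL by default), so use it with care:

fuser -k /oracl/db11/product/12.1.0.2/lib/libclntsh.so.12.1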

SET DEFINE OFF and '&' in your SQL code

Have you ever executed a SQL script sent to you by a developer and had the following pop-up window appear?

 


To avoid this, just add the following statement at the start of the script:

set define off


When you have '&' inside a SQL script, you need to specify "set define off" at the beginning of the code. It turns off substitution variables.
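A minimal illustration (the products table here is made up for the example). Without SET DEFINE OFF, SQL*Plus treats &Jerry in the literal as a substitution variable and prompts for a value; with it, the string is inserted as-is:

set define off

INSERT INTO products (name) VALUES ('Tom & Jerry Mugs');

commit;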

 

 

 

Oracle 12cR2 In-Memory for JSON Columns

One of the exciting new features of Oracle 12cR2 is the ability to use the In-Memory technology with JSON data.

Brief background:

  • JSON stands for JavaScript Object Notation; it is a language-independent data format.

 

  • In the Oracle database, JSON data is stored in VARCHAR2, CLOB, or BLOB columns. These columns carry an "is json" check constraint.

 

Let us explore the setup:

 

The following prerequisites should be met before using In-Memory with JSON:

The COMPATIBLE parameter of the database should be 12.2.0.0:

SQL> show parameter compatible

The parameter max_string_size must be set to "extended".


If it is set to "standard", as it normally would be in your database, change it to "extended" by following these steps:

-- Shut down the database

SQL> shutdown immediate;

-- Start the database in upgrade mode

SQL> startup upgrade

-- Change the parameter setting

SQL> alter system set MAX_STRING_SIZE = extended scope=spfile;

SQL> @?/rdbms/admin/utl32k.sql

-- Restart the database in normal mode:

SQL> shutdown immediate;

SQL> startup;

SQL> @?/rdbms/admin/utlrp.sql

**** Remark: The reason you need to change to the extended data type is that a JSON column created as VARCHAR2 can then store documents of up to 32,767 bytes.

Also ensure the following initialization parameters are set:

inmemory_expressions_usage

inmemory_virtual_columns

 

And, of course, ensure that the In-Memory area (column store) is enabled.
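For example (the 1G below is just an illustrative size; inmemory_size is a static parameter, so a restart is required):

SQL> alter system set inmemory_size = 1G scope=spfile;

SQL> show parameter inmemory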

 

Now, we can proceed in creating a dummy table with JSON for testing:

CREATE TABLE dummy_json

(id          VARCHAR2 (32) NOT NULL PRIMARY KEY,

date_loaded TIMESTAMP (6) WITH TIME ZONE,

json_doc VARCHAR2 (32767)

CONSTRAINT ensure_json CHECK (json_doc IS JSON));

 


Then I will insert a few records, for example:

 

INSERT INTO DUMMY_JSON

VALUES (

1,

SYSTIMESTAMP,

'{

"color": "black",

"rgb": [0,0,0],

"hex": "#000000"}');

 

Remark: while inserting records, if you face this error:

ORA-02290: check constraint (HR.ENSURE_JSON) violated

it means the JSON data was not well formed, so review your JSON document.

Now enable the In-Memory attribute on the table:

ALTER TABLE dummy_json INMEMORY;

 


Check it by querying user_tables:

select TABLE_NAME, INMEMORY from user_tables where INMEMORY='ENABLED';
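Note that with the default priority (NONE), the table is populated into the In-Memory area only after its first full scan; population can then be confirmed through v$im_segments:

SQL> SELECT segment_name, populate_status FROM v$im_segments;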


-- If you are upgrading a database from 12cR1 to 12cR2 and have many tables with JSON columns and would like to utilize the In-Memory capability, first ensure the prerequisites stated above are met, and then execute the following script:

 

SQL> @?/rdbms/admin/utlimcjson.sql

This script upgrades all existing tables with JSON columns so they can be populated into the In-Memory area whenever required.

 

Finally, having tables with JSON columns in the In-Memory area will improve performance, especially when using the SQL/JSON functions json_table, json_query, and json_value.
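For instance, a query like the following against the sample table built above can now be served from the In-Memory area (the JSON paths match the document inserted earlier):

SQL> SELECT json_value(json_doc, '$.hex') AS hex FROM dummy_json WHERE json_value(json_doc, '$.color') = 'black';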

 

 

 

Flashback Pluggable Database Level in Oracle Multitenant Architecture 12cR2 (12.2.0.1)

First, the case of UNDO tablespace:

In Oracle 12cR1 the undo tablespace was common, shared by the root and all pluggable databases, so features like flashback were only feasible at the container (CDB) level.

In 12cR2 this changes, as you can create a local undo tablespace for each pluggable database.

By executing the following query:

SQL> SELECT con_id, tablespace_name

FROM   cdb_tablespaces

WHERE  tablespace_name LIKE 'UNDO%'

ORDER BY con_id;

 

Only the root container has an UNDO tablespace.

 


An existing pluggable database with no local undo tablespace


To enable Local Undo Tablespace:

Access the database as SYSDBA in the root container CDB$ROOT:

SQL> shutdown immediate;

SQL> startup upgrade;

SQL> alter database local undo on;

SQL> shutdown immediate;

SQL> startup;

To check that it succeeded, execute the following query:

SQL> set pages 1000

SQL> set lines 300

SQL> SELECT property_name, property_value

FROM   database_properties

WHERE  property_name = 'LOCAL_UNDO_ENABLED';



To Enable Flashback:

 

Configure the fast recovery area:
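Note that db_recovery_file_dest_size must be set before the destination itself; the 10G below is just an illustrative size:

SQL> alter system set db_recovery_file_dest_size = 10G scope=both;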

 

SQL> alter system set db_recovery_file_dest = 'C:\app\emodb\oradata\oraclehero\fast_recover_area\' scope=both;

 

SQL> archive log list;


 

SQL>  alter system set db_flashback_retention_target = 2880 scope = both;

 

** The retention specified is 2,880 minutes, i.e. 2 days.

Second, enable flashback at the CDB$ROOT level:

SQL> alter database flashback on;

 

If you try to enable flashback at the pluggable database level, you will receive the following error:

 

SQL> alter session set container=PDB_EXPLORE;

 

SQL> alter database flashback on;

alter database flashback on

*

ERROR at line 1:

ORA-03001: unimplemented feature

Important remark: enable force logging at the pluggable database level:

SQL> select pdb_name, force_logging, force_nologging from cdb_pdbs where pdb_name='PDB_EXPLORE';


To enable it:

 

SQL> alter session set container=PDB_EXPLORE;

SQL> alter pluggable database pdb_explore enable force logging;

 


Also, check that the tablespaces are configured for flashback in the pluggable database PDB_EXPLORE:

select name, flashback_on from v$tablespace;


Now, to explore recovery, we can reference the flashback target in different ways:

  • SCN
  • A specific date
  • A restore point

 

SQL> Select dbms_flashback.get_system_change_number from dual;

 

GET_SYSTEM_CHANGE_NUMBER

------------------------

13524572

 

SQL> select to_char (sysdate, 'mm-dd-yyyy hh24:mi:ss') cdate from dual;

 

CDATE

---------------------

05-12-2017 12:53:35

 

SQL> create restore point test_flash guarantee flashback database;
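You can confirm the restore point, its SCN, and whether it is guaranteed with a quick query:

SQL> SELECT name, scn, guarantee_flashback_database FROM v$restore_point;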

 

 

Now to explore it: a table called CITIES currently holds this data:

 


*** Insert new records into the table:

INSERT INTO "EXP_SCHEMA"."CITIES" (CITY_NAME, COUNTRY_NAME, TOURISM_LEVEL) VALUES ('BARCELONA', 'SPAIN', '8');

INSERT INTO "EXP_SCHEMA"."CITIES" (CITY_NAME, COUNTRY_NAME, TOURISM_LEVEL) VALUES ('BERLIN', 'GERMANY', '7');

INSERT INTO "EXP_SCHEMA"."CITIES" (CITY_NAME, COUNTRY_NAME, TOURISM_LEVEL) VALUES ('TOKYO', 'JAPAN', '6');

Commit;

 



 

SQL> alter pluggable database pdb_explore close;

 

SQL> flashback pluggable database PDB_EXPLORE to restore point test_flash;

 

 

SQL> alter pluggable database pdb_explore open resetlogs;

 

 

Now check the table data again; it is back as it was!

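For completeness, the same close/flashback/open-resetlogs flow also accepts an SCN or a timestamp instead of a restore point; a sketch using the SCN captured earlier:

SQL> alter pluggable database pdb_explore close;

SQL> flashback pluggable database pdb_explore to scn 13524572;

SQL> alter pluggable database pdb_explore open resetlogs;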

 

TDE (Transparent Data Encryption) Tablespace Live Conversion in Oracle 12cR2

One of the exciting new security features in Oracle 12cR2 (12.2.0.1) is the ability to encrypt your tablespaces "online". In the past, implementing TDE required creating a new encrypted tablespace and then moving the data from the original tablespace to the encrypted one (export/import Data Pump operations), which meant downtime for your systems unless you used Active Data Guard, which minimizes the downtime of a TDE conversion.

Let us now explore and simulate:

Current environment is: Oracle 12cR2 (Container Database Architecture) in Windows OS

Current ORACLE_HOME=C:\app\emodb\product\12.2.0\dbhome_2

Create a directory for the key store:

C:\app\emodb\product\12.2.0\dbhome_2\key_store

Open the sqlnet.ora file and add the lines below:

ENCRYPTION_WALLET_LOCATION=

(SOURCE=

(METHOD=file)

(METHOD_DATA=

(DIRECTORY=C:\app\emodb\product\12.2.0\dbhome_2\key_store)))

Set the COMPATIBLE initialization parameter to 12.2.0.0:

SQL> ALTER SYSTEM SET COMPATIBLE = '12.2.0.0' SCOPE = SPFILE;

Then execute:

SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE 'C:\app\emodb\product\12.2.0\dbhome_2\key_store' IDENTIFIED BY super_mario$88;

 


An ewallet file will be created.

 


Open the key store by executing:

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY super_mario$88;

 


 

Now I will create the Master Key:

SQL> ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY IDENTIFIED BY super_mario$88 WITH BACKUP USING 'mario_bck7';

 


A backup of the keystore is generated (it's better to place it in a different directory).


I have now created and activated a master key in the root container. Next, I will create a local master key for the pluggable database:

 

SQL> alter session set container=PDBORCL;

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY super_mario$88;

SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY super_mario$88 WITH BACKUP;

Now the interesting part: the online tablespace conversion.

I will run the ALTER TABLESPACE command to encrypt an existing tablespace named "COMPANY_INFO_TS":

 

SQL> ALTER TABLESPACE COMPANY_INFO_TS ENCRYPTION ONLINE USING 'AES256' ENCRYPT FILE_NAME_CONVERT = ('COMPANY_INFO.DBF', 'COMPANY_INFO_ENC.DBF');


If you don't specify the encryption algorithm, the default is AES128.

A new encrypted data file is generated:

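You can confirm the result from the data dictionary as well; the ENCRYPTED flag in dba_tablespaces should now show YES for the converted tablespace:

SQL> SELECT tablespace_name, encrypted FROM dba_tablespaces;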

 

Of course, you need to make sure enough disk space is available before you start the conversion, as the conversion process writes a converted copy of the data files and therefore requires additional space.
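To estimate how much extra space the conversion will need, check the current size of the tablespace's data files, for example:

SQL> SELECT file_name, ROUND(bytes/1024/1024) AS size_mb FROM dba_data_files WHERE tablespace_name = 'COMPANY_INFO_TS';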

You can use DataWalker for Oracle to view and compare the data file blocks before and after encryption:

http://www.toolswatch.org/2014/03/new-tool-datawalker-for-oracle-v1-0-in-the-wild/

 

Useful queries:

** To check the wallet status:

SELECT * FROM v$encryption_wallet;

** To check the encryption keys created, one in the CDB and the other in the PDB:

SELECT con_id, key_id FROM v$encryption_keys;