Datapatch errors while patching an Oracle CDB environment

In a container database (CDB) environment I faced two errors, and I would like to share how I fixed them. As you know, after applying database patches to the binaries, the next step is to start up the database and then run the "datapatch" script.

ERROR NUMBER 1:

DBD::Oracle::st bind_param failed: ORA-01652: unable to extend temp segment by 128 in tablespace TEMP (DBD ERROR: OCILobWrite in dbd_rebind_ph_lob) [for Statement “BEGIN
INSERT INTO sys.dba_registry_sqlpatch_ru_info
(patch_id,
patch_uid,
patch_descriptor,
ru_version,
ru_build_description,
ru_build_timestamp,
patch_directory)
VALUES
(:patch_id,
:patch_uid,
:patch_descriptor,
:ru_version,
:ru_build_description,
TO_TIMESTAMP(:ru_build_timestamp, ‘YYMMDDHH24MISS’),
:patch_directory);
COMMIT;
END;” with ParamValues: :patch_descriptor=OCIXMLTypePtr=SCALAR(0x3b38118), :patch_directory=’PK………P.[{r
.l.h.&.|lgy..m.na.w)Y%Q|T5..3…9.Nn7s2.7.SsDq.’N..e…r{M.jvL:.wڍK`l.?Z-=u.|&Zdx;..vg{Z.?5rFcf.͋9i{8R.A..V..wV}..>..I~6=,; &c….ݳ<..waA…il>..y.tsp6??>WKi4.W.q…%Z.?…?.#g..&?.?s…G”…”.g”7xHm.G9A\uj.L9.
tXDqz.|.g.I4..w3′.d.249.N..”..]7.%Y.qy_.._.}w$.woi<../vUB.P..(Y…3zqF.J9j.(N+81.N.S)}…..6E+..Z TUTwZ 봣wZ TS4m.’+.g%#.Rc.4F+9!6$l.sG0.zd.”}ErJ(.1Y..EdAq.r.s.F.S A..f.P(8f..3..Q.sw.S.0QZ..k{L..|.:.0D9.B7.3.)Q……).H C..N.!…’, :patch_id=’31281355′, :patch_uid=’23688465′, :ru_build_description=’Release_Update’, :ru_build_timestamp=’200703031501′, :ru_version=’19.8.0.0.0′] at /opt/oracle/product/19c/dbhome_1/sqlpatch/sqlpatch.pm line 5538.

SOLUTION:

You need to increase the size of the TEMP tempfile in the PDB$SEED container:

sqlplus / as sysdba

SQL> alter session set container=PDB$SEED;

Session altered.

SQL> select FILE_NAME,TABLESPACE_NAME from dba_temp_files;

/opt/oracle/oradata/ORCLCDB/pdbseed/temp012020-05-30_19-35-09-522-PM.dbf
TEMP

SQL> alter database tempfile '/opt/oracle/oradata/ORCLCDB/pdbseed/temp012020-05-30_19-35-09-522-PM.dbf' resize 55M;

Database altered.
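Note: instead of (or before) a one-off resize, you can check the current tempfile size and whether AUTOEXTEND is enabled, and optionally turn it on. A minimal sketch, where the NEXT/MAXSIZE values are assumptions to adjust for your environment:

SQL> alter session set container=PDB$SEED;
SQL> select file_name, bytes/1024/1024 as size_mb, autoextensible from dba_temp_files;
SQL> alter database tempfile '/opt/oracle/oradata/ORCLCDB/pdbseed/temp012020-05-30_19-35-09-522-PM.dbf' autoextend on next 64M maxsize 1G;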

ERROR NUMBER 2:

Datapatch error:

Validating logfiles…
DBD::Oracle::st execute failed: ORA-01732: data manipulation operation not legal on this view (DBD ERROR: error possibly near <> indicator at char 7 in ‘UPDATE <>dba_registry_sqlpatch
SET status = :p1, action_time = SYSTIMESTAMP
WHERE patch_id = :p2
AND patch_uid = :p3
AND action = UPPER(:p4)
AND action_time = (SELECT MAX(action_time)
FROM dba_registry_sqlpatch
WHERE patch_id = :p5
AND patch_uid = :p6
AND action = UPPER(:p7))’) [for Statement “UPDATE dba_registry_sqlpatch
SET status = ?, action_time = SYSTIMESTAMP
WHERE patch_id = ?
AND patch_uid = ?
AND action = UPPER(?)
AND action_time = (SELECT MAX(action_time)
FROM dba_registry_sqlpatch
WHERE patch_id = ?

SOLUTION:

sqlplus / as sysdba

ALTER SYSTEM SET "_simple_view_merging"=TRUE;

Important remark: underscore (hidden) parameters shouldn't be manipulated unless advised by an Oracle Support engineer, so the proposed solution is a workaround: set the parameter, re-execute datapatch, and then reset the parameter. A sketch of the full sequence follows.
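A minimal sketch of that sequence, assuming an spfile is in use and the parameter was not set before (so RESET simply removes it again):

SQL> alter system set "_simple_view_merging"=TRUE;
SQL> exit
$ $ORACLE_HOME/OPatch/datapatch -verbose
$ sqlplus / as sysdba
SQL> alter system reset "_simple_view_merging";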

Oracle Database Lost Write Detection & Recovery Options

  • What is a lost write?

A lost write occurs when an I/O subsystem acknowledges the completion of a block write even though the write never actually took place. Another situation is when a former image of the block overwrites the current block image.

  • Error Symptoms:

The ORA-600 [3020] stuck recovery error can occur on the standby database for several reasons, including a lost write; ORA-752 may also be reported.

IMPORTANT REMARK: Do NOT repair the standby by restoring a backup taken from the primary, as that will corrupt the standby as well!

  • Possible ACTIONS:
  1. First, immediately open a case with Oracle Support!
  • An ORA-752 error definitively identifies a lost write on the Primary. Consider failing over to the Standby immediately if data integrity is critical and some data loss is acceptable.
  • Inspect the alert log file for more information (take a backup of the alert log file to an offsite location).
  • Inspect the trace files generated using the view:

SQL> select originating_timestamp, detailed_location, message_level, message_text, problem_key from V$DIAG_ALERT_EXT where message_level=1 and message_text like '%ORA-00600%' order by originating_timestamp desc;

  • Inspect OS logs:

cd /var/log

ls -l message*

  • Perform multiple dumps so Oracle support can review:

Dump of the controlfiles

SQL> alter session set events 'immediate trace name controlf level xx';

Dump of the datafile headers:

SQL> alter session set events 'immediate trace name file_hdrs level 10';

Dump of the redo log headers:

SQL> alter session set events 'immediate trace name redohdr level 10';

  • The alert log message will provide the datafile number along with the corresponding block number (replace &file_number, &block_number values):

SQL> spool /exp/oraxx1/corrupted_file_info.txt

SQL> Select * from DBA_EXTENTS where FILE_ID=&file_number and  &block_number BETWEEN BLOCK_ID and BLOCK_ID+BLOCKS-1;

SQL> spool off;

SQL> spool /exp/oraxx1/object_number.txt

SQL> Select * from DBA_OBJECTS    where DATA_OBJECT_ID = &object_number;

SQL> spool off;

  • If feasible, drop and recreate the affected objects on the PRIMARY database (after careful analysis):

Once the objects have been recreated, use the following procedure to skip the corrupted blocks on the STANDBY database:

Temporarily disable lost write protection on the standby:

SQL> ALTER SYSTEM SET DB_LOST_WRITE_PROTECT = NONE;

Allow recovery to proceed in spite of block corruptions by running the RECOVER command with the ALLOW n CORRUPTION clause, where n is the number of allowable corrupt blocks.

SQL> alter database recover automatic standby database allow 1 corruption;

Once the alert log indicates the blocks have been marked corrupt, restart managed recovery.

SQL> alter database recover cancel;

 SQL> alter database recover managed standby database using current logfile disconnect;

  • If the affected objects cannot be recreated, then activate the standby database. By activating the standby database you will experience data loss, but data integrity will be ensured.

Issue the following SQL statement on the standby database to convert it to a primary:

SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;

Then, immediately back up the database (all standby database Linux host names are included in the NetBackup backup policy).

Back up the database against the standby's recovery catalog; a minimal RMAN sketch follows.
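A minimal RMAN sketch of that backup; the catalog connect string rco@rcatdb is an assumption, adjust it to your environment:

$ rman target / catalog rco@rcatdb
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;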

  • Detecting & Protecting Against Lost Writes

Set DB_LOST_WRITE_PROTECT to TYPICAL on both the primary and standby databases (see the sketch below). If your database is 18c or later, you can also utilize the new "shadow tablespace" (shadow lost write protection) feature.
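A minimal sketch of the parameter change (run it on both the primary and the standby; assumes an spfile is in use):

SQL> ALTER SYSTEM SET DB_LOST_WRITE_PROTECT = TYPICAL SCOPE=BOTH;
SQL> show parameter db_lost_write_protect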

*** Shadow tablespace setup:

Ensure that the COMPATIBLE parameter is set to at least 18.0:

SQL> show parameter compatible

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string      18.0.0

SQL> create bigfile tablespace SHADOW datafile '/orad04/oradbp02/shadow.dbf' size 120M AUTOEXTEND ON lost write protection;

SQL> select tablespace_name from dba_tablespaces where CONTENTS='LOST WRITE PROTECTION';

SQL> select tablespace_name,status,bigfile,contents,logging,allocation_type,encrypted,lost_write_protect,chunk_tablespace from dba_tablespaces where tablespace_name='SHADOW';

// to enable LOST WRITE PROTECTION:

SQL> alter database enable lost write protection;

// to disable LOST WRITE PROTECTION:

SQL> ALTER DATABASE DISABLE LOST WRITE PROTECTION;

SQL> select * from new_lost_write_datafiles$;

NOTE: you can also enable shadow lost write protection at the tablespace or data file level (see the sketch below).
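For example, a hedged sketch at the tablespace and data file level; the USERS tablespace and the file path below are assumptions:

SQL> ALTER TABLESPACE users ENABLE LOST WRITE PROTECTION;
SQL> ALTER DATABASE DATAFILE '/orad04/oradbp02/users01.dbf' ENABLE LOST WRITE PROTECTION;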

Oracle Database In-Memory Base Level Feature in Oracle 19c

Database In-Memory technology was introduced in earlier Oracle releases, but in Oracle 19c things changed! The feature is now free with a limited size of 16 GB. You can explore the "Base Level" feature starting with Oracle 19.8, i.e. once the July 2020 RU (Release Update) is applied.

The new Base Level feature supports all Database In-Memory features except for the following:

  • Automatic In-Memory  (AIM)
  • Compression levels other than MEMCOMPRESS FOR QUERY LOW
  • Excluded columns (all columns of a table are populated)
  • The CELLMEMORY feature on Exadata

To enable the In-Memory "Base Level" feature:

Go to $ORACLE_HOME/dbs, edit your init.ora file by adding the following two parameters, and restart your database (if you use an spfile instead, see the sketch after the parameters):

INMEMORY_FORCE=BASE_LEVEL

INMEMORY_SIZE=7G
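If your instance uses an spfile rather than a plain init.ora, a sketch of the equivalent change (both parameters only take effect after the restart):

SQL> ALTER SYSTEM SET INMEMORY_FORCE=BASE_LEVEL SCOPE=SPFILE;
SQL> ALTER SYSTEM SET INMEMORY_SIZE=7G SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP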

SQL> create table dba.in_mem as select * from dba_objects;

Table created.

SQL> explain plan SET STATEMENT_ID='EMO1' for select count(*) from dba.in_mem;

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE','EMO1','ALL'));

PLAN_TABLE_OUTPUT

——————————————————————————–

Plan hash value: 3285478656

———————————————————————

| Id | Operation | Name | Rows | Cost (%CPU)| Time |

———————————————————————

| 0 | SELECT STATEMENT | | 1 | 115 (0)| 00:00:01 |

| 1 | SORT AGGREGATE | | 1 | | |

| 2 | TABLE ACCESS FULL| IN_MEM | 25010 | 115 (0)| 00:00:01 |

———————————————————————

Query Block Name / Object Alias (identified by operation id):

PLAN_TABLE_OUTPUT

——————————————————————————–

————————————————————-

1 – SEL$1

2 – SEL$1 / IN_MEM@SEL$1

Column Projection Information (identified by operation id):

———————————————————–

1 – (#keys=0) COUNT(*)[22]

2 – (rowset=1019)

21 rows selected.

*** Now I will load the table into memory:

SQL> ALTER TABLE dba.in_mem INMEMORY PRIORITY HIGH;

Table altered.

-- To check which objects are populated in memory, query the dictionary view V$IM_SEGMENTS:

SQL> select OWNER,SEGMENT_NAME,SEGMENT_TYPE from v$IM_SEGMENTS ;

OWNER      SEGMENT_NAME      SEGMENT_TYPE
---------- ----------------- ------------------
dba        IN_MEM            TABLE
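To confirm that population has completed, a hedged sketch querying the population status columns of V$IM_SEGMENTS (column names assumed from the 19c dictionary):

SQL> select segment_name, populate_status, bytes_not_populated from v$im_segments;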

SQL> explain plan SET STATEMENT_ID='EMO2' for select count(*) from dba.in_mem;

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE','EMO2','ALL'));

PLAN_TABLE_OUTPUT

————————————————————————————————————————————————————————————————————————————————————————————————————

Plan hash value: 3285478656

——————————————————————————

| Id | Operation | Name | Rows | Cost (%CPU)| Time |

——————————————————————————

| 0 | SELECT STATEMENT | | 1 | 5 (0)| 00:00:01 |

| 1 | SORT AGGREGATE | | 1 | | |

| 2 | TABLE ACCESS INMEMORY FULL| IN_MEM | 25010 | 5 (0)| 00:00:01 |

——————————————————————————

Query Block Name / Object Alias (identified by operation id):

————————————————————-

1 – SEL$1

2 – SEL$1 / IN_MEM@SEL$1

Column Projection Information (identified by operation id):

———————————————————–

1 – (#keys=0) COUNT(*)[22]

2 – (rowset=1019)

21 rows selected.

 

As shown above, the second explain plan now shows a TABLE ACCESS INMEMORY FULL scan against the table stored in memory, at a much lower cost.

** Resources/references:

https://blogs.oracle.com/in-memory/base-level-198-ru

https://docs.oracle.com/en/database/oracle/oracle-database/20/ftnew/database-memory-base-level.html

https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/INMEMORY_FORCE.html#GUID-1CAEDEBC-AE38-428D-B07E-6718A7225548

https://www.oracle.com/a/tech/docs/twp-oracle-database-in-memory-19c.pdf

Datapatch error: Error at line 2082: script md/admin/sdottsmeta.sql

I am sharing this blog post to cover an issue I faced while invoking "datapatch" after rolling back a patch:

logfile: /ora/dbt01/cfgtoollogs/sqlpatch/29789911/23573028/29789911_rollback_dbt01_2020Jul16_06_56_44.log (errors)
-> Error at line 2082: script md/admin/sdottsmeta.sql
– SP2-0310: unable to open file “/ora/dbt01/product/19.3/sqlpatch/29789911/23573028/&sql_script.sql”
Patch 31281355 apply: WITH ERRORS
logfile: &full_logfile (errors)

Can’t use string (“Could not open logfile &full_log”…) as a HASH ref while “strict refs” in use at /ora/dbt01/product/19.3/sqlpatch/sqlpatch.pm line 6175, <LOGFILE> line 2177.
Please refer to MOS Note 1609718.1 and/or the invocation log
/ora/dbt01/cfgtoollogs/sqlpatch/sqlpatch_39541_2020_07_16_06_55_39/sqlpatch_invocation.log
for information on how to resolve the above errors.

The workaround fix:

Add "set define on" at the end of the script $ORACLE_HOME/md/admin/prvtrdfctx.plb, then re-run datapatch (a minimal sketch follows).
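A minimal sketch of applying the workaround and re-running datapatch; it simply appends the directive to the end of the script as described above:

$ echo "set define on" >> $ORACLE_HOME/md/admin/prvtrdfctx.plb
$ $ORACLE_HOME/OPatch/datapatch -verbose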


ORA-55940: An error occurred during execution of ktliIngestExternData

This will be a short blog post about an issue I faced the other day while trying to query the unified audit trail view:

SQL> select * from unified_audit_trail;

ORA-55940: An error occurred during execution of ktliIngestExternData

This error is related to the unified audit spillover files located under $ORACLE_BASE/audit/$ORACLE_SID.
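To get a feel for how large the spillover backlog is before choosing an option, a quick check of that directory (a hedged sketch; the exact file names vary):

$ ls -lh $ORACLE_BASE/audit/$ORACLE_SID/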

So you have two options:

OPTION 1: load the spillover files into the audit trail by executing:

exec DBMS_AUDIT_MGMT.LOAD_UNIFIED_AUDIT_FILES;

OPTION 2: remove the spillover files if you think this will not impact your security (use the rm Linux OS command).

Restart your database and try querying UNIFIED_AUDIT_TRAIL again.

 

Oracle Database Transportable Export/Import for Pluggable Database (PDB) Migration

There are many ways to migrate an Oracle database; here I am exploring the transportable export/import method, which is fast and efficient.

I will be migrating a pluggable database from Windows to Linux, both of which use the "little" endian format, so no endian conversion will take place:

SELECT d.PLATFORM_NAME, ENDIAN_FORMAT

FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d

WHERE tp.PLATFORM_ID = d.PLATFORM_ID;

[screenshot: query output showing both source and destination platforms with little endian format]

Verify that the user/application tablespaces are self-contained:

The following check ensures, from both the physical and logical side, that the tablespace set has no dependencies on objects stored in the SYSTEM or SYSAUX tablespaces.

SQL> alter session set container=PDB_TEST;

SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('FAST_START_TBS,TS_NE,TS_TESTING,TABLESPACE_DUMMY,TS_SCHEMA,TS_SPECIAL_SCHEMA', TRUE);

SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;

[screenshot: TRANSPORT_SET_VIOLATIONS query output]

As shown above, the query reports violations. For example, the index REG_ID_PK under the HR schema is stored in the SYSAUX tablespace, so I am going to move this index to the USERS tablespace.

SQL> ALTER INDEX HR.REG_ID_PK REBUILD TABLESPACE USERS;

To repeat the same process for the other indexes, you can use the following wrapper SQL:

SQL> select 'alter index '||OWNER||'.'||INDEX_NAME||' REBUILD TABLESPACE USERS;' from dba_indexes where OWNER='HR' and tablespace_name='SYSAUX';

After successfully finishing the cleanup proceed to the next step.

Create a directory in the source database:

SQL> CREATE OR REPLACE DIRECTORY export_dir AS 'C:\18c_vm\export_dir';

SQL> grant read,write on directory export_dir to system;

Set the source tablespaces to read-only mode; this can be done by executing the following wrapper statement:

 

SQL> select 'alter tablespace ' || tablespace_name || ' read only;' from dba_tablespaces where tablespace_name not in ('SYS','SYSTEM','UNDOTBS1','TEMP','SYSAUX');

expdp system/XXXX@//localhost:1521/pdb_origin full=y transportable=always directory=export_dir dumpfile=pdb_tts.dmp metrics=y exclude=statistics compression=all logfile=pdb_tts_export.log

After that, copy the dump file and the data files to the destination database server.

All of the following steps are performed on the destination (Linux) server.

Create a directory for the destination pluggable database:

mkdir /opt/oracle/oradata/ORCLCDB/PDB_TEST

Add the following PDB entry to the tnsnames.ora file:

PDB_TEST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 0.0.0.0)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PDB_TEST)
    )
  )

 

Create the pluggable database on the destination database server:

CREATE PLUGGABLE DATABASE "PDB_TEST" ADMIN USER "admin" IDENTIFIED BY "XXXX"
  FILE_NAME_CONVERT=(
    '/opt/oracle/oradata/ORCLCDB/pdbseed/system01.dbf', '/opt/oracle/oradata/ORCLCDB/PDB_TEST/SYSTEM01.DBF',
    '/opt/oracle/oradata/ORCLCDB/pdbseed/sysaux01.dbf', '/opt/oracle/oradata/ORCLCDB/PDB_TEST/SYSAUX01.DBF',
    '/opt/oracle/oradata/ORCLCDB/pdbseed/undotbs01.dbf', '/opt/oracle/oradata/ORCLCDB/PDB_TEST/UNDOTBS01.DBF',
    '/opt/oracle/oradata/ORCLCDB/pdbseed/temp012019-02-04_19-34-58-889-PM.dbf', '/opt/oracle/oradata/ORCLCDB/PDB_TEST/TEMPD.DBF'
  )
  STORAGE UNLIMITED TEMPFILE REUSE;

Within the pluggable database PDB_TEST, define the dump directory as follows:

sqlplus / as sysdba

SQL> alter session set container=PDB_TEST;

SQL> alter database open;

SQL> CREATE DIRECTORY dump_dir AS '/opt/oracle/oradata/ORCLCDB/PDB_TEST/DUMP';

SQL> GRANT read, write on directory dump_dir to system;

I will now perform the import operation using transportable tablespaces:

impdp system/XXXX@pdb_test directory=dump_dir dumpfile=PDB_TTS.DMP logfile=full_tts_imp.log metrics=y transport_datafiles='/opt/oracle/oradata/ORCLCDB/PDB_TEST/DUMP/USERSD01.DBF'

 

Now, I will perform post-migration verification:

[screenshot: USERS tablespace present in the destination PDB]

As shown above, the USERS tablespace has been created.

[screenshot: HR schema objects and a select from HR.COUNTRIES in the destination PDB]

As shown above, the HR schema was created, and I was able to run a SELECT statement against the table HR.COUNTRIES!

Since this is a demo, I will put the tablespace in the "SOURCE" Windows database back into read/write mode:

SQL> alter tablespace USERS read write;

Final Notes:

Putting the source database user tablespaces into read-only mode means an outage, as the system/application is expected to perform both read and write operations. If the amount of data to transport is huge, it is important to reduce the cut-over time by taking RMAN incremental backups and switching the tablespaces to read-only only for the last, final incremental backup; this minimizes downtime.

AWR data is not moved by transportable tablespaces; to move it, go to the source database instance and perform the following:

SQL> @?/rdbms/admin/awrextr.sql

Enter value for dbid: 2331138178

Enter value for num_days: 7

Enter value for begin_snap: 355

Enter value for end_snap: 382

Enter value for directory_name: DUMP_DIR

 

[screenshot: awrextr.sql export output]

After that I will copy the dump file to the destination database server:

cp AWRDAT_355_382.DMP /opt/oracle/oradata/ORCLCDB/PDB_TEST/DUMP

[screenshots: awrload.sql import steps on the destination]

AWR data will be transferred from the staging schema AWR_STAGE to the AWR tables in the SYS schema.
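For reference, the screenshots above correspond to running awrload.sql on the destination; a minimal sketch of that run, where the prompt values are assumptions matching the file copied above:

SQL> @?/rdbms/admin/awrload.sql
Enter value for directory_name: DUMP_DIR
Enter value for file_name: AWRDAT_355_382
Enter value for schema_name: AWR_STAGE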


Oracle 19c – Oracle Spatial Bug ORA-13199

I have recently upgraded a database to Oracle 19c with the latest patches applied (April 2020 RU, 19.7.0.0.200414). As you know by now, Oracle 19c is the long-term release.

After editing and committing a SQL transaction on an Oracle Spatial table using the ESRI ArcGIS Desktop application, the following errors were thrown to the end user:

ORA-03114: not connected to ORACLE

ORA-00603: Oracle server session terminated

ORA-00600: internal error code, arguments: [6006], [1], []

ORA-13199: Spatial index commit failure

Process ID: 90662
Session ID: 392 serial number: 27690

Solution:

This is a bug and you will need to download and apply the patch from Oracle Support.

Patch 29789911: ORA-13199: SPATIAL COMMIT FAILURE

*** Updated August 6, 2020: Patch 29789911 is already included in the 19.8 RU, so there is no need to apply the one-off patch; just apply the latest RU.


Oracle Database Product Life Cycle Support: 19c, 18c, 12cR2 Long Term & Innovation Releases!

*** This post has been updated in JUNE 2020 *****

Oracle has just changed the product life cycle support model, especially for its database technology.

There are two types of releases now: Long Term Releases and Innovation Releases.

Long Term Release: ideal for use cases that benefit from less frequent upgrades to newer releases. This type of release offers the highest stability and comes with 5 years of Premier Support.

Innovation Release: a release between long term releases. This type of release provides new enhancements and capabilities, but has a shorter Premier Support period of 2 years. The idea is to deliver cutting-edge capabilities for rapidly evolving requirements.

So Oracle 20c will most probably be an "innovation" release!

Updated in June 2020, the following are the support end dates:

Oracle 12cR2 support will end 31st March 2022

Oracle 18c (now considered an innovation release!): 8th June 2021

Oracle 19c: 30th April 2024

[image: Oracle_Product_LifeCycle_Support_JUNE2020]


Oracle SYSAUX size is exponentially expanding after upgrade to Oracle 12cR2

After upgrading an Oracle database to 12cR2, the SYSAUX tablespace was noticed to be growing rapidly, and the table SYS.WRI$_ADV_OBJECTS was the largest segment!

[screenshot: SYSAUX tablespace space usage (SYSAUX_TBS)]

Why is that?

In Oracle 12.2, the new "Optimizer Statistics Advisor" feature (AUTO_STATS_ADVISOR_TASK) runs a huge number of times, causing the SYSAUX tablespace to grow (a query to confirm the largest SYSAUX segments follows).
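A hedged query sketch to confirm which SYSAUX segments are the largest before starting the cleanup:

SQL> select owner, segment_name, segment_type, round(bytes/1024/1024) as size_mb
     from dba_segments
     where tablespace_name = 'SYSAUX'
     order by bytes desc
     fetch first 10 rows only;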

SOLUTION:

Using the following query, I will calculate the number of rows in the table SYS.WRI$_ADV_OBJECTS:

SQL> SELECT COUNT(*) FROM WRI$_ADV_OBJECTS WHERE TASK_ID=(SELECT DISTINCT ID FROM WRI$_ADV_TASKS WHERE NAME='AUTO_STATS_ADVISOR_TASK');

  COUNT(*)

———-

  28251381

Migrate the rows that should be kept from the current table to a new table:

SQL> CREATE TABLE WRI$_ADV_OBJECTS_NEW AS SELECT * FROM WRI$_ADV_OBJECTS WHERE TASK_ID !=(SELECT DISTINCT ID FROM WRI$_ADV_TASKS WHERE NAME='AUTO_STATS_ADVISOR_TASK');

Table created.

Then truncate the original table:

SQL> TRUNCATE TABLE WRI$_ADV_OBJECTS;

Table truncated.

SQL> INSERT /*+ APPEND */ INTO WRI$_ADV_OBJECTS SELECT * FROM WRI$_ADV_OBJECTS_NEW;

20703 rows created.

SQL> commit;

Commit complete.

SQL> ALTER INDEX WRI$_ADV_OBJECTS_IDX_01 REBUILD;

Index altered.

SQL> ALTER INDEX WRI$_ADV_OBJECTS_PK REBUILD;

Index altered.

SQL> drop table WRI$_ADV_OBJECTS_NEW purge;

Table dropped.

Drop the statistics advisor task from the dictionary so it no longer executes:

DECLARE
v_tname VARCHAR2(32767);
BEGIN
v_tname := 'AUTO_STATS_ADVISOR_TASK';
DBMS_STATS.DROP_ADVISOR_TASK(v_tname);
END;
/

SQL> EXEC DBMS_STATS.INIT_PACKAGE();

PL/SQL procedure successfully completed.

SQL> DECLARE

filter1 CLOB;

BEGIN

filter1 := DBMS_STATS.CONFIGURE_ADVISOR_RULE_FILTER('AUTO_STATS_ADVISOR_TASK',NULL,NULL,'DISABLE');

END;

/

PL/SQL procedure successfully completed.

Upgrading from Oracle 12cR1 to 12cR2 error ORA-40362: invalid object SYS.ODCIPARTINFOLIST

After upgrading a non-CDB database from Oracle 12cR1 to 12cR2, the following errors were thrown in the post-upgrade logs, and the XDB component was not valid:

Oracle Database 12.2 Post-Upgrade Status Tool           02-24-2020 09:47:50

Component                               Current         Version  Elapsed Time

Name                                    Status          Number   HH:MM:SS

Oracle Server

   ORA-40362: invalid object SYS.ODCIPARTINFOLIST specified in the statement

Oracle Server                           INVALID      12.2.0.1.0  00:06:34

Oracle Text

    ORA-06550: line 4, column 22:
    PLS-00905: object CTXSYS.TEXTINDEXMETHODS is invalid
    ORA-06550: line 4, column 6:
    PL/SQL: Statement ignored
    ORA-06552: PL/SQL: Declaration ignored
    ORA-06553: PLS-905: object CTXSYS.CTX_CONTAINS is invalid
    (the ORA-06552 / ORA-06553 "PLS-905: object CTXSYS.CTX_CONTAINS is invalid" pair repeats many more times)

Solution:

sqlplus / as sysdba

SQL> GRANT EXECUTE ON ODCIPartInfoList TO PUBLIC;

SQL> spool xdbreload.log

SQL> set echo on;

SQL> shutdown immediate;

SQL> startup upgrade;

SQL> @?/rdbms/admin/xdbrelod.sql

SQL> shutdown immediate;

SQL> startup;

SQL> @?/rdbms/admin/utlrp.sql

SQL> spool off

*** Check the database components after that:

SQL> select comp_name,status from dba_registry;

If the component is still NOT valid, then perform a "drastic measure": remove the Oracle Text component and re-install it again!

To Remove Oracle Text:

sqlplus / as sysdba

SQL>@?/ctx/admin/catnoctx.sql
SQL> drop procedure sys.validate_context;
SQL>@?/rdbms/admin/utlrp.sql

Check the database components:

SQL> select name,status from dba_registry;

If everything is OK, then we can install Oracle Text using DBCA (Database Configuration Assistant):

export DISPLAY=PCXXXXX:0.0

cd $ORACLE_HOME/bin

dbca