
Wednesday, January 21, 2026

SGA resize

How to resize SGA

PURPOSE: All documents on this blog are provided for educational purposes only. Please make sure you run everything in a test environment before moving it to a production environment.


SELECT name,value/1024/1024/1024 "SGA (GB)" FROM v$sga;
SELECT sum(value)/1024/1024/1024 "TOTAL SGA (GB)" FROM v$sga;

NOTE: Create a pfile from the spfile before making the change.

alter system set sga_target=72G scope=spfile sid='*';
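Because the change is made with scope=spfile, it only takes effect after an instance restart; a minimal sketch:

-- restart so the new SGA size is picked up
shutdown immediate;
startup;
show parameter sga_target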

---
SQL> alter system set sga_max_size=32G scope=spfile;

System altered.

!!! PGA_AGGREGATE_TARGET should be lower than PGA_AGGREGATE_LIMIT.

NOTE: The value (pga_aggregate_limit) should be smaller than the SGA and larger than pga_aggregate_target.

System altered.

NOTE: The value (pga_aggregate_target) should be smaller than pga_aggregate_limit.

System altered.
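For reference, the alter system commands belonging to the two notes above would look like the following sketch; the sizes are assumed example values and must be adjusted to your system:

-- example values only: keep pga_aggregate_limit above pga_aggregate_target
alter system set pga_aggregate_limit=16G scope=spfile sid='*';
alter system set pga_aggregate_target=8G scope=spfile sid='*';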

----





Friday, December 5, 2025

Oracle JDBC Connection Test



How to test an Oracle connection via the JDBC Thin driver.



PURPOSE: All documents on this blog are provided for educational purposes only. Please make sure you run everything in a test environment before moving it to a production environment.


STEP 1.
Create a file OracleCon.java with the following code.
vi OracleCon.java

import java.sql.*;

public class OracleCon
{
  public static void main(String args[]) throws Exception
  {
    // Load the Oracle JDBC Thin driver
    Class.forName("oracle.jdbc.driver.OracleDriver");
    String url = "jdbc:oracle:thin:@ORADBPRD.th.com:1521:KARDBPRD";
    Connection conn = DriverManager.getConnection(url, "user_tst01", "pw_xyz");
    conn.setAutoCommit(false);
    Statement stmt = conn.createStatement();
    // Note: no trailing semicolon inside the query string
    ResultSet rs = stmt.executeQuery("select user from dual");
    while (rs.next())
    {
      System.out.println(rs.getString(1));
    }
    stmt.close();
    conn.close();
    System.out.println("Connection succeeded");
  }
}

STEP 2.
Compile the code and execute the program.
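A minimal sketch of the compile and run commands, assuming the Oracle JDBC driver jar is available as $ORACLE_HOME/jdbc/lib/ojdbc8.jar (the jar name and location are an assumption; adjust them to your installation):

# compile the test class
javac OracleCon.java
# run it with the JDBC driver on the classpath (driver jar path is assumed)
java -cp .:$ORACLE_HOME/jdbc/lib/ojdbc8.jar OracleCon

If the connection works, the output ends with: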

Connection succeeded




Tuesday, June 10, 2025

April Patch 19c 2025

  

Oracle Critical Patch Update on 19c - April 2025

Oracle Critical Patch Update (CPU)

19.27.0.0.250415 GI RU Combo:

Patch 37591516 - Combo of OJVM Component Release Update 19.27.0.0.250415 + Grid Infrastructure Apr 2025 Release Update 19.27.0.0.250415


PURPOSE: All documents on this blog are provided for educational purposes only. Please make sure you run everything in a test environment before moving it to a production environment.

Execute the following commands:

STEP 1

Oracle® Database Patch 37641958 - GI Release Update 19.27.0.0.250415

1)
NOTE: IMPORTANT
GI Home:
You first need to update OPatch to the latest version in both homes.
For the April CPU the latest OPatch version is required. Download it from "https://updates.oracle.com/download/6880880.html".

Copy the latest OPatch into the GI home:
cd $GRID_HOME
mv OPatch OPatch_old
cd $DOWNLOAD_PATCH
cp -r OPatch $GRID_HOME

You need to do the same for the ORACLE_HOME.

Copy the latest OPatch into the ORACLE_HOME:
cd $ORACLE_HOME
mv OPatch OPatch_old
cd $DOWNLOAD_PATCH
cp -r OPatch $ORACLE_HOME
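Afterwards, verify that each home picks up the new OPatch version, for example:

$GRID_HOME/OPatch/opatch version
$ORACLE_HOME/OPatch/opatch version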


Unzip the downloaded April patch file.

unzip p37641958_190000_<platform>.zip

Check for conflicts.
From the Grid home:

$DOWNLOAD_PATCH is the directory where the patch is located.

$GRID_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $DOWNLOAD_PATCH/37641958/37642901

$GRID_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $DOWNLOAD_PATCH/37641958/37654975

$GRID_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $DOWNLOAD_PATCH/37641958/37643161

$GRID_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $DOWNLOAD_PATCH/37641958/37762426

$GRID_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $DOWNLOAD_PATCH/37641958/36758186


For the Database home, as the Oracle home owner:
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $DOWNLOAD_PATCH/37641958/37642901

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $DOWNLOAD_PATCH/37641958/37654975


2)
Shut down all instances and listeners associated with the Oracle home that you are updating.
Stop the PDB and the database from the Oracle home:
alter pluggable database KARDBPDB close immediate;
srvctl stop database -d KARDBTST
srvctl status database -d KARDBTST
Database is not running.

Stop the listener from the GI home:

srvctl stop listener -l LISTENER
srvctl status listener -l LISTENER
Listener LISTENER is enabled
Listener LISTENER is not running

Stop ASM:
srvctl stop asm
srvctl status asm
ASM is not running.

3)
As the root user, execute the following command.
You have the following options:
 i) To patch only the GI home:
 <GI_HOME>/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/37641958 -oh <GI_HOME>

 ii) To patch only the Oracle home:
 opatchauto apply <UNZIPPED_PATCH_LOCATION>/37641958 -oh <oracle_home1_path>

 iii) To patch the Oracle home and the Grid home together:
 As the root user, execute the following command on each node of the cluster:
 # <GI_HOME>/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/37641958

I am going to apply the patch to both the Grid home and the Oracle home together.

NOTE: Make sure that the Clusterware stack is running.
If it is not running, start it as follows:

crsctl start has

export PATH=$PATH:$GRID_HOME/OPatch
$GRID_HOME/OPatch/opatchauto apply $DOWNLOAD_PATCH/37641958
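Once opatchauto finishes, you can list the patches registered in each home, for example:

$GRID_HOME/OPatch/opatch lspatches
$ORACLE_HOME/OPatch/opatch lspatches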

Post-Installation Instructions
Single/Multitenant (CDB/PDB) DB
sqlplus / as sysdba
startup
alter pluggable database all open;
quit
cd $ORACLE_HOME/OPatch
./datapatch -verbose

For any database that has invalid objects, run utlrp.sql to recompile them.

For example:
cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql

-- You can check the registry components and their status with the following query.
set pagesize 250;
col STATUS format a20;
col COMP_NAME format a35;
select COMP_NAME, version, status from dba_registry;
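To list any remaining invalid objects individually, a standard query such as the following can be used:

select owner, object_name, object_type from dba_objects where status = 'INVALID' order by owner, object_name;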


STEP 2

For a non-RAC environment, shut down all databases and listeners associated with the Oracle home that you are updating.

From the Oracle home.
Normally there is no need to stop ASM; just stop the database and the listener.

alter pluggable database KARDBPDB close immediate;
srvctl stop database -d KARDBTST
srvctl stop  listener -l LISTENER

export PATH=$PATH:/usr/ccs/bin

$ORACLE_HOME/OPatch/opatch apply $DOWNLOAD_PATCH/37641958/37642901

Post-installation steps:

sqlplus / as sysdba
startup upgrade
alter pluggable database all open upgrade;
exit
cd $ORACLE_HOME/OPatch
./datapatch -verbose
sqlplus / as sysdba
shutdown
startup
alter pluggable database all open;

cd $ORACLE_HOME/rdbms/admin
SQL> @utlrp.sql

==== DONE ====

Verify the patch Version and Status of the patch:
col ID format a5
col COMMENTS format a20
col VERSION format a12
col BUNDLE format a5
col ACTION_TIME format a30
col ACTION format a10
set linesize 150
select substr(action_time,1,30) action_time, substr(id,1,8) id, substr(action,1,10) action, substr(version,1,8) version, substr(comments,1,20) comments from sys.registry$history ;



set pagesize 20;
set linesize 200;
col ACTION_TIME format a30;
col DESCRIPTION format a50;
select PATCH_ID,ACTION,ACTION_TIME,DESCRIPTION,STATUS from registry$sqlpatch order by ACTION_TIME ;


select PATCH_ID,DESCRIPTION,STATUS from registry$sqlpatch order by ACTION_TIME ;



Enjoy!

Your comments are most valuable and help me come up with better posts.

Tuesday, May 7, 2024

Physical standby without the DUPLICATE command

Physical standby without the DUPLICATE command

How to create a physical standby database using RMAN without using the DUPLICATE command.

PURPOSE: All documents on this blog are provided for educational purposes only. Please make sure you run everything in a test environment before moving it to a production environment.


Step 1: Back up the database, including the datafiles, archived logs, and a controlfile for standby
Step 2: Move the backups to the standby server
Step 3: Make the necessary parameter changes on both the primary and the standby database
Step 4: Restore and recover the standby database

Step 5: Put the standby database in managed recovery mode



Step 1:

=>> Primary:

rman target /
RMAN> run
{
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
backup database plus archivelog;
}

Back up the controlfile for standby:

rman target /
backup device type disk format
'/fast_recovery_area/TSTDB/backup/%U' current controlfile for standby;


Step 2:

Move the backups and standby controlfile to the standby server


-- parameter setting on Standby DB:
-- Standby steps 

shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
archive log list;


alter system set db_recovery_file_dest='/fast_recovery_area' scope=spfile;

alter system set db_recovery_file_dest_size = 50G scope=spfile sid='*';

alter system set log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST'
scope=spfile;


Standby DB

rman target /
startup nomount;

-- RMAN> set dbid=3959250272;   (only if needed; it was not needed in my case)

restore standby controlfile from
'/fast_recovery_area/TSTDB/backup/fb2q6pco_7659_1_1';
sql 'alter database mount standby database';
restore database;

RMAN> exit
SQL> alter database recover managed standby database disconnect from session;

SQL> ALTER DATABASE RECOVER  managed standby database  cancel;  
-- cancel managed standby DB

-- Primary DB:

-- check the parameter

-- with 3 hour delay
alter system set log_archive_dest_2='SERVICE=tst_stb_01 compression=DISABLE delay=180';  

-- no delay
alter system set log_archive_dest_2='SERVICE=tst_stb_01 compression=DISABLE';  


alter system set log_archive_dest_state_2=defer;

alter system set log_archive_dest_state_2=enable;

-- Standby DB

Wait and check the alert.log files


SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;

You can also check the last log apply date and time:

SELECT
    CASE
        WHEN TO_CHAR(MAX(COMPLETION_TIME), 'YYYY-MM-DD') = TO_CHAR(SYSDATE, 'YYYY-MM-DD')
        THEN TO_CHAR(MAX(COMPLETION_TIME), 'YYYY-MM-DD HH24:MI:SS')
        ELSE 'ERROR'
    END AS last_apply_time
FROM V$ARCHIVED_LOG
WHERE DEST_ID = (SELECT MIN(DEST_ID) FROM V$ARCHIVE_DEST WHERE STATUS = 'VALID');
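As an additional check on the standby, the apply and transport lag can be read from V$DATAGUARD_STATS, for example:

SELECT name, value FROM V$DATAGUARD_STATS WHERE name IN ('apply lag', 'transport lag');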
 


Friday, December 15, 2023

Oracle Data Pump

Oracle Data Pump

PURPOSE: All documents on this blog are provided for educational purposes only. Please make sure you run everything in a test environment before moving it to a production environment.

ORA-31633: unable to create master table. To resolve this, first check the existing Data Pump jobs:

SELECT owner_name, job_name, operation, job_mode, state, attached_sessions
FROM dba_datapump_jobs WHERE job_name NOT LIKE 'BIN$%' ORDER BY 1,2;

You can see the master table:

SELECT o.status, o.object_id, o.object_type, o.owner||'.'||object_name "OWNER.OBJECT"
FROM dba_objects o, dba_datapump_jobs j WHERE o.owner=j.owner_name
AND o.object_name=j.job_name AND j.job_name NOT LIKE 'BIN$%' ORDER BY 4,2;

If the job is no longer running, you can drop the orphaned master table, for example:
drop table SYS.EXPDP_JOB;

Create Directory for DATAPUMP:

Create or change a directory for the expdp

CREATE OR REPLACE DIRECTORY DATA_PUMP_DIR AS '/fast_recovery_area/${ORACLE_SID}/dpump';

GRANT READ, WRITE ON DIRECTORY DATA_PUMP_DIR TO system;

Check the directory location:

col DIRECTORY_PATH format a43;
col OWNER format a10;
col DIRECTORY_NAME format a20;
set linesize 200;
select * from dba_directories where directory_name='DATA_PUMP_DIR';
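Note that the OS path behind the directory object must exist and be writable by the oracle user; the path below is only the example used above (adjust it to your SID):

mkdir -p /fast_recovery_area/KARDBTES/dpump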

Export full DB:

. /users/oracle/set_home_env KARDBTES
expdp "'"/ as sysdba"'" directory=DATA_PUMP_DIR \
logfile=dexp_ISMDB.log JOB_NAME= dexp_KARDBTES DUMPFILE=KARDBTES_%U.dmp FULL=YES

Import full DB:

impdp "'"/ as sysdba"'" DUMPFILE=kardbtes_%U.dmp logfile=imp_FULLDB_KARDBTES.log \
directory=DATA_PUMP_DIR PARALLEL=4 full=y

Export Table:

#!/bin/bash
. /users/oracle/set_home_env TSTDB
cd /fast_recovery_area/TSTDB/dpdump
expdp \'/ as sysdba\' tables=HR.JOBS directory=data_pump_dir \
LOGFILE=JOBS.$(date +%d%m%y-%H%M).log JOB_NAME=exp_HR_JOBS_job \
DUMPFILE=HR_JOBS_.$(date +%d%m%y-%H%M).dmp

# If you want to export data only:
expdp system/xxxxx directory=DATA_PUMP_DIR dumpfile=table_data_%U.dmp \
tables=KARDBTES.MIGTAB01 content=data_only logfile=export_to_fix_.log

Import Table
# You can import multiple tables as follows.
cd /oracledb/datapump
impdp system/xxxx directory=DATA_PUMP_DIR dumpfile=table_data_%U.dmp \
tables=KARDBTES.MIGTAB01,KARDBTES.MIGTAB02 logfile=import_to_fix.log parallel=4

Import a table under a different name (REMAP_TABLE).

#!/bin/bash
. /users/oracle/set_home_env TSTDB
cd /fast_recovery_area/TSTDB/dpdump
impdp \'/ as sysdba\' \
DIRECTORY=data_pump_dir \
  DUMPFILE=HR_EMPLOYEES_01_.dmp \
  LOGFILE=EMPLOYEES.$(date +%d%m%y-%H%M).log \
  TABLES=HR.EMPLOYEES \
  EXCLUDE=CONSTRAINT # necessary if the table contains constraints
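To actually import the table under a different name, a REMAP_TABLE parameter can be added to the impdp call above; a sketch with an assumed target table name (EMPLOYEES_COPY is hypothetical):

impdp \'/ as sysdba\' DIRECTORY=data_pump_dir DUMPFILE=HR_EMPLOYEES_01_.dmp \
  TABLES=HR.EMPLOYEES REMAP_TABLE=HR.EMPLOYEES:EMPLOYEES_COPY EXCLUDE=CONSTRAINT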

Export Schema.

#!/bin/bash
. /users/oracle/set_home_env KARDBTES
expdp "'"/ as sysdba"'" schemas=KARDBTES01 directory=DATA_PUMP_DIR \
logfile=exp_KARDBTES01_$(date +%d%m%y-%H%M).log JOB_NAME=expdp_KARDBTES01 \
DUMPFILE=KARDBTES01_%U_$(date +%d%m%y-%H%M).dmp

Or:

#Create a script to export multiple schemas.

vi datapump_exp_schema.sh
. /users/oracle/set_home_env KARDBTES
expdp "'"/ as sysdba"'" schemas=KARDBTES01, KARDBTES02 directory=DATA_PUMP_DIR parallel=5 \
logfile=dexp_KARDBTES .log JOB_NAME= dexp_KARDBTES DUMPFILE=kardbtes_%U.dmp

# Execute the script as follows.
nohup /oracledb/datapump/datapump_exp_schema.sh >/oracledb/datapump/datapump_exp_schema.log &

# Import the data into another DB with the same schema names.

vi datapump_imp_schema.sh
. /users/oracle/setdb_home KARDBTES
impdp "'"/ as sysdba"'" schemas= KARDBTES01, KARDBTES02 directory=DATA_PUMP_DIR parallel=5 \
logfile=dimp_ KARDBTES.log JOB_NAME= dimp_KARDBTES DUMPFILE= KARDBTES _%U.dmp TRANSFORM=oid:n

# Execute the job as follows:
nohup /oracledb/datapump/datapump_imp_schema.sh >/oracledb/datapump/datapump_imp_schema.log &

Import under a different schema name (REMAP_SCHEMA).

#!/bin/bash
. /users/oracle/setdb_home KARDBTES
impdp \'/ as sysdba\' schemas=KARDB_DEV REMAP_SCHEMA=KARDB_DEV:KARDB_TST exclude=statistics \
directory=data_pump_dir LOGFILE=KARDB_DEV_imp.$(date +%d%m%y-%H%M).log JOB_NAME=imp_kardb_job \
DUMPFILE=KARDB_DEV01.210624-1109.dmp

At the end you may need to gather statistics and flush the shared pool, for example:

sqlplus / as sysdba
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCHEMA_NAME', tabname => 'TABLE_NAME', estimate_percent => 5, degree => 8, cascade => true);
SQL> alter system flush shared_pool;

Export/import only the schema structure (no data).

#!/bin/bash
. set_home_env KARDBTES
echo $ORACLE_SID
date
YYYYMMDD=`date +%Y%m%d`
expdp "'"/ as sysdba"'" content=METADATA_ONLY \
schemas=schema1,schema2,schema3 \
directory=DATA_PUMP_DIR reuse_dumpfiles=y \
dumpfile=kardbtes_description_$YYYYMMDD.dmp \
logfile=kardbtes_description_$YYYYMMDD.log
date

Import the schema as follows.

You may need to create the data and index tablespaces before the import.

Example:

Data tablespace:

CREATE TABLESPACE DATA
DATAFILE '+DATA01' SIZE 1G AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED
LOGGING
DEFAULT
NO INMEMORY
ONLINE
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 16K
SEGMENT SPACE MANAGEMENT AUTO
FLASHBACK ON;

Index tablespace:

CREATE TABLESPACE INDX
DATAFILE '+DATA01' SIZE 1G AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED
LOGGING
DEFAULT
NO INMEMORY
ONLINE
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 16K
SEGMENT SPACE MANAGEMENT AUTO
FLASHBACK ON;
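Note: both tablespaces use BLOCKSIZE 16K. If the database's default block size is not 16K, a 16K buffer cache must be configured first, otherwise the CREATE TABLESPACE statements fail; a sketch with an assumed cache size:

-- 64M is only an example value; size the 16K cache for your system
alter system set db_16k_cache_size=64M scope=both sid='*';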


#!/bin/bash
. /users/oracle/set_home_env KARDBTES
echo $ORACLE_SID
date
YYYYMMDD=`date +%Y%m%d`
impdp "'"/ as sysdba"'"  content=metadata_only \
schemas=schema1,schema2,schema3 \
exclude=statistics \
logfile=KARDBTES_nodata_impdp_$YYYYMMDD.log \
JOB_NAME=imp_nodataKARDBTES \
DUMPFILE=KARDBTES_description_schema_20210319.dmp
date

Exclude Some Tables

Export complete Schema except some Tables - Example.

#!/bin/bash
. /users/oracle/set_home_env KARDBTES
expdp "'"/ as sysdba"'"  schemas=schemaname directory=DATA_PUMP_DIR \
logfile=exp_schema.log PARALLEL=7 JOB_NAME=expdp_exclude \
DUMPFILE=schema_name_%U.dmp \
EXCLUDE=TABLE:\"IN\(\'table1\',\'table2\',\'table3\',\'table4\'\)\"

Exclude Some Tables

Import complete Schema except some Tables - Example.

#!/bin/bash
. /users/oracle/set_home_env KARDBTES
impdp "'"/ as sysdba"'"  schemas=schemaname directory=DATA_PUMP_DIR \
logfile=imp_schema.log PARALLEL=7 JOB_NAME=impdp_exclude \
DUMPFILE=schema_name_%U.dmp \
EXCLUDE=TABLE:\"IN\(\'table1\',\'table2\',\'table3\',\'table4\'\)\" \
TABLE_EXISTS_ACTION=REPLACE
