
Tuesday, May 7, 2024

physical standby without duplicate command


Create a physical standby database using RMAN without using the duplicate command.

PURPOSE: All documents are provided on this blog for educational purposes only. Please make sure you run them in a test environment before moving on to a production environment.


Step 1: Back up the database, including the datafiles, archivelogs, and a controlfile for standby
Step 2: Move the backups to the standby server
Step 3: Make the required changes in the parameter files of both the primary and standby databases
Step 4: Restore and recover the standby database

Step 5: Put the standby database in managed recovery mode



Step 1:

=>> Primary:

rman target /
RMAN> run
{
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
backup database plus archivelog;
}

Back up the controlfile for standby:

rman target /
backup device type disk format
'/fast_recovery_area/TSTDB/backup/%U' current controlfile for standby;
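
To identify the backup piece that contains the standby controlfile (its name is needed for the restore on the standby side), you can list the backups with standard RMAN commands; the output format will vary:

rman target /
RMAN> list backup of controlfile;
RMAN> list backup summary;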


Step 2:

Move the backups and standby controlfile to the standby server
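
A minimal sketch for copying the backup pieces, assuming the same directory layout exists on the standby host (hostname and paths are placeholders):

scp /fast_recovery_area/TSTDB/backup/* oracle@standby_host:/fast_recovery_area/TSTDB/backup/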


-- parameter setting on Standby DB:
-- Standby steps 

shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
archive log list;


alter system set db_recovery_file_dest='/fast_recovery_area' scope=spfile;

alter system set db_recovery_file_dest_size = 50G scope=spfile sid='*';

alter system set log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST'
scope=spfile;
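
Once the instance has been restarted with the spfile, you can verify the settings with standard SQL*Plus commands:

show parameter db_recovery_file_dest
show parameter log_archive_dest_1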


Standby DB

rman target /
startup nomount;

-- RMAN> set dbid=3959250272;   -- set the DBID if needed; in my case it was not required

restore standby controlfile from
'/fast_recovery_area/TSTDB/backup/fb2q6pco_7659_1_1';
sql 'alter database mount standby database';
restore database;

RMAN> exit
SQL> alter database recover managed standby database disconnect from session;

SQL> alter database recover managed standby database cancel;
-- to stop managed recovery if needed

-- Primary DB:

-- check the parameter

-- with a 3-hour delay
alter system set log_archive_dest_2='SERVICE=tst_stb_01 compression=DISABLE delay=180';  

-- no delay
alter system set log_archive_dest_2='SERVICE=tst_stb_01 compression=DISABLE';  


alter system set log_archive_dest_state_2=defer;

alter system set log_archive_dest_state_2=enable;
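
The service name tst_stb_01 used in log_archive_dest_2 must resolve from the primary host. A minimal tnsnames.ora entry might look like the following (hostname, port, and service name are assumptions for illustration):

tst_stb_01 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = TSTDB))
  )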

-- Standby DB

Wait and check the alert.log files


SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;

You can also check the last log apply date and time:

SELECT
    CASE
        WHEN TO_CHAR(MAX(COMPLETION_TIME), 'YYYY-MM-DD') = TO_CHAR(SYSDATE, 'YYYY-MM-DD')
        THEN TO_CHAR(MAX(COMPLETION_TIME), 'YYYY-MM-DD HH24:MI:SS')
        ELSE 'ERROR'
    END AS last_apply_time
FROM V$ARCHIVED_LOG
WHERE DEST_ID = (SELECT MIN(DEST_ID) FROM V$ARCHIVE_DEST WHERE STATUS = 'VALID');
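
To check for archive log gaps on the standby, the standard V$ARCHIVE_GAP view can be queried (it returns no rows when there is no gap):

SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;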
 


Friday, December 15, 2023

Oracle Data Pump


PURPOSE: All documents are provided on this blog for educational purposes only. Please make sure you run them in a test environment before moving on to a production environment.

 

ORA-31633: unable to create master table.

You can check the running Data Pump jobs:

SELECT owner_name, job_name, operation, job_mode, state, attached_sessions FROM dba_datapump_jobs WHERE job_name NOT LIKE 'BIN$%' ORDER BY 1,2;

 

You can identify the corresponding master tables:

SELECT o.status, o.object_id, o.object_type, o.owner||'.'||object_name "OWNER.OBJECT"
FROM dba_objects o, dba_datapump_jobs j
WHERE o.owner=j.owner_name AND o.object_name=j.job_name
AND j.job_name NOT LIKE 'BIN$%' ORDER BY 4,2;

 

If the job cannot be restarted and is no longer needed, drop the orphaned master table, for example:

drop table SYS.EXPDP_JOB;
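
Alternatively, you can try to attach to the stuck job and kill it through the Data Pump interactive interface; a minimal sketch assuming the job name shown above:

expdp "'"/ as sysdba"'" attach=EXPDP_JOB
Export> kill_job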

STEPS:

1.  Create or change a directory for the expdp

CREATE OR REPLACE DIRECTORY DATA_PUMP_DIR AS '/fast_recovery_area/dpump_DB';

GRANT READ, WRITE ON DIRECTORY DATA_PUMP_DIR TO system;

 

Check the directory location:

col DIRECTORY_PATH format a43;
col OWNER format a10;
col DIRECTORY_NAME format a20;
set linesize 200;
select * from dba_directories where directory_name='DATA_PUMP_DIR';

Export FULL DB:

. /users/oracle/set_home_env KARDBTES
expdp "'"/ as sysdba"'" directory=DATA_PUMP_DIR logfile=dexp_ISMDB.log JOB_NAME= dexp_KARDBTES DUMPFILE=KARDBTES_%U.dmp FULL=YES

Import FULL DB:

impdp "'"/ as sysdba"'" DUMPFILE=kardbtes_%U.dmp logfile=imp_FULLDB_KARDBTES.log directory=DATA_PUMP_DIR PARALLEL=4 full=y

2.   Export TABLE:

expdp system/xxxxx directory=DATA_PUMP_DIR dumpfile=table_data_%U.dmp tables=KARDBTES.MIGTAB01 content=data_only logfile=export_to_fix_.log

 

3.   Import TABLES:

cd /oracledb/datapump

impdp system/xxxx directory=DATA_PUMP_DIR dumpfile=table_data_%u.dmp tables=KARDBTES.MIGTAB01,KARDBTES.MIGTAB02 logfile=import_to_fix.log parallel=4


 Export Schema: 

#!/bin/bash

. /users/oracle/set_home_env KARDBTES

expdp "'"/ as sysdba"'" schemas=KARDBTES01 directory=DATA_PUMP_DIR logfile=exp_KARDBTES01_`date +%b_%d_%H_%M_%S.`log JOB_NAME=expdp_KARDBTES01 DUMPFILE=KARDBTES01_%U_`date +%b_%d_%H_%M_%S.`dmp

Or:

4.  Create a script to export multiple schemas.

vi datapump_exp_schema.sh

. /users/oracle/set_home_env KARDBTES

expdp "'"/ as sysdba"'" schemas=KARDBTES01, KARDBTES02 directory=DATA_PUMP_DIR parallel=5 logfile=dexp_KARDBTES .log JOB_NAME= dexp_KARDBTES DUMPFILE=kardbtes_%U.dmp

Execute the script as follows:

nohup /oracledb/datapump/datapump_exp_schema.sh >/oracledb/datapump/datapump_exp_schema.log &

 

Import the data into another DB with the same schema name.

vi datapump_imp_schema.sh

. /users/oracle/setdb_home KARDBTES

impdp "'"/ as sysdba"'" schemas= KARDBTES01, KARDBTES02 directory=DATA_PUMP_DIR parallel=5 logfile=dimp_ KARDBTES.log JOB_NAME= dimp_KARDBTES DUMPFILE= KARDBTES _%U.dmp TRANSFORM=oid:n

# Execute the job as follows:

nohup /oracledb/datapump/datapump_imp_schema.sh >/oracledb/datapump/datapump_imp_schema.log &


Import as a different schema name

vi datapump_imp_schema.sh

#!/bin/bash

. /users/oracle/setdb_home KARDBTES

impdp \'/ as sysdba\' schemas=KARDB_DEV REMAP_SCHEMA=KARDB_DEV:KARDB_TST exclude=statistics directory=data_pump_dir LOGFILE=SANC_DEV_imp.$(date +%d%m%y-%H%M).log JOB_NAME=imp_kardb_job DUMPFILE=KARDB_DEV01.210624-1109.dmp

 

 

5.  At the end, execute the following commands. Which one you use depends on whether you exported just a table or a whole schema.

sqlplus /

SQL> exec dbms_stats.gather_table_stats (ownname  => 'SCHEMA_NAME', tabname => 'TABLE_NAME' , estimate_percent => 5, degree => 8, cascade => true);

SQL> alter system flush shared_pool;

 

6.  If you want to export and import only the schema structure without data, create a script as follows (metadata only, no data).

more expdb_full_schema_description.sh

 

#!/bin/bash

. set_home_env KARDBTES

echo $ORACLE_SID
date

YYYYMMDD=`date +%Y%m%d`

expdp "'"/ as sysdba"'" content=METADATA_ONLY \
  schemas=schema1,schema2,schema3 \
  directory=DATA_PUMP_DIR reuse_dumpfiles=y \
  dumpfile=kardbtes_description_$YYYYMMDD.dmp \
  logfile=kardbtes_description_$YYYYMMDD.log

date

7.  You can import the schemas as follows.

You may need to create the data and index tablespaces before you import.

Example data tablespace:

CREATE TABLESPACE DATA
DATAFILE '+DATA01' SIZE 1G AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED
LOGGING
DEFAULT
NO INMEMORY
ONLINE
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 16K
SEGMENT SPACE MANAGEMENT AUTO
FLASHBACK ON;

Example index tablespace:

CREATE TABLESPACE INDX
DATAFILE '+DATA01' SIZE 1G AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED
LOGGING
DEFAULT
NO INMEMORY
ONLINE
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 16K
SEGMENT SPACE MANAGEMENT AUTO
FLASHBACK ON;

 

#!/bin/bash

. /users/oracle/set_home_env KARDBTES

echo $ORACLE_SID
date

YYYYMMDD=`date +%Y%m%d`

impdp "'"/ as sysdba"'" content=metadata_only \
  schemas=schema1,schema2,schema3 \
  exclude=statistics \
  logfile=kurs1_nodata_MDV_MASTER_impdp_$YYYYMMDD.log \
  JOB_NAME=impdp_nodataKARDBTES DUMPFILE=KARDBTES_description_schema_20210319.dmp

date

 

Exclude some tables on expdp (export the complete schema except some tables):

 

#!/bin/bash

. /users/oracle/set_home_env KARDBTES

expdp "'"/ as sysdba"'" schemas=schemaname directory=DATA_PUMP_DIR \
  logfile=exp_schema.log PARALLEL=7 JOB_NAME=expdp_exclude \
  DUMPFILE=schema_name_%U.dmp \
  EXCLUDE=TABLE:\"IN\(\'table1\',\'table2\',\'table3\',\'table4\'\)\"
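
To avoid the shell escaping around EXCLUDE, the parameters can instead go into a parameter file, where no escaping is needed; a minimal sketch (the parfile name is a placeholder). Contents of exclude_tables.par:

schemas=schemaname
directory=DATA_PUMP_DIR
dumpfile=schema_name_%U.dmp
logfile=exp_schema.log
parallel=7
EXCLUDE=TABLE:"IN ('table1','table2','table3','table4')"

Then run:

expdp "'"/ as sysdba"'" parfile=exclude_tables.par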

 

Exclude some tables on impdp (import the complete schema except some tables):

#!/bin/bash

. /users/oracle/set_home_env KARDBTES

impdp "'"/ as sysdba"'" schemas=schemaname directory=DATA_PUMP_DIR \
  logfile=imp_schema.log PARALLEL=7 JOB_NAME=impdp_exclude \
  DUMPFILE=schema_name_%U.dmp \
  EXCLUDE=TABLE:\"IN\(\'table1\',\'table2\',\'table3\',\'table4\'\)\" \
  TABLE_EXISTS_ACTION=REPLACE

 

Wednesday, May 17, 2023

Linux update RHEL 7 to RHEL 8 using Leapp


Upgrading the system from RHEL 7 to RHEL 8 using Leapp

PURPOSE: All documents are provided on this blog for educational purposes only. Please make sure you run them in a test environment before moving on to a production environment.


Linux update to Red Hat 8.6 using Leapp
subscription-manager list --installed
subscription-manager repos --enable rhel-7-server-rpms
subscription-manager repos --enable rhel-7-server-extras-rpms
subscription-manager release --unset

yum versionlock clear 

cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.5 (Maipo)

yum update -y   # first bring the current release up to date
or 
yum --releasever=7.9 update -y

shutdown -r now
uname -a

yum install leapp leapp-repository cockpit-leapp wget -y

# Download the latest leapp-data file:

https://access.redhat.com/articles/3664871

tar -xzf leapp-data21.tar.gz -C /etc/leapp/files && rm leapp-data21.tar.gz


ll /etc/leapp/files


# This enables the web UI for the report:

systemctl enable --now cockpit

https://hostname:9090

firewall-cmd --list-all
firewall-cmd --permanent --add-port=9090/tcp
firewall-cmd --reload

# Preupgrade task:

leapp preupgrade

UI for the report:

https://[hostname]:9090/leapp
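
Besides the web UI, the preupgrade report is also written as plain text on the host; it is typically found at /var/log/leapp/leapp-report.txt:

less /var/log/leapp/leapp-report.txt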


Fix the errors 

1. symbolic links point to absolute paths that have non-utf8 encoding and need to be fixed additionally

SOLUTION:

https://access.redhat.com/solutions/6989732

ls -l / | grep ^l

unlink /oracle 

ln -s oracle12.2 /oracle

ll / | grep ^l


2. The following packages have not been signed by Red Hat and may be removed during the upgrade...

SOLUTION:

Ignore this error for now.


3. Support for the following RHEL 7 device drivers has been removed in RHEL 8: - pata_acpi

# do this after rebooting the server

rmmod pata_acpi

or

modprobe -r pata_acpi


4. If you depend on remote root logins using passwords, consider setting up a different user for remote administration or adding "PermitRootLogin yes" to sshd_config.

vi /etc/ssh/sshd_config

uncomment the parameter

PermitRootLogin yes

systemctl restart sshd


5. The following RHEL 7 device drivers are no longer maintained RHEL 8: - e1000 - mptspi - mptbase - mptscsih - mptspi - e1000

# you need to remove the network adapter and recreate it via vCenter

# before you remove the network adapter, create a new interface configuration:

cd /etc/sysconfig/network-scripts


vi ifcfg-ens192_NEW

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=xxx.xx.xx.xx
PREFIX=22
GATEWAY=xxx.xx.xx.x
DNS1=xxx.xx.xx.x
DNS2=xxx.xx.xx.x
DNS3=xxx.xx.xx.x
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=eui64
NAME=ens160
DEVICE=ens160
ONBOOT=yes
NM_CONTROLLED=no


Login to VCenter:

Edit Settings ==> Network adapter 1 (remove)

Edit Settings ==> Add New Device ==> Network Adapter (it will automatically create the correct one, VMXNET 3)

# Via the Launch Console, do the following:

mv ifcfg-ens32 ifcfg-ens32_OLD

mv ifcfg-ens192_NEW ifcfg-ens192


6. On legacy (BIOS) systems, GRUB core...

# can be ignored at this point


7. Please register user choices with leapp answer cli command or by manually editing the answerfile.

leapp answer --section remove_pam_pkcs11_module_check.confirm=True
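
Alternatively, the same choice can be recorded by editing the Leapp answerfile directly; it is typically located at /var/log/leapp/answerfile (verify the path on your system):

vi /var/log/leapp/answerfile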


8. 16 packages will be skipped because

# can be ignored for now


# Run the preupgrade again.

leapp preupgrade

#check the report again


# IMPORTANT: Before the upgrade, run the following export command. Otherwise you will get the following error.

Disk Requirements:

   At least 1597MB more space needed on the / filesystem.

export LEAPP_OVL_SIZE=4096

leapp upgrade

#check the logs 

Reboot the system ==> IMPORTANT: open the console via VMware

-- The actual upgrade will start. Watch it over the console.

-- The system will reboot automatically, or you can reboot it yourself after the upgrade.

-- Now log in via ssh

uname -a


After upgrade:

alternatives --set python /usr/bin/python3

# if needed 
getenforce
setenforce 1
getenforce

# Check for any remaining el7 RPMs. Be careful.

rpm -qa | grep -i el7

yum erase xxx xxx xxx -y

yum list kernel   # check whether any old kernels are still installed

df -h

# If your environment needs it:

cat /etc/default/grub

cat /proc/cmdline

grub2-mkconfig -o /boot/grub2/grub.cfg


yum --releasever=8.7 update -y

done!!

Your comments are most valuable and help me come up with better posts.


Friday, March 24, 2023

Oracle January Patch 19 2023

Oracle Critical Patch Update on 19c - January 2023

Oracle Critical Patch Update (CPU)

Patch 34773504 - Combo of OJVM Component Release Update 19.18.0.0.230117 + Grid Infrastructure Jan 2023 Release Update 19.18.0.0.230117


PURPOSE: All documents are provided on this blog for educational purposes only. Please make sure you run them in a test environment before moving on to a production environment.

Execute the following commands:

STEP 1

Patch 34762026 - GI Release Update 19.18.0.0.230117 (REL-JAN230131)

1)
NOTE: IMPORTANT
GI Home:
You first need to update OPatch to a higher or the latest version.
You can download the latest version from the following link.

For the JAN CPU, use the latest OPatch version "OPatch 12.2.0.1.36 for DB 19.0.0.0.0 (Jan 2023)".

Download the latest OPatch from "Download Latest OPatch".

Copy the latest OPatch files:
p6880880_190000_Linux-x86-64.zip

cd $GRID_HOME
mv OPatch OPatch_old
cd $DOWNLOAD_PATCH
cp -r OPatch $GRID_HOME

You need to do the same on the ORACLE_HOME.
Copy the latest OPatch files:

cd $ORACLE_HOME
mv OPatch OPatch_old
cd $DOWNLOAD_PATCH
cp -r OPatch $ORACLE_HOME
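
You can verify that both homes now pick up the new OPatch version:

$GRID_HOME/OPatch/opatch version
$ORACLE_HOME/OPatch/opatch version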


Unzip the downloaded January patch file:
p34762026_190000_Linux-x86-64.zip

Check for conflicts:
From the Grid Home

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir  $DOWNLOAD_PATCH/34762026/34765931

Oracle Interim Patch Installer version 12.2.0.1.29
Copyright (c) 2020, Oracle Corporation.  All rights reserved.

PREREQ session


Oracle Home       : /u02/oracle/app/home/product/19.0.0/grid_19

Central Inventory :$ORACLE_BASE/app/oraInventory
   from           :$GRID_HOME/oraInst.loc
OPatch version    : 12.2.0.1.29
OUI version       : 12.2.0.1.4
Log file location : $GRID_HOME/cfgtoollogs//opatch/opatchxxxx

Invoking prereq "checkconflictagainstohwithdetail"


Prereq "checkConflictAgainstOHWithDetail" passed.


OPatch succeeded.

I am not going to add the output of every command; it should look like the above.

$DOWNLOAD_PATCH is a directory where the patch is located.

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $DOWNLOAD_PATCH/34762026/34768559
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $DOWNLOAD_PATCH/34762026/34768569
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $DOWNLOAD_PATCH/34762026/34863894
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $DOWNLOAD_PATCH/34762026/33575402


For the Database home, as the Oracle home owner:
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $DOWNLOAD_PATCH/34762026/34765931
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $DOWNLOAD_PATCH/34762026/34768559


2)
Shut down all instances and listeners associated with the Oracle home that you are updating.
Stop a PDB from Oracle home:
alter pluggable database KARDBPDB close immediate;
srvctl stop database -d KARDBTST
srvctl status database -d KARDBTST
Database is not running.

Stop Listener from GI Home

srvctl stop  listener -l LISTENER
srvctl status listener -l LISTENER
Listener LISTENER is enabled
Listener LISTENER is not running

Stop ASM
srvctl stop asm -f
srvctl status asm
ASM is not running.

3)
As the root user, execute the following command.
You have the following options:
 i) To patch only the GI Home:
 <GI_HOME>/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/34762026 -oh <GI_HOME>

 ii) To patch only the Oracle Home:
 opatchauto apply <UNZIPPED_PATCH_LOCATION>/34762026 -oh <oracle_home1_path>

 iii) To patch the Oracle Home and the Grid Home together:
 As the root user, execute the following command on each node of the cluster:
 # <GI_HOME>/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/34762026

I am going to apply the patch to both the Grid Home and the Oracle Home together.

NOTE: make sure that the Clusterware/HAS stack is running:
crsctl check has
If it is not running, start it as follows:

crsctl start has

export PATH=$PATH:$GRID_HOME/OPatch
$GRID_HOME/OPatch/opatchauto apply $DOWNLOAD_PATCH/34762026

Post-Installation Instructions
For both single-instance and multitenant (CDB/PDB) databases, the following steps are required.

sqlplus / as sysdba
startup
alter pluggable database all open;
quit
cd $ORACLE_HOME/OPatch
./datapatch -verbose

Any database that has invalid objects needs to run utlrp.sql.

For example:
cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql

-- You can check the component versions and status with the following query:
set pagesize 250;
col STATUS format a20;
col COMP_NAME format a35;
select COMP_NAME, version, status from dba_registry;
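
To list individual invalid objects, the standard DBA_OBJECTS view can be queried:

select owner, object_name, object_type from dba_objects where status = 'INVALID' order by owner, object_name;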


STEP 2
Patch 34786990 - Oracle JavaVM Component Release Update 19.18.0.0.230117

For a non-RAC environment, shut down all databases and listeners associated with the Oracle home that you are updating.

From the Oracle Home:
Normally there is no need to stop ASM; you only need to stop the database and the listener.

alter pluggable database KARDBPDB close immediate;
srvctl stop database -d KARDBTST
srvctl stop  listener -l LISTENER

export PATH=$PATH:/usr/ccs/bin

$ORACLE_HOME/OPatch/opatch apply $DOWNLOAD_PATCH/34786990

Post-installation steps:

sqlplus / as sysdba
startup
alter pluggable database all open;
exit

cd $ORACLE_HOME/OPatch
./datapatch -verbose

cd $ORACLE_HOME/rdbms/admin
sqlplus / as sysdba
SQL> @utlrp.sql

$ORACLE_HOME/OPatch/opatch lspatches |sort

==== DONE ====

Verify the patch Version and Status of the patch:
col ID format a5
col COMMENTS format a20
col VERSION format a12
col BUNDLE format a5
col ACTION_TIME format a30
col ACTION format a10
set linesize 150
select substr(action_time,1,30) action_time, substr(id,1,8) id, substr(action,1,10) action, substr(version,1,8) version, substr(comments,1,20) comments from sys.registry$history ;



set pagesize 20;
set linesize 200;
col ACTION_TIME format a30;
col DESCRIPTION format a50;
select PATCH_ID,ACTION,ACTION_TIME,DESCRIPTION,STATUS from registry$sqlpatch order by ACTION_TIME ;


select PATCH_ID,DESCRIPTION,STATUS from registry$sqlpatch order by ACTION_TIME ;



Enjoy !

Your comments are most valuable and help me come up with better posts.
