Utility Belt

ASM, Shellscripts

Map ASM Disks to Physical Devices [2]

“Holy Better Viewing, DBAtman!”

So you liked the previous script posted here? Nice, because looking further into the subject I found this pretty similar post by Mohammad Nazmul Huda.

The additional script there didn't actually work on my server, but the idea is great. So I made just a few small adjustments and now it works just fine:

# asm_report.sh (Adjusted by DBAtman):

printf "\n%-15s %-14s %-11s %-7s\n" "ASM disk" "based on" "Minor,Major" "Size (MB)"
printf "%-15s %-14s %-11s %-7s\n" "===============" "=============" "===========" "========="
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome1
Total_size=0
for i in `/usr/sbin/oracleasm listdisks`
do
v_asmdisk=`/usr/sbin/oracleasm querydisk -d $i | awk '{print $2}'| sed 's/\"//g'`
v_major=`/usr/sbin/oracleasm querydisk -d $i | awk -F[ '{print $2}'| awk -F] '{print $1}' | awk -F, '{print $1}'`
v_minor=`/usr/sbin/oracleasm querydisk -d $i | awk -F[ '{print $2}'| awk -F] '{print $1}' | awk -F, '{print $2}'`
v_device=`ls -la /dev | awk -v v_major="$v_major," -v v_minor=$v_minor '{if ( $5==v_major ) { if ( $6==v_minor ) { print $10}}}'`
v_size_bt=`blockdev --getsize64 /dev/${v_device}`
v_size=`expr $v_size_bt / 1024 / 1024`
Total_size=`expr $Total_size + $v_size`
Formated_size=`echo $v_size | sed -e :a -e 's/\(.*[0-9]\)\([0-9]\{3\}\)/\1,\2/;ta'`
printf "%-15s %-14s %-11s %-7s\n" $v_asmdisk "/dev/$v_device" "[$v_minor $v_major]" $Formated_size
done
Formated_Total_size=`echo $Total_size | sed -e :a -e 's/\(.*[0-9]\)\([0-9]\{3\}\)/\1,\2/;ta'`
printf "\nTotal (MB): %43s\n\n" $Formated_Total_size

OK, and how does it work?
[root@greporasrv ~]# sh asm_report.sh

ASM disk        based on      Major,Minor Size (MB)
=============== ============= =========== =========
DATA01          /dev/sdg1     [8 97]       255,999
DATA02          /dev/sdh1     [8 113]      255,999
DATA03          /dev/sdi1     [8 129]      255,999
DATA04          /dev/sdj1     [8 145]      255,999
FRA01           /dev/sdk1     [8 161]      307,199

Total (MB): 1,331,195

Even better, right?

Na, na, na, na, na, na, na, na, na, na, na, na… DBAtman!

ASM, Shellscripts

Map ASM Disks to Physical Devices

“Holy Mapping, DBAtman!”

So, are you having difficulties mapping your ASM disks to physical devices?
As you may have noticed, this is not a straightforward task, which means some manual work.

To save me from this, I found this great post by Alejandro Vargas, with a very nice script to make this mapping easier.

However, I found it was written for RHEL/OEL 6 and older, and I'm on OEL 7. So I made some small changes to adapt it.

Anyway, I decided to share it, as this is a great script to keep handy in your Utility Belt!

# Alejandro’s script (RHEL/OEL 6 and older):

#!/bin/ksh
for i in `/etc/init.d/oracleasm listdisks`
do
v_asmdisk=`/etc/init.d/oracleasm querydisk $i | awk  '{print $2}'`
v_minor=`/etc/init.d/oracleasm querydisk $i | awk -F[ '{print $2}'| awk -F] '{print $1}' | awk '{print $1}'`
v_major=`/etc/init.d/oracleasm querydisk $i | awk -F[ '{print $2}'| awk -F] '{print $1}' | awk '{print $2}'`
v_device=`ls -la /dev | grep $v_minor | grep $v_major | awk '{print $10}'`
echo "ASM disk $v_asmdisk based on /dev/$v_device  [$v_minor $v_major]"
done

# Adjustments by DBAtman (RHEL/OEL7):

#!/bin/ksh
for i in `/usr/sbin/oracleasm listdisks`
do
v_asmdisk=`/usr/sbin/oracleasm querydisk -d $i | awk '{print $2}'`
v_minor=`/usr/sbin/oracleasm querydisk -d $i | awk -F[ '{print $2}'| awk -F] '{print $1}' | awk -F, '{print $1}'`
v_major=`/usr/sbin/oracleasm querydisk -d $i | awk -F[ '{print $2}'| awk -F] '{print $1}' | awk -F, '{print $2}'`
v_device=`ls -la /dev | grep $v_minor | grep $v_major | awk '{print $10}'`
echo "ASM disk $v_asmdisk based on /dev/$v_device [$v_minor $v_major]"
done

# Example of execution:

[root@greporasrv]$ for i in `/usr/sbin/oracleasm listdisks`
> do
> v_asmdisk=`/usr/sbin/oracleasm querydisk -d $i | awk '{print $2}'`
> v_minor=`/usr/sbin/oracleasm querydisk -d $i | awk -F[ '{print $2}'| awk -F] '{print $1}' | awk -F, '{print $1}'`
> v_major=`/usr/sbin/oracleasm querydisk -d $i | awk -F[ '{print $2}'| awk -F] '{print $1}' | awk -F, '{print $2}'`
> v_device=`ls -la /dev | grep $v_minor | grep $v_major | awk '{print $10}'`
> echo "ASM disk $v_asmdisk based on /dev/$v_device [$v_minor $v_major]"
> done
ASM disk "DATA01" based on /dev/sdg1 [8 97]
ASM disk "DATA02" based on /dev/sdh1 [8 113]
ASM disk "DATA03" based on /dev/sdi1 [8 129]
ASM disk "DATA04" based on /dev/sdj1 [8 145]
ASM disk "FRA01" based on /dev/sdk1 [8 161]

Na, na, na, na, na, na, na, na, na, na, na, na… DBAtman!

Kill, RAC

Procedure to Kill All Sessions from User/Module/… over a RAC and Scheduler Job to do so…

“Holy slaughter, DBAtman!!”

Please write me a procedure to kill all sessions from a specific Module!
Really? Ok doke!

First, let's create a control table, so we can see what is being killed and any errors:

create table KILLED_SESSIONS(dt_kill date, nodesess number, 
nodejob number, username_k varchar(50), servername varchar(200), 
sid_nb number, serial_nb number,cmd varchar(300));

And let’s go to the fun part:

create or replace procedure KILL_PRC (p_module varchar2) as
  sqltext varchar2(32000) := null;
BEGIN
-- Killing Local
   FOR REC IN (select 'alter system kill session '''||sid||','||serial#||''' immediate' cmd, username, machine, sid, serial#
                 from v$session where module = p_module) LOOP
    sqltext := (rec.cmd);
    insert into SYS.KILLED_SESSIONS values (sysdate,SYS_CONTEXT ('USERENV', 'INSTANCE'),SYS_CONTEXT ('USERENV', 'INSTANCE'),rec.username,rec.machine,rec.sid,rec.serial#,rec.cmd);
    commit;
    begin
     execute immediate sqltext;
    Exception
      When Others Then
         insert into SYS.KILLED_SESSIONS(cmd) values ('ERROR_L: '||rec.cmd);
         commit;
    end;
    END LOOP;
-- Remote
   FOR REC IN (select 'alter system kill session '''||sid||','||serial#||',@'||inst_id||''' ' cmd, username, machine, sid, serial#, inst_id
                 from gv$session where module = p_module and inst_id <> SYS_CONTEXT ('USERENV', 'INSTANCE')) LOOP
    sqltext := (rec.cmd);
    insert into SYS.KILLED_SESSIONS values (sysdate,SYS_CONTEXT ('USERENV', 'INSTANCE'),rec.inst_id,rec.username,rec.machine,rec.sid,rec.serial#,rec.cmd);
    commit;
    begin
      execute immediate sqltext;
    Exception
      When Others Then
         insert into SYS.KILLED_SESSIONS(cmd) values ('ERROR_R: '||rec.cmd);
         commit;
    end;
    END LOOP;
Exception
    When Others Then
    insert into SYS.KILLED_SESSIONS(cmd) values ('ERROR_G');
    commit;
END;
/

Nice, right?
Of course, you can adapt this filter by user, by machine (connected host), application, service name, or whatever you want. You can even combine criteria. Just make sure you are not killing required SYS sessions (see the sketch below)…
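
For instance, here is a hedged sketch of the same kill-command query adapted to combine criteria (the machine value is hypothetical; the columns are standard gv$session columns):

select 'alter system kill session '''||sid||','||serial#||',@'||inst_id||''' immediate' cmd,
       username, machine, sid, serial#, inst_id
  from gv$session
 where module   = 'MYAPP'                 -- or filter by service_name, program, etc.
   and machine  like 'appserver%'         -- hypothetical host filter
   and username not in ('SYS','SYSTEM')   -- keep required system sessions safe
   and type     = 'USER';                 -- skip background processes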

With this in place, you can simply grant execute privilege on the procedure to other users.
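
For example (APP_ADMIN is a hypothetical grantee):

GRANT EXECUTE ON SYS.KILL_PRC TO APP_ADMIN;
-- The grantee can then run, for example:
-- EXEC SYS.KILL_PRC('MYAPP');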

Oh, by the way, this can even be scheduled as a Scheduler job, for periodic kills, if you want:

BEGIN
 DBMS_SCHEDULER.CREATE_JOB (
 job_name => 'KILL_PRC_ADHOC',
 job_type => 'STORED_PROCEDURE',
 job_action => 'SYS.KILL_PRC',
 number_of_arguments => 1,
 start_date => '01-JAN-19 12.00.00 AM EST5EDT',
 repeat_interval => 'FREQ=DAILY',
 end_date => null,
 auto_drop => FALSE,
 comments => 'Running Daily at midnight EST');
END;
/

exec dbms_scheduler.set_job_argument_value(job_name => 'KILL_PRC_ADHOC', argument_position => 1, argument_value => 'MYAPP');

exec dbms_scheduler.enable('SYS.KILL_PRC_ADHOC');

Also, this needs to be created by a user with the required privileges to do so.

Taking a quote from my friend Spiderman:
“With Great Power Comes Great Responsibility” (Ben, Uncle).

Na, na, na, na, na, na, na, na, na, na, na, na… DBAtman!

Jobs

Disable/Enable Maintenance Jobs

“Holy Maintenance Jobs, DBAtman!!”

First, some considerations: we could simply disable the maintenance windows themselves, since the jobs are tied to the windows (and the Resource Manager plans usually are as well); a sketch of that alternative is at the end of this section. Keep that in mind. However, to answer the question directly:

# To disable/enable all maintenance jobs for all windows:

EXEC DBMS_AUTO_TASK_ADMIN.disable;
EXEC DBMS_AUTO_TASK_ADMIN.enable;

# And to disable/enable specific maintenance jobs for all windows:

exec DBMS_AUTO_TASK_ADMIN.DISABLE('AUTO OPTIMIZER STATS COLLECTION',NULL, NULL);
exec DBMS_AUTO_TASK_ADMIN.DISABLE('AUTO SPACE ADVISOR',NULL, NULL);
exec DBMS_AUTO_TASK_ADMIN.DISABLE('SQL TUNING ADVISOR', NULL, NULL);

exec DBMS_AUTO_TASK_ADMIN.ENABLE('AUTO OPTIMIZER STATS COLLECTION',NULL, NULL);
exec DBMS_AUTO_TASK_ADMIN.ENABLE('AUTO SPACE ADVISOR',NULL, NULL);
exec DBMS_AUTO_TASK_ADMIN.ENABLE('SQL TUNING ADVISOR', NULL, NULL);
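
To check the current status of these clients, a quick query on the DBA_AUTOTASK_CLIENT view:

SELECT client_name, status
  FROM dba_autotask_client
 ORDER BY client_name;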

More information and details can be found here: https://docs.oracle.com/cd/E11882_01/server.112/e25494/tasks.htm#ADMIN11836
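
And if you prefer the window-based alternative mentioned at the beginning of this section, here is a sketch; the window names below are the default 11g/12c maintenance windows, so adjust them to your environment:

# To disable/enable a specific maintenance window:

EXEC DBMS_SCHEDULER.DISABLE('SYS.MONDAY_WINDOW');
EXEC DBMS_SCHEDULER.ENABLE('SYS.MONDAY_WINDOW');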

Na, na, na, na, na, na, na, na, na, na, na, na… DBAtman!

Shellscripts, SQLPlus

Run Script for a List of Databases

“Holy big list of databases, DBAtman!”
How do you quickly run the same script against a list of databases?

In my case I have the same user/password in all databases, but if you don't, you can make an internal loop with a similar awk command to retrieve the user and password for each database from a file (see the variant after the script below).

The list:

$: cat /tmp/dbs.cfg
db01
db02
db03
testdb

The script:

for DBSID in ${*:-$(awk -F: '!/^#/ {print $1}' /tmp/dbs.cfg)}
do
    print "
        connect user/password@${DBSID}
        @script_to_run.sql
        exit " |
    sqlplus /nolog >> /tmp/output_test.log
done
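
And the variant mentioned above, as a sketch assuming /tmp/dbs.cfg holds one entry per line in the form db01:user01:pass01:

for LINE in $(grep -v '^#' /tmp/dbs.cfg)
do
    DBSID=$(echo "$LINE" | awk -F: '{print $1}')   # database alias
    DBUSER=$(echo "$LINE" | awk -F: '{print $2}')  # user for that database
    DBPASS=$(echo "$LINE" | awk -F: '{print $3}')  # password for that database
    print "
        connect ${DBUSER}/${DBPASS}@${DBSID}
        @script_to_run.sql
        exit " |
    sqlplus /nolog >> /tmp/output_test.log
done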

Na, na, na, na, na, na, na, na, na, na, na, na… DBAtman!

Database, Objects

Decoding ROWID

“Holy location address, DBAtman!”

Recently I needed to decode a ROWID so I could find some information about it, and I found this bit of code that I thought useful:

SET SERVEROUTPUT ON
DECLARE
v_rid VARCHAR2(20) ;
v_type NUMBER;
v_obj NUMBER;
v_rfno NUMBER;
v_bno NUMBER;
v_rno NUMBER;
BEGIN
v_rid := 'AAAAASAABAAAADxAAb';
dbms_output.put_line('Row_ID = "'||v_rid||'"');
dbms_rowid.rowid_info(CHARTOROWID(v_rid), v_type, v_obj, v_rfno, v_bno, v_rno);
IF v_type = 0 THEN
dbms_output.put_line('RowID Type -> Restricted');
ELSE dbms_output.put_line('RowID Type -> Extended');
END IF;
dbms_output.put_line('Object ID = "'||v_obj||'"');
dbms_output.put_line('Relative File Number = "'||v_rfno||'"');
dbms_output.put_line('Block Number = "'||v_bno||'"');
dbms_output.put_line('Row Number = "'||v_rno||'"');
END;
/

Note that I have hard-coded the rowid but it is relatively easy to either edit this or indeed to incorporate this into a procedure.

Here’s the sample output

Row_ID = "AAAAASAABAAAADxAAb"
RowID Type -> Extended
Object ID = "18"
Relative File Number = "1"
Block Number = "241"
Row Number = "27"

Na, na, na, na, na, na, na, na, na, na, na, na… DBAtman!

Indexes

Rebuild all indexes of a Partitioned Table

“Holy rebuilding work, DBAtman!”
Cool, we have that scripted in the Utility Belt!

If you frequently need to rebuild all indexes of a partitioned table (local and global indexes), here is a quick script that makes the task a little easier:

begin
-- local indexes
for i in (select p.index_owner owner, p.index_name, p.partition_name
from dba_indexes i, dba_ind_partitions p
where i.owner='&OWNER'
and   i.table_name='&TABLE'
and   i.partitioned='YES'
and   i.visibility='VISIBLE' -- Rebuild only of the visible indexes, to get real effect :)
and   p.index_name=i.index_name
and   p.index_owner=i.owner
order by 1,2) loop
execute immediate 'alter index '||i.owner||'.'||i.index_name||' rebuild  partition '||i.partition_name||' online parallel 12'; -- parallel 12 solve most of the problems
execute immediate 'alter index '||i.owner||'.'||i.index_name||' parallel 1'; -- If you don't use parallel indexes in your database, or the default parallel of the index, or what you want...
end loop;
-- global indexes
for i in (select i.owner owner, i.index_name
from dba_indexes i
where i.owner='&OWNER'
and   i.table_name='&TABLE'
and   i.partitioned='NO'
and   i.visibility='VISIBLE' -- same comment
order by 1,2) loop
execute immediate 'alter index '||i.owner||'.'||i.index_name||' rebuild online parallel 12'; -- same
execute immediate 'alter index '||i.owner||'.'||i.index_name||' parallel 1'; -- same :)
end loop;
end;
/

Na, na, na, na, na, na, na, na, na, na, na, na… DBAtman!

Database, Statistics

Oracle TPS: Evaluating Transaction per Second

“Holy ‘My Boss wants Oracle TPS metrics’, DBAtman!”
Don’t worry young Robin, we can get it for you…

Sometimes this metric has a bit of a 'myth atmosphere' around it… Maybe that's because Oracle doesn't present this information very clearly, and it's not the most useful metric anyway.
But for comparison with other systems, and to compare performance/throughput across different infrastructure/database configurations, it can be useful.

It can be seen in the AWR report, in the "Report Summary" section, under "Load Profile", in the "Transactions" item:

[Image: AWR Load Profile section highlighting the Transactions metric]

But what if you want to calculate it through a SQL query?
And what if you want a history of this metric?

I found a reference for this calculation here, using v$sysstat.
It's the only reference I found, and it's in the 10g documentation… It defines this metric as:

Number of Transactions = (DeltaCommits+DeltaRollbacks)/Time

It also defines DeltaCommits and DeltaRollbacks as, respectively, the "user commits" and "user rollbacks" statistics.
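
Before going to the history, a quick point-in-time reading of the two counters the formula uses (take two samples, subtract, and divide by the elapsed seconds to get the TPS):

SELECT name, value
  FROM v$sysstat
 WHERE name IN ('user commits', 'user rollbacks');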

Here is a possible SQL query to do that against the AWR history:

WITH hist_snaps AS (
  SELECT instance_number,
         snap_id,
         round(begin_interval_time,'MI') datetime,
         (begin_interval_time + 0
          - LAG (begin_interval_time + 0)
              OVER (PARTITION BY dbid, instance_number ORDER BY snap_id)) * 86400 diff_time
    FROM dba_hist_snapshot),
hist_stats AS (
  SELECT dbid,
         instance_number,
         snap_id,
         stat_name,
         VALUE - LAG (VALUE)
                   OVER (PARTITION BY dbid, instance_number, stat_name ORDER BY snap_id) delta_value
    FROM dba_hist_sysstat
   WHERE stat_name IN ('user commits', 'user rollbacks'))
SELECT datetime,
       ROUND (SUM (delta_value) / 3600, 2) "Transactions/s" -- 3600s assumes hourly AWR snapshots
  FROM hist_snaps sn, hist_stats st
 WHERE st.instance_number = sn.instance_number
   AND st.snap_id = sn.snap_id
   AND diff_time IS NOT NULL
 GROUP BY datetime
 ORDER BY 1 DESC;

I like to use PL/SQL Developer to look at this kind of data, since it allows us to build very good charts very quickly. I tried it on a small database here, just as an example:

[Image: chart of transactions per second over the last 7 days]

Jedi Master Jonathan Lewis wrote a good post about Transactions and this kind of AWR metric here.

Na, na, na, na, na, na, na, na, na, na, na, na… DBAtman!

Database Link, Table

Copy Large Table Through DBLink

“Holy nothing works here, DBAtman!”
Take it easy, young Robin: DBAtman always has a solution…

To understand the situation:

Task: Migrate a large 11.1.0.6 database to a 12c Multitenant database with minimum downtime.
To better use the new features, reorganize objects and compress data, I decided to migrate the data logically (not physically).
The first option was to migrate schema by schema through Data Pump over a database link. There are no LONG columns.

Problem 1: The database was veeery slow, a perfect match for Bug 7722575 - DATAPUMP VIEW KU$_NTABLE_DATA_VIEW causes poor plan / slow Expdp.
Workaround: none.
Solution: Upgrade to 11.2. (No way).
Other things: Yes, I tried changing cursor sharing, switching the estimate from blocks to statistics, and everything else documented. It didn't work.

Ok doke! Let’s use traditional exp/imp tools (with some migration area), right?
Problem 2: ORA-12899 on import, related to multibyte vs. single-byte charsets.
Solution: https://grepora.com/2015/11/20/charsets-single-byte-vs-multibyte-issue/
:)

Done? Not for all of them. For some tables, this error popped up:

EXP-00006: internal inconsistency error
EXP-00000: Export terminated unsuccessfully

And what does Oracle say? "Solution: Use Datapump!" 😛

Well, well… I realized I was on my own…
OK, so let's do a create table as select through the database link. For most tables, fine…
But, for example, one of the remaining tables has 700 million rows (350 GB of compressed, non-partitioned data).
Just remember that a DBLink excludes parallel options (it's always serial).

The solution was to MacGyver it, this way:
1) Create an aux table (on the source database):

alter session force parallel query parallel 12;
create table SCHEMA_OWNER.AUX_ROWID(ROW_ID,NUM) as select rowid, rownum from SCHEMA_OWNER.TABLE;
alter session disable parallel query;

* This table will be used to break the source table into chunks.

2) Script run_chunck.sql to run each chunk of data:

DECLARE
counter number;
CURSOR cur_data is
select row_id from (
select row_id, num from SCHEMA_OWNER.AUX_ROWID@SOURCEDB order by num)
where num >= &1
and num <=&2;
BEGIN
counter :=0;
FOR x IN cur_data LOOP
BEGIN
counter := counter +1;
insert into SCHEMA_OWNER.TABLE select * from SCHEMA_OWNER.TABLE@SOURCEDB where rowid = x.row_id;
if counter = 1000 then ---commit every 1000 rows
commit;
counter := 0;
end if;
EXCEPTION
when OTHERS then
dbms_output.put_line('Error ROW_ID: '||x.row_id||sqlerrm);
END;
END LOOP;
COMMIT;
END;
/
exit;

3) Run it via a BAT or SH script (my example was made for a BAT, with chunks of 50 million rows, and commits every 1,000 rows as defined in item 2; the last chunk's upper bound is the table's total row count):

@echo off
set /p db="Target Database.: "
set /p user="Username.......: "
set /p pass="Password..................: "
pause
START sqlplus %user%/%pass%@%db% @run_chunck.sql 1 2060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 2060054 52060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 52060054 102060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 102060054 152060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 152060054 202060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 202060054 252060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 252060054 302060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 302060054 352060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 352060054 402060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 402060054 452060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 452060054 502060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 502060054 552060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 552060054 602060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 602060054 652060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 652060054 702060053

Watching the inserts running…

targetdb>@sess
User:DBATMAN_HELPS
USERNAME EVENT SQL_ID
---------- ---------- -------------------------
DBATMAN_HELPS SQL*Net message from dblink 6qc1hsnkkfhnw
DBATMAN_HELPS SQL*Net message from dblink 6qc1hsnkkfhnw
DBATMAN_HELPS SQL*Net message from dblink 6qc1hsnkkfhnw
DBATMAN_HELPS SQL*Net message from dblink 6qc1hsnkkfhnw
DBATMAN_HELPS SQL*Net message from dblink 6qc1hsnkkfhnw
DBATMAN_HELPS SQL*Net message from dblink 6qc1hsnkkfhnw
DBATMAN_HELPS SQL*Net message from dblink 6qc1hsnkkfhnw
DBATMAN_HELPS SQL*Net message from dblink gt3mq5ct7mt6r
DBATMAN_HELPS SQL*Net message from dblink 6qc1hsnkkfhnw
DBATMAN_HELPS SQL*Net message from dblink 6qc1hsnkkfhnw
DBATMAN_HELPS SQL*Net message from dblink 6qc1hsnkkfhnw
DBATMAN_HELPS SQL*Net message from dblink 6qc1hsnkkfhnw
DBATMAN_HELPS SQL*Net message from dblink 6qc1hsnkkfhnw
DBATMAN_HELPS SQL*Net message from dblink 6qc1hsnkkfhnw
DBATMAN_HELPS SQL*Net message from dblink 6qc1hsnkkfhnw
DBATMAN_HELPS SQL*Net message from client
DBATMAN_HELPS SQL*Net message from client
DBATMAN_HELPS SQL*Net message to client c7a5tcc3a84k6

After a while (26 hours), the copy concluded successfully. :)

Na, na, na, na, na, na, na, na, na, na, na, na… DBAtman!