Common Oracle Operations

1. Query the top five SQL statements by I/O (disk reads)
select sql_text,disk_reads from (
select q.SQL_TEXT,q.DISK_READS,dense_rank() over (order by disk_reads desc ) disk_read_rank from v$sql q)
where disk_read_rank <= 5 ;
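The same dense_rank pattern works for other v$sql columns; as a sketch (not part of the original note), ranking by logical I/O (buffer gets) instead of disk reads:
select sql_text, buffer_gets from (
select q.SQL_TEXT, q.BUFFER_GETS, dense_rank() over (order by buffer_gets desc) buffer_get_rank from v$sql q)
where buffer_get_rank <= 5;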
2. Query session and SQL resource usage (CPU time, PGA memory)
select s.SID, s.serial#, p.spid os_pid, s.osuser os_user, s.MACHINE machine_name, s.program program, s.username db_user, sq.CPU_TIME cpu_time, sq.SQL_TEXT sql_text,
sn.NAME stat_name, round(st.VALUE / 1024 / 1024, 2) session_pga_mb
from v$session s
left join v$sesstat st on s.SID = st.SID
left join v$statname sn on sn.STATISTIC# = st.STATISTIC#
left join v$process p on p.ADDR = s.PADDR
left join V$SQL sq on sq.ADDRESS = s.SQL_ADDRESS and sq.HASH_VALUE = s.SQL_HASH_VALUE
where sn.NAME like 'session pga memory';
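As a follow-up sketch (not part of the original note), the same v$sesstat / v$statname join can be aggregated to show the total PGA held by all sessions:
select round(sum(st.VALUE) / 1024 / 1024, 2) total_session_pga_mb
from v$sesstat st
join v$statname sn on sn.STATISTIC# = st.STATISTIC#
where sn.NAME = 'session pga memory';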
3. View blocking (deadlock) information
select s1.username || '@' || s1.machine || ' (SID=' || s1.sid || ') is blocking '
|| s2.username || '@' || s2.machine || ' (SID=' || s2.sid || ')' AS blocking_status
from v$lock l1, v$session s1, v$lock l2, v$session s2
where s1.sid = l1.sid and s2.sid = l2.sid and l1.BLOCK = 1 and l2.request > 0
and l1.id1 = l2.id1 and l1.id2 = l2.id2;
4. View SQL resource usage and wait events for sessions
select
SE.SID,
SE.SERIAL#,
P.SPID,
SE.USERNAME,
SE.STATUS,
SE.TERMINAL,
SE.PROGRAM,
SE.MODULE,
SE.SQL_ADDRESS,
ST.EVENT,
ST.P1TEXT,
ST.P1,
ST.P2,
ST.P3,
ST.STATE,
ST.SECONDS_IN_WAIT,
S.PHYSICAL_READS,
S.BLOCK_CHANGES
from v$session se, v$session_wait st, v$sess_io s,v$process p
where se.SID=st.SID
and se.SID=s.SID
and se.PADDR=p.ADDR
and se.SID>6
and st.WAIT_TIME=0
and st.EVENT not like '%SQL%';
5. Flush the data in the buffer cache
alter session set events 'immediate trace name flush_cache';
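On 10g and later the same effect can be achieved without setting an event, using the documented command:
alter system flush buffer_cache;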
6. Handling the ORA-00845 error (MEMORY_TARGET not supported on this system)
(1) Takes effect immediately:
umount /dev/shm
mount -t tmpfs shmfs -o size=15G /dev/shm
(2) Permanent change (requires a server reboot):
Edit the /etc/fstab file and add the following line:
tmpfs /dev/shm tmpfs defaults,size=15000M 0 0
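As a quick check (not in the original note), the effective size of /dev/shm can be confirmed, and tmpfs can also be resized in place without a full umount/mount:
df -h /dev/shm
mount -o remount,size=15G /dev/shm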
7. Starting and stopping the RAC database (run as the grid user)
(1) Stop a node
srvctl stop instance -d SGTMS -i SGTMS1
srvctl stop nodeapps -n tmsdata01
(2) Start a node
srvctl start instance -d SGTMS -i SGTMS1
srvctl start nodeapps -n tmsdata01
(3) Check the status of the database services
crsctl status resource -t
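The status can also be checked with srvctl itself, using the same database, instance, and node names as above:
srvctl status database -d SGTMS
srvctl status instance -d SGTMS -i SGTMS1
srvctl status nodeapps -n tmsdata01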
8. Query database locks (locked objects and the sessions holding them)
select o.object_name as locked_object,
s.sid, s.serial#, p.spid as os_pid
from v$locked_object l , dba_objects o , v$session s , v$process p
where l.object_id=o.object_id
and l.session_id=s.sid and s.paddr=p.addr;
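If a blocking session found by the queries above has to be released, the standard command is shown below; substitute the SID and SERIAL# values returned by the query (the values here are placeholders):
alter system kill session '<sid>,<serial#>';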
9. Node 1 failed to start after a reboot; the error messages are shown below, and the crsd service was not running
[grid@tmsdata01 ~]$ srvctl start instance -d SGTMS -i SGTMS1
PRCD-1027 : Failed to retrieve database SGTMS
PRCR-1070 : Failed to check if resource ora.sgtms.db is registered
Cannot communicate with crsd
Switch to the root user and, in the /opt/app/11.2.0/grid/bin/ directory, run ./crsctl status resource -t; the error output is as follows:

[root@tmsdata01 bin]# ./crsctl status resource -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.

Run ps -ef | grep d.bin:

[root@tmsdata01 bin]# ps -ef | grep d.bin
root 4103 1 0 09:09 ? 00:00:18 /opt/app/11.2.0/grid/bin/ohasd.bin reboot
grid 4688 1 0 09:10 ? 00:00:02 /opt/app/11.2.0/grid/bin/oraagent.bin
grid 4700 1 0 09:10 ? 00:00:00 /opt/app/11.2.0/grid/bin/mdnsd.bin
grid 4710 1 0 09:10 ? 00:00:02 /opt/app/11.2.0/grid/bin/gpnpd.bin
grid 4940 1 0 09:10 ? 00:00:08 /opt/app/11.2.0/grid/bin/gipcd.bin
root 5221 1 0 09:10 ? 00:00:01 /opt/app/11.2.0/grid/bin/orarootagent.bin
root 5234 1 3 09:10 ? 00:02:21 /opt/app/11.2.0/grid/bin/osysmond.bin
root 5415 1 0 09:10 ? 00:00:06 /opt/app/11.2.0/grid/bin/ologgerd -m tmsdata02 -r -d /opt/app/11.2.0/grid/crf/db/tmsdata01
root 11544 1 0 10:22 ? 00:00:00 /opt/app/11.2.0/grid/bin/cssdmonitor
root 11560 1 0 10:22 ? 00:00:00 /opt/app/11.2.0/grid/bin/cssdagent
grid 11572 1 0 10:22 ? 00:00:00 /opt/app/11.2.0/grid/bin/ocssd.bin
root 11640 11481 0 10:25 pts/4 00:00:00 grep d.bin

crsd.bin is missing from the process list.

Run ./crsctl stat res -t -init:

[root@tmsdata01 bin]# ./crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
1 ONLINE OFFLINE
ora.cluster_interconnect.haip
1 ONLINE OFFLINE
ora.crf
1 ONLINE ONLINE tmsdata01
ora.crsd
1 ONLINE OFFLINE
ora.cssd
1 ONLINE OFFLINE STARTING
ora.cssdmonitor
1 ONLINE ONLINE tmsdata01
ora.ctssd
1 ONLINE OFFLINE
ora.diskmon
1 OFFLINE OFFLINE
ora.evmd
1 ONLINE OFFLINE
ora.gipcd
1 ONLINE ONLINE tmsdata01
ora.gpnpd
1 ONLINE ONLINE tmsdata01
ora.mdnsd
1 ONLINE ONLINE tmsdata01

The state of ora.crsd is OFFLINE.

The log shows the following entries:

2014-11-29 08:58:19.032: [UiServer][4227856128] {1:55417:5488} Done for ctx=0x7fd19400b7e0
2014-11-29 08:59:22.983: [GIPCHDEM][805304064] gipchaDaemonProcessDisconnect: daemon has disconnected endp 0x7fd224009900 [0000000000000189] { gipcEndpoint : localAddr 'ipc', remoteAddr 'ipc://gipcd_tmsdata01', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 4412, flags 0xa61e, usrFlags 0x24020

}, hctx 0x32e6970 [0000000000000010] { gipchaContext : host 'tmsdata01', name 'e807-5322-d63c-fb67', luid 'd5e9c213-00000000', numNode 1, numInf 1, usrFlags 0x0, flags 0x1 }
2014-11-29 08:59:22.983: [GIPCHDEM][805304064] gipchaDaemonWork: daemon thread exiting by request hctx 0x32e6970 [0000000000000010] { gipchaContext : host 'tmsdata01', name 'e807-5322-d63c-fb67', luid 'd5e9c213-00000000', numNode 1, numInf 1, usrFlags 0x0, flags 0x9 }
2014-11-29 08:59:23.023: [GIPCHDEM][805304064] gipchaDaemonThreadEntry: daemon thread exiting state gipchaThreadStateDead (4)
2014-11-29 08:59:23.354: [GIPCHTHR][1018365696] gipchaWorkerWork: worker thread exiting by request hctx 0x32e6970 [0000000000000010] { gipchaContext : host 'tmsdata01', name 'e807-5322-d63c-fb67', luid 'd5e9c213-00000000', numNode 1, numInf 1, usrFlags 0x0, flags 0x9 }
2014-11-29 08:59:23.354: [GIPCXCPT][1009960704] gipchaInternalEndpointClose: endpoint close failed due to failure in worker thread 0x7fd23400a560 [00000000000001bb] { gipchaEndpoint : port '04b0-6be5-63ee-9d10', peer ':', srcCid 00000000-00000000, dstCid 00000000-00000000, numSend 0, maxSend 100, groupListType 1, hagroup 0x340e1d0, usrFlags 0x4000, flags 0x130 }, ret gipcretDaemonLost (34)
2014-11-29 08:59:23.356: [GIPCXCPT][767481600] gipchaInternalEndpointClose: endpoint close failed due to failure in worker thread 0x7fd20800a560 [00000000000007fe] { gipchaEndpoint : port 'cc12-0c14-186d-1013', peer 'tmsdata02:CLSFRAME_2/41b7-cea1-17f0-4131', srcCid 00000000-000007fe, dstCid 00000000-00049c5f, numSend 0, maxSend 100, groupListType 1, hagroup 0x3557ef0, usrFlags 0x4000, flags 0x2b4 }, ret gipcretDaemonLost (34)
2014-11-29 08:59:23.356: [GIPCHGEN][767481600] gipchaEndpointCloseF [gipcmodGipcDisconnect : gipcmodGipc.c : 923]: EXCEPTION[ ret gipcretDaemonLost (34) ] failed to close endpoint ctx 0x32e6970 [0000000000000010] { gipchaContext : host 'tmsdata01', name 'e807-5322-d63c-fb67', luid 'd5e9c213-00000000', numNode 1, numInf 1, usrFlags 0x0, flags 0x9 }, endp 0x7fd20800a560 [00000000000007fe] { gipchaEndpoint : port 'cc12-0c14-186d-1013', peer 'tmsdata02:CLSFRAME_2/41b7-cea1-17f0-4131', srcCid 00000000-000007fe, dstCid 00000000-00049c5f, numSend 0, maxSend 100, groupListType 1, hagroup 0x3557ef0, usrFlags 0x4000, flags 0x2b4 }, flags 0x0

After this point, no new log entries were being generated.

As root, run ./crsctl stop crs to stop the cluster services.

Reboot node 1.

As root, in the /opt/app/11.2.0/grid/bin/ directory, run ./crsctl stat res -t -init again; the output now shows:

[root@tmsdata01 bin]# ./crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
1 ONLINE ONLINE tmsdata01 Started
ora.cluster_interconnect.haip
1 ONLINE ONLINE tmsdata01
ora.crf
1 ONLINE ONLINE tmsdata01
ora.crsd
1 ONLINE ONLINE tmsdata01
ora.cssd
1 ONLINE ONLINE tmsdata01
ora.cssdmonitor
1 ONLINE ONLINE tmsdata01
ora.ctssd
1 ONLINE ONLINE tmsdata01 OBSERVER
ora.diskmon
1 OFFLINE OFFLINE
ora.evmd
1 ONLINE ONLINE tmsdata01
ora.gipcd
1 ONLINE ONLINE tmsdata01
ora.gpnpd
1 ONLINE ONLINE tmsdata01
ora.mdnsd
1 ONLINE ONLINE tmsdata01
The crsd service is now healthy.

Root cause analysis: when this node was shut down, only the srvctl stop instance -d SGTMS -i SGTMS1 command was run, and then init 0 was forced immediately; crsd was never shut down cleanly, which left the cluster services in an abnormal state after the reboot.
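To avoid this, shut the stack down in order before powering the node off (all of these commands already appear above; run the srvctl commands as grid and the rest as root):
srvctl stop instance -d SGTMS -i SGTMS1
srvctl stop nodeapps -n tmsdata01
/opt/app/11.2.0/grid/bin/crsctl stop crs
init 0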

10. Resolving ORA-04031: unable to allocate 5450432 bytes of shared memory ("shared pool","unknown object","sga heap(3,0)","kglsim per-gran bkt")
Inspecting the /opt/app/oracle/diag/rdbms/sgtms/SGTMS2/trace/alert_SGTMS2.log file shows:

ORA-04031: unable to allocate 5450432 bytes of shared memory ("shared pool","unknown object","sga heap(5,0)","kglsim per-gran bkt")
Incident details in: /opt/app/oracle/diag/rdbms/sgtms/SGTMS2/incident/incdir_201138/SGTMS2_ora_28580_i201138.trc
Sweep [inc][201138]: completed
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Sat Nov 29 14:41:40 2014
Adjusting the default value of parameter parallel_max_servers
from 960 to 785 due to the value of parameter processes (800)
Starting ORACLE instance (normal)
Specified value of sga_max_size is too small, bumping to 10775166976

The key messages are "Starting ORACLE instance (normal)" followed by "Specified value of sga_max_size is too small".

Sat Nov 29 14:34:40 2014
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Private Interface 'eth0:1' configured from GPnP for use as a private interconnect.
[name='eth0:1', type=1, ip=169.254.98.17, mac=f8-0f-41-f9-62-a4, net=169.254.0.0/16, mask=255.255.0.0, use=haip:cluster_interconnect/62]
Public Interface 'eth1' configured from GPnP for use as a public interface.
[name='eth1', type=1, ip=10.138.106.198, mac=f8-0f-41-f9-62-a5, net=10.138.106.0/24, mask=255.255.255.0, use=public/1]
Public Interface 'eth1:1' configured from GPnP for use as a public interface.
[name='eth1:1', type=1, ip=10.138.106.200, mac=f8-0f-41-f9-62-a5, net=10.138.106.0/24, mask=255.255.255.0, use=public/1]
Picked latch-free SCN scheme 3
Using LOG_ARCHIVE_DEST_1 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
Errors in file /opt/app/oracle/diag/rdbms/sgtms/SGTMS2/trace/SGTMS2_ora_28580.trc (incident=201138):
ORA-04031: unable to allocate 5450432 bytes of shared memory ("shared pool","unknown object","sga heap(5,0)","kglsim per-gran bkt")
Incident details in: /opt/app/oracle/diag/rdbms/sgtms/SGTMS2/incident/incdir_201138/SGTMS2_ora_28580_i201138.trc
Sweep [inc][201138]: completed
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Sat Nov 29 14:41:40 2014
Adjusting the default value of parameter parallel_max_servers
from 960 to 785 due to the value of parameter processes (800)
Starting ORACLE instance (normal)
Specified value of sga_max_size is too small, bumping to 10775166976
Sat Nov 29 14:42:07 2014
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Private Interface 'eth0:1' configured from GPnP for use as a private interconnect.
[name='eth0:1', type=1, ip=169.254.98.17, mac=f8-0f-41-f9-62-a4, net=169.254.0.0/16, mask=255.255.0.0, use=haip:cluster_interconnect/62]
Public Interface 'eth1' configured from GPnP for use as a public interface.
[name='eth1', type=1, ip=10.138.106.198, mac=f8-0f-41-f9-62-a5, net=10.138.106.0/24, mask=255.255.255.0, use=public/1]
Public Interface 'eth1:1' configured from GPnP for use as a public interface.
[name='eth1:1', type=1, ip=10.138.106.200, mac=f8-0f-41-f9-62-a5, net=10.138.106.0/24, mask=255.255.255.0, use=public/1]
Picked latch-free SCN scheme 3
Using LOG_ARCHIVE_DEST_1 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
Errors in file /opt/app/oracle/diag/rdbms/sgtms/SGTMS2/trace/SGTMS2_ora_31146.trc (incident=201139):
ORA-04031: unable to allocate 5450432 bytes of shared memory ("shared pool","unknown object","sga heap(4,0)","kglsim per-gran bkt")
Incident details in: /opt/app/oracle/diag/rdbms/sgtms/SGTMS2/incident/incdir_201139/SGTMS2_ora_31146_i201139.trc
Sweep [inc][201139]: completed
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Sat Nov 29 14:43:15 2014
Adjusting the default value of parameter parallel_max_servers
from 960 to 785 due to the value of parameter processes (800)
Starting ORACLE instance (normal)
Specified value of sga_ma

Solution:

(1) On node 1, run: create pfile='/opt/app/oracle/db/dbs/123.ora' from spfile='+DATA/sgtms/spfilesgtms.ora';
(2) Edit the memory-related parameters in /opt/app/oracle/db/dbs/123.ora so that they are consistent with node 1.
(3) On node 2, run:
sqlplus / as sysdba
startup pfile='/opt/app/oracle/db/dbs/123.ora';
create spfile='+DATA/sgtms/spfilesgtms.ora' from pfile='/opt/app/oracle/db/dbs/123.ora';
(4) Shut down the database on node 2: shutdown immediate;
(5) Start the database again: startup;
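After the restart, the memory settings that actually took effect can be verified from SQL*Plus (a simple check, not part of the original steps):
show parameter sga_max_size
show parameter sga_target
select name, round(bytes / 1024 / 1024, 2) mb from v$sgainfo;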

11. Clean up (rotate) the listener log
cd /opt/app/grid/diag/tnslsnr/tmsdata01/listener/trace/
lsnrctl set log_status off
mv listener.log listener.log.20141202
lsnrctl set log_status on
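To confirm that logging was switched back on and that the listener is writing to a fresh file (assuming the default log location used in the cd command above), check:
lsnrctl show log_status
ls -lh /opt/app/grid/diag/tnslsnr/tmsdata01/listener/trace/listener.log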