Agent unable to identify the state of the ASMDG resource - ORA-00020: maximum number of processes (100) exceeded.

Issue :-
Agent unable to identify the state of the ASMDG resource. The asm_diskstring parameter is not set correctly in the ASM initialization parameter file.

Not able to connect to ASM 
oracle@server1$ sqlplus "/ as sysdba"
SQL*Plus: Release 11.2.0.2.0 Production on Wed Sep 6 15:38:30 2017
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
ERROR:
ORA-00020: maximum number of processes (100) exceeded
 Enter user-name:


Solution 1:
Step 1 :- If you try to connect to ASM, the error below appears:

oracle@server1$ sqlplus "/ as sysdba"
SQL*Plus: Release 11.2.0.2.0 Production on Wed Sep 6 15:38:30 2017
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
ERROR:
ORA-00020: maximum number of processes (100) exceeded
 Enter user-name:

Step 2 :- Check the existing ASM sessions:

ps -ef | grep -i LOCAL=YES | grep -i asm

oracle 27154 9375 0 Jul 18 ? 0:00 oracle+ASM (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) 
oracle 28021 9375 0 Jul 07 ? 0:00 oracle+ASM (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) 
oracle 5515 9375 0 Jul 08 ? 0:00 oracle+ASM (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) 
oracle 22722 9375 0 Jul 15 ? 0:00 oracle+ASM (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) 
oracle 927 9375 0 Sep 01 ? 0:00 oracle+ASM (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) 
oracle 7843 9375 0 Jul 19 ? 0:00 oracle+ASM (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) 
oracle 6962 9375 0 Jul 01 ? 0:00 oracle+ASM (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) 
oracle 17643 9375 0 Jul 18 ? 0:00 oracle+ASM (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) 
oracle 6163 9375 0 15:40:13 ? 0:00 oracle+ASM (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) 
oracle 27152 9375 0 Jul 18 ? 0:00 oracle+ASM (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) 
oracle 84 9375 0 Aug 13 ? 0:00 oracle+ASM (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) 

Step 3 :- You can free up some sessions by killing a few of the old local (LOCAL=YES) client processes listed above; these are bequeath client connections, so removing stale ones does not impact the ASM instance itself.
oracle@server1$ kill -9 27154 28021 5515 
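
Before killing anything, it can help to confirm how many of the 100 process slots these local connections are actually holding, and to double-check a candidate PID before terminating it. A minimal sketch (the PID below is just one of the examples from the listing above):

# count local bequeath connections into the +ASM instance
oracle@server1$ ps -ef | grep -i "LOCAL=YES" | grep -i asm | grep -v grep | wc -l

# inspect one candidate process before killing it
oracle@server1$ ps -fp 27154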

*********************************************************************************************************************
Solution 2:-
Perform the following as a planned activity with proper downtime.
Increase the maximum number of processes in the database with the SQL below:
1. Connect to the database as SYSDBA
SQL> conn / as sysdba
2. Increase the maximum number of database processes
SQL> alter system set processes=250 scope=spfile;    -- set processes according to the capacity of the server/DB
3. Restart the database for the change to take effect.
4. Retest the issue.
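
Optionally, once you can connect again, you can check how close the instance is to the limit before and after the change using v$resource_limit, which shows both the current and the high-water-mark utilization:

SQL> select resource_name, current_utilization, max_utilization, limit_value
     from v$resource_limit
     where resource_name in ('processes','sessions');

SQL> show parameter processes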



How to take an export backup, or create multiple dump files on multiple file systems

Oracle's Data Pump Export utility (expdp) is a useful part of an overall backup strategy. Consider a 300 GB database: if no single file system (FS) has 300 GB free, we cannot write the whole dump to one FS. No worries, Oracle is smart enough: we can create multiple dump files in different locations, each capped at whatever file size is convenient.
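
If you are not sure how large the dump will actually be before deciding how to split it, Data Pump can estimate the export size without writing any data. A minimal sketch (DATA_DUMP and SCOTT are the example directory and schema used below):

expdp \"/ AS SYSDBA\" schemas=SCOTT estimate_only=y estimate=blocks directory=DATA_DUMP logfile=estimate_scott.log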

First of all, you have to create multiple directory objects according to the space available on each FS.

SQL> create directory DATA_DUMP as '/data01/export';

Directory created.


SQL> create directory DATA_DUMP_1 as '/data02/export';

Directory created.


SQL> create directory DATA_DUMP_2 as '/data03/export';

Directory created.


SQL> select * from dba_directories;

OWNER      DIRECTORY_NAME       DIRECTORY_PATH
---------- -------------------- -------------------
SYS        DATA_DUMP             /data01/export
SYS        DATA_DUMP_1           /data02/export
SYS        DATA_DUMP_2           /data03/export
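
Note: if the export is run as an ordinary database user rather than "/ as sysdba", that user also needs read/write privileges on these directory objects. For example (SCOTT is the schema exported below):

SQL> grant read, write on directory DATA_DUMP to scott;
SQL> grant read, write on directory DATA_DUMP_1 to scott;
SQL> grant read, write on directory DATA_DUMP_2 to scott;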

Now run the command below.

Without parallel:

nohup expdp \"/ AS SYSDBA\" dumpfile=DATA_DUMP:expdp_DATA_DUMP_%U.dmp,DATA_DUMP_1:expdp_DATA_DUMP_1_%U.dmp,DATA_DUMP_2:expdp_DATA_DUMP_2_%U.dmp logfile=expdp_log_30Aug2017.log filesize=21474836480 schemas=SCOTT compression=all &

With parallel:

nohup expdp \"/ AS SYSDBA\" dumpfile=DATA_DUMP:expdp_DATA_DUMP_%U.dmp,DATA_DUMP_1:expdp_DATA_DUMP_1_%U.dmp,DATA_DUMP_2:expdp_DATA_DUMP_2_%U.dmp logfile=expdp_log_30Aug2017.log filesize=21474836480 schemas=SCOTT compression=all parallel=3 &


Note :- The compression=all option will not work on Oracle 10g or earlier (a 10g-compatible variant is sketched after the list below).

For Oracle 10g (10.2.0.3.0):
COMPRESSION accepts the following values.
  • metadata_only
  • none
For Oracle 11g & 12c:
COMPRESSION accepts the following values.
  • all
  • data_only
  • metadata_only
  • none
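
So on 10g the same export can be run by swapping compression=all for compression=metadata_only (or dropping the parameter altogether); everything else stays as above:

nohup expdp \"/ AS SYSDBA\" dumpfile=DATA_DUMP:expdp_DATA_DUMP_%U.dmp,DATA_DUMP_1:expdp_DATA_DUMP_1_%U.dmp,DATA_DUMP_2:expdp_DATA_DUMP_2_%U.dmp logfile=expdp_log_30Aug2017.log filesize=21474836480 schemas=SCOTT compression=metadata_only &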
Look at the export command carefully: we have not mentioned the DIRECTORY parameter.

We have provided the directory name in the DUMPFILE parameter itself, like
DATA_DUMP:expdp_DATA_DUMP_%U.dmp
DATA_DUMP_1:expdp_DATA_DUMP_1_%U.dmp
DATA_DUMP_2:expdp_DATA_DUMP_2_%U.dmp
separated by commas.

%U is used to create multiple numbered files in the same directory,
e.g.:
expdp_DATA_DUMP_01.dmp
expdp_DATA_DUMP_02.dmp
expdp_DATA_DUMP_1_01.dmp
expdp_DATA_DUMP_1_02.dmp
expdp_DATA_DUMP_2_01.dmp 
expdp_DATA_DUMP_2_02.dmp  and so on

We can also set the FILESIZE parameter depending on the space available on each file system. In this case it is set to 20 GB (20 × 1024³ = 21474836480 bytes), which means no single dump file will exceed 20 GB.
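
For reference, the matching import accepts the same DIRECTORY_NAME:file syntax in DUMPFILE, assuming the same directory objects exist on the target and point at the file systems holding the dump files (names below are the examples from this post):

nohup impdp \"/ AS SYSDBA\" dumpfile=DATA_DUMP:expdp_DATA_DUMP_%U.dmp,DATA_DUMP_1:expdp_DATA_DUMP_1_%U.dmp,DATA_DUMP_2:expdp_DATA_DUMP_2_%U.dmp logfile=impdp_log.log schemas=SCOTT &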

OUTPUT:-

Here, in my case, I have manually zipped the dump files afterwards, which is why the size shown is less than 20 GB.

That's it :)