In the Notifications Schedule page, there is also an option to 'Suspend Notification'.
This option can be used when the schedule repeats and the suspension is a one-time activity that should not recur.
- Click on the Edit button next to Suspend Notification.
- Choose the 'Start Date' and 'End Date' of the period during which notifications should be suspended (i.e., not sent).
Ensure that both Start Date and End Date are chosen correctly before clicking the 'Ok' button; otherwise it will error out with:
Invalid dates for Suspended Notification
Both fields should either be empty or have valid dates
(Start date cannot be after the end date).
- Notifications can be suspended/resumed only at the 'Date' level, not at a particular timestamp.
- Click the 'Clear' button to remove the 'Suspend Notifications' settings before the End Date is reached.
- In the Notification Schedule page, different color coding is used to indicate "time slots with E-mail Addresses" and "Suspended Notification".
Your suggestion is not useful for my objective, or perhaps I don't understand it well.
Let me clarify my situation.
1- I have a COLD BACKUP of the OMS each day at 07:15 AM, so the OMS is down for about 30 minutes;
2- During these 30 minutes I don't want this kind of notification: "Agent is unable to communicate with the OMS." I have many agents installed and receive hundreds of emails each morning that I consider FALSE POSITIVES.
I would not like to receive this alert email from each agent during the OMS backup.
I tried putting the OMS and the EM12c server in blackout during this period, but that is not a solution for me because the emails were sent anyway.
Another option is to run a script on the OMS that calls emcli to blackout each of the agents for 30+ minutes.
I would first try a couple to ensure that a blackout stops that alert.
Another option is to remove that incident rule from EVER firing; of course, that means if the agent/host goes down, you won't get anything either.
Surprised the previous suggestion didn't work... it basically disables all notifications for a given time window. That also seems dangerous.
Any thoughts on using Data Guard to replicate the database so you don't have to bring down the OMS in the first place? What happens if there is a real alert during the 30 minutes? You won't see it unless you check the incident pages.
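The per-agent blackout script mentioned above could be sketched as below. This is only a sketch, not a verified implementation: the agent list, target type (`oracle_emd`), blackout name, and `-schedule` duration syntax are assumptions that should be checked against `emcli help create_blackout` on your OMS. It builds the commands and echoes them as a dry run so you can review before executing, and it assumes `emcli` is already set up and logged in on the OMS host.

```shell
#!/bin/sh
# Sketch: build a one-time blackout command per agent before the nightly
# cold backup. AGENTS is a placeholder list; replace with your real agent
# target names (host:port pairs as they appear in EM).
AGENTS="host1.example.com:3872 host2.example.com:3872"

for agent in $AGENTS; do
  # Assumed target type oracle_emd (the agent target type in EM12c) and
  # assumed schedule syntax "duration:HH:MM" -- verify with emcli help.
  cmd="emcli create_blackout -name=\"backup_bo_${agent%%:*}\""
  cmd="$cmd -reason=\"nightly OMS cold backup\""
  cmd="$cmd -add_targets=\"${agent}:oracle_emd\""
  cmd="$cmd -schedule=\"duration:0:35\""
  echo "$cmd"   # dry run: review the commands first, then execute them
done
```

As suggested above, try this on a couple of agents first to confirm the blackout actually suppresses the "Agent is unable to communicate with the OMS" alert before rolling it out to all agents.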
"Surprised the previous suggestion didnt work. .. "
It's probably that I can't understand how to apply this solution.
If I want to disable email alert each morning from 07:15 AM to 07:45 how can I use the suggestion of Rahul-EM ?
setup->notifications->My Notification Schedule
Edit Schedule Definition... (assuming you have the default rotating weekly schedule)
Edit Existing Schedule
Clear out the timeslot you don't want alerts for (7-8 AM, for example)
I wonder if the Suspend Notification option can be set via emcli; I don't see it in the help.
Alert notifications, including Agent Unreachable alerts, are processed by an OMS. If no OMS is running, agents cannot upload until an OMS is available to handle file uploads; however, you should not get any alerts while no OMS is running, except when you have enabled out-of-band notification to send an alert when the EM site is down.
When shutting down an OMS for maintenance, you should blackout the OMS and its related targets, or simply blackout all the targets on the OMS host. It is unnecessary to blackout all agents in this case.
If you really received an alert notification during a period with no OMS running, please provide details and timings.
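One way to blackout all targets on the OMS host, as suggested above, is an agent-side node-level blackout via `emctl` run against the agent on the OMS host; unlike an emcli blackout, it does not depend on the OMS being up, which matters during a cold backup. The sketch below only builds and echoes the commands; the agent home path, blackout name, and `-d` duration format are placeholders/assumptions to verify with `emctl start blackout` help on your agent.

```shell
#!/bin/sh
# Sketch: node-level blackout from the OMS host's agent, intended to cover
# all targets monitored by that agent. Path, name, and duration format are
# illustrative assumptions -- verify against your agent's emctl help.
AGENT_HOME=/u01/app/oracle/agent12c/agent_inst   # placeholder agent home
NAME="oms_cold_backup"

start_cmd="$AGENT_HOME/bin/emctl start blackout $NAME -nodeLevel -d 0:45"
stop_cmd="$AGENT_HOME/bin/emctl stop blackout $NAME"
echo "$start_cmd"   # dry run: review, then run on the OMS host's agent
echo "$stop_cmd"    # run after the backup if you end the blackout early
```

The blackout could be scheduled from cron shortly before 07:15 AM so it is in place before the OMS goes down.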
I received a lot of "Agent is unable to communicate with the OMS. (REASON = Agent is Unreachable (REASON : Agent .." emails
during the blackout of the OMS from 07:20 AM to 08:05 AM.
I received the emails at approximately 07:49, I think when the OMS came back up.
Before an OMS puts an agent in Unreachable status, it pings the agent and the agent's host. Assuming the agents in question were up, it seems there is either a problem with the OMS pinging process or a problem in the network path between the OMS and the agents. OMS log/trace files should give a clue, but I suggest logging an SR with Support to investigate.