I've been tasked with creating a SCOM monitor management pack (MP) that will be deployed to a number of servers, where it needs to count events in the event log and trigger an alert.
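The counting part itself is straightforward; my sampling script looks roughly like this (the log name and interval below are placeholders, not my real configuration):

```powershell
# Count events written to a given log during the last sampling interval.
# 'Application' and the 15-minute window are placeholders for the real config.
$intervalMinutes = 15
$since = (Get-Date).AddMinutes(-$intervalMinutes)

$count = (Get-WinEvent -FilterHashtable @{
        LogName   = 'Application'
        StartTime = $since
    } -ErrorAction SilentlyContinue | Measure-Object).Count

Write-Output $count
```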
Before anyone suggests that an absolute arithmetic difference is enough and offers links to such solutions, there's a "gotcha!":
Server A might get, say, 1000 events per hour, where an increase of 100+ should trigger the alert.
Server B, on the other hand, gets only 100 events per hour, and there an increase of 10+ should trigger the alert.
Therefore: you *can't* use an absolute difference; you *must* use a percentage difference.
It gets more challenging as there is another "gotcha":
What should actually trigger the alert is an event rate that falls outside two standard deviations of a moving average. The best analogy is the stock market: a stock price moves up and down daily, but also drifts steadily upward over the years due to inflation. You want a monitor that alerts you when the price goes above or below two standard deviations from the *moving average*.
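To make the test concrete, here's a sketch of the check I'm after, assuming I somehow had the history of previous counts in hand (`$history` and `$current` below are placeholder values):

```powershell
# Placeholder data: the last N per-interval counts (oldest first) and this
# interval's count. In reality these would come from the sampling script.
$history = @(950, 1010, 980, 1020, 990, 1005)
$current = 1200

# Moving average over the window.
$mean = ($history | Measure-Object -Average).Average

# Population standard deviation, computed by hand since Windows
# PowerShell 5.1's Measure-Object has no -StandardDeviation switch.
$sumSq = 0.0
foreach ($x in $history) { $sumSq += [math]::Pow($x - $mean, 2) }
$stdDev = [math]::Sqrt($sumSq / $history.Count)

# Alert when the current count is more than two sigma from the moving average.
if ([math]::Abs($current - $mean) -gt 2 * $stdDev) {
    Write-Output 'Unhealthy'   # here I'd return the monitor's bad-state property bag
} else {
    Write-Output 'Healthy'
}
```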
I know how to call a PowerShell script from a SCOM monitor, but I don't know how to get SCOM and PowerShell to "remember" previous samplings' counts, because monitors are state-less (i.e. "memoryless").
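The only workaround I can think of is persisting the sample history myself in a file on the agent between runs, something like the sketch below (the path and window size are assumptions), but that feels like reinventing something SCOM should already do:

```powershell
# Keep a rolling window of per-interval counts across stateless monitor runs
# by round-tripping them through a file. Path and window size are assumptions;
# the path would need to be writable by the agent's action account.
$stateFile    = 'C:\ProgramData\MyMP\eventcounts.clixml'
$maxSamples   = 96     # e.g. 24 hours of 15-minute samples
$currentCount = 1200   # placeholder: this interval's sampled count

$stateDir = Split-Path $stateFile
if (-not (Test-Path $stateDir)) {
    New-Item -ItemType Directory -Path $stateDir | Out-Null
}

# Load the previous history, if any, append this sample, and trim the window.
$history = @()
if (Test-Path $stateFile) {
    $history = @(Import-Clixml $stateFile)
}
$history += $currentCount
$history = @($history | Select-Object -Last $maxSamples)

# Save for the next run.
$history | Export-Clixml -Path $stateFile
```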
Since SCOM, I'm told, can monitor any process's state over time, I'm surprised that I don't see any support for statistical time-series analysis.
Any help would be appreciated.