
SCOM: Monitor the monitor part 1: PowerShell

Recently, at a community event, an engineer asked me why SCOM didn't notify him when SCOM itself was down.

My first response was very similar to the response of my favorite captain below:

[Image: my favorite captain's reaction]

But this actually got me thinking, because the engineer made a good point: for full monitoring coverage you should have another mechanism in place that monitors the monitoring system itself. Most companies still have a legacy monitoring system in place that could be leveraged to monitor the SCOM servers, but let's face it: keeping another monitoring system alive just to watch the SCOM servers only adds complexity to your environment for a small benefit.

That’s why I started building a small independent check with PowerShell. In part 1 of this series I’ll go over how to monitor whether your management servers are still up and running.

To do this we need a watcher node which is able to ping the management servers. This watcher node can be any machine capable of running PowerShell; it does not need the OperationsManager PowerShell module, which keeps the check completely independent from SCOM.

Process used

The graph below shows the process used:

[Diagram: process used to monitor the management servers]

In my environment I have 2 management servers, which are reachable from the watcher node. The first step is to dynamically determine how many management servers are in the environment. To do this, the input file is generated by PowerShell on a management server and updated once a day. This is an automated process because, let's face it: if we have to remember to change the infile.txt whenever we add or delete a management server, we will forget.

This file will be available on the watcher node, so the ping commands can run even when the management servers are down.
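The infile itself is nothing more than one management server name per line, for example (hypothetical names):

[xml]
SCOMMS1
SCOMMS2
[/xml]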

Configuration on the Management server

(this is action 1 in the graph above)

To generate the infile containing all the management servers currently in our environment, we need to execute the following PowerShell script on a management server:

[xml]
#=====================================================================================================
# AUTHOR:    Dieter Wijckmans
# DATE:        03/12/2014
# Name:        Readms.PS1
# Version:    1.0
# COMMENT:    This script will read out all the management servers in a management group and save them
#           into a txt file which is used to ping the servers from an external watcher node.
#           This script is scheduled on a management server via a scheduled task.
#           Make sure to fill in your destination (which is your watcher node) in the $destination variable.
#
# Usage:    readms.PS1
# Example:
#=====================================================================================================
$destination = "fill in the destination on the watchernode here"
$ms = Get-SCOMManagementServer

# Write the display name of every management server to the infile on the watcher node
$ms | ForEach-Object { $_.DisplayName } | Out-File $destination

[/xml]
Schedule this script on the management server via scheduled tasks and run it once a day.

The program to run is: powershell.exe c:\scripts\readms.ps1

This will generate the infile for the ping command to check the management servers and will place it on the watcher node.
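If you prefer creating the task from the command line, a one-liner like this does the trick (task name, script path and time are just examples):

[xml]
schtasks /create /tn "SCOM - Export management server list" /sc daily /st 06:00 /ru SYSTEM /tr "powershell.exe -File c:\scripts\readms.ps1"
[/xml]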

Configuration on the Watcher node

(this is action 2 in the graph above)

Next up is to configure the watcher node to monitor our management servers and alert when they are unreachable. This is done by executing the following PowerShell script on a regular basis through Scheduled Tasks. I schedule this task every 5 minutes, which means you get a mail every 5 minutes until the issue is resolved. Better to annoy a little too much than to send a single mail that just drowns in the mail volume.

[xml]
#=====================================================================================================
# AUTHOR:    Dieter Wijckmans
# DATE:        03/12/2014
# Name:        Pingtest.PS1
# Version:    1.0
# COMMENT:    This script will ping all the management servers in a management group according to the
#           input file and escalate when a server is not reachable.
#           Make sure to fill in all the parameters in the parameter section.
#           This script is scheduled on the watcher node via a scheduled task.
#
# Usage:    pingtest.PS1
# Example:
#=====================================================================================================

#parameter section: Fill in all the parameters below
$infile = "Location of file with management servers listed"
$outfile = "Location of file which will keep historical data on the pings"
$smtp = "fill in your smtp config to send mail"
$to = "The destination email address"
$from = "The from email address"

#reading the date when the test is executed for logging in the historical file
$testexecuted = Get-Date
#reading in all the objects listed in the infile
$objects = get-content $infile

#running through all the objects and taking action accordingly
foreach ($object in $objects)
{
    $pingresult = Test-Connection $object -Quiet
    if ($pingresult -eq $True)
    {
        $pingresult = "Online"
    }
    else
    {
        $pingresult = "Offline"
        $subject = "SCOM: Management Server " + $object + " is down!"
        $body = "<b><font color=red>ATTENTION SCOM support staff:</b></font> <br>"
        $body += "Management Server: " + $object + " is down! Please check the server!"
        Send-MailMessage -SmtpServer $smtp -To $to -From $from -Subject $subject -Body $body -BodyAsHtml -Priority high
    }
    #log the result together with a timestamp to the historical outfile
    $object + " :ping result: " + $pingresult + " :" + $testexecuted | Out-File $outfile -Append
}

#read the number of lines in the inputfile and evaluate the same number of lines at the end
#of the outfile to check whether ALL management servers are down.
$filelength = Get-Content $infile | Measure-Object -Line
$numberoflines = $filelength.Lines
$file = Get-Content $outfile -Tail $numberoflines
$wordToFind = "Online"
$containsWord = $file | ForEach-Object { $_ -match $wordToFind }
If ($containsWord -notcontains $True)
{
    $subject = "SCOM: ALL Management Servers are down!"
    $body = "<b><font color=red>ATTENTION SCOM support staff:</b></font> <br>"
    $body += "All Management servers are down. Please take immediate action."
    Send-MailMessage -SmtpServer $smtp -To $to -From $from -Subject $subject -Body $body -BodyAsHtml -Priority high
}
[/xml]
Note: Make sure that you change all the parameters in the parameter section.
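The watcher node task can be created the same way, this time on a 5-minute schedule (again, task name and path are just examples):

[xml]
schtasks /create /tn "SCOM - Ping management servers" /sc minute /mo 5 /ru SYSTEM /tr "powershell.exe -File c:\scripts\pingtest.ps1"
[/xml]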

This script will ping all the machines listed in the infile we created earlier and write the results to the outfile. The outfile is then evaluated, and a mail is automatically sent when a management server is down. If ALL management servers are down, a separate mail is sent to notify you that SCOM is completely down.

You can change the mail appearance in the $body fields in the PowerShell script.

The outfile will have the following entries:

[Screenshot: outfile entries]
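Based on the format string in the script, the entries look like this (illustrative server names and timestamps):

[xml]
SCOMMS1 :ping result: Online :12/03/2014 21:13:38
SCOMMS2 :ping result: Offline :12/03/2014 21:13:38
[/xml]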

My servers were offline last night at 21:13:38, so the mailing was triggered. The mail will look like the example below when SCOMMS2 is down:

[Screenshot: alert mail when SCOMMS2 is down]

When all management servers are down it will look like this:

[Screenshot: alert mail when all management servers are down]

So now we get mails that are completely independent from SCOM, telling us there's an issue with the SCOM management servers.

  • So what if our watcher node is down? Well, I've installed a SCOM agent on this machine with a special subscription to notify me when it's down.
  • So what if our management servers are down AND my watcher node is down? Well, then you probably have a far greater problem and your phone will already be red hot by now…

You can find the PowerShell scripts and the files on the TechNet Gallery:

[Download link on the TechNet Gallery]

In Part 2 I'll go over how to monitor the SQL connections of the management servers.

Microsoft System Center Advisor Limited Preview is live!

There are days when products become hot on the spot. It's all about cloud lately, and sometimes it's amazing how fast things are evolving for us IT pros.

One of these cool products, which leverages the possibilities of the cloud, its virtually endless storage space and its computing power, is System Center Advisor.

[Screenshot: System Center Advisor]

When System Center Advisor first emerged, it was a small service in the cloud: you had to separately set up a small proxy agent and configure it before any useful data was sent to the cloud, and you had to set up or designate a server as a gateway to send that data to the online service. The data was only updated once per day and was only available through a web console. It was a nice product, but it was way ahead of its time. One of its problems was that not a lot of people understood the need for Advisor, as it was branded as just another piece of System Center software…

The potential of the product was already there but it had to be easier to use…

Since SCOM 2012 SP1, Advisor has had a revamp and is fully integrated in the SCOM console. It received more rules and better performance, and people started embracing the fact that they gained access to Microsoft's vast database of best practices to automatically evaluate their systems. No need for those complex MBSA scans (ouch, remember those…).

More and more people started using the service, but for a lot of customers I visited, System Center Advisor was still not that well known; it was rather a big unknown. As soon as I explained the possibilities, they started using and appreciating the service and installed it in their environment.

[Video: System Center Advisor Limited Preview]

source: https://channel9.msdn.com/Blogs/C9Team/System-Center-Advisor-Limited-Preview

Now, with the new Limited Preview, Microsoft is showing the future of this cool product. All the different and familiar functions are still there, but there's more…

Intelligence Packs

If you are familiar with SCOM you'll definitely know management packs, but Intelligence Packs? Intelligence Packs are the new way of adding functionality to your Advisor environment, tailored for your business. They are the key to customizing Advisor to show the data you specifically want to see.

These Intelligence Packs are stored in the Advisor Gallery and are installed online. At a later stage it will be possible to configure and create your own Intelligence Packs to gather data specific to your environment, similar to what you do today with management packs in your SCOM environment.

Currently there are Intelligence Packs for:

[Screenshot: list of available Intelligence Packs]

All are available from the Intelligence Pack Gallery and install with just a couple of clicks. Not much configuration is needed afterwards.

The store can be reached through the Intelligence Pack button on the portal:

[Screenshot: Intelligence Pack button on the portal]

Let's take the Log Management Intelligence Pack as an example (this will take some time to get used to). It enables a cool new feature: gathering the event logs of your servers in one central place, where you can search and query them to gain insight into your environment.

After we have installed the Intelligence Pack through the console it will appear in our main portal view:

[Screenshot: main portal view with installed Intelligence Packs]

(notice that I already played with the other intelligence packs as well)

So if we click on the "Log Management" tile, we jump to the configuration and tell Advisor which logs we would like to gather. Again, this is a great way of collecting all your data in one place, and once it is there you can use it to gain insight into your environment because, let's face it: it's you who knows your environment best.

[Screenshot: Log Management configuration]

After we have told Advisor to gather the System log (both Errors and Warnings) on all the machines connected to Advisor, the Intelligence Pack will kick in and gather the info for the first time to give you a view of the collected data.

Search Data Explorer

Now that we have data in Advisor, we would love to find things out on our own, for example the root cause of why systems are running slow. Searching is done with the Search Data Explorer; open it on the right to access the search tool:

[Screenshot: Search Data Explorer button]

This will open the Search where you can start your journey through the gathered data:

[Screenshot: Search Data Explorer]

On the right you'll have common search queries to get you started. Expect more and more search queries to come online, but if you really need to create your own, you can always check out the syntax reference link under Documentation to get you going.

In fact there are 3 easy steps to get your data:

1. Enter the search term:

In this example I'm using * to get all my data: Advisor hasn't been running that long, so it doesn't have much data yet, and I would like to see what's already in there:

[Screenshot: search results for *]

The next step is to filter the results with the tools in the right column to really only get the data we are after:

The facets break the gathered data down by type, with sub-facets per type. In addition, it's also possible to scope the time frame for the gathered events. This can come in handy when you want to troubleshoot a problem in your environment, for example:

[Screenshot: facets and time-frame filters]

For now this data is not exportable through PowerShell and is only available online. Further down the road in the development of Advisor, it will be possible to query this data through PowerShell and use it in your own applications.

Feedback

Another feature that has been introduced in the console is the feedback option.

The button is located on the bottom right and will open the feedback page in a separate window:

[Screenshot: feedback button]

This will take you straight to the feedback window.

[Screenshot: feedback page]

People who have already worked with Microsoft Connect and the forums will find that it's a mix of the two. Here you can post tips or requests to further enhance the product with new possibilities, but also file bugs you've come across. Members of the community can answer questions to get you going, or vote for another request.

This gives you a nice one-stop place to get up to speed fast with the product, but most importantly it gives you the opportunity to provide feedback first hand. This list is used by the product team to prioritize new enhancements.

Conclusion

This Limited Preview of the next generation of Advisor gives you the possibility to gather even more data about your environment and use that data to gain further insight. Because the system is built on Intelligence Packs, it's very easy to tailor the console to your needs. Add the performance of cloud storage and computing to the game, and we have a powerful additional tool to gather and analyse data.

Will this completely replace all other monitoring needs? Not yet… Will it be a great enhancement to the tools we already have in place? Certainly!

This tool is free of charge during the preview period. So for now the only thing that is stopping you from using this tool is… yourself.

Keep an eye on the blog, as I'll dig deeper into the different Intelligence Packs when data comes in.

Flukso Energy Meter Monitoring Pack: Part 4: Seeing it all in action

This blog post is part of a series; check out the other posts in this series.

So after all this hard work to get the data into my MySQL dbase and into SCOM, what can I actually do with it?

This is the second part of a far greater monitoring project I'm building to basically monitor my house: I already control the temperature and heating in my house using the Nest Thermostat monitoring pack, AND now I can check my power consumption and basically control my electricity bill.

I’ve created the views in the flukso monitoring view for electricity:

[Screenshot: Flukso electricity views in SCOM]

There is not much to do with this view as such; it simply gives me a good reading. The real value is in what we can do with the data once it is in SCOM: we can use it to generate alerts when sudden peaks occur.

A cool one I have set up is the peak right around supper. We have an electrical furnace, so when someone starts cooking at around 18h (6 PM) I now get an alert, because the total power consumption is above 4000 watts at that time…

So I know now perfectly well when I need to rush home to get in on time for dinner…

Now that I have this data in, I can move forward and build a cool demo to show the added value of having it.

This is the second piece of the puzzle of monitoring my house. In fact, this process can also be used with a solar power installation to see the generated energy on the graph.

Soon I will be adding water readings to the graph as well, and I have a few other things I would like to add to the management pack to be able to patrol my house, but more on that later.

Flukso Energy meter monitoring pack: Part 3: Get data into SCOM

This blog post is part of a series; check out the other posts in this series.

So we have successfully set up the connection between the Flukso and our MySQL dbase, basically following the same route as for the Nest thermostat, and data is pouring into our own dbase on the same device.

The only thing left to do is get this data into SCOM as well. I've created a separate management pack and PowerShell script for this, to give people the ability to install them independently, but the goal is to create one big management pack in the end.

This blog post explains how to retrieve the data with PowerShell (of course) and dump it into a property bag which is readable by SCOM. This is the second phase in our schematic overview:

[Diagram: schematic overview, phase 2]

Requirements

We basically need the same requirements as for the Nest thermostat monitoring, as we are using the same route.

What do we need to retrieve the data out of the MySQL dbase?

  • A watcher node which has PowerShell v2.0 installed (can be a server or a desktop lying around somewhere)
  • A registry key to identify this watcher node. I'm using "HKLM\SOFTWARE\Flukso\Watchernode" for this (a sketch for creating it follows below this list)
  • The MySQL connector installed: http://dev.mysql.com/downloads/connector/net/ (note: in this example I'm using version 6.8.3)
  • A SCOM agent installed on the machine, to be able to discover it as a class
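A minimal sketch for creating that registry key on the watcher node (assuming the discovery only checks for the existence of the key; run it elevated):

[xml]
# Create the key the discovery looks for
New-Item -Path "HKLM:\SOFTWARE\Flukso\Watchernode" -Force | Out-Null
[/xml]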

There's no additional install required on the MySQL server, although you will need the following to connect:

  • The location (IP address or hostname) of the MySQL server
  • A user which has access to the MySQL dbase (I use root, but this is not the safest way)
  • The password of that user

I’m using this on a virtual Win2012 machine without any issues.

Retrieve the data from MySQL using a PowerShell script

This is the script I created to get the data out of MySQL.

Note that this script only retrieves one value. It's possible to retrieve multiple values all at once, but I preferred to use separate scripts to get the different parameters out of the dbase.

The script also contains some prep work for water consumption, but this is not yet fully operational, as I still need to convert the pulses to l/min; more on that later.

The dbase is filled with data every minute, so I run the PowerShell script below every 120 seconds to get data in. The data is measured in watts.

The script used:


It can be downloaded here: http://scug.be/dieter/files/2014/02/perfdatafrommysqlelectricity.rar

[xml]
Param($energysort)
[void][System.Reflection.Assembly]::LoadFrom("C:\Program Files (x86)\MySQL\MySQL Connector Net 6.8.3\Assemblies\v2.0\MySQL.Data.dll")

#Create a variable to hold the connection:

$myconnection = New-Object MySql.Data.MySqlClient.MySqlConnection

#Set the connection string:

$myconnection.ConnectionString = "database=flukso;server=<fill in ip of server>;user id=<user>;pwd=<password>"

#Call the Connection object's Open() method:

$myconnection.Open()

#uncomment this to print connection properties to the console
#echo $myconnection

$API = New-Object -ComObject "MOM.ScriptAPI"
$PropertyBag = $API.CreatePropertyBag()

#The dataset must be created before it can be used in the script:
$dataSet = New-Object System.Data.DataSet

$command = $myconnection.CreateCommand()
#$command.CommandText = "select date, time, sensor_1 from fluksodata";
$command.CommandText = "SELECT Sensor_1 FROM fluksodata ORDER BY IDTimestamp DESC LIMIT 1";
$reader = $command.ExecuteReader()
#echo $reader
#The data reader will now contain the results from the database query.

#Processing the contents of the data reader:
#The contents are processed row by row:

while ($reader.Read()) {
    #and then field by field:
    for ($i = 0; $i -lt $reader.FieldCount; $i++) {
        $value = $reader.GetValue($i).ToString()
    }
}
#echo $value (uncomment for debugging; extra output would otherwise end up in the SCOM workflow)
$myconnection.Close()

$PropertyBag.AddValue("energysort", $energysort)
$PropertyBag.AddValue("electricity", $value)

[/xml]

This script will basically do the following:

  • Prepare the environment
  • Open the connection to MySQL
  • Get the data in the data reader
  • Read out the last line because we are only interested in the most recent value
  • Fill it in the property bag

Note: I’m also using a variable $energysort to identify the flukso sensor.
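To test the script stand-alone before wiring it into the management pack, you can call it from a PowerShell prompt on the watcher node (the script name comes from the download above; the parameter value is just an example):

[xml]
.\perfdatafrommysqlelectricity.ps1 -energysort "electricity"
[/xml]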

Now, to get all the different parameters mentioned above, the only things you need to change are:

  • The name of the script itself
  • The query: $command.CommandText = "SELECT Sensor_1 FROM fluksodata ORDER BY IDTimestamp DESC LIMIT 1";
  • The property bag value: $PropertyBag.AddValue("electricity", $value)

If everything goes well you now have your data in your MySQL dbase and can retrieve it remotely via PowerShell to pass it on to SCOM.

Now that we have the PowerShell script in place, check out this blog post to create the management pack for it: http://scug.be/dieter/2014/02/19/nest-thermostat-monitoring-pack-part-3-create-the-mp/

Download the MP here: http://scug.be/dieter/files/2014/02/flukso.energymeter.rar


Flukso Energy meter monitoring pack: Part 2: Get data into MySQL

This blog post is part of a series; check out the other posts in this series.

So after we have successfully installed the device, data is flowing to the flukso website and we get a nice graph on our dashboard, available after logging into the website:

[Screenshot: Flukso dashboard graph]

Cool… So now we have a clear overview of our energy consumption. But there's basically nothing we can do with it: we can look at it and make some adjustments, but no alerts, no long-term reports, nothing…

So, as I discussed in the first post, there's an open API which makes the data available locally. This is great: no need to retrieve the data from an external website, it stays inside my own network.

The setup is very similar to my Nest Thermostat approach. Because I had that framework already in place, I planned to use it the same way and get the data into SCOM via the same process.

Again, the heart of my setup is my trusty Synology DS412+, hosting my Linux distro and my MySQL dbase instance:

[Photo: Synology DS412+]

How did I get data in?

To get the data queried out of the Flukso device, I have used the set of PHP scripts written by fellow fluksonian PeterJ (yep, that's the official name for users of a flukso): https://docs.google.com/file/d/13wB85cPx_5nykBq3ZShnClHa1rpkRE5edNEluMqxrFaCRlvJrD8Bn_6UDCs9/edit?pli=1

A high level overview of the install:

  • Connect to your Synology box with WinSCP
  • Copy the content of the files to /volume1/web/flukso (make sure to follow the exact same paths as described on the Google Drive).
  • Open settings.php and fill in the requested parameters:

[xml]

<?php
// Rename to settings.inc.php

// DB Settings
define('DB_HOST', 'localhost');
define('DB_NAME', 'flukso');
define('DB_USER', '<fill in a user with rights to create a dbase on your MySQL>');
define('DB_PASS', '<fill in the password of that user>');

// Flukso settings
define('FL_ADDRESS', 'ipaddressofflukso:8080/sensor');
define('FL_PASSWORD', ''); //for future use
define('FL_SENSOR1', 'sensor ID');
define('FL_SENSOR2', '');
define('FL_SENSOR3', '');

// Meter settings
define('START_DAY', '070000');
define('END_DAY', '230000');
?>

[/xml]

  • Note the sensor ID can be retrieved from the website in the sensor section (make sure to use the ID and not the token)

[Screenshot: sensor ID in the sensor section of the flukso website]

  • Run the install.php script by accessing your Synology via PuTTY to gain terminal access (for more explanation, check the Nest thermostat topic here)
  • At this point the dbase should be created and ready to go:

[Screenshot: the created dbase]

  • All that is left to do is create another line in the cron and restart it, to get data flowing into our dbase, ready to be extracted by SCOM.
  • The crontab which needs to be changed is located in /etc and is named crontab. The line to add is highlighted in red in the screenshot below.

Note: make sure to use TABS between the different columns, otherwise the line will be deleted on the next reboot. On a Synology box it is, anyway… I don't know about other Linux distros, but better safe than sorry, right?

[Screenshot: crontab with the added line highlighted]

  • The line that needs to be added: */1    *    *    *    *    root    /usr/bin/php /volume1/web/flukso/cronjob.php

After this install the data should normally be coming in.

I've tried to give a brief overview of how to set up the Synology to get the data from the flukso into my own MySQL dbase using a community-driven script. However, this is a System Center blog, so I'm not going to go into further detail here.

If you still have questions, check out the flukso forum, which has some really active members eager to help and to spread the word on this nifty device: https://www.flukso.net/forum

Or connect with me on twitter @dieterwijckmans so I can assist where needed.

Flukso Energy meter monitoring pack: Part 1: Intro on the device used

This blog post is part of a series; check out the other posts in this series.

This post is part of an ongoing series on how to monitor my house with SCOM and build scenarios based on the data that comes into SCOM.

More info on the blog series here: http://scug.be/dieter/2014/02/19/monitor-your-home-with-scom/

After monitoring the temperature, humidity and heating in my house, I have now turned my focus to the aspect that basically costs money: my electricity bill. To get this data in, you need an energy meter. I actually have 2 at the moment, so I can compare them and see which one is right… Boys and toys, right?

[Photo: the two energy meters]

They are both from Belgian companies but can be used on any power grid. The first device is called the Smappee.

[Photo: the Smappee]

It's a rather new device with a very spacey exterior and lighting. It connects quite easily to your environment and it measures everything beautifully. A nice touch is the shiny app for iPhone and Android, so you can get your data while on the road. The coolest thing is that the device is capable of detecting certain patterns on your internal electrical grid to identify devices in your household, so you can easily pinpoint the big power consumers. This works quite well… The downside, however, is that so far there is no way to get the data from the device to your own systems; it is not open source. Although there's no additional fee for the website and the apps, it's kind of useless if you want to get the data out and play with it…

More info on the Smappee can be found here: http://www.smappee.com/

[Photo: the Flukso]

The device just below is completely different, although it serves the same purpose: monitoring your power consumption. It is just a small box holding a custom-made device which was built from the ground up with the open source community in mind. The software is running on a Linux distro, with dd-wrt for the routing, and you have the possibility to access it via a terminal to gain root access and play with the device. The gathered data is logged to the flukso server and nicely graphed on a custom dashboard, protected by your username and password, giving you a nice overview of your consumption, even in real time. Besides electrical consumption you can also check water and gas consumption, so it is an all-round device for a little less than the Smappee. The really cool thing is that you can access the data locally by checking a box in the admin dashboard. This opens up the local API, which can be addressed with a simple cURL call.
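To give you an idea, a hedged sketch of such a call (the IP address and sensor ID are placeholders, and the query parameters are my reading of the Flukso API documentation, so verify them against the official docs):

[xml]
curl "http://192.168.1.100:8080/sensor/0123456789abcdef0123456789abcdef?version=1.0&interval=minute&unit=watt"
[/xml]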

More info on the Flukso can be found here: https://www.flukso.net/about

Installation?

The installation of both devices was straightforward. As soon as the device came online, you needed to connect it to an account on the website and that was it… The only thing left was to get the data out of the device.

To use this you need a little background in electrical work. Both websites come with a huge disclaimer: if you are not confident installing the metering device, ask a professional.

What you need to do is clamp a power metering device over the hot wire of your electrical installation behind the meter and before the first fuse in your fuse box:

[Diagram: clamp placement behind the meter, before the first fuse]

After connecting the clamp to the device, you are good to go to get things monitored. Both devices use the same tech, so if you have both, just connect both clamps to the wire. No cutting is involved.

So this was a blog about System Center, right?

True… But I'm also active in the flukso community and promised to give them feedback on how I cracked this box open to get all the data into a MySQL dbase. I've used a similar approach as in the nest thermostat series, which can be found here: http://scug.be/dieter/2014/02/19/nest-thermostat-monitoring-pack-part-i-how-did-i-get-data/

So how did I get data?

Still not much System Center content, but this is important for the people who are going to use this or try it at home. Because face it: monitoring is our profession, and if we can save some money while we are at it… Check out the other parts to find out how I got the data in.

SCOM: System Center Data Access Service stops (event 26380 , 33333)

 

When I recently reviewed a SCOM 2012 R2 environment, I came across an interesting issue I hadn't witnessed before… Time to blog the solution!

Problem

The System Center Data Access Service started successfully but stopped within the minute. After investigating, I found at least 2 events logged at the time the service crashed that could give us a clue about what was going on.

Event 26380: The System Center Data Access Service failed due to an unhandled exception… Cannot be added to the container…

[Screenshot: event 26380]

Event 33333: Data access layer rejected: An entity of type service cannot be owned by a role, a group, or by principals mapped to certificates or asymmetric keys.

[Screenshot: event 33333]

Strange… This worked the day before. What was going on?

Searching the web, I found this article by Travis Wright, who had a similar problem with SCSM (which shares the same code base, so it was a nice entry point to start my troubleshooting):

http://blogs.technet.com/b/servicemanager/archive/2011/10/04/system-center-data-access-service-start-up-failure-due-to-sql-configuration-change.aspx

By now I could pinpoint that there was an issue on the SQL side.

After heading over to the SQL admin with the article, we continued the troubleshooting together. It turned out that the issue was not exactly what Travis had experienced. The SQL admin had reviewed the SA accounts and removed the SA role from the SCOM SDK user. No problem so far… But the SDK user was not defined in SQL as a SQL user, just as a member of a group.

Solution

It turned out that the SQL user had no rights to create the broker service when executing the stored procedure [p_TypeSpaceSetupBrokerService].

Original

SET @Query = N'CREATE SERVICE [' + @ServiceName + N'] ON QUEUE [' + @QueueName + N'] ([http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification]);';

This was changed to the following statement, which authorizes [dbo], and after that the issue was resolved:

SET @Query = N'CREATE SERVICE [' + @ServiceName + N'] AUTHORIZATION [dbo] ON QUEUE [' + @QueueName + N'] ([http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification]);';

Hopefully, if you have stumbled on this page, it has saved you some extra troubleshooting…

SCU_Jedi Finalist

 

System Center Universe is back in full force on the 30th of January to bring you top notch System Center content for the 3rd year in a row. The event is held in Houston, Texas, but is spread through the entire galaxy via a high quality live stream, reaching all the System Center astronauts throughout the world.

To give someone a chance to share his System Center force, the SCU_Jedi contest was held. This epic journey to find the true SCU_Jedi is now in its final stage…

After the first stage, the SCU_Jedi council selected a top 3 from the applications to enter the final round.

I was selected with my "Combine the force of the cloud and SCOM" session. As the only non-American contestant, I'm up against 2 other great candidates. In this final round, however, it is no longer in our hands, so I call upon you:

Whoever has the most votes by 15/12 on his YouTube video posted in the SystemCenterUniverse channel wins the right to participate in person at this awesome event…

Therefore I would be very grateful if you could like my video on YouTube to get me there, one like at a time:

http://www.youtube.com/watch?v=L81cv1bbogo

Please give me the opportunity to share my knowledge by giving this session. Every like counts, so spread the word and get more SCOM content on this great event, which is growing every year!

It would be a privilege to participate…

and remember…

Keep monitoring the force!

[Image: Master Yoda]

SCOM: Batch reset monitors through PowerShell

Monitors have been a very useful addition to SCOM since SCOM 2007 came out back in the day. However, for a lot of fresh SCOM administrators, the alerts generated by monitors can sometimes create headaches.

An alert is raised when a state changes and closed when the state changes back to healthy. This is the really short version…

If you speak to advanced SCOM admins, they will all agree that managing monitor-generated alerts can be tricky from time to time if you work with operators.

If at some point they close an alert in the console which was generated by a monitor, but the condition of the monitor has not changed, the monitor will remain in an unhealthy state until a forced reset is done on the monitor itself.

We all know how many monitors are floating around in our environments, so it's just a disaster waiting to happen. Therefore it is wise to regularly reset the unhealthy monitors for your core business services, until everybody is aware that they cannot just close alerts generated by a monitor…

However, I also use this setup for another annoying thing that can have a great impact on your environment. Again, this is a scenario to rule out human error:

  • IF an alert is raised by a monitor going into an unhealthy state, a notification is successfully triggered and a ticket is created… So far so good.
  • BUT if someone closes the ticket or the alert without looking at it, the condition remains and no warning will be raised again.
  • As a lot of my customers use SCOM as a monitoring tool in the backend and monitor the tickets it generates, they will not be alerted again.

Therefore I created this small PowerShell script in combination with a bat file. It simply resets the health state of the unhealthy instances of the specific monitor you specify. The only thing left to do is create a scheduled task for the bat file and you are good to go.
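The downloadable script itself is not reproduced here, but a minimal sketch of the approach looks like this (assuming the OperationsManager module is available and the monitor display name is passed in by the bat file):

[xml]
Param($monitorDisplayName)
Import-Module OperationsManager

# Find the monitor by the display name copied from the monitor properties
$monitor = Get-SCOMMonitor -DisplayName $monitorDisplayName

# Walk all instances of the monitor's target class and reset the unhealthy ones;
# resetting the monitor automatically closes the alert it raised
$instances = Get-SCOMClass -Id $monitor.Target.Id | Get-SCOMClassInstance
foreach ($instance in $instances) {
    if ($instance.HealthState -eq "Error" -or $instance.HealthState -eq "Warning") {
        $instance.ResetMonitoringState($monitor) | Out-Null
    }
}
[/xml]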

The script can be downloaded at the Gallery together with the bat file.

[Download link on the TechNet Gallery]

Example: the fragmentation level is high and we want to be alerted every day, as long as the condition remains:

[Screenshot: fragmentation alert]

Check the monitor properties to retrieve the monitor display name:

[Screenshot: monitor properties]

In this case it is "Logical Disk Fragmentation Level". Copy-paste the name.

[Screenshot: the monitor display name]

Fill in the name in the batch file and run it.

[Screenshot: batch file with the monitor name filled in]

The unhealthy monitors will be reset and their alerts are automatically closed in the console.

[Screenshot: monitors reset and alerts closed]

If we check the monitor again, we see it has been forced to reset its state; it will fire again the next time it detects that the unhealthy condition is still true.

[Screenshot: monitor state after the reset]

This way you will receive a new alert every time the script runs. You could also schedule it during the shift change of the helpdesk, so they start with a clean sheet and a clear view of the current situation in your environment.

LiveMeeting 22/11/2012: System Center Products better together…

 

On the 22nd of November I'm hosting a LiveMeeting on how to integrate the different System Center products.


We'll go over the different steps to integrate the System Center products, getting past the standard "just monitor it" scenario with SCOM and truly integrating the products with each other.

All the products will be positioned within the System Center stack and integrations will be showcased.

If you are looking for a session to convince your boss to install more System Center products, or just want to convince yourself of the force of the System Center products brought together…

Look no further: this is your session.

Register here: https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032533093&Culture=en-us&community=0
