Blog

SCOM: Automatically create management packs with PowerShell

Recently a customer asked me to build a multi-tenant SCOM setup with different environments. There are several ways of doing this, such as connected management groups, but I opted to keep one management group and make the separation there, as this was the best fit for the client. I’m not saying this is the best fit everywhere, but for this particular case it was.

They have a very strict DTAP (Development – Test – Acceptance – Production) lifecycle for their software release model, so this had to be reflected in the SCOM model as well, which made things a little more complicated.

So to sum up the requirements:

  • Naming convention of the override management packs needed to be consistent
  • An override management pack needs to be created for all management packs introduced in the environment and for all stages in the DTAP process
  • An easy way has to be set up in the procedures for the engineer to create the override MPs for all environments

You could write a procedure instructing the engineer to create the management packs as part of implementing a new management pack in the environment, but this creates tedious, repetitive work which will lead to errors or will simply be forgotten.


That’s why I’ve automated the process of creating these override management packs with PowerShell, following the naming convention which is in effect in your company.

[xml]
###
# This PowerShell script will create override management packs for all management packs which fall into a specific
# pattern documented in $orgmanagementpackname
# Usage: CreateManagementPack.ps1
# Note: You can change the parameters below and pass them with the command if desired.
# Based on the script of: Russ Slaten
# http://blogs.msdn.com/b/rslaten/archive/2013/05/09/creating-management-packs-in-scom-2012-with-powershell.aspx
# Updated the script to create a management pack for all environments in the array $environments
###

###
# Declaration of parameters
###
$ManagementServer = "localhost"
$orgmanagementpackname = "microsoft.windows.server.2012*"
$Environments = "P", "A", "D", "T"

###
# Find the managementpacks which fit the filter documented in $orgmanagementpackname
###
$managementpacks = Get-SCOMManagementPack | Where-Object { $_.Name -like $orgmanagementpackname } | Select-Object Name

###
# For all management packs found above, create one override management pack per environment
# following the naming convention
###
Add-PSSnapin Microsoft.EnterpriseManagement.OperationsManager.Client
Foreach ($managementpackocc in $managementpacks)
{
    $name = $managementpackocc.Name
    Foreach ($env in $Environments)
    {
        # Build the ID and display name of the override management pack
        $ManagementPackID = "*Fill in company name here (no spaces!)*." + $env + "." + $name + ".overrides"
        $ManagementPackName = "*Fill in company name here*: " + $env + " : " + $name + " overrides"
        $MG = New-Object Microsoft.EnterpriseManagement.ManagementGroup($ManagementServer)
        $MPStore = New-Object Microsoft.EnterpriseManagement.Configuration.IO.ManagementPackFileStore
        $MP = New-Object Microsoft.EnterpriseManagement.Configuration.ManagementPack($ManagementPackID, $ManagementPackName, (New-Object Version(1, 0, 0)), $MPStore)
        $MG.ImportManagementPack($MP)
        $MP = $MG.GetManagementPacks($ManagementPackID)[0]
        $MP.DisplayName = $ManagementPackName
        $MP.Description = "Auto Generated Management Pack"
        $MP.AcceptChanges()
    }
}
[/xml]
Download the script from the TechNet Gallery.


This script finds all the management packs which fit the input mask in $orgmanagementpackname and creates an override management pack for each of them, following the naming defined in $ManagementPackID and $ManagementPackName.
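To verify the result, you can list the freshly created override packs from an Operations Manager shell; a quick sketch (the "Company" prefix is a placeholder for whatever company name you filled in):

```powershell
# List all override management packs created by the script.
# Replace "Company" with the prefix you configured in $ManagementPackID.
Get-SCOMManagementPack | Where-Object { $_.Name -like "Company.*.overrides" } |
    Sort-Object Name | Format-Table Name, DisplayName, Version -AutoSize
```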

This results in the following structure:

printscreen-8-04-2015 0001

Note:

  • Run this script preferably on a management server or a machine which has the SCOM console installed. If you don’t run it on a management server, make sure to change the $managementserver variable to point to a valid, running management server in the management group you want the override packs created in.
  • Because we do the work via PowerShell instead of manually in the console, no “rogue” empty folders are created in the monitoring view, so we are not clogging up our console.

MVP 2015: Cloud and Datacenter Management

 

printscreen-1-04-2015 0004

It’s strange how quickly time flies by… It has already been a year since I received my first MVP title, and just a week ago I noticed I was up for renewal again…

Receiving the mail above always has something magical, like New Year’s Eve. A new year full of opportunities lies ahead to experience the true value of the MVP program. During the last year I got to meet a lot of people in person whom I already knew online from the community, got to interact with the product team and took part in some really cool in-depth discussions which really benefit the products I work with on a daily basis.

It’s nice to know where to go if you have a problem configuring something… And with that I’m not only referring to the MVP community but also the System Center community in general. It’s you out there who keep this community alive, and I’m grateful I can contribute to it.

So in conclusion: I hope we’ll meet (again or for the first time) at an event or online, and let’s continue to spread the Sysctr love.

System Center Universe Dallas around the corner

Wow, I can’t believe it has already been a year since I spoke at System Center Universe in Houston. And what a year it was! For me personally a lot of things have changed. System Center Universe 2014 was a great experience: after winning the SCU Jedi contest I got the opportunity to speak at this fine convention.

I don’t know whether it had something to do with SCU, but a couple of months later I was awarded my first MVP in Cloud and Datacenter Management, which enabled me to engage even more in the System Center community.

So that’s me… Why should you come to SCU Dallas or take the time to watch the free live cast? Well, because the organizing team of SCU has done it again this year… They brought the usual suspects back together for a multi-track, one-day event to bring you all the new ins and outs of System Center and Azure. SCU is not only the one-day event in Dallas but a global event (and who knows, maybe even an intergalactic one). The freely available live feed gives people around the globe the ability to watch the sessions in real time, and viewing parties are organized to mingle with peers during the event.

Last year I talked about monitoring and embracing the cloud/Azure in your monitoring environment. This year I will be talking about another problem I see at a lot of clients and receive a lot of questions about when attending events: how do I create the ultimate dashboard that shows my stakeholders at a glance what they want to see?

During my session I’ll go over the do’s and don’ts of dashboarding and give you quick tips and tricks from the field to get you going fast and create the ultimate dashboard to monitor your Death Star.

Check out the agenda and timeframe to attend in person, attend a viewing party near you or watch the live feed, which is freely available.

Check the speakers: http://www.systemcenteruniverse.com/presenters.htm

Check the agenda here: http://www.systemcenteruniverse.com/agenda.htm

Check the viewing parties: http://www.systemcenteruniverse.com/venue.htm

Hopefully see you there whether it’s virtually or physically!

SCOM: Configure a monitor recovery task for a healthy state

During a recent project a client had a small request: create a monitor and run a command when a device was not accessible anymore. Easy, right! But (yep, there’s always a but) they also wanted to run a command when the monitor returned to a healthy state, to restart a service when the device came back online… Hmmm, and all in 1 monitor.

So the conditions were as follows:

Monitor:

  • Action: Run a PowerShell based monitor to test the connection with the device
  • BAD: Device is down => Run recovery task to remediate
  • GOOD: Device is up again => Run recovery task to restart service

(Note: always draw up this small matrix of a monitor design so you know exactly what the customer wants.)

I don’t have the device to simulate this, but I came up with a small example in my lab to show you how to get this working with just 1 monitor. The situation in my lab is very simple: I want to turn on my desk lighting when my PC is on (and I’m working) and turn it off when my PC is not online.

My conditions:

Monitor:

  • Action: Run a PowerShell based monitor to test the connection and pass the result to SCOM
  • BAD: PC is offline => turn off my desk lighting
  • GOOD: PC is online => turn on my desk lighting

So first things first: we need to test the connection to see whether my PC is running. To check this I’m using this small script:

[xml]

param ([string]$target)
$API = New-Object -ComObject "MOM.ScriptAPI"
$PropertyBag = $API.CreatePropertyBag()

# Test-Connection -Quiet returns $true when the target answers the ping, $false otherwise
$value = Test-Connection $target -Quiet

$PropertyBag.AddValue("status", $value)

# Hand the property bag back to SCOM
$API.Return($PropertyBag)

[/xml]

So I’m testing the connection and sending the response to SCOM. The PowerShell “Test-Connection $target -Quiet” command just returns true or false depending on whether the target is accessible or not.
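Before wrapping the script in a monitor you can test it by hand from a PowerShell prompt; a quick sketch (the script name, path and IP are examples, and MOM.ScriptAPI must be present, i.e. a SCOM agent or console is installed):

```powershell
# Run the monitor script manually against a reachable and an unreachable address.
# The script file name and target IP below are example values.
.\PingTest.ps1 -target "192.168.1.50"
# The property bag returned via $API.Return() is printed as XML; look for the
# <Property Name="status"> element containing True or False.
```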

Creating the Monitor with Silect MP Author

The creation of this monitor consists of 2 parts:

  • Defining the class the monitor will be targeted at, and therefore the machine which will test the connection to the desktop
  • Passing the status from the machine to SCOM and taking action by using a monitor

Defining a class:

To properly target this monitor we need to create a class in SCOM which identifies the servers that need to test the connection. In this case I’ve added a registry key to all servers that need to ping the desktop, so I’m starting with a registry target to create my class:

printscreen-0254printscreen-0255

I fill in a server that already has the key, which makes it much easier to browse the registry instead of typing the key in by hand with an increased margin for error.

printscreen-0256

Select the Registry key you want to look for

printscreen-0257

In my case I’ve added a key under HKEY_LOCAL_MACHINE\Software\pingtestwatchernode

printscreen-0258

Select the key and press add and ok

printscreen-0259

Identify your registry target:

printscreen-0260

Identify your discovery for the target

printscreen-0261

In my case I just check whether the key is there. No check on the content.

printscreen-0263

The discovery will run once a day.

printscreen-0264

Review everything and press finish

printscreen-0265

At this point our class is ready to be targeted with our script monitor.

Next up is to create the monitor:

Create a new script monitor:

printscreen-0266

Browse to the PowerShell script and fill in the parameters. In this case I have 1 parameter, “target”, which holds the IP of the desktop.

printscreen-0267

Define the conditions:

The healthy condition is when the status is True (type Boolean)

printscreen-0268

Critical condition is when the status is False

printscreen-0269

Note: I’m using a “boolean” Type

Configure the script, select the target you created earlier and choose the Availability parent monitor

printscreen-0270

Identify your script based monitor

printscreen-0271

Specify a periodic schedule: run every 2 minutes

printscreen-0272

No alert generation necessary.

printscreen-0273

Review all the parameters and create the script based monitor.

printscreen-0274

Load the management pack in your environment and locate the monitor:

printscreen-0278

Check the properties => recovery tasks and create 2 recovery tasks for the Health state “critical”.

Note that the screenshot below already shows the correct healthy state after configuration of the management pack.

printscreen-0279

Export the management pack, open it in an editor and locate the “Recoveries” section to find the recovery tasks we just created:

printscreen-0280

Scroll to the right, locate the “ExecuteOnState” parameter and, for the task you want to run when the monitor returns to healthy, change it from “Error” to “Success”.

Save the management pack and reload it in your environment.
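The export and re-import can also be done from PowerShell instead of the console; a sketch (the display name and file paths are example values, not the actual names from this walkthrough):

```powershell
# Export the unsealed management pack to a folder for editing
Get-SCOMManagementPack -DisplayName "Ping Desktop Monitoring" |
    Export-SCOMManagementPack -Path "C:\Export"

# After changing ExecuteOnState from "Error" to "Success" in the XML,
# import the edited file again
Import-SCOMManagementPack -Fullname "C:\Export\Ping.Desktop.Monitoring.xml"
```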

printscreen-0281

So all we need to do is test it…

My pc is on: IT-Rambo has his cool backlight:

20141130_230930098_iOS

My pc is off and the light is automatically turned off…

20141130_230904267_iOS

Final note: if you use this method, make sure NOT to save the recovery tasks in the console anymore; otherwise the settings we just changed in our management pack will be overwritten again, as SCOM can’t natively configure a recovery task for a healthy state.

You can basically use this for anything where you want to run 2 conditions on the same monitor, or even 3 if you have a 3-state monitor.

SCOM: Monitor the monitor part 1: PowerShell

Recently, during a community event, I got a question from an engineer: why didn’t SCOM notify him when SCOM was down?

My first response was very similar to the response of my favorite captain below: printscreen_surf-0018

But this got me thinking, because the engineer made a good point: for full monitoring coverage you should have another mechanism in place to monitor the monitoring system. Most companies still have a legacy monitoring system in place that can be leveraged to monitor the SCOM servers, but let’s face it: keeping another monitoring system alive just to monitor the SCOM servers only adds complexity to your environment for a small benefit.

That’s why I built a small independent check with PowerShell. In part 1 of this series I’ll go over how to monitor whether your management servers are still up and running.

To do this we need a watcher node which is able to ping the management servers. This watcher node can be any machine capable of running PowerShell and does not need the OperationsManager PowerShell module. This ensures we operate completely independently from SCOM.

Process used

The graph below shows the process used:

monitorthemonitor_servers

In my environment I have 2 management servers which are reachable from the watcher node. The first step is to dynamically determine how many management servers are in my environment. To do this, an input file is generated by PowerShell on a management server and updated once a day. This is an automated process because, face it: if we need to remember to change infile.txt whenever we add or delete a management server, we will forget.

This file will be available on the watcher node to do the ping commands even when the management servers are down.

Configuration on the Management server

(this is action 1 in the graph above)

To generate the infile containing all the management servers currently in our environment, we need to execute the following PowerShell script on a management server:

[xml]
#=====================================================================================================
# AUTHOR:    Dieter Wijckmans
# DATE:        03/12/2014
# Name:        Readms.PS1
# Version:    1.0
# COMMENT:    This script will read out all the Management servers in a management group and saves it
#           into a txt file which is used to ping the servers from an external watcher node.
#           This script is scheduled on a management server via scheduled tasks.
#           Make sure to fill in your destination (which is your watcher node) in the variable
#
# Usage:    readms.PS1
# Example:
#=====================================================================================================
$destination = "fill in the destination on the watchernode here"
$ms = Get-SCOMManagementServer

# Write the display name of every management server to the infile on the watcher node
$ms | Select-Object -ExpandProperty DisplayName | Out-File $destination

[/xml]
Schedule this script on the management server via scheduled tasks and run it once a day.

The program to run is: powershell.exe c:\scripts\readms.ps1

This will generate the infile for the ping command to check the management servers and will place it on the watcher node.
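The daily schedule itself can also be registered from PowerShell with the ScheduledTasks module (Windows Server 2012 and later); a sketch, where the task name, run time and script path are example values:

```powershell
# Register a daily task that regenerates the infile with all management servers
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-ExecutionPolicy Bypass -File C:\scripts\readms.ps1'
$trigger = New-ScheduledTaskTrigger -Daily -At 6am
Register-ScheduledTask -TaskName 'SCOM - Read management servers' -Action $action -Trigger $trigger
```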

Configuration on the Watcher node

(this is action 2 in the graph above)

Next up is configuring the watcher node to monitor our management servers and alert when they are unreachable. This is done by executing the following PowerShell on a regular basis through Scheduled Tasks. I schedule this task every 5 minutes, which means you get a mail every 5 minutes until the issue is resolved. Better to annoy a little more than to send just 1 mail which drowns in the mail volume.

[xml]
#=====================================================================================================
# AUTHOR:    Dieter Wijckmans
# DATE:        03/12/2014
# Name:        Pingtest.PS1
# Version:    1.0
# COMMENT:    This script will ping all the Management servers in a management group according to the
#           input file and escalate when a server is not reachable.
#           Make sure to fill in all the parameters in the parameter section.
#           This script is scheduled on the watcher node via a scheduled tasks.
#           Make sure to fill in your destination (which is your watcher node) in the variable
#
# Usage:    pingtest.PS1
# Example:
#=====================================================================================================

#parameter section: Fill in all the parameters below
$infile = "Location of file with management servers listed"
$outfile = "Location of file which will keep historical data on the pings"
$smtp = "fill in your smtp config to send mail"
$to = "The destination email address"
$from = "The from email address"

#reading the date when the test is executed for logging in the historical file
$testexecuted = Get-Date
#reading in all the objects listed in the infile
$objects = get-content $infile

#running through all the objects and taking action accordingly
foreach ($object in $objects)
{
    $pingresult = Test-Connection $object -Quiet
    if ($pingresult -eq $True)
    {
        $pingresult = "Online"
    }
    else
    {
        $pingresult = "Offline"
        $subject = "SCOM: Management Server " + $object + " is down!"
        $body = "<b><font color=red>ATTENTION SCOM support staff:</b></font> <br>"
        $body += "Management Server: " + $object + " is down! Please check the server!"
        Send-MailMessage -SmtpServer $smtp -To $to -From $from -Subject $subject -Body $body -BodyAsHtml -Priority High
    }
    # Log the result with a timestamp to the historical outfile
    $object + " :ping result: " + $pingresult + " :" + $testexecuted | Out-File $outfile -Append
}

#read the length of the inputfile and check the same amount of lines in the outfile to detect whether ALL management
#servers are down.
$filelength = Get-Content $infile | Measure-Object -Line
$numberoflines = $filelength.Lines
$file = Get-Content $outfile -Tail $numberoflines
$wordToFind = "Online"
$containsWord = $file | ForEach-Object { $_ -match $wordToFind }
If ($containsWord -notcontains $True)
{
    $subject = "SCOM: ALL Management Servers are down!"
    $body = "<b><font color=red>ATTENTION SCOM support staff:</b></font> <br>"
    $body += "All Management servers are down. Please take immediate action"
    Send-MailMessage -SmtpServer $smtp -To $to -From $from -Subject $subject -Body $body -BodyAsHtml -Priority High
}
[/xml]
Note: Make sure that you change all the parameters in the parameter section.

This script pings all the machines listed in the infile we created earlier and writes the result to the outfile. The outfile is then evaluated and a mail is automatically sent when a management server is down. If ALL management servers are down, a separate mail is sent to notify you that SCOM is completely down.
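The 5-minute schedule on the watcher node can likewise be registered from PowerShell; a sketch assuming the ScheduledTasks module (Windows 8/Server 2012 and later), with example task name and script path:

```powershell
# Run pingtest.ps1 every 5 minutes, indefinitely
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-ExecutionPolicy Bypass -File C:\scripts\pingtest.ps1'
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
    -RepetitionInterval (New-TimeSpan -Minutes 5) `
    -RepetitionDuration ([TimeSpan]::MaxValue)
Register-ScheduledTask -TaskName 'SCOM - Ping management servers' -Action $action -Trigger $trigger
```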

You can change the mail appearance in the $body fields in the PowerShell.

The outfile will have the following entries:

printscreen_surf-0020

My servers were offline last night at 21:13:38, so the mailing was triggered. The mail looks like this when SCOMMS2 is down:

printscreen_surf-0019

When all management servers are down it will look like this:

printscreen_surf-0021

So now we get mails completely independent from SCOM telling us there’s an issue with the SCOM management servers.

  • So what if our watcher node is down? Well, I’ve installed a SCOM agent on this machine with a special subscription to notify me when it’s down.
  • So what if our management servers are down AND my watcher node is down… Well, then you probably have a far greater problem and your phone will probably be red hot by now…

You can find the PowerShell scripts and the files on the TechNet Gallery.


In part 2 I’ll go over monitoring the SQL connection of the management servers.

SCOM: PowerShell tip: Set Resource Pool Automatic members

 

Today I ran into a situation where I had to test an advanced notification setup to send alerts to another helpdesk system.

The notification channel activated a PowerShell script with parameters from the alert to send data to the other system. After creating the notification channel there was no way to check whether the server I had already configured was functioning correctly. My 2 management servers were automatically part of the Notifications Resource Pool, making it impossible to force my testing through the configured management server.

These are the steps to troubleshoot the notifications on 1 management server and to restore the situation after testing and configuring both management servers:

These are my resource pools:

printscreen_surf-0008

Notice the difference in Icon for an automatic and manually populated resource pool.

Right click the notifications Resource Pool and select manual membership.

printscreen_surf-0009

A properties dialog will automatically pop up, giving you the possibility to change the membership of this resource pool. Even if you press Cancel at this point, the resource pool will be converted to manual membership:

printscreen_surf-0010

The active members are shown here. I’ve removed my SCOMMS2 server to continue my test of the PowerShell notification channel on SCOMMS1.

printscreen_surf-0011

 

printscreen_surf-0012

So after my tests were successful and I had configured SCOMMS2, I wanted to set the resource pool back to automatic. The catch, however, is that this is not possible via the GUI.

The following PowerShell one-liner will do the trick:

Get-SCOMResourcePool -DisplayName "Notifications Resource Pool" | Set-SCOMResourcePool -EnableAutomaticMembership $true

printscreen_surf-0015

After hitting F5, the Notifications Resource Pool is back to automatic and the 2 management servers are back in the resource pool:

printscreen_surf-0008

printscreen_surf-0016       

Note:

  • If you are executing a PowerShell script on the management servers, make sure to have the same version of the script on both management servers in the same location
  • Always make sure the Notifications Resource Pool is set back to automatic to actively divide the load between all the management servers. Otherwise you will lose the great benefit of resource pools.

ExpertsLive Free ticket giveaway

After last year’s successful edition, ExpertsLive is back on 18/11/2014!

printscreen_surf-0003

Known as one of the most Microsoft-centric events organized by the community in Holland, this event will be packed with sessions regarding the Microsoft stack.

Sessions will span the entire product group, including Azure, System Center, Hyper-V, SQL Server, Windows Server, PowerShell and Office 365.

All these sessions will be delivered by top-notch speakers in their respective fields. Numerous international speakers will bring you the best content to get you up to speed as quickly as possible. It will be an event not to miss!

I’ll be hosting a session about monitoring everything with SCOM, making it the one tool to monitor it all. A session not to miss…

For more info check out : http://www.expertslive.nl

Now the fun part!

Because Scugbe is supporting this event, we are entitled to give away 15 free tickets!


All you have to do is follow @scugbe on Twitter and tweet: “@scugbe I would love to go to @expertslive! I want to win a ticket!” If you are already following @scugbe, just send the tweet.

The winners will be announced on the 31st of October.

Hopefully see you there in Ede!

SCOM: Connect management groups between on-prem and Azure

 

During a recent project I explored the benefits of hosting a two-legged SCOM environment spanning both on-prem and cloud services. Although this is possible with just one management group and a site-to-site VPN to the cloud, they opted for a 2-management-group approach to keep a certain divider between on-prem and cloud.

In this blog post (who knows, it could become a series) I’ll show you how to connect the management groups to each other so they can exchange alerts and use 1 console, while benefiting from the presence of a management group on both platforms.

wall2top_z23gd-129

In this scenario I’m going to use connected management groups, as explained here: http://technet.microsoft.com/en-us/library/hh230698.aspx

Connecting management groups in SCOM 2012 gives you a couple of benefits. The biggest one in my opinion is that you can have multiple management groups with different settings but use 1 console to see all the alerts. The customer wanted the ability to monitor their clients on different thresholds than their own systems. Their own systems were mainly on site, while the other systems were at client sites or in the cloud.

The management group which has the consolidated view is called the local management group. In my example it is VLAB, which is on-prem. The other management groups are called “connected management groups”, in this case VCLOUD.

They relate to each other in a hierarchical fashion, with connected groups in the bottom tier and the local group in the top tier. The connected groups are in a peer-to-peer relationship with each other. Each connected group has no visibility or interaction with the other connected groups; the visibility is strictly from the local group into the connected group.

So in this scenario it’s a good idea to connect these management groups to see all data in 1 console, both on-prem and client based. From VCLOUD it’s not possible to see the alerts of VLAB, but the other way around it is.

So what do we need to do to achieve this (even with different AD domains and firewalls in between)?

First of all, prep VCLOUD in Azure:

Create endpoints on the Azure machine

To be able to reach the Azure management group from on-prem, we need to make sure a connection is possible to the VCLOUD management server. This is done through ports 5723 and 5724.

Open the Azure management portal:

My server is called vcloud-ms1

printscreen-0231

Open the endpoints and add 5723 and 5724. This in fact opens the Azure firewall to your machine. All communication will happen over these 2 ports.

printscreen-0232

Click add and fill in the endpoints as shown below.

printscreen-0233

Next, find the following:

  • The Public Virtual IP address (VIP) and take a note. In my case it’s 23.101.73.xxx
  • The DNS name: in my case vcloud-ms1.cloudapp.net

 

printscreen-0234

Prepare the onsite management server

Now that the management server of our VCLOUD management group is configured, we need to configure the management server in our VLAB environment to become the local management group which will receive the alerts.

First we need to make sure that the on-site server can resolve AND reach the server in the VCLOUD management group.

This can be done by changing the hosts file on the VLAB management server.

Go to c:\windows\system32\drivers\etc\ and open the hosts file:

printscreen-0235 

Note: I’ve deleted the last 3 digits of all the IP addresses above; you need to fill in the full IP address as documented in the Windows Azure console.
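The entry in the hosts file simply maps the public VIP to the cloud DNS name; a sketch using the example server from above (the last octet stays masked here on purpose):

```
23.101.73.xxx    vcloud-ms1.cloudapp.net
```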

Let’s check whether this works now from the VLAB management server by doing THE route check: pinging the hostname:

printscreen-0236

Hmmm, not working. Did we configure something incorrectly? Check, double check. NO.

Well, this makes perfect sense because PING IS DISABLED towards Azure machines. You will get “Request timed out” every time you test, no matter what you configure!
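Since ICMP is blocked, a TCP check against the SCOM ports is a better connectivity test; a sketch using Test-NetConnection (available from Windows 8.1/Server 2012 R2 onwards; the hostname is the example from above):

```powershell
# Check TCP connectivity to the SCOM agent/SDK ports instead of pinging
Test-NetConnection -ComputerName vcloud-ms1.cloudapp.net -Port 5723
Test-NetConnection -ComputerName vcloud-ms1.cloudapp.net -Port 5724
# TcpTestSucceeded : True means the endpoint is reachable even though ping fails
```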

Connecting the management groups

Now that we have both ends configured, it’s time to see whether we can connect the management groups. Remember: initiate the connection from the local management group (the one that needs to see all alerts and is at the top of the hierarchy).

So let’s connect to the management server in VLAB:

Open the Administration pane and select Connected Management Groups.

printscreen-0237

Right click and choose Add Management Group

printscreen-0238

Fill in all the data requested:

  • Management Group Name: the name of the VCLOUD management group
  • Management Server: the name of the management server in VCLOUD (make sure to use the exact name as filled in in the hosts file)
  • Account: because the account we use as SDK service resides in the VLAB AD and is not known in VCLOUD, we need to use VCLOUD credentials

printscreen-0239

Note: you need to initiate this from the management server where you changed the hosts file, so make sure there’s a console on it.

You will get the message below because it’s not possible to validate the account in the local AD:

printscreen-0240

Just click Next, and normally you should be connected at this point:

printscreen-0241

Success!

So now all we have to do is configure what we want to show on the local management group.

 

I’ll explain this further in the next blog in this series.

Microsoft System Center Advisor Limited Preview is live!

There are days when products become hot on the spot. It’s all about cloud lately, and sometimes it’s amazing how fast things are evolving for us IT pros.

One of those cool products which leverages the possibilities of the cloud, using its virtually endless storage space to store data and its computing power, is System Center Advisor.

printscreen-0207

When System Center Advisor first emerged, it was a small service in the cloud where you had to separately set up a small proxy agent and configure it to send useful data into the cloud. You had to set up or designate a server as a gateway to send data to the online service. The data was only updated once per day and was only available through a web console. It was a nice product, but it was way ahead of its time. One of its problems was that not a lot of people understood the need for it, as it was branded as just another piece of System Center software…

The potential of the product was already there, but it had to become easier to use…

Since SCOM 2012 SP1, Advisor has had a revamp and is fully integrated in the SCOM console. It received more rules and better performance, and people started embracing the fact that they gained access to Microsoft’s vast database of best practices to automatically evaluate their systems. No need for those complex MBSA scans (ouch, remember those…).

More and more people started using the service, but for a lot of customers I visited, System Center Advisor was still not well known; it was rather a big unknown. As soon as I explained the possibilities, they started using and appreciating the service and installed it in their environment.

[jwplayer mediaid=”1421″]

source: https://channel9.msdn.com/Blogs/C9Team/System-Center-Advisor-Limited-Preview

Now, with the new Limited Preview, Microsoft is showing the future of this cool product. All the familiar functions are still there, but there’s more…

Intelligence Packs

If you are familiar with SCOM you’ll definitely know management packs, but Intelligence Packs? Intelligence Packs are the new way of adding functionality to your Advisor environment, tailored to your business. They are the key to customizing Advisor for your environment so that it shows the data you specifically want to see.

These Intelligence Packs are stored in the Advisor Gallery and are installed online. At a later stage it will be possible to configure and create your own Intelligence Packs to gather data specific to your environment and customize it further, similar to what you do with management packs in your SCOM environment.

Currently there are Intelligence Packs for:

advisor1

All are available from the Intelligence Pack Gallery and install with just a couple of clicks. Not much configuration is needed afterwards.

The store can be reached through the Intelligence Pack button on the portal:

printscreen-0203

Let’s take the Log Management Intelligence Pack as an example (the terminology will take some time to get used to). It enables a cool new feature: gathering the event logs of your servers in one central place, where you can search and query them to gain insight into your environment.

After we have installed the Intelligence Pack through the console it will appear in our main portal view:

printscreen-0216

(notice that I already played with the other intelligence packs as well)

So if we click the “Log Management” tile, we jump to the configuration and tell Advisor which logs we would like to gather so we can run queries against them for insights. Again, this is a great way of gathering all your data in one place. Once it is all in one place, you can use it to gain insight into your environment, because let’s face it: it’s you who knows your environment best.

printscreen-0217

After we have told Advisor to gather the System log (both Errors and Warnings) on all the machines connected to it, the Intelligence Pack will kick in and gather the info for the first time, giving you a view of the collected data.

Search Data explorer

Now that we have data in Advisor, we would love to find things out on our own, for example to get to the root cause of why systems are running slow. Searching is done with the Search Data Explorer. Open it on the right to access the search tool:

printscreen-0208

This will open the Search where you can start your journey through the gathered data:

printscreen-0209

On the right you’ll find common search queries to get you started. Expect more and more lists of search queries to come online, but if you really need to create your own search query, you can always check out the Syntax Reference link under Documentation to get going.

In fact, there are three easy steps to get to your data:

1. Enter the search term:

In this example I’m using * to get all my data: Advisor hasn’t run for long, so it doesn’t have much data yet, and I would like to see what’s already in there:

printscreen-0210

2. Filter the results with the tools in the right column to really only get the data we are after:

The facets are the different object types gathered, with facets per type. 3. Finally, it’s also possible to scope a time frame for the gathered events, which can come in handy when you want to troubleshoot a problem in your environment, for example:

printscreen-0211
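To give an idea of what such queries can look like, here is a small sketch of the filter-style syntax the search tool works with. Note that these are illustrative: the field names (`Type`, `EventLevelName`, `Computer`) and the `Measure` aggregation are my assumptions based on the common queries the portal suggests, and the exact keywords may differ in your build of the preview:

```
*
Type=Event EventLevelName=error
Type=Event EventLevelName=error | Measure count() by Computer
```

The first line returns everything collected so far, the second narrows the results to error events, and the third groups the error count per computer, which is a quick way to spot the noisiest machine in your environment.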

For now this data is not exportable through PowerShell and is only available online. Further down the road in the development of Advisor, it will be possible to query this data through PowerShell and use it in your own applications.

Feedback

Another feature that has been introduced in the console is the feedback option.

The button is located on the bottom right and will open the feedback page in a separate window:

printscreen-0212

This will take you straight to the feedback window.

printscreen-0213

People who have already worked with Connect and the forums will find that it’s a mix between those two. Here you can post tips or requests to further enhance the product with new possibilities, but also file bugs you’ve come across. Members of the community can answer questions to get you going, or vote for another request.

This gives you a nice one-stop place to get up to speed quickly with the product, but most importantly it gives you the opportunity to provide feedback first hand. This list will be used by the product team to prioritize new enhancements.

Conclusion

This Limited Preview of the next generation of Advisor makes it possible to gather even more data about your environment and use that data to gain further insight. Because the system is built on Intelligence Packs, it’s very easy to tailor the console to your needs. Add the performance of cloud storage and computing, and we have a powerful additional tool to gather and analyse data.

Will this completely replace all other monitoring needs? Not yet… Will it be a great enhancement to the tools we already have in place? Certainly!

This tool is free of charge during the preview period, so for now the only thing stopping you from using it is… yourself.

Keep an eye on the blog, as I’ll dig deeper into the different Intelligence Packs once data comes in.

A first glance at Squared-Up Operations 1.8

Face it: in my view, Operations Manager is a cool product with lots of capabilities out of the box. But there is room for improvement as well. One of those areas is showing the data you so eagerly collect in SCOM to operators, or even to people who are not that tech-minded. All they want to see is whether everything is running fine so they can happily (I do hope so) continue their work. SCOM is very good at showing data to operators, but it lacks the capability to present that data in a simpler way.

Don’t get me wrong on this… it DOESN’T need to have this capability on board by default… Luckily there are a number of players on the market offering easy-to-set-up dashboards and visualizations of the data in your SCOM environment, like Squared-up.

During MMS 2013 I came across Squared-up, a small UK-based company that took a rather different approach to dashboarding. The difference is that they don’t focus on creating dashboards in the console itself, but generate them on top of a lightweight HTML5 server, which can be installed on a management server or on another server if you like. All you need to do is install the Squared-up app and connect it to your environment. From there, all the data is collected by tapping into the SCOM SDK, without interfering with the console itself.
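As a side note, the same SDK that such dashboards tap into is also exposed through the OperationsManager PowerShell module that ships with SCOM 2012, so you can get a feel for the kind of data available to a tool like this yourself. A minimal sketch (the management server name `scom-ms01` is a placeholder for your own; this obviously needs a live management group to run against):

```powershell
# Load the SCOM 2012 PowerShell module and connect to a management group
Import-Module OperationsManager

# 'scom-ms01' is a placeholder - use your own management server name
New-SCOMManagementGroupConnection -ComputerName 'scom-ms01'

# Pull the open (resolution state 0) alerts - the same data a
# dashboard would surface - newest first
Get-SCOMAlert -ResolutionState 0 |
    Select-Object Name, Severity, TimeRaised |
    Sort-Object TimeRaised -Descending
```

Everything a dashboard product renders ultimately comes from this SDK layer, which is why Squared-up can show live data without touching the console or the databases directly.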

No fuss, no hassle, just straightforward dashboards out of the box…

So the first part of this blog series (yep, I will dig deeper into this product) looks at how to install it and what is available out of the box. (Note that the screenshots are based on version 1.7; I recently installed 1.8 on top of that version without issues.)

Website: http://www.squaredup.com/

Install

So let’s start the install:

SNAG-0266

Read through the entire license agreement, like I always do (right…)

SNAG-0267

Install the HTML5 web server

SNAG-0268

Yep installing…

SNAG-0269

After the install, we need to connect it to our management group to be able to tap into the SDK.

SNAG-0270

When all is done, we can open the console for the first time by clicking the link provided on screen:

SNAG-0271

We are using the Operations Manager user “administrator”. No extra users need to be created; the users already present in SCOM will do:

SNAG-0272

After first login, you enter your activation key and you are good to go.

SNAG-0273

To my surprise, data was already coming in and being shown on the website. Without any additional configuration or settings, I already had a standard view of my environment.

SNAG-0274

First, browse through the standard views:

Active Directory view out of the box

SNAG-0313

On the left we get a quick overview of the status of the different services, and on the right we get, straight out of the box, graphs of the key performance indicators pulled from the data warehouse in real time. Pretty impressive if you ask me.

Web servers view out of the box:

SNAG-0314

If I click an alert, it instantly opens a new web view with the alert and all its parameters in a very sleek design, giving you all the data at a glance.

SNAG-0308

This dashboard is also fully functional. It’s possible to close alerts, assign alerts, or even reset monitors at a glance, as shown below.

SNAG-0309

First conclusion

I’ll definitely have to play with the product more to get to know its full potential, but so far I’m pleased with what I’m seeing: easy setup, dashboards filled with data straight out of the box, speed (although my environment is running locally on my demo laptop),…

At a later stage, when I find the 25th hour in the day, I’ll dig into the creation of custom dashboards, which hopefully will be as easy as the install.

Small tip

If you want to test-drive this web console without moving back and forth between screens, you can always open it in a Page view in the SCOM console itself, as shown below:

printscreen-0215
