Update KB4343909 for Windows 10 1803 kills Windows Defender Application Guard

Installing the August 2018 update KB4343909 on Windows 10 1803 breaks Windows Defender Application Guard (WDAG).

The Windows Defender Application Guard reports the error code 0xC0370106 as shown below.

Windows Defender Application Guard Error 0xC0370106

This is a confirmed known issue; the release notes of KB4343909 state: “Launching Microsoft Edge using the New Application Guard Window may fail; normal Microsoft Edge instances are not affected.”

The workaround is to uninstall the KB4343909 update and then install updates KB4340917 and KB4343909, in that specific order. Microsoft will fix this in the September release.

Kenny Buntinx

Speaking @ ITPROCEED 14/06/2016 in Mechelen


It’s that time of the year again! Everyone is waiting for summer to hit Belgium (I heard it will happen on a Wednesday this year!). Take some time and relax… BUT… not before we go out with a bang at ITPROceed!

This is THE not-to-miss event in Belgium focusing on IT pros. The event will be packed with sessions from both national and international speakers who use their expertise and gathered knowledge to prepare you for the next steps in your IT pro career. ITPROceed is organized by the different Belgian user groups and backed by Microsoft.

All the new technologies which will revolutionize your IT pro world will be showcased, giving you a real look and feel of what the next steps will be to move your environment forward.

I myself will give you insights into the world of OMS. My session is scheduled in the “Transform the datacenter” track. During a demo-loaded session I’ll showcase how you can use the latest and greatest in OMS to get the insights and reports you want.

If you are interested in OMS and what it can do for your organization, this is a not-to-miss session.

Oh, and by the way… Did I mention that entrance is completely FREE? The number of tickets is limited, so sign up today!

More info here:

MVP 2015: Cloud and Datacenter Management



It’s very strange how quickly time flies by… It has already been a year since I received my first MVP title, and just a week ago I noticed I was up for renewal again…

Receiving the mail above always has something magical, like New Year’s Eve. A new year full of opportunities lies ahead to experience the true value of the MVP program. During the last year I got to meet in person a lot of people whom I already knew online from the community, got to interact with the product team and took part in some really cool in-depth discussions which really benefit the products I work with on a daily basis.

It’s nice to know where to go to if you have a problem with configuring something… And with that I’m not only referring to the MVP community but also the System Center community in general. It’s you out there who keep this community alive and I’m grateful I can contribute to it.

So in conclusion: I hope we’ll meet (again or for the first time) at an event or online, and let’s continue to spread the Sysctr love.

Microsoft System Center Advisor Limited Preview is live!

There are days when products become hot on the spot. It’s all about cloud lately, and sometimes it’s amazing how fast things are evolving for us IT pros.

One of these cool products, which leverages the possibilities of the cloud, using its virtually endless storage space to store data and its computing power, is System Center Advisor.


When System Center Advisor first emerged it was a small service in the cloud: you had to separately install a small proxy agent to send data into the cloud and configure it to get useful data. You had to set up or designate a server as a gateway to send data to the online service. The data was only updated once per day and was only available through a web console. It was a nice product, but it was way ahead of its time. One of its problems was the fact that not a lot of people understood the need for Advisor, as it was branded as just another piece of System Center software…

The potential of the product was already there but it had to be easier to use…

Since SCOM 2012 SP1, Advisor got a revamp and is fully integrated in the SCOM console. It received more rules and better performance, and people started embracing the fact that they gained access to Microsoft’s vast database of best practices to automatically evaluate their systems. No need for those complex MBSA scans (ouch, remember those…).

More and more people started using the service, but for a lot of customers I visited, System Center Advisor was still a big unknown. As soon as I explained the possibilities they started using and appreciating the service and installed it in their environment.



Now with the new Limited Preview Microsoft is showing the future of this cool product. All the different and familiar functions are still there but there’s more…

Intelligence Packs

If you are familiar with SCOM you’ll definitely know management packs, but intelligence packs? Intelligence packs are the new way of adding functionality to your Advisor environment, tailored for your business. They are the key to customizing Advisor to your environment and showing exactly the data you want to see.

These intelligence packs are stored in the Advisor Gallery and are installed online. At a later stage it will be possible to configure and create your own intelligence packs to gather specific data and further customize your environment, similar to what you are doing with your management packs in your SCOM environment.

Currently there are Intelligence Packs for:


All are available from the Intelligence Pack Gallery and install with just a couple of clicks. Not much configuration is needed afterwards.

The store can be reached through the Intelligence Pack button on the portal:


Let’s take the Log Management intelligence pack as an example (this will take some time to get used to). It enables a cool new feature: gathering the event logs of your servers in one central place and searching and querying them to gain insight into your environment.

After we have installed the Intelligence Pack through the console it will appear in our main portal view:


(notice that I already played with the other intelligence packs as well)

So if we click on the tile “Log Management” we’ll jump to the configuration and tell Advisor which logs we would like to gather. Again, this is a great way of collecting all your data in one place, and once it’s there you can use it to gain insight into your environment, because let’s face it: it’s you who knows your environment best.


After we have told Advisor to gather the System log on all the machines which are connected to Advisor (both Errors and Warnings), the intelligence pack will kick in and gather the info for the first time to give you a view of the collected data.

Search Data explorer

Now that we have data in our Advisor, we would love to find things out on our own, for example to get to the root cause of why systems are running slow. Searching is done with the Search Data Explorer. Open the Search Data Explorer on the right to access the search tool:


This will open the Search where you can start your journey through the gathered data:


On the right you’ll have common search queries to get you started. Expect more and more lists of search queries to come online, but if you really need to create your own search query you can always check out the Syntax Reference link under documentation to get you going.

In fact there are 3 easy steps to get your data:

1. Enter the search term:

In this example I’m using * to get all my data. Because Advisor hasn’t run that long it doesn’t have a lot of data yet, so I would like to see what’s already in there:


The next step is to filter the results with the tools in the right column to really only get the data we are after:

The facets are the different object types gathered, with facets per type. In addition, it’s also possible to scope a time frame for the gathered events. This can come in handy when you want to troubleshoot a problem in your environment, for example:
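To make this concrete, here are a few starter queries in the style of the preview’s search language (written from memory of the preview portal, so treat the exact syntax as illustrative and double-check it against the syntax reference in the documentation):

```
*
Type=Event EventLevelName=error
Type=Event EventLevelName=error EventLog="System"
```

The first returns everything collected so far, the second narrows down to error events, and the third scopes further to a specific event log.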


For now this data is not exportable through PowerShell and is only available online. Further down the road in the development of Advisor it will be possible to query this data through PowerShell and use it in your own applications.


Another feature that has been introduced in the console is the feedback option.

The button is located on the bottom right and will open the feedback page in a separate window:


This will take you straight to the feedback window.


People who have already worked with Connect and the forums will find that it’s a mix between those two. In here you can give tips or requests to further enhance the product with new possibilities, but also file bugs you’ve come across. Members of the community can answer questions to get you going or vote for another request.

This gives you a nice one-stop place to get up to speed fast with the product but, most importantly, the opportunity to give feedback first hand. This list will be used by the product team to prioritize new enhancements.


This limited preview of the next generation of Advisor gives you the possibility to gather even more data about your environment and use it to gain further insight. Because the system is built on intelligence packs it’s very easy to tailor the console to your needs. Add the performance of cloud storage and computing to the game and we have a powerful additional tool to gather and analyse data.

Will this completely replace all other monitoring needs? Not yet… Will it be a great enhancement to the tools we already have in place? Certainly!

This tool is free of charge during the preview period. So for now the only thing that is stopping you from using this tool is… yourself.

Keep an eye on the blog as I’ll dig deeper into the different intelligence packs when data comes in.

Received MVP 2014 award


Yesterday I received the news that I have been awarded the Microsoft Most Valuable Professional award 2014 in Cloud and Datacenter Management.


I can’t describe how thrilled I am to be a part of this community to share even more knowledge with true experts in the field to gain even more insight in the System Center products.

This couldn’t have been possible without the help and support of a lot of people who guided me into the world of System Center. There’s just one small problem with name dropping: you always forget someone. But hey, I’m happy to take the risk.

First of all I would like to go back to 2010. While I was working at a client I came across Kurt Van Hoecke (who’s now an MVP as well), who introduced me to the System Center suite. I had an ITIL background but had never heard of System Center as such. I agreed to join him at MMS 2010 and barely got there due to the ash cloud. During that MMS I met the people of the System Center User Group and other System Center engineers who became good friends afterwards.

Time went by and I started to experiment with SCOM and other Sysctr products. I changed employer specifically to start working with Sysctr products and from then on it started rolling.

I officially joined SCUG Belgium in 2011 and have blogged ever since. I started speaking at events as well, with a couple of recent highlights (ExpertsLive, System Center Universe US, …) and hopefully many more to come.

During the past years I’ve enjoyed sharing my knowledge and findings regarding the Sysctr products, helping out people with issues and just meeting new people with the same passion. I can’t count the hours I’ve spent on these activities, but I enjoy doing it; otherwise I wouldn’t keep it up, right?

So what now? Well, basically nothing changes. I will continue blogging, speaking, helping out and hopefully meet even more people with the same passion. As a board member of SCUG I can say that we will continue to provide a platform for System Center content in Belgium and throughout the world. If you would like to start blogging / speaking / contributing here, just drop me a line.

So finally I would like to start name dropping… The dangerous stuff right?

First of all thanks to Arlindo Alves and Sigrid VandenWeghe: As Microsoft Belux community leads they provide us (and me) with a solid platform to build and grow our community platform.

Second I would like to thank the members of the SCUG who helped me at the beginning of my wanderings through the System Center world.

Third I would like to give a shout-out to some specific people who had a significant impact on the journey I’ve travelled so far. Thanks Maarten Goet, Kenny Buntinx, Tim de Keukelaere, Cameron Fuller, Kurt van Hoecke, Kevin Greene, Marnix Wolf, Mike Resseler and so many more I’m forgetting to mention right now.

It’s because of these individuals, and even more because of the buzz in the Sysctr community, that I really like sharing my knowledge and meeting new people while speaking.

Last I would like to express a special thanks to the Sysctr community members who provided good content in the past, do so now and will in the future. It’s their blogs, effort and guidance that helped me in the beginning to gain a good insight into the Sysctr world.

Some blogs really helped me in the beginning (and still help me today):

Last but not least I want to encourage you to share your knowledge in the community as well. Every bit of effort, even the smallest, really contributes to keeping this community alive and helping others fully understand the potential of the System Center suite. Hopefully see you at one of the events in the near future!

Connect with me on

System Center Community update (Sysctr night and SCU2014)

The new year 2014 is not even a couple of weeks old and the first Sysctr events are already announced or planned. Don’t you just love it when the community is buzzing again, with new and exciting events just around the corner?

System Center Night (22/01/2014 Brussels)


My first appointment will be the System Center Night organized by us, System Center User Group Belgium. For the first time in a long while (heck, I can’t even remember us ever doing this) we are organizing a two-track evening with 2 sessions on CDM and 2 sessions on ECM.

There are still a couple of seats left but they are limited, so if you’re not signed up yet make sure to do so. More info on the SCUGbe events page:

System Center Universe 2014 (30/01/2014 Houston TX)

The second appointment of this year is a week later… and one I’m really looking forward to. As a fan from the very first hour, I’m thrilled to be able to speak at the System Center Universe (SCU) event in Houston on the 30th of January.


For this I had to battle my way into the SCU_JEDI position to get the slot. To those who voted for me: thank you, I won’t let you down!

The cool thing about this event is not only the out-of-this-world list of speakers and the agenda (check it out here), but also the fact that it has been broadcast across the globe in HD from the very beginning. This gives everyone the opportunity to tune in for free and witness the event live from their own living room, business or even with their own local user group. That’s right, user groups around the world are organizing simulcast parties. If you want to join them, check whether there’s one near your location and jump in:

Another cool thing is the fact that you can really interact with the event… Right from the start Rod Trent has provided great coverage on social media during and after the event. You can really engage with the event in Houston and ask questions to the panel. This is, in my opinion, a huge plus for all the people who are viewing from abroad and an extra channel through which you can experience the event and get all the inside info…

Follow @rodtrent or check the official hashtag #scu2014 for more info on the event.


My session will be about monitoring your cloud with System Center Operations Manager:

What is that strange Interstellar cloud floating through space holding all your servers, services, data, etc.? Make this not a huge unknown in your universe but send out your probes to get the data back to your mother ship and start monitoring it. Use the force of this massive cloud to even monitor your servers at the mother ship. The possibilities are out there… Just grab them, combine the forces and become a true master of your universe.

It’s scheduled at 2:35pm – 3:20pm Texas time (approx. 10 PM Brussels time), so tune in to the simulcast if you want to check out my session.

The closest Simulcast party for Belgium and Netherlands is held by ScugNL in Hilversum. More info here:


So I hope to see you all live or virtually at System Center Universe 2014.

If you want to get in touch: connect and drop me a line on Twitter: @DieterWijckmans

May the force be with you… Always

SCOM: Disk space monitoring extension pack

In the constant quest to keep your environment running, disk space is one of the things that need to be available to satisfy your organization’s continuously growing hunger for storage.

The price of storage has dropped significantly over the last few years, but unfortunately the demand for storage has grown as well, as files are getting bigger and more and more data is kept.

SCOM has had different mechanisms over the years to make sure you are properly alerted when disk space is running low. In this post I will show you my method of keeping an eye on all the available disk space. This is, however, my point of view and open for discussion as usual.

I started this blog post because of a case I received from one of my customers:

  • Disks should be monitored on both free MB left AND % free space left.
  • SCOM only needs to react when BOTH thresholds are breached.
  • Different thresholds apply to critical and non-critical servers.
  • A different kind of ticket needs to be created for critical and non-critical servers.
  • A warning and an alert should be sent out: one to warn upfront, and another when things get serious.
  • Every day a new ticket should be sent when the condition was not solved the day before.
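The core of the first two requirements is a plain logical AND between the two thresholds. A hypothetical PowerShell helper (the function name and parameters are mine, purely to illustrate the logic the SCOM monitors below will implement natively):

```powershell
# Return $true (alert) only when BOTH thresholds are breached, never on just one.
function Test-DiskSpaceAlert {
    param(
        [double]$FreeMB,
        [double]$FreePct,
        [double]$ThresholdMB,
        [double]$ThresholdPct
    )
    # Logical AND: a big disk at 4% free may still have plenty of MB left
    return (($FreeMB -lt $ThresholdMB) -and ($FreePct -lt $ThresholdPct))
}

Test-DiskSpaceAlert -FreeMB 51200 -FreePct 4 -ThresholdMB 1024 -ThresholdPct 5  # only % breached: no alert
Test-DiskSpaceAlert -FreeMB 800   -FreePct 4 -ThresholdMB 1024 -ThresholdPct 5  # both breached: alert
```

This AND (rather than OR) is exactly why the rollup monitor further down uses “best state of any member”.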

My initial response was: great, let’s bring in Orchestrator to handle the better part of the logic. The answer was, as predicted: no.

Ok so let’s break this up in the different categories:

  • Detection
  • Notification
  • Reset

Note: I already created a management pack for this scenario, but I’m explaining it thoroughly so you can use this guide for other monitoring scenarios as well.

Download the mp from the gallery:




We are in luck because SCOM already has the ability to monitor on both conditions mentioned above (free MB left AND % free space). This was the case in the logical disk monitor and it is still present today, BUT (yep, there will be a lot of BUTs in this post) this is not the case in the Cluster and Cluster Shared Volumes (CSV) monitors. They use the new kind of disk space monitoring, where the previous single monitor with double thresholds is divided into 2 separate monitors with a rollup monitor on top. In my opinion a good decision.

So at this point we can use the same method for all different kinds of disks: 2 monitors with 1 rollup monitor on top. GREAT.

So let’s start configuring them! Fill in all the different thresholds and you are good to go right?

In theory yes… but in this case not quite. One of the big hurdles is the fact that a monitor can only fire off one notification as long as it is not reset to healthy. As we need a notification on both warning and error, we have an issue here. The notification process is by design built so that you will only receive an alert once, for either warning or error, on the monitor.

Because we need to have a warning AND error we need to create additional monitors to cope with this requirement.

This is in fact how I tackled this issue.

Creating the necessary monitors.

To make sure we can act on both thresholds we will need to create 3 monitors: a rollup monitor, a Free Space Monitor (%) and a Free Space Monitor (MB), like the ones which ship out of the box.

So let’s get at it:

Note: I’m using the console to quickly create the management pack, to show you how to solve this issue with a minimum of authoring knowledge. However, I advise you to dig deeper into the different authoring solutions for SCOM.

Note: All the necessary monitors are already in the management pack included in this post. I merely describe the process here so you can use this method to do the same thing for another scenario.

Create the Rollup monitor

A rollup monitor will not check a condition itself but will react on the state of the monitors beneath it. Therefore we have to create it first. To make sure it shows up right under the other monitors we keep the same naming but add the word WARNING at the end.

Open the monitor tab and choose to create a monitor => Aggregate Rollup Monitor…

Fill in the name of the monitor


In this case we want the best state of any member to roll up, because we want both MB free AND % free to have breached their thresholds, and thus both be in warning state, before we are alerted:


We would like to have an alert when there’s a warning on both monitors underneath this monitor so we change the severity to Warning.


Create the monitors underneath this rollup monitor

To make sure our new rollup monitor is correctly influenced by the monitors underneath, we now need to create the monitors with the conditions MB free and % free.

These are included in the management pack as well. Note that when you create a monitor you need to select the appropriate rollup monitor under which it needs to reside, as shown below:


For the performance counter in this case I used these parameters:

object: $Target/Property[Type="Windows5!Microsoft.Windows.Server.ClusterDisksMonitoring.ClusterDisk"]/ClusterName$

Counter: % Free Space

Instance: $Target/Property[Type="Windows5!Microsoft.Windows.Server.ClusterDisksMonitoring.ClusterDisk"]/ClusterDiskName$$Target/Property[Type="Windows5!Microsoft.Windows.Server.ClusterDisksMonitoring.ClusterDisk"]/ClusterResourceName$

NOTE: Make sure to turn off alerting on these monitors, as we do not want to receive individual alerts but just the alert of the rollup monitor.

If you have created the monitors correctly it should look like this:



As you can see the monitors are now shown right beneath the actual monitors.

You can use this scenario for basically all approaches where you need to create double tickets for the same issue, if they are caused by the same 3-state monitor.

Last important step in configuring the monitors

Because we now have the warning condition set with the appropriate thresholds, we need to do the same thing for the out-of-the-box monitor so it only shows us an alert when both critical conditions are met.

Therefore we need to override them with the proper thresholds and configuration:

For the rollup monitor we want to make sure it generates an alert when both critical conditions are met; therefore we set the following overrides to true:

  • Generates alert
  • Enabled
  • Auto-Resolve

For the alerting part we only want to be alerted on a Critical state, because otherwise the 2 sets of monitors will interfere with each other. Therefore we set Alert on State to “critical health state”. Last but not least, the rollup algorithm needs to be “best health state of any member”, because again we only want to be notified when both conditions are met.



The 2 monitors under the aggregate rollup monitor also need to be updated with the correct thresholds and set to not generate alerts; otherwise we will have useless alerts, as we only want to be alerted when both conditions are met.


Creating the necessary groups.

After we have created the monitors, we need to make a clear distinction between the critical servers and the non-critical servers. This gives us the opportunity to set different thresholds and different levels of tickets per category of server.

You could create a group of servers with explicit members and go from there. From a manageability standpoint, however, this is not a good idea, as it requires the discipline to add a server to the group whenever it changes category or is installed. This leaves way too much room for error.

Therefore we are going to create groups based on an attribute which is detectable on the servers. In this case I set a regkey on the servers identifying whether it’s a critical server or not. This can easily be done by running a script through SCCM or during the build of the server.
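Such a tagging script can be as small as this sketch (the key path matches the HKEY_LOCAL_MACHINE\Category\critical convention used further down; adapt it to how you actually configure the attribute, and run it elevated, e.g. via an SCCM package or your build sequence):

```powershell
# Tag this machine as a critical server so the SCOM registry attribute discovery picks it up
New-Item -Path 'HKLM:\Category' -Force | Out-Null
New-ItemProperty -Path 'HKLM:\Category' -Name 'critical' -Value 'true' `
    -PropertyType String -Force | Out-Null
```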

Note: Do this in a separate management pack from the one you use for your monitors, as this management pack, once sealed, can be reused throughout your entire environment.

To create the attribute go to the authoring pane and under management pack objects select the attributes


Create new attribute


In this case I name it Critical server.

In the discovery method we need to tell SCOM how the attribute will be detected. In this case I choose to use a regkey.

In the target you select Windows Server, and the target will automatically be set to Windows Server_Extended.

The management pack should be the same one your groups will reside in, because we need to operate within the same unsealed management pack.


So after we filled in all the parameters it should look like this:


Last thing to do is to identify the key which is monitored by SCOM.

In my case it’s HKEY_LOCAL_Machine\Category\critical


Next up is creating both our groups: critical and non-critical servers.

Create a new group for the critical servers:


Check out the Dynamic Members rules


Select the Windows_Server_Extended class and check whether the property Critical server equals True.


The group will now be populated with all servers where this key has the value “true”


The only thing left to do is the opposite: create a group containing only the servers that do not have this key set to true.



Because we now have all the building blocks to divide the warning and error conditions over both groups of servers, the only thing left to do is create the notification channels with the desired actions configured.

I ended up with 3 scenarios with their notifications to match the requirements:

Notification 1:

I want to be alerted for a critical alert on the Critical servers and create a high priority ticket through my notification channels.


Notification 2:

I want to be alerted for a critical alert on the non-critical servers and create a normal priority ticket through my notification channels.

Notification 3:

I want to be alerted for a warning alert on both the critical servers and the non critical servers and send out a mail through my notification channels.


The next steps, getting the tickets out of SCOM and into your organization, are specific to your environment, but at this point the different scenarios are covered.


The last thing on the list was to reset the monitors on a daily basis, so we are sure we keep getting alerts as long as the condition is not resolved. This is accomplished by using my resetmonitorsofspecifictype script, which I documented in this blog post:



This blog post covers all the different requirements in this scenario, and we did not have to build any complex logic outside of SCOM; we used technology within SCOM to accomplish our goal.

The last thing I would recommend is to seal the management pack used for the group creation. That way you can reuse it in other unsealed management packs as well to distinguish between critical and non-critical servers.

Again you can use this approach for all different monitors.

So you’ve installed SCOM… Now what? Livecast is available


On the 11th of June I gave a LiveMeeting on how to get started quickly with SCOM 2012.

I started after a fresh install of SCOM and went through the routine of getting you started quickly by:

  • Performing a post-install health check
  • Configuring reporting
  • Configuring the retention of the data warehouse
  • Deploying your agents
  • Defining the 3 magical questions to configure your monitoring

The webcast is available on TechNet.

Get it here:


This livemeeting was the first of many more to come in this series to get you up and running fast.

SCOM: Automatically enable AgentProxying


When setting up a new SCOM environment with a lot of clusters, Exchange servers and DCs involved, alerts that agent proxying is not enabled will quickly pop up. This is in fact one of the most common alerts you get when starting to roll out agents and management packs.

What is this Agent proxying?

This setting is configured at the agent level and allows the agent to forward data to the management server on behalf of another entity. Common scenarios are a DC submitting data on behalf of the domain, or a cluster node submitting data about the cluster resources.

In various management pack guides the agent proxy setting is documented as mandatory for the initial discovery (e.g. the cluster management pack), so if you did not read the guide and forgot to set it, the discovery will just not work.

In fact, this setting is disabled by default. SCOM will detect when data is sent by an agent that did not originate from its own entity and will alert you about it. But that’s it; no further action is taken.

You can manage this manually by browsing to the Administration pane => Agent Managed, opening the properties of the agent and checking the “Allow this agent to act as a proxy and discover managed objects on other computers” tick box.

But this can be a hassle especially in a new management group.

There are various scripts out there to enable the agentproxying option on all agents. This however could pose a security risk if malicious data comes into your management group and floods your management server.
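Those bulk scripts usually boil down to a one-liner like the following (shown for completeness only; run it from the Operations Manager Shell, and note that the exact property name may vary between SCOM versions):

```powershell
# Enable agent proxying on EVERY agent that doesn't have it yet - bulk, not selective!
Import-Module OperationsManager
Get-SCOMAgent | Where-Object { $_.ProxyingEnabled.Value -eq $false } | Enable-SCOMAgentProxy
```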

Therefore I’m in favor of a more selective approach.

So this is my short solution to automate this process.

My approach

First take a look at the alert. One of the most common misunderstandings is that it’s not the alert source which needs to have the agent proxying option enabled (in this case VSERVER03), but the server in the alert description (in this case VSERVER001).


This alert is generated by the operations management packs which are installed by default, so no tweaking is required here.

My solution to automate this process is to use a PowerShell script in combination with a notification channel that reacts on the alert shown above.

The PowerShell script:

# AUTHOR:    Dieter Wijckmans
# DATE:      10/05/2013
# Name:      set_proxy_enabled.ps1
# Version:   1.0
# COMMENT:   Automatically activate agent proxy through notification channel
# Usage:     .\set_proxy_enabled.ps1

Param ([String]$sAlertID)

### Prepare environment for run ###

## Read out the management server name
$scomMS = $env:computername

# Initialize the Ops Mgr 2012 PowerShell provider
Import-Module -Name "OperationsManager"
New-SCManagementGroupConnection -ComputerName $scomMS

# Get the alert details
$oAlert = Get-SCOMAlert -Id $sAlertID

# Tag the alert so we can verify the script ran
$oAlert.CustomField1 = "agent proxy enabled"
$oAlert.Update("")

# Get the FQDN of the agent to set the proxy for.
# Note: $input is a reserved automatic variable in PowerShell, so use another name.
$description = ($oAlert.Description).ToString()
$outputtemp = $description.Split('()')[1]
$agentname = $outputtemp.Trim()

# Set the agent proxy setting
Get-SCOMAgent -DNSHostName $agentname | Enable-SCOMAgentProxy -PassThru



Download the script here:


In a nutshell the following steps will be performed:

  • Read in the parameters from the subscription
  • Prepare the environment
  • Read the alert
  • Find the server name
  • Set the agent proxy setting

Note: I’m also updating CustomField1 here so you can check afterwards that the script ran correctly.

So on to the configuration of our notification:

Navigate to Administration => Notifications => Channels

Right click and choose new notification channel:


Name your command notification channel:


Fill in the following (update with your respective paths of course):


"c:\scripts\set_proxy_enabled.ps1" '$Data/Context/DataItem/AlertId$'
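The screenshot of this dialog is missing here, so as a sketch: a command notification channel has three fields, and with the paths used above they could be filled in along these lines (the powershell.exe path is the Windows default; treat the exact values as assumptions):

```
Full path of the command file:
    %windir%\system32\windowspowershell\v1.0\powershell.exe
Command line parameters:
    -File "c:\scripts\set_proxy_enabled.ps1" '$Data/Context/DataItem/AlertId$'
Startup folder for the command line:
    c:\scripts\
```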



Move on to the Subscribers:



Click add


Fill in a name:


Configure the subscriber with the channel we just created:


Click Finish twice.


Set up the subscription:

Create a new subscription:


Choose the criteria. In this case we want to trigger this subscription when the “Agent proxy not enabled” rule logs an alert.


Select the addresses (I chose to send a mail to myself as well, as a backup option)


Select the channels




And save


Now wait for an alert, then check the alert details for our update of CustomField1 and verify that the tick box is enabled at this point.

If you have any questions, make sure to drop me a line in the comments or ask your question via Twitter (better monitored than the comments).

SCOM: Input and Pass user input parameters to Console Task


This is a small thing I figured out to request user input and pass it as a parameter to a console task by using PowerShell.


The client wanted to be able to create tickets from an alert in the console in case they were missed by an operator or a notification script. The notifications pass the different parameters to a PowerShell script that generates the ticket. So far so good. But they wanted to call the notification from an Alert Console task. There are different parameters that need to be customized per alert to generate the correct ticket. One option was to create an Alert Console task for ALL the different classifications of alerts, which would be a nightmare from a manageability perspective and would clutter the console.


I came up with a small PowerShell script which asks the user for input and uses that input to generate the ticket with the correct info. The user still needs to know what to fill in, but it’s still better than creating all the different Alert Console tasks.

This snippet is reusable in any script you need to make interactive, so you can prompt users for input during a console task.


In this example I’ve created a small Alert Console Task to connect to a remote server via Remote Desktop Connection, which can be a different server from the one generating the alert you’ve selected.

The PowerShell script I used in this example can be downloaded here

1. Creating the console task:

First things first. We need to create the console task.

Navigate to the Authoring pane > Management pack objects > tasks > create new task.

In the selection window select a Console Task > Alert Command line (This is necessary if we want to pass parameters from the alert to the script we would like to run)


Name your task.

Note: This will be the actual name which will appear in your console so keep it short and simple but clear enough so someone will know what the task will do.


Note: I’ve created a separate management pack for all my Console Tasks

Specify the command line options:

  • Application: %windir%\system32\windowspowershell\v1.0\Powershell.exe
  • Parameters: C:\scripts\consoletask\remotedesktop.ps1 $ID$ “$Managed Object Name$”
  • Working Directory: c:\scripts\consoletask\

Note: The script must be copied to the local computer where the console is installed. In this case: c:\scripts\consoletask\
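The script on the other end receives those two arguments positionally. As a sketch, its opening lines could look like this (the parameter names are my own choice, not prescribed by SCOM):

```powershell
# remotedesktop.ps1 -- receives the values the console task passes in
# via $ID$ and "$Managed Object Name$"
Param (
    [string]$alertID,            # ID of the selected alert
    [string]$managedObjectName   # name of the object the alert came from
)
```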


  • If you want to check which alert parameters you can pass to your script, click the arrow behind the parameters field.
  • Keep the “Display output when this task is run” box ticked, as this is a great tool to check whether there are issues with your script during execution. Once testing has convinced you that the task is working, you can change it to hidden.


2. Check the Console Task

When saved, the Task will appear between the Alert Tasks in the task pane when you select an alert.


When you click the task, the Console Task Output window will pop up together with a window asking the user to enter a server name. If the user clicks Cancel, the task is terminated.


Success! The remote connection is started with the server we have put in!


3. Disable the Output window

Now that we have verified everything is working we can disable the task console window so only the prompt for input is shown.

Open the task properties and deselect the box “Display output when this task is run”



4. PowerShell Script

The PowerShell script I used in this example can be downloaded here


The section responsible for the input is shown below; if you have existing scripts, this is what you are looking for to make them interactive:

# Get the server to connect to
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.VisualBasic') | Out-Null
$server = [Microsoft.VisualBasic.Interaction]::InputBox("Enter a Servername", "Servername", "")
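Putting the pieces together, the whole remotedesktop.ps1 could look something like this. This is a sketch under two assumptions of mine: InputBox returns an empty string when the user cancels, and mstsc /v: is used to open the RDP session:

```powershell
# remotedesktop.ps1 -- prompt for a server name and open an RDP session to it.
# Parameter names are illustrative; the console task passes $ID$ and
# "$Managed Object Name$" positionally.
Param (
    [string]$alertID,
    [string]$managedObjectName
)

# Load the VisualBasic assembly so we can show an input box
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.VisualBasic') | Out-Null

# Prompt the operator; InputBox returns an empty string on Cancel
$server = [Microsoft.VisualBasic.Interaction]::InputBox("Enter a Servername", "Servername", "")

if ([string]::IsNullOrEmpty($server)) {
    # User pressed Cancel (or left the field empty): terminate the task
    exit
}

# Launch the Remote Desktop client against the chosen server
Start-Process -FilePath "mstsc.exe" -ArgumentList "/v:$server"
```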
