MMS 2012: Travel Guide: Hints
In this post I’ll keep track of all the hints I gave in the different posts. This can be handy to see all the hints at a glance, or even to print them for reference:
Post 1: MMS 2012 Travel Guide Part 1: Getting settled in Vegas
- Check your travel docs. Make sure you have a passport that is valid for at least 6 months after your stay, and if you are traveling from outside the United States make sure to fill in your ESTA. More info here: https://esta.cbp.dhs.gov/esta/
- Don’t forget your power socket travel adapter. Buy it before your trip; airports tend to overcharge for it. Also pack an extension cord with multiple sockets of your home plug type. You only have one adapter but will want to charge your phone, laptop, tablet and camera overnight to hit the next day of the MMS madness well prepared.
- The Venetian Conference Center is hosting the event, but the Venetian Hotel and the Palazzo Hotel form one complex, so check carefully where the restaurants are.
Check here to view the map of the 2 hotels: http://www.venetian.com/Company-Information/Map/
- Always check whether you can pay by credit card or strictly cash, just to avoid any surprises when the bill is presented. Cabs indicate on the side whether you can pay with a credit card. Always ask for a receipt of the journey while paying, and don’t forget the tip! If you are with a group it’s always a good idea to check whether a limo is available, because they are in general cheaper for groups, but negotiate the price in advance.
Post 2: MMS 2012: Travel Guide Part 2: The session types
- If by any chance you miss the keynotes (couldn’t get out of bed because of the time difference) you can always review them on YouTube and the post-conference DVD.
- All the Instructor Led Labs are available in the Commnet area to do them over again or take them at your own pace.
- Make sure to get in line at the correct Instructor Led Lab in time otherwise your seat will be assigned to someone in the standby queue. If you did not register for this lab upfront but want to attend just get to the session and wait in the standby queue. There’s always a percentage of no-shows so chances are you will still get in.
Post 3: MMS 2012: Travel Guide Part 3: Scheduling your MMS agenda
- Don’t search by speaker or topic; book by time slot instead. If you select the timeslot you’ll get a nice overview of the different sessions in that timeslot.
- Always keep a printed copy of your agenda with you. You never know when you’ll run out of juice on your mobile device.
- Some sessions are repeated throughout the event, so if you have conflicting sessions double check whether the session is given at another time.
Post 4: MMS 2012: Travel Guide Part 4: Join the Twitter army
- For people from outside the US, roaming costs can get pretty high (personal experience). However, there’s wireless in the entire convention center which is free for attendees. So make sure to switch off data roaming and use wireless while you’re in the conference center.
MMS 2012: Travel Guide Part 3: Scheduling your MMS agenda
This blog post is part of a series. The other parts can be found here: How to survive MMS 2012.
In the previous post I’ve highlighted all the different types of sessions and what to expect from them.
Now I’m going to give you some tips and tricks to book your week at MMS.
First of all, head over to the official MMS page at http://www.mms2012.com, select the login button at the top right of the page, and log in.
Note: Make sure you have your login and password combination ready when you’re at MMS. More info on this subject later on.
So you’ve successfully logged on and are now on the splash page of the MMS site:
Select the tab Sessions & Labs in the top bar to access the MMS Schedule Builder:
TIP: My approach is not to search by speaker or topic, but to book by slot. If you select the timeslot you’ll get a nice overview of the different sessions in that timeslot.
In this case I’m preparing myself for the timeslot “Wednesday, April 18, 10:15 AM – 11:30 AM”. Hit the search button.
I mark all the sessions I’m interested in by selecting the square icon in front of the session.
TIP: Don’t waste your time here trying to narrow it down to exactly one session per slot. Just browse through the sessions and check what you find interesting. Some slots will be double booked; some slots will have less interesting sessions for you.
The screenshot below shows all the different sessions I’ve selected for this timeslot.
When you’ve completely built your calendar you can print the list or save it to Outlook or any other system which supports the ICS format. Last year there was a tool for Windows Phone 7 which easily transferred your calendar to your handheld device, but there’s no official word on whether the app will return this year.
TIP: Always keep a printed copy with you. You never know when you’ll run out of juice on your device.
However, if you lose your schedule it’s not a complete disaster: the hallways have tons of PCs which are freely accessible for attendees, so you can open the MMS 2012 website and quickly check your agenda in between sessions.
TIP: Some sessions are repeated throughout the event, so if you have conflicting sessions double check whether the session is given at another time.
MMS 2012: Travel Guide Part 2: The session types
This blog post is part of a series. The other parts can be found here: How to survive MMS 2012.
So you are heading to Vegas for the System Center event of the year: Microsoft Management Summit!
But what can you expect, and how are you going to get the most out of the event? This blog post in the series will guide you through the different types of sessions and how to effectively plan your days at MMS.
There’s a huge spectrum of sessions provided, opportunities to meet with peers, and chances to check out the latest partner solutions (and get some gadgets along the way).
So let’s get started with some hints on the “official” part of MMS: the keynotes and sessions.
The different sessions:
Instructor Led Labs:
These sessions are basically big classrooms where you can get hands-on experience with the new software in predefined scenarios. They’re instructor led, but each attendee also gets a manual to get you started. You work on your workstation in a VM which is specifically designed for this classroom. The pace of these labs tends to be a little fast, so if you fall behind just continue through the exercise at your own pace.
TIP: All the Instructor Led Labs are available in the Commnet area to do them over again or take them at your own pace.
TIP: Make sure to get in line at the correct Instructor Led Lab in time otherwise your seat will be assigned to someone in the standby queue. If you did not register for this lab upfront but want to attend just get to the session and wait in the standby queue. There’s always a percentage of no-shows so chances are you will still get in.
Breakout sessions:
These make up the majority of the sessions and are listed in the session listing. These sessions have a predefined subject and are delivered by product team members, System Center MVPs and community members. These are the sessions where you really gather the info you are looking for at MMS. They are recorded, and the decks will be available afterwards for download and/or on the conference DVD.
Birds of a Feather (BOF) sessions:
These sessions are basically evening slots which are available for community members to present specific topics to a group of attendees. The topics of these sessions are proposed and chosen by you… the attendees. Make sure to check out the list of BOF sessions for some very interesting topics and a great opportunity to get first-hand info from some leading experts. There’s a lot of room for interaction in these sessions.
Apart from the sessions there are 2 major things you need to check at MMS:
On Tuesday and Wednesday morning there’s the keynote by Brad Anderson, highlighting some of the accomplishments of the last year and generally giving you a great sneak preview of the roadmap of the System Center suite. There are usually also some scoops in these keynotes.
TIP: If by any chance you miss these keynotes (couldn’t get out of bed because of the time difference) you can always review them on YouTube and the post-conference DVD.
The expo is definitely something you need to visit. All the different partners have booths, generally staffed with people who would love to attract you to their product with gadgets, prize draws, demos and free giveaways. In the center of the expo there’s also the Microsoft area, where normally Premier consultants and product team members are available to answer your questions.
So now we have all the different sessions explained. The only thing left to do is see how we can cram all this into a week… Simply by creating your own schedule with the Schedule Builder, explained in the next part of this blogging series.
SCOM 2012: Pass data to custom fields with monitors
During the “Ask the Experts” session at System Center Universe 2012 (which was the dress rehearsal for MMS 2012) I had the privilege to ask a question about a feature heavily used at some of my clients: filling in the custom fields when alerting in SCOM.
My question: “Will it be possible to update the custom fields using a monitor like you do with a rule?”
Answer: “We know this is a much-requested feature, but unfortunately it will not be possible.”
The issue is that the architecture to raise an alert is fundamentally different for rules and monitors. With rules it’s possible to pass parameters through the GenerateAlert module while for monitors this is not possible.
So there are 2 possibilities:
Either create an alerting rule next to the monitor which passes the parameters to the alert, but from a manageability point of view that is very cumbersome.
Or, the approach I came up with: run the parameters through the notification channels:
So let’s play around in the console of SCOM 2012 to get us going.
I already created a rule and a monitor for event 900.
While creating the rule you can specify the custom fields at the alerting tab as shown below:
In my case I filled in the IP address and the computer name in the custom fields.
During the creation of the monitor there’s no option to pass data to the custom fields. Although the fields are still available in the database, there’s no way to fill them in using the GUI:
So when can this come in handy? I used it to pass my own data to the alert so I could use the alertID in the notification channel to read out the alert with the custom fields to escalate to a problem management tool which uses specific keywords to escalate problems.
This is the PowerShell I use to fill the custom fields via the notification channel associated with the monitor:
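A minimal sketch of such a script, assuming the SCOM 2012 OperationsManager module is available. The alert ID would be passed in by a command notification channel, and the values written to the custom fields below are just examples — pick whatever your escalation tool keys on:

```powershell
# Sketch only: run from a command notification channel that passes
# $Data/Context/DataItem/AlertId$ as the -AlertId parameter.
param([string]$AlertId)

Import-Module OperationsManager

$alert = Get-SCOMAlert -Id $AlertId
if ($alert) {
    # Example values; replace with the data your problem management tool needs
    $alert.CustomField1 = $alert.NetbiosComputerName
    $alert.CustomField2 = $alert.MonitoringObjectDisplayName
    $alert.Update("Custom fields filled in by notification channel script")
}
```

The Update() call writes the changes back to the management group, so the custom fields are populated on the alert itself before you escalate it.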
SCOM 2007: Dynamically populating a group from txt file
At one of my clients they are using a custom-built problem management tool and CMDB tool.
This means there’s not much flexibility in getting data in and out of these systems. At one point the question came up to divide the servers into critical and non-critical servers, to adapt the level of monitoring accordingly.
The only possible way to get the server list was for them to supply me with a txt file of all the critical servers.
So well… you can click away daily to compare the list with the group, or automate the process… Of course I chose the second option.
I created a new custom management pack that reads in the txt file on a regular basis and populates the group, deleting the servers which were removed and adding the new ones. So let’s get going.
The management pack can be downloaded here: Download management pack here
Things to adapt to your needs:
- Line 52: The interval in my management pack is set to every 3 minutes (180 sec), which is way too frequent for production but great for testing. Set it to a more convenient level, as this group will not change multiple times a day. Suggestion: 21600 sec = 6 hours.
- Line 75: If you do not want to alter the management pack above you have to have your input file at c:\extract.txt. This can easily be adapted by editing line 75 before importing the management pack into your environment.
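The core of the discovery boils down to reading the file and skipping blanks. A rough PowerShell sketch of that logic (the management pack implements this as a discovery script, so treat this purely as an illustration of the idea):

```powershell
# Illustration of the discovery logic: one server name per line.
$inputFile = "c:\extract.txt"   # same path the management pack expects

# Skip empty lines so a trailing blank line can't end the run early
$servers = Get-Content $inputFile | Where-Object { $_.Trim() -ne "" }

foreach ($server in $servers) {
    # In the real management pack this is where discovery data is
    # submitted so that $server becomes a member of the group.
    Write-Output "Critical server: $server"
}
```

Servers that disappear from the file simply stop being discovered, which is what removes them from the group on the next discovery run.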
So now you have the xml file. Let’s import it in our environment and test:
Fire up your console and navigate to Administration > management packs
In the actions pane select Import Management Packs…
Select to import the management pack from disk by selecting add…
Click No at the following question. Browse to your management pack location and click Import.
Wait for it to import
When the import was successful you’ll notice the management pack listed in your mp list.
Check whether the group was created by going to the author tab > Groups.
The critical servers group was created but is empty at the moment, because the discovery process has not run yet.
For this example I’ve created the extract.txt file with 3 servers that currently reside in my environment.
After the first run of the script the servers are added to the group and the population has been completed.
To test whether servers are dynamically removed, I removed VSERVER05 from the file and waited for the next run of the discovery.
And eureka! A couple of minutes later the server has been removed from the group:
Note: Make sure not to have empty lines in the file, because an empty line will trigger the End Of File (EOF) condition and stop the script.
Now that you are sure that you always have the latest update of your critical servers group you can target this group for more strict monitoring or any additional overrides you specifically want to apply to these critical servers in your environment.
I’ve based my management pack on other community members who did similar projects:
Kevin Holman did it with populating groups from a CMDB which is SQL based: Populating groups from a sql server
Steve Rachui did it with a txt file and included several groups based on location: Populating from text file based on location
SCOM2007: How to backup your Reporting
This blog post is part of a series on how to back up your SCOM environment.
You can find the other parts here:
Another part of the process of backing up your environment, and thus making sure that all the data is available to restore it, is backing up the Reporting Services database, which basically contains all your reports.
The standard reports can easily be recreated by reimporting the management packs, but if you made custom reports they will be lost if you do not have a backup.
This process consists of 4 steps:
- Backing up the Report Server Databases (Reportserver and Reportservertempdb)
- Backing up the Encryption Keys
- Backing up the config files
- Backing up the Data files.
Let’s get started!
Backing up the Report Server Databases
The two databases to back up are ReportServer and ReportServerTempDB. Although it’s not absolutely necessary to back up ReportServerTempDB to restore your environment, it will definitely save you some time in the process. If you lose your ReportServerTempDB you’ll have to recreate it… so while you’re at it, take a backup of ReportServerTempDB as well.
You can use any backup method supported by SQL Server, whether it’s System Center Data Protection Manager, third-party software or the built-in SQL backup process.
I’ll be using the built-in SQL backup:
Open SQL Server Management Studio and browse to your server / database:
Right click your reporting dbase and choose Tasks > Back Up…
Leave the backup type as Full, change the name (if you like; otherwise use the default name) and check the location of the file.
Caution: Make sure you choose a file location which is included in your normal day-to-day file backups, so you still have it in your backup system if your server is completely lost.
If all goes well you’ll get the message that your backup was successful.
Now repeat the steps above for ReportServerTempDB and save it in the same location as your ReportServer backup.
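If you prefer to script these two backups instead of clicking through the GUI, something along these lines should work. This is a sketch: it assumes the SQL Server PowerShell module (SQLPS) and a local default instance, so adjust the server name and backup path to your environment:

```powershell
Import-Module SQLPS -DisableNameChecking

foreach ($db in "ReportServer", "ReportServerTempDB") {
    # Pick a path that your day-to-day file backup covers
    $file = "D:\Backups\$db.bak"
    Invoke-Sqlcmd -ServerInstance "localhost" `
        -Query "BACKUP DATABASE [$db] TO DISK = N'$file' WITH INIT"
}
```

Scheduling this gives you the same full backups as the GUI steps above, with WITH INIT overwriting the previous backup file on each run.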
Backing up the Encryption Keys
This encryption key is used for encrypting sensitive information in the database to ensure the safety of the data in it. You normally only have to save this key once, as it has a one-to-one relationship with the database and the symmetric key.
The key needs to be restored in the following cases:
- Changing the Report Server Windows service account name or resetting the password.
- Migrating a report server installation to use a different report server database.
- Recovering a report server installation after hardware failure.
- Renaming the computer or instance that hosts the report server.
Open the Reporting Services Configuration Manager by choosing Start > All Programs > Microsoft SQL Server ‘version’ > Configuration Tools > Reporting Services Configuration Manager.
A dialog box will appear to check the Server name and the report Server Instance:
If they are correct, click Connect.
On the next page choose Encryption Keys, and in the right pane click the Backup button.
Choose the file location + name by clicking the … button.
Fill in a password of your choice. This password is used to encrypt the file, so make sure you use a password you will remember: there’s no way to restore the key without it, and there’s also no way to reset the password on the exported SNK file.
If all goes well the key has been backed up and you’ll receive the “Creating Encryption Key Backup” successful message at the bottom.
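The same key backup can also be scripted with the rskeymgmt.exe utility that ships with Reporting Services. Run it on the report server itself; the file path and password below are placeholders:

```powershell
# -e extracts (backs up) the symmetric key, -f is the target file,
# -p is the password protecting the backup
rskeymgmt.exe -e -f "D:\Backups\rs-key-backup.snk" -p "YourStrongPassword"
```

This is handy if you want the key backup to run as part of the same scheduled job as your database backups.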
Backing up the Config files.
Reporting Services uses different files to store the application settings. It’s very important to have these config files handy when disaster strikes, because they contain all your settings and customizations.
Best practice is to take a backup of these files when you have installed the server, when you deploy custom extensions, or whenever you run a full backup of your environment for disaster recovery reasons.
The following files must be included in a backup location which is covered by your file backup system:
Web.config for both the Report Server and Report Manager ASP.NET applications
Machine.config for ASP.NET
Backing up Data Files
Backup the files that you create and maintain in Report Designer and Model Designer. These include report definition (.rdl) files, report model (.smdl) files, shared data source (.rds) files, data view (.dv) files, data source (.ds) files, report server project (.rptproj) files, and report solution (.sln) files.
Remember to backup any script files (.rss) that you created for administration or deployment tasks.
Verify that you have a backup copy of any custom extensions and custom assemblies you are using.
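A simple way to sweep all of these report artifacts into a folder that your file backup covers is a recursive copy. A sketch, with example source and destination paths:

```powershell
# Extensions from the list above: report definitions, models,
# data sources, projects, solutions and admin scripts
$extensions = "*.rdl", "*.smdl", "*.rds", "*.dv", "*.ds",
              "*.rptproj", "*.sln", "*.rss"

# Note: this flattens the subfolder structure into one folder;
# adapt if you need the hierarchy preserved
Get-ChildItem -Path "C:\ReportProjects" -Recurse -Include $extensions |
    Copy-Item -Destination "D:\Backups\ReportFiles"
```

Run it from the same scheduled job as the rest of your backups so the data files stay in sync with the database backups.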
This was the last blog post in the series “How to backup your SCOM environment”. If you follow these guidelines you’ll have a pretty good chance of recovering from a disaster with as little downtime and data loss as possible.
In the next series I’ll be posting how to recover it all using the backups we took, so stay tuned. As usual, if you have remarks or feedback you can reach me on Facebook / Twitter.
SCOM2007: How to document your SCOM Installation
This blog post is part of a series on how to back up your SCOM environment.
You can find the other parts here:
After backing up all the other necessary bits of our environment in the previous blog posts, just a few bits remain to make sure we can successfully restore our environment after a disaster.
Document your SCOM installation
This can save you a lot of time, and can even be useful when you just have to pull up a report on what actually gets monitored in the environment.
I’ve based my script on a nice blog post of Kristoper Bash which can be found here:
I’ve adapted the script so it is in line with the script I used in this post: How to backup your unsealed management packs
You could combine the two into one script, but for now I’ll leave them separate, just in case you only need to document the environment instead of backing up and documenting everything.
The script: open the pictures to read it, or download the script here: List mp script download location
The markings in yellow are what you need to modify to your own liking and environment.
Variables to change in the next section:
- $locationroot: This will be the root folder in which the folders with the date as name will be stored
- $outFile: Here you can change the filename. The filename consists of “a name you choose”_“date of today”.html
Note that in the next section the mailing capability is completely commented out. If you would like to send out a mail with the result of the script, you can activate this feature by removing the ‘#’ in front of each line.
Variables to change in the next section:
- $Sender: The sender address of the mail notifying you whether the run finished successfully.
- $OKRecipient: The email address to send the mail to when everything is OK.
- $ErrRecipient: The email address to send the mail to in case of an error.
- $smtpserver: The SMTP server used to send out mails in your environment.
Schedule this with a scheduled task in Windows on your RMS and you’re ready to go.
Tip: If you need both the documentation and the backup you could combine this script with the script I featured in this post: How to backup your unsealed management packs
SCOM 2007: Automated Backup of Unsealed Management packs
As part of my series on how to back up your SCOM environment, I’ve created a backup strategy for my unsealed management packs.
The setup I chose is a PowerShell script with error handling included, which is run by the Task Scheduler on the RMS and monitored by a management pack in SCOM.
The advantages of this setup are:
- No additional load on the RMS (although this script is light you never know what will happen)
- Better control over when the script is running.
The PowerShell script I used is based on the UnsealedMPbackup management pack which is posted here: https://skydrive.live.com/?cid=397bb61b75cc76c5&id=397BB61B75CC76C5%21217#
Although this is an excellent script, I modified it to include error handling. If you look at the script there’s also a mailer included, but it’s commented out for now. If you would like to use this as a standalone script, without SCOM monitoring the process, you can easily switch on the email function and be alerted when things go wrong.
This script will:
- Create a folder with the date of today in a root folder you define
- export all unsealed management packs to this folder
- Delete folders which are older than 15 days.
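Condensed, the script does something like this. This is a sketch using the SCOM 2007 cmdlets: it assumes the Operations Manager snap-in is loaded and connected to your management group, and the root folder path is just an example:

```powershell
$root = "C:\MPBackup"

# 1. Create a folder named after today's date
$folder = Join-Path $root (Get-Date -Format "yyyy-MM-dd")
New-Item -ItemType Directory -Path $folder -Force | Out-Null

# 2. Export all unsealed management packs to that folder
Get-ManagementPack | Where-Object { -not $_.Sealed } |
    ForEach-Object { Export-ManagementPack -ManagementPack $_ -Path $folder }

# 3. Delete dated folders older than 15 days
Get-ChildItem $root | Where-Object {
    $_.PSIsContainer -and $_.CreationTime -lt (Get-Date).AddDays(-15)
} | Remove-Item -Recurse -Force
```

The full script linked below adds the error handling, event logging and optional mailing around this core.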
The parameters you will need to fill in are marked in yellow.
You can download the PowerShell script here.
- Reading out the parameters which will be used through the script:
- RMS: A simple $rootMS = 'localhost' would do the trick as well, but I like a WMI approach so I added that in. The WMI approach will not work in a clustered environment; in that case it’s best to hardcode the name of the cluster in the script.
- Initializing the Ops Mgr 2007 snap-in: just loading the necessary cmdlets.
- Set culture info: in my case this is Dutch (Belgium). This is important for the date format, so please don’t forget to change it as needed.
- Error handling setup: defining the error handling for further use in the script.
In this section we are actually defining the location and exporting the management packs:
Define the backup location:
- Get date: here we read the system date to name the folder we will back up to.
- Define backup location: create the folder to back up to.
- Delete backup locations older than 15 days: all folders older than 15 days will be deleted. If you want to change the retention, just change the $Retentionperiod parameter.
Export the unsealed management packs. With this command we’ll export all the unsealed management packs to the folder. If you (for one reason or another) want to back up all your management packs, you can change this code to:
$all_mps = Get-ManagementPack
foreach ($mp in $all_mps) {
    Export-ManagementPack -ManagementPack $mp -Path "C:\backups"
}
Thanks to Maarten Goet for providing this example.
In this section the script writes an event to the Operations Manager event log indicating whether it was successful (ID 910) or unsuccessful (ID 911). This can be used to monitor the process.
Feel free to change the IDs as you please, but don’t forget to modify the supplied management pack accordingly later on.
The mailing section:
If you choose not to monitor this process with SCOM you can activate the mailing section that warns you about the outcome of the process.
Make sure to change the highlighted sections.
Scheduling the script on the RMS
As said before, I’m scheduling this script on the RMS using the built-in Windows Task Scheduler.
The command to schedule should be (if you save the file in c:\scripts):
powershell -command "& 'c:\scripts\backup_mp.ps1'"
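Creating the scheduled task from an elevated command prompt could look like this. A sketch: the task name and schedule are examples, so pick a time that suits your maintenance window:

```powershell
# Runs the backup script daily at 02:00 under the Task Scheduler
schtasks /create /tn "SCOM Unsealed MP Backup" `
    /tr "powershell -File c:\scripts\backup_mp.ps1" `
    /sc daily /st 02:00
```

Make sure the task runs under an account with rights to the management group, otherwise the export cmdlets will fail.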
Monitor the process with SCOM
You can easily monitor the process with SCOM and setup notifications whenever there’s an error. I’ve created a small management pack which contains a monitor to check the status of the backup.
This monitor is healthy by default and turns critical when event 911 is logged. However, when the next backup is successful and event 910 is logged, it will return to healthy. I don’t mind missing one backup.
There’s an automated recovery task included as well to restart the backup when it fails.
You can download this small MP here.
Don’t forget to setup your notifications and you’re all done.
Start to SCOM: Phase 1: The design doc
So now that you have a clear view of the assessment and of what people expect, you can start writing your design document.
I’ll be pointing out how I usually write my design docs. You can use these guidelines or create your own, totally different layout and structure. Feel free to do so.
The different components in a design doc
First of all, you’re writing a design doc for people who are not familiar with the product. You already have some insight into the technology / product, but be aware that most managers do not, so you’ll have to educate a little bit as well.
Therefore it’s a good thing to explain all the different components of your SCOM structure briefly before pointing out your decision concerning the component.
This is a brief overview of my framework for my design doc. Again this is my framework. Feel free to use it or alter it as you please:
- Goals of the project
- The current situation and why you want to implement the new product (remember the assessment phase)
- Explanation of the Operations environment and different components
- The proposed architecture + sizing
- Security + accounts which are needed for the environment
- Conclusion and summary.
These are, in general, the chapters you need to cover. Let’s start with the first one:
1. Goals of the project
This chapter will be an easy one. You formulate here what you learned during the assessment phase. It’s best to sum up where you want to be when the project finishes. Don’t go into much detail here yet; there’s plenty of room for that later on.
2. The purpose of the design document
Describe briefly the purpose of this design document. Again, don’t go into much detail here yet.
3. The current situation
Again you can use your notes from the assessment meetings to summarize the current situation and why the decision was made to switch to SCOM. Here you can already make a small high-level comparison between the current system and the new SCOM environment.
Include maps of the current topology of the network environment and/or the old monitoring system.
4. Explanation of the Operations Environment and different components.
Here the hard work begins. Luckily you only have to do this once: you can reuse this section later on, because the explanation of the components will not change, only the design decisions.
SCOM 2007: Start to SCOM.
Everybody who has been working with SCOM remembers the first time they opened the console. It can be overwhelming, and you’ve only just gotten past the experience of designing and installing the SCOM system.
So you lean back and ask yourself… Where do we go from here? Where to start? Where to get the info needed? So many questions, and so many answers to find online, posted by user groups, team blogs, white papers,…
In the next series of blog posts I’ll try to set up a step-by-step guide to get things going to a level where you can already showcase the environment and fine-tune further. I’ve been working with SCOM 2007 for 2 years, so the memory of starting out is still fresh, though it fades fast as you dig deeper into the product.
First things first: the different phases of a SCOM project and the different pitfalls they bring along:
These are subject to change of course, but I always keep more or less to these 6 phases. I call it my SCOM framework.
This series will be based on an install of a SCOM 2007 R2 environment.
I’ll walk through the different phases of the process. If there are any suggestions down the road, please do not hesitate to leave a reply or contact me via the contact info on the front page.