Sail Away Systems

December 30, 2014
by dbissett
0 comments

SCOM Notifications Config Gotcha

I was scratching my head over why SCOM was insisting on authenticating when sending to my Exchange relay – this is why…

Got this from Kevin Holman’s blog, very useful:

Setting up notifications for email, IM, or command channels is almost identical to how this was configured in OpsMgr 2007 R2. This article will just serve as a walk-through of the process, for example immediately after deploying OpsMgr 2012. The key difference here is that notifications are now managed by a Resource Pool, instead of depending on just the RMS.

Notifications in OpsMgr are made up of three primary components – the Channel, the Subscriber, and the Subscription. The Channel is the mechanism that we want to notify by, such as email. The Subscriber is the person or distribution list we want to send to, and the Subscription is a definition of criteria around what should be sent.

The SMTP Channel:

We will first need to create the channel. In the Administration pane, go to Notifications > Channels, then right-click and choose New channel > Email (SMTP).

Give your channel a name. We might have multiple email channels: one for emails to our primary work mailboxes, and maybe another with different formatting for sending email to cell phones and pager devices. Let’s just call this one our “Default SMTP Channel”.

Click Add, and type in the FQDN of your SMTP server(s). This can be an actual SMTP enabled mail server, or a load balanced virtual name.

I am going to select “Windows Integrated” for my Authentication mechanism, since my mail server does not allow Anonymous connections.

For the Return Address – I have created an actual mail-enabled user to send email notifications through SCOM. It may not need to be a real mail address – that mostly depends on your mail server security policies.

Next up is the email format. We can customize this with very specific information that is relevant to how we want emails to look from SCOM. I will just accept the defaults for now. I can always come back and customize this one, or create additional channels with different formats later.
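For those who prefer to script this, the OperationsManager PowerShell module that ships with SCOM 2012 includes an Add-SCOMNotificationChannel cmdlet. A minimal sketch only – the SMTP server and return address below are placeholders, the channel name is the one used above, and you should confirm the exact parameter set with Get-Help Add-SCOMNotificationChannel on your version:

# Sketch – run from the Operations Manager Shell; server and address values are placeholders
Import-Module OperationsManager
Add-SCOMNotificationChannel -Name "Default SMTP Channel" -Server "smtp.yourdomain.com" -From "scomnotify@yourdomain.com"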

The Subscriber:

Next up – creating the subscriber. Right Click “Subscribers” and choose “New Subscriber”

This will default to show your domain account. You can change this to whatever you like:

Next – we need to choose when this subscriber wants to receive email notifications. This is especially important for things like on-call pager devices, or when people work shifts and only want to see emails during certain times.

Next – we need to add an email address to the subscriber. I will add my default work email:

Then select the Channel type, and the email address:

Additionally – you can configure a specific schedule for this specific address. The previous schedule was for the subscriber itself, but a subscriber can have multiple addresses with different schedules if needed. I will keep things simple and choose “Always send”. Click Finish a couple times and your subscriber is set up.
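The subscriber can also be created from the shell with Add-SCOMNotificationSubscriber. A rough sketch – the subscriber name and address are placeholders, and the -DeviceList parameter name is assumed from the SCOM 2012 module, so verify it with Get-Help first:

# Sketch – name and address are placeholders
Add-SCOMNotificationSubscriber -Name "DOMAIN\kevin" -DeviceList "kevin@yourdomain.com"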

The Subscription:

Now we create a new subscription – Right Click “Subscriptions” and choose New Subscription.

Give your subscription a descriptive name that says what it covers and who it goes to, like “Messaging team – all critical email alerts”.

On the criteria screen – we have some very granular capabilities to scope this subscription. My goal for this simple one is just to send me any new critical alert that comes into my environment:

Next we add the subscribers to the subscription:

We also need to choose which Channel we want to use for this subscription:

On this same screen – there is an option for delay aging:

What that does is allow you to have multiple alert subscriptions and, using the delay, build an escalation path: if an alert is not modified in a way that takes it out of the notification criteria for these subscriptions, the delayed subscription will fire as well.

Click “Finish” and we are all set. Behind the scenes – what happened is that all this information was actually written to a special management pack – the Microsoft.SystemCenter.Notifications.Internal MP.
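If you want to confirm from the shell that the subscription really did land in that management pack, the following read-only sketch works (assuming the OperationsManager module is loaded and C:\Temp exists):

# List current notification subscriptions and export the unsealed MP they live in
Get-SCOMNotificationSubscription | Select-Object DisplayName, Enabled
Get-SCOMManagementPack -Name "Microsoft.SystemCenter.Notifications.Internal" | Export-SCOMManagementPack -Path "C:\Temp"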

Let’s test our work.

I have a test rule that generates a critical alert whenever a specific event is written to the event log. Since I subscribed to all critical alerts – this should trigger my subscription and deliver an email:

It worked!

Advanced configuration – setting up a Run As Account to authenticate to the SMTP server:

Note – there is a Run-As Profile that ships with SCOM called the “Notification Account”. If this is not configured, SCOM will try to authenticate to the Exchange server using the Management Server Action Account. If this is not allowed to authenticate, you might need to configure this Run-As profile with a Run As Account.

For instance – I disabled anonymous relay on my Exchange server, so only authenticated connections can send mail through it. Subsequent notifications fail to go through – and I will see two possible alerts in the console:

Failed to send notification

Notification subsystem failed to send notification over ‘Smtp’ protocol to ‘kevinhol@opsmgr.net’. Rule id: Subscription02e8b6be_528d_407c_8edf_5f29dddaae6b

Failed to send notification using server/device

Notification subsystem failed to send notification using device/server ‘ex10mb1.opsmgr.net’ over ‘Smtp’ protocol to ‘kevinhol@opsmgr.net’. Microsoft.EnterpriseManagement.HealthService.Modules.Notification.SmtpNotificationException: Mailbox unavailable. The server response was: 5.7.1 Client does not have permissions to send as this sender. Smtp status code ‘MailboxUnavailable’. Rule id: Subscription02e8b6be_528d_407c_8edf_5f29dddaae6b
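A quick way to confirm that the relay itself is what is rejecting the sender is to submit a test message from one of the management servers with the built-in Send-MailMessage cmdlet. A sketch – the server and recipient are the ones from the alerts above, the From address is a placeholder; the first call should fail with the same 5.7.1 error and the second (authenticated) call should be accepted:

# Anonymous attempt – expected to fail if relay is locked down
Send-MailMessage -SmtpServer "ex10mb1.opsmgr.net" -From "scomnotify@opsmgr.net" -To "kevinhol@opsmgr.net" -Subject "SCOM relay test" -Body "anonymous test"

# Authenticated attempt – should go through
Send-MailMessage -SmtpServer "ex10mb1.opsmgr.net" -From "scomnotify@opsmgr.net" -To "kevinhol@opsmgr.net" -Subject "SCOM relay test" -Body "authenticated test" -Credential (Get-Credential)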

In this case – I must configure the Run-As account with a credential that is able to authenticate properly with my Mail Server. I already have a user account and mailbox set up: OPSMGR\scomnotify

Under Administration > Run As Configuration > Accounts – create a Run As Account.

The account type will be “Windows” and give it a name that makes sense:

Input the user account credentials:

Choose “More Secure” and click Next, then Close.

So – we have created our Run As Account – next we need to choose where to distribute it. Account credential distribution is part of the “More Secure” option – we need to choose which Health Services will be allowed to use this credential. In this case – we want to distribute the account to the management server pool in SCOM 2012 that handles notifications.

Open the properties of our newly created action account, and select the Distribution tab:

Click “Add”, and in the Option field – change it to “Search by Resource Pool Name” and click Search:

Choose the Notifications Resource Pool, click Add, and OK:

Now we have created our Run As Account for notifications, and then distributed it to the Notifications Resource Pool (which contains all management servers dynamically).
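You can sanity-check both pieces from the shell – a small read-only sketch, assuming the display names used in this walkthrough (credential distribution itself is easiest to verify on the account’s Distribution tab in the console):

Import-Module OperationsManager
Get-SCOMRunAsAccount | Where-Object { $_.Name -like "*Notification*" } | Select-Object Name, Description
Get-SCOMResourcePool -DisplayName "Notifications Resource Pool"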

Next – we need to configure the Run As Profile – which will associate this account credential with the actual Notification workflows.

Under Administration > Run As Configuration > Profiles, find the “Notification Account” profile. Open the properties of this Profile.

Under Run As Accounts – click Add:

Select our Notification Run As Account, and click OK

Then Save it. This will update the Microsoft.SystemCenter.SecureReferenceOverride MP with these credentials and configurations for notification workflows.

From this point forward, whichever management server in the Notifications Resource Pool is currently responsible for handling notifications will spawn a MonitoringHost.exe process under the credential that we configured.

This credential will be used to authenticate to the Exchange server to send SMTP notifications. Now my email notifications are flowing smoothly once again! If the current management server goes down, another management server in the Notifications Resource Pool will pick up this responsibility and spawn the process, and continue sending notifications.
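If you want to verify which account the notification workflows are actually running under, check the owner of the MonitoringHost.exe processes on the responsible management server – a sketch using WMI via CIM (you will normally see one MonitoringHost.exe per credential in use):

Get-CimInstance Win32_Process -Filter "Name='MonitoringHost.exe'" | ForEach-Object {
    $owner = Invoke-CimMethod -InputObject $_ -MethodName GetOwner
    "{0}  PID {1}  {2}\{3}" -f $_.Name, $_.ProcessId, $owner.Domain, $owner.User
}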

December 17, 2014
by dbissett
0 comments

Monitoring GPO Changes in SCOM

Auditing Group Policy change

For Active Directory admins and security-minded folks, the Advanced Group Policy Management (AGPM) tool is great for managing change. SCOM is good at alerting, but it does not necessarily do everything out of the box, even with the Group Policy management pack.
Say you want to know when a GPO is:
1) Created
2) Deleted
3) Modified
4) Permissions changed
5) Linked
Here is a great blog post on enabling auditing of Group Policy changes for AD and SYSVOL. Once this is in place, just create a rule that filters on event 5136 or event 5137 (Windows Server 2008 or above); a quick way to confirm the events are being logged is sketched after the link.
http://blogs.msdn.com/b/canberrapfe/archive/2012/05/02/auditing-group-policy-changes.aspx
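Before building the SCOM rule, it is worth confirming on a domain controller that auditing is actually producing those events after a test GPO change. A small sketch using Get-WinEvent (5137 = directory object created, 5136 = directory object modified; the groupPolicyContainer match narrows it to GPOs):

Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 5136, 5137; StartTime = (Get-Date).AddHours(-1) } |
    Where-Object { $_.Message -match 'groupPolicyContainer' } |
    Select-Object TimeCreated, Id, ProviderName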

August 18, 2014
by dbissett
0 comments

4.2.1 SMTP service not ready

Summary:

After creating new Receive Connectors on Multi-Role Exchange 2013 Servers, customers may encounter mail flow/transport issues within a few hours/days. Symptoms such as:

  • Sporadic inability to connect to the server over port 25
  • Mail stuck in the Transport Queue both on the 2013 servers in question but also on other SMTP servers trying to send to/through it
  • NDRs being generated due to delayed or failed messages

This happens because the Receive Connector was incorrectly created (which is very easy to do), resulting in two services both trying to listen on port 25 (the Microsoft Exchange FrontEnd Transport Service & the Microsoft Exchange Transport Service). The resolution to this issue is to ensure that you specify the proper “TransportRole” value when creating the Receive Connector either via EAC or Shell. You can also edit the Receive Connector after the fact using Set-ReceiveConnector.

Detailed Description:

Historically, Exchange Servers listen on & send via port 25 for SMTP traffic as it’s the industry standard. However, you can listen/send on any port you choose as long as the parties on each end of the transmission agree upon it.

Exchange 2013 brought a new Transport Architecture & without going into a deep dive, the Client Access Server (CAS) role runs the Microsoft Exchange FrontEnd Transport Service which listens/sends on port 25 for SMTP traffic. The Mailbox Server role has the Microsoft Exchange Transport Service which is similar to the Transport Service in previous versions of Exchange & also listens on port 25. There are two other Transport Services (MSExchange Mailbox Delivery & Mailbox Submission) but they aren’t relevant to this discussion.

So what happens when both of these services reside on the same server (like when deploying Multi-Role; which is my recommendation)? In this scenario, the Microsoft Exchange FrontEnd Transport Service listens on port 25, since it is meant to handle inbound/outbound connections with public SMTP servers (which expect to use port 25). Meanwhile, the Microsoft Exchange Transport Service listens on port 2525. Because this service is used for intra-org communications, all other Exchange 2013 servers in the Organization know to send using 2525 (however, Exchange 2007/2010 servers still use port 25 to send to multi-role 2013 servers, which is why Exchange Server Authentication is enabled by default on your default FrontEndTransport Receive Connectors on a Multi-Role box; in case you were wondering).

So when you create a new Receive Connector on a Multi-Role Server, how do you specify which service will handle it? You do so by using the -TransportRole switch via the Shell or by selecting either “Hub Transport” or “FrontEnd Transport” under “Role” when creating the Receive Connector in the EAC.
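For example, a relay connector for network devices should be created against the FrontEnd Transport service from the start. A sketch from the Shell – the connector name, server name and device IP below are placeholders:

# Sketch – create the relay connector against FrontEnd Transport so it can share port 25 cleanly
New-ReceiveConnector -Name "Device-Relay" -Server "EX2013-01" -TransportRole FrontendTransport -Usage Custom -Bindings "0.0.0.0:25" -RemoteIPRanges "10.0.0.50"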

The problem is there’s nothing keeping you from creating a Receive Connector of Role “Hub Transport” (which it defaults to) that listens on port 25 on a Multi-Role box. What you then have is two different services trying to listen on port 25. This actually works temporarily, due to some .NET magic that I’m not savvy enough to understand, but regardless, eventually it will cause issues. Let’s go through a demo.

Demo:

Here’s the output of Netstat on a 2013 Multi-Role box with default settings. You’ll see MSExchangeFrontEndTransport.exe is listening on port 25 & EdgeTransport.exe is listening on 2525. These processes correspond to the Microsoft Exchange FrontEnd Transport & Microsoft Exchange Transport Services respectively.

Now let’s create a custom Receive Connector, as if we needed it to allow a network device to Anonymously Relay through Exchange (the most common scenario where I’ve seen this issue arise). Notice in the first screenshot, you’ll see the option to specify which Role should handle this Receive Connector. Also notice how Hub Transport is selected by default, as is port 25.

After adding this Receive Connector, see how the output of Netstat differs. We now have two different processes listening on the same port (25).

So there’s a simple fix to this. Just use Shell (there’s no GUI option to edit the setting after it’s been created) to modify the existing Receive Connector to be handled by the MSExchange FrontEndTransport Service instead of the MSExchange Transport Service. Use the following command:

Set-ReceiveConnector Test-Relay -TransportRole FrontendTransport

I recommend you restart both Transport Services afterwards.
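On an Exchange 2013 multi-role box that means restarting both services (names as they appear in services.msc), and a quick Get-ReceiveConnector afterwards confirms the role change and port bindings took effect – the server name here is a placeholder:

Restart-Service MSExchangeFrontEndTransport
Restart-Service MSExchangeTransport
Get-ReceiveConnector -Server "EX2013-01" | Format-Table Name, TransportRole, Bindings -AutoSize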

March 10, 2014
by dbissett
0 comments

Setting a domain account to auto logon to a server/workstation

Writing some code with Foglight and transactional scripts produced a requirement to automatically log some machines on so that this transaction-based processing can run.

Open Regedit

navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon

Create these new string values if they do not exist

DefaultUserName

DefaultPassword

DefaultDomainName

AutoAdminLogon

If any of the above values are missing, create them as new string values (REG_SZ). Set AutoAdminLogon to 1, and if AutoLogonCount exists, delete it. A PowerShell equivalent is sketched below.
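A rough PowerShell equivalent of the steps above – the account, domain and password are placeholders, and bear in mind that DefaultPassword ends up stored in clear text in the registry:

# Sketch – configure Winlogon auto-logon (password is stored in clear text)
$winlogon = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon'
Set-ItemProperty -Path $winlogon -Name DefaultUserName   -Value 'svc_transaction'
Set-ItemProperty -Path $winlogon -Name DefaultDomainName -Value 'MYDOMAIN'
Set-ItemProperty -Path $winlogon -Name DefaultPassword   -Value 'P@ssw0rd'
Set-ItemProperty -Path $winlogon -Name AutoAdminLogon    -Value '1'
Remove-ItemProperty -Path $winlogon -Name AutoLogonCount -ErrorAction SilentlyContinue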

January 10, 2014
by dbissett
0 comments

Finding all the Auto Forwards and then removing them in your Exchange org.

A useful script to find all the auto forwards in your Exchange org and export the results to a CSV:

Get-Mailbox -resultsize unlimited | Where {$_.ForwardingAddress -ne $null} | Select Name, ForwardingAddress, organizationalunit, whencreated, whenchanged, DeliverToMailboxAndForward | export-csv d:\forwardedusers.csv

Then, to remove all of these, trim the CSV down to the identity column only and use Get-Content to pipe it through Set-Mailbox. For a single mailbox the command looks like this:

Set-Mailbox -Identity <mailbox@mydomain.com> -DeliverToMailboxAndForward $false -ForwardingAddress $null -ForwardingSMTPAddress $null

The CSV contents for the identity column look like this:

Abdul Ajoke
Olutimehin Abayomi
Ibitoye Abimbola
Olasusi Abimbola
Lucas Abraham
Adeleye Adebanjo
Odude Adedayo
DeJong Anneke

 

Then use this:

get-content "D:\forwardedusers.csv" | Set-Mailbox -DeliverToMailboxAndForward $false -ForwardingAddress $null -ForwardingSMTPAddress $null
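For reference, the same clean-up can be done in one pass without the CSV round-trip – a minimal sketch, assuming you want to clear both forwarding attributes everywhere (run just the Get-Mailbox/Where-Object part on its own first to review what will change):

Get-Mailbox -ResultSize Unlimited |
    Where-Object { $_.ForwardingAddress -or $_.ForwardingSmtpAddress } |
    Set-Mailbox -ForwardingAddress $null -ForwardingSmtpAddress $null -DeliverToMailboxAndForward $false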

forwarders removed. :)

 

January 9, 2014
by dbissett
0 comments

Annoying issue with UAG and data buffer size

Had this issue today on UAG.

It’s time to talk a little about body parsing. One of IAG and UAG’s key functions is parsing the body of the files it delivers to connecting clients. The filter parses the files, looks for various HTML and script links within them, and “signs” them by adding the unique string of characters that you have probably seen before (for example, http://www.contoso.com/whalecom45c76f7678a87d876ca9096a6c/whalecom0/index.html). In case you are not familiar with the signing process at all, its purpose is to allow the server to expose multiple internal servers through a single IP and port. Each internal server gets a unique signature, and when a request arrives from the client, the IAG server looks up the signature and knows which internal server to forward the request to. You can think of it like valet tickets, which let the valet know which car to bring around when you’re done eating.

The IAG server has an engine called “SRA”, which performs this magic by loading each file into a special buffer in memory, and searching through it for various HTML and JavaScript tags. This introduces several challenges that you may have run into along the way.

A common issue with file parsing is that the buffer allocated in memory for it is limited. Normally, IAG is expected to parse text files like HTML, ASP and JS, and these files are usually quite small – hardly more than a few hundred kilobytes. Sometimes, though, an unusually large file needs to be delivered to the client – for example, if a user downloads a large text file from a SharePoint site, or a large attachment from an email message in OWA. We’ve also seen cases where software that generates usage reports creates very large HTML files. If the file is larger than the default buffer size (which is 10 MB), the buffer fills up and the server throws an error. In older versions of IAG, before SP2U3 (http://blogs.technet.com/ben/archive/2010/03/08/it-s-that-time-of-the-year-again.aspx), this would result in a generic and unintelligible 500 error, but following Update 3, it sends a clear message to the Web Monitor.

So…what if you need to have larger files go through the server? Well, there are several options:

1. Option one: Increase the buffer size. This is discussed in detail in Update 4 for IAG SP1 (http://support.microsoft.com/kb/955123), but basically, it involves adding a registry value with a larger buffer allocation. This is the procedure (a PowerShell equivalent is sketched after these steps):

a. Using the Registry Editor, navigate to:

HKEY_LOCAL_MACHINE\SOFTWARE\WhaleCom\e-Gap\von\UrlFilter

b. Create a new DWORD value and name it MaxBodyBufferSize

c. Edit the value to the maximum file size you want to support, in bytes. For example, to allow 20 MB files through, enter a value of 20000000 decimal.

d. Close the registry editor

e. Activate the UAG configuration (otherwise, the new settings will revert after a server reboot)

f. Restart IIS (type IISRESET in a CMD window, or reboot the server)

One must keep in mind, though, that the buffer is located in the computer’s memory, and if many users are connecting, it could use up a lot of memory even if no large files are being downloaded. Microsoft recommends setting this value to the lowest possible value.
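The PowerShell equivalent of steps a–d, as promised above – the 20000000 value is just the 20 MB example from step c, and you still need to activate the UAG configuration and restart IIS afterwards:

# Sketch – create/raise the body-parsing buffer limit (value is in bytes)
$urlFilter = 'HKLM:\SOFTWARE\WhaleCom\e-Gap\von\UrlFilter'
New-ItemProperty -Path $urlFilter -Name MaxBodyBufferSize -PropertyType DWord -Value 20000000 -Force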

2. Option two: Skip the parsing of some files. This option has a special GUI element that allows you to eliminate the parsing of specific servers, and/or specific URLs. This is controlled via the Advanced Trunk Configuration. To configure it, follow these steps:

a. For the relevant trunk, go to Advanced Trunk Configuration.

b. Switch to the “Application Access Portal” tab.

c. Click on EDIT next to “Don’t parse the bodies of these requests”.

d. Click ADD to add a server:

– The server name is the INTERNAL name – the name the IAG server would use to contact the server (and not the public portal URL)

– The server name can be specified using RegEx. For example Domino.* would affect all servers that start with the word Domino, so can cover an entire farm of servers.

– If the server name contains characters that are considered “non-literals” for RegEx, they need to be escaped with a backslash. For example, a dot (.) is a non-literal, so the server name www.contoso.com would have to be specified as www\.contoso\.com

e. Click ADD at the bottom of the tab, to add URLs:

– The URL is also internal, so should not include the Whale Signature.

– The URL can also use RegEx, so you could use something like /docs/.* to cause IAG to skip all files read from the docs library of some server.

– The above also means that one needs to be careful when writing the URLs, to make sure they are not missed because they contain non-literals. For a complete reference on RegEx, refer to the IAG Advanced User Guide, Appendix B.

3. Option three: Skip parsing based on content type. This option is suitable if you want to apply or block parsing based on the content type of the file. For example, you might feel that all TEXT files should be parsed, but JavaScript should not. By default, the server is configured to parse these content types:

· Text/.*

· Application/x-javascript.*

· Application/x-vermeer-rpc

· Application/x-ica

To change this behavior, go to Advanced Trunk Configuration and switch to the “Application Customization” tab. Under “search and replace on content-type”, add, remove or edit the default content types and configure them to your liking. Please note that content type is not the same thing as file extension, so make sure in advance that you know exactly what the content type of the files you want to control is.

4. Option four: Skip parsing based on the application type. This is suitable if you know that a specific application should be completely skipped for body parsing. This is not a popular option, so I won’t detail it here. Refer to TechNet if you want more details: http://technet.microsoft.com/en-us/library/dd278134.aspx

January 7, 2014
by dbissett
0 comments

Useful script for extracting SMTP addresses from a CSV containing accounts

Hi All,

 

This is quite a good utility for extracting SMTP addresses en masse from a CSV.

Just put your account names into the CSV like this:

eaarons
sabad
jabagatnan
gabanilla
iabarquez
pabayon
aabbara
nabbas
sabbas
aabbasi
aabbasian
yabbey
vabbot
alabbott

 

This is the script, which creates an export.csv (excluding any errors) in the root of C:\

get-content "D:\scripts\smtp\standard.csv" | get-mailbox | Select-Object DisplayName, ServerName, PrimarySmtpAddress, @{Name="EmailAddresses";Expression={$_.EmailAddresses | Where-Object {$_.PrefixString -ceq "smtp"} | ForEach-Object {$_.SmtpAddress}}} | export-csv c:\export.csv -NoTypeInformation
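One caveat: if a name in the CSV does not resolve to a mailbox, Get-Mailbox will write a red error for that line. A small variant that quietly skips lookup failures and joins multiple smtp addresses into one cell (otherwise equivalent to the script above):

get-content "D:\scripts\smtp\standard.csv" |
    ForEach-Object { Get-Mailbox -Identity $_ -ErrorAction SilentlyContinue } |
    Select-Object DisplayName, ServerName, PrimarySmtpAddress,
        @{Name="EmailAddresses";Expression={($_.EmailAddresses | Where-Object {$_.PrefixString -ceq "smtp"} | ForEach-Object {$_.SmtpAddress}) -join ";"}} |
    Export-Csv c:\export.csv -NoTypeInformation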

 

 

And it produces output like this:

 

Aarons Emma sv-pr-ex04 Emma.Aarons@gstt.nhs.uk
Abanilla Ginayln sv-pr-ex49 Ginayln.Abanilla@gstt.nhs.uk
Abarquez Irvin sv-pr-ex46 Irvin.Abarquez@gstt.nhs.uk
Abbara Aula sv-pr-ex04 Aula.Abbara@gstt.nhs.uk
Abbas Sajjad sv-pr-ex47 Sajjad.Abbas@gstt.nhs.uk
Abbasi Amina sv-pr-ex44 Amina.Abbasi@gstt.nhs.uk
Abbasian Ali sv-pr-ex47 Ali.Abbasian@gstt.nhs.uk
Abbey Yvonne sv-pr-ex02 Yvonne.Abbey@gstt.nhs.uk
Abbot Vimmi sv-pr-ex03 Vimmi.Abbot@gstt.nhs.uk
Abbott Donna sv-pr-ex03 Donna.Abbott@gstt.nhs.uk
Abbott Julie sv-pr-ex02 Julie.Abbott@gstt.nhs.uk
Abbs Ian sv-pr-ex43 Ian.Abbs@gstt.nhs.uk
Abdala Sahan sv-pr-ex46 Sahan.Abdala@gstt.nhs.uk