VNET Peering – When to use and when not to use

VNET Peering has been available for almost a year now and has proved to be a very useful and popular feature; for a long time it was the most requested one. That said, as much as we would like to mesh together all our Azure VNETs into one lovely firewalled network topology, this isn’t always possible or suitable.

The situations in which VNET peering (or its associated features) cannot be used are as follows:

  • VNETs in different regions cannot have a peering relationship
  • VNETs with overlapping address spaces cannot be peered
  • VNETs which are both created using the Classic Deployment model cannot be peered
  • VNETs which are created using mixed deployment models cannot be peered across different subscriptions (although this will be available in the future)
  • Both VNETs must be created using the Resource Manager Deployment Model for Gateway Chaining (using a gateway in a peered VNET) to function
  • There is a default limit of 10 VNET peers per VNET. This can be raised to a maximum of 50 using Azure Support requests

This still leaves plenty of situations in which VNET peering can be very useful, providing the hub-and-spoke, high-speed, low-latency network that your Azure subscription(s) need.
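
If your VNETs do meet the requirements above, creating the peering itself is quick in PowerShell. Here’s a minimal sketch using the AzureRM module; the VNET and resource group names are placeholders, and note that a peering must be created from both sides before traffic will flow:

# Grab both VNETs (names and resource group are hypothetical)
$hub   = Get-AzureRmVirtualNetwork -Name "HubVnet" -ResourceGroupName "MyRG"
$spoke = Get-AzureRmVirtualNetwork -Name "SpokeVnet" -ResourceGroupName "MyRG"

# A peering is one-directional, so create it in both directions
Add-AzureRmVirtualNetworkPeering -Name "HubToSpoke" -VirtualNetwork $hub -RemoteVirtualNetworkId $spoke.Id
Add-AzureRmVirtualNetworkPeering -Name "SpokeToHub" -VirtualNetwork $spoke -RemoteVirtualNetworkId $hub.Id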

ADFS Additional Authentication Rules – MFA Issue Type

ADFS Claim / Additional Authentication rules can appear very complex and confusing, and that’s because they are! One thing that tripped me up recently is related to the issue section of a claim rule whereby MFA is specified. During a project, I created a rule from a template I had used for another customer. Upon saving the rule I found that it didn’t apply MFA as I was expecting, and instead caused an error message in ADFS during logon attempts.

The rule I had used was issuing a claim for the Azure MFA Server rather than the Azure MFA Cloud Service. To clarify, the difference in the claim type is as follows:

Azure Cloud MFA

=> issue(Type = "http://schemas.microsoft.com/claims/authnmethodsreferences", Value = "http://schemas.microsoft.com/claims/multipleauthn");

Azure MFA Server

=> issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod", Value = "http://schemas.microsoft.com/claims/multipleauthn");

This is an important distinction and needs to be considered when applying different types of authentication flows in ADFS.
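
As an aside, if you prefer to manage these rules in PowerShell rather than the ADFS GUI, you can set them per relying party trust. A minimal sketch, using the Azure Cloud MFA rule from above; the relying party trust name is a placeholder:

# Apply the Azure Cloud MFA claim rule to a relying party trust
$mfaRule = '=> issue(Type = "http://schemas.microsoft.com/claims/authnmethodsreferences", Value = "http://schemas.microsoft.com/claims/multipleauthn");'
Set-AdfsRelyingPartyTrust -TargetName "Microsoft Office 365 Identity Platform" -AdditionalAuthenticationRules $mfaRule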

Virtual Machine Core Count – Azure

Today is not the first time I have come across this issue, but I’m going to make sure it’s the last time I google how to find out the limits applied to an Azure subscription!

By default, different VM soft limits apply to different types of subscriptions. If you come across this issue, your error will look something like this:

New-AzureRmVM : Operation results in exceeding quota limits of Core. Maximum allowed: 10, Current in use: 10

From memory (this may not be correct), the limits are as follows for different subscription types:

  • Pay as you go – 10 cores per VM type / region
  • CSP – 30 cores per VM type / region
  • VL – 20 cores per VM type / region
  • EA – I’m not sure!

If you want to see how many cores you are allowed by default, log in to Azure PowerShell and run the following command, substituting your region:

Get-AzureRmVMUsage -Location "West Europe"

This will give you an output similar to below:

PS C:\WINDOWS\system32> Get-AzureRmVMUsage -Location "West Europe"

Name                         Current Value Limit  Unit
----                         ------------- -----  ----
Availability Sets                        3  2000 Count
Total Regional Cores                    10    10 Count
Virtual Machines                         8 10000 Count
Virtual Machine Scale Sets               0  2000 Count
Standard Av2 Family Cores               10    10 Count
Basic A Family Cores                     0    10 Count
Standard A0-A7 Family Cores              0    10 Count
Standard A8-A11 Family Cores             0    10 Count
Standard D Family Cores                  0    10 Count
Standard Dv2 Family Cores                0    10 Count
Standard G Family Cores                  0    10 Count
Standard DS Family Cores                 0    10 Count
Standard DSv2 Family Cores               0    10 Count
Standard GS Family Cores                 0    10 Count
Standard F Family Cores                  0    10 Count
Standard FS Family Cores                 0    10 Count
Standard NV Family Cores                 0    12 Count
Standard NC Family Cores                 0    12 Count
Standard H Family Cores                  0     8 Count
Standard LS Family Cores                 0    10 Count

As you can see, for each region there is a subset of machine types. If you need to raise a core limit, you raise an Azure support ticket requesting an increase for the required region and VM type. This doesn’t cost anything and, in my experience, is usually done within 24 hours.
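
As a small time-saver, you can also filter the output down to just the quotas you have exhausted. A quick sketch, assuming the usage objects expose Name, CurrentValue and Limit as in the table above:

# List only the quotas that are fully used in this region
Get-AzureRmVMUsage -Location "West Europe" |
    Where-Object { $_.Limit -gt 0 -and $_.CurrentValue -ge $_.Limit } |
    Select-Object @{ n = 'Name'; e = { $_.Name.LocalizedValue } }, CurrentValue, Limit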

Hopefully this helps some folk out there who come across this issue. If you haven’t seen it yet and are planning an Azure rollout, it would be worth requesting an increase before starting your project!

Azure Classic to Resource Manager Migration – Validation failed

I am starting to investigate the migration of resources from Azure’s Classic deployment model into the shiny Azure Resource Manager model.

The first step for me was to attempt to validate the VNET I wanted to migrate, to see if it was compatible. I ran the command listed in the following article (https://azure.microsoft.com/en-gb/documentation/articles/virtual-machines-windows-ps-migration-classic-resource-manager/):

    Move-AzureVirtualNetwork -Validate -VirtualNetworkName $vnetName

As I expected (nothing is ever simple, is it?!), I received an error, as shown below. The problem was that the ValidationMessages output was limited and didn’t really show any detail; in my case, all it showed me was the name of my VNET.

Validation failed.  Please see ValidationMessages for details

To get more detailed information out of the cmdlet, I ended up saving the output of the validation command to a variable and then inspecting its ValidationMessages property, as shown below:

$validate = Move-AzureVirtualNetwork -Validate -VirtualNetworkName $vnetName -Verbose

$validate.ValidationMessages

This gave me lots of detail, and I discovered I had typed the VNET name incorrectly. D’oh! I forgot that when you create a Classic VNET in the new portal, the actual name of the VNET is not what you see in the new portal; you need to look in the old manage.windowsazure.com portal to see the actual name.
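
Incidentally, if you don’t fancy logging into the old portal, you should be able to list the real names of your Classic VNETs straight from the Service Management PowerShell module:

# List the actual (internal) names of all Classic VNETs in the subscription
Get-AzureVNetSite | Select-Object Name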

Hopefully this helps some folk out there!

Azure Cloud App Discovery & PAC Files

Azure Cloud App Discovery is a seriously cool piece of technology. Being able to scan your entire computer estate for cloud SaaS applications, in either a targeted or catch-all manner, can really help discover the ‘Shadow IT’ going on in your environment. Nowadays, a lack of local admin rights won’t necessarily stop users from adopting any cloud SaaS app they think will increase their productivity, and users don’t generally think about the impact of using such applications or the potential for data leakage.

But, as with lots of Microsoft’s other cloud technologies being launched left, right and centre at the moment, the Enterprise isn’t catered for as well as it might hope. Most Enterprise IT departments leverage some kind of web filtering or proxying. This may be transparent proxying, in which case you can count your blessings, as Cloud App Discovery will work just fine. If you explicitly define a proxy in your internet settings, you can get around that by adding particular registry keys. However, if you use a PAC file to control access to the internet, then unfortunately Cloud App Discovery will not work for you. This is a shame, as a PAC file is, in my opinion, the best way to approach web proxying in an Enterprise, but that’s another story.

From what I have heard, a feature is in the works which will allow you to configure Cloud App Discovery agents to log their findings to an internal data collector. This collector can sit on a local server and upload data to Azure on your behalf, which is a much more elegant solution to the problem of collecting data from multiple machines. As far as I know, though, this feature is not available yet. I’ll be keeping my ear to the ground and will let you know if this changes.

In the meantime, if you are desperate to get your data collection up and running, you could switch to explicitly defined proxying and configure registry settings for your clients as per the following MS article:

https://azure.microsoft.com/en-gb/documentation/articles/active-directory-cloudappdiscovery-registry-settings-for-proxy-services/

Cloud App Discovery is a feature of Azure Active Directory Premium, a toolkit designed to take your Azure Active Directory to new, cloudier heights. Azure AD Premium can be bought standalone, or comes bundled with the Enterprise Mobility Suite. I would highly recommend it to Office 365 customers as it can give you and your users some great new features which can help make your Azure AD the best it can be!

Edit: It looks like PAC file support has been added rather surreptitiously. No announcement was made, and the KB articles haven’t been updated. I happened to check the Change Log today and Release 1.0.10.1 includes an option to tweak your PAC file to support Cloud App Discovery. https://social.technet.microsoft.com/wiki/contents/articles/24616.cloud-app-discovery-agent-changelog.aspx

Alternatively, if you use a PAC file (Proxy Auto Configuration) to manage proxy configuration, please tweak your file to add https://policykeyservice.dc.ad.msft.net/ as an exception URL.

I’m yet to test this myself but it looks promising!
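
For reference, I’d expect the tweak to look something like the following inside the PAC file; the corporate proxy address is just a placeholder:

function FindProxyForURL(url, host) {
    // Let the Cloud App Discovery agent reach its endpoint directly
    if (shExpMatch(host, "policykeyservice.dc.ad.msft.net"))
        return "DIRECT";

    // Everything else still goes via the corporate proxy (placeholder address)
    return "PROXY proxy.contoso.com:8080";
}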

Azure Backup and Resource Manager Virtual Machines

Ask anybody in the Azure teams at Microsoft and they will tell you that Resource Manager is the future of Microsoft’s Azure strategy. And I agree; it’s much more versatile and robust, and finally gives us the ability to multitask rather than waiting for one task to complete before starting another. For all its benefits, though, it is still an immature system. One big bugbear of mine is not being able to change a VM’s Availability Set once the VM is created, but that’s not what I wanted to write about today.

Microsoft recommends that all new services in Azure be created using the Azure Resource Manager model. That’s all well and good, unless you want to back those servers up using Azure Backup, in which case you will have a problem.

I recently attempted to do this and was happily running through the guide provided by Microsoft entitled Deploy and manage backup to Azure for Windows Server/Windows Client using PowerShell. This article includes the following warning:

[Image: warning from the Azure Backup article]

I didn’t have a backup provider registered yet, so I tried to run the command, only to be told:

register-AzureProvider : The term 'register-AzureProvider' is not recognized as the name of a cmdlet, function,
script file, or operable program. Check the spelling of the name, or if a path was included, verify that the
path is correct and try again.
At line:1 char:1
+ register-AzureProvider -ProviderNamespace Microsoft.DevTestLab
+ ~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (register-AzureProvider:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

It seems that this cmdlet has been deprecated in Azure PowerShell v1.0.
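
In the v1.0 module the equivalent functionality lives in the AzureRM cmdlets, so the working call (for the namespace shown in the error above) would presumably be:

# v1.0 replacement for the deprecated register-AzureProvider
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.DevTestLab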

Additionally, when looking into this error, I found the following tidbit of information on Microsoft’s Preparing your environment to back up Azure virtual machines page:

Backing up Azure Resource Manager-based (aka IaaS V2) virtual machines is not supported.

Bit of a showstopper, eh? Luckily, my colleague and I simply wanted to schedule a backup of the ADFS database, so to work around this we added a secondary data disk to the VM and used Windows Server Backup to take a System State backup of the primary ADFS server. For those planning a more extensive Azure Backup strategy, you may need to rethink your use of v2 (Resource Manager) virtual machines.
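
If you want to script that workaround, the System State backup itself is a one-liner with wbadmin; a quick sketch, assuming the secondary data disk came up as drive F::

# Take a System State backup to the secondary data disk (drive letter is an assumption)
wbadmin start systemstatebackup -backupTarget:F: -quiet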

As with all things Azure, things change all the time, and I’m sure Microsoft will add support for this feature very soon.

Set-AzureNetworkSecurityGroupToSubnet – The virtual network name is not valid.

Another catchy title which I’m sure you’ll be saying to yourself all day.

I came across a problem recently whilst attempting to attach an Azure Network Security Group to a subnet within a virtual network. The NSG was part of an ADFS configuration in Azure: it was to wrap around our ‘LAN’ subnet and protect it from the ADFS proxy servers and the internet by locking inbound traffic down to allowed ports only.

I created my NSG by using:

New-AzureNetworkSecurityGroup -Name "LAN_NSG" -Location "West Europe"

And then attempted to map it to my LAN subnet by using:

Get-AzureNetworkSecurityGroup -Name "LAN_NSG" | Set-AzureNetworkSecurityGroupToSubnet -VirtualNetworkName 192.168.x.x -SubnetName 'LAN'

However, I got a wall of red, with this descriptive error message:

The virtual network name 192.168.x.x is not valid

I then tried specifying the virtual network name in a combination of formats, and also by using a variable:

'192.168.x.x'
"192.168.x.x"
$vnetname = '192.168.x.x'

And so on and so forth, all with the same result. I could, however, run:

Get-AzureVNetSite -VNetName 192.168.x.x

Which returned a valid result, confirming my thinking that the VNet name was valid and correct. Eventually I logged a call with Azure Support to find out what was going on. I received a prompt response and was told by the engineer dealing with the call that the problem could be reproduced, and that it occurs consistently when trying to apply an NSG to a VNet whose name starts with a number.

At the moment I’m waiting to hear back on whether this is a known issue and whether there is a prettier workaround than creating a new VNet and migrating my VMs over to it. Another workaround is to apply the NSG to the VMs themselves, which may be the route we travel until this issue is solved.
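
For that second workaround, the Classic PowerShell module can associate an NSG directly with a VM. A minimal sketch, with the cloud service and VM names as placeholders:

# Associate the NSG with an individual Classic VM, then push the update
Get-AzureVM -ServiceName "MyCloudService" -Name "ADFS01" |
    Set-AzureNetworkSecurityGroupConfig -NetworkSecurityGroupName "LAN_NSG" |
    Update-AzureVM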