VNET Peering – When to use and when not to use

VNET Peering has been generally available for almost a year now, and it has proved to be a very useful and popular feature – for a long time it was the most requested one. That said, as much as we would like to mesh together all our Azure VNETs into one lovely firewalled network topology, this isn’t always possible or suitable.

The situations where VNET peering (or its associated features) cannot be used are as follows:

  • VNETs in different regions cannot have a peering relationship
  • VNETs with overlapping address spaces cannot be peered
  • VNETs which are both created using the Classic Deployment model cannot be peered
  • VNETs which are created using mixed deployment models cannot be peered across different subscriptions (although this will be available in the future)
  • Both VNETs must be created using the Resource Manager Deployment Model for Gateway Chaining (using a gateway in a peered VNET) to function
  • There is a default limit of 10 VNET peers per VNET. This can be raised to a maximum of 50 using Azure Support requests
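Where peering is possible, it only takes a couple of Azure PowerShell commands to set up. The sketch below assumes two Resource Manager VNETs in the same region and resource group (the names hub-vnet, spoke-vnet and MyRG are hypothetical) and uses the AzureRM cmdlets; note that a peering has to be created in both directions:

```powershell
# Look up both VNETs (names and resource group are hypothetical examples)
$hub   = Get-AzureRmVirtualNetwork -Name "hub-vnet"   -ResourceGroupName "MyRG"
$spoke = Get-AzureRmVirtualNetwork -Name "spoke-vnet" -ResourceGroupName "MyRG"

# Peer hub -> spoke, offering the hub's gateway to the spoke
Add-AzureRmVirtualNetworkPeering -Name "hub-to-spoke" -VirtualNetwork $hub `
    -RemoteVirtualNetworkId $spoke.Id -AllowGatewayTransit

# Peer spoke -> hub, consuming the hub's gateway (gateway chaining)
Add-AzureRmVirtualNetworkPeering -Name "spoke-to-hub" -VirtualNetwork $spoke `
    -RemoteVirtualNetworkId $hub.Id -UseRemoteGateways
```

The -AllowGatewayTransit / -UseRemoteGateways pair is what enables the gateway chaining mentioned above, which is why both VNETs must be Resource Manager deployments.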

This still leaves many situations in which VNET peering can be very useful, providing the hub-and-spoke, high-speed, low-latency network which your Azure subscription(s) need.


ADFS Additional Authentication Rules – MFA Issue Type

ADFS claim and additional authentication rules can appear very complex and confusing – and that’s because they are! One thing that tripped me up recently relates to the issue section of a claim rule where MFA is specified. During a project, I created a rule from a template I had used for another customer. Upon saving the rule, I found that it didn’t apply MFA as I was expecting; instead, it caused an error message in ADFS during logon attempts.

The rule I had used was issuing a claim for the Azure MFA Server rather than the Azure MFA Cloud Service. To clarify, the difference in the claim type is as follows:

Azure Cloud MFA

=> issue(Type = "", Value = "");

Azure MFA Server

=> issue(Type = "", Value = "");


This is an important distinction and needs to be considered when applying different types of authentication flows in ADFS.
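As an illustration of where these claims end up, a commonly used additional authentication rule issues the standard multipleauthn authentication method claim for any request arriving from outside the corporate network. The sketch below shows that pattern applied to a single relying party via the Set-AdfsRelyingPartyTrust cmdlet (the trust name "Office 365" is a hypothetical example):

```powershell
# Require MFA for extranet requests to a given relying party
# (the relying party name "Office 365" is a hypothetical example)
$rule = @'
c:[Type == "http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork", Value == "false"]
 => issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod",
          Value = "http://schemas.microsoft.com/claims/multipleauthn");
'@

Set-AdfsRelyingPartyTrust -TargetName "Office 365" -AdditionalAuthenticationRules $rule
```

The condition on the left of the => is what scopes the rule to extranet requests; the issue statement on the right is where the claim type distinction above matters.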



Virtual Machine Core Count – Azure

Today is not the first time I have come across this issue, but I’m going to make sure it’s the last time I google how to find out the limits applied to an Azure subscription!

By default, different VM soft limits apply to different types of subscriptions. If you come across this issue, your error will look something like this:

New-AzureRmVM : Operation results in exceeding quota limits of Core. Maximum allowed: 10, Current in use: 10

From memory (this may not be correct), the limits are as follows for different subscription types:

  • Pay as you go – 10 cores per VM type / region
  • CSP – 30 cores per VM type / region
  • VL – 20 cores per VM type / region
  • EA – I’m not sure!

If you want to see how many cores you are allowed by default, you need to log in to Azure PowerShell and run the following command, substituting your region:

Get-AzureRmVMUsage -Location "West Europe"

This will give you an output similar to below:

PS C:\WINDOWS\system32> Get-AzureRmVMUsage -Location "West Europe"

Name                         Current Value Limit  Unit
----                         ------------- -----  ----
Availability Sets                        3  2000 Count
Total Regional Cores                    10    10 Count
Virtual Machines                         8 10000 Count
Virtual Machine Scale Sets               0  2000 Count
Standard Av2 Family Cores               10    10 Count
Basic A Family Cores                     0    10 Count
Standard A0-A7 Family Cores              0    10 Count
Standard A8-A11 Family Cores             0    10 Count
Standard D Family Cores                  0    10 Count
Standard Dv2 Family Cores                0    10 Count
Standard G Family Cores                  0    10 Count
Standard DS Family Cores                 0    10 Count
Standard DSv2 Family Cores               0    10 Count
Standard GS Family Cores                 0    10 Count
Standard F Family Cores                  0    10 Count
Standard FS Family Cores                 0    10 Count
Standard NV Family Cores                 0    12 Count
Standard NC Family Cores                 0    12 Count
Standard H Family Cores                  0     8 Count
Standard LS Family Cores                 0    10 Count

As you can see, for each region there is a quota for each machine family. If you need to raise a core limit, you need to raise an Azure support ticket and request an increase for the required region and VM type. This does not cost anything and, in my experience, is usually done within 24 hours.

Hopefully this helps some folk out there who come across this issue. If you haven’t seen this yet and are planning an Azure rollout, it would be worth requesting the increase prior to starting your project!
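To spot which quotas you are about to hit, the output of Get-AzureRmVMUsage can be filtered. The sketch below assumes the usage objects expose CurrentValue, Limit, and a Name with a LocalizedValue property (as the cmdlet appeared to at the time of writing):

```powershell
# Show only the quotas that are already at their limit in a region
Get-AzureRmVMUsage -Location "West Europe" |
    Where-Object { $_.Limit -gt 0 -and $_.CurrentValue -ge $_.Limit } |
    ForEach-Object { "{0}: {1}/{2}" -f $_.Name.LocalizedValue, $_.CurrentValue, $_.Limit }
```

Running this before a deployment gives you the exact family names to quote in your support ticket.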

Azure Classic to Resource Manager Migration – Validation failed

I am starting to investigate the migration of resources from Azure’s Classic deployment model into the shiny Azure Resource Manager model.

The first step for me was to attempt to validate the VNET which I wanted to migrate, to see if it was compatible. I ran the command listed on the following website:

    Move-AzureVirtualNetwork -Validate -VirtualNetworkName $vnetName

As I expected (nothing is ever simple, is it?!), I received the error shown below. The problem was that the ValidationMessages output was limited and didn’t really show me any detail. In my case, all it showed me was the name of my VNET.

Validation failed.  Please see ValidationMessages for details

In order to get more detailed information out of the cmdlet, I ended up saving the output of the validation command to a variable and then inspecting the variable, as shown below:

$validate = Move-AzureVirtualNetwork -Validate -VirtualNetworkName $vnetName -Verbose
$validate


This gave me lots of detail, and I discovered that I had typed the VNET name incorrectly. D’oh! I had forgotten that when you create a Classic VNET in the new portal, the actual name of the VNET is not what you see in the new portal – you need to look in the old portal to see the actual name.
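If you want to double-check the real (service-side) names of your Classic VNETs before validating, the classic Get-AzureVNetSite cmdlet can list them – a quick sketch:

```powershell
# List the service-side names of all Classic virtual networks
# (these may differ from the display names shown in the new portal)
Get-AzureVNetSite | Select-Object -ExpandProperty Name
```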

Hopefully this helps some folk out there!

Azure Cloud App Discovery & PAC Files

Azure Cloud App Discovery is a seriously cool piece of technology. Being able to scan your entire computer estate for cloud SaaS applications, in either a targeted or catch-all manner, can really help discover the ‘Shadow IT’ going on in your environment. Nowadays, a lack of local admin rights won’t necessarily stop users from adopting whichever cloud SaaS apps are going to increase their productivity. Users don’t generally think about the impact of using such applications, and the potential for data leakage.

But, as with lots of Microsoft’s other cloud technologies being launched left, right and centre at the moment, the Enterprise isn’t catered for as well as it might hope. Most Enterprise IT departments leverage some kind of web filtering or proxying. This may be transparent proxying, in which case you can count your blessings, as Cloud App Discovery will work just fine. If you are explicitly defining a proxy in your internet settings, then you can get around that by adding particular registry keys. However, if you are using a PAC file to control access to the internet, then unfortunately Cloud App Discovery will not work for you. This is a shame, as PAC files are, in my opinion, the best way to approach web proxying in an Enterprise – but that’s another story.

From what I have heard, a feature is in the works which will allow you to configure Cloud App Discovery agents to log their findings to an internal data collector. This data collector can sit on a local server and upload data to Azure on your behalf, which is a much more elegant solution to the problem of data collection from multiple machines. However, as far as I know, this feature is not available yet. I’ll be keeping my ear to the ground and will let you know if this changes.

In the meantime, if you are desperate to get your data collection up and running, you could change to explicitly defined proxying and configure registry settings for your clients as per the following MS article:

Cloud App Discovery is a feature of Azure Active Directory Premium, a toolkit designed to take your Azure Active Directory to new, cloudier heights. Azure AD Premium can be bought standalone, or comes bundled with the Enterprise Mobility Suite. I would highly recommend it to Office 365 customers as it can give you and your users some great new features which can help make your Azure AD the best it can be!

Edit: It looks like PAC file support has been added rather surreptitiously. No announcement was made, and the KB articles haven’t been updated. I happened to check the Change Log today, and a recent release includes an option to tweak your PAC file to support Cloud App Discovery.

Alternatively, if you use a PAC file (Proxy Auto Configuration) to manage proxy configuration, please tweak your file to add as an exception URL.

I’m yet to test this myself but it looks promising!

Azure Backup and Resource Manager Virtual Machines

Ask anybody in the Azure teams at Microsoft and they will tell you that Resource Manager is the future of Microsoft’s Azure strategy. And I agree; it’s much more versatile and robust, and it finally gives us the ability to multitask rather than waiting for one task to complete before starting another. For all its benefits, though, it is still an immature system. One big bugbear of mine is to do with changing Availability Sets once a VM is created, but that’s not what I wanted to write about today.

Microsoft recommend that all new services in Azure should be created using the Azure Resource Manager model. Which is all well and good, unless you want to back those servers up using Azure Backup – in which case you will have a problem.

I recently attempted to do this and was happily running through the guide provided by Microsoft entitled Deploy and manage backup to Azure for Windows Server/Windows Client using PowerShell. This article includes the following warning:

Warning: Azure Backup

I didn’t have a backup provider registered yet, so I tried to run the command, only to be told:

register-AzureProvider : The term 'register-AzureProvider' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ register-AzureProvider -ProviderNamespace Microsoft.DevTestLab
+ ~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (register-AzureProvider:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

It seems that this cmdlet has been deprecated in Azure PowerShell v1.0.
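For what it’s worth, in Azure PowerShell 1.0 and later the resource provider cmdlets were renamed with the Rm infix, so the equivalent command should be along these lines (a sketch, not verified against the original guide):

```powershell
# AzureRM-era replacement for the deprecated register-AzureProvider cmdlet
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.DevTestLab
```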

Additionally, when looking into this error, I found the following tidbit of information on Microsoft’s Preparing your environment to back up Azure virtual machines page:

Backing up Azure Resource Manager-based (aka IaaS V2) virtual machines is not supported.

Bit of a showstopper, eh? Luckily, my colleague and I simply wanted to schedule a backup of the ADFS database, so to work around this we added a secondary data disk to the VM and used Windows Server Backup to take a System State backup of the primary ADFS server. For those planning a more extensive Azure Backup strategy, you may need to rethink your use of v2 (Resource Manager) virtual machines.

As with all things Azure, things change all the time, and I’m sure Microsoft will add support for this feature very soon.


Set-AzureNetworkSecurityGroupToSubnet – The virtual network name is not valid.

Another catchy title which I’m sure you’ll be saying to yourself all day.

I came across a problem recently whilst attempting to attach an Azure Network Security Group to a subnet within a virtual network. The NSG related to an ADFS configuration in Azure: it was to be wrapped around our ‘LAN’ subnet to protect it from the ADFS proxy servers and the internet by locking down inbound traffic to only the allowed ports.

I created my NSG by using:

New-AzureNetworkSecurityGroup -Name "LAN_NSG" -Location "West Europe"

And then attempted to map it to my LAN subnet by using:

Get-AzureNetworkSecurityGroup -Name "LAN_NSG" | Set-AzureNetworkSecurityGroupToSubnet -VirtualNetworkName 192.168.x.x -SubnetName 'LAN'

However, I got a wall of red, with the descriptive error message of:

The virtual network name 192.168.x.x is not valid


I then tried specifying the virtual network name in a combination of formats, and also by using a variable, and so on and so forth – all with the same result. I could, however, run:

Get-AzureVNetSite -VNetName 192.168.x.x

This returned a valid result, confirming my thinking that the VNet name was valid and correct. Eventually I logged a call with Azure Support to try to find out what was going on. I received a prompt response and was told by the engineer dealing with the call that the problem could be reproduced, and was consistent when trying to apply an NSG to a VNet whose name starts with a number.

At the moment I’m waiting to hear back to see whether this is a known issue and if there is a prettier workaround other than creating a new VNet and migrating my VMs over to it. Another workaround is to apply the NSG to the VMs themselves, which may be the route that we travel until this issue is solved.
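The per-VM workaround can be scripted too. The sketch below assumes the classic Set-AzureNetworkSecurityGroupAssociation cmdlet behaves as I remember it, and uses hypothetical cloud service and VM names – worth verifying against the current Service Management module before relying on it:

```powershell
# Associate the NSG with a VM's network interface instead of the subnet
# (the cloud service and VM names here are hypothetical examples)
Get-AzureVM -ServiceName "MyCloudService" -Name "ADFS01" |
    Set-AzureNetworkSecurityGroupAssociation -NetworkSecurityGroupName "LAN_NSG" |
    Update-AzureVM
```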

Cloud Security 101

With Microsoft, Amazon and Google currently enticing businesses and consumers into cloud services with promises of resiliency, scalability and simplified administration, many companies are quite rightly moving services and data into the cloud in some way or other.

Before making such a leap however, questions must be asked about data security and compliance. Cloud services must be scrutinised in order to ensure that data geo location and security practices meet compliance requirements for the customer and their clients. After all, you are trusting these providers with your essential business data, and in some cases your intellectual property.

Edward Snowden’s leaks regarding NSA/GCHQ snooping, along with the leakage of personally identifiable information from services at Sony, eBay and the like, have left many people suspicious and wary of exactly who can get to their information. This blog post is here to show you a sample of the security features used to protect your data in Office 365 and Azure, and some of the ways your data is protected in transit.

Office 365 Security

The Office 365 service provides many layers of security. Microsoft want to be seen as a safe pair of hands in order to drive adoption, and for this reason they have security and privacy at the very top of their list of priorities.

The data in all Office 365 services is encrypted at rest using BitLocker and in addition to this, as of November 2014, all Office 365 data is encrypted again on a per file basis. This means that each individual file is encrypted using different keys, which are stored in an alternate location to the master key.

All administrative actions taken on Office 365, either by the tenant administrators or the service administrators, are audited and fully transparent. As an Office 365 customer, you can view and export a list of PowerShell commands run by Microsoft Support or your own administrators. Microsoft support technicians are given administrative access when required, based on the least-privilege model, and this access is time limited by default.

Data theft from inside or outside of the service is a serious concern to Microsoft and the Office 365 team work on the assumption that a compromise has already been made. A Red team exists whose sole job is to attempt to compromise the systems protecting customer data. They do this by attempting to gain access to test data, and a Blue team works in parallel to identify the Red team and counteract this threat. This is the equivalent of having your IT systems constantly penetration tested, which is more than most companies can say for their own IT systems!

Compliance in Office 365 is a hot topic and is critical to getting governmental departments, and health and financial companies, on board. Office 365 is compliant with many of the standards required in these sectors, such as ISO 27001, FISMA, and HIPAA.

All of these security features help to make Office 365 a platform which is likely to be far more secure and compliant than your own on-premises environment. Microsoft’s transparency on security and privacy is also far superior to that of their rivals, giving you the peace of mind needed to begin your move to the cloud.

Much more information can be found at the Office 365 Trust Center –

Azure & Microsoft Datacentre Security

The Microsoft Azure IaaS environment should be considered by customers as an extension of their Datacentre. Microsoft are staunch supporters of the concept that the data you place in cloud services is your data, not theirs. Encryption is in place across all servers, and nobody with physical access in the Datacentre has knowledge of which customer’s data is in which rack or server.

Direct access to the Azure Hypervisors is unavailable to customers and network isolation is used to separate traffic between tenants. There are also various methods customers can use to increase the privacy of their Azure traffic, such as using Azure Private Virtual Networks and Azure ExpressRoute, which creates a direct connection to Azure, keeping your inter-site traffic off the Internet.

The physical environment is highly secured and access is extremely limited by using separation of duties and roles to make sure that no one person has too much knowledge of the systems. Failure to abide by the Microsoft Datacentre security policies means instant dismissal for the employee. In addition to this, personally identifiable information is stored separately to non-personally identifiable information.

All access to customer data is blocked by default, using a zero-privileges policy. Where access is granted, it is time limited and fully audited. In addition, staff members who receive this access to customer data will not have physical datacentre access. These same physical and data-based access controls are also in place for Office 365 and all other Microsoft Online services.

From a compliance point of view, you can rest assured that Azure complies with the majority of major standards across the world. These compliance standards are not easy to achieve, but Microsoft remain committed to keeping their compliance up to date and as broad as possible. Some of the specific compliance standards which are verified for the Azure service are:

  • ISO/IEC 27001
  • SOC 1 and SOC 2 SSAE 16/ISAE 3402
  • UK G-Cloud
  • EU Model Clauses
  • Singapore MTCS
  • FedRAMP
  • Australia IRAP

Much more information can be found at the Azure Trust Center –

Convincing your boss, management boards and other business trustees that a move to the cloud is a secure one is a tough job, but Microsoft’s commitment to privacy, security and transparency makes it much easier to put together a viable business case which can help you reap the rewards of scalability, resiliency and compliance.

Exchange 2013 – File Share Witness on an Azure VM

My last TechEd session of the year was on Exchange 2013 HA and Site Resilience, and it came with an exciting announcement: as of January 1st 2015, Microsoft will support using an Azure IaaS VM as a File Share Witness for an on-premises DAG. This provides the ability for deployments with a two-datacenter DAG solution to add a FSW in a third datacenter, providing the ultimate in site resiliency for Exchange Database Availability Groups. It’s worth pointing out that this is not the same as a Cloud Witness in the new Windows Server Technical Preview; the Exchange team have not yet decided whether they will do the work to make sure the Cloud Witness feature will be supported in Exchange 2013.

The high level steps for doing this are as follows:

  • Create Azure networking and establish a VPN (if not already in place)
  • Configure Azure VMs (a directory server and a file server)
  • Configure Exchange 2013 to use the Azure FSW

This is exciting news and is sure to be a guaranteed hit for those with a DAG stretched over two datacenters, as it allows quorum to be maintained when a single datacenter is lost.
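The final configuration step might look something like the following sketch, assuming a hypothetical DAG named DAG1 and an Azure file server named AZ-FSW01 that has already been domain joined and prepared as a witness server:

```powershell
# Point the DAG's witness at the Azure IaaS file server
# (the DAG name, server name and directory path are hypothetical examples)
Set-DatabaseAvailabilityGroup -Identity "DAG1" `
    -WitnessServer "AZ-FSW01.contoso.com" `
    -WitnessDirectory "C:\DAG1-Witness"
```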