Azure AD PowerShell – Token Lifetime Configuration for MFA

The default token expiry in Azure AD for ADAL clients (using Modern Authentication) is 14 days for both single-factor and multi-factor authentication users. This can stretch up to 90 days, as long as the user does not change their password and does not go offline for longer than 14 days.

This means that clients using Outlook or Skype for Business can perform MFA once and then remain signed in using their access token for up to 90 days before being required to authenticate using MFA. As you can imagine, this is not an ideal situation for multi-factor authentication as a compromised account could be accessed through a rich client application with no MFA for up to 90 days.

Until recently, this could not be modified. However, Microsoft has now released Configurable Token Lifetime as a Preview feature. This allows various properties to be controlled, giving administrators more granular control over token refresh and enabling a more secure MFA policy.

The Azure team have provided a solid guide here: https://docs.microsoft.com/en-us/azure/active-directory/active-directory-configurable-token-lifetimes

To do this, you need the Azure AD Preview PowerShell module. Install this by running the following from a PowerShell prompt:

Install-Module -Name AzureADPreview 
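
Once the module is installed, connect to your tenant before creating any policies. A minimal sketch, assuming an account with sufficient directory permissions:

# Load the preview module and sign in to the tenant
# (you will be prompted for credentials)
Import-Module AzureADPreview
Connect-AzureAD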

Here is a sample policy I’ve configured which will change the MFA token lifetime to 12 hours. I’ve combined this with ADFS Claim Rules which only enforce MFA if the user is on the extranet and using particular applications:

New-AzureADPolicy -Definition @('{"TokenLifetimePolicy":{"Version":1,"MaxAgeMultiFactor":"12:00:00","AccessTokenLifetime":"04:00:00"}}') -DisplayName "OrganizationDefaultPolicyScenario" -IsOrganizationDefault $true -Type "TokenLifetimePolicy"
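
To confirm the policy has been created (or to inspect it again later), you can query it back. A quick check might look like this:

# List the tenant's policies and pick out the one created above
Get-AzureADPolicy | Where-Object {$_.DisplayName -eq "OrganizationDefaultPolicyScenario"}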

This is a much-needed feature from a security-controls point of view, although keep in mind it is still in Preview!


Keep up to date with Office 365 IP Changes

It’s quite common for administrators to get caught out by changes to the Office 365 IP ranges, and to find a service becoming intermittently inaccessible because a new address range has been added to the pool of IPs used by an Office 365 service.

Microsoft publish an RSS feed to make this a bit easier for admins to follow, however I wanted to take this one step further.

Using Microsoft Flow (or IFTTT if that’s your bag), you can configure an event so that an update to an RSS feed prompts an action. That action could be to send an email, to update SharePoint or Yammer, or to update a Spreadsheet (amongst others). People consume information in many different ways, and this is one way to customise the delivery of this information to suit the way you work.

As an example, I want to send an email to myself every time a change is made to the Office 365 IP address RSS feed. To do this, I have logged into Microsoft Flow and have created a new Flow for myself.

The trigger event will be the RSS feed, looking for changes to https://support.office.com/en-us/o365ip/rss

[Image: Microsoft Flow RSS Trigger]

The action event will be to send an email through to me to warn me to update my firewall:

[Image: Microsoft Flow Send an Email]

That’s all there is to it! You can choose any action you desire when an update is made to the RSS feed.
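
If you’d rather poll the feed yourself, say from an existing maintenance script, here is a minimal sketch; Invoke-RestMethod parses the RSS items into objects, so the latest entries can be listed directly:

# Pull the Office 365 IP RSS feed and show the most recent entries
$feed = Invoke-RestMethod -Uri "https://support.office.com/en-us/o365ip/rss"
$feed | Select-Object -First 5 title, pubDate, link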

Hope this helps!


Script – control Client Access features using Set-CASMailbox

I put together a short script recently which will enumerate all users in an Office 365 Group (Security/Distribution/O365 Group) and disable certain Client Access features. In my case, I wanted to disable IMAP, POP and MAPI connectivity. This leaves a user only able to perform kiosk-style connectivity through OWA, EWS or ActiveSync. The users in question had E1 licenses, but the customer wanted to limit connectivity so that rich mail clients such as Outlook could not be used.

The script looks like this:

# Requires the MSOnline module (Connect-MsolService) and a remote
# Exchange Online session for Set-CASMailbox/Get-CASMailbox
$group = Get-MsolGroup | Where-Object {$_.DisplayName -eq "uk-dg-kiosk"}
$groupmembers = Get-MsolGroupMember -GroupObjectId $group.ObjectId
# Disable IMAP, MAPI and POP for every member of the group
ForEach ($member in $groupmembers.EmailAddress)
{Set-CASMailbox $member -ImapEnabled $false -MAPIEnabled $false -PopEnabled $false}
# Confirm the resulting Client Access settings
ForEach ($member in $groupmembers.EmailAddress)
{Get-CASMailbox $member}

I have also created a similar script which will apply to any user which has a particular license SKU:

# Find every user holding the given license SKU (the MISSTECH: prefix
# is the tenant name, so substitute your own)
$licensepack = Get-MsolUser -All | Where-Object {$_.Licenses.AccountSkuId -ccontains "MISSTECH:ENTERPRISEPACK"}
# Disable IMAP, MAPI and POP for each matching user
ForEach ($user in $licensepack.UserPrincipalName)
{Set-CASMailbox $user -ImapEnabled $false -MAPIEnabled $false -PopEnabled $false}
# Confirm the resulting Client Access settings
ForEach ($user in $licensepack.UserPrincipalName)
{Get-CASMailbox $user}

This could be run on demand, or using a scheduled task. Using a scheduled task involves supplying credentials, so be careful when you do this!

Have a look at my guide for setting up scheduled tasks with Office 365 to learn how to avoid using plain text passwords in your tasks: https://misstech.co.uk/2016/06/08/office-365-powershell-and-scheduled-tasks/
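
As a minimal sketch of that approach (the file path is just an example; Export-Clixml protects the password with DPAPI, so the file can only be read by the same user on the same machine):

# Run once, interactively, to store the credential securely
Get-Credential | Export-Clixml -Path "C:\Scripts\o365cred.xml"

# Then, in the scheduled script, load the credential back
$cred = Import-Clixml -Path "C:\Scripts\o365cred.xml"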

Till next time x


DNS Traffic Management Policies

This awesome new Server 2016 feature can be used to create a DNS policy which responds to a query for the IP address of a web server with a different IP address based on the source subnet of the client.

Let’s take an example; we have ADFS configured in Azure using the following settings:

Hostname: sts.misstech.co.uk
Internal IP: 192.168.9.11
External IP: 57.119.128.179 (this is made up so don’t try and go there!)

There are 2 sites, London and Manchester. London has a VPN link to Azure, however Manchester has no route to Azure. Both sites are connected to each other and the Domain Controller is located in London.

This means that London users (on 192.168.10.0/24) can access ADFS, however Manchester users (on 192.168.11.0/24) cannot access ADFS using the internal IP. We need to route Manchester users to ADFS via the external ADFS IP, but how do we do this when they resolve DNS records via the same Domain Controller? Hosts files could do this, but that is complex to manage and doesn’t allow for mobility. Enter Traffic Management using Server 2016.

To do this, the following steps need to be performed.

· First, add the client subnet which you want to use for traffic management:

Add-DnsServerClientSubnet -Name "Manchester" -IPv4Subnet "192.168.11.0/24" -PassThru

· Next, add the zone scope associated with the subnet. The zone itself must already exist for this command to work:

Add-DnsServerZoneScope -ZoneName "misstech.co.uk" -Name "Manchester" -PassThru

· Add the DNS resource record to the new zone scope. This is the answer Manchester clients will receive, pointing at the external IP:

Add-DnsServerResourceRecord -ZoneName "misstech.co.uk" -A -Name "sts" -IPv4Address "57.119.128.179" -ZoneScope "Manchester" -PassThru

· Finally, add the traffic management policy so that queries from the Manchester subnet are answered from the Manchester zone scope:

Add-DnsServerQueryResolutionPolicy -Name "ManchesterPolicy" -Action ALLOW -ClientSubnet "eq,Manchester" -ZoneScope "Manchester,1" -ZoneName "misstech.co.uk" -PassThru
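
To check the policy is behaving, resolve the record against the domain controller from a client in each subnet. A quick test, where the DC’s IP address below is a made-up example:

# $dnsServer is the London domain controller (example address only)
$dnsServer = "192.168.10.10"
# From a Manchester client this should return the external IP (57.119.128.179);
# from a London client it should return the internal IP (192.168.9.11)
Resolve-DnsName -Name "sts.misstech.co.uk" -Server $dnsServer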

These policies are very versatile, allowing you to combine multiple parameters (using AND/OR) such as client subnet, protocol, or time of day to create complex policies which can help you direct clients to the correct location.
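
For example, you could combine the client subnet with a time-of-day window; the policy name and hours below are purely illustrative:

# Illustrative only: answer Manchester clients from the Manchester
# zone scope during office hours, combining both criteria with AND
Add-DnsServerQueryResolutionPolicy -Name "ManchesterOfficeHours" -Action ALLOW -Condition "AND" -ClientSubnet "eq,Manchester" -TimeOfDay "eq,09:00-17:00" -ZoneScope "Manchester,1" -ZoneName "misstech.co.uk" -PassThru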

I’ll finish this post with a small tip: if you want to get or remove a policy, make sure you specify the zone name, or a null value will be returned. For example:

Get-DnsServerQueryResolutionPolicy -ZoneName "misstech.co.uk" -PassThru

Remove-DnsServerQueryResolutionPolicy -ZoneName "misstech.co.uk" -PassThru


Azure Classic to Resource Manager Migration – Validation failed

I am starting to investigate the migration of resources from Azure’s Classic deployment model into the shiny new Azure Resource Manager model.

The first step for me was to attempt to validate the VNET which I wanted to migrate, to see if it was compatible. I ran the command listed on the following page (https://azure.microsoft.com/en-gb/documentation/articles/virtual-machines-windows-ps-migration-classic-resource-manager/):

    Move-AzureVirtualNetwork -Validate -VirtualNetworkName $vnetName

As I expected (nothing is ever simple, is it?!), I received the error shown below. The problem was that the ValidationMessages output was limited and didn’t really show me any detail; in my case, all it gave me was the name of my VNET.

Validation failed.  Please see ValidationMessages for details

In order to get some more detailed information out of the cmdlet, I ended up saving the output of the validation command to a variable and then inspecting its ValidationMessages property, as shown below:

$validate = Move-AzureVirtualNetwork -Validate -VirtualNetworkName $vnetName -Verbose

$validate.validationmessages

This gave me lots of detail, and I discovered that I had typed the VNET name incorrectly. D’oh! I forgot that when you create a Classic VNET in the new portal, the actual name of the VNET is not what you see displayed there. You need to have a look in the old manage.windowsazure.com portal to see the actual name.
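
Alternatively, the classic Azure PowerShell module can list the real VNET names directly, which saves a trip to the old portal. A quick check, assuming the classic module is loaded and a subscription is selected:

# List the actual names of the classic virtual networks
# in the current subscription
Get-AzureVnetSite | Select-Object -ExpandProperty Name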

Hopefully this helps some folk out there!

MissTech.co.uk is live

Hello again,

I am very pleased to announce that MissTech.co.uk is now live! All my old content is still available on this site, and hopefully my SEO will remain steadfast as doubledit.co.uk still links to this site.

I’ve given the theme a refresh too, which will be a happy sight for many people who visit the site (I quite liked the yellow and black theme myself!). This one, although a bit simpler, is much more reader-friendly.

Thanks for reading,

Emily

I’m coming out!

Hello internet,

I started this blog a couple of years ago because I wanted to document all the problems and fixes I encounter in my day to day work. Most people find my articles when searching for a particular problem, so to be honest this change probably won’t be noticed by many. I’ve also been a bit quiet on the blogging front this year for reasons which will become obvious as you read on.

This year I decided to live my dream and realign my gender to present myself to the world as I see myself inside. For that reason I’ve changed my name from David Dixon to Emily Coates, and will be rebranding my website too. DoubleD IT was fine when it was my initials but now it has certain connotations which I’m not keen on. At some point over the next month this website will become misstech.co.uk. Hopefully my readers will still be able to find the content on search engines as easily as they used to. I will also be endeavouring to make 2017 a year of much blogging and article writing, as I really do enjoy writing and knowing that I’m helping people within our little community of nerds.

Thanks for reading, and I’ll see you on the flipside!

Emily