Exchange (English)

.NET Framework 4.7 and Exchange Server

Exchange Team Blog

We wanted to post a quick note to call out that our friends in .NET are releasing .NET Framework 4.7 to Windows Update for client and server operating systems it supports.

At this time, .NET Framework 4.7 is not supported by Exchange Server. Please resist installing it on any of your systems after its release to Windows Update.

We will be sure to release additional information and update the Exchange supportability matrix when .NET Framework 4.7 becomes a supported version of .NET Framework with Exchange Server. We are working with the .NET team to ensure that Exchange customers have a smooth transition to .NET Framework 4.7, but in the meantime, delay this particular .NET update on your Exchange servers. Information on how this block can be accomplished can be found in article 4024204, How to temporarily block the installation of the .NET Framework 4.7.
It’s too late, I installed it. What do I do now?

If .NET Framework 4.7 was already installed and you need to roll back to .NET Framework 4.6.2, here are the steps:

Note: These instructions assume that, at the time this article was drafted, you were running the latest Exchange 2016 Cumulative Update or the latest Exchange 2013 Cumulative Update, as well as .NET Framework 4.6.2, prior to the upgrade to .NET Framework 4.7. If you were running a different version of .NET Framework or an older version of Exchange before the upgrade, please refer to the Exchange Supportability Matrix to validate which version of .NET Framework you need to roll back to, and adjust the steps below accordingly. This may mean using different offline/web installers, or looking for different package names in Windows Update, depending on the version of .NET Framework you are rolling back to.

1. If the server has already updated to .NET Framework 4.7 and has not rebooted yet, then reboot now to allow the installation to complete.

2. Stop all running services related to Exchange.  You can run the following cmdlet from Exchange Management Shell to accomplish this: 

(Test-ServiceHealth).ServicesRunning | %{Stop-Service $_ -Force}

3. Depending on your operating system you may be looking for slightly different package names to uninstall .NET Framework 4.7.  Uninstall the appropriate update.  Reboot when prompted.

  • On Windows 7 SP1 / Windows Server 2008 R2 SP1, you will see the Microsoft .NET Framework 4.7 as an installed product under Programs and Features in Control Panel.
  • On Windows Server 2012 you can find this as Update for Microsoft Windows (KB3186505) under Installed Updates in Control Panel.
  • On Windows 8.1 / Windows Server 2012 R2 you can find this as Update for Microsoft Windows (KB3186539) under Installed Updates in Control Panel.
  • On Windows 10 Anniversary Update and Windows Server 2016 you can find this as Update for Microsoft Windows (KB3186568) under Installed Updates in Control Panel.

4. After rebooting, check the version of the .NET Framework and verify that it is again showing version 4.6.2.  You may use this method to determine what version of .NET Framework is installed on a machine. If it shows a version prior to 4.6.2, go to Windows Update, check for updates, and install .NET Framework 4.6.2.  If .NET Framework 4.6.2 is no longer being offered via Windows Update, then you may need to use the Offline Installer or the Web Installer. Reboot when prompted.  If the machine does show .NET Framework 4.6.2, proceed to step 5.
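One way to check the installed version is to read the Release DWORD under the registry key HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full. The following is an illustrative Python sketch (not Microsoft tooling) of how that value maps to a version, using the documented minimum Release values:

```python
# Illustrative sketch: map the "Release" DWORD found under
# HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full to a
# .NET Framework version. The thresholds are the documented minimum
# Release values for each version; check against them in descending order.
RELEASE_THRESHOLDS = [
    (460798, "4.7"),
    (394802, "4.6.2"),
    (394254, "4.6.1"),
    (393295, "4.6"),
    (379893, "4.5.2"),
    (378675, "4.5.1"),
    (378389, "4.5"),
]

def net_framework_version(release):
    """Return the highest .NET Framework version implied by a Release value."""
    for threshold, version in RELEASE_THRESHOLDS:
        if release >= threshold:
            return version
    return "pre-4.5"
```

On the server itself you would read the value with a registry tool of your choice; a Release of 394802 or 394806 indicates 4.6.2, while 460798 or 460805 indicates 4.7.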

5. After confirming .NET Framework 4.6.2 is again installed, stop Exchange services using the command from step 2.  Then, run a repair of .NET 4.6.2 by downloading the offline installer, running setup, and choosing the repair option.  Reboot when setup is complete.

6. Apply any security updates specifically for .NET 4.6.2 by going to Windows update, checking for updates, and installing any security updates found.  Reboot after installation.

7. After reboot verify that the .NET Framework version is 4.6.2 and that all security updates are installed.

8. Follow the steps in article 4024204, How to temporarily block the installation of the .NET Framework 4.7, to block future automatic installations of .NET Framework 4.7.

The Exchange Team

Announcing Original Folder Item Recovery

Exchange Team Blog

Cumulative Update 6 (CU6) for Exchange Server 2016 will be released soon™, but before that happens, I wanted to make you aware of a behavior change in item recovery that is shipping in CU6.  Hopefully this information will aid you in your planning, testing, and deployment of CU6.

Item Recovery

Prior to Exchange 2010, we had the Dumpster 1.0, which was essentially a view stored per folder. Items in the dumpster stayed in the folder where they were soft-deleted (shift-delete, or delete from Deleted Items) and were stamped with ptagDeletedOnFlag. These items were special-cased in the store to be excluded from normal Outlook views and quotas. This design also meant that when a user wanted to recover an item, it was restored to its original folder.

With Exchange 2010, we moved away from Dumpster 1.0 and replaced it with the Recoverable Items folder. I discussed the details of that architectural shift in the article, Single Item Recovery in Exchange 2010. The Recoverable Items architecture created several benefits: deleted items moved with the mailbox, deleted items were indexable and discoverable, and the design facilitated both short-term and long-term data preservation scenarios.

As a reminder, the following actions can be performed by a user:

  • A user can perform a soft-delete operation where the item is deleted from an Inbox folder and moved to the Deleted Items folder. The Deleted Items folder can be emptied either manually by the user, or automatically via a Retention Policy. When data is removed from the Deleted Items folder, it is placed in the Recoverable Items\Deletions folder.
  • A user can perform a hard-delete operation where the item is deleted from an Inbox folder and moved to the Recoverable Items\Deletions folder, bypassing the Deleted Items folder entirely.
  • A user can recover items stored in the Recoverable Items\Deletions folder via recovery options in Outlook for Windows and Outlook on the web.

However, this architecture has a drawback – items cannot be recovered to their original folders.

Many of you have voiced your concerns around this limitation in the Recoverable Items architecture, through various feedback mechanisms, like at Ignite 2015 in Chicago where we had a panel that included the Mailbox Intelligence team (those who own backup, HA, DR, search, etc.). Due to your overwhelming feedback, I am pleased to announce that beginning with Exchange 2016 CU6, items can be recovered to their original folders!

How does it work?
  1. When an item is deleted (soft-delete or hard-delete), it is stamped with the LastActiveParentEntryID (LAPEID). Because this is the folder's ID, it does not matter if the folder is later moved within the mailbox's hierarchy or renamed.
  2. When the user attempts a recovery action, the LAPEID is used as the move destination endpoint.

The LAPEID stamping mechanism has been in place since Exchange 2016 Cumulative Update 1, so items deleted on CU1 or later already carry the property. This means that as soon as you install CU6, your users can recover items to their original folders!

Soft-Deletion:

Hard-Deletion:

Are there limitations?

Yes, there are limitations.

First, to use this functionality, the user’s mailbox must be on a Mailbox server that has CU6 installed. The user must also use Outlook on the web to recover to the original folder; neither Outlook for Windows nor Outlook for Mac supports this functionality today.

If an item does not have an LAPEID stamped, then the item will be recovered to its folder type origin – Inbox for mail items, Calendar for calendar items, Contacts for contact items, and Tasks for task items. How could an item not have an LAPEID? Well, if the item was deleted before CU1 was installed, it won’t have an LAPEID.

And lastly, this feature does not recover deleted folders. It only recovers items to folders that still exist within the user’s mailbox hierarchy. Once a folder is deleted, recovery will be to the folder type origin for that item.
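Putting the behavior together, the recovery destination can be sketched roughly as follows. This is an illustrative Python sketch of the rules described above, not actual store code; the item classes and folder IDs here are hypothetical placeholders.

```python
# Illustrative sketch of the recovery-destination logic: recover to the
# LastActiveParentEntryID (LAPEID) folder when it is stamped and still
# exists; otherwise fall back to the default folder for the item type.

# Hypothetical mapping of item classes to their folder-type origin.
DEFAULT_FOLDER_BY_CLASS = {
    "IPM.Note": "Inbox",
    "IPM.Appointment": "Calendar",
    "IPM.Contact": "Contacts",
    "IPM.Task": "Tasks",
}

def recovery_destination(item_class, lapeid, existing_folder_ids):
    # LAPEID is stamped and the original folder still exists:
    # recover the item to its original folder.
    if lapeid is not None and lapeid in existing_folder_ids:
        return lapeid
    # No LAPEID (item deleted before CU1) or the folder was deleted:
    # fall back to the folder-type origin for the item class.
    return DEFAULT_FOLDER_BY_CLASS.get(item_class, "Inbox")
```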

Summary

We hope you can take advantage of this long sought-after feature. We continue to look at ways we can improve user recovery actions and minimize the need for third-party backup solutions. If you have questions, please let us know.

Ross Smith IV
Principal Program Manager
Office 365 Customer Experience

TooManyBadItemsPermanentException error when migrating to Exchange Online?

Exchange Team Blog

Some of you may have noticed that more migrations might be failing due to encountering ‘too many bad items’. Upon closer review, you may notice that the migration report contains entries referencing corrupted items and an inability to translate principals. I wanted to take a few minutes to provide more information to help you understand what this means, why these errors are now occurring, and what can be done about them. Ready to geek out?

During a mailbox migration, there are several stages we go through. We start off with copying the folder hierarchy (including any views associated with those folders), then perform an initial copy of the data (what we call the Initial Sync). Once the initial data copy process is complete, we then copy rules and security descriptors. Reviewing a move report shows entries similar to these.

Stage: CreatingFolderHierarchy. Percent complete: 10
Initializing folder hierarchy from mailbox <guid>: X folders total
Folder hierarchy initialized for mailbox <guid>: X folders created
Stage: LoadingMessages
Copying messages is complete. Copying rules and security descriptors.

For our discussion today, we are interested in the stage of “Copying rules and security descriptors”. Security descriptors contain Access Control Lists (ACLs), which are in turn composed of Access Control Entries (ACEs – the individual permissions entries) and stored in SDDL format. In the context of a mailbox, we include both the Mailbox security descriptor (Mailbox permissions) and the Folder security descriptors (permissions on individual folders). When we look at the Mailbox security descriptor, it should be noted that only Explicit mailbox permissions are copied. These include permissions granted by using the Add-MailboxPermission cmdlet, or by using the Exchange Management Console (2010) or Exchange Admin Center (2013 and 2016) to add Full Access rights. Any Inherited permissions are not evaluated during the copy process. For example, granting the Receive-As permission on a database object in Active Directory results in an Inherited Allow for Full Access for all mailboxes on that database. When mailboxes on that database are migrated to Exchange Online, those Inherited permissions will not be copied.

Now that we have briefly covered security descriptors, let’s look at the issue. About midway through 2016, a change was introduced to Exchange Online whereby if a security principal could not be successfully validated/mapped to an Exchange Online object, it would be marked as a bad item. Previously, the behavior was that invalid permissions would simply be ignored, and administrators were then left to wonder why some permissions no longer worked after the migration. With this new behavior, corrupt/invalid permissions are now logged so that administrators will know that there are problems with permissions. From my perspective as a Support Engineer, this is a change for the better, because as an Administrator you are now able to see when there are issues with permissions. It is possible that this behavior will continue to evolve over time, but I would advise becoming familiar with this new behavior so that you understand what is happening.

Now how does this affect you? Since we are now incrementing the bad item count for each corrupt/invalid permission, this means that if we encounter more corrupt/invalid permissions than your current bad item limit is set to (default is 10 for a migration batch), the migration will fail. Depending on the state of permissions, you could potentially see a LOT of bad entries being logged. If you are looking at the migration report text file (downloadable from the Exchange Online Portal), you may see entries similar to the following:

11/12/2016 8:44:43 AM [EXO MRS Server] A corrupted item was encountered: Unable to translate principals for folder “Folder Name”/”FolderNTSD”: Failed to find a principal from the source forest.
5/19/2016 6:33:50 PM [EXO MRS Server] A corrupted item was encountered: Unable to translate principals to the target mailbox: Failed to find a principal in the target forest that corresponds to the following source forest principal values: Alias: <alias>; DisplayName: <Display Name>; MailboxGuid: <mailbox guid>; SID: <SID of User>; ObjectGuid:
<Object GUID>; LegDN: <legacyExchangeDN>; Proxies: [X500:<legacyExchangeDN format>; SMTP:user@contoso.com;];.
5/19/2016 6:33:50 PM [EXO MRS Server] A corrupted item was encountered: Unable to translate principals to the target mailbox: Failed to find a principal in the target forest that corresponds to the following source forest principal values: SID: <SID of User>; ObjectGuid: <Object GUID>;.

So, what is the logic used to validate permissions?

I’m glad you asked! There are four basic steps to this process, broken out as follows.

  1. Exchange Online – I need to resolve this SID which is present in the security descriptor (Folder or Mailbox)
  2. Exchange Online – Make a request to the On-Premises MRS Proxy, passing the SID to resolve
  3. On-Premises MRS Proxy – Look up the SID against Active Directory and return a set of attributes (including primary SID and legacyExchangeDN)
  4. Exchange Online – Take the legacyExchangeDN value provided, and attempt to match it with a user account in the cloud that has it stamped as an X500 proxy address.

Normally, Directory Synchronization will take care of stamping the legacyExchangeDN from each side as an X500 proxy address, but this does mean that the On-Premises legacyExchangeDN must match a Mail-enabled recipient (i.e. Mailbox, MailUser, Mail-enabled Security Group) in the cloud by an X500 Proxy. If it does not, then resolving that permission entry will fail.
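The four steps above amount to a join on the X500 proxy address. Here is an illustrative Python sketch of that mapping; the dictionary shapes are hypothetical simplifications, not the actual MRS Proxy wire protocol.

```python
# Illustrative sketch of the principal-mapping steps described above.
# on_prem_directory maps SIDs to attributes the MRS Proxy would return;
# cloud_recipients holds Exchange Online recipients and their proxies.

def map_principal(sid, on_prem_directory, cloud_recipients):
    # Steps 2-3: the on-premises MRS Proxy resolves the SID against AD
    # (searching ObjectSID/msExchMasterAccountSID only, not SIDHistory).
    source = on_prem_directory.get(sid)
    if source is None:
        raise LookupError("SourcePrincipalMappingException")
    # Step 4: Exchange Online matches the returned legacyExchangeDN
    # against a cloud recipient's X500 proxy addresses (normally
    # stamped by directory synchronization).
    x500 = "X500:" + source["legacyExchangeDN"]
    for recipient in cloud_recipients:
        if x500 in recipient["proxyAddresses"]:
            return recipient["identity"]
    raise LookupError("TargetPrincipalMappingException")
```

A deleted on-premises account fails at the first lookup; a recipient outside the dirsync scope fails at the X500 match.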

I do want to differentiate between the different types of permissions errors you may see.

SourcePrincipalMappingException – these mean that when MRS Proxy tried to look up the SID against On-Premises Active Directory, it couldn’t be resolved. This is a common scenario when users leave the company and their accounts are deleted. You could also encounter these issues if the SID in question is part of the SIDHistory of an On-Premises account. When MRS Proxy attempts to look up the SID, we only search by ObjectSID or msExchMasterAccountSID. MRS Proxy does not evaluate against SIDHistory, so the SID failing to be resolved would be expected behavior. SIDHistory being populated won’t be a common scenario, but it is nonetheless something to be aware of.

Note: Exchange Online has a special built-in bad item limit of 1000 for these Source Principal Mapping errors, so these moves will not fail unless you encounter more than 1000 of these types of bad items.

TargetPrincipalMappingException – these mean that we can’t map the permission to a user account in the Target forest (Exchange Online). A common scenario here would be if a user or group was given permissions on a mailbox, but that user or group is not in your dirsync scope. After trying to move that mailbox via MRS, that user or group is not going to be present in Exchange Online, so this error would be expected. Another scenario is if a security group (not mail-enabled!) was used to assign permissions. Non mail-enabled security groups are not synchronized to Exchange Online, so they won’t exist in the Target forest.
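The two error types are counted against different thresholds. A sketch of the resulting pass/fail decision, assuming (as the note above implies) that Source Principal Mapping errors count only against their own built-in limit:

```python
# Illustrative sketch: ordinary bad items count against the configured
# bad item limit (default 10 for a migration batch), while
# SourcePrincipalMapping errors count only against a separate built-in
# limit of 1000.
SOURCE_PRINCIPAL_LIMIT = 1000

def move_fails(bad_item_kinds, bad_item_limit=10):
    source = sum(1 for k in bad_item_kinds if k == "SourcePrincipalMapping")
    other = len(bad_item_kinds) - source
    return other > bad_item_limit or source > SOURCE_PRINCIPAL_LIMIT
```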

To resolve this issue, there are really two options.

  1. Increase the bad item limit to account for permissions errors. In complex legacy environments where multiple Exchange versions have been in place, and there has been a lot of user turnover, I’ve seen permissions errors number into the thousands. Be prepared that you may need to increase the bad item limit to a number higher than you expect. The good news here is that with improvements to Exchange over the years, the odds of encountering actual bad messages are relatively slim, so odds are good that the vast majority of bad items are bad permissions. The second bit of good news is that we log the type of each bad item encountered and make this information available in the move report. I’ll show you how to dig into a move report and look at the bad items later on in this blog post.
  2. Cancel the move, fix the bad permissions from the folder or mailbox by either removing them or fixing the issue causing the user/group to not be resolved in Exchange Online, and then submit the move again. But – you may ask – what if I want to fix the permissions on the current move and then resume it? Well, I’m not going to stop you from fixing bad permissions. But I will tell you that it won’t make any difference for the current move. We only evaluate permissions once, at the end of the initial data copy. If the move fails due to bad items (permissions), even if you fix the bad permissions we won’t re-evaluate the now fixed-up permissions and allow the move to complete successfully. You either have to up the bad item limit, or remove the move and fix the permissions and submit a new move.

Now, I promised earlier that I would go through how to review the permissions errors. You can do this by using PowerShell and saving the move report into a variable where it is stored in memory. I typically have the move report exported out to an XML file because I don’t have direct access to customer tenant information. If you are reviewing failed moves within your own tenant, there is no need to do that if you don’t want to. I’ll provide the steps for both methods just in case.

To save the move report to a variable, you would run the following from PowerShell connected to Exchange Online.

$movereport = Get-MoveRequestStatistics <move request identity> -IncludeReport

To save the move report to an XML file, then import the XML file into PowerShell, you would run the following from PowerShell connected to Exchange Online.

Get-MoveRequestStatistics <move request identity> -IncludeReport | Export-CliXml c:\temp\movereport.xml

Once the file is saved, then you import it into PowerShell. Note that this PowerShell instance does not have to be connected to Exchange Online. It can be just a regular PowerShell instance.

$movereport = Import-CliXml c:\temp\movereport.xml

If you’ve never dug into a move report, let me just say that there are all sorts of golden nuggets of information buried inside (which won’t show in the text file from the Portal, by the way!).

Now that you have the move report imported as a variable, you can access all the rich information within the report. We specified our variable earlier as $movereport, so we just need to call that variable, and access the information stored inside it.

$movereport.report.baditems – this gives you a list of all the bad items encountered. A cool tip is that you can use the Out-GridView cmdlet to open another window with the list.

$movereport.report.baditems | Out-GridView

What is nice about the Grid View is that you can then filter the output. For example, to validate that all of your bad items are permissions errors, you can simply choose “Add criteria”, check the “Kind” box, and click “Add”.

Change “Contains” to “Does not contain”, and type Security. This will quickly show you if there are any other types of bad items.
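The same triage can be done without the grid. Here is a hedged Python sketch of that filter; the Kind values in the example are hypothetical illustrations, not a definitive list of what the report emits.

```python
# Illustrative sketch of the grid filter above: keep only the bad
# items whose Kind does not contain "Security", so anything left over
# is not a permissions error and needs a closer look.

def non_security_bad_items(bad_items):
    return [b for b in bad_items if "Security" not in b["Kind"]]
```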

Now that we have identified the behavior change, and gone over how to address it, let’s end by talking about what approach should be taken for migrating mailboxes.

The recommended approach to this change in behavior is to continue to migrate using low bad item limits, and then manually remediate the moves that fail. We recommend this approach because a failing migration indicates either a LOT of bad source permissions (more than 1000), or valid, working permissions On-Premises that are failing to be correctly mapped to objects in Exchange Online. Neither of these conditions should be common, so investigation is warranted to ensure that you are in fact dealing with bad permissions.

Special thanks to Brad Hughes and the rest of the MRS team for their assistance and review of this content.

Ben Winzenz

Deep Dive: How Hybrid Authentication Really Works

Exchange Team Blog

A hybrid deployment offers organizations the ability to extend the feature-rich experience and administrative control they have with their existing on-premises Microsoft Exchange organization to the cloud. A hybrid deployment provides the seamless look and feel of a single Exchange organization between an on-premises Exchange organization and Exchange Online in Microsoft Office 365. In addition, a hybrid deployment can serve as an intermediate step to moving completely to an Exchange Online organization.

But one of the challenges some customers are concerned about is that this type of deployment requires that some communication take place between Exchange Online and Exchange on-premises. This communication takes place over the Internet and so this traffic must pass through the on-premises company firewall to reach Exchange on-premises.

The aim of this post is to explain in more detail how this server to server communication works, and to help the reader understand what risks this poses, how these connections are secured and authenticated, and what network controls can be used to restrict or monitor this traffic.

The first thing to do is to get some basic terminology clear. With the help of TechNet and other resources, here are some basic definitions:

  • Azure Authentication Service – The Azure Active Directory (AD) authentication service is a free cloud-based service that acts as the trust broker between your on-premises Exchange organization and the Exchange Online organization. On-premises organizations configuring a hybrid deployment must have a federation trust with the Azure AD authentication service. You may have heard this referred to previously as the Microsoft Federation Gateway; while the two are technically different implementations, they are essentially the same thing. So, to avoid confusion, we shall refer to both as the Azure Active Directory (AD) authentication service, or Azure Auth Service for short.
  • Federation trust – Both the on-premises and Office 365 service organizations need to have a federation trust established with the Azure AD authentication service. A federation trust is a one-to-one relationship with the Azure AD authentication service that defines parameters and authentication statements applicable to your Exchange organization.
  • Organization relationships – Organization relationships are needed for both the on-premises and Exchange Online organization and are configured automatically by the Hybrid Configuration Wizard. An organization relationship defines features and settings that are available to the relationship, such as whether free/busy sharing is allowed.
  • Delegated Auth (DAuth) – Delegated authentication occurs when a network service accepts a request from a user and can obtain a token to act on behalf of that user to initiate a new connection to a second network service.
  • Open Authorization (OAuth) – OAuth is an authorization protocol – or in other words, a set of rules – that allows a third-party website or application to access a user’s data without the user needing to share login credentials.
A History Lesson

Exchange has had a few different solutions for enabling inter-organization connectivity, which is essentially what a hybrid deployment is; two physically different Exchange orgs (on-premises and Exchange Online) appearing to work as one logical org to the users.

One of the most common uses of this connectivity is to provide users the ability to share free/busy information, so that’s going to be the focus of the descriptions used here. Of course, hybrid also allows users to send secure email to each other, but that rarely seems to come up as a concern, as every org lets SMTP flow in and out without much heartache, so we won’t be digging into that here. There are other features you get with hybrid, such as MailTips, but these use the same underlying protocol flow as Free/Busy, so if you know how Free/Busy works, you know how they work too.

So, one of the first cross-premises features we released was cross-forest availability. If the two forests did not have a trust relationship then each admin created a service account, gave that service account permissions to objects in their own forest (calendars in this case), and then gave those credentials to the other organization’s admins. If the forests were trusted each admin would instead give permissions to the Client Access Servers from the remote forest to read Free/Busy in their own forest.

Each org admin would then add an Availability Address Space object to their own Exchange org with the SMTP domain details for the other forest and, in the untrusted case, provide the pre-determined credentials for that forest. The admins also had to sync directories between the orgs (or import contacts for users in the remote forest). That was a hassle. But, once they did that, lookups for users who had a contact object in the forest triggered Exchange to look at the cross-forest availability config, and then use the previously obtained credentials or server permissions to make a call to the remote forest to request free/busy information.

The diagram below shows this at a high level for the untrusted forest version of this configuration.

Clearly there were some shortcomings with this approach. Directory sync is a big requirement for most organizations, and credentials had to be exchanged and managed. Connections were made directly from server to server, and AutoDiscover had to be set up and working, as it was used to find the correct EWS endpoints in the remote org. One thing some customers liked, though, was that these connections could be pre-authenticated with an application layer firewall (TMG back in the day was very popular), as credentials were used in a Basic handshake, encrypted by SSL.

These shortcomings led us to design a new approach, one that allowed two servers to talk to each other securely without having to exchange credentials or perform a full directory sync.

DAuth

Exchange 2010 and later versions of Exchange were built to use the Azure Auth Service, an identity service that runs in the cloud, as a trust broker for federating organizations, enabling them to share information with other organizations.

Exchange organizations wanting to use federation establish a one-time federation trust with the Azure Auth Service, allowing it to become a federation partner to the Exchange organization. This trust allows servers, on behalf of users authenticated by Active Directory (the identity provider for on-premises users), to be issued Security Assertion Markup Language (SAML) On-Behalf-Of Access Tokens by the Azure Auth Service. These On-Behalf-Of Access Tokens allow users from one federated organization to be trusted by another federated organization. The Organization Relationship or sharing policy that must also be set up governs the level of access partner users have to the organization’s resources.

With the Azure Auth Service acting as the trust broker, organizations aren’t required to establish multiple individual trust relationships with other organizations and can instead do the one-time trust, or Federation configuration, and then establish Organization Relationships with each partner organization.

The trust is established by submitting the organization’s public key certificate (this certificate is created automatically by the cmdlet used to create the trust) to the Azure Auth Service and downloading the Azure Auth Service’s public key. A unique application identifier (ApplicationUri) is automatically generated for the new Exchange organization and provided in the output of the New Federation Trust wizard or the New-FederationTrust cmdlet. The ApplicationUri is used by the Azure Auth Service to identify your Exchange organization.

This configuration allows an Exchange Server to request an On-Behalf-Of Access Token for a user for the purposes of making an authenticated request to an Exchange Server in a different organization (a partner, or perhaps an Exchange Server hosted in Office 365 in the case of hybrid), by referencing their ApplicationUri.

When the on-premises admin then adds an organization relationship for a partner org, Exchange reaches across anonymously to the remote Exchange organization’s /AutoDiscover/AutoDiscover.svc endpoint, using the “GetFederationInformation” method to read back relevant information such as the Federated domains list, their ApplicationUri, etc.

Here’s an example of the entry in the cloud, for Contoso’s hybrid Exchange deployment. You can see we know the AutoDiscover endpoint in the on-premises Exchange organization based on this, and what can be done with this agreement.

DomainNames : {contoso.com}
FreeBusyAccessEnabled : True
FreeBusyAccessLevel : LimitedDetails
FreeBusyAccessScope :
MailboxMoveEnabled : False
MailboxMoveDirection : None
DeliveryReportEnabled : True
MailTipsAccessEnabled : True
MailTipsAccessLevel : All
MailTipsAccessScope :
PhotosEnabled : False
TargetApplicationUri : FYDIBOHF25SPDLT.contoso.com
TargetSharingEpr :
TargetOwaURL : https://mail.contoso.com/owa
TargetAutodiscoverEpr : https://autodiscover.contoso.com/autodiscover/autodiscover.svc/WSSecurity

And the same command when run on-premises results in pretty much the same information with the notable differences seen here:

TargetApplicationUri : outlook.com
TargetOwaURL : http://outlook.com/owa/contoso.onmicrosoft.com
TargetAutodiscoverEpr : https://autodiscover-s.outlook.com/autodiscover/autodiscover.svc/WSSecurity

Now, when a user (Mary in our picture below) in Contoso’s on-premises Exchange environment requests free/busy for a user (Joe) in Contoso’s online tenant (the flow works the same for a partner org with an organization relationship), here’s what happens.

  1. The on-premises contoso.com Exchange Server determines that the target user is external and does a lookup of the Organization Relationship details to find where to send the request.
  2. The on-premises contoso.com Exchange Server submits a token request to the Azure Auth Service for an On-Behalf-Of Access Token for contoso.onmicrosoft.com, referencing contoso.onmicrosoft.com’s ApplicationUri (which of course it knows because of the creation of the Org Relationship), the SMTP address of the requesting user, and the purpose/intent of the request (Free/Busy in this case). This request is encrypted using the Azure Auth Service’s public key and signed using the on-premises organization’s private key, thereby proving where the request is coming from.
  3. The Azure Auth Service returns an On-Behalf-Of Access Token to the server in contoso.com, signed with its own Private Key (to prove where it came from); the On-Behalf-Of Access Token in the payload is encrypted using the public key of contoso.onmicrosoft.com (which the Azure Auth Service has because contoso.onmicrosoft.com provided it when setting up its own Federation Trust).
  4. The on-premises contoso.com Exchange Server then submits that token as a SOAP request to contoso.onmicrosoft.com’s AutoDiscover/AutoDiscover.svc/WSSecurity endpoint (which it had stored in its Org Relationship config for the partner). The connection is anonymous at the HTTP/network layer, but conforms to WS-Security norms (see References at the end of this document for details on WS-Security). Note: This step is skipped if TargetSharingEpr is set on the Org Relationship object, as that explicitly specifies the EWS endpoint for the target org.
  5. The contoso.onmicrosoft.com Exchange Server validates the signed and encrypted request (this is done at the Windows layer using the Windows Communication Framework (WCF) – Exchange just passes to the WCF layer (telling it about its keys and issuer information it has based on the setup of the federation trust) and then assuming it passes the WCF sniff test contoso.onmicrosoft.com’s Exchange Server returns the EWS URL for the Free/Busy request to be submitted to. (Don’t forget that only the Exchange Servers in contoso.microsoft.com have the necessary private key to decrypt the auth token to understand what it really is).
  6. The request and auth token is then submitted directly from Exchange in contoso.com to the EWS endpoint of Exchange in contoso.onmicrosoft.com.
  7. We do the same validation of the signed and encrypted request we did before as it’s now hitting a different endpoint on Exchange in contoso.onmicrosoft.com, once done the server sees that this is a free/busy request from contoso.com (again based on ApplicationUri, contained within the token).
  8. The Exchange Server in contoso.onmicrosoft.com extracts the e-mail address of the requesting user, splits-up the user from the domain part, and checks the latter against its domain authorization table (based on the Org Relationships configured in the org) if this domain can receive the requested free/busy information. These requests are allowed/denied on a per-domain basis only – if the domain of the requesting user is contained in the Org Relationship then it’s ok to return Free/Busy and only Default calendar permissions are evaluated.
  9. The server in contoso.onmicrosoft.com responds by providing the free/busy data. Or not. If it wasn’t authorized to do so.
  10. The on-premises contoso.com server returns the result to the requesting client.
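Step 8's per-domain authorization can be sketched as a simple lookup against the configured Org Relationships. This is an illustrative sketch, not Exchange's actual implementation; in DAuth only the domain of the requesting user, not the individual user, is evaluated:

```python
def dauth_freebusy_allowed(requesting_smtp: str, org_relationship_domains: set) -> bool:
    """DAuth authorizes free/busy per *domain* only: split off the domain
    part of the requesting user's address and check it against the domains
    configured on the Organization Relationships."""
    domain = requesting_smtp.rsplit("@", 1)[-1].lower()
    return domain in org_relationship_domains

# Domains from the configured Org Relationships (illustrative values).
allowed = {"contoso.com", "fabrikam.com"}

assert dauth_freebusy_allowed("Mary@contoso.com", allowed)       # domain matches
assert not dauth_freebusy_allowed("mallory@evil.example", allowed)
```

Note there is no per-user decision here at all, which is exactly why only Default calendar permissions can be applied in the DAuth flow.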

What do you need to allow in through the firewall for this to work then? You need to allow inbound TCP 443 connections to /autodiscover/autodiscover.svc/* and to /ews/* for the actual requests.

This is key – only the receiving Exchange server has the cert required for decrypting the On-Behalf-Of Access Token, so while you might be OK to unpack the TLS for the connection itself on a load balancer or firewall, the token within it is still encrypted to protect it from man-in-the-middle attacks. If you were to install the private key and some smarts on a firewall device, you could open it, but all you’d see is a token with values that only make sense to Exchange (the values agreed upon during creation of the Federation Trust). So if you want to verify this token really did come from the Azure Auth Service, all you really need to do is verify the digital signature. When a message is signed, it is nearly impossible to tamper with, but message signing alone does not protect the message content from being seen. Using the signature, the receiver of the SOAP message can know that the signed elements have not changed en route. Anything more than that, such as decrypting the inner token, would require an awful lot of Exchange-specific information, which might lead you to conclude the best place to do this is Exchange.
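The value of signing without encryption can be shown with a toy example. This sketch uses a symmetric HMAC purely as a stand-in for the Azure Auth Service's RSA signature (an assumption made to keep the example dependency-free; the real tokens are signed asymmetrically): anyone can read the signed message, but any tampering invalidates the signature.

```python
import hashlib
import hmac

# Stand-in for the Azure Auth Service's signing key. The real service uses
# an asymmetric private key; HMAC is used here only for illustration.
SIGNING_KEY = b"azure-auth-service-signing-key"

def sign(message: bytes) -> bytes:
    """Return a detached signature over the message."""
    return hmac.new(SIGNING_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, signature: bytes) -> bool:
    """True only if the message is exactly what was signed."""
    return hmac.compare_digest(sign(message), signature)

token = b'{"aud":"https://mail.contoso.com/ews","iss":"azure-auth"}'
sig = sign(token)

assert verify(token, sig)                         # untampered: accepted
tampered = token.replace(b"contoso", b"evilcorp")
assert not verify(tampered, sig)                  # any change invalidates it
```

Note that the token is fully readable in this model, which is exactly the point made above: signing proves integrity and origin, not confidentiality.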

Now onto OAuth

So firstly, why did we move away from DAuth and switch to using OAuth?

Essentially, we made some architectural changes in the Azure Auth Service, and WCF was falling out of favor and was no longer the direction Microsoft was taking as the framework for service-oriented applications. We had built something that was quite custom, and wanted to move to a more open, standards-based model. OAuth is that.

So how does OAuth work at a high level?

At a high level, OAuth uses the same Trust Broker concept as DAuth: each Exchange organization trusts the Azure Auth Service, and tokens from that service are used to authorize requests, proving their authenticity.

There are several noteworthy differences between DAuth and OAuth.

The first is that OAuth provides the ability for the server holding the requested resource to redirect the client (or server) requesting the data to the trusted issuer of access tokens. It does this when the calling server or client sends an anonymous call with an empty Bearer value in the HTTP Authorization header – this is what tells the receiving server that the client supports OAuth, triggering the redirection response that sends the client to the server that can issue access tokens.

The second thing to note is that the Exchange implementation of OAuth for server-to-server auth is called S2S OAuth 2.0, and we have documented it in detail here. This document explains a lot of detail about what is contained in the token, so if you’re interested, that’s the document to snuggle up with. As you’ll see, we don’t use the redirection mentioned above for our server-to-server hybrid traffic, but it’s good to know it’s there, as it helps in understanding OAuth more broadly.

Here’s an extract directly from the protocol specification (linked to later in this document) which provides a great example of OAuth in practice. In this example, this is the response received when one server tries to access a resource on another server in the same hybrid org.

HTTP/1.1 401 Unauthorized
Server: Fabrikam/7.5
request-id: 443ce338-377a-4c16-b6bc-c169a75f7b00
X-FEServer: DUXYI01CA101
WWW-Authenticate: Bearer client_id="00000002-0000-0ff1-ce00-000000000000", trusted_issuers="00000001-0000-0000-c000-000000000000@*"
WWW-Authenticate: Basic Realm=""
X-Powered-By: ASP.NET
Date: Thu, 19 Apr 2012 17:04:16 GMT
Content-Length: 0

Following this response, the requesting server sends its credentials to the token issuer indicated in the response above (trusted_issuers="00000001-0000-0000-c000-000000000000@*"), which is an endpoint it knows about because it too has an AuthServer object with that same ID. That token broker authenticates the client and issues access and refresh tokens to the requestor. The requestor then uses the access token to access the resource it requested on the server.
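The caller has to pull the trusted issuer's identity out of that WWW-Authenticate challenge before it can go and request a token. A minimal sketch of that parsing step, using the header values from the example above (the parsing approach is illustrative, not Exchange's actual code):

```python
import re

def parse_bearer_challenge(header: str) -> dict:
    """Extract the key="value" parameters from a Bearer challenge header."""
    params = {}
    for key, value in re.findall(r'(\w+)="([^"]*)"', header):
        params[key] = value
    return params

challenge = ('Bearer client_id="00000002-0000-0ff1-ce00-000000000000", '
             'trusted_issuers="00000001-0000-0000-c000-000000000000@*"')

params = parse_bearer_challenge(challenge)
print(params["trusted_issuers"])  # 00000001-0000-0000-c000-000000000000@*
```

The `@*` suffix in the trusted_issuers value is the realm wildcard: the issuer GUID is well known, while the realm varies per tenant.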

Below is an example of this, from the same specification document. In this example, the requestor went to the Trusted Issuer referred to in the example above, and that issuer authenticated the requestor and issued an access token for the server allowing it to request the data. The requestor then would use this token to access the resource it originally requested on the remote server.

This is an example of a JWT (JSON Web Token) actor token issued by an STS. For more information about the claim values contained in this security token, see section 2.2 of the specification document.

actor:
{
"typ":"JWT",
"alg":"RS256",
"x5t":"XqrnFEfsS55_vMBpHvF0pTnqeaM"
}.{
"aud":"00000002-0000-0ff1-ce00-000000000000/contoso.com@b84c5afe-7ced-4ce8-aa0b-df0e2869d3c8",
"iss":"00000001-0000-0000-c000-000000000000@b84c5afe-7ced-4ce8-aa0b-df0e2869d3c8",
"nbf":"1323380070",
"exp":"1323383670",
"nameid":"00000002-0000-0ff1-ce00-000000000000@b84c5afe-7ced-4ce8-aa0b-df0e2869d3c8",
"identityprovider":"00000001-0000-0000-c000-000000000000@b84c5afe-7ced-4ce8-aa0b-df0e2869d3c8"
}
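Tokens like the one above are just base64url-encoded JSON segments separated by dots, so the claims can be inspected with nothing more than the standard library. Inspecting is not verifying, of course: signature validation still requires the issuer's public key. A small sketch:

```python
import base64
import json

def decode_jwt_segment(segment: str) -> dict:
    """Base64url-decode one JWT segment (header or claim set) into a dict."""
    padded = segment + "=" * (-len(segment) % 4)  # restore stripped '=' padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Build a sample segment with the same header shape as the token above.
header = base64.urlsafe_b64encode(
    json.dumps({"typ": "JWT", "alg": "RS256"}).encode()
).decode().rstrip("=")

print(decode_jwt_segment(header))  # {'typ': 'JWT', 'alg': 'RS256'}
```

This is why the claim values in an OAuth token are visible to anyone who can see the packet, a point that comes up again below when comparing DAuth and OAuth.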

Back to differences between DAuth and OAuth – a notable difference between the two is that OAuth tokens are not encrypted. The token is also passed as header information, not as part of the body. There is therefore a reliance upon SSL/TLS (hereafter just referred to as TLS) to protect the traffic in transport.

And the last thing to note is that we only use this flow for on-premises to Exchange Online (and vice-versa) relationships; this isn’t something we use for partner to partner relationships. So if you are hybrid with Exchange Online and have Partner to Partner Org Relationships too, you are using both DAuth and OAuth.

So how does OAuth work in the context of Exchange hybrid? Let’s start with what’s needed to set up the relationship to support this flow. The steps are in https://technet.microsoft.com/en-us/library/dn594521(v=exchg.150).aspx but all of this is now automatically performed in the newest versions of the Hybrid Configuration Wizard (HCW) – so even though the wizard is the only right way to do this, we’re going to walk through what it does so we understand what is really going on.

The HCW first adds a new AuthServer object to the on-premises AD/Exchange Org specifying the Azure OAuth Service endpoint to use. The AuthServer object is the OAuth equivalent of the Federation Trust object and it stores such things as the thumbprint of the Azure Auth Service’s signing cert, the token issuing endpoint, the AuthMetaDataUrl (which is where the information all comes from anyway, so that’s kind of a circular reference, isn’t it) and so on.

The HCW process creates a self-signed authorization certificate, the public key of which is passed to the Azure Auth Service and will be used by the Azure Auth Service to verify that token requests from the org are authentic. This and the on-premises AppID and other relevant information are stored in the AuthConfig object. This is the OAuth equivalent of the FederationTrust object we had in DAuth.

The HCW registers the well-known AppID for Exchange on-premises, the certificate details and all the on-premises URLs Exchange Online might use for the connection as Service Principal Names in Azure Auth Service. This is simply telling Azure Auth Service that Exchange Online may request a token for those URLs and that AppID, which prevents tokens for any arbitrary URL being requested. Exchange Online’s URLs are managed automatically with Azure Auth Service, so there’s no need for the admin to add any URLs for Exchange Online. Having both Exchange Online and on-premises use the same AppID is part of the reason why, from an auth point of view, there is no difference between the two environments for the Exchange Servers within them.

Then the HCW creates the IntraOrganizationConnector object, specifying the domains in the other organization and the DiscoveryEndpoint AutoDiscover URL used to reach them.

Note the name of this object (Intra…): this is for the connection between on-premises Exchange and Exchange Online for the same customer. This is not something for partner to partner communication.

So, we’re set up – how does it work when someone wants to go look at the free/busy of someone on the other side of that hybrid relationship?

  1. Mary on-premises makes a Free/Busy request for Joe, a user in the contoso.onmicrosoft.com tenant.
  2. The on-premises Exchange Server determines that the target user is external and does a lookup for an IntraOrganizationConnector to get the AutoDiscover endpoint for the external contoso.onmicrosoft.com organization (matching on SMTP domain).
  3. The on-premises Exchange Server makes an anonymous request to that AutoDiscover endpoint and the server responds with a 401 challenge, containing the ID for the trusted issuer from which it will accept tokens.
  4. The on-premises Exchange Server requests an Application Token from Azure Auth Service (the trusted issuer). Key: this token is for Exchange@contoso.com and can be cached. If another user on-premises does a Free/Busy request for the same external organization, there is no round-trip to AAD; the cached token is used.
    1. It does this by sending a self-issued JSON (JWT) security token, asserting its identity and signed with its private key. The security token request contains the aud, iss, nameid, nbf, exp claims. The request also includes a resource parameter and a realm parameter. The value of the resource parameter is the Uniform Resource Identifier (URI) of the server.
    2. Azure Auth Service validates this request using the public key of the security token provided by the client.
    3. Azure Auth Service then responds to the client with a server-to-server security token that is signed with Azure Auth Service’s private key. The security token contains the aud, iss, nameid, nbf, exp, and identityprovider claims.
  5. The on-premises Exchange Server then performs an AutoDiscover request using this token and retrieves the EWS endpoint for the target organization.
  6. The on-premises server then goes back to step 4 to request a token for the new audience URI, the EWS endpoint (unless this happens to be one and the same, which it will never be for Exchange Online users, but might be for on-premises users).
  7. The on-premises server then submits that new token to the EWS end point requesting the Free/Busy.
  8. Exchange Online authenticates the Access Token by lookup of the Application Identity and validates the server-to-server security token by checking the values of the aud, iss, and exp claims and the signature of the token using the public key of the Azure Auth Service.
  9. Exchange Online verifies that Mary is allowed to see Joe’s Free/Busy. Unlike DAuth, OAuth allows granular calendar permissions, as the identity of the requesting user, not just the domain, is available to Exchange, and so all permissions are evaluated.
  10. Free/Busy info is returned to the client.
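The caching called out in step 4 can be sketched as an expiry-aware lookup. The cache key shape and the one-minute clock-skew allowance here are illustrative assumptions, not the service's actual behaviour:

```python
import time

class AppTokenCache:
    """Cache application tokens per (resource, realm) until their exp claim."""

    SKEW = 60  # assumed clock-skew allowance, in seconds

    def __init__(self):
        self._cache = {}

    def get(self, resource: str, realm: str):
        """Return a cached, unexpired token, or None if a fresh round-trip
        to the token issuer is needed."""
        entry = self._cache.get((resource, realm))
        if entry and entry["exp"] - self.SKEW > time.time():
            return entry["token"]
        return None

    def put(self, resource: str, realm: str, token: str, exp: int):
        self._cache[(resource, realm)] = {"token": token, "exp": exp}

cache = AppTokenCache()
cache.put("https://autodiscover.contoso.com", "contoso.com",
          "token-abc", int(time.time()) + 3600)

# A second free/busy request for the same external org reuses the token.
assert cache.get("https://autodiscover.contoso.com", "contoso.com") == "token-abc"
# A different resource means a new round-trip to the issuer.
assert cache.get("https://other.example.com", "contoso.com") is None
```

Because the token asserts the application's identity (Exchange@contoso.com) rather than any individual user's, one cached token serves every user's request to the same external organization.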

What do you need to allow in through the firewall for this flow to work then? You need to allow TCP 443 inbound connections to /autodiscover/autodiscover.svc/* for AutoDiscover to work correctly and to /ews/* for the actual requests.

Tokens are signed and so they cannot be modified – i.e. the audienceURI cannot be changed by some man-in-the-middle without invalidating the signing. But as the tokens exist in the clear in the packet header, they could be copied and used by someone else against the same endpoint if they have access to them, which is why end-to-end TLS is key, and why only trusted devices should be able to perform TLS decryption/re-encryption.

So just as with DAuth, if you want to put a device between Exchange on-premises and Exchange Online you have some things to consider. You can do TLS termination if you want to, and if you want to verify the signing of the tokens to confirm they came from the Azure Auth Service, you can do that too, but there’s not much else you can do to the traffic without breaking it. You also need to be careful to protect it, as a token could be re-used – but of course only against the original audience URI; changing that parameter or any of the content would invalidate the digital signature. You can still restrict source IP address ranges at the network layer if you want to, but given that only the Azure Auth Service holds the private key used to sign tokens, you are safe to assume that a properly signed token came from only one place. So, manage the security of the certificates on your Exchange Servers and trust that Exchange won’t do anything with a modified or incorrectly signed token other than reject it.

What about mailbox moves?

Another type of traffic that can take place between Exchange Online and Exchange on-premises is a mailbox move – and that’s the one type of traffic that does not follow the flows described above.

The Mailbox Replication Service (MRS) is used for the migration of mailboxes between on-premises and Exchange Online. When the admin creates the required migration endpoint to enable this feature, they must provide credentials of a user with permission to invoke the MRS moves – those credentials are used in the connection attempt to on-premises, which is TLS secured and uses NTLM auth. So, you can use pre-auth for that connection to /ews/mrsproxy.svc, and because NTLM is used, the credentials never go over the wire.

Hopefully that has cleared up quite a few of the questions we usually get, but just in case that was all a bit tl;dr, here’s the short(er) version:

How do we know the traffic is from Exchange Online? Can it be spoofed?

It can only be spoofed if the certificates used to sign (and in the case of DAuth, encrypt) the traffic are compromised. So, that’s why it’s vital to secure your servers and admin accounts using well documented processes. If your servers or admins are compromised, the doors are wide open to all kinds of things.

Again, to re-iterate: in DAuth the access tokens are encrypted as well as signed, so the token itself can’t be read without the correct private key. With OAuth it can be read, but if the signature is valid, then we know where the traffic came from.

Can I scope the traffic so only users from my tenant can use this communication path?

Users from your tenant aren’t using this server to server communication – it’s Exchange Online and Exchange on-premises using it, performing actions on behalf of the users. So, can you scope it to just those servers? We do document the namespaces and IP address ranges these requests will be coming from here, but given what we’ve covered in this article, we now know Exchange can tell if the traffic is authentic or not and won’t do anything with traffic it can’t trust (we put our money where our mouth is on this, imagine how many Exchange Servers we have in Exchange Online, with no source IP scoping possible, so how many connections we handle, every minute of every day – that’s why we have to write and rely on secure code to protect us – and that same code exists in Exchange on-premises, assuming you keep it up to date).

Can I pre-authenticate the traffic? Can I check the token’s validity against some endpoint?

You can’t pre-authenticate the traffic using HTTP headers as you would for Outlook or ActiveSync, as the auth isn’t done that way. The authentication is provided by proving the authenticity of the request’s signing. If we think about authentication as being about proving who someone is, the digital signing itself proves who is making the request: only the party in possession of the private key used to sign the traffic can sign the requests. So we validate, and thereby authenticate, the requests received from your on-premises servers coming in to Exchange Online, because we know (and trust) only you have the private key used to sign them. Azure Auth Service looks after the private key it uses to sign our requests (very carefully, as you might expect). Can you verify the signing? To directly quote this terrific blog post: “signature verification key and issuer ID value are often available as part of some advertising mechanism supported by the authority, such as metadata & discovery documents. In practice, that means that you often don’t need to specify both values – as long as your validation software know how to get to the metadata of your authority, it will have access to the key and issuer ID values.” So, you can verify the signing is good, and you could potentially also choose to additionally validate:

  1. That the token is a valid JWT
  2. That the iss claim (in the signed actor token) is correct – this is a well-known GUID @ tenant ID
  3. Checking the actor is Exchange (AppId claim) – this is also a well-known appID value @ tenant ID
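The three checks above can be sketched as one validation function. The tenant ID below is taken from the sample token earlier, and the two GUIDs are the well-known issuer and Exchange application IDs that appear in it; the function shape itself is an illustrative assumption, not Exchange's code:

```python
# Well-known identifiers quoted in the token example earlier in this article.
EXCHANGE_APP_ID = "00000002-0000-0ff1-ce00-000000000000"
ISSUER_ID = "00000001-0000-0000-c000-000000000000"

def validate_actor_claims(claims: dict, tenant_id: str) -> bool:
    """Apply the three checks listed above to a decoded actor-token claim set."""
    # 1. Structurally usable as a JWT claim set (here: the claims we need exist).
    if not all(k in claims for k in ("iss", "nameid")):
        return False
    # 2. The iss claim is the well-known issuer GUID @ our tenant ID.
    if claims["iss"] != f"{ISSUER_ID}@{tenant_id}":
        return False
    # 3. The actor is Exchange: the well-known AppID @ tenant ID
    #    (carried in the nameid claim in the sample token).
    return claims["nameid"] == f"{EXCHANGE_APP_ID}@{tenant_id}"

tenant = "b84c5afe-7ced-4ce8-aa0b-df0e2869d3c8"  # tenant ID from the sample token
good = {"iss": f"{ISSUER_ID}@{tenant}", "nameid": f"{EXCHANGE_APP_ID}@{tenant}"}
assert validate_actor_claims(good, tenant)
assert not validate_actor_claims({"iss": f"spoof@{tenant}",
                                  "nameid": good["nameid"]}, tenant)
```

All three checks operate on public, well-known values, which is the point: none of them requires any secret material, only the issuer's metadata.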

Can I use Multi-Factor Auth (MFA) to secure this traffic? My security policy says I must enforce MFA on anything coming in from the Internet

Let’s first agree upon the definition of MFA, as that’s a term people throw around a lot, often incorrectly. MFA is a security mechanism or system that requires the caller to provide more than one form of authentication to verify their identity, and those forms must come from different factor categories. For example, credentials and a certificate, or a certificate and a fingerprint, and so on. Another way to describe MFA is with a set of three attributes: something you know, something you have and something you are. Something you know – a password; something you have – a certificate; something you are – a fingerprint.

So, now we know MFA is a general term used to describe how one party authenticates to another, and isn’t an actual ‘thing’ you can configure, let’s look at the hybrid traffic with it in mind.

In both DAuth and OAuth the digital signing addresses the something you have aspect: the signing can only have been done by Azure Auth Service, as it’s the only possessor of the private key used for the signing.

The something you are attribute isn’t something the flow can provide – Azure Auth isn’t a person with fingers or DNA – but the something you know is arguably what Exchange Online puts in the request: the claims in an OAuth token, the key values and attributes within a DAuth token. So one could make a case that this traffic already uses MFA. This might not be the kind of MFA your security guy can buy as an off-the-shelf solution with a token keyfob, but if you get back to what MFA is, not how it compares to a solution for client to server traffic, you’ll have a more meaningful conversation.

Can I SSL terminate the connection and inspect it and then re-encrypt it?

Yes, you can terminate the SSL/TLS, but ‘inspecting it’ is potentially a can of worms if ‘inspecting’ results in ‘modifying’ it. You can’t inspect a DAuth token without decrypting it, and what exactly are we inspecting it for? To check the issuer, the audience and so on are correct? OK, let’s do that, but if the signing is still intact then they must be correct. All you need to do is verify the signature matches that from the Azure Auth Service. If you can do that, you don’t need to inspect the content, as it will be valid. Whatever happens, you don’t want to tinker with the headers, or you’ll invalidate the signature, and then Exchange (or more precisely, Windows) will reject it.

Are these connections anonymous? Authenticated? Authorized?

As previously explained, the traffic does not carry authentication headers as such but instead is authenticated using digital signing of the requests, and the authorization is done by the code on the server receiving the request. Bob is asking to see Mary’s free/busy – can he? Yes or no. That’s authorization.

Are any of these connections or requests insecure or untrustworthy?

Microsoft does not consider any of the flows discussed in this article to be insecure at all. We were very diligent when designing and implementing them to make sure we secure the traffic and the tokens using all available means, and we’re only documenting this in detail in this article to clear up any doubts and to try and fully explain why it’s secure and trustworthy to configure Exchange hybrid.

How do we prevent token replay? Token modification?

Token replay is potentially possible with any token-based authentication and authorization system, as the token is being used in place of credentials at the time of accessing a resource. DAuth has an advantage in this space as the tokens are encrypted, but the general principle for any authentication scheme like this is to protect any and all tokens from interception and mis-use. That’s where TLS comes in, along with only allowing termination of TLS on devices you trust, and not enabling man-in-the-middle attacks by configuring computers, or teaching users, to ignore certificate warnings.
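One concrete mitigation the receiving side applies is the token's validity window: even a successfully replayed token only works until its exp claim passes. A sketch of that check, using the nbf/exp values from the sample token earlier (the function shape is illustrative):

```python
import time

def token_in_validity_window(claims: dict, now=None) -> bool:
    """Reject tokens used before nbf or at/after exp, limiting the replay window."""
    now = time.time() if now is None else now
    return int(claims["nbf"]) <= now < int(claims["exp"])

# nbf/exp values from the sample actor token shown earlier in this article.
claims = {"nbf": "1323380070", "exp": "1323383670"}

assert token_in_validity_window(claims, now=1323380071)      # inside the window
assert not token_in_validity_window(claims, now=1323383671)  # expired: replay fails
assert not token_in_validity_window(claims, now=1323380000)  # not yet valid
```

The window in the sample token is one hour (3600 seconds between nbf and exp), so interception only ever buys an attacker a bounded amount of time, and only against the original audience.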

How do I know if I’m using DAuth or OAuth and can I choose which to use?

Exchange will always try OAuth first, by looking to see if there is an enabled IntraOrganizationConnector present with the domain name of the target user for any request. Only if no such connector exists, or if there is one but it is disabled, would we then look for the domain name in an OrgRelationship. And if there isn’t one of those, we will then look for the domain name in the Availability Address Space configuration.
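That lookup order can be sketched as a simple precedence function. The configuration shapes below are assumptions made for illustration, not Exchange's actual data structures:

```python
def routing_source(domain, intra_org_connectors, org_relationships, availability_spaces):
    """Return which configuration object Exchange would use for a target
    domain, following the precedence described above."""
    # 1. An *enabled* IntraOrganizationConnector wins (OAuth).
    for conn in intra_org_connectors:
        if domain in conn["domains"] and conn["enabled"]:
            return "IntraOrganizationConnector"
    # 2. Otherwise an OrganizationRelationship (DAuth).
    for rel in org_relationships:
        if domain in rel["domains"]:
            return "OrganizationRelationship"
    # 3. Finally, an Availability Address Space entry.
    for space in availability_spaces:
        if domain == space["domain"]:
            return "AvailabilityAddressSpace"
    return None

iocs = [{"domains": {"contoso.onmicrosoft.com"}, "enabled": True}]
rels = [{"domains": {"fabrikam.com"}}]
spaces = [{"domain": "tailspintoys.com"}]

assert routing_source("contoso.onmicrosoft.com", iocs, rels, spaces) == "IntraOrganizationConnector"
assert routing_source("fabrikam.com", iocs, rels, spaces) == "OrganizationRelationship"
assert routing_source("tailspintoys.com", iocs, rels, spaces) == "AvailabilityAddressSpace"
```

Note the enabled check on the connector: a disabled IntraOrganizationConnector falls straight through to the OrgRelationship lookup, which matches the disable-and-fall-back behaviour described above.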

Remember OAuth is only for on-premises <-> Exchange Online users, so you might very well end up with both being used if you are both hybrid with Exchange Online and have partner relationships with other organizations.

Know this though: the HCW will always try to enable OAuth in your org if it can, because we want our customers to use OAuth for the reasons previously explained. If you disable the IntraOrganizationConnector and then re-run HCW, it will get re-enabled if your topology can support it.

Well done for making it this far. I hope you found this useful, if not today then at some point when you are having to explain to some security guy why it’s ok to go hybrid.

Please do provide any comments or ask questions if you want to, and if you want to read more here’s a list of articles I found helpful while putting this together.

References

Particular thanks for helping with this article go to Matthias Leibmann and Timothy Heeney for making sure it was technically accurate, and to numerous others who helped it make sense and mostly correct grammar.

Greg Taylor
Principal PM Manager
Office 365 Customer Experience

Office 365 Directory Based Edge Blocking support for on-premises Mail Enabled Public Folders

Exchange Team Blog -

Until now, our on-premises customers who use Mail Enabled Public Folders (MEPF) could not use services like Directory Based Edge Blocking (DBEB). If DBEB is enabled, any mail sent to a MEPF will be dropped at the service network perimeter. This is because DBEB queries Azure Active Directory (AAD) to find out whether a given mail address is valid. Because MEPFs are not synced to Azure Active Directory, all MEPF addresses are considered invalid by DBEB. The sender of mail to a MEPF would receive the following NDR:

'550 5.4.1 [<sampleMEPF>@<recipient_domain>]: Recipient address rejected: Access denied'.

To resolve this issue, in the latest Azure AD Connect tool update, we are introducing an option to synchronize MEPFs from on-premises AD to AAD. Admins can do this through the newly introduced option – ‘Exchange Mail Public Folders’ – on the Optional Features page of a Custom installation during Azure AD Connect tool installation/upgrade.

When you select this option and perform a full sync, all the Mail Enabled Public Folders from the on-prem AD(s) will be synced to AAD. Once synced, you can enable DBEB. Mail Enabled Public Folder addresses will no longer be considered invalid by DBEB, and messages will be delivered to them just like they are delivered to any other recipient.
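DBEB's recipient check can be sketched as a membership test against the synced directory. The directory shape here is an assumption for illustration, and the rejection text is the NDR quoted above:

```python
def dbeb_filter(recipient: str, synced_addresses: set):
    """Accept mail only for addresses DBEB can find in Azure AD;
    everything else is rejected at the service perimeter."""
    if recipient.lower() in synced_addresses:
        return "accept"
    # Addresses not synced to AAD draw the NDR quoted above.
    return f"550 5.4.1 [{recipient}]: Recipient address rejected: Access denied"

# Before syncing MEPFs, only ordinary recipients are present in AAD.
synced = {"user@contoso.com"}
assert dbeb_filter("user@contoso.com", synced) == "accept"
assert dbeb_filter("samplemepf@contoso.com", synced).startswith("550 5.4.1")

# After AAD Connect syncs the MEPF, its address is valid too.
synced.add("samplemepf@contoso.com")
assert dbeb_filter("SampleMEPF@contoso.com", synced) == "accept"
```

This makes clear why the feature matters: nothing about mail routing changes; only the directory that the perimeter check consults gains the MEPF addresses.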
Details of the AAD Connect tool version required

This feature is available in 1.1.524.0 (May 2017) version or any later versions of Azure AD Connect tool.

Azure AD Connect tool can be downloaded from the following location: Download Azure AD Connect.

For more details, here is the link to the version history of Azure AD Connect.

IMPORTANT NOTES:

  • Directory Based Edge Blocking is not yet supported for Mail Enabled Public Folders hosted in Exchange Online. This feature enables DBEB support only for Mail Enabled Public Folders hosted on-premises.
  • For Exchange Online Protection (EOP) Standalone – i.e. customers who have only Exchange on-premises configured, no presence in Exchange Online, and no “advanced” features of EOP – this synchronization through the AAD Connect tool is enough for DBEB to work.
  • For Exchange Online (EXO) & EOP – i.e. customers who have both on-premises Exchange and Exchange Online configured, or who are using features such as DLP or ATP – this feature does not create the actual public folder objects in the Exchange Online directory. Additional synchronization via PowerShell is required for DBEB to work if you are using Exchange Online.
  • For customers who are planning to migrate Public Folders from on-premises to Exchange Online: nothing in the migration procedure has changed with this feature. One extra point to take care of before starting Public Folder migration to EXO – ensure the ‘Exchange Mail Public Folder’ option in the Azure AD Connect tool is *not* checked. If it is checked, uncheck it before you start the migration. By default, it is unchecked.
Customers who had a work-around in place

There were some customers who did not want to disable DBEB despite having Mail Enabled Public Folders. These customers opted for a workaround of creating MSOL objects (like EOPMailUser, MailUser or MailContact) in Azure Active Directory with the same SMTP addresses as their Mail Enabled Public Folders, so that these addresses are considered valid by DBEB. Customers who opted for this workaround are requested to remove all such MSOL objects before syncing Mail Enabled Public Folders through the AAD Connect tool. If these ‘impersonation objects’ are not removed prior to the new synchronization, they are likely to cause a soft-match error. In that case, the sync of Mail Enabled Public Folders from on-prem AD to Azure Active Directory will not succeed, and an email similar to the following will be received:

“Identity synchronization Error Report: <Date>”

Unable to update this object because the following attributes associated with this object have values that may already be associated with another object in your local directory services: [ProxyAddresses SMTP:SampleMEPF@mail.contoso.com,smtp:SampleMEPF@contoso.com;Mail SampleMail@mail.contoso.com;]. Correct or remove the duplicate values in your local directory. Please refer to http://support.microsoft.com/kb/2647098 for more information on identifying objects with duplicate attribute values.

As mentioned in the description, you can correct or remove the entries with the duplicate SMTP address.

Once the objects have been cleaned up, performing a full sync will ensure Mail Enabled Public Folders are successfully synced to Azure Active Directory. More info here: http://support.microsoft.com/kb/2647098.

Public Folder Team

Demystifying Certificate Based Authentication with ActiveSync in Exchange 2013 and 2016 (On-Premises)

Exchange Team Blog -

Some of the more complicated support calls we see are related to Certificate Based Authentication (CBA) with ActiveSync. This post is intended to provide some clarifications of this topic and give you troubleshooting tips.
What is Certificate Based Authentication (CBA)?

Instead of using Basic or WIA (Windows Integrated Authentication), the device has a client (user) certificate installed, which is used for authentication. The user no longer has to save a password to authenticate with Exchange. This is not related to using SSL to connect to the server, as we assume that you already have SSL set up. Also, just to be clear (as some people have these things confused), CBA is not two-factor authentication (2FA).

How does the client certificate get installed on the device?

There are several MDM (Mobile Device Management) solutions that can install the client certificate on the device.

The most important part of working with CBA is to know where the client certificate will be accepted (or ‘terminated’). How you implement CBA will depend on the answers to the following questions:

  • Will Exchange server be accepting the client certificate?
  • Will an MDM or other device using Kerberos Constrained Delegation (KCD) be accepting the client certificate?

You can choose only one. You can’t have both Exchange and a device accepting the client certificate.

This post assumes that the user certificates have already been deployed in AD before CBA was implemented. The requirements for user certificates are documented here: Configure certificate based authentication in Exchange 2016.
If Exchange Server is accepting the client certificate

This configuration is simple and is fully documented in the following link, which applies to Exchange 2013/2016. The configuration for legacy versions follows the IIS configuration steps. The overall functionality of CBA has not changed across versions; however, the requirements may vary.

Configuration of CBA is done via IIS Manager. The overall steps are: installing the Client Certificate Mapping Authentication feature on all CAS servers, enabling client certificate authentication, setting SSL client certificates to “required” and disabling other authentication methods, and finally enabling client certificate mapping on the virtual directory.

Important Notes:

  1. You cannot use multiple authentication methods and have client certificates enabled on the virtual directory. The client must use either a client certificate or a username and password to authenticate, not both.
  2. SSL settings should be set to “Require” not “Accept”. You can have connection failures if set improperly.
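As a rough sketch of those IIS steps (assuming the default web site and virtual directory names, and using the WebAdministration module — the linked Exchange article remains the authoritative reference):

```powershell
# Sketch only: run elevated on each CAS server; adjust names to your environment.
Import-Module WebAdministration

$location = 'Default Web Site/Microsoft-Server-ActiveSync'

# 1. Install the Client Certificate Mapping Authentication feature
Install-WindowsFeature Web-Client-Auth

# 2. Enable Active Directory client certificate mapping on the virtual directory
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Location $location `
    -Filter 'system.webServer/security/authentication/clientCertificateMappingAuthentication' `
    -Name enabled -Value $true

# 3. Require SSL and *require* (not just accept) client certificates
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Location $location `
    -Filter 'system.webServer/security/access' `
    -Name sslFlags -Value 'Ssl,SslNegotiateCert,SslRequireCert'

# 4. Disable the other authentication methods (Basic shown here as one example)
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Location $location `
    -Filter 'system.webServer/security/authentication/basicAuthentication' `
    -Name enabled -Value $false
```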
If MDM or another device is accepting the certificate and using KCD to authenticate the client device

What is important to note here is that the client certificate is accepted at the device; therefore, you would NOT configure client certificates on Exchange.

  • Each vendor should have updated documentation for working with the current Exchange version.
  • To accept the client certificate, the MDM requires that KCD be configured to authenticate to Active Directory.
  • Most vendors expect Windows Integrated Authentication to be configured on IIS/Exchange. This allows the authentication to be passed without any additional prompts to the client device. All other authentication methods would be disabled.

Note: When you enable Integrated authentication on Exchange, ensure that the authentication “Providers” in IIS Manager have both NTLM and Negotiate enabled.
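One hedged way to check this from PowerShell (assuming the default virtual directory path):

```powershell
# Sketch: list the Windows Authentication providers on the EAS virtual
# directory; both Negotiate and NTLM should appear in the output.
Import-Module WebAdministration

(Get-WebConfiguration -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Location 'Default Web Site/Microsoft-Server-ActiveSync' `
    -Filter 'system.webServer/security/authentication/windowsAuthentication/providers').Collection |
    Select-Object Value
```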


Overall authentication process when the client certificate is accepted by the MDM:

  1. The client device contacts the MDM with a client certificate that contains the UPN in the Subject Alternative Name section of the certificate.
  2. The MDM authenticates the user with Active Directory.
  3. KCD issues a ticket to the MDM with the user’s credentials.
  4. The MDM sends the user’s credentials to Exchange, which has Windows Integrated (only) authentication configured.
  5. Exchange responds to the MDM with the mail data.
  6. The MDM responds to the client with the mail data.
Coexistence with Exchange, when Exchange is accepting the client certificate

When adding Exchange 2013/2016 to the environment and the Exchange 2013/2016 server is accepting the client certificate, it’s important to disable any client certificate configuration on the legacy CAS. This is because the client certificate will not be proxied to the legacy server. The authentication on the legacy CAS reverts to the default of Basic on the “Microsoft-Server-ActiveSync” virtual directory, and “Windows Integrated” on the subfolder named “Proxy”.


Troubleshooting

Here are some troubleshooting steps!
If Exchange Server is accepting the client certificate

If Exchange is configured to accept the client certificate, use the IIS logs and look for requests for /Microsoft-Server-ActiveSync. Determine the error code that is returned. IIS error codes are found here.

  • Verify the UPN configured in the “Subject Alternative Name” portion of the client certificate. In ADUC, click View > Advanced Features, locate the user account, select “Published Certificates”, and click the “Details” tab.
  • Client certificates and SSL “Required” should not be enabled on the Default Web Site, only on the “Microsoft-Server-ActiveSync” (MSAS) virtual directory.
  • Verify there are no additional authentication methods enabled on the MSAS virtual directory. See “Step 4” in Configure certificate based authentication in Exchange 2016
If MDM is accepting client certificate
  • With the MDM vendor, verify that KCD is working correctly by checking the security logs on the MDM to confirm Kerberos is working.
  • Verify the request is reaching Exchange by looking in the IIS logs for requests to /Microsoft-Server-ActiveSync.
  • Verify Windows Integrated (only) authentication is enabled on Exchange.
Attachments

If users have issues with attachments, follow “Step 7” in Configure certificate based authentication in Exchange 2016
Troubleshooting Logs and Tools

Use the IIS logs to determine if the device reached the Exchange server. Look for requests to /Microsoft-Server-ActiveSync virtual directory.

Refer to The HTTP status code in IIS 7.0, IIS 7.5, and IIS 8.0 KB for information on the various error codes in the IIS logs. Example of IIS error code: 403.7 – Client certificate required. From this you would verify that the device has the client certificate installed.
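For example, a quick PowerShell pass over the most recent IIS log (the log directory below is the IIS default for the first site; adjust for your environment):

```powershell
# Sketch: scan the newest IIS log for ActiveSync requests; the trailing
# fields of each line include the status and substatus codes (e.g. 403 7).
$log = Get-ChildItem 'C:\inetpub\logs\LogFiles\W3SVC1' -Filter '*.log' |
    Sort-Object LastWriteTime | Select-Object -Last 1

Select-String -Path $log.FullName -Pattern 'Microsoft-Server-ActiveSync' |
    Select-Object -ExpandProperty Line -Last 20
```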

  • IIS Logs – IIS logs can be used to review the connection for Microsoft-Server-ActiveSync. More info here.
  • Log Parser Studio – Log Parser Studio is a GUI for Log Parser 2.2. LPS greatly reduces complexity when parsing logs. Download it here

I wanted to thank Jim Martin for technical review of this post.

Charlene Stephens (Weber)

2nd call for public folders to O365 Groups migrations

Exchange Team Blog -

We got some replies to our previous post on the subject, but wanted to reach out again as we want to make sure we validate this scenario well. Therefore, here is an updated request:

If you are using Public Folders (legacy or modern) and would like to migrate some of them to Office 365 Groups, we are working on a solution for that. We are starting with the migration of Calendar and Mail folders and will move on to other types as we complete work on those. We would like customers who try this migration to provide feedback. Please email us the information below if you are interested. You can also send us your information if you would like to try migrating other types of public folders (other than Calendar and Mail) as we extend support to those folder types, but our immediate work is related to Calendar and Mail.

Drop us an email at: pftogroupmigration@service.microsoft.com

  • Customer name:
  • Tenant domain name in Exchange Online:
  • Location of public folders; on-premises or Exchange Online:
  • If on-premises, Exchange version of public folder servers:
  • Public folder types to migrate (Mail, Calendar – sooner; Task, Contact – later on):

Your organization might need to join our TAP program (depending on public folder location) – and if so, we will share those details with you after reviewing the above.

A little update to provide a timeline: as part of this, we are ready to start migrating Exchange Online (EXO) public folders to Groups right away, with legacy / on-premises public folders following within a few months.

Public Folder Migration team

Accessing public folder favorites

Exchange Team Blog -

Introduction

Seeing that Outlook desktop and Outlook on the web (or OWA, depending on version) do not support the same types of public folders (or folders added to Favorites), we wanted to talk about the expected behavior when public folders are used. We have seen some questions around this, so let’s clear it up!
Public folder types supported by different clients

Outlook supports public folders of the following types:

  • Calendar
  • Contact
  • InfoPath Form
  • Journal
  • Mail and Post
  • Note
  • Task

OWA supports only the following public folder types:

  • Mail and Post
  • Calendar
  • Contact
Adding public folder to favorites using Outlook or OWA

Adding public folders to Favorites is slightly different depending on the client. Please see this article which explains how to do it in the respective client.
Things to keep in mind

OWA only supports folder types such as Mail, Contact and Calendar. Support for public folders with folder type Tasks and others is not available in OWA, even though they can be added to Favorites using Outlook. If OWA does not support a specific folder type added to Favorites by Outlook, it will display the folder, but it will be greyed out.

Another behavior that is very different in OWA is that OWA does not have a common view for different folder types like Outlook does. When the user adds a public folder to Favorites using OWA, depending on the folder type, the user may not see it in the default Favorites view in OWA, but the folder might already be added to the respective app launcher tab.

To understand this better, let’s consider a scenario where you create a public folder with the item type set to “Contact items”. When you add this public folder to PF Favorites using OWA, it will list the folder type as highlighted in the following screenshot:

This means that you need to go to the corresponding section, such as Mail, Calendar or People, to access the different types of folders that were added to Favorites.

In the case of a folder of type Contact, it will be placed in the People tab as shown below:

Regular folders containing Mail items will continue to be added to the regular Favorites folder in OWA.

If the public folder being added using OWA is of folder type Calendar, then the calendar will be populated in the Other Calendars section:

If the added public folder needs to be removed from Favorites, right-click the relevant public folder and select the option to remove it from Favorites.
Public folder Favorites sync between Outlook and OWA

Public folder Favorites sync happens between Outlook and OWA: public folders (of supported types) added to Favorites using Outlook will sync to OWA, and vice versa.

Important: For this public folder Favorites sync feature to work between Outlook and OWA, the Outlook client should be fully updated. There were some known issues with Favorites sync in older versions of the Outlook client, so updating is important.

Any supported folder type added to public folder Favorites using Outlook will sync to OWA and will show in PF Favorites; in similar fashion, any supported folder added using OWA to the public folder favorites will be automatically added to the public folder favorites section in Outlook client.

The only additional consideration when adding a Mail folder to Favorites in Outlook is:

  1. You need to add the desired public folder to the public folder favorites using the method which has been discussed earlier.
  2. Public folders of Mail type additionally need to be added to the Default Favorites section in Outlook by selecting the option “Show in Favorites”. Once this option is selected, the respective folders will sync up to OWA and automatically appear in OWA Favorites.

Note: This option is only available for Mail folders (message class of IPM.Post); public folders of other types will not be seen under the Default Favorites section in the Outlook client.

As far as the other direction of the sync goes: a public folder added to Favorites using OWA will sync to the Outlook client and will show in the default public folder Favorites, but will not appear in Default Favorites in the Outlook client.
To recap

  1. Public folders added as Favorites using Outlook client will auto-populate in OWA in respective navigation tabs. This applies to folder types such as Mail, Contacts and Calendars.
  2. Public folders added as Favorites using OWA get automatically added to the Favorites section in Outlook client due to bi-directional sync.
  3. Public folders of Mail type can be auto-populated in OWA Default Favorites view by selecting the option “Show in Favorites” in the Outlook client.
  4. Removing the public folder from Favorites list using OWA will not remove it from Favorite list in Outlook client. It will need to be cleared manually from Outlook.

I hope readers find this post useful! I would like to thank the Public Folder Crew for their help reviewing the blog post. I would also like to say thanks to Nino Bilic and Scott Oseychik for their help in getting this blog post ready for publishing.

Siddhesh Dalvi
Support Escalation Engineer
Exchange Online Escalations


Sent Items behavior control comes to Exchange Online user mailboxes

Exchange Team Blog -

It has been a while since we blogged about the ability to control the behavior of Sent Items for shared mailboxes when users either send as or on behalf of shared mailboxes. Today, we are glad to share with you that this feature is currently rolling out for User mailboxes also! What does that mean in real life?

Let’s say you have the following scenario:

  • Mary is a delegator/manager on the team
  • Rob is a delegate on Mary’s mailbox; Rob has Send As or Send on behalf of rights on Mary’s mailbox.
  • When Rob sends an email as Mary, the email will be only in Rob’s Sent Items folder

With this feature enabled on Mary’s mailbox, Exchange will copy the message that Rob sends as Mary to the Sent Items folder in Mary’s mailbox. In other words, both Rob and Mary will have the message in their Sent Items folders.

We have heard this request more than once, and now we are rolling it out to an Exchange Online mailbox near you! The configuration and behavior of the feature is the same as for the shared mailbox.

Note: If the user has used the Outlook 2013 feature to change the folder that Sent Items are saved to, the messages will be copied to that folder instead of the user’s Sent Items folder. Users can reconfigure this by clicking the Save Sent Items To button on the Email Options tab.

  • To enable this behavior for messages Sent As the delegator:

    Set-Mailbox <delegator mailbox name> -MessageCopyForSentAsEnabled $true

  • To enable this behavior for messages Sent On Behalf of the delegator:

    Set-Mailbox <delegator mailbox name> -MessageCopyForSendOnBehalfEnabled $true

  • To disable this behavior for messages Sent As the delegator:

    Set-Mailbox <delegator mailbox name> -MessageCopyForSentAsEnabled $false

  • To disable this behavior for messages Sent On Behalf of the delegator:

    Set-Mailbox <delegator mailbox name> -MessageCopyForSendOnBehalfEnabled $false

Note: You can use the Office 365 Portal to configure this for shared mailboxes, but to configure user mailboxes, you’ll need to use PowerShell (the team would like to hear if you feel it should be in the Portal too!). For various other details of behavior, please see the shared mailbox post.
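To check what is currently configured on a delegator mailbox, something like the following should work (“mary” is a placeholder identity; run in Exchange Online PowerShell):

```powershell
# Sketch: inspect the current Sent Items copy settings on a delegator mailbox
Get-Mailbox mary | Format-List MessageCopyForSentAsEnabled, MessageCopyForSendOnBehalfEnabled

# And to turn on the Sent As copy behavior for that mailbox:
Set-Mailbox mary -MessageCopyForSentAsEnabled $true
```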

The next question some might have is: What about on-premises? We know you want to use this on-premises also, and we will update you when we have more details!

Enjoy!

The Calendaring Team

Help us test Cloud Attachments in Outlook 2016 with SharePoint Server 2016

Exchange Team Blog -

My name is Steven Lepofsky, and I’m an engineer on the Outlook for Windows team. We have released (to Insiders) support for Outlook 2016’s Cloud Attachment experience with SharePoint Server 2016. We need your help to test this out and give us your feedback!

So, what do I mean by “cloud attachments?” Let’s start there.

The Cloud Attachment Experience Today

Back when we shipped Outlook 2016, we included a refreshed experience for how you can add attachments in Outlook. To recap, here are a few of the new ways Outlook helped you to share your files and collaborate with others:

We added a gallery that shows your most recently used documents and files. Files in this list could come from Microsoft services such as OneDrive, OneDrive for Business, SharePoint hosted in Office 365 or your local computer. When you attach these files, you have the option of sharing a link to the file rather than a copy. With the co-authoring power of Microsoft Office, you can collaborate in real time on these documents without having to send multiple copies back and forth.

Is the file you’re looking for not showing up in the recent items list? Outlook includes handy shortcuts to Web Locations where your file might be stored:

And in a recent update, we gave you the ability to upload files directly to the cloud when you attach a file that is stored locally:

Adding Support for SharePoint Server 2016

Until now, Cloud Attachments were only available from Office 365 services or the consumer version of OneDrive. We are now adding the ability to connect to SharePoint Server 2016, so you can find and share files from your on-premises SharePoint server in a single click. We’d love your help testing this out before we roll it out to everyone!

The new experience will match what we have today, just with an additional set of locations. Once set up, you’ll have new entries under Attach File -> Browse Web Locations. These will show up as “OneDrive for Business” for a user’s personal documents folder, and “Sites” for team folders.

Note: If you also happen to be signed in to any Office 365 SharePoint or OneDrive for Business sites under File -> Office Account, both sets of sites may show up. The difference is that the Office 365 versions will have branding for your company. For example, it may say “OneDrive – Contoso” rather than “OneDrive for Business”, or “Sites – Contoso” rather than “Sites.”

You’ll be able to upload locally attached files to the OneDrive for Business folder located on your SharePoint Server.

And, of course, you’ll see recently used files from your SharePoint server start to show up in your recently used files list.

How to get set up

Here are the necessary steps and requirements to start testing this feature out:

  1. This scenario is only supported if you are also using Exchange Server 2016. You’ll need to configure your Exchange server to point to your SharePoint Server 2016 Internal and/or External URLs. See this blog post for details: Configure rich document collaboration using Exchange Server 2016, Office Online Server (OOS) and SharePoint Server 2016
  2. You’ll need Outlook for Windows build 16.0.7825.1000 or above.
  3. Ensure that your SharePoint site is included in the Intranet zone.
  4. Optional: Ensure that crawling is enabled so that your documents can show up in the recent items gallery. Other features such as uploading a local attachment to your site will work even if crawling is not enabled. See this page for more details: Manage crawling in SharePoint Server 2013

Once enrolled, any Outlook client whose mailbox is configured with your SharePoint Server’s information per step #1 above will start to see the new entry points for the server.
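If you want to confirm the client build before enrolling, a sketch like this (assuming a Click-to-Run install of Office 2016) reads the reported version from the registry:

```powershell
# Sketch: compare the installed Click-to-Run Office build with the minimum
# Outlook build required for this feature (16.0.7825.1000).
$cfg = 'HKLM:\SOFTWARE\Microsoft\Office\ClickToRun\Configuration'
$installed = [version](Get-ItemProperty -Path $cfg).VersionToReport
if ($installed -ge [version]'16.0.7825.1000') {
    "Build $installed meets the minimum for SharePoint Server 2016 cloud attachments."
} else {
    "Build $installed is too old; update Outlook first."
}
```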

We hope you enjoy this sneak peek, and please let us know how this is working for you in the comments below!

Steven Lepofsky

Exchange Server Edge Support on Windows Server 2016 Update

Exchange Team Blog -

Today we are announcing an update to our support policy for Windows Server 2016 and Exchange Server 2016. At this time we do not recommend customers install the Exchange Edge role on Windows Server 2016. We also do not recommend customers enable antispam agents on the Exchange Mailbox role on Windows Server 2016 as outlined in Enable antispam functionality on Mailbox servers.

Why are we making this change?

In our post Deprecating support for SmartScreen in Outlook and Exchange, Microsoft announced we will no longer publish content filter updates for Exchange Server. We believe that Exchange customers will receive a better experience using Exchange Online Protection (EOP) for content filtering. We are also making this recommendation due to a conflict with the SmartScreen Filters shipped for Windows, Microsoft Edge and Internet Explorer browsers.

Customers running Exchange Server 2016 on Windows Server 2016 without KB4013429 installed will encounter an Exchange uninstall failure when decommissioning a server. The failure is caused by a collision between the content filters shipped by Exchange and Windows, which have conflicting configuration information in the Windows registry. This collision also impacts customers who install KB4013429 on a functional Exchange Server: after the KB is applied, the Exchange Transport service will crash on startup if the content filter agent is enabled on the Exchange Server. The Edge role enables the filter by default and does not have a supported method to permanently remove the content filter agent. The new behavior introduced by KB4013429, combined with our product direction to discontinue filter updates, is causing us to deprecate this functionality in Exchange Server 2016 more quickly when Windows Server 2016 is in use.

What about other operating systems supported by Exchange Server 2016?

Due to the discontinuance of SmartScreen Filter updates for Exchange server, we encourage all customers to stop relying upon this capability on all supported operating systems. Installing the Exchange Edge role on supported operating systems other than Windows Server 2016 is not changed by today’s announcement. The Edge role will continue to be supported on non-Windows Server 2016 operating systems subject to the operating system lifecycle outlined at https://support.microsoft.com/lifecycle.

Help! My services are already crashing or I want to proactively avoid this

If you used the Install-AntiSpamAgents.ps1 to install content filtering on the Mailbox role:

  1. Find a suitable replacement for your email hygiene needs such as EOP or other 3rd party solution
  2. Run the Uninstall-AntiSpamAgents.ps1 from the \Scripts folder created by Setup during Exchange installation
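Step 2 might look like this in the Exchange Management Shell (restarting transport afterwards so the agent change takes effect — adjust to your change-control process):

```powershell
# Sketch: run the uninstall script shipped with Exchange, then restart the
# transport service. $env:ExchangeInstallPath is set by Exchange Setup.
& "$env:ExchangeInstallPath\Scripts\Uninstall-AntiSpamAgents.ps1"
Restart-Service MSExchangeTransport
```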

If you are running the Edge role on Windows Server 2016:

  1. Delay deploying KB4013429 to your Edge role or uninstall the update if required to restore service
  2. Deploy the Edge role on Windows Server 2012 or Windows Server 2012 R2 (preferred)

Support services are available for customers who may need further assistance.

The Exchange Team

Released: March 2017 Quarterly Exchange Updates

Exchange Team Blog -

With this month’s quarterly release, we bid a fond farewell to Exchange Server 2007. Support for Exchange Server 2007 expires on 4/11/2017. Update Rollup 23 for Service Pack 3 will be the last update rollup released for the Exchange Server 2007 product. Today we are also releasing the latest set of Cumulative Updates for Exchange Server 2016 and Exchange Server 2013. These releases include fixes to customer reported issues and updated functionality. Exchange Server 2016 Cumulative Update 5 and Exchange Server 2013 Cumulative Update 16 are available on the Microsoft Download Center. Update Rollup 17 for Exchange Server 2010 Service Pack 3 is also now available.

Exchange Server 2013 and 2016 require .NET Framework 4.6.2

As previously announced, Exchange Server 2013 and Exchange Server 2016 now require .NET Framework 4.6.2 on all supported operating systems. Customers who previously updated their Exchange servers to .NET Framework 4.6.1 can proceed with the upgrade to 4.6.2 before or after installing the updates released today. Customers who are still running .NET Framework 4.5.2 should deploy Cumulative Update 4 or Cumulative Update 15, upgrade the server to .NET Framework 4.6.2, and then deploy either Cumulative Update 5 or Cumulative Update 16.
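If you are unsure which .NET Framework version a server is running, the registry Release value can tell you (the release numbers below are the documented values for 4.6.2 and 4.7):

```powershell
# Sketch: read the .NET Framework 4.x Release value from the registry.
# 394802/394806 = 4.6.2; 460798/460805 = 4.7.
$release = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full').Release
if     ($release -ge 460798) { ".NET Framework 4.7 or later (Release $release)" }
elseif ($release -ge 394802) { ".NET Framework 4.6.2 (Release $release)" }
else                         { "Older than 4.6.2 (Release $release)" }
```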

Arbitration Mailbox Migration

Recently there have been reports of problems with customers migrating mailboxes to Exchange Server 2016. We wanted to take this opportunity to remind everyone that when multiple versions of Exchange co-exist within the organization, we require that all Arbitration Mailboxes be moved to a database mounted on a server running the latest version of Exchange. For more information, please consult the Exchange Server Deployment Assistance on TechNet.
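A sketch of how you might check where the arbitration mailboxes live and move them (“DB2016-01” is a placeholder database on your newest Exchange version; run in the Exchange Management Shell):

```powershell
# Sketch: list the arbitration mailboxes and their current homes, then move
# them to a database mounted on the latest Exchange version.
Get-Mailbox -Arbitration | Format-Table Name, ServerName, Database
Get-Mailbox -Arbitration | New-MoveRequest -TargetDatabase 'DB2016-01'
```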

Update on S/MIME Control

One year ago, we released an updated S/MIME Control for OWA. We have received questions from customers requesting clarification on what this release included. As stated previously, the control itself did not change. This was a packaging change necessary to prevent IE from throwing a certificate warning during installation due to SHA-1 deprecation. The Authenticode algorithm used to code sign the control uses a SHA-1 algorithm. SHA-1 ensures compatibility with Vista/Windows Server 2008 and Windows 7/Windows Server 2008 R2 code signing. The Authenticode file hash and delivery package are signed with a SHA-2 certificate. Signing the package with a SHA-2 certificate prevents IE from throwing a certificate warning when the package is installed and provides the necessary protection for the entire package.

Latest time zone updates

All of the packages released today include support for time zone updates published by Microsoft through March 2017.

TLS 1.2 Exchange Support Update coming in Cumulative Update 6

We would like to raise awareness of changes planned for the next quarterly update release. We are working to provide updated guidance and capabilities related to Exchange Server’s use of TLS protocols. The June 2017 release will include improved support for TLS in general and TLS 1.2 specifically. These changes will apply to Exchange Server 2016 Cumulative Update 6 and Exchange Server 2013 Cumulative Update 17.

Late Breaking Issues not resolved in Cumulative Update 5

Cumulative Update 5 ships with a couple of known issues that could not be resolved prior to the product release. The unresolved items we are aware of include the following:

  • When attempting to enable Birthday Calendars in Outlook for the Web, an error occurs and Birthday Calendars are not enabled.
  • When failing over a public folder mailbox to a different server, public folder hierarchy replication may stop until the Microsoft Exchange Service Host is recycled on the new target server.

Fixes for both issues are planned for Cumulative Update 6.

Release Details

KB articles that describe the fixes in each release are available as follows:

Exchange Server 2016 Cumulative Update 5 does not include new updates to Active Directory Schema. If upgrading from an older Exchange version or installing a new server, Active Directory updates may still be required. These updates will apply automatically during setup if the logged on user has the required permissions. If the Exchange Administrator lacks permissions to update Active Directory Schema, a Schema Admin must execute SETUP /PrepareSchema prior to the first Exchange Server installation or upgrade. The Exchange Administrator should execute SETUP /PrepareAD to ensure RBAC roles are current.

Exchange Server 2013 Cumulative Update 16 does not include updates to Active Directory, but may add additional RBAC definitions to your existing configuration. PrepareAD should be executed prior to upgrading any servers to Cumulative Update 16. PrepareAD will run automatically during the first server upgrade if Exchange Setup detects this is required and the logged on user has sufficient permission.
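For reference, the two preparation steps look like this when run manually from the installation media (the /IAcceptExchangeServerLicenseTerms switch is required for unattended setup):

```powershell
# Sketch: run from the CU installation media directory. PrepareSchema needs
# Schema Admins membership; PrepareAD needs Enterprise Admins membership.
.\Setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms
.\Setup.exe /PrepareAD /IAcceptExchangeServerLicenseTerms
```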

Additional Information

Microsoft recommends all customers test the deployment of any update in their lab environment to determine the proper installation process for your production environment. For information on extending the schema and configuring Active Directory, please review the appropriate TechNet documentation.

Also, to prevent installation issues, ensure that the Windows PowerShell script execution policy is set to “Unrestricted” on the server being upgraded or installed. To verify the policy settings, run the Get-ExecutionPolicy cmdlet from PowerShell on the machine being upgraded. If the policy is NOT set to Unrestricted, use the resolution steps in KB981474 to adjust the settings.
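A quick sketch of that check and fix from an elevated PowerShell prompt (KB981474 remains the authoritative guidance):

```powershell
# Sketch: verify the execution policy and adjust it if needed before running Setup.
if ((Get-ExecutionPolicy) -ne 'Unrestricted') {
    Set-ExecutionPolicy Unrestricted -Scope LocalMachine
}
```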

Reminder: Customers in hybrid deployments where Exchange is deployed on-premises and in the cloud, or who are using Exchange Online Archiving (EOA) with their on-premises Exchange deployment are required to deploy the most current (e.g., 2013 CU16, 2016 CU5) or the prior (e.g., 2013 CU15, 2016 CU4) Cumulative Update release.

For the latest information on Exchange Server and product announcements please see What’s New in Exchange Server 2016 and Exchange Server 2016 Release Notes. You can also find updated information on Exchange Server 2013 in What’s New in Exchange Server 2013, Release Notes and product documentation available on TechNet.

Note: Documentation may not be fully available at the time this post is published.

The Exchange Team

Announcing the availability of modern public folder migration to Exchange Online

Exchange Team Blog -

We are happy to announce the availability of public folder migration from Exchange Server 2013/2016 on-premises to Exchange Online! Many of our customers asked us for this, and the full documentation is now here. To ensure that any version-specific instructions are addressed appropriately, we have two articles to point you to:

While all of the information is located in the documentation, the key requirements are:

  • Exchange Server 2013 CU15 (or later), Exchange Server 2016 CU4 (or later)
  • Exchange on-premises hybrid configured with Exchange Online

If you have any additional questions, let us know in comments below. Enjoy!

Public Folder Migration Team

Multi-Factor Authentication for the Hybrid Configuration Wizard and Remote PowerShell

Exchange Team Blog -

You can now use an Administrator account that is enabled for Multi-Factor Authentication to sign in to Exchange Online PowerShell and the Office 365 Hybrid Configuration Wizard (HCW).

In case you are not aware, Azure Multi-Factor Authentication is a method of verifying who you are that requires more than just a username and password. Using MFA for Office 365, users are required to acknowledge a phone call, text message, or app notification on their smartphones after correctly entering their passwords. They can sign in only after this second authentication factor has been satisfied. You can read more about the Office 365 Multi-Factor Authentication option here.

Many Exchange Online customers wanted the extra level of security offered by Multi-Factor Authentication, which lets you require administrator accounts to use MFA. However, because of a limitation in Remote PowerShell, Exchange Online administrators could not connect with a Multi-Factor-enabled account. In addition, because the Office 365 Hybrid Configuration Wizard also requires Remote PowerShell connections to Exchange Online, prior to now the account you used to run the HCW could not be enabled for Multi-Factor Authentication.

The Exchange Online PowerShell Module

There is a new module that can be downloaded to allow you to connect with an account that is enabled for Multi-Factor Authentication. You can download the module from the Exchange Online Administration Center (the steps are outlined in this article).

Note: We do not plan to discontinue traditional methods of connecting to Remote PowerShell; if you are not using Multi-Factor Authentication you can continue to connect using the methods you already have in place.
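Once the module is installed, connecting looks something like this (Connect-EXOPSSession is the cmdlet provided by the downloaded module; the UPN below is a placeholder):

```powershell
# Sketch: connect to Exchange Online PowerShell with an MFA-enabled account.
# You will be prompted for the password and the second authentication factor.
Connect-EXOPSSession -UserPrincipalName admin@contoso.onmicrosoft.com
```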

The Hybrid Wizard Update

The Hybrid Wizard has also been updated to allow for Multi-Factor Authentication enabled administrators to authenticate.

Note: There is an issue with this new Authentication method in the 21 Vianet Greater China tenants. For customers with Tenants in that region you cannot use the MFA module or Hybrid integration mentioned in this article and should instead use the Hybrid Wizard located here: http://aka.ms/HCWCN

In order to keep the sign-in experience consistent for all customers, whether they have MFA enabled or are using traditional credentials, we have updated the credentials page in the wizard.

On the Credentials page of the wizard, you will see that the “next” button is not available. You are required to pick your credentials for on-premises (which by default will be the currently signed-in credentials) and “sign in” to Office 365.

Once you select “sign in” you will be prompted for credentials in a familiar looking screen.

If you have Multi-Factor Authentication enabled for the administrator, you would then be prompted for the second factor of authentication.

Once verified, you would see the credential card for both the on-premises and Exchange Online administrators. You will also notice that the “next” button is now activated.

Conclusion

Your feedback about not being able to use an MFA-enabled account for Exchange Online administration was loud and clear! Please keep providing us feedback so we can continue to identify and address your needs.

The Exchange Team
