
From hybrid / fully joined devices to Entra ID



Adversaries are increasingly interested in the data and infrastructure that live in cloud environments like Azure and Microsoft 365. Since Microsoft Entra ID is the most common central identity provider (IdP) for these environments, it is important to identify the paths attackers can use to move from a device to the crown jewels that live in these cloud solutions.

In this blog post, I want to talk about how adversaries can use Entra ID joined or hybrid joined devices to move laterally to the cloud using Entra ID SSO features, and how they can get a foothold on these devices. This blog post is based on a red-teaming scenario I encountered in real life, and is written from a blue-teaming perspective.


Since a lot of the knowledge I gathered came from previously published, great blog posts, I want to give credit to the people who helped me understand this topic. Everything I learned about how the SSO features can be used to move to Entra ID, and how we can detect it, came from:

Why joined and hybrid devices are key

As you probably know, Entra ID joined and hybrid joined devices have SSO capabilities that make logging into applications and solutions easier for the end user. In many cases, users can sign in to these applications without having to enter extra credentials or MFA. It is also a common scenario for Conditional Access policies to require compliant devices in order to perform tasks or sign in to applications, which makes it even more interesting to exploit company devices to evade the policies that were set up in Entra ID.

If an attacker can exploit these devices, protection mechanisms like Identity Protection and suspicious sign-in rules are also blind, since the tokens or sign-in requests will not contain suspicious properties. Even though the scenario is harder to complete, it is a very interesting defense evasion scenario once accomplished.

How to get a foothold on a device

Before a device can be used to move to cloud solutions, the attacker needs to get a foothold on the device. There are a couple of procedures they can use to accomplish this, but in this post we will look at a procedure where a beacon is set up using Excel add-ins (XLL files).

You might think to yourself 'No worries, all of our devices have MDE onboarded. If this happens, we will detect it right away.' Well, think twice! MDE is one of the most popular EDR solutions, which means a lot of attackers make a sport out of evading it. In fact, in the procedure shown above, not a single step was detected by MDE without custom detection rules! Afterward, I could not really blame MDE for it, but it remains a scary thought.

Detecting the foothold

Initial Access / Phishing

At first, the attacker needs an end user to download their payload. The technique most likely to be used is phishing. This can be a regular phish or a spear phish, delivered over various platforms like Teams, email, social media, etc. I will not go into all the ways of protecting against and detecting phishing techniques, since there is a huge number of techniques and platforms that can be used. But in general, I would advise the following:

  • GOOD(!) user education. By good user education, I mean not relying purely on sending phishing simulations to your end users without further training and expecting no one will fall for phishing anymore. Trust me, if you do, you are fooling yourself. 
  • Use a mail filtering solution to protect against the bulk of phishing mail. Of course, you can perfectly use Defender for Office 365, but know that very good and sophisticated phishing emails will most certainly still slip through your phishing net. 
  • Restrict (if possible) collaboration over Teams to only organizations your company is working with. This will effectively reduce phishing over Teams coming from malicious tenants, which is a technique regularly seen these days.

In the scenario I encountered, the payload was downloaded from a basic storage account hosted in Azure. No specific or obscure domain, just a normal storage account where the payload was downloaded from the default https://<mystorageaccount>.blob.core.windows.net URL. Pretty smart, to be honest, since there was no way to match the URL with previous IOCs I could find in threat intelligence platforms.
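If you want to hunt for similar payload deliveries, a starting point could be a query like the one below. This is a rough sketch, not a production rule: the blob.core.windows.net match will also hit plenty of legitimate traffic, so the file type list and the prevalence threshold are assumptions you will need to tune for your environment.

```kql
// Hunt for low-prevalence files downloaded from Azure blob storage URLs (sketch, tune before use)
DeviceFileEvents
| where Timestamp > ago(7d)
| where FileOriginUrl has ".blob.core.windows.net"
// Focus on file types commonly used for payload delivery
| where FileName endswith ".xll" or FileName endswith ".zip" or FileName endswith ".iso" or FileName endswith ".img"
| invoke FileProfile(SHA256)
| where GlobalPrevalence < 500 or isempty(GlobalPrevalence)
| project Timestamp, DeviceName, FileName, FileOriginUrl, SHA256, GlobalPrevalence
```

Note that FileOriginUrl is only populated for files carrying a Mark of the Web, so payloads smuggled inside containers may not show the original URL.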


A popular sub-technique that is used to gain persistence on a device is 'T1137.006 - Office Application Startup: Add-ins', where attackers use XLL files that contain heavily obfuscated beacons to create a Command and Control call. These XLL files can then be used to execute code and start the beacon when an Office application (in this case Excel) starts. In the real-life scenario I encountered, MDE did not detect any of this by default. Do I think MDE is a bad EDR because of this? Not really. The XLL file was heavily obfuscated and designed to fly under the radar. But how can we detect this technique ourselves? Well, let's first go a little deeper into the technique itself. 

I found this GitHub page an interesting read. The author talks about how XLL files can be dropped on a device, and what the challenges and advantages are. To summarize: since Microsoft announced blocking macros coming from the internet, XLL files have become a great alternative for attackers to get code execution on a machine. XLL files can be written in C, C++, or C#, and are regularly excluded from antivirus scanning since they are executed by the trusted excel.exe process. However, there are very few legitimate use cases for XLLs, which actually makes the technique fairly easy to detect or prevent. 

Depending on the solutions you have to control downloads of certain files (think of L7 firewalls or SASE solutions), you can create a detection or prevention rule that does not allow downloads of XLL files. Of course, this can be circumvented by smuggling the file past the defensive systems inside other file types like IMG, ISO, ZIP, etc. Since I would expect an organization to flag ISO and IMG files, ZIP would be one of the best options to circumvent preventive and detective controls.

If detecting the download is not an option for you, you can create a custom detection rule in MDE to flag suspicious XLL files. A very easy detection would be a rule that checks file events where the extension is XLL and the file has a low global prevalence. 

DeviceFileEvents
| where Timestamp > ago(1h)
| where ActionType in ("FileCreated", "FileModified")
| where FileName endswith ".xll"
| invoke FileProfile(SHA256)
| where GlobalPrevalence < 500 or isempty(GlobalPrevalence)

This rule is very simple but effective, since most organizations do not have legitimate use cases for XLL files. If you do have legitimate use cases, you can easily fine-tune the rule to whitelist your specific scenarios.

When the XLL file is used to establish a beacon, the file is not persistent by itself. It needs to be placed in the startup folder c:\Users\<username>\AppData\Roaming\Microsoft\Excel\XLSTART\ in order to be persistent. This can be done by the attacker as a first step via the initial beacon connection that is set up when the user opens the XLL file manually. The problem the attacker will encounter here is that the ASR rule 'Block Office applications from creating executable content' will prevent this from happening (if the rule is in block mode, of course). Even though this is a good preventive control against persistence, the attacker can always try to circumvent it by performing defense evasion, which we will talk about in the next section.
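Writes to the XLSTART folder are rare enough in most environments that they can be flagged directly. The sketch below is a starting point under that assumption; if your users legitimately save templates or add-ins there, you will need to add exclusions.

```kql
// Flag files dropped into the Excel XLSTART startup folder (sketch, tune before use)
DeviceFileEvents
| where Timestamp > ago(1h)
| where ActionType in ("FileCreated", "FileModified", "FileRenamed")
| where FolderPath has @"\AppData\Roaming\Microsoft\Excel\XLSTART\"
| project Timestamp, DeviceName, FileName, FolderPath, InitiatingProcessFileName, InitiatingProcessCommandLine
```

This catches the persistence step regardless of whether the initial XLL download was spotted, which makes it a useful complement to the low-prevalence rule above.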

Defense Evasion

As you probably know, Microsoft has a couple of ASR rules that protect against Office Application abuse typically used to perform execution and persistence:

  • Block all Office applications from creating child processes
  • Block Office applications from creating executable content
  • Block Office applications from injecting code into other processes
  • Block Office communication application from creating child processes

Even though these ASR rules work great in most cases, attackers are still able to bypass some of them (aka perform defense evasion). One of the techniques they can use for this is "T1134.004 - Access Token Manipulation: Parent PID Spoofing", where they perform actions from Office applications by spoofing the parent process ID of the process performing the action. A procedure I encountered is one where the attackers created a new process (in this case mfpmp.exe) from excel.exe while spoofing the parent process to msedge.exe. 

For this procedure, MDE brought some good and bad news. MDE did detect the parent process spoofing in the Timeline of the affected device:

But the events needed to detect the parent process spoofing in Advanced Hunting were not there! When we looked up the event in Advanced Hunting, we could only find the same event on the same timestamp but with msedge.exe as the parent process:

This was really astonishing to me. I always knew that MDE filters out some events that are known to be verbose most of the time, but this was the first time I encountered a device timeline containing data that could not be seen in Advanced Hunting. As of today, I still have not managed to get an explanation as to why this is the case. So for now, I conclude that extra event sources like Windows Events or Sysmon will be needed to detect this in the future. Either way, by performing this defense evasion technique the ASR rule could be circumvented and the XLL file could be made persistent.

Command and Control

Establishing a Command and Control connection can be performed using various techniques. The technique I encountered and found very interesting was 'T1090.004 - Proxy: Domain Fronting', where the attacker used the fastly.com CDN to redirect the traffic to their own Command and Control servers. Here they crafted specific HTTPS requests with the SNI field set to fastly.com and the Host header set to the custom domain of the attacker. If no SSL/TLS inspection is performed, the traffic appears to go to fastly.com, while it is actually being redirected to the attacker's custom domains. 

As briefly mentioned before, SSL/TLS inspection is needed to try and detect domain fronting inside the packets. This can be done using various data sources like L7 firewalls and SASE solutions. Since these types of data sources were not present in the scenario I encountered, I was not able to test which kinds of detections we could create for this. The out-of-the-box detection capability and custom rule creation would also strongly depend on which data source is used in the environment. I am looking forward to testing this scenario with Microsoft Entra Internet Access in the future.

How to move to Entra ID

Once an attacker has persistence and code execution on a device, it is time to start moving to the cloud. One popular way to accomplish this is by requesting new PRT cookies via BrowserCore or COM objects, and using these to gather access tokens for the applications they want to log into and (ab)use. How can they do it? Below you find a schematic overview.

Detecting the move

Requesting a NONCE

Before authentication to Entra ID can be initiated, a NONCE needs to be requested. This is a random value that can be used just once and is only valid for about 5 minutes. The NONCE needs to be sent in the request for a new PRT cookie so that the CloudAP plugin can sign it using the user's private key. This essentially binds the eventually signed PRT cookie to the specific login event.

Since this NONCE request is just a normal request to the login.microsoftonline.com/common/oauth2/authorize URI, I was not able to bake a detection rule that would flag 'suspicious NONCE requests'.

Requesting a PRT token

In the second phase, the NONCE is used to request a PRT cookie. This can be done in two ways:

  • Using the ROADtoken tool, which essentially uses BrowserCore.exe to request a new PRT cookie with the current existing authentication context.
  • Using the RequestAADRefreshToken or aad_prt_bof tool, which essentially uses the MicrosoftAccountTokenProvider DLL to request a new PRT cookie. 

As you see in the schematic overview, using BrowserCore will eventually trigger a DeviceProcessEvent, and using the MicrosoftAccountTokenProvider DLL will eventually trigger a DeviceImageLoad event in MDE.

The easiest way to detect a malicious token request on a device is by flagging abuse of BrowserCore. When the NONCE is provided via stdin and the PRT cookie is exported via stdout, MDE should already flag this by default:

However, as the folks at FalconForce explain, this detection can be bypassed by creating two named pipes which Chrome also uses. 

cmd.exe /d /c "C:\Program Files\Windows Security\BrowserCore\BrowserCore.exe" chrome-extension://ppnbnpeolgkicgegkbkbjmhlideopiji/ --parent-window=0 < \\.\pipe\chrome.nativeMessaging.in.RANDOMNUMBERHERE > \\.\pipe\chrome.nativeMessaging.out.SAMERANDOMNUMBERHERE

Luckily, the bypass by using two named pipes can be detected by creating a custom detection rule in MDE which you can find on the FalconFriday GitHub page.
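The core idea of such a rule can be sketched as follows: flag processes other than the browsers themselves creating named pipes that mimic Chrome's native messaging pipes. MDE exposes named pipe activity through the NamedPipeEvent action type in DeviceEvents, with the pipe name inside AdditionalFields; the exclusion list below is an assumption you should extend with any legitimate browsers in your estate.

```kql
// Sketch: non-browser processes creating Chrome-style native messaging pipes
DeviceEvents
| where Timestamp > ago(1h)
| where ActionType == "NamedPipeEvent"
| extend PipeName = tostring(parse_json(AdditionalFields).PipeName)
| where PipeName has "chrome.nativeMessaging"
| where InitiatingProcessFileName !in~ ("chrome.exe", "msedge.exe")
| project Timestamp, DeviceName, PipeName, InitiatingProcessFileName, InitiatingProcessCommandLine
```

For the production-grade version with the full exclusion logic, refer to the FalconFriday rule mentioned above.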

The second way of requesting the PRT cookie, where the MicrosoftAccountTokenProvider DLL is used, is more difficult to detect. This is because the procedure triggers a DeviceImageLoad event, which is heavily filtered by default in MDE. The query to detect suspicious PRT requests via this route is not that hard and can be found below:

DeviceImageLoadEvents
| where FileName =~ "MicrosoftAccountTokenProvider.dll"
| summarize by InitiatingProcessSHA1
| invoke FileProfile(InitiatingProcessSHA1, 1000)
| where GlobalPrevalence < 250

However, due to the heavy filtering on the DeviceImageLoad events, the false negative rate for this rule will most certainly be high. Regardless, it might still be a good idea to use this query as a custom detection, since the chance of a true positive when it hits is pretty high. But what if you really want to detect this procedure with more certainty? I'm afraid this will only be possible by using extra data sources like Sysmon.
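If Sysmon image load events (Event ID 7) are configured to capture this DLL and forwarded to your workspace, a rough sketch of such a rule could look like the query below. The Event table name and the XML parsing are assumptions that depend entirely on how you ingest Sysmon data; the exclusion is a placeholder to tune.

```kql
// Sketch: Sysmon Event ID 7 (ImageLoad) for MicrosoftAccountTokenProvider.dll
// Assumes Sysmon events are forwarded to the Log Analytics 'Event' table
Event
| where Source == "Microsoft-Windows-Sysmon" and EventID == 7
| where EventData has "MicrosoftAccountTokenProvider.dll"
| parse EventData with * '<Data Name="Image">' LoadingImage '</Data>' *
| where LoadingImage !has @"\Windows\" // exclude OS processes, tune for your environment
| project TimeGenerated, Computer, LoadingImage
```

Unlike the MDE DeviceImageLoadEvents table, Sysmon gives you full control over which image loads are logged, closing the filtering gap at the cost of extra ingestion volume.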

!UPDATE - On 01/03/2024 I created a new blog post explaining how we can detect this gap using WDAC instead of Sysmon!

Requesting an access token

Once a PRT cookie is returned, it is time to request an access token for an application. This can be done using roadtx from the ROADtools framework, which gives you the possibility to change the application ID and resource ID you are requesting a token for via the -c and -r parameters. Once the token is returned, you can use the access token to authenticate to the requested application, for the scope you used during the access token request.

Here we have a couple of detective and preventive controls we can enable in order to protect ourselves against suspicious access token requests.

By default, roadtx will request an access token for the application 'Azure AD PowerShell' and the resource 'Azure AD Graph' if the command roadtx prtauth is used. If an attacker does this on a device not belonging to an IT admin, we can detect this access token request, since we do not expect a non-admin profile to request access to Azure AD PowerShell.

let ITAccounts = (_GetWatchlist('ITAccounts') | summarize make_set(ITAccounts));
// Materialize dataset
let DataSetMat = materialize (SigninLogs
| where TimeGenerated > ago(1h)
| where AppDisplayName has_any ("PowerShell", "CLI", "Command Line", "Management Shell")
// Get successful logins and logins that failed due to no assignment
| where ResultType in ("0", "50105")
| summarize max(TimeGenerated) by UserPrincipalName, AppDisplayName, IPAddress, UserId, ResultType
// Join IdentityInfo to get more information
| join kind=leftouter (IdentityInfo | where TimeGenerated > ago(14d) | summarize arg_max(TimeGenerated, *) by AccountObjectId) on $left.UserId == $right.AccountObjectId
// Exclude accounts with assigned roles
| where array_length(AssignedRoles) == 0
// Exclude known IT personnel departments
| where Department !has "it" and Department !has "ict" and Department !has "operations"
// Exclude service accounts
| where JobTitle != "Service Account");
// Find accounts matching the IT accounts watchlist
let FIL = (DataSetMat
| extend ITAccounts = toscalar(ITAccounts)
| mv-expand ITAccounts
| where AccountUPN contains ITAccounts or AccountDisplayName contains ITAccounts);
// Exclude the IT accounts from the dataset
DataSetMat
| join kind=leftanti FIL on AccountUPN
| distinct max_TimeGenerated, UserPrincipalName, AppDisplayName, IPAddress, JobTitle, Department, UserId, ResultType

Do know that this detection can be circumvented by using a different Application in the request! 

RoadRecon uses graph.windows.net (aka Azure AD Graph), which has the resource ID 00000002-0000-0000-c000-000000000000. Depending on what the attacker wants to achieve, they will need to have consented to a couple of scopes. If the attacker wants to perform reconnaissance of the Entra ID environment, all they need is the user_impersonation scope since every normal user in Entra ID is allowed to read applications, groups, users, etc. 

Looking at the available scopes in the roadtx documentation, we can find for every application which scopes can be used to access the Azure AD Graph resource. If you search for resource ID 00000002-0000-0000-c000-000000000000, you will see that all applications except Microsoft Teams can be used to perform reconnaissance with the user_impersonation scope. So if an attacker wants to perform reconnaissance without hitting the 'suspicious PowerShell use by non-IT admins' rule, they can request an access token via any normal application that would not be suspicious for a non-IT profile to use.

But there is one caveat. RoadRecon uses the reply URL "https://login.microsoftonline.com/common/oauth2/nativeclient", which means the application the attacker is trying to get an access token for needs to have this reply URL configured in order for the token to be returned. There are only a handful of applications with this reply URL, but know that attackers can circumvent this by providing custom reply URLs for applications that have URLs that can be abused (such as localhost, azurewebsites.net, trafficmanager.net, blob.core.windows.net). 

Also note that the 'PowerShell use by non-IT personnel' rule will of course not work when the device of an IT admin is exploited. In that case, it is hard to detect suspicious access token requests coming from the end user's device.

Discovery on Microsoft Graph API

A second way to detect reconnaissance is by creating rules that flag deviations from a known normal baseline, consecutive typical reconnaissance operations, suspicious user agent detections, etc. Since the launch of the Microsoft Graph API audit logs, we can now use these rules to try to detect reconnaissance via the Microsoft Graph API. You can view an example below where we detected a suspicious amount of list operations, which were triggered by the attackers due to a high amount of reconnaissance operations.
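As an illustration of such a rule, the sketch below counts Microsoft Graph read operations per user in short windows. The table name reflects the Microsoft Graph activity logs in Log Analytics, but the threshold and time window are assumptions that must be tuned against your environment's normal baseline.

```kql
// Sketch: unusually high number of Microsoft Graph read (list) operations per user
MicrosoftGraphActivityLogs
| where TimeGenerated > ago(1h)
| where RequestMethod == "GET"
| summarize Requests = count(), DistinctUris = dcount(RequestUri) by UserId, UserAgent, bin(TimeGenerated, 10m)
| where Requests > 500 // assumed threshold, tune to your baseline
```

Combining the request count with the number of distinct URIs helps separate bulk enumeration from an application polling a single endpoint.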

Sadly enough, tools like RoadRecon still use the legacy Azure AD Graph, which does not have audit log capabilities like the Microsoft Graph does. As long as Azure AD Graph can still be used, attackers will be able to evade reconnaissance detections like the one mentioned above. Azure AD Graph has been deprecated since June 30, 2023, but at the time of writing there is no formal timeline for when the API will be retired completely.

Preventing the move

Even though the user_impersonation scope is sufficient to perform reconnaissance in Entra ID via applications other than the Azure AD PowerShell application, it is still a good idea to protect the default PowerShell applications that exist in a tenant. This reduces the chance of attackers getting a valid access token and makes their lives a little harder when they want to move to Entra ID.

The first way we can protect the Azure AD PowerShell application is by forcing an assignment requirement for the application. But here comes the tricky part. By default, the Azure Active Directory PowerShell application is not surfaced in the Entra ID portal:

Even when we try to do it via the Microsoft Graph SDK, we cannot find it:

So the service principal does not exist, which means we cannot use it to log in, right? Well, no: this is a default service principal that can still be used to log in even though you cannot find it by default in your tenant. To protect yourself against it, you first need to surface the service principal by creating it yourself. For this, you will need the AppId (ClientId) of the service principal, which is globally unique and the same for every tenant. The ID you need is 1b730954-1685-4b74-9bfd-dac224a7b894.

$SP = New-MgServicePrincipal -AppId 1b730954-1685-4b74-9bfd-dac224a7b894
Update-MgServicePrincipal -ServicePrincipalId $SP.Id -AppRoleAssignmentRequired:$true

The above commands will create the service principal and set the assignment required property to true. If we try to get the service principal for Azure AD PowerShell afterward, you can see that it now exists.

Notice how the service principal immediately gets recognized and receives the display name 'Azure Active Directory PowerShell', even though we only created it using the AppId and never set any other property. Now that the Enterprise Application exists, you can view it in the Entra ID portal:

If a user now tries to connect to Azure AD PowerShell without being assigned to the application, they will get an error message:

This effectively blocks the attacker from requesting an access token for Azure AD Graph via Azure Active Directory PowerShell. I would recommend doing the same for Microsoft Graph PowerShell (now named Microsoft Graph Command Line Tools), since some reconnaissance tools have already switched to the new Microsoft Graph API as well. If you would like a script that helps you with this, you can use the script I created below.

# Functions
function InstallModules {
    param ()

    # Modules to install (Microsoft.Graph.Applications provides the *-MgServicePrincipal cmdlets)
    $ModulesToInstall = @(
        "Microsoft.Graph.Applications"
    )
    # Install modules when they do not exist
    $ModulesToInstall | ForEach-Object {
        if (-not (Get-Module -ListAvailable -All $_)) {
            Write-Host "Module [$_] not found, installing..." -ForegroundColor DarkGray
            Install-Module $_ -Force
        }
    }
    # Import modules
    $ModulesToInstall | ForEach-Object {
        Write-Host "Importing Module [$_]" -ForegroundColor DarkGray
        Import-Module $_
    }
}

# Main
# Install modules
Write-Host "┏━━━" -ForegroundColor Cyan
Write-Host "┃ Installing/Importing PowerShell modules" -ForegroundColor Cyan
Write-Host "┗━━━" -ForegroundColor Cyan
InstallModules

# Authentication
Write-Host "┏━━━" -ForegroundColor Cyan
Write-Host "┃ Logging you in" -ForegroundColor Cyan
Write-Host "┗━━━" -ForegroundColor Cyan
Connect-MgGraph -Scopes Application.ReadWrite.All | Out-Null

Write-Host "┏━━━" -ForegroundColor Cyan
Write-Host "┃ Restricting apps" -ForegroundColor Cyan
Write-Host "┗━━━" -ForegroundColor Cyan
# Apps to limit
# 14d82eec-204b-4c2f-b7e8-296a70dab67e --> Microsoft Graph PowerShell / Microsoft Graph Command Line Tools
# 1b730954-1685-4b74-9bfd-dac224a7b894 --> Azure Active Directory PowerShell
# 1950a258-227b-4e31-a9cf-717495945fc2 --> Microsoft Azure PowerShell
$AppIds = @("14d82eec-204b-4c2f-b7e8-296a70dab67e","1b730954-1685-4b74-9bfd-dac224a7b894")

foreach ($AppId in $AppIds) {
    # Get existing service principal
    $SP = (Get-MgServicePrincipal -Filter "AppId eq '$($AppId)'")
    # Create the service principal if it does not exist
    if (-not $SP) {
        $SP = New-MgServicePrincipal -AppId $AppId
        Write-Host "  ┖─ Did not find a service principal with AppId $AppId, so created one" -ForegroundColor Yellow
    } else {
        Write-Host "  ┖─ Found service principal with AppId $AppId" -ForegroundColor Green
    }
    # Set assignment required
    Update-MgServicePrincipal -ServicePrincipalId $SP.Id -AppRoleAssignmentRequired:$true
    Write-Host "  ┖─ Updated assignment required for $AppId with display name $($SP.DisplayName)" -ForegroundColor Green
}

Do not forget to assign the accounts who are still allowed to use the application to the Enterprise Applications afterward.
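Once assignment is required, attempts by unassigned users show up as failed sign-ins with error code 50105, which you can monitor as an early warning signal. A minimal sketch (the application list mirrors the applications restricted above and should match whatever you locked down):

```kql
// Sketch: failed sign-ins caused by the assignment requirement (error 50105)
SigninLogs
| where TimeGenerated > ago(1h)
| where ResultType == "50105"
| where AppDisplayName in ("Azure Active Directory PowerShell", "Microsoft Graph Command Line Tools")
| project TimeGenerated, UserPrincipalName, AppDisplayName, IPAddress
```

A hit on this rule means someone tried to use a restricted PowerShell application without being assigned, which for most users should simply never happen.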

Another preventive control that should be set is disabling the use of MsolPowerShell, i.e. the old MSOnline module. This module is legacy and should never be used again. To disable it you can use:

Update-MgPolicyAuthorizationPolicy -BlockMsolPowerShell:$true

After that, the MSOnline module will be blocked for all users:


Detection confidence

I wanted to quickly show the detection confidence I think you have when using MDE with the custom detection rules. For this, I first created an Attack Flow for the complete scenario:

Using the below colors, we can see where we have good detections and where there are still gaps.

  • Green - High confidence
  • Blue - Somewhat confident
  • Red - Low confidence

Looking at this diagram, we can conclude that you cannot have high confidence you will spot suspicious or malicious token requests once the attacker has a beacon on the device if you rely on MDE alone.

Extra data sources

To further improve the detections, extra data sources like Sysmon and an SSE solution are needed:

  • Green - Sysmon needed
  • Red - SSE needed

These will be important data sources to spot beaconing started by other techniques than malicious XLL files or to spot suspicious token requests via COM objects.

Scenario Impact

Calculating the correct impact is hard without the business context of course. However, it is important to note that the impact of this scenario depends on what permissions and roles the user has in Entra ID, SharePoint, Teams, Azure, etc. If the user is a normal user without any extra privileged roles, the impact is limited to the access the user has to data and applications and the discovery every user can perform in, for example, Entra ID. 

But when the user of the device also has privileged roles in Entra ID or Azure, it becomes a dangerous scenario and the impact can be high to critical. Since the malicious requests are coming from the device itself, protection mechanisms like Identity Protection and most of the Conditional Access policies will not work. There is however an improvement that can be made in Conditional Access policies to further limit the risk, although it is not bulletproof.

How to protect your organization

Make sure privileged roles are protected behind PIM, and require an authentication context when a role is activated. Requiring a compliant or hybrid joined device as context will not work, of course, but what you can do is require an authentication strength of phishing-resistant MFA. This makes sure a FIDO2 key needs to be used when activating the roles. Do keep in mind that this can be circumvented if the user is already logged in with a FIDO2 key, since this claim will also be satisfied in the token when the attacker requests a new token. An extra preventive control would be requiring a specific FIDO2 key (via the FIDO2 advanced options) for privileged roles only, effectively shrinking the window in which a malicious access token can be requested with this context (not tested, though).

An even better method is to additionally protect privileged roles in PIM with an approval process. If your approvers are trained not to blindly approve PIM requests, the attacker will not be able to elevate privileges via PIM.

The best option to mitigate this scenario is using Privileged Access Workstations for all your admin accounts! This massively reduces the likelihood of a beacon being installed on an admin device, making sure this scenario can only happen on end-user devices and reducing the impact to reconnaissance and leakage of the data the end users have access to.

If data leakage that can happen via a normal user account has a high to critical impact on your organization, and you estimate the likelihood of this happening as likely to occur, I recommend adding the extra data sources needed (SSE solution and Sysmon) to close down the detection gaps discussed.