Exchange 2010 and Constrained Language Mode

  • Context

I’ve automated user provisioning on Windows 7 computers, and mailboxes were automatically created on Exchange 2010.
We’ve recently started a Windows 10 migration project where we use AppLocker in whitelist mode and have PowerShell running in Constrained Language Mode.

  • Issue

We had AppLocker on all our Windows 7 endpoints and it was more or less permissive. It allowed us to have PowerShell running in Full Language Mode.
To create mailboxes, the script imported the Exchange cmdlets from a remote session:

$s = New-PSSession -ConnectionUri http://servername.fqdn/PowerShell/ `
-ConfigurationName Microsoft.Exchange
Import-PSSession -Session $s

Now, on Windows 10 with Constrained Language Mode, importing the session just failed with the following message:
Import-PSSession : Index was out of range. Must be non-negative and less than the size of the collection.

  • Solution

Being short on time, I copied all the Active Directory and Exchange cmdlets we were using into a script block with parameters.
My straightforward solution was to replace the Import-PSSession cmdlet with Invoke-Command:

Invoke-Command -Session $s -ScriptBlock $sb `
-Argumentlist $User,$Store

But it also failed 😦
The error was: A Begin statement block, Process statement block, or parameter statement is not allowed in a Data section
I had just forgotten the fact that the Exchange remote endpoint only exposes Exchange cmdlets, nothing else.
There’s, for example, no Get-Random cmdlet and no Active Directory module loaded into that Exchange remote configuration.
My script block was actually born to fail. My bad, oops 🙄

I jumped onto another solution.
I can use Get-Random and other Active Directory cmdlets on the client, but I’ll need to execute each and every Exchange cmdlet in a very simple script block without using any pipeline…

To get a better idea of what I did, here are some examples:

Example 1:

$DB = Invoke-Command -Session $session -ScriptBlock `
{ Get-MailboxDatabase -ErrorAction SilentlyContinue } |
Where-Object { $_.Server -match "$($ServerPrefixName)" }

I have hashtables defined on the client and pass them to the remote session using the magic $using: scope modifier 😀

Example 2:

Invoke-Command -Session $session -ScriptBlock `
{ Set-Mailbox "DomainName\$($using:UserName)" @using:MailboxQuota @using:extraparams}
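For completeness, the hashtables splatted above with @using: could look like the sketch below; the parameter names and quota values are hypothetical, not taken from the original script.

```powershell
# Hypothetical client-side hashtables, splatted into the remote script block via @using:
$MailboxQuota = @{
    IssueWarningQuota        = '900MB'
    ProhibitSendQuota        = '950MB'
    UseDatabaseQuotaDefaults = $false
}
$extraparams = @{
    EmailAddressPolicyEnabled = $true
}
```

Inside the remote script block, @using:MailboxQuota expands these keys as parameters of Set-Mailbox, so the whole call still runs as one simple statement on the Exchange endpoint.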

  • Bonus: more on Constrained Language Mode

Not all PowerShell shells are equal

I’ve been recently involved in fixing 2 issues for my colleagues.

  • The first issue
    • Context
    • My colleagues send a message with a link that points to a script located on a shared drive, to help our users reinstall their software.
      Our users just click on the link in Outlook and get a message saying:
      \\servername.fqdn\share\softwarename\install.ps1 cannot be loaded because running scripts is disabled on this system. For more information, see about_Execution_Policies at https:/

    • Issue
    • Users run Outlook, which is a 32-bit process. If they click on a link that points to a script, it spawns a 32-bit console and runs a 32-bit powershell.exe child process.
      It appears that the ExecutionPolicy isn’t defined in the 32-bit PowerShell and is set to its default value, “Restricted”, although it is defined in the 64-bit PowerShell.
      Needless to say, you cannot run a script with a Restricted execution policy.

    • Solution
    • While there are many ways to solve this issue, we’ve decided to address it when computers are provisioned. The post-install of a workstation runs a 64-bit PowerShell script where we’ve just added:

      C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe { Set-ExecutionPolicy -ExecutionPolicy 'RemoteSigned' -Force -Scope LocalMachine }

      The above solution just writes the missing ExecutionPolicy value under this registry key: HKLM:\SOFTWARE\WOW6432Node\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell

  • The second issue
    • Context
    • We have a short, quick-and-dirty Pester test to perform some operational validation of our configuration. We decided to add a quick test of the execution policy value for the 32-bit PowerShell. But other tests failed, which was unexpected because they don’t fail when executed in a 64-bit PowerShell console.
      The error message thrown was:
      CommandNotFoundException: The term ‘Get-LocalGroupMember’ is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

    • Issue
    • As you can see, there isn’t any Microsoft.PowerShell.LocalAccounts module in the 32-bit modules location. It means that you cannot use the Get-LocalGroupMember cmdlet in a 32-bit PowerShell console.

      I started to compare the module names like this:

      Compare-Object -ReferenceObject (dir $PSHOME\Modules -Directory).Name `
      -DifferenceObject (
       dir "$($PSHOME -replace 'system32','syswow64')\Modules" -Directory
      ).Name

      Yes, on my 1803, this is the list of 64-bit only modules:

      • AppBackgroundTask
      • AssignedAccess
      • ConfigCI (or CIPolicy?)
      • HgsClient
      • Microsoft.PowerShell.LocalAccounts
      • NetworkSwitchManager
      • PcsvDevice
      • PersistentMemory
      • ProcessMitigations
      • PSWorkflow
      • PSWorkflowUtility
      • SmbShare
      • SmbWitness
      • StartLayout
      • WindowsSearch
      • WindowsUpdateProvider
    • Solution
    • Well, it depends how far you want to go. I’ve chosen to execute Pester tests only in a 64-bit PowerShell console and do the following:

       It 'WMF local machine 32-bit execution policy should be set to RemoteSigned' {
         (Get-ItemProperty -Path 'HKLM:\SOFTWARE\WOW6432Node\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell' `
         -Name 'ExecutionPolicy' -ErrorAction SilentlyContinue
         ).'ExecutionPolicy' -eq 'RemoteSigned' |
         Should Be $true
       }

      The above registry key is only visible when the test is executed in a 64-bit shell.

      Who executes Pester tests in a 32-bit shell on a 64-bit OS?
      My unit test above doesn’t handle the 32-bit case gracefully and will actually throw an error if it’s executed in a 32-bit shell.
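If you did want to handle that case, a minimal sketch (assuming Pester v4, where It accepts a -Skip parameter) could simply skip the test when the process isn’t 64-bit:

```powershell
# Sketch: skip this test when not running in a 64-bit process (assumes Pester v4)
It 'WMF local machine 32-bit execution policy should be set to RemoteSigned' -Skip:(-not [Environment]::Is64BitProcess) {
    (Get-ItemProperty -Path 'HKLM:\SOFTWARE\WOW6432Node\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell' `
    -Name 'ExecutionPolicy' -ErrorAction SilentlyContinue
    ).'ExecutionPolicy' -eq 'RemoteSigned' |
    Should Be $true
}
```

[Environment]::Is64BitProcess is $false in a 32-bit powershell.exe even on a 64-bit OS, so the test is reported as skipped instead of throwing.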

About the Turkish-I Problem

  • Context

We’ve got some computers located in Turkey. My colleagues started to execute some PowerShell code in the end-user context/session/environment.
They had the following code and noticed that the regular expression in the if statement doesn’t match when the culture is set to tr-TR:

(Get-Content -Path $FilePath -ReadCount 1 -Encoding UTF8) | 
ForEach-Object {
 $line = $_
 if ($line -match "^([A-Z0-9_]+)=(.*)$") {
  $k = $Matches[1]
  $v = $Matches[2]
  if ($Keys -contains $k) {
   $myData.$k = $v.ToString().Trim()
  } elseif ($WarnIfUnknown) {
   Write-Warning "Unknown ini key '$k' in file '$FilePath'"
  }
 } else {
  Write-Warning "Wrong line '$line' in file '$FilePath'"
 }
}

  • Problem

What happens is actually well documented on this page:

By default, when the regular expression engine performs case-insensitive comparisons, it uses the casing conventions of the current culture to determine equivalent uppercase and lowercase characters.

However, this behavior is undesirable for some types of comparisons, particularly when comparing user input to the names of system resources, such as passwords, files, or URLs. The following example illustrates such a scenario. The code is intended to block access to any resource whose URL is prefaced with FILE://. The regular expression attempts a case-insensitive match with the string by using the regular expression $FILE://. However, when the current system culture is tr-TR (Turkish-Turkey), “I” is not the uppercase equivalent of “i”. As a result, the call to the Regex.IsMatch method returns false, and access to the file is allowed.


Here’s the same example in PowerShell to showcase what happens:

NB: It uses a function named Using-Culture that you can find on this page.
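The documented .NET example transposes to PowerShell roughly as follows; the input path is a made-up example and Using-Culture comes from the page linked above:

```powershell
# Sketch: case-insensitive match under the tr-TR culture (input path is hypothetical)
Using-Culture -Culture 'tr-TR' -ScriptBlock {
    [regex]::IsMatch('file://server/share/report.txt', 'FILE://', 'IgnoreCase')
}
# Under tr-TR this returns False: 'I' is not the uppercase equivalent of 'i'
```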

  • Solution

The solution is provided on the same page that documents the above problem:

Instead of using the case-insensitive comparisons of the current culture, you can specify the RegexOptions.CultureInvariant option to ignore cultural differences in language and to use the conventions of the invariant culture.

Using-Culture -Culture 'tr-TR' -ScriptBlock {
 # 513 = IgnoreCase (1) + CultureInvariant (512); input path is a made-up example
 [regex]::IsMatch('file://server/share/report.txt', 'FILE://', 513)
}

As you can see, it now returns True instead of False.

How could my colleagues solve their issue?
Well, they should use the RegexOptions.CultureInvariant option to ignore cultural differences when they perform their regular expression match.
They can no longer use the -match operator and have it populate the $Matches automatic variable when the input is scalar.
Instead of the -match operator, they should use the .NET [regex] class, which lets them specify the RegexOptions.CultureInvariant option:

(Get-Content -Path $FilePath -ReadCount 1 -Encoding UTF8) | 
ForEach-Object {
 $line = $_
 # 513 = IgnoreCase (1) + CultureInvariant (512)
 if ([regex]::Matches($line,'^([A-Z0-9_]+)=(.*)$',513)) {

  $k,$v = [regex]::Matches($line,'^([A-Z0-9_]+)=(.*)$',513) | 
  Select-Object -ExpandProperty Groups | 
  Select-Object -Last 2 -ExpandProperty Value
  if ($Keys -contains $k) {
   $myData.$k = $v.ToString().Trim()
  } elseif ($WarnIfUnknown) {
   Write-Warning "Unknown ini key '$k' in file '$FilePath'"
  }
 } else {
  Write-Warning "Wrong line '$line' in file '$FilePath'"
 }
}

PowerShell Conference Book #PSConfBook

Have you ever been tasked to remove admin privileges to your users or asked to implement a least privilege approach?

Removing User Admin Rights Mitigates 94% of All Critical Microsoft Vulnerabilities.
I wouldn’t be as assertive as this headline about the percentage. I prefer to say instead that removing admin rights reduces the attack surface and makes it far more likely that your computers will resist a 0-day.

Removing admin rights is for sure a recommended best practice:

Restrict users’ permissions to install and run software applications, and apply the principle of “least privilege” to all systems and services. Restricting these privileges may prevent malware from running or limit its capability to spread through a network.


source: The above slide is from Ivanti

Have you ever seen anything else than just recommendations and guidelines about how to implement a least privilege strategy?
Have you been able to locate any detailed starting guide about this topic?

I propose a basic, detailed least-privilege implementation example in a chapter of the PowerShell Conference Book.

Are you looking for more good reasons to buy this book?

I’d like to personally thank Mike F Robbins, Michael T. Lombardi and Jeff Hicks.

Remove a DSC config

  • Context:

I’ve recently setup a Desired State Configuration (DSC) configuration on a computer that had the Hyper-V role installed.
The DSC configuration was supposed to apply once and reboot the computer once done.

  • Problem:

I was using a Script resource but I failed to make it bulletproof.
The Test part of the Script resource always failed and returned false when the Hyper-V role was present.
It created a reboot loop.
I had to find a quick way to stop the DSC configuration from applying and remove it.

  • Solution:
Stop-DscConfiguration -Force -Verbose
Remove-DscConfigurationDocument -Stage Current,Pending -Force -Verbose
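For reference, a more robust Script resource for this scenario might look like the sketch below; the resource name and the feature check are illustrative, not the original configuration.

```powershell
# Hypothetical sketch: a Script resource whose TestScript is idempotent,
# so it returns $true once Hyper-V is enabled and avoids a reboot loop
Script EnableHyperV {
    GetScript  = {
        @{ Result = (Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V).State }
    }
    TestScript = {
        (Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V).State -eq 'Enabled'
    }
    SetScript  = {
        Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All -NoRestart
        $global:DSCMachineStatus = 1 # request the single reboot from the LCM
    }
}
```

The key point is that TestScript checks the actual state of the role, so once the role is enabled the resource stops asking for a reboot.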

DateTime conversion

Recently a colleague of mine asked me why he couldn’t get the correct date when he did this:

(Get-ADUser $UserName -Properties LastLogon).LastLogon | Get-Date

Well, some properties are stored as a 64-bit integer in Active Directory:

Like many other properties found in Active Directory, these 64bit integers represent the number of 100-nanosecond intervals since January 1, 1601 (UTC)

Then why can’t I pipe a 64-bit integer into the Get-Date cmdlet and get the correct date?
After all, I can pipe a string and immediately get the correct date:

'31/12/2018' | Get-Date

The Get-Date cmdlet will treat the input as a datetime object.
Both the help of the Get-Date cmdlet and its source code show it.

Why doesn’t the datetime type correctly convert the Active Directory 64-bit integer?
The datetime object has many constructors.

[System.DateTime].GetConstructors().GetParameters() | 
Select -Unique | Select Name,Member

The DateTime(Int64) constructor documented on this page says it uses “a date and time expressed in the number of 100-nanosecond intervals that have elapsed since January 1, 0001 at 00:00:00.000 in the Gregorian calendar.”

Fortunately, there’s another method documented that deals with ticks elapsed since 1/1/1601 and not 1/1/0001.
The DateTime.FromFileTime Method (Int64) documented on this page says:

A Windows file time expressed in ticks.
A Windows file time is a 64-bit value that represents the number of 100-nanosecond intervals that have elapsed since 12:00 midnight, January 1, 1601 A.D. (C.E.) Coordinated Universal Time (UTC). Windows uses a file time to record when an application creates, accesses, or writes to a file.

This means that I can get the correct date if I do:

[datetime]::FromFileTime((Get-ADUser $UserName -Properties LastLogon).LastLogon)
# or
(Get-ADUser $UserName -Properties LastLogon).LastLogon |
ForEach-Object { [datetime]::FromFileTime($_) }
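To see both interpretations side by side, here’s a small sketch using a sample LastLogon value:

```powershell
# The same 64-bit value interpreted two ways
$int64 = 131787092608430925
[datetime]$int64                 # DateTime(Int64) constructor: ticks since 0001-01-01 -> a date around year 418
[datetime]::FromFileTime($int64) # file time: ticks since 1601-01-01 (UTC) -> a date in 2018, converted to local time
```

Same number, a difference of exactly 1600 years: that gap is why piping the raw integer into Get-Date looks so wrong.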

OK, there’s still some magic happening behind the scenes. How do I know what the PowerShell engine does? When does it use a method or a constructor?
There’s an excellent post about this topic on the PowerShell Team blog:
Understanding PowerShell’s Type Conversion Magic

Trace-Command -Expression {
 '31/12/2018' | Get-Date
} -PSHost -Name TypeConversion

Trace-Command -Expression {
 [int64]'131787092608430925' | Get-Date
} -PSHost -Name TypeConversion

Issue with PowerShell Remote Endpoints after a Windows 10 Upgrade

  • Context:

I’ve tested recently an upgrade to the latest branch 1803 (aka Spring Creators Update / RS4) of a Windows 10 Enterprise x64 that had a custom remote endpoint configuration.

  • Issue:

Everything was still there. It was still registered, its ACL still set… except that it didn’t work.
Any attempt to create a new PSSession, use Enter-PSSession, or Invoke-Command failed with the same error message:

New-PSSession : [localhost] Connecting to remote server localhost failed with the following error message : The WS-Management service cannot process the operation. An attempt to create a virtual account failed. Ensure that WinRM service is running as Local System and that it has TCB privilege enabled. For more information, see the about_Remote_Troubleshooting Help topic.

TCB means Trusted Computing Base. The WinRM service seems to have lost its SeTcbPrivilege (the ability to act as part of the operating system) and cannot create the virtual account used by the remote endpoint.

If I check the service on a 1709, I have:

If I check the service on a 1709 upgraded to a 1803, I have:
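You can query the privileges the WinRM service is configured to require on both builds like this:

```powershell
# Query the privileges required by the WinRM service
sc.exe qprivs WinRM
# or read them straight from the registry
(Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\WinRM' `
 -Name 'RequiredPrivileges').RequiredPrivileges
```

On the upgraded 1803, SeTcbPrivilege is missing from the returned list.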

  • Solution 1:

Unregister the custom endpoint and register it again.
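A sketch of what that looks like, assuming the custom endpoint is named 'MyEndpoint' and was built from a session configuration file (both names are hypothetical):

```powershell
# Re-create the custom endpoint (endpoint and file names are hypothetical)
Unregister-PSSessionConfiguration -Name 'MyEndpoint' -Force
Register-PSSessionConfiguration -Name 'MyEndpoint' -Path '.\MyEndpoint.pssc' -Force
```

Register-PSSessionConfiguration restarts the WinRM service as part of the registration.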

  • Solution 2:

There are two ways to restore the TCB privilege: either using sc.exe or by setting the RequiredPrivileges REG_MULTI_SZ value in the registry.

# using sc.exe to restore the privileges
sc.exe privs WinRM SeAssignPrimaryTokenPrivilege/SeAuditPrivilege/SeChangeNotifyPrivilege/SeCreateGlobalPrivilege/SeImpersonatePrivilege/SeTcbPrivilege
# or directly setting the value in the registry
$HT = @{
 Path = 'HKLM:\SYSTEM\CurrentControlSet\Services\WinRM'
 Name = 'RequiredPrivileges'
 Value = @(
  'SeAssignPrimaryTokenPrivilege','SeAuditPrivilege','SeChangeNotifyPrivilege',
  'SeCreateGlobalPrivilege','SeImpersonatePrivilege','SeTcbPrivilege'
 )
}
Set-ItemProperty -Type MultiString @HT -Force

# Restore the local system account
Get-CimInstance -ClassName Win32_Service -Filter "Name='WinRM'"| 
Invoke-CimMethod -MethodName Change -Arguments @{StartName='LocalSystem'}

# Apply changes
Restart-Service -Name WinRM

  • Misc.:

Can you notice any other difference left by the upgrade?

The DisplayName was changed and the ServiceType was changed from 0x10 (it has its own process) to 0x20 (it’s a shared process).
Don’t ask me why the upgrade changed these settings 🙄 I have no idea.

Bonus: Solution 1 is better than Solution 2 because it also fixes the service type.