2013 Scripting Games Event 1

This year, the Scripting Games events have titles, there will be a total of only 6 events, and you get 5 working days to submit each entry before a 5-day voting period begins. Yeah, this year each event goes “social” 🙂

Event 1 is entitled: An Archival Atrocity.

  • Step 1: read the instructions as many times as required

Beginner 1 event instructions
Advanced 1 event instructions

  • Step 2: create the conditions to test your solution for event 1

For this first event, the preparation step is crucial and is also an opportunity to learn a few things.

    • Create the folders tree

First I need to create the folder structure under C:\Application\Log.
To do this at a command prompt, I’d type md.
In PowerShell, md is actually an alias of mkdir, and mkdir itself is a proxy function that calls the New-Item core cmdlet behind the scenes.

Get-Content Function:\mkdir


In other words, by getting the content of the mkdir function, we see that it uses the New-Item cmdlet with a “Type” parameter.
But when you look at the help of the New-Item cmdlet, you don’t see a parameter named “Type”. The only one that matches what “Type” is supposed to do is “ItemType”.
After digging for a few seconds, it turns out that the help doesn’t mention that the “ItemType” parameter has “Type” as an alias, but the Get-Command cmdlet reveals it:

(Get-Command New-Item).Parameters["ItemType"]


Enough with these intricacies, let’s get the job done and create these folders. I could do:

"App1","ThisAppAlso","OtherApp" | ForEach-Object {            
    New-Item -Path "C:\Application\Log\$_" -Type Directory            
 }

Or just:

"App1","ThisAppAlso","OtherApp"|%{mkdir "C:\Application\Log\$_"}
    • Create some log files

The instructions say that:

the filenames are random GUIDs with a .LOG filename extension.

OK, let’s do it, but first I need to discover the .Net GUID object. Here’s how to generate a random GUID:

[GUID]::NewGuid()

Let’s look at this object by piping it into the Get-Member cmdlet, which will show its properties and methods.

[GUID]::NewGuid() | gm

Now let’s create our first .LOG file

# Create 1 file            
$file = (Join-Path -Path C:\Application\Log\App1 -ChildPath "$([GUID]::NewGuid().ToString()).log")
Get-Date > $file            

OK, we’ve just created a file. Its modification date is today. We also need some files older than 90 days. If we do:

Get-item $file | gm


We can see that the LastWriteTime property of the object can be “set”, i.e., modified.
Let’s change its modification date and look at it:

# Change its last modification date            
(Get-Item $file).LastWriteTime = (Get-Date).AddDays(-90)            
Get-Item $file            

Now I’m ready to create, in a quick and dirty way, some recent files under the subfolders as well as some files older than 90 days, like this:

# Create some sample files            
"App1","ThisAppAlso","OtherApp" | % {            
    $App = $_            
    # Create 10 recent files            
    0..9 | % {            
        $file = (Join-Path -Path "C:\Application\Log\$App" -ChildPath "$([GUID]::NewGuid().ToString()).log")            
        Get-Date | Out-File -FilePath $file            
    }            
    # Create 11 files older than 90 days (the range 10..20 yields 11 items)
    10..20 | % {            
        $file = (Join-Path -Path "C:\Application\Log\$App" -ChildPath "$([GUID]::NewGuid().ToString()).log")            
        Get-Date | Out-File -FilePath $file            
        (Get-Item $file).LastWriteTime = (Get-Date).AddDays(-90)            
    }            
}
  • Step 3: The heart of the solution

If I had had to submit a Beginner entry, I’d have proposed the following one-liner:

Get-ChildItem -Path C:\Application\Log -Directory | ForEach-Object -Process {            
    $Folder = $_            
    $TargetPath = (Join-Path -Path "\\NASServer\Archives" -ChildPath $Folder.Name)            
    # Ensure that the target folder exists before moving files
    If (-not(Test-Path -Path $TargetPath -PathType Container)) {            
        New-Item -Path $TargetPath -ItemType Directory | Out-Null            
    }            
    try {            
        Get-ChildItem -Path "$($Folder.FullName)\*.LOG" -File |             
        Where-Object { $_.LastWriteTime -le (Get-Date).AddDays(-90) } |            
        Move-Item -Destination $TargetPath -ErrorAction Stop            
    } catch {            
        Write-Warning -Message "Failed to move files because $($_.Exception.Message)"            
    }            
}            

Yes, it requires PowerShell version 3.0. I first went with the ‘Recurse’ parameter of the Get-ChildItem (dir) cmdlet, but it doesn’t let you use the power of the pipeline as fully as the code above does: Get-ChildItem, Where-Object, Move-Item (a rough sketch of that first attempt follows the quote below). Without overthinking the event, notice that I’ve also added a quick way to test whether the destination folder exists. Things went wrong when I forgot to create the application folders. It’s actually a requirement, as the instructions state that:

You need to maintain the subfolder structure, so that files from C:\Application\Log\App1 get moved to \\NASServer\Archives\App1, and so forth.
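
By the way, here’s roughly what that first Recurse-based attempt looked like (a rough sketch, not the submitted solution). The destination has to be rebuilt for each file instead of piping everything into a single Move-Item with one -Destination:

# Rough sketch of the Recurse-based first attempt (not submitted)
Get-ChildItem -Path C:\Application\Log -Filter *.LOG -Recurse -File |
Where-Object { $_.LastWriteTime -le (Get-Date).AddDays(-90) } |
ForEach-Object {
    # Rebuild the destination from the parent folder name of each file
    $TargetPath = Join-Path -Path '\\NASServer\Archives' -ChildPath $_.Directory.Name
    If (-not (Test-Path -Path $TargetPath -PathType Container)) {
        New-Item -Path $TargetPath -ItemType Directory | Out-Null
    }
    Move-Item -Path $_.FullName -Destination $TargetPath
}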

Notice also the two new handy “File” and “Directory” switches used along with the Get-ChildItem cmdlet, which make it easy to select only files or only folders.
I didn’t add a second Where-Object filter script to keep only filenames that match GUIDs. If I had thought it was a strong requirement, I’d have added something like:

Where-Object {($_.LastWriteTime -le (Get-Date).AddDays(-90)) -and            
($_.Name -match "[a-f0-9]{8}-([a-f0-9]{4}-){3}[a-f0-9]{12}")            
}

Last thing: Get-ChildItem expects a string as its path, which is why I used the following construction:

"$($Folder.FullName)\*.LOG"
  • Step 4: Create an advanced function

To move from the Beginner solution to the Advanced one, I have to:

    • Find a name for the function that respects the approved verb-noun naming convention, even if “Fix-ArchivalAtrocity” is technically legal.
    • Add a [CmdletBinding()]…
    • Write the parameter block: define all the parameter types, whether they are mandatory or not, whether they have a default value,…
    • Write the help of the function.
    • Enclose the solution in the Begin{}, Process{}, End{} template.

Let’s look at the result and the overhead of the above requirements:

#Requires -Version 3            
            
Function Move-LogFilesToArchive {            
<#

.SYNOPSIS
    Move files with a .LOG filename extension older than n days while preserving the subfolder structure

.DESCRIPTION
    Move files with a .LOG filename extension, older than 90 days by default, from a source path to a destination path while preserving the subfolder structure

.PARAMETER Path
    String that represents the source path where directories containing log files are located.

.PARAMETER Destination
    String that represents the target path where files will be moved to. Subfolders will be created if they don't exist

.PARAMETER Age
    Integer that represents how old files must be, in days; minimum is 0 and maximum is 734982.

.EXAMPLE
    Move-LogFilesToArchive -Path C:\Application\Log -Destination \\NASServer\Archives
    The above command will move *.LOG files older than 90 days from subfolders located in C:\Application\Log to \\NASServer\Archives and preserve subfolder names

.EXAMPLE
    Move-LogFilesToArchive -Path C:\Application\Log -Destination \\NASServer\Archives -Age 180
    The above command will move *.LOG files older than 180 days from subfolders located in C:\Application\Log to \\NASServer\Archives and preserve subfolder names

.EXAMPLE
    Move-LogFilesToArchive -Path C:\Application\Log -Destination "\\NASServer\Archives" -WhatIf
    Using the -Whatif parameter would list the operations the command would perform and the items that would be affected.
    Instead of executing the command, messages that describe the effect of the command are displayed.

.EXAMPLE
    Move-LogFilesToArchive -Path C:\Application\Log -Destination "\\NASServer\Archives" -Confirm
    Using the -Confirm parameter would list the operations the function would perform and prompts you for confirmation before executing the command.

#>
            
[CmdletBinding(ConfirmImpact="Low",SupportsShouldProcess)]            
Param(            
    [parameter(mandatory)]            
    [ValidateNotNullOrEmpty()]            
    [ValidateScript({Test-Path -Path $_ -PathType Container})]            
    [string]$Path,            
            
            
    [parameter(mandatory)]            
    [ValidateNotNullOrEmpty()]            
    [ValidateScript({Test-Path -Path $_ -PathType Container})]            
    [string]$Destination,            
                
    [parameter()]            
    [ValidateRange(0,734982)]            
    [int]$Age=90            
)            
Begin {}            
Process {            
    Get-ChildItem -Path $Path -Directory | ForEach-Object -Process {            
        $Folder = $_            
        $TargetPath = (Join-Path -Path $Destination -ChildPath $Folder.Name)            
        # Ensure that the target folder exists before moving files
        If (-not(Test-Path -Path $TargetPath -PathType Container)) {            
            try {            
                New-Item -Path $TargetPath -ItemType Directory -ErrorAction Stop | Out-Null            
            } catch {            
                Write-Warning -Message "Failed to create the directory $TargetPath because $($_.Exception.Message)"            
                break            
            }            
        }            
        try {            
            # Use the new File switch to avoid directories            
            Get-ChildItem -Path "$($Folder.FullName)\*.LOG" -File |             
            Where-Object { $_.LastWriteTime -le (Get-Date).AddDays(-$Age) } |            
            Move-Item -Destination $TargetPath -ErrorAction Stop            
        } catch {            
            Write-Warning -Message "Failed to move files because $($_.Exception.Message)"            
        }            
    }            
}            
End{}            
}

I’m not very happy with my function name but it’s better than my first idea.

My function has two cmdlets that take action: New-Item and Move-Item. As I fully use the power of the pipeline and these two cmdlets support the -WhatIf and -Confirm parameters, I can add the SupportsShouldProcess and ConfirmImpact arguments to the CmdletBinding attribute.
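
If I wanted the confirmation prompt to come from my function itself rather than from New-Item and Move-Item, I could also have wrapped the action in an explicit $PSCmdlet.ShouldProcess() call. A minimal sketch, not what I submitted, assuming it sits inside the Process block where $Folder, $Age and $TargetPath are defined:

# Minimal sketch: let the function itself honour -WhatIf/-Confirm via ShouldProcess
If ($PSCmdlet.ShouldProcess($Folder.FullName,"Move *.LOG files older than $Age days to $TargetPath")) {
    Get-ChildItem -Path "$($Folder.FullName)\*.LOG" -File |
    Where-Object { $_.LastWriteTime -le (Get-Date).AddDays(-$Age) } |
    Move-Item -Destination $TargetPath
}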

Instead of sticking to my old habits and validating the paths passed as parameters in the Begin block, I decided to experiment and go with the ValidateScript attribute: 2 lines instead of this code:

if (-not(Test-Path -Path $Destination -PathType Container)) {            
    Write-Warning -Message "$Destination isn't available or path isn't a directory"            
    break            
}            
if (-not(Test-Path -Path $Path -PathType Container)) {            
    Write-Warning -Message "$Path isn't available or path isn't a directory"            
    break            
}

Last thing: you may have noticed a strange upper limit for the ValidateRange attribute. It actually represents the number of days between the 25th of April and the lowest date that can be represented by the console. As you can see, it’s a huge number that should cover the vast majority of use cases.

Look at what happens when I add the maximum integer to the current date

(Get-Date).AddDays(([int32]::MaxValue))


Exception calling “AddDays” with “1” argument(s): “Value to add was out of range.”
I’ve already figured out the biggest date that can be displayed by the console in this blog post. As I’ll subtract days, I need to figure out what the smallest date could be. Well, it’s:

(Get-Date -Year 0001 -Day 1 -Month 1 -Hour 0 -Minute 0 -Second 0 -Millisecond 0)


Exception calling “AddMilliseconds” with “1” argument(s): “The added or subtracted value results in an un-representable DateTime.
Finally, here’s how I came up with 734982:

New-TimeSpan -Start (Get-Date -Year 0001 -Day 1 -Month 1 -Hour 0 -Minute 0 -Second 0 -Millisecond 0) -End (Get-Date)
  • Step 5: Test the code

I tested my code on a Windows 7 box with PowerShell 3.0 as well as on a Windows 8 box. Coding on one computer, then executing and testing the code on another vanilla computer sounds like good practice.
When testing the code, I also switch strict mode to its strictest version so that I can see whether the code I wrote respects good practices.

Set-StrictMode -Version latest

The only problem I found when testing the code is the following. I tested the code on a vanilla computer and forgot to create the target directories. I also did something stupid when testing the -Confirm mode: I answered ‘No’ to the target-folder creation performed by the New-Item cmdlet and ‘Yes’ to the following Move-Item prompts. Stupid, isn’t it? When I relaunched the function, I got an error from the New-Item cmdlet that wasn’t handled. That’s why I chose to enclose the New-Item cmdlet in a try/catch block.

  • Conclusion

This exercise was a great learning experience but, you know, in real life I would have used a very pragmatic solution like this one:

robocopy C:\Application\Log \\NASServer\Archives *.LOG /S /r:0 /MOV /MINAGE:90 >NUL 2>&1

If it was a brain-teaser for the shortest one-liner, PowerShell wouldn’t win 😎

The nasty [char]160

  • Context

There’s another strange thing about the Gamarue piece of malware: it creates a hidden system folder whose name is [char]160.

  • Steps to create the problem:
    • Open the explorer
    • Right-click, select ‘New’ then ‘Folder’
    • Press ALT and then enter the 4 digits 0160 and hit Enter

    Now we have a folder whose name looks either empty or like white space.
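
If you’d rather create the test folder from PowerShell than from Explorer, here’s a quick sketch. I’m using the root of C:\ as in the rest of this post, which requires an elevated prompt; adapt the location to taste:

# Create a folder whose name is the single [char]160 character (a non-breaking space)
$folder = New-Item -Path ('C:\' + [char]160) -ItemType Directory
# Give it the hidden and system attributes, like the malware does
$folder.Attributes = $folder.Attributes -bor [IO.FileAttributes]'Hidden,System'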

  • How PowerShell helps us understand what we did above
    • It confirms we have the same “view” as the Explorer
    • Get-ChildItem .\

    • The folder name is [char]160 and is correctly captured by the Get-ChildItem cmdlet:
      (Get-ChildItem .\)[-1].FullName.GetEnumerator()|%{            
      '{0}->{1}' -f $_,[int][char]$_}

    • Notice that [char]160 isn’t listed in the invalid characters list of the .Net system.IO.Path class
      [IO.Path]::GetInvalidPathChars()|%{            
       '{0}->{1}' -f $_,[int][char]$_ }

  • How would we have solved this in the old DOS days

We just launch a DOS console and rename the folder using path completion, making sure to enclose the path in quotes. That’s it.
We could also have typed ALT+0160 between the quotes instead of using path completion. We didn’t even need to change the code page to 1251 or 10002.

  • What works in PowerShell

The Get-ChildItem cmdlet partially works: it’s able to output a DirectoryInfo object, but it can’t be used with its “Name” parameter.
The Join-Path and Resolve-Path cmdlets also work.
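
For instance, something like this should work (a sketch, assuming the folder sits directly under C:\ as in the rest of this post):

# Both cmdlets accept the [char]160 name without complaining
Join-Path -Path 'C:\' -ChildPath ([char]160)
Resolve-Path -Path ('C:\' + [char]160)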

Casting the string ‘C:\<ALT+0160>\’ into the .Net DirectoryInfo class also works:

[IO.DirectoryInfo]('C:\ \')

Using the methods of the .Net IO.Directory class responsible for enumerating files and/or directories also works if you append a ‘\’ after the [char]160:

[IO.Directory]::EnumerateFiles('C:\ \')            
[IO.Directory]::EnumerateDirectories('C:\ \')            
[IO.Directory]::EnumerateFileSystemEntries('C:\ \')

Using the New-PSDrive cmdlet also works.
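
For example, mapping a temporary PowerShell drive rooted on the folder (a sketch, same location assumption as above):

# Map a PowerShell drive whose root is the [char]160 folder, then list it
New-PSDrive -Name Nasty -PSProvider FileSystem -Root ('C:\' + [char]160)
Get-ChildItem -Path Nasty:\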

  • What are the limits of PowerShell

The Set-Location cmdlet and its alias cd don’t work.

We can examine the automatic $StackTrace variable, which contains a stack trace for the most recent error.

The Rename-Item cmdlet and its alias ren don’t work.
Trying the move-based methods of the .Net Directory and DirectoryInfo classes doesn’t work either.
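
Here’s roughly what those failing attempts look like (a sketch; each line throws instead of renaming or moving the folder, and the exact error text may vary):

# None of these manage to rename or move the [char]160 folder
Rename-Item -Path ('C:\' + [char]160) -NewName 'RenamedFolder'
[IO.Directory]::Move(('C:\' + [char]160), 'C:\RenamedFolder')
([IO.DirectoryInfo]('C:\' + [char]160 + '\')).MoveTo('C:\RenamedFolder')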

Another technique I found on JayKul‘s web site, http://huddledmasses.org/powershell-power-user-tips-current-directory/, works with a subfolder name.

  • Final word

Although I should have done it, I chose not to file a bug, as I found in the documentation of the .Net IO.Directory Move method the following text that made me think the developers are aware of the above limits:

ArgumentException
sourceDirName or destDirName is a zero-length string, contains only white space, or contains one or more invalid characters as defined by InvalidPathChars.

[char]160 -match "\s"

System Center Configuration Manager Remote Control ACL issue

  • Context

I’m currently deploying the System Center Configuration Manager 2012 SP1 agent over workstations that previously had the SCCM 2007 client version 4.00.6487.2000.

  • Symptoms

Helpdesk operators say that the remote control tool launches, but the client doesn’t see the window asking them to authorize or deny the helpdesk operator’s access.

It looks like all the processes involved in remote control are correctly launched on the client:

gwmi win32_process | ? { $_.Commandline -match "CCM" } |  fl Commandline

The remote control window of the helpdesk operator doesn’t time out.

  • Analysis

Neither restarting the ConfigMgr services nor running the CCMEval task (Configuration Manager Health Evaluation) fixes the issue.
It’s time to dig into the numerous log files of the ConfigMgr client:

Get-ChildItem C:\Windows\CCM\ -Include *.log -Recurse | Select-String -Pattern "SOFTWARE\\Microsoft\\SMS"

Using CMTrace.exe to read the logs, I can see that the log file C:\windows\ccm\Logs\CmRcService.log contains the following errors:
Failed to open registry key Software\Microsoft\SMS\Client\Client Components\Remote Control\SessionStatus (0x80070005) for reading

Failed to open registry key ‘Software\Microsoft\SMS\Client\Client Components\Remote Control’. Permissions on the requested may be configured incorrectly.
Access is denied. (Error: 80070005; Source: Windows)

The log file C:\windows\ccm\Logs\CcmExec.log contains the following additional info related to the issue:
Failed to query size of Security (may not exist) (0x80070002)
Failed to get SID for ConfigMgr Remote Control Users (0x80070534)

  • Cause

If I compare a workstation that doesn’t have the issue with one that does, I can see that the registry permissions on HKLM\SOFTWARE\Microsoft\SMS are not inherited from the parent. The BUILTIN\Users access control entry is actually missing, which causes the issue.

Here’s what I can see with PowerShell on a workstation whose registry permissions are not set correctly:

$acl = (Get-ACL HKLM:\SOFTWARE\Microsoft\SMS)            
$acl.Access            
$acl.Sddl



The corresponding SDDL looks like: O:BAG:SYD:P(A;OICI;KA;;;SY)(A;OICI;KA;;;BA)

  • Solution

Although I don’t have time to file a bug or to understand the root cause, I came up with the following quick fix.

if (Test-Path HKLM:\SOFTWARE\Microsoft\SMS) {            
    $acl = (Get-ACL HKLM:\SOFTWARE\Microsoft\SMS)            
            
    # SetAccessRuleProtection(bool isProtected, bool preserveInheritance)            
    # Allow inheritance            
    $acl.SetAccessRuleProtection($false,$false)            
                    
    # Enumerate all Access rules that are not inherited            
    # GetAccessRules(bool includeExplicit, bool includeInherited, type targetType)}            
    $acl.GetAccessRules($true,$false,[System.Security.Principal.NTAccount]) | ForEach-Object -Process {            
        # RemoveAccessRule(System.Security.AccessControl.RegistryAccessRule rule)            
        $acl.RemoveAccessRule($_) | Out-Null            
    }            
    # Set the ACL            
    try {            
        Set-Acl HKLM:\SOFTWARE\Microsoft\SMS -AclObject $acl -ErrorAction Stop            
    } catch {            
        Write-Warning -Message "Failed to restore inherited ACL from parent because $($_.Exception.Message)"            
    }                    
}

Note that the 2nd parameter of the .Net SetAccessRuleProtection method (preserveInheritance) is ignored if the first (isProtected) is set to false. More on the following MSDN page.

As I have enabled PowerShell remoting, I can also run this fix on a remote computer by simply passing the above code as a script block to the Invoke-Command cmdlet.
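Something along these lines, for instance (a sketch; the computer name is just a placeholder):

# Push the same ACL fix to a remote workstation over PowerShell remoting
Invoke-Command -ComputerName WORKSTATION01 -ScriptBlock {
    if (Test-Path HKLM:\SOFTWARE\Microsoft\SMS) {
        $acl = (Get-ACL HKLM:\SOFTWARE\Microsoft\SMS)
        # Re-enable inheritance from the parent key
        $acl.SetAccessRuleProtection($false,$false)
        # Remove the explicit (non-inherited) access rules
        $acl.GetAccessRules($true,$false,[System.Security.Principal.NTAccount]) | ForEach-Object {
            $acl.RemoveAccessRule($_) | Out-Null
        }
        Set-Acl HKLM:\SOFTWARE\Microsoft\SMS -AclObject $acl
    }
}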
Did I already say that PowerShell rocks! 😎

Extending a system drive volume

My System Center Configuration Manager 2012 SP1 server is a VM running on Hyper-V 3.0. I started the VM with a 50GB dynamic disk and installed almost everything in the default location. But after a few weeks, the free space on the system partition decreased to 9GB and ConfigMgr started logging warnings and alerts.

Basically I had 3 options: I could either

  • move some components (SQL,…) to my second drive, which has 30GB of free space
  • or decrease the thresholds so that ConfigMgr stops logging warnings
  • or extend the partition of the system drive

I chose the latter because moving the SQL database can have some undesired consequences. Although it doesn’t apply to ConfigMgr 2012 SP1, I could have had the following issue: After moving the System Center 2012 Configuration Manager SQL Site Database to another drive, creating a new Software Update package or a new application fails

Here’s what I did

  • Backup the VM
  • $BackupShare = "\\mytarget.fqdn\myBackupShare$"            
    $cred = (Get-Credential)            
    $pol = New-WBPolicy            
    Get-WBVirtualMachine | ? VMName -eq "myCM2012ServerName" | Add-WBVirtualMachine -Policy $pol            
    $targetvol = New-WBBackupTarget -NetworkPath $BackupShare -Credential $cred -NonInheritAcl:$false            
    Add-WBBackupTarget -Policy $pol -Target $targetvol            
    Set-WBSchedule -Policy $pol -Schedule ([datetime]::Now.AddMinutes(10))            
    Start-WBBackup -Policy $pol

    If you want to read more on how to backup a VM in Hyper-V 3.0 you can read my post https://p0w3rsh3ll.wordpress.com/2013/03/06/backup-hyper-v-with-powershell/

  • Disable the replication and make sure there’s no snapshot (a sketch of these pre-checks appears after this list)

  • Do not perform actions (resize, shrink, convert,…) on virtual hard disk associated with a virtual machine that has snapshots, has replication enabled, or associated with a chain of differencing virtual hard disks. Otherwise, data loss is likely to occur.

  • Stop the running VM because of the following warning

  • Resize-VHD is an offline operation; the virtual hard disk must not be attached when the operation is initiated.

  • Detach the virtual hard disk from the VM
  • I only have one IDE drive on my VM so I can do:

    Get-VMHardDiskDrive -VMName "myCM2012ServerName" -ControllerType IDE |             
      Remove-VMHardDiskDrive
  • Resize the disk: add 15GB more
  • Resize-VHD -Path D:\VM\HDD\myVMName_DRIVE_C.vhdx -SizeBytes 65GB
  • Attach the drive back to the VM
  • Add-VMHardDiskDrive -VMName "myCM2012ServerName" -ControllerType IDE -ControllerNumber 0 -ControllerLocation 0 -Path D:\VM\HDD\myVMName_DRIVE_C.vhdx
  • Start the VM
  • Extend the C: (system drive) inside the VM
  • Although I could have done it with the following PowerShell cmdlets,

    Resize-Partition -DiskNumber 0 -PartitionNumber 2 -Size (
    Get-PartitionSupportedSize -DiskNumber 0 -PartitionNumber 2).SizeMax

    I did it with the good old diskpart command.

    I’ve also filed a documentation bug, as the last example of Resize-Partition uses a MaximumSize property instead of the SizeMax property returned by the Get-PartitionSupportedSize cmdlet.

  • Enable replication
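
For the replication/snapshot pre-checks referenced earlier in this list, here’s the kind of commands involved (a sketch with the Hyper-V module cmdlets and the same VM name placeholder as above, not necessarily the exact commands I ran):

# Make sure the VM has no snapshots before resizing
Get-VMSnapshot -VMName "myCM2012ServerName"
# Check the replication state and remove the relationship if needed
Get-VMReplication -VMName "myCM2012ServerName"
Remove-VMReplication -VMName "myCM2012ServerName"
# Stop the VM before the offline resize, start it again afterwards
Stop-VM -Name "myCM2012ServerName"
Start-VM -Name "myCM2012ServerName"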

Total downtime: a few minutes (I didn’t count), and my free space issue is fixed 🙂

Viewing code pages

While hunting malware, a colleague noticed a folder that was displayed in his DOS console as if it had no name or was a single space.
This has nothing to do with the malware he was cleaning: http://blogs.technet.com/b/mmpc/archive/2013/02/27/the-strange-case-of-gamarue-propagation.aspx

It’s actually due to the fact that we work in a multi-language environment and that some Cyrillic characters cannot be displayed correctly in a console whose code page is set to Western European (DOS) (850).

In DOS, you can use the chcp.com command to display the current code page.

In PowerShell, you can also use this old DOS command.

But PowerShell is different from a DOS console: it uses 3 code pages, one for the input and 2 for the output.

[console]::InputEncoding


The standard console output encoding is the same as the input encoding.
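You can verify it with the matching static property:

[console]::OutputEncoding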

But for the output being sent through the pipeline to native applications, there’s an automatic variable called $OutputEncoding

The help file says the following about $OutputEncoding:

$OutputEncoding
—————
Determines the character encoding method that Windows PowerShell
uses when it sends text to other applications.

For example, if an application returns Unicode strings to Windows
PowerShell, you might need to change the value to UnicodeEncoding
to send the characters correctly.

Valid values: Objects derived from an Encoding class, such as
ASCIIEncoding, SBCSCodePageEncoding, UTF7Encoding,
UTF8Encoding, UTF32Encoding, and UnicodeEncoding.

Default: ASCIIEncoding object (System.Text.ASCIIEncoding)

EXAMPLES

This example shows how to make the FINDSTR command in Windows
work in Windows PowerShell on a computer that is localized for
a language that uses Unicode characters, such as Chinese.

The first command finds the value of $OutputEncoding. Because the
value is an encoding object, display only its EncodingName property.

PS> $OutputEncoding.EncodingName # Find the current value
US-ASCII

In this example, a FINDSTR command is used to search for two Chinese
characters that are present in the Test.txt file. When this FINDSTR
command is run in the Windows Command Prompt (Cmd.exe), FINDSTR finds
the characters in the text file. However, when you run the same
FINDSTR command in Windows PowerShell, the characters are not found
because the Windows PowerShell sends them to FINDSTR in ASCII text,
instead of in Unicode text.

PS> findstr # Use findstr to search.
PS> # None found.

To make the command work in Windows PowerShell, set the value of
$OutputEncoding to the value of the OutputEncoding property of the
console, which is based on the locale selected for Windows. Because
OutputEncoding is a static property of the console, use
double-colons (::) in the command.

PS> $OutputEncoding = [console]::outputencoding
    # Set the value equal to the OutputEncoding property of the console.
PS> $OutputEncoding.EncodingName
OEM United States
    # Find the resulting value.

As a result of this change, the FINDSTR command finds the characters.

PS> findstr
test.txt:
    # Use findstr to search. It finds the characters in the text file.

I think that the above help content will fully make sense with a concrete example that you can find on this page:
http://blogs.msdn.com/b/powershell/archive/2006/12/11/outputencoding-to-the-rescue.aspx

Last quick tip. To view all the available code pages with PowerShell, you do:

[System.Text.Encoding]::GetEncodings() | ft -AutoSize

I was also wondering how the code page was selected when you start a PowerShell or DOS console.
Well, it’s based on the “System Locale”. (See my article on the system locale vs. the user locale)
You can check the system locale with the following V3 cmdlet:

Show-ControlPanelItem -Name Region

or

control intl.cpl

So:
If you set the system locale to French, you end up with the 850 (Multilingual) code page.
If you set the system locale to English (US), you end up with the 437 (English US) code page.
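
A quick way to check which OEM code page your system locale gave you (using the same static property as above):

# The code page of the console output encoding, e.g. 850 or 437
[console]::OutputEncoding.CodePage
# or the legacy way
chcp.com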

Without wanting to overcomplicate things, be aware that the font also has an impact: http://stackoverflow.com/questions/1259084/what-encoding-code-page-is-cmd-exe-using

Quick tip: Get Active Directory (AD) Domain and Forest functional level

A few days ago, both Mike F. Robbins and Shay Levy showed how to enumerate the FSMO roles in the current domain and forest with PowerShell.

Based on the same cmdlets and technique, you can also view the Forest and Domain functional level like this:

# Get the Forest functional level            
(Get-ADForest).ForestMode            
            
# Get the Domain functional level            
(Get-ADDomain).DomainMode

# The same, without the ActiveDirectory module:
# Get the Forest functional level
[System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest() | Select *Mode*

# Get the Domain functional level
[System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain() | Select *Mode*