Add missing WinRMRemoteWMIUsers__ group in Active Directory

This morning I saw a post in French about the WinRMRemoteWMIUsers__ group missing from Active Directory Domain Services. The post references KB3118385, “Svchost.exe uses excessive CPU resources on a single-core Windows Server 2012 domain controller”.

The only part missing from the blog post is the properties of this group, which I found on this TechNet page: winrmremotewmiusers__

Of course, you can add the missing group like this:

if (-not (Get-ADGroup -Filter { Name -eq 'WinRMRemoteWMIUsers__' })) {
    New-ADGroup -GroupScope DomainLocal -GroupCategory Security -Name 'WinRMRemoteWMIUsers__'
}

…but it won’t have the well-known SID documented above.

And its Description is:

Members of this group can access WMI resources over management protocols (such as WS-Management via the Windows Remote Management service). This applies only to WMI namespaces that grant access to the user.

Stop telemetry

Some people have noticed Windows 7 computers talking to some IP addresses on TCP port 80 since the November Quality Rollup, while Woody Leonhard warned in mid-October that new telemetry would be pushed to downlevel operating systems like Windows 7 and 8.1.

A few days before Christmas, abbodi86 provided a less aggressive way of stopping telemetry on a computer on the mailing list:

Here’s how to do this on Windows 10 using PowerShell
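A minimal sketch of the idea, using the EventTracingManagement cmdlets built into Windows 10 from an elevated prompt. The session name 'Diagtrack-Listener' is an assumption based on the default Connected User Experiences and Telemetry configuration; check yours with Get-EtwTraceSession and Get-AutologgerConfig first.

```powershell
# Sketch only: 'Diagtrack-Listener' is the usual name of the telemetry
# ETW session and its autologger; verify the names on your system first.

# Stop the running event trace session that collects telemetry
Stop-EtwTraceSession -Name 'Diagtrack-Listener' -ErrorAction SilentlyContinue

# Set the autologger's Start value to 0 (Disabled) so that the session
# doesn't come back at the next computer restart
Set-AutologgerConfig -Name 'Diagtrack-Listener' -Start 0
```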

At the end, the Perfmon snap-in shouldn’t show any active event trace sessions (right-click and refresh if necessary),
and the Autologger session should be set to Disabled so that it doesn’t start at the next computer restart.

On Windows 7, you cannot use these PowerShell cmdlets, so here is the “legacy” way to achieve the same thing:
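A sketch of that legacy approach with logman.exe and reg.exe, run from an elevated prompt. The autologger name “AutoLogger-Diagtrack-Listener” is an assumption: it’s the one the telemetry updates appear to create on downlevel systems, so verify it under the Autologger registry key before running this.

```batch
rem Sketch only, run elevated; verify the session name on your system.
rem Stop the running trace session
logman stop AutoLogger-Diagtrack-Listener -ets

rem Set Start to 0 so the autologger doesn't restart the session at boot
reg add "HKLM\SYSTEM\CurrentControlSet\Control\WMI\Autologger\AutoLogger-Diagtrack-Listener" /v Start /t REG_DWORD /d 0 /f
```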

Fix DFSR 4012 event and MaxOfflineTimeInDays

What a nice way to start 2017: group policies are not applying to computers because DFS replication of the SYSVOL is stopped and the 60-day grace period to resume replication is over.


Happy new year 🙂

NB1: don’t do what is recommended in the above message when it says “To resume replication of this folder, use the DFS Management snap-in to remove this server from the replication group, and then add it back to the group.”
That does not apply to the SYSVOL share on a domain controller, right?

NB2: if you wait for the 4012 error and ignore the warnings, it’s too late. Why did the domain admin wait more than 60 days to resume replication? Nobody read the alerts or saw the warnings (event ID 2213), and, worse, the remediation script was also broken and there wasn’t an alert for that 😦

NB3: notice that the above message tells you how long the server has been disconnected.
This server has been disconnected from other partners for ? days: 71 in my case.

I did the following to get back the replication of the SYSVOL working:

# Get the local DFSR configuration from WMI
$i = gwmi -Namespace root\MicrosoftDFS -Query 'SELECT * FROM DfsrMachineConfig'
# Increase MaxOfflineTimeInDays beyond the number of days disconnected
# 71 + 4 = 75
$i.MaxOfflineTimeInDays = [uint32]75
# Persist the change back to WMI
$i.Put()
Restart-Service -Name DFSR -Verbose

How to export all URLs of Firefox tabs at once

I’ve seen the following script proposed in the TechNet Script Gallery that should show me “How to export all URLs of Firefox tabs at once”.

I say “should” because it doesn’t always work on my computer, and there’s no error handling of any kind in the proposed code 😦

First, to save you the hassle of trying that script: you can just open Tools/Options in Firefox and set the following. Even if you have 1000 tabs, the browser will open in a few minutes and restore the gazillion tabs without you having to temporarily record the URL of each Firefox tab that you want to open in the next session.

Back to the proposed script. It actually threw the following error:
Error during serialization or deserialization using the JSON JavaScriptSerializer. The length of the string exceeds the value set on the maxJsonLength property.
Why? Because when you have a gazillion tabs (whatever that means), the Mozilla .js file that stores the JSON data about your tabs can be quite big.
I found the following post in the Windows PowerShell forum that made me go this way:
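A minimal sketch of that approach: load the JavaScriptSerializer from System.Web.Extensions, raise its MaxJsonLength so the big session file deserializes, then walk the session structure. The profile path and the JSON property names (windows, tabs, entries, index) are assumptions based on the Firefox sessionstore.js format of the time.

```powershell
# Sketch only: profile path and JSON property names are assumptions
Add-Type -AssemblyName System.Web.Extensions
$serializer = New-Object System.Web.Script.Serialization.JavaScriptSerializer

# Raise the limit that made the gallery script fail on large session files
$serializer.MaxJsonLength = [int]::MaxValue

# Pick the first profile's session store
$file = Get-ChildItem "$env:APPDATA\Mozilla\Firefox\Profiles\*\sessionstore.js" |
    Select-Object -First 1

$session = $serializer.DeserializeObject((Get-Content -Path $file.FullName -Raw))

# Walk windows -> tabs -> current history entry to list every URL
$session['windows'] | ForEach-Object {
    $_['tabs'] | ForEach-Object {
        $_['entries'][$_['index'] - 1]['url']
    }
}
```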

Quick tip: change ConfigMgr updates maximum run time

With the new servicing model being applied to pre-Windows 10 operating systems, it’s a good idea to change the default “maximum run time” of the monthly cumulative updates.
For example, I’ve:

Here’s a way to achieve this easily with PowerShell.
The -Name parameter of Get-CMSoftwareUpdate accepts wildcards, although its documentation says the opposite.
The -Fast switch is explained in the following warning message:

WARNING: ‘Get-CMSoftwareUpdate’ supports -Fast for retrieving objects without loading lazy properties. Loading lazy properties can cause significant performance penalties. If it is not necessary to utilize the lazy properties in the returned object(s), -Fast should be used. This warning can be disabled by setting $CMPSSuppressFastNotUsedCheck = $true.

# 1. View what updates will be targeted:
Get-CMSoftwareUpdate -Fast -Name "*Security Monthly Quality Rollup for Windows*" |
Select Max*,*DisplayName
#NB: MaxExecutionTime is in seconds in this case

# 2. Set the maximum run time to 120 minutes
Get-CMSoftwareUpdate -Fast -Name "*Security Monthly Quality Rollup for Windows*"  | 
Set-CMSoftwareUpdate -MaximumExecutionMins 120 -Verbose

Local security policy using Pester

In my previous post, I’ve shown how to use DSC to configure the local security policy. If you only want to get the compliance status of the system, Pester might be more suitable.

I first create a hashtable of settings like this:

# secedit.exe /export /Cfg C:\secpol.txt /areas SECURITYPOLICY
(Get-Content -Path 'C:\secpol.txt' -ReadCount 1) `
-match '^([A-Z\s0-9_\\]+)=(.*)$' -replace '=',',' |
ConvertFrom-Csv -Header Key,Value1,Value2 | ForEach-Object {
    '    @{'
    "        Key = '{0}'" -f $_.Key
    "        Value1 = '{0}'" -f $_.Value1
    if ($_.Value2) {
        "        Value2 = '{0}'" -f $_.Value2
    }
    '    },'
}

and I populate the file secpol.ps1 using the output of the above command.
The content looks like this. The hashtable is stored inside an array named $SecurityPolicy.
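A sketch of what the generated secpol.ps1 might contain; the keys are real secedit export names, but the values shown are illustrative:

```powershell
# secpol.ps1 -- illustrative sample, not my actual settings
$SecurityPolicy = @(
    @{
        Key = 'MinimumPasswordAge'
        Value1 = '1'
    },
    @{
        Key = 'MaximumPasswordAge'
        Value1 = '42'
    }
)
```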

Now in the test file secpol.tests.ps1, I’ve:
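The test file itself isn’t reproduced above, so here’s a minimal sketch of what it could look like: dot-source secpol.ps1, re-export the current policy, parse it the same way as before, and generate one It block per setting (Pester v3 Should syntax).

```powershell
# secpol.tests.ps1 -- minimal sketch; assumes secpol.ps1 in the same
# folder defines the $SecurityPolicy array
. $PSScriptRoot\secpol.ps1

# Export the current local security policy and parse it as before
$exportFile = Join-Path $env:TEMP 'secpol.txt'
secedit.exe /export /cfg $exportFile /areas SECURITYPOLICY | Out-Null
$current = (Get-Content -Path $exportFile) `
-match '^([A-Z\s0-9_\\]+)=(.*)$' -replace '=',',' |
ConvertFrom-Csv -Header Key,Value1,Value2

Describe 'Local security policy' {
    foreach ($setting in $SecurityPolicy) {
        It ('has {0} set to {1}' -f $setting.Key, $setting.Value1) {
            ($current | Where-Object { $_.Key -eq $setting.Key }).Value1 |
            Should Be $setting.Value1
        }
    }
}
```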

Let’s see how to use it

Invoke-Pester ~/documents/pester/secpol


Fine, my system is compliant.
But let’s say I change the Maximum password age using gpedit.msc.

Now, Pester reports my system as not being compliant:
But the only drawback of my quick’n’dirty code is that it doesn’t tell you what value was found instead of the expected 42 in this case.

Audit policy using Pester

In my previous post, I’ve shown how to leverage DSC to apply a configuration that defines the local Audit policy.
If you’re just interested in the compliance of a system, Pester might be more suitable to assess it against a template. It will somehow validate the operational status of the system.

First, I create a hashtable of settings like this:

auditpol.exe /get /category:* /r |
ConvertFrom-Csv |
Select Subcategory*,*lusion* |
ForEach-Object {
    '    @{'
    "        Name = '{0}'" -f $_.Subcategory
    "        GUID = '{0}'" -f $_.'Subcategory GUID'
    "        Inclusion = '{0}'" -f $_.'Inclusion Setting'
    '    },'
}

and I populate the file auditpol.ps1 using the output of the above command. The content looks like this. The hashtable is stored inside an array named $AuditPolicy.
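A sketch of what the generated auditpol.ps1 might contain; the subcategory names and GUIDs are the well-known audit policy ones, but the Inclusion values shown are illustrative:

```powershell
# auditpol.ps1 -- illustrative sample, not my actual settings
$AuditPolicy = @(
    @{
        Name = 'Logoff'
        GUID = '{0CCE9216-69AE-11D9-BED3-505054503030}'
        Inclusion = 'Success'
    },
    @{
        Name = 'Logon'
        GUID = '{0CCE9215-69AE-11D9-BED3-505054503030}'
        Inclusion = 'Success and Failure'
    }
)
```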

Now in the test file auditpol.tests.ps1, I’ve:
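Again, the test file isn’t reproduced above; a minimal sketch of what it could look like, matching each template entry against the live auditpol.exe CSV output by GUID (Pester v3 Should syntax):

```powershell
# auditpol.tests.ps1 -- minimal sketch; assumes auditpol.ps1 in the same
# folder defines the $AuditPolicy array
. $PSScriptRoot\auditpol.ps1

# Read the current audit policy in CSV format
$current = auditpol.exe /get /category:* /r | ConvertFrom-Csv

Describe 'Audit policy' {
    foreach ($policy in $AuditPolicy) {
        It ('audits {0} with: {1}' -f $policy.Name, $policy.Inclusion) {
            ($current |
             Where-Object { $_.'Subcategory GUID' -eq $policy.GUID }
            ).'Inclusion Setting' |
            Should Be $policy.Inclusion
        }
    }
}
```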

Let’s see how to use it

Invoke-Pester ~/documents/pester/auditpol


If I change the Logoff policy for example like this:

auditpol /Set /subcategory:{0CCE9216-69AE-11D9-BED3-505054503030} /failure:enable

and run the Pester test a second time, I’ll get:
…my system isn’t compliant anymore with the settings defined in my template.