Deploying Files to the Logged-In User’s Profile Using Configuration Manager

I’ve had a number of cases where it would have been useful to deploy files to a logged-in user’s profile on Windows using Configuration Manager (SCCM), but because the ‘installation’ runs in the local system context, variables such as %AppData% in a batch file don’t resolve to the user’s profile, and this has always been an issue.

We recently wanted to push some custom backgrounds out to Microsoft Teams and I thought I’d spend a while trying to solve this issue. The following may not be the most elegant solution, but it seems to work reliably for me!

  1. We’re going to use PowerShell to detect the presence of a file. Copy the following into a file for the moment; we’ll need this PowerShell snippet shortly:
    # Extract the username (DOMAIN\user) of the logged-in user and return just the user part
    Function CurrentUser {
         $LoggedInUser = Get-WmiObject Win32_ComputerSystem | Select-Object UserName
         $LoggedInUser = [string]$LoggedInUser
         $LoggedInUser = $LoggedInUser.split("=")
         $LoggedInUser = $LoggedInUser[1]
         $LoggedInUser = $LoggedInUser.split("}")
         $LoggedInUser = $LoggedInUser[0]
         $LoggedInUser = $LoggedInUser.split("\")
         $LoggedInUser = $LoggedInUser[1]
         Return $LoggedInUser
    }

    $user = CurrentUser

    # Give the deployment time to finish copying the file(s) before testing for them
    start-sleep 30

    $AppPath = "C:\Users\" + $user + "\AppData\Roaming\Microsoft\Teams\Backgrounds\Uploads\CustomBackground.png"
    If (Test-Path $AppPath) {
         Write-Host "The application is installed"
    }

    We need the sleep function in the file as the detection runs pretty quickly after the deployment and we want to be sure that the file(s) we’re going to copy are in place before attempting to perform the detection. This script extracts the username (including domain) of the currently logged in user, then separates the actual username from the returned value allowing this to be inserted into the path for detecting the file.

  2. Create a PowerShell file similar to the detection script to perform the file deployment:
    # Same username extraction logic as the detection script
    Function CurrentUser {
         $LoggedInUser = Get-WmiObject Win32_ComputerSystem | Select-Object UserName
         $LoggedInUser = [string]$LoggedInUser
         $LoggedInUser = $LoggedInUser.split("=")
         $LoggedInUser = $LoggedInUser[1]
         $LoggedInUser = $LoggedInUser.split("}")
         $LoggedInUser = $LoggedInUser[0]
         $LoggedInUser = $LoggedInUser.split("\")
         $LoggedInUser = $LoggedInUser[1]
         Return $LoggedInUser
    }

    $user = CurrentUser

    # Copy the background images into the logged-in user's Teams uploads folder
    $AppPath = "C:\Users\" + $user + "\AppData\Roaming\Microsoft\Teams\Backgrounds\Uploads\"
    Copy-Item ".\Backgrounds\*.png" -Destination $AppPath -Confirm:$false -Force

    I saved mine as ‘Deploy-TeamsBackgrounds.ps1’. This script uses the same logic as the detection script, above, to determine the username and allows us to use this in the path to copy the file(s) to.

  3. Create a batch file to call the PowerShell file that does the actual file deployment:
    Powershell -NoProfile -ExecutionPolicy Bypass -file %~dp0Deploy-TeamsBackgrounds.ps1

    I saved this as ‘Deploy-TeamsBackgrounds.bat’. This batch file runs PowerShell and passes in the filename of the script used to deploy the files to the user’s profile location. Ensure that the batch file and the deployment PowerShell script are in the same location somewhere suitable for SCCM to use as the application source (usually a share on the SCCM server). I then have a folder called ‘Backgrounds’ in the same location that contains the actual image files to be copied.
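    For reference, the application source ends up looking something like this (the share path and image file names below are just examples):

        \\sccmserver\Sources$\TeamsBackgrounds\
            Deploy-TeamsBackgrounds.bat
            Deploy-TeamsBackgrounds.ps1
            Backgrounds\
                CustomBackground.png
                CustomBackground2.png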

  4. Create a new application in Configuration Manager, selecting ‘Manually specify the application information’:
    Deploy-TeamsBackgrounds01
    Then enter information on the name, publisher and version of the application:
    Deploy-TeamsBackgrounds02
    Add any required information for the Software Center entry:
    Deploy-TeamsBackgrounds03
    Click ‘Add’ to add a deployment type:
    Deploy-TeamsBackgrounds04
    and select ‘Script installer’ from the drop-down. This will automatically select the ‘Manually specify the deployment type information’ option:
    Deploy-TeamsBackgrounds05
    Provide a name for the deployment type:
    Deploy-TeamsBackgrounds06
    Specify the location that contains the files created earlier and the batch file as the command used to install the content:
    Deploy-TeamsBackgrounds07
    For the detection method, select ‘Use a custom script to detect the presence of this deployment type’, then click the ‘Edit…’ button:
    Deploy-TeamsBackgrounds08
    Select PowerShell from the Script type drop-down and then paste the detection script generated earlier:
    Deploy-TeamsBackgrounds09
    and click OK to close the script editor window.
    Define the user experience:
    Deploy-TeamsBackgrounds10
    Note that I saw a warning shown at this point.
    Define any requirements (e.g. only deploy on Windows 10) and dependencies, then click through to generate the application.
  5. Distribute the content, then configure a deployment. I used the ‘All Staff’ user collection to deploy to.

Once the application appears in the Software Center, an end-user can click to install the custom backgrounds and the ‘application’ is downloaded to the Configuration Manager cache, then the batch file is triggered, which in turn executes the PowerShell script to copy the custom background images to the correct location in the user’s profile. The detection script then runs and detects the presence of the specified file.

Configuring PowerChute Network Shutdown on Server Core

Everyone installing Hyper-V servers is installing them as Server Core servers, right? Smile

I recently hit an issue configuring APC’s PowerChute Network Shutdown (PCNS) software on a Server Core installation of Windows Server, version 1809 (the most recent release of the semi-annual channel): while the installation appeared to complete successfully, I could not communicate with the service to configure it post-installation.

After a little digging, it turned out that the installer had created the firewall rule exemptions for the wrong profile (i.e. public rather than domain). The solution was to run the following PowerShell to update the profile for the PCNS firewall rules to match the network profile the server was operating on:

Get-NetFirewallRule | where {$_.DisplayName -like "PCNS*"} | Set-NetFirewallRule -Profile Domain
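To see the rules and the profiles they are bound to, before or after making the change, something like the following works (assuming the rule display names all start with ‘PCNS’, as they did on my installation):

Get-NetFirewallRule -DisplayName "PCNS*" | Format-Table DisplayName, Profile, Enabled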

Once the firewall rules were updated, communication was restored and configuration could be completed from a browser running on another machine.

Configure Server 2016 ADFS and WAP with custom ports using Powershell

A pull request for Chris Gardner’s WebApplicationProxyDSC is now inbound after a frustrating week of trying to automate the configuration of ADFS and WAP on a Server 2016 lab.

With Server 2016, the PowerShell commands to configure the ADFS and WAP servers include switches to specify a non-default port. I need to do this because the servers are behind a NetNat on a server hosting several labs, so port 443 is not available to me and I must use a different port.

This should be simple: Specify the SSLPort switch on the Install-ADFSFarm command and the HttpsPort on the Install-WebApplicationProxy command. However, when I do that, the WAP configuration fails with an error that it cannot read the FederationMetadata from the proxy.

I tried all manner of things to diagnose why this was failing and in the end, the fix is a crazy hack that should not work!

The proxy installation, despite accepting the custom port parameter, does not build the URLs correctly for the ADFS service, so is still trying to call port 443. You can set these URLs on a configured WAP service using the Set-WebApplicationProxyConfiguration command. However, when you run this command with no configured proxy, it fails.

Or so you think…

On the ADFS Server:

  1. Install-AdfsFarm, specifying the CertificateThumbprint, Credential, FederationServiceDisplayName, FederationServiceName and SSLPort parameters.

On the WAP Server:

  1. Install-WebApplicationProxy, specifying the HttpsPort switch and the CertificateThumbprint, FederationServiceName and FederationServiceTrustCredential parameters.
  2. Set-WebApplicationProxyConfiguration specifying the ADFSUrl, OAuthAuthenticationURL and ADFSSignOutURL parameters with the correct URLs for your ADFS server (which include the port in the Url).
  3. Re-run the command in step 1.

Despite the fact that step 2 says it failed, it seems to set enough information for step 3 to succeed. My experience, however, is that only doing steps 2 and 3 does not work. Weird!
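Putting that together, the sequence looks roughly like the following sketch. The thumbprint, credentials, service name and port are placeholders for your own environment, and the exact ADFS endpoint URLs may need adjusting:

# On the ADFS server
Install-AdfsFarm -CertificateThumbprint $thumbprint `
    -Credential $domainCred `
    -FederationServiceDisplayName "Lab ADFS" `
    -FederationServiceName "adfs.lab.local" `
    -SSLPort 8443

# On the WAP server: this first attempt fails, but leaves partial configuration behind
Install-WebApplicationProxy -CertificateThumbprint $thumbprint `
    -FederationServiceName "adfs.lab.local" `
    -FederationServiceTrustCredential $domainCred `
    -HttpsPort 8443

# Fix up the ADFS URLs so that they include the custom port
Set-WebApplicationProxyConfiguration -ADFSUrl "https://adfs.lab.local:8443/adfs/ls" `
    -OAuthAuthenticationURL "https://adfs.lab.local:8443/adfs/oauth2/authorize" `
    -ADFSSignOutURL "https://adfs.lab.local:8443/adfs/ls/?wa=wsignout1.0"

# Re-run the proxy installation; this time it succeeds
Install-WebApplicationProxy -CertificateThumbprint $thumbprint `
    -FederationServiceName "adfs.lab.local" `
    -FederationServiceTrustCredential $domainCred `
    -HttpsPort 8443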

As a side note, testing this lot is a lot easier if you remember that the idpinitiatedsignon.aspx page we all normally use for testing ADFS is disabled by default in Server 2016. Turn it on with Set-AdfsProperties -EnableIdPInitiatedSignonPage $true

Renaming an In-Use Content Type in SharePoint Online

Design of SharePoint Content Types, in particular for SharePoint Online, is very important. Care must be taken to ensure that the design is appropriate for the environment, as changes made later can impose significant management overheads. In particular, if a Content Type is put to use (i.e. is assigned to a list/library), this can complicate changes made after initial deployment.

Some Content Type operations are simple, e.g. adding a column. This will work as expected, with the new column rippling all the way down to the in-use Content Types.

Renaming a Content Type potentially falls under the ‘more difficult’ category, in particular if it’s been assigned to a list/library. This is due to the way that SharePoint handles this process, with the Content Type that is assigned to the list/library being a child content type of that published to a site collection.

I’d still strongly recommend using the Content Type Hub (a hidden site collection, available at /sites/contenttypehub) to centrally manage and publish content types. If you change the name of a content type here and republish it, the content type will be renamed in the content type gallery of each site collection. If the content type is attached to a list/library, however, it will not be renamed there, because the attached copy is a child content type; you end up in the scenario where the gallery reflects the name change while the instance attached to the list/library does not.

Looking at the list of content types attached to a list/library and clicking through on the content type that you wish to change does allow you to switch the content type from read-only to writeable, which then lets you change its name. However, if you have lots of libraries and/or lots of content types to process, this gets laborious very quickly. PowerShell to the rescue again!

The following script is a sample that can be used to change the name of a content type that is attached to a set of lists/libraries:

$SiteUrl = "https://domain.sharepoint.com/teams/SiteCollection"  
$UserName = "Andy@o365domain.com"  
# Ask the user for the password
$Password = Read-Host -Prompt "Enter your password: " -AsSecureString

# List of lists/libraries to process
$libraries = @("Library1","Library2","Library3")

# Add references to the CSOM libraries
Add-Type -Path "C:<Path-to-CSOM-libraries>Microsoft.SharePoint.Client.dll" 
Add-Type -Path "C:<Path-to-CSOM-libraries>Microsoft.SharePoint.Client.Runtime.dll" 

# Connect
$spoCtx = New-Object Microsoft.SharePoint.Client.ClientContext($SiteUrl)  
$spoCredentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($Username, $Password)   
$spoCtx.Credentials = $spoCredentials

# Load the web context
$web = $spoCtx.web
$spoCtx.load($web)
$spoCtx.executeQuery()

# Process the lists/libraries
foreach ($lib in $libraries) {
    $list = $web.lists.getbytitle("$lib")
    $spoCtx.load($list)
    $spoCtx.executeQuery()

    # Load the content types attached to the list/library
    $CTs = $list.ContentTypes
    $spoCtx.load($CTs)
    $spoCtx.executeQuery()

    $IDToUse = ""

    Write-Host "Processing library $lib" -ForegroundColor Yellow
    foreach ($CT in $CTs) 
    { 
        Write-Host "-- " $CT.Name $Ct.Id
        if ($CT.Name -eq "Content Type To Change")
        {
            $IDToUse = $CT.Id
            Write-Host "Using this one..." -ForegroundColor Green
        }
    }

    # Grab a reference to the content type we want to change
    $CT = $list.ContentTypes.getbyid($IDToUse)
    $spoCtx.load($CT)
    $spoCtx.executeQuery()

    if ($CT -ne $null)
    {
        # Set the content type to be writeable to be able to update it
        Write-Host "Setting content type to ReadOnly = false" -ForegroundColor Green
        $CT.ReadOnly = $false
        $CT.Update($false)
        $spoCtx.load($CT)
        $spoCtx.executeQuery()

        # Modify the content type name
        Write-Host "Processing Content type..." -ForegroundColor Cyan
        $CT.Name = "Content Type That Has Been Changed"
        $CT.Update($false)
        $spoCtx.load($CT)
        $spoCtx.executeQuery()

        # Return the content type to read-only
        Write-Host "Setting content type to ReadOnly = true" -ForegroundColor Green
        $CT.ReadOnly = $true
        $CT.Update($false)
        $spoCtx.load($CT)
        $spoCtx.executeQuery()
    }
}

What I wish I had known when I started developing Lability DevTest Lab Environments

At Black Marble we have been migrating our DevTest labs from on-premises TFS Lab Management to a mixture of on-premises and Azure-hosted Lability-defined labs, as discussed by Rik Hepworth on his blog. I have only been tangentially involved in this effort until recently, consuming the labs but not creating the definitions.

So this post is one of those I write so that I don’t forget the things I learnt the hard way; or, to put it another way, so that I stop having to ask Rik or Chris after watching a 2-hour environment deployment fail for the Xth time.

  • You can’t log too much. The log files are your friends, both the DSC ones and any generated by tools triggered by DSC. This is because most of the configuration process happens during reboots, so there is no UI to watch.
  • The DSC log is initially created in the working folder that the .MOF file is in on the target VM; but after a reboot (e.g. after joining a domain) the next and subsequent DSC log files are created in C:\Windows\System32\Configuration\ConfigurationStatus
  • Make sure you specify the full path for any bespoke logging you do, relative paths make it too easy to lose the log file
  • Stupid typos get you every time. Many will be spotted when the MOF file is generated, but plenty, such as those in command lines or arguments, are only spotted when you deploy an environment. Also, many of these don’t actually cause error messages; they just mean nothing happens. So if you expect a script/tool to be run and it isn’t, check the log and the definition for mismatches in names.
  • If you are using the Package DSC Resource to install an EXE or MSI there are a couple of gotchas:
    • For MSIs the Name parameter must exactly match the ProductName in the MSI, and the ProductId must match the ProductCode GUID. Both of these can be found using the Orca tool

      image

    • Package MongoDb {
          PsDscRunAsCredential = $DomainCredentialsAtDomain
          DependsOn = '[Package]VCRedist'
          Ensure = 'Present'
          Arguments = "/qn /l*v c:\bootstrap\MongoDBInstall.log INSTALLLOCATION=`"C:\Program Files\MongoDB\Server\3.6`""
          Name = "MongoDB 3.6.2 2008R2Plus SSL (64 bit)"
          Path = "c:\bootstrap\mongodb-win32-x86_64-2008plus-ssl-3.6.2-signed.msi"
          ProductId = "88B5F0D8-0692-4D86-8FF4-FB3CDBC6B40F"
          ReturnCode = 0
      }

    • For EXEs the ProductName does not appear to be as critical, but you still need the Product ID. You can get this with PowerShell on a machine that already has the EXE installed
    • Get-WmiObject Win32_Product | Format-Table IdentifyingNumber, Name, Version
  • I had network issues; they could mostly be put down to incorrect Network Address Translation. In my case this should have been set up when Lability was initially configured; the commands ran OK, creating a virtual switch and NetNat, but I ended up with a Windows fallback network address of 169.x.x.x when I should have had an address of 192.168.x.x on my virtual switch. So if in doubt, check the settings on your virtual switch in the Windows ‘Network and Sharing Center’ before you start doubting your environment definitions. A quick set of checks is shown below.
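On that last point, something like the following shows whether the switch, the NAT and the host’s address on the switch all look right (switch and interface names will vary with your Lability configuration):

Get-VMSwitch                  # is the expected lab switch present?
Get-NetNat                    # is there a NAT for the lab subnet?
Get-NetIPAddress -AddressFamily IPv4 |
    Where-Object { $_.InterfaceAlias -like "vEthernet*" } |
    Select-Object InterfaceAlias, IPAddress, PrefixLength    # a 169.254.x.x address here means the static address never applied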

Hope these pointers help others, as well as myself, the next time Lability definitions are written.

Creating test data for my Generate Release Notes Extension for use in CI/CD process

As part of the continued improvement to my CI/CD process I needed to provide a means so that, whenever I test my Generate Release Notes Task within its own CI/CD process, new commits and work item associations are made. This is required because the task only picks up new commits and work items since the last successful run of a given build. So if the last release of the task extension was successful, the next set of tests would have no associations to go in the release notes, which doesn’t exactly exercise all the code paths!

In the past I added this test data by hand, a new manual commit to the repo prior to a release; but why have a dog and bark yourself? Better to automate the process.

This can be done using a PowerShell script, run inline or stored in the build’s source repo and run within a VSTS build. You can pass in the required parameters, but I set sensible defaults for my purposes.

For this PowerShell code to work you do need to make some security changes to allow the build agent service user to write to the Git repo. This is documented by Microsoft.
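The original script isn’t reproduced here, but as a rough sketch of the idea, all it needs to do is make a trivial commit that mentions a work item. In the sketch below the repo URL, file name, user details and work item ID are placeholders, and it assumes the build has ‘Allow scripts to access OAuth token’ enabled so that $env:SYSTEM_ACCESSTOKEN is available:

param(
    [string]$RepoUrl    = "https://myaccount.visualstudio.com/MyProject/_git/MyRepo",  # placeholder
    [string]$FileName   = "dummydata.txt",                                             # placeholder
    [string]$WorkItemId = "1234"                                                       # placeholder
)

# Clone the repo using the build agent's OAuth token for authentication
$tokenUrl = $RepoUrl -replace '^https://', "https://build:$($env:SYSTEM_ACCESSTOKEN)@"
git clone $tokenUrl repo
Set-Location repo

# Make a trivial change so there is a new commit for the release notes task to find
Add-Content -Path $FileName -Value "Test data generated $(Get-Date -Format o)"

git config user.email "build@example.com"
git config user.name "Build agent"
git add $FileName
# Mentioning #<id> in the commit message associates the commit with a work item
git commit -m "Adding test data for release notes generation #$WorkItemId"
git push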

The PowerShell task to run this code is placed in a build as the only task

image

This build is then triggered as part of the release process

image

Note that the triggering of this build has to be such that it runs on a non-blocking build agent as discussed in my previous posts. In my case I trigger the build to add the extra commits and work items just before triggering the validation build on my private Azure hosted agent.

Now, there is no reason you can’t just run the PowerShell directly within the release if you wanted to. I chose to use a build so that the build could be reused between different VSTS extension CI/CD pipelines; remember I have two Generate Release Note Extensions, PowerShell and NodeJS Based.

So another step to fully automating the whole release process.

Test-SPContentDatabase False Positive

I was recently performing a SharePoint 2013 to 2016 farm upgrade and noticed an interesting issue when performing tests on content databases to be migrated to the new system.

As part of the migration of a content database, it’s usual to perform a ‘Test-SPContentDatabase’ operation against each database before attaching it to the web application. On the farm that I was migrating, I got mixed responses to the operation, with some databases passing the check successfully and others giving the following error:

PS C:\> Test-SPContentDatabase SharePoint_Content_Share_Site1

Category        : Configuration
Error           : False
UpgradeBlocking : False
Message         : The [Share WebSite] web application is configured with
claims authentication mode however the content database you
are trying to attach is intended to be used against a
windows classic authentication mode.
Remedy          : There is an inconsistency between the authentication mode of
target web application and the source web application.
Ensure that the authentication mode setting in upgraded web
application is the same as what you had in previous
SharePoint 2010 web application. Refer to the link
http://go.microsoft.com/fwlink/?LinkId=236865" for more
information.
Locations       :

This was interesting, as all of the databases were attached to the same content web application and had been created on the current system (i.e. not migrated to it from an earlier version of SharePoint), and therefore should all have been in claims authentication mode. Of note also is the reference to SharePoint 2010 in the error message; I guess the cmdlet hasn’t been updated in a while…

After a bit of digging, it turned out that the databases that threw the error when tested had all been created and some initial configuration applied, but nothing more. Looking into the configuration, there were no users granted permissions to the site (except for the default admin user accounts that had been added as the primary and secondary site collection administrators when the site collection had been created), but an Active Directory group had also been given site collection administrator permissions.

A quick peek at the UserInfo table for the database concerned revealed the following (the screen shot below is from a test system used to replicate the issue):

UserInfo Table

The tp_Login entry highlighted corresponds to the Active Directory group that had been added as a site collection administrator.

Trevor Seward’s ‘Test-SPContentDatabase Classic to Claims Conversion’ blog post showed what was happening. When the Test-SPContentDatabase cmdlet runs, it looks for the first entry in the UserInfo table that matches the following rule:

  • tp_IsActive = 1 AND
  • tp_SiteAdmin = 1 AND
  • tp_Deleted = 0 AND
  • tp_Login not LIKE 'I:%'

In our case, having an Active Directory group assigned as a site collection administrator matched this set of rules exactly, so the query returned a result and the message was displayed, even though the database was indeed configured for claims authentication rather than classic-mode authentication.

For the organisation concerned, having an Active Directory group configured as the site collection administrator for some of their site collections makes sense, so they’ll likely experience the same message next time they upgrade. Obviously in this case it was a false positive and could safely be ignored, and indeed attaching the databases that threw the error to a SharePoint 2016 web application didn’t generate any issues.

Steps to reproduce:

  1. Create a new content database (to keep everything we’re going to test out of the way).
  2. Create a new site collection in the new database adding site collection administrators as normal.
  3. Add a domain group to the list of site collection administrators.
  4. Run the Test-SPContentDatabase cmdlet against the new database.
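For reference, the above can be scripted in the SharePoint Management Shell along the following lines; the web application URL, database name, site URL and accounts are placeholders:

# Create a new content database to keep the test isolated
New-SPContentDatabase -Name "SP_Content_Test" -WebApplication "https://webapp.contoso.com"

# Create a site collection with the usual primary and secondary site collection administrators
New-SPSite -Url "https://webapp.contoso.com/sites/test" `
    -ContentDatabase "SP_Content_Test" `
    -OwnerAlias "CONTOSO\admin1" -SecondaryOwnerAlias "CONTOSO\admin2" `
    -Template "STS#0" -Name "Repro Site"

# Add a domain group as an additional site collection administrator
$web = Get-SPWeb "https://webapp.contoso.com/sites/test"
$user = $web.EnsureUser("CONTOSO\SharePoint Admins")
$user.IsSiteAdmin = $true
$user.Update()
$web.Dispose()

# The test now reports the classic/claims message even though the web application uses claims
Test-SPContentDatabase -Name "SP_Content_Test" -WebApplication "https://webapp.contoso.com"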

Setting Enroll Permissions on ADCS Certificate Template using DSC

As part of the work I have been doing around generating and managing lab environments using Lability and DSC, one of the things I needed to do was change the permissions on a certificate template within a DSC configuration. Previously, when deploying to Azure, I used the PSPKI PowerShell modules within code executed by the Custom Script extension. I was very focused on sticking with DSC this time, which ruled out PSPKI. Whilst there is a DSC module available to configure Certificate Services itself, this does not extend to managing Certificate Templates.

Nobody seemed to have done exactly this before. I used the following links as references in creating the code:

Get Effective template permissions with PowerShell by Vadims Podans

Duplicate AD Object Without Active Directory PS Tools

Add Object Specific ACEs using Active Directory PowerShell

Using Scripts to Manage Active Directory Security

The script finds the WebServer template and grants the Enroll extended permission to the Domain Computers AD group. This allows me to use xCertificate in the DSC configuration of domain member servers to request new certificates using the WebServer template.

Here is the code I include in my DSC configuration. $DomainCreds is a PSCredential object for the domain admin (I create the AD domain in an earlier step using xActiveDirectory).

#Enable Enroll on WebServer certificate template
Script EnableWebServerEnroll {
    DependsOn = "[xAdcsCertificationAuthority]CertAuth"
    PsDscRunAsCredential = $DomainCreds
    GetScript = {
        return @{ 'Result' = $true }
    }
    TestScript = {
        #Find the WebServer template in AD and check whether Domain Computers already has the Enroll extended right
        $filter = "(cn=WebServer)"
        $ConfigContext = ([ADSI]"LDAP://RootDSE").configurationNamingContext
        $ConfigContext = "CN=Certificate Templates,CN=Public Key Services,CN=Services,$ConfigContext"
        $ds = New-Object System.DirectoryServices.DirectorySearcher([ADSI]"LDAP://$ConfigContext",$filter)
        $Template = $ds.FindOne().GetDirectoryEntry()
        if ($Template -ne $null) {
            $objUser = New-Object System.Security.Principal.NTAccount("Domain Computers")
            # The following object specific ACE is to grant Enroll
            $objectGuid = New-Object Guid 0e10c968-78fb-11d2-90d4-00c04f79dc55
            ForEach ($AccessRule in $Template.ObjectSecurity.Access) {
                If ($AccessRule.ObjectType.ToString() -eq $objectGuid) {
                    If ($AccessRule.IdentityReference -like "*$($objUser.Value)") {
                        Write-Verbose "TestScript: WebServer Template Enroll permission for Domain Computers exists. Returning True"
                        return $true
                    }
                }
            }
        }
        return $false
    }
    SetScript = {
        #Find the WebServer template in AD and grant the Enroll extended right to the Domain Computers group
        $filter = "(cn=WebServer)"
        $ConfigContext = ([ADSI]"LDAP://RootDSE").configurationNamingContext
        $ConfigContext = "CN=Certificate Templates,CN=Public Key Services,CN=Services,$ConfigContext"
        $ds = New-Object System.DirectoryServices.DirectorySearcher([ADSI]"LDAP://$ConfigContext",$filter)
        $Template = $ds.FindOne().GetDirectoryEntry()
        if ($Template -ne $null) {
            $objUser = New-Object System.Security.Principal.NTAccount("Domain Computers")
            # The following object specific ACE is to grant Enroll
            $objectGuid = New-Object Guid 0e10c968-78fb-11d2-90d4-00c04f79dc55
            $ADRight = [System.DirectoryServices.ActiveDirectoryRights]"ExtendedRight"
            $ACEType = [System.Security.AccessControl.AccessControlType]"Allow"
            $ACE = New-Object System.DirectoryServices.ActiveDirectoryAccessRule -ArgumentList $objUser,$ADRight,$ACEType,$objectGuid
            $Template.ObjectSecurity.AddAccessRule($ACE)
            $Template.CommitChanges()
            Write-Verbose "SetScript: Completed WebServer additional permission"
        }
    }
}
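With the Enroll permission in place, a member server’s configuration can then request a certificate from the template. A minimal sketch using the xCertReq resource from the xCertificate module might look like this (the subject, CA names and the DependsOn resource name are placeholders for values from your own configuration):

xCertReq WebServerCert {
    Subject             = "web01.lab.local"
    CAServerFQDN        = "ca01.lab.local"
    CARootName          = "lab-CA"
    CertificateTemplate = "WebServer"
    AutoRenew           = $true
    Credential          = $DomainCreds
    DependsOn           = "[xComputer]DomainJoin"   # placeholder: just needs to run after the domain join
}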

Running Pester PowerShell tests in the VSTS hosted build service

Updated 22 Mar 2016: This task is now available in the VSTS Marketplace.

If you are using Pester to unit test your PowerShell code then there is a good chance you will want to include it in your automated build process. To do this, you need to get Pester installed on your build machine. The usual options would be

If you own the build agent VM then any of these options are good; you can even write the NuGet restore into your build process itself. However, there is a problem: the first two options need administrative access as they put the Pester module in the $PSModules folder (under ‘Program Files’), so they can’t be used on VSTS’s hosted build system, where you are not an administrator.

So this means you are left with copying the module (and associated functions folder) to some local working folder and running it manually; but do you really want to have to store the Pester module in your source repo?

My solution was to write a vNext build task to deploy the Pester files and run the Pester tests.

image_thumb[12]

The task takes two parameters

  • The root folder to look for test scripts with the naming convention *.tests.ps1. Defaults to $(Build.SourcesDirectory)\*
  • The results file name, defaults to $(Build.SourcesDirectory)\Test-Pester.XML
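Under the hood the task does something along the following lines (a simplified sketch; the real implementation is in the vNextBuild repo):

param(
    [string]$scriptFolder,   # root folder to search for *.tests.ps1 files
    [string]$resultsFile     # path for the NUnit-format results file
)

# Import the copy of Pester shipped alongside the task rather than relying on an installed module
Import-Module "$PSScriptRoot\Pester\Pester.psd1"

# Run the tests, writing NUnit-format results, and fail the task if any test fails
$result = Invoke-Pester $scriptFolder -OutputFile $resultsFile -OutputFormat NUnitXml -PassThru
if ($result.FailedCount -gt 0) {
    throw "$($result.FailedCount) Pester test(s) failed."
}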

The Pester task does not itself upload the test results; it just throws an error if any tests fail. It relies on the standard test results upload task. Add this task and set

  • it to look for nUnit format files
  • it already defaults to the correct file name pattern.
  • IMPORTANT: As the Pester task will stop the build on an error you need to set the ‘Always run’ to make sure the results are published.

image_thumb[11]

Once all this is added to your build you can see your Pester test results in the build summary

image_thumb[10]

image_thumb[14]

You can find the task in my vNextBuild repo