DDD Submission Issues

Some people have reported issues submitting sessions for DDD. If you do have a problem, please tweet us a message and we will get in touch.

b.


DDD is coming up fast: things you might want to know

DDD is winging its way to Reading in June. We already have some great sessions submitted, but it would be great to get even more. If you have something to say, we would love to hear it.

We are also still looking for designer-oriented sessions.

If you are a new speaker, let us know: we are trying to get some speaker mentoring and presentation training organised before the event to help you shine on the day.

So if you are a new speaker and your session is accepted, we will contact you to see if you need any assistance.

I look forward to seeing you all in June.

b.

http://www.developerdeveloperdeveloper.com/



Major new release of my VSTS Cross Platform Extension to build Release Notes

Today I have released V2, a major new version of my VSTS Cross Platform Extension to build release notes. This new version is all down to the efforts of Greg Pakes, who has completely re-written the task to use newer VSTS APIs.

A minor issue is that this re-write has introduced a couple of breaking changes, as detailed below and on the project wiki:

  • OAuth script access has to be enabled on the agent running the task


  • There are minor changes in the template format, but for the better, as both TFVC and Git based releases now use a common template format. Samples can be found in the project repo.

Because of the breaking changes, we made the decision to ship both V1 and V2 of the task in the same extension package, so no one is forced to update unless they wish to. This is a technique I have not tried before, but it seems to work well in testing.

I hope people still find the task of use, and thanks again to Greg for all the work on the extension.

Backing up your TFVC and Git Source from VSTS

The Issue

Azure is a highly resilient service, and VSTS has excellent SLAs. However, a question that is often asked is ‘How do I back up my VSTS instance?’.

The simple answer is that you don’t. Microsoft handle keeping the instance up, patched and serviceable. Hence, there is no built-in means for you to get a local copy of all your source code, work items or CI/CD definitions, though there have been requests for such a service.

This can be an issue for some organisations, particularly for source control, where there can be a need to keep a private copy of source code for escrow, DR or similar purposes.

A Solution

To address this issue I decided to write a PowerShell script to download all the Git and TFVC source in a VSTS instance. The following tactics were used:

  • Using the REST API to download each project’s TFVC code as a ZIP file. The use of a ZIP file avoids any long file path issues, a common problem with larger Visual Studio solutions with complex names.
  • Clone each Git repo. I could have downloaded single Git branches as ZIP files via the API, as with TFVC, but this seemed a poorer solution given how important branches are in Git.

So that the process could be run on a regular basis I designed it to be run within a VSTS build. Again here I had choices:

  • Pass in a Personal Access Token (PAT) to provide the access rights to read the source to be backed up. This has the advantage that the script can be run inside or outside of a VSTS build. It also means that a single VSTS build can back up other VSTS instances, as long as it has a suitable PAT for access.
  • Use the System Access Token already available to the build agent. This makes the script very neat, and there is no PAT to expire, but it means the script only works within a VSTS build and can only back up the VSTS instance the build is running on.

I chose the former, so a single scheduled build could back up all my VSTS instances by running the script a number of times with different parameters.

To use this script you just pass in the following (a sketch of the approach is shown after the list):

  • The name of the instance to backup
  • A valid PAT for the named instance
  • The path to back up to, which can be a UNC share, assuming the build agent has rights to the location
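To make those tactics concrete, here is a minimal sketch of such a script. It is not the full script: the exact REST endpoints, query parameters and api-version values are assumptions based on the VSTS REST API documentation, and all error handling is omitted.

[sourcecode language='powershell']
# Sketch only: back up all TFVC and Git source in a VSTS instance.
# Endpoint formats and api-version values are assumptions - check the REST API docs.
param(
    [string]$instance,    # VSTS instance name, e.g. 'myaccount' for myaccount.visualstudio.com
    [string]$pat,         # a PAT with read access to code
    [string]$backupPath   # local folder or UNC share the agent can write to
)

# Build a basic auth header from the PAT
$auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
$headers = @{ Authorization = "Basic $auth" }

# Get every team project in the instance
$projects = (Invoke-RestMethod -Uri "https://$instance.visualstudio.com/_apis/projects?api-version=4.1" -Headers $headers).value

foreach ($project in $projects) {
    # Download the project's TFVC source as a single ZIP to avoid long path issues
    $tfvcUri = "https://$instance.visualstudio.com/$($project.name)/_apis/tfvc/items?path=$/$($project.name)&recursionLevel=full&`$format=zip&api-version=4.1"
    Invoke-WebRequest -Uri $tfvcUri -Headers $headers -OutFile (Join-Path $backupPath "$($project.name).zip")

    # Clone every Git repo in the project, so all branches are preserved
    $repoUri = "https://$instance.visualstudio.com/$($project.name)/_apis/git/repositories?api-version=4.1"
    $repos = (Invoke-RestMethod -Uri $repoUri -Headers $headers).value
    foreach ($repo in $repos) {
        # Embed the PAT in the clone URL so no interactive login is needed
        git clone "https://anything:$pat@$instance.visualstudio.com/$($project.name)/_git/$($repo.name)" "$backupPath\$($project.name)\$($repo.name)"
    }
}
[/sourcecode]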

What’s Next

The obvious next step is to convert the PowerShell script into a VSTS extension; at that point it would make sense to make it optional whether to use a provided PAT or the System Access Token.

I could also add code to allow a number of cycling backups to be kept, e.g. retaining only the last three backups.

These are maybe something for the future, but at this time packaging up a working script as an extension just for a single VSTS instance does not really seem a good return on investment.

Oops, I made that test VSTS extension public by mistake, what do I do now?

Whilst changing a CI/CD release pipeline recently, I updated what was previously a private version of a VSTS extension in the VSTS Marketplace with a version of the VSIX package that was set to be public.

Note, in my CI/CD process I have a private and a public version of each extension (set of tasks); the former is used for functional testing within the CD process, the latter is the one everyone can see.

So this meant I had two public versions of the same extension, which was confusing.

It turns out you can’t change a public extension back to private, either via the UI or by uploading a corrected VSIX. You also can’t delete any public extension that has ever been downloaded, and my previously private one had been downloaded once, by me, for testing.

So my only option was to un-publish the previously private extension, so that only the correct version was visible in the public marketplace.

This meant I also had to alter my CI/CD process to change the extensionID of my private extension so I could publish a new private version of the extension.
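For context, in a setup like this the private (test) copy is typically published with tfx-cli, overriding the manifest so it gets a different extension id and stays private. The command below is a hypothetical illustration of that idea (the id, account and token are placeholders), not the exact pipeline step I use.

[sourcecode language='powershell']
# Hypothetical example: publish the private test copy of an extension under a
# different id, with private visibility, shared only with a test account.
tfx extension publish --manifest-globs vss-extension.json `
    --override '{ "id": "my-tasks-private", "public": false }' `
    --share-with my-test-account `
    --token $env:MARKETPLACE_PAT
[/sourcecode]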

Luckily, as the GUIDs for the tasks within the extension did not change, my pipeline still worked once I had installed the new version of the mis-published extension in my test VSTS instance.

The only downside is that I am left with an un-published ‘dead’ version listed in my private view of the marketplace. This is not a problem, it just does not look ‘neat and tidy’.

System Center Data Protection Manager 2016 Delegated Administrator

I recently had a requirement to add a delegated administrator to DPM 2016. While it’s possible to configure self-service recovery, there doesn’t appear to be any way to configure another user to perform the delegated admin role, as there is in some other System Center products.

It is, however, possible to configure another user to be a delegated DPM admin if you’re willing to roll up your sleeves and get a little grubby with the config! Note that I’m fairly sure that doing this will impact support, but it’s easy to undo if required.

The problem:

  1. There’s nothing in the DPM interface that appears to allow configuration of a delegated admin.
  2. Adding a user as a local admin of the DPM server still doesn’t allow the user concerned to administer DPM. When trying to launch the DPM console, the following error is shown:
    Unable to connect to DPM server error
  3. Granting the user the ‘log on locally’ permission still requires that the user elevates when launching the DPM console, so realistically they should be made a local admin on the DPM server.

The solution:

  1. Grant the user local admin rights on the DPM server. I’d strongly suggest creating a dedicated admin account for the user rather than using their day-to-day account for this purpose.
  2. On the SQL instance that DPM uses, configure the following (a scripted alternative is sketched after this list):
    1. Create a new login for the account that will be used as a delegated admin.
    2. Right-click the new account and select ‘Properties’.
    3. Select the DPM database and in the database role membership section of the dialog, select the appropriate DPM-related permissions for the user. To give the user full admin permissions on DPM, select all of the ‘MSDPM…’ role checkboxes:
      DPM delegated admin SQL rights
    4. Click OK to close the dialog.
    5. Check that the user can a) log onto the DPM server and b) successfully launch the DPM admin console and administer the service.
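If you would rather script step 2 than click through SQL Server Management Studio, a sketch along the following lines should achieve the same result. The account, SQL instance and database names are placeholders for your own values, it needs the SqlServer PowerShell module for Invoke-Sqlcmd, and it finds the DPM roles by their ‘MSDPM’ prefix rather than hard-coding them.

[sourcecode language='powershell']
# Sketch only: grant a delegated admin account the DPM database roles.
# Placeholders - change these to match your environment.
$account     = 'DOMAIN\dpm-delegated-admin'
$sqlInstance = 'DPMSERVER\DPMINSTANCE'
$dpmDatabase = 'DPMDB'

# Create a Windows login for the account, if it does not already exist
Invoke-Sqlcmd -ServerInstance $sqlInstance -Query "IF NOT EXISTS (SELECT 1 FROM sys.server_principals WHERE name = '$account') CREATE LOGIN [$account] FROM WINDOWS;"

# Map the login to a user in the DPM database, if not already mapped
Invoke-Sqlcmd -ServerInstance $sqlInstance -Database $dpmDatabase -Query "IF NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = '$account') CREATE USER [$account] FOR LOGIN [$account];"

# Add the user to every DPM role (the roles whose names start with 'MSDPM')
$roles = Invoke-Sqlcmd -ServerInstance $sqlInstance -Database $dpmDatabase -Query "SELECT name FROM sys.database_principals WHERE type = 'R' AND name LIKE 'MSDPM%';"
foreach ($role in $roles) {
    Invoke-Sqlcmd -ServerInstance $sqlInstance -Database $dpmDatabase -Query "ALTER ROLE [$($role.name)] ADD MEMBER [$account];"
}
[/sourcecode]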

Using VSTS Gates to help improve my deployment pipeline of VSTS Extensions to the Visual Studio Marketplace

My existing VSTS CI/CD process has a problem: the deployment of a VSTS extension, from the moment it is uploaded to when its tasks are available to a build agent, is not instantaneous. The process can potentially take a few minutes to roll out. The problem this delay causes is a perfect candidate for using VSTS Release Gates: using a gate to make sure the expected version of a task is available to an agent before running the next stage of the CD pipeline, e.g. waiting after deploying a private build of an extension before trying to run functional tests.

The question is how to achieve this with the current VSTS gate options.

What did not work

My first thought was to use the Invoke HTTP REST API gate, calling the VSTS API https://<your vsts instance name>.visualstudio.com/_apis/distributedtask/tasks/<GUID of Task>. This API call returns a block of JSON containing details about the deployed task visible to the specified VSTS instance. In theory you can parse this data with a JSONPATH query in the gate's success criteria parameter to make sure the correct version of the task is deployed, e.g. eq($.value[?(@.name == "BuildRetensionTask")].contributionVersion, "1.2.3")

However, there is a problem. At this time the Invoke HTTP REST API gate does not support the == equality operator in its success criteria field. I understand this will be addressed in the future, but the fact it is currently missing blocks my current needs.

Next I thought I could write a custom VSTS gate. These are basically ‘run on server’ tasks with a suitably crafted JSON manifest. The problem here is that this type of task does not allow any code (Node.js or PowerShell) to be run. They only have a limited capability to invoke HTTP APIs or write messages to Service Bus, so I could not implement the code I needed to process the API response. Another dead end.

What did work

The answer, after a suggestion from the VSTS Release Management team at Microsoft, was to try the Azure Function gate.

To do this I created a new Azure Function. I did this using the Azure Portal, picking the consumption billing model, C#, and securing the function with a function key; basically the default options.

I then added the C# function code (stored in GitHub) to my newly created Azure Function. This function code takes:

  • The name of the VSTS instance
  • A personal access token (PAT) to access the VSTS instance
  • The GUID of the task to check for
  • And the version to check for

It then returns a JSON block, of the form { "Deployed": "true" } or { "Deployed": "false" }, based on whether the required task version can be found. If any of the parameters are invalid an API error is returned.

By passing in this set of arguments, my idea was that a single Azure Function could be used to check for the deployment of all my tasks.
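The function code itself is the C# linked above. Purely as an illustration of the check it performs, here is a rough PowerShell sketch of the same logic; the value and contributionVersion property names come from the JSONPATH example earlier, and the Deployed field matches the gate success criteria shown later, but everything else is an assumption rather than a copy of the real code.

[sourcecode language='powershell']
# Illustrative sketch of the version check, not the actual C# Azure Function.
param($instance, $pat, $taskGuid, $version)

$auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
$uri = "https://$instance.visualstudio.com/_apis/distributedtask/tasks/$taskGuid"
$response = Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Basic $auth" }

# True if any deployed copy of the task reports the required version
$deployed = @($response.value | Where-Object { $_.contributionVersion -eq $version }).Count -gt 0

# Return the shape the gate's success criteria expects, e.g. { "Deployed": "true" }
@{ Deployed = "$deployed".ToLower() } | ConvertTo-Json
[/sourcecode]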

Note: I do realise I could also create a release pipeline for the Azure Function, but I chose to just create it via the Azure Portal. I know this is not best practice, but this was just a proof of concept. As usual, the danger here is that this proof of concept might be one of those that is too useful and lives forever!

To use the Azure Function

Using the Azure Function is simple:

    • Add an Azure Function gate to a VSTS release
    • Set the URL parameter for the Azure Function. This value can be found in the Azure Portal. Note that you don’t need the Function Code query parameter in the URL, as this is provided via the next gate parameter. I chose to use a variable group variable for this parameter so it was easy to reuse between many CD pipelines
    • Set the Function Key parameter for the Azure Function; again you get this from the Azure Portal. This time I used a secure variable group variable
    • Set the Method parameter to POST
    • Set the Header content type as JSON
{
      "Content-Type": "application/json"
}
    • Set the Body to contain the details of the VSTS instance and task to check. This time I used a mixture of variable group variables, release-specific variables (the GUID) and environment build/release variables. The key here is that I got the version from the primary release artifact, $(BUILD.BUILDNUMBER), so the correct version of the task is tested for automatically
{
     "instance": "$(instance)",
     "pat": "$(pat)",
     "taskguid": "$(taskGuid)",
     "version": "$(BUILD.BUILDNUMBER)"
}
    • Finally, set the Advanced/Completion Event to ApiResponse with the success criteria of
    eq(root['Deployed'], 'true')

Once this was done I was able to use the Azure Function as a VSTS gate as required.


Summary

So I now have a gate that makes sure that for a given VSTS instance a task of a given version has been deployed.

If you need this functionality all you need to do is create your own Azure Function instance, drop in my code and configure the VSTS gate appropriately.

When the == equality operator becomes available for JSONPATH in the REST API gate I might consider swapping back to a basic REST call, as it is less complex to set up, but we shall see. The Azure Function model does appear to work well.

Renaming an In-Use Content Type in SharePoint Online

Design of SharePoint Content Types, in particular for SharePoint Online, is very important. Care must be taken to ensure that the design is appropriate for the environment, as changes made later can impose significant management overheads. In particular, if a Content Type is put to use (i.e. is assigned to a list/library), this can complicate changes made after initial deployment.

Some Content Type operations are simple, e.g. adding a column. This will work as expected, with the new column rippling all the way down to the in-use Content Types.

Renaming a Content Type potentially falls into the ‘more difficult’ category, in particular if it has been assigned to a list/library. This is due to the way that SharePoint handles this process, with the Content Type that is assigned to the list/library being a child of the Content Type published to the site collection.

I’d still strongly recommend using the Content Type Hub (a hidden site collection, available at /sites/contenttypehub) to centrally manage and publish content types. If a content type is renamed there and then republished, the content type in the content type gallery in each site collection will be renamed. However, if the content type is attached to a list/library, that copy is a child content type and will not be renamed, so you end up in the scenario where the gallery reflects the name change while the instance attached to the list/library does not.

Looking at the list of content types attached to a list/library, and clicking through on the content type that you wish to change, does allow you to change the content type from read-only to writeable. This then allows you to change the content type’s name; however, if you have lots of libraries and/or lots of content types to process, this gets laborious very quickly. PowerShell to the rescue again!

The following script is a sample that can be used to change the name of a content type that is attached to a set of lists/libraries:

[sourcecode language='powershell'  padlinenumbers='true']
$SiteUrl = "https://domain.sharepoint.com/teams/SiteCollection"  
$UserName = "Andy@o365domain.com"  
# Ask the user for the password
$Password = Read-Host -Prompt "Enter your password: " -AsSecureString

# List of lists/libraries to process
$libraries = @("Library1","Library2","Library3")

# Add references to the CSOM libraries
Add-Type -Path "C:<Path-to-CSOM-libraries>Microsoft.SharePoint.Client.dll" 
Add-Type -Path "C:<Path-to-CSOM-libraries>Microsoft.SharePoint.Client.Runtime.dll" 

# Connect
$spoCtx = New-Object Microsoft.SharePoint.Client.ClientContext($SiteUrl)  
$spoCredentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($Username, $Password)   
$spoCtx.Credentials = $spoCredentials

# Load the web context
$web = $spoCtx.web
$spoCtx.load($web)
$spoCtx.executeQuery()

# Process the lists/libraries
foreach ($lib in $libraries) {
    $list = $web.lists.getbytitle("$lib")
    $spoCtx.load($list)
    $spoCtx.executeQuery()

    # Load the content types attached to the list/library
    $CTs = $list.ContentTypes
    $spoCtx.load($CTs)
    $spoCtx.executeQuery()

    $IDToUse = ""

    Write-Host "Processing library $lib" -ForegroundColor Yellow
    foreach ($CT in $CTs) 
    { 
        Write-Host "-- " $CT.Name $Ct.Id
        if ($CT.Name -eq "Content Type To Change")
        {
            $IDToUse = $CT.Id
            Write-Host "Using this one..." -ForegroundColor Green
        }
    }

    # Skip this list/library if the content type was not found on it
    if ($IDToUse -eq "")
    {
        Write-Host "Content type not found on $lib, skipping" -ForegroundColor Red
        continue
    }

    # Grab a reference to the content type we want to change
    $CT = $list.ContentTypes.getbyid($IDToUse)
    $spoCtx.load($CT)
    $spoCtx.executeQuery()

    if ($CT -ne $null)
    {
        # Set the content type to be writeable to be able to update it
        Write-Host "Setting content type to ReadOnly = false" -ForegroundColor Green
        $CT.ReadOnly = $false
        $CT.Update($false)
        $spoCtx.load($CT)
        $spoCtx.executeQuery()

        # Modify the content type name
        Write-Host "Processing Content type..." -ForegroundColor Cyan
        $CT.Name = "Content Type That Has Been Changed"
        $CT.Update($false)
        $spoCtx.load($CT)
        $spoCtx.executeQuery()

        # Return the content type to read-only
        Write-Host "Setting content type to ReadOnly = true" -ForegroundColor Green
        $CT.ReadOnly = $true
        $CT.Update($false)
        $spoCtx.load($CT)
        $spoCtx.executeQuery()
    }
}
[/sourcecode]

Fixing a ‘git-lfs filter-process: git-lfs: command not found’ error in Visual Studio 2017

I am currently looking at the best way to migrate a large legacy codebase from TFVC to Git. There are a number of ways I could do this, as I have posted about before. Obviously, I have ruled out anything that tries to migrate history, as ‘that way hell lies’; if people need to see history they will be able to look at the archived TFVC instance. TFVC and Git are just too different in the way they work to make history migrations worth the effort, in my opinion.

So as part of this migration and re-structuring I am looking at using Git submodules and Git Large File Storage (LFS) to help divide the monolithic codebase into front-end, back-end and shared service modules, using LFS to manage the large media files used in integration test cases.

From the PowerShell command prompt, using Git 2.16.2, all my trials were successful; I could achieve what I wanted. However, when I tried accessing my trial repos using Visual Studio 2017, I saw issues.

Submodules

Firstly there are known limitations with Git submodules in Visual Studio Team Explorer. At this time you can clone a repo that has submodules, but you cannot manage the relationships between repos or commit to a submodule from inside Visual Studio.

This is unlike the Git command line, which allows actions to span a parent and child repo with a single command; Git just works it out if you pass the right parameters (see the examples below).
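For example (these are standard Git commands with a placeholder URL, not anything specific to my repos), a single command can clone a parent repo together with its submodules, pull submodule updates, or push across parent and submodules:

git clone --recurse-submodules https://myaccount.visualstudio.com/MyProject/_git/parent-repo
git submodule update --remote --merge
git push --recurse-submodules=on-demand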

There is a request on UserVoice to add these functions to Visual Studio; vote for it if you think it is important, I have.

Large File Storage

The big problem I had was with LFS, which has been supported in Visual Studio since 2015.2.

Again, from the command line operations were seamless. I just installed Git 2.16.2 via Chocolatey and got LFS support without installing anything else, so I was able to enable LFS support on a repo:

git lfs install
git lfs track '*.bin'
git add .gitattributes

and then manage standard and large (.bin) files without any problems.

However, when I tried to make use of this cloned LFS-enabled repo from inside Visual Studio, by staging a new large .bin file, I got the error ‘git-lfs filter-process: git-lfs: command not found’.


Reading around this error suggested that the separate git-lfs package needed to be installed. I did this, making sure that the path to git-lfs.exe (C:\Program Files\Git LFS) was in my PATH, but I still had the problem.

This is where I got stuck and hence needed to get some help from the Microsoft Visual Studio support team.

After a good deal of tracing they spotted the problem. The path to git-lfs.exe was at the end of my rather long PATH list. It seems Visual Studio was truncating this list of paths, so, as the error suggested, Visual Studio could not find git-lfs.exe.

It is unclear to me whether the command prompt simply did not suffer from this PATH length issue, or was using a different means to resolve the LFS feature. It should be noted that from the command line the LFS commands were available as soon as I installed Git 2.16.2; I did not have to add the separate Git LFS package.

So the fix was simple: move the entry for ‘C:\Program Files\Git LFS’ to the start of my PATH list, and everything worked in Visual Studio.
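If you hit the same thing, a couple of lines of PowerShell will show whether git-lfs.exe resolves at all, and where the Git LFS folder sits in your PATH:

[sourcecode language='powershell']
# Where, if anywhere, does git-lfs.exe resolve from?
Get-Command git-lfs.exe -ErrorAction SilentlyContinue | Select-Object Source

# List the PATH entries that mention Git LFS, to see how far down the list they are
($env:Path -split ';') | Select-String -SimpleMatch 'Git LFS'
[/sourcecode]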

It should be noted that I really need to look at whether I need everything in my somewhat long PATH list. It’s been too long since I re-paved my laptop; there are a lot of strange bits installed.

Thanks again to the Visual Studio Support team for getting me unblocked on this.