Just released a new Azure Pipelines Extension to update Git based WIKIs

I have just released a new Azure DevOps Pipelines extension to update a page in a Git based WIKI.

It has been tested against:

  • Azure DevOps WIKI – running as the build agent (so the same Team Project)
  • Azure DevOps WIKI – using provided credentials (so any Team Project)
  • GitHub – using provided credentials

It takes a string (markdown) input and writes it to a new page, or updates the page if it already exists. It is designed to be used with my Generate Release Notes Extension, but you will no doubt find other uses.
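Under the covers the job is really just Git plumbing: clone the wiki’s backing repo, write the markdown file for the page, commit and push. The extension wraps all of that up for you, but a minimal Python sketch of the same idea (purely illustrative – the URL, page name and identity below are placeholders, and this is not the extension’s actual code) looks roughly like this:

import os
import subprocess
import tempfile

def update_wiki_page(clone_url, page_name, markdown):
    # Clone the wiki's backing Git repo into a temporary folder
    workdir = tempfile.mkdtemp()
    subprocess.run(["git", "clone", clone_url, workdir], check=True)

    # Write (or overwrite) the page file - this creates the page if it does not exist
    with open(os.path.join(workdir, page_name + ".md"), "w", encoding="utf-8") as f:
        f.write(markdown)

    # Commit and push the change back to the wiki
    subprocess.run(["git", "-C", workdir, "add", "."], check=True)
    subprocess.run(["git", "-C", workdir,
                    "-c", "user.name=Wiki Updater",
                    "-c", "user.email=wiki-updater@example.com",
                    "commit", "-m", "Update " + page_name], check=True)
    subprocess.run(["git", "-C", workdir, "push"], check=True)

# Example usage with a placeholder clone URL (a PAT or other credential would normally be supplied)
# update_wiki_page("https://dev.azure.com/org/project/_git/project.wiki",
#                  "Release-Notes", "# Release 1.2.3\n\nGenerated notes go here")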

SharePoint 2010 Integrations with TFS 2017.3

An interesting aspect of a TFS upgrade I found myself in recently was to do with SharePoint integrations with Team Foundation Server. As a bit of a preamble: back in the day, when TFS was a bit more primitive, it didn’t have a rich dashboard experience like it does today, and so leaned on its distant cousin SharePoint for Project Portal support. As a consequence, a lot of my customers have significant document repositories in SharePoint Foundation instances installed as part of their Team Foundation Server installations. This in itself isn’t an issue, as you can decouple a SharePoint server from TFS and have the two operate independently of one another, but some users will miss the “Navigate to Project Portal” button from their project dashboard.

I found an easy answer to this is to use the Markdown widget to give them the link they want/need so they can still access their project documentation, but of course the work item and report add-ins will cease to function. Encouraging users to adopt the newer TFS/Azure DevOps dashboards is key here.

But enough waffling. A technical conundrum faced in this recent upgrade that I thought worth mentioning was around the SharePoint integrations themselves. The customer was migrating from a 2012.4 server to a 2017.3 server (as tempting as 2018.3 was, IT governance in their organization meant SQL 2016 onwards was a no-no). Their old TFS server was cohabiting with a SharePoint Foundation 2010 server which was their BAs’ document repository and point of entry for having a look at the state of play for each Team Project. The server had two Team Project Collections for two different teams, who were getting two shiny new TFS servers.

One team was going to the brand new world and still needed the SharePoint integrations. The other was going to the new world with no SharePoint (huzzah!). Initially the customer had wanted to move the SharePoint server to one of the new servers, and thus be able to decommission the old Windows Server 2008R2 server it was living on.

This proved to be problematic: the new server was on Windows Server 2016, and even if we had upgraded SharePoint to the last version that supported TFS integration (SharePoint 2013), 2013 is too old and creaky to run on Windows Server 2016. So we had to leave the SharePoint server where it was. This then presented a new problem. According to the official documentation, to point the SharePoint 2010 server at TFS 2017 I would have had to install the 2010 Connectivity Extensions which shipped with the version of TFS I was using, so that would be TFS 2017’s SharePoint 2010 extensions. But we already had a set of SharePoint 2010 extensions installed on that server: specifically, TFS 2012’s SharePoint 2010 extensions.

To install 2017’s version of the extensions I’d have had to uninstall or upgrade TFS 2012.4, since two versions of TFS don’t really like cohabiting on the same server and the newer version tends to try (helpfully, I should stress) to upgrade the older version. The customer didn’t like the idea of their fall-back/mothballed server being rendered unusable if they needed to spin it up again. This left me in something of a pickle.

So, on a hunch, I figured why not go off the beaten path and ignore the official documentation (something I don’t make a habit of), and suggested to the customer “let’s just point TFS 2012’s SharePoint 2010 extensions at TFS 2017 and see what happens!”. So I reconfigured the SharePoint 2010 extensions in the TFS 2017 Admin Console and guess what? Worked like a charm. The SharePoint 2010 Project Portals quite happily talked to the TFS 2017 server with no hiccups and no loss of functionality.

Were I to make an educated guess, I’d say TFS 2017’s SharePoint 2010 extensions are the same as TFS 2012’s but with a new sticker on the front, or have only minor changes.

TFS 2018.3 Upgrade–Where’s my Code Content in the Code Hub?

Firstly, it’s been a while (about 4 years or so) since I last blogged, so there’s going to be a bit of a gap between what I blog about now and what I used to blog about. Why the big gap, and what’s been happening in between, I might cover in a future post, but for now I thought I’d share a recent war story of a TFS upgrade gone wrong, which had a pretty arcane error.

The client had previously been using an on-premises TFS 2012.4 server and had expressed an interest in upgrading to a newer version, TFS 2018.3 to be precise.

Naturally our first recommendation was to move to Azure DevOps (formerly VSTS, VSO), but this wasn’t a good fit for them at that point in time: issues around data sovereignty and all that, with Brexit generally putting the spook into everyone in the UK who had been eyeing the European data centres.

Nonetheless, we built some future-proofing into the new TFS server we built and configured for them, chiefly using SQL Server 2017 to mitigate any problems with them updating to Azure DevOps Server 2019/2020, which is due for release before the end of the year, or just after the turn of the year if the previous release cadence of TFS is any indicator.

We performed a dry-run upgrade installation and then a production run. The client, I suspect, brushed through their dry-run testing a little too quickly and failed to notice an issue which appeared in the production run.


There was no file content in the Code Hub.



Opening the editor also just showed a blank editing pane. Examining the page load using web debugging tools showed us the following JavaScript error:


Script error for “BuiltInExtensions/Scripts/TFS.Extension”
http://requirejs.org/docs/errors.html#scripterror

We also saw certain page assets failing to load in the network trace.


Editor.main.css looked to be quite important given the context of the page we were in. It was also noted in the network trace that we had quite a number of 401s, and many of the page assets were not displaying correctly in TFS (like the Management/Admin cog, the TFS logo in the top right, and the folder and branch icons in the Code Hub source control navigator). We were stumped at first; a support call with Microsoft in the end illuminated us to the issue. The client had a group policy setting which prevented membership of a role assignment in Local Policies from being modified.


When adding the IIS feature to Windows Server, the local user group IIS_IUSRS normally gets added to this role. In the client’s case, because of this group policy setting which prevented role assignments from being made, this had not occurred. No error had been raised during the feature enablement, and so no one knew anything had gone amiss when setting up the server.

This local user group contains (as I understand it) the application pool identities created for application pools in IIS. TFS’s app pool needs this impersonation policy to load certain page assets as the currently signed-in user. Some group policy changes later, we were able to add the local group to the role assignment by hand and resolve the issue (combined with an iisreset command). It’s been explained to me that this used to be a more common error back in the days of TFS 2010 and 2012 but is something of a rarity these days, hence our lack of luck with any Google-Fu or other inquiries we made into the error.

Interestingly, a week later I was performing another TFS upgrade for a different client. They were going to TFS 2017.3, and the Code Hub, Extension Management, and Dashboards were affected by the same error; fortunately recent experience helped us resolve it quickly.


DDD North is Coming

I am super happy to announce that DDD North is set to appear on February the 9th in the Sunny Northern City of Hull.

The amazing people at Hull University have arranged the venue, and we will shortly be announcing the call for sessions and dates for ticket availability.

We had hoped to have DDD this side of Christmas, but with such a great opportunity available we felt we could wait a bit.

I look forward to seeing a lot of old and new faces at the next DDD.

b.


Azure DevOps Services & Server Alerts DSL – an alternative to TFS Aggregator?

Whilst listening to a recent Radio TFS episode, I heard it mentioned that TFS Aggregator uses the C# SOAP-based Azure DevOps APIs, and hence needs a major re-write as these APIs are being deprecated.

Did you know that there was a REST API alternative to TFS Aggregator?

My Azure DevOps Services & Server Alerts DSL is out there, and has been for a while, but I don’t think it is used by many people. It aims to do the same as TFS Aggregator, but is based around Python scripting.

However, I do have to say it is more limited in flexibility, as it has only been developed for my (and a few of my clients’) needs, but it’s an alternative that is based on the REST APIs.

Scripts are of the following form; this one sets a work item to ‘Done’ when all its child work items are done:

import sys

# The DSL passes two arguments: the event type and the unique ID of the work item.
# GetWorkItem, GetParentWorkItem, GetChildWorkItems, UpdateWorkItem, LogInfoMessage,
# LogErrorMessage and SendEmail are all provided by the alerts DSL host.
if sys.argv[0] == "workitem.updated":
    wi = GetWorkItem(int(sys.argv[1]))
    parentwi = GetParentWorkItem(wi)
    if parentwi is None:
        LogInfoMessage("Work item '" + str(wi.id) + "' has no parent")
    else:
        LogInfoMessage("Work item '" + str(wi.id) + "' has parent '" + str(parentwi.id) + "'")

        # Find any child work items that are not yet 'Done'
        results = [c for c in GetChildWorkItems(parentwi) if c["fields"]["System.State"] != "Done"]
        if len(results) == 0:
            LogInfoMessage("All child work items are 'Done'")
            parentwi["fields"]["System.State"] = "Done"
            UpdateWorkItem(parentwi)
            msg = "Work item '" + str(parentwi.id) + "' has been set as 'Done' as all its child work items are done"
            SendEmail("richard@blackmarble.co.uk", "Work item '" + str(parentwi.id) + "' has been updated", msg)
            LogInfoMessage(msg)
        else:
            LogInfoMessage("Not all child work items are 'Done'")
else:
    LogErrorMessage("Was not expecting to get here")
    LogErrorMessage(sys.argv)

I have recently done a fairly major update to the project. The key changes are:

  • Rename of project, repo, and namespaces to reflect Azure DevOps (the namespace change is a breaking change for existing users)
  • The script that is run can now be selected by:
    • A fixed file name for the web instance running the service
    • The event type sent to the service
    • The subscription ID, thus allowing many scripts (new)
  • A single instance of the web site running the events processor can now handle calls from many Azure DevOps instances.
  • Improved installation process on Azure (well, at least I tried to make the documentation clearer and sort out a couple of MSDeploy issues)

Full details of the project can be seen on the solution’s WIKI; maybe you will find it of use. Let me know if the documentation is good enough.

YAML documentation for my Azure Pipeline Tasks (and how I generated it)

There is a general move in Azure DevOps Pipelines towards using YAML, as opposed to the designer, to define your pipelines. This is particularly enforced when using them via the new GitHub Marketplace Azure Pipelines method, where YAML appears to be the only option.

This has shown up a hole in my Pipeline Tasks documentation: I had nothing on YAML!

So I have added a YAML usage page for each set of tasks in each of my extensions, e.g. the file utilities tasks.

Now, like most developers, I am lazy. I was not going to type all that information, so I wrote a script to generate the markdown from the respective task.json files in the repo. This script will need some work for others to use, as it relies on some special handling due to quirks of my directory structure, but I hope it will be of use to others.
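To give a feel for what the script does, here is a stripped-down Python sketch of the idea (not the actual script; the folder layout and output format here are simplified assumptions):

import glob
import json
import os

def yaml_usage_markdown(extension_dir):
    # Build a markdown page containing a YAML usage snippet for every task found
    lines = ["# YAML usage", ""]
    for task_json in glob.glob(os.path.join(extension_dir, "**", "task.json"), recursive=True):
        with open(task_json, encoding="utf-8-sig") as f:   # task.json files often carry a BOM
            task = json.load(f)
        lines.append("## " + task.get("friendlyName", task["name"]))
        lines.append("```yaml")
        lines.append(f"- task: {task['name']}@{task['version']['Major']}")
        lines.append("  inputs:")
        for inp in task.get("inputs", []):
            # Show each input with its default value, using the label as a hint
            lines.append(f"    {inp['name']}: '{inp.get('defaultValue', '')}'  # {inp.get('label', '')}")
        lines.append("```")
        lines.append("")
    return "\n".join(lines)

# Example: generate the page for a single extension folder (path is illustrative)
# print(yaml_usage_markdown("Extensions/FileUtilities"))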

Microsoft post root cause analysis on recent Azure DevOps Issues

Azure DevOps has had some serious issues over the past couple of weeks with availability here in Europe.

A really good, open and detailed root cause analysis has just been posted by the Azure DevOps team at Microsoft. It also covers the mitigations they are putting in place to make sure the same issues do not occur again.

We all have to remember that the cloud is not magic. Cloud service providers will have problems like any on-premises service, but trying to hide them does nothing to build confidence. So I for one applaud posts like this; I just wish all cloud service providers were as open when problems occur.

Using Paths in PR Triggers on an Azure DevOps Pipelines Builds

When I started creating OSS extensions for Azure DevOps Pipelines (starting on TFSPreview, then VSO, then VSTS and now named Azure DevOps) I made the mistake of putting all my extensions in a single GitHub repo. I thought this would make life easier; I was wrong, it should have been a repo per extension.

I have considered splitting the GitHub repo, but as a number of people have forked it, over 100 at the last count, I did not want to start a chain of chaos for loads of people.

This initial choice has meant that until very recently I could not use the Pull Request triggers in Azure DevOps Pipelines against my GitHub repo. This was because all builds associated with the repo triggered on any extension PR. So, I had to trigger builds manually, providing the branch name by hand. A bit of a pain, and prone to error.

I am pleased to say that with the roll-out of Sprint 140 we now get the option to add a path filter to PR triggers on builds linked to a GitHub repo; something we have had for Azure DevOps hosted Git repos since Sprint 126.

So now my release process is improved. If I add a path filter as shown below, my build and hence release process trigger on a PR just as I need.

[Screenshot: the PR trigger path filter settings]

It is just a shame that the GitHub PR only checks the build, not the whole release, before saying all is OK. I hope we see linking to complete Azure DevOps Pipelines in the future.

Registration open for free Black Marble events on modern process adoption using the cloud

Registration for the new season of Black Marble events has just opened. If you can make it to Yorkshire, why not come to an event (or two)?

If you are stuck in the grim south, why not look out for us at Future Decoded in London at the end of the month?

Deploying the ASDK for effective development use

Microsoft Azure Stack is a truly unique beast in terms of the capabilities it can bring to an organisation, and the efficiencies it can bring to a project that spans Public Cloud and on-premises infrastructure through its consistency with public Azure.

We’ve been using Stack in anger for a customer project for a year now and have learned several things about development and testing, and how to configure the ASDK to be an effective tool to support the project. This post will summarise those learnings and how I deploy the ASDK so you may mirror our approach for your own projects.

Learning 1: Cloud-consistent, but not quite

Azure Stack is great in that you use the exact same technologies to develop and deploy as its big brother. However, there are some nuances that we’ve hit. Stack lags behind Azure in API versions and provides a subset of Resource Providers. Whilst it is possible to configure your Azure subscription to the same API versions as Stack using policies, that’s not a guarantee of compatibility. In our project a partner org has been using Azure with those policies applied, but they still managed to update the language-specific SDKs for Storage beyond those supported by Stack. You need to be very watchful about version support, and our experience is that there is no replacement for using Stack as part of your pre-production testing effort.

Learning 2: Performance may vary

This one is obvious, if you think about it. Stack is built on different infrastructure than Public Azure – we don’t get things like FPGA-accelerated networking, for example. That means you need to performance test your stuff on Stack, or you may be upset that it doesn’t meet the stats you got from Azure. Don’t get me wrong – we’ve put some very big load on both single-box ASDK and multi-node MAS during our project, but stuff like VM provisioning times, network performance and storage read/write times are different from production Azure, and if your solution relies on this stuff you need to load test on your target environment.

Learning 3: Identity, identity, identity

Stack can be configured in connected or disconnected modes, and that is a big, fundamental difference you need to be aware of. In connected mode you use Azure AD for a single source of identity across your Stack and Azure subscriptions. With disconnected mode you are using ADFS to connect Stack to an Active Directory. The key difference is if you are using Service Principals – in disconnected mode you can only do certificate authentication. This is supported on Azure but is not the default. Our approach has been (since our project is disconnected) to do all our dev in an ASDK for absolute confidence of compatibility, but with careful governance you could use public Azure. The other difference is that with the ASDK you create a service principal using PowerShell and the certificate gets created at that point for you to extract; with public Azure you need to create the cert first and provide it when creating the service principal.
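To make the certificate point a little more concrete, here is a minimal, illustrative sketch of authenticating as a certificate-backed service principal from Python using the azure-identity library. All the IDs and paths are placeholders, it assumes the Azure AD (connected) case, and an ADFS-backed disconnected Stack would need the Stack-specific authority and resource endpoints instead:

from azure.identity import CertificateCredential

# Placeholder values - substitute your own tenant, app registration and exported certificate
credential = CertificateCredential(
    tenant_id="00000000-0000-0000-0000-000000000000",
    client_id="11111111-1111-1111-1111-111111111111",
    certificate_path="sp-cert.pem",  # the certificate (with private key) tied to the service principal
)

# Ask for a token for the ARM endpoint; on Stack you would use its own management endpoint
token = credential.get_token("https://management.azure.com/.default")
print("Token acquired, expires at", token.expires_on)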

Learning 4: ‘D’ is for Demo, not Dev; at least out-of-the-box

The ASDK is a wonderful thing. It’s easy to download and deploy a single-node version of Stack for testing. However, the installer delivers the system in a bubble: to use it you have to RDP into the host to access the portal and mess around with VPNs. Connecting to your CI/CD tool of choice (we use VSTS) is hard, verging on impossible, and at the very least active use for development is impractical.

You also have to contend with the fact that the ASDK is built on evaluation media, and that you can’t upgrade it from version to version. I have no argument as to why this is – you shouldn’t be able to use the dev kit as a cheap way of getting Stack into your prod environment – but it means that if you want to use the ASDK for development you need to roll your sleeves up a bit… More on that in a moment.

Learning 5: Matt McSpirit Rocks!

For a long time I kept thinking that I should collect the ragtag bunch of PowerShell I used to deploy our ASDK each month into something resembling a usable solution and publish it. Then Matt McSpirit released ConfigASDK and I no longer needed to! If you use ASDK at all, then you need Matt’s Script because it makes post-install configuration an absolute breeze!

Effective Deployment 1: Hacking the Installer

We now have more than one ASDK and I need to rebuild each one pretty much monthly. That means streamlining the process. To do that, I convert the downloaded VHD to a WIM and import it into our Config Manager deployment so I can use PXE to easily reimage the servers. Once that’s done I can clear down the storage spaces left over from the previous install and rebuild. I need the ASDKs to be on our network, so I assign a /16 network range to each Stack as well as assigning an IP on our main subnet to the host. I also modify the deployment scripts so I can use a custom region and external DNS suffix, which allows us to access the ASDKs over DirectAccess – handy for when we’re working on site with our customer.

To do all this I loosely follow the documentation to deploy using PowerShell.

Reconfigure Network Ranges

The networking for the ASDK is all configured from the OneNodeCustomerConfigTemplate.xml file that is in C:\CloudDeployment\Configuration. I simply open that file and replace all occurrences of 192.168. with my own /16 IPv4 range (e.g. 10.1.). That will give you an ASDK with several address ranges and, importantly, a full /24 block of IP addresses for services (storage endpoints, Public IP Addresses, etc.) that can sit on your network.
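Since it really is a straight text substitution, that step is easy to script. A small illustrative Python sketch (the path follows the ASDK layout above; the target prefix is whatever /16 you have chosen):

# Swap the default 192.168. ranges in the ASDK one-node config template for our own /16
config_path = r"C:\CloudDeployment\Configuration\OneNodeCustomerConfigTemplate.xml"
new_prefix = "10.1."  # our /16 range, e.g. 10.1.0.0/16

with open(config_path, encoding="utf-8") as f:
    contents = f.read()

with open(config_path, "w", encoding="utf-8") as f:
    f.write(contents.replace("192.168.", new_prefix))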

You’ll need to configure your own network to publish the route into that /16 address range, with an IP address on your own network as the gateway address – we’ll assign this to the VM in Stack that does the networking during the installation.

Reconfigure Region and ExternalDomainSuffix

When you deploy Azure Stack it will create an AD called azurestack.local that the infrastructure VMs will join, including the host. You will also find that the portal, admin portal and services are published on a DNS domain of local.azurestack.external. In that domain, ‘local’ is the region and ‘azurestack.external’ is the external domain suffix. Understandably, for a dev/demo deployment these are not exposed to you as things you can change. However, changing them is actually pretty easy when you know where to look.

When you run the InstallAzureStackPOC.ps1 command, it actually calls the DeploySingleNode.ps1 script that is in C:\CloudDeployment\Setup, and if you open that script you’ll find that there are parameters called RegionName and ExternalDomainSuffix, and both these params have default values. Simply open that file in an editor and change them. Line 143 contains the $RegionName parameter with a default of ‘local’, so I change that to be the name of my ASDK host; line 130 contains the $ExternalDomainSuffix parameter with a default of ‘azurestack.external’, so I change that to stack.<my internal domain suffix>.
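This edit can be scripted too. The sketch below is a hedged illustration rather than anything official – the exact quoting of the defaults can differ between ASDK builds, so it matches on the parameter names rather than relying on the line numbers mentioned above:

import re

script_path = r"C:\CloudDeployment\Setup\DeploySingleNode.ps1"
region_name = "myhost"                    # I use the ASDK host name as the region
external_suffix = "stack.mydomain.local"  # my internal DNS suffix, prefixed with 'stack'

with open(script_path, encoding="utf-8") as f:
    script = f.read()

def set_default(text, param, value):
    # Replace whatever quoted default the parameter currently has with our own value
    pattern = r"(\$" + param + r"\s*=\s*)(['\"]).*?\2"
    return re.sub(pattern, r"\g<1>'" + value + "'", text, count=1)

script = set_default(script, "RegionName", region_name)
script = set_default(script, "ExternalDomainSuffix", external_suffix)

with open(script_path, "w", encoding="utf-8") as f:
    f.write(script)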

This approach means that I end up with all the ASDKs within stack.mydomain.local, with the name of the host being the region – that’s a good logical approach as far as I and my team are concerned!

Now, doing that isn’t quite all you need to do. Just like with the network routing requirement, you have to get the DNS requests to the Azure Stack DNS or nothing will work. If you follow my approach for the network address range, then the next step is to create some delegated subdomains in your internal DNS. We are using Windows Server and our internal DNS is linked to our AD, so the steps are as follows:

  1. Open the DNS console on your DC.
  2. Within your internal domain, create a HOST record for the Stack DNS. I normally call mine <host>stackdns.<mydomain> because I have more than one. Assuming you’ve done the networking changes like me, the IP address for that is <your /16 subnet>.200.67 (e.g. 10.1.200.67).
  3. Within your internal domain, create a subdomain called ‘stack’.
  4. Within that stack subdomain, right-click and select New Delegation. In the dialog that pops up, enter the name you specified as the RegionName and then, when asked for a DNS FQDN, enter the hostname you just registered.
  5. If you are using App Services you also need to create a delegation for their domain – <region>.cloudapp.<external domain suffix>. Simply create another subdomain within stack called cloudapp and then repeat the previous step to register the <region> delegation.
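Once the delegations are in place, a quick sanity check from a machine on your network saves head-scratching later. A tiny illustrative Python snippet (the host names follow my region/suffix scheme above and are placeholders for your own):

import socket

# These names assume the 'region = host name' and 'stack.<internal suffix>' scheme described above
names = [
    "adminportal.myhost.stack.mydomain.local",  # admin portal
    "portal.myhost.stack.mydomain.local",       # tenant portal
]

for name in names:
    try:
        print(name, "->", socket.gethostbyname(name))
    except socket.gaierror as err:
        print(name, "-> FAILED:", err)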

Once you’ve configured your internal network and DNS, then modified the config file and script, you can perform the installation. When doing so, make sure that in the parameters of InstallAzureStackPOC.ps1 you specify the NatIPv4Address parameter as the IP address you used in your network router configuration.

Effective Deployment 2: Removing the NAT

This bit hasn’t changed since the preview releases, and I follow the notes on the Azure Stack blog post. Once the ASDK install completes I simply run a short script containing the PowerShell at the end of that post to remove the NAT, at which point that VM sits on my internal network with the IP address I set in the deployment and have configured in my router, and traffic just flows.

Effective Deployment 3: ConfigASDK

Matt’s ConfigASDK script is excellent, and I use it to configure our ASDKs post-install. The script will deploy additional resource providers for you (SQL, MySQL and App Services), install handy utils on the host and pull down a bunch of marketplace offerings you’ll find helpful. For our project we are not using the SQL and MySQL RPs so I skip those, along with the host customisation part.

Now, Matt’s script is built with the Stack team and as such doesn’t currently support the hacks for region and external DNS. I have a fork of the repo in my GitHub where I have added the RegionName and ExternalDomainSuffix params to Matt’s script, but that may not be up to date so you should look at what I’ve done and proceed with caution. I had to put a bunch of code in to replace what were understandably-fixed URLs. However, once I’d done that all Matt’s code worked as expected.

Usual disclaimers apply – use my version of Matt’s script at your own risk. Make sure you understand the implications of what I’m describing here before you do this on your own network!

Post-Deployment Configuration Tips

Once you’ve got your ASDK up and running there are still a couple of things you should tweak for a happier time, particularly if you batter the thing as much as we do in terms of creation and deletion of resources and load testing.

Xrp VM RAM

Something you can do with the ASDK but not with MAS is get at the VMs that form the fabric of Stack. In an ASDK these are not resilient and are smaller in config than their production counterparts. Our experience is that one of them does not have enough RAM configured by default. The AzS-Xrp01 VM is created with 8192MB of RAM. We see its demand climb well beyond this in use and, as it does, you may see storage failure messages when trying to create accounts, access the storage APIs or use blades in the portal. I give it another 4GB – increasing it to 12288MB in total – and that seems to sort it. You don’t need to turn off the VM to do this – just go into Hyper-V Manager and increase the amount.

App Services ScaleSets

As of 1808 you can do this in the portal rather than through PowerShell. When App Services is installed there are a number of VM scale sets created that contain the VMs that host the App Hosting Plans. The thing is, the App Services team don’t know what you need, so they deploy with a single Shared worker tier VM and none in the other tiers. You need to go into the portal and change the instance count to an appropriate number for your project. I normally scale back Shared to zero and set the Small scale set to a reasonable number based on the project. I also set the Management scale set to two instances. I’ve found that makes for a more reliable system, again, because we tend to batter our ASDK during development.

Parting thoughts

Please remember that the approach I describe here is not the way the Stack team expect you to deploy an ASDK. You need to have a reasonable understanding of networking and DNS to know the implications of what I change in my approach to deployment. Also bear in mind that Stack releases monthly updates, so if you’re reading this in a year’s time stuff might have changed – be careful.

Azure Stack is a very powerful solution for a very specific set of customer requirements – it’s not designed to be something everybody should use and you need to remember that. If you do have requirements that fit, hopefully this post will help you configure your ASDK to be a useful component in your development approach.