Over the Christmas break I migrated our on-premises TFS 2015 instance to VSTS. There were several reasons for the migration:
- We were blocked on moving to TFS 2017 as we could not easily upgrade our SQL cluster to SQL 2014
- We wanted to be on the latest and greatest features of VSTS/TFS
- We wanted to get away from having to perform on-premises updates every few months
To do the migration we used the public preview of the TFS to VSTS Migrator.
So what did we learn?
The actual import was fairly quick, around 3 hours for just short of 200 GB of TPC data. However, getting the data from our on-premises system up to Azure was much slower, constrained by the need to copy backups around our LAN and by our Internet bandwidth to get the files to Azure storage: a grand total of more like 16 hours. But remember, this was mostly time spent watching progress bars after running various commands, so I was free to enjoy the Christmas break; I was not a slave to a PC.
This all makes it sound easy, and to be honest the actual production migration was, but this was only due to doing the hard work prior to the Christmas break during the dry run phase. During the dry run we:
- Addressed the TFS customisations that needed to be altered/removed
- Sorted the AD > AAD sync mappings for user accounts
- Worked out the backup/restore/copy process to get the TPC data to somewhere VSTS could import it from
- Did the actual dry run migration
- Tested the dry run instance after the migration to get a list of what else needed addressing and anything our staff would have to do to access the new VSTS instance
- Documented (and scripted where possible) all the steps
- Made sure we had fall back processes in place if the migration failed.
And arguably most importantly, we discovered how long each step would take, so we could set expectations. This was the prime reason for picking the Christmas break: we knew we could have a number of days where there should be no TFS activity (we close for an extended period), hence de-risking the process to a great degree. We knew we could get the migration done over a weekend, but a week's break was easier and more relaxed, so Christmas seemed a timely choice.
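The backup/copy step mentioned above can be sketched in code. This is a minimal Python helper, assuming the Windows AzCopy (v5 style) command-line switches; the folder, storage account and container names are placeholders, not our real values:

```python
def azcopy_upload_cmd(local_folder, storage_account, container, sas_token):
    """Build the AzCopy (Windows v5 style) command line used to copy the
    TPC backup files up to Azure blob storage, ready for the import
    service. All the names passed in here are placeholders."""
    dest = "https://{0}.blob.core.windows.net/{1}".format(storage_account, container)
    return [
        "AzCopy",
        "/Source:{0}".format(local_folder),
        "/Dest:{0}".format(dest),
        "/DestSAS:{0}".format(sas_token),
        "/S",  # recurse into sub-folders
    ]

# The command list can then be run with, for example, subprocess.call(...)
```

Scripting even trivial steps like this was what made the production run repeatable after the dry run.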
You might ask the question ‘what did not migrate?’
Well a better question might be ’what needed changing due to the migration?’
It was not so much that items did not migrate, more that some things are handled a bit differently in VSTS. The areas we needed to address were:
- User Licensing – we needed to make sure our users’ MSDN subscriptions were mapped to their work IDs.
- Build/Release Licensing – we needed to decide how many private build agents we really needed (not just spin up more on a whim as we had done with our on-premises TFS), as they cost money on VSTS
- Release pipelines – these don’t migrate as of the time of writing, but I wrote a quick tool to get 95% of their content moved. After using this tool we did then need to edit the pipelines, re-entering ‘secrets’ (which are not exported) before retesting them
Those were all the issues we had to address; everything else seemed to be fine, with users just changing the URL they connected to from on-premises to VSTS.
So if you think migrating your TFS to VSTS seems like a good idea, why not have a look at the blog post and video on the Microsoft ALM Blog about the migration tool. Remember that this is a Microsoft Gold DevOps Partner led process, so please get in touch with us at Black Marble, or me directly via this blog, if you want a chat about migrations or the other DevOps services we offer.
A while ago I created the TFSAlertsDSL project to provide a means to script responses to TFS Alert SOAP messages using Python. The SOAP Alert technology has been overtaken by time with the move to Service Hooks.
So I have taken the time to move this project over to the newer technology, which is supported both on TFS 2015 (onwards) and VSTS. I also took the chance to move from CodePlex to GitHub and renamed the project to VSTSServiceHookDsl.
Note: If you need the older SOAP alert based model stick with the project on CodePlex, I don’t intend to update it, but all the source is there if you need it.
What I learnt in the migration
Supporting WCF and Service Hooks
I had intended to keep support for both SOAP Alerts and Service Hooks in the new project, but I quickly realised there was little point. You cannot even register SOAP based alerts via the UI anymore, and supporting them added a lot of complexity. So I decided to remove all the WCF SOAP handling.
C# or REST TFS API
The SOAP Alert version used the older TFS C# API, hence you had to distribute these DLLs with the web site. Whilst refactoring I decided to swap all the TFS calls over to the new REST API. This provided a couple of advantages:
- I no longer needed to distribute the TFS DLLs
- Many of the newer functions of VSTS/TFS are only available via the REST API
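As a sketch of why the REST route is so light on dependencies: a call needs nothing more than an HTTP client and a personal access token (PAT) sent as basic auth with an empty user name. A minimal Python illustration (the account name and work item ID below are made up):

```python
import base64


def vsts_auth_header(pat):
    # A VSTS PAT is sent as HTTP basic auth with an empty user name
    token = base64.b64encode((":" + pat).encode("ascii")).decode("ascii")
    return {"Authorization": "Basic " + token}


def work_item_url(account, work_item_id):
    # The shape of the work item REST endpoint; 'account' is your
    # VSTS account name, e.g. 'myaccount' (a placeholder here)
    return ("https://{0}.visualstudio.com/DefaultCollection/_apis/"
            "wit/workitems/{1}?api-version=1.0").format(account, work_item_id)

# e.g. requests.get(work_item_url("myaccount", 42),
#                   headers=vsts_auth_header(my_pat))
```

No TFS client DLLs are involved at all; any HTTP library will do.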
Exposing JObjects to Python
I revised the way that TFS data is handled in the Python scripts. In the past I hand crafted data transfer objects for consumption within the Python scripts. The problem with this way of working is that it cannot handle custom objects; customised work items are a particular issue, as you don’t know their shape in advance.
I found the best solution was to just return the Newtonsoft JObjects that I got from the C# based REST calls. These are easily consumed in Python with simple dictionary-style indexing.
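For example (using json.loads here to stand in for the JObject the framework actually hands over, and with an illustrative payload shape rather than an exact service hook schema), a script can just index into the object, custom fields included:

```python
import json

# Stand-in for the JObject returned by the REST call; the payload
# shape here is illustrative, not an exact service hook schema
payload = json.loads("""
{
  "id": 42,
  "fields": {
    "System.Title": "Fix login bug",
    "System.State": "Active",
    "MyCompany.CustomField": "anything at all"
  }
}
""")

# Dictionary-style indexing works for standard and custom fields
# alike, so the DSL never needs to know the work item's shape up front
title = payload["fields"]["System.Title"]
custom = payload["fields"]["MyCompany.CustomField"]
```

Because the object is passed through untouched, customised work item types need no special handling in the framework itself.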
The downside is that this change means that any scripts you had created for the old SOAP Alert version will need a bit of work when you transfer to the new Service Hook version.
Create a release pipeline
As per all good projects, I created a release pipeline for my internal test deployment. My process was as follows:
- A VSTS build that takes the code from GitHub and:
  - Compiles the code
  - Runs all the unit tests
  - Packages it as an MSDeploy package
- Followed by a VSTS release that:
  - Sets the web.config entries
  - Deploys the MSDeploy package to Azure
  - Then uses FTP to upload the DSL DLLs to Azure, as they are not part of the package
Add support for more triggers
At the moment the Service Hook project supports the same trigger events as the old SOAP project, with the addition of support for Git Push triggers.
I still need to add handlers for the other triggers supported in VSTS/TFS, specifically the release related ones. I suspect these might be useful.
Create an ARM template
At the moment the deployment relies on the user creating the web site. It would be good to add an Azure Resource Manager (ARM) template to allow this site to be created automatically as part of the release process.
So we have a nice new Python and Service Hook based framework to help manage your responses to Service Hook triggers for TFS and VSTS.
If you think it might be useful to you why not have a look at https://github.com/rfennell/VSTSServiceHookDsl.
Interested to hear your feedback
If you are moving from on-premises TFS to VSTS you might hit the same problem I have just had. The structure of VSTS releases is changing: there is now the concept of multiple ‘Deployment Steps’ in an environment. This means you can use a number of different agents for a single environment – a good thing.
The downside is that if you export a TFS 2015.3 release process and try to import it into VSTS, it will fail, saying the JSON format is incorrect.
Of course you can get around this with some copy typing, but I am lazy, so….
I have written a quick transform tool that converts the basic structure of the JSON to the new format. You can see the code as a GitHub Gist.
It is a command line tool; usage is as follows:
- In VSTS create a new empty release, and save it
- Use the drop down menu on the newly saved release in the release explorer and export the file. This is the template for the new format e.g. template.json
- On your old TFS system export the release process in the same way to get your source file e.g. source.json
- Run the command line tool providing the name of the template, source and output file
RMTransform template.json source.json output.json
- On VSTS, import the newly created release JSON file
- A release process should be created, but it won’t be possible to save it until you have fixed a few things that are not transferred
- Associate each Deployment Step with an Agent Pool
- Set the user accounts who will do the pre- and post-approvals
- Re-enter any secret variables
IMPORTANT - Make sure you save the imported process as soon as you can (i.e. straight after fixing anything that is stopping it being saved). If you don't save and start clicking into artifacts or global variables, it seems to lose everything and you need to re-import.
It is not perfect, and you might find other issues that need fixing, but it saves a load of copy typing.
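The core idea of the transform is small. Here is a sketch of it in Python (the real tool is C# and does a little more; the property names such as 'deployPhases' and 'workflowTasks' are assumed from exported definitions and may differ between API versions):

```python
def transform_release(template, source):
    """Copy the environments from an old-format (TFS 2015.3) release
    definition into a new-format (VSTS) one, wrapping each environment's
    flat task list in a single deployment step. The property names
    ('deployPhases', 'workflowTasks' etc.) are assumed from exported
    definitions and may differ between API versions."""
    result = dict(template)          # start from the empty new-format export
    result["name"] = source["name"]
    result["environments"] = []
    for env in source.get("environments", []):
        new_env = dict(env)
        # old format: tasks sit directly on the environment
        tasks = new_env.pop("tasks", [])
        new_env["deployPhases"] = [{
            "rank": 1,
            "phaseType": 1,          # run on an agent
            "name": "Run on agent",
            # new format: tasks live inside a deployment step
            "workflowTasks": tasks,
        }]
        result["environments"].append(new_env)
    return result
```

Everything that is not exported in the first place (agent pools, approvers, secrets) still has to be fixed up by hand after the import, as listed above.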
Whilst working with the VSTS Data Import Service I ended up migrating a TFS TPC up to VSTS that had an old XAML Build Controller defined. I did not need this XAML build controller; in fact I needed to remove it because it was using my free private build controller slot. The problem was that I could not find a way to remove it via the VSTS (or Visual Studio Team Explorer) UI, and the VM that had been running the build controller was long gone.
The way I got rid of it in the end was a quick command line tool written using the TFS C# API.
Note that you will need to delete any queued builds on the controller before you can delete it. You can do this via the VSTS browser interface.
I recently decided to change one of my VSTS instances to be directory backed. What this means is that in the past users logged in using LiveIDs (MSAs by their new name); once the VSTS instance was linked to an Azure Active Directory (AAD), via the Azure portal, they could log in only if:
- they were using an account in the AAD
- their MSA was listed as a guest in the AAD
- they used a work ID in another AAD that is listed as a guest in my AAD
Thus giving me centralised user management.
So I made the changes required, and the first two types of user were fine, but I had a problem with the third case, when I did the following:
- Added an external Work ID to my AAD directory (via the old management portal https://manage.windowsazure.com)
- Added the user to my VSTS instance as a user
- Granted the new user rights to access team projects
All seemed to go OK, but when I tried to login as the user I got the error
TF400813: The user 'db0990ce-80ce-44fc-bac9-ff2cce4720af\fez_blackmarble.com#EXTemail@example.com' is not authorized to access this resource.
With some help from Microsoft I got this fixed; it seems to have been an issue with Azure AD. The fix was to do the following:
- Remove the user from VSTS account
- Go to the new Azure Portal (https://portal.azure.com/) and remove this user from the AAD
- Then re-add them as an external user back into the AAD (an invite email is sent)
- Add the user again to VSTS (another invite email is sent)
- Grant the user rights to the required team projects
This fixed the access problems for me. The key item, I think, was to use the new Azure portal.
Hope it saves you some time
Azure DevTest Labs has been available for a while and I have found it a good way to make sure I control the costs of VMs, i.e. making sure they are shut down outside office hours. However, in the past it had a major limitation for me: it only allowed templates that contained a single VM. You could group these together, but it was a bit awkward.
Well, at Connect() it was announced that there is now complex ARM template support, so you can now build multi-VM environments that simulate the environments you need to work on as a single unit.
Why not give it a try?
One type of feature I hate seeing demoed in any IDE, especially Visual Studio, is the ‘just click here to publish to live from the developer’s PC’. This is just not good practice; we want to encourage a good DevOps process with:
- Source Control
- Automated build
- Automated release with approvals
The problem is that this can all be a bit much for people; it takes work and knowledge, and that right click is just too tempting.
So I was really pleased to see the new ‘Continuous Delivery (Preview)’ feature on Azure Web Apps announced at Connect().
This provides that one click simplicity, but creates a reasonably good DevOps pipeline using the features of VSTS, with either VSTS itself or GitHub as the source repository.
For details of the exact features and how to use it, see the ALM Blog post. I am sure it will provide you with a good starting point for your ongoing development if you don’t want to build a pipeline from scratch; but remember this will not be your end game, as you are probably still going to need to think about how you will manage the further config settings, tests and approvals a full process requires. It is just a much better place to start than a right click in Visual Studio.
I am often asked ‘How can I move my TFS installation to VSTS?’
In the past the only real answer I had was the consultant’s answer: ‘it depends’. There were options, but they all ended up losing fidelity, i.e. the history of past changes got removed or altered in some manner. For many companies the implication of such changes meant they stayed on-premises, with all the regular backups, updates and patching the use of any on-premises service entails.
This has all changed with the announcement of the public preview of the TFS to VSTS Migrator from Microsoft at the Connect() conference.
In essence this allows a TFS Team Project Collection to be imported into VSTS as a new VSTS instance. This makes it sound simple, but it can be a complex process depending upon your adoption of Azure Active Directory and the level of customisation that has been made to your on-premises TFS instance, and it may require upgrading your TFS server to the current version. Hence, the process is Microsoft ALM/DevOps partner led, and I am pleased to say that Black Marble is one of those Gold Partners.
So if you have an on-premises TFS and…
- your company strategy is cloud first and you want to migrate, with full history
- or you don’t want to patch your TFS server any more (or you stopped doing it a while ago)
- or you just want to move to VSTS because it is where all the cool new bits are
why not get in touch with us at Black Marble, or with me directly, to help you investigate the options.
What are the changes in allowed email addresses in MSAs?
You may or may not have noticed that there has been a recent change to LiveIDs (or Microsoft Accounts, MSAs, as they are now called). In the past you could create an MSA using an existing email address e.g. firstname.lastname@example.org. This is no longer an option. If you try to create a new MSA with a non-Microsoft (outlook.com/hotmail.com) email address it is blocked with the message ‘This email is part of a reserved domain. Please enter a different email address’.
This limitation is actually a bit more complex than you might initially think, as it is not just your primary corporate work email it checks; it also checks any aliases you have. So in my case it would give the same error for email@example.com as well as firstname.lastname@example.org, because they are both valid non-Microsoft domains, even though one is really only used as an email alias for the other.
So if creating a new MSA you will need to create an @outlook.com style address. This is something all our new staff need to do, as at present you need an MSA to associate with your MSDN subscription.
In the past we asked them to create this MSA with their firstname.lastname@example.org alias (an alias for their primary work email account, not the primary address email@example.com itself). We encouraged them not to use their primary email address, as it gets confusing which type of account (MSA or work account) is in use at any given login screen if the login name is the same for both. We now have to ask them to create one in the form firstname.lastname@outlook.com to associate their MSDN subscription with.
So that is all good, but what about any existing accounts?
I think the best option is to update any existing accounts to use new @outlook.com addresses. I have found that if you don’t do this you get into a place where the Azure/VSTS/O365 etc. logins get confused as to whether your account is an MSA or a Work Account. I actually managed to get to the point where I suddenly could not log in to an Azure Active Directory (AAD) backed VSTS instance due to this confusion (the fix was to remove my ‘confused’ MSA and re-add my actual corporate AAD work account).
How do I fix that then?
To try to forestall this problem on other services I decided to update my old MSA email address by doing the following:
- Login as my old MSA
- Go to https://account.microsoft.com
- Select ‘Info’
- Select ‘Manage how you sign in to Microsoft’
- Select ‘Add a new email address’
- Create a new @outlook.com email address (this will create a new email/Outlook inbox, but note that this seems to take a few minutes, or it did for me)
- Once the new email alias is created you can choose to make it your primary login address
- Finally you can delete your old address from the MSA
And you are done; you can now log in with your new firstname.lastname@outlook.com address, with your existing password and any 2FA settings you have, to any services you would previously log in to e.g. the MSDN web site, VSTS etc.
The one extra step I did was to go into https://outlook.live.com, once the new email inbox was created, and set up an email forward to my old email@example.com address. This was just to make sure any email notifications sent to the MSA’s new inbox end up somewhere I will actually see them; the last thing I wanted was a new inbox to monitor.
So I have migrated the primary email address for my MSA and all is good. You might not need this today, but I suspect it is something most people with MSAs using a work email as their primary login address are going to have to address at some point in the future.
Whilst I have been on holiday my PC has been switched off and in a laptop bag. This did not stop me from getting problems when I tried to use it again…
- Outlook could not sync to O365 – it turns out there had been some changes in our hybrid Exchange infrastructure; I just needed to restart/patch the PC on our company LAN to pick up all the new group policy settings etc.
- I could not log in to OneDrive, getting a script error from https://auth.gfx.ms/16.000.26657.00/DefaultLogin_Core.js
This second problem was a bit more complex to fix.
- Load Internet Explorer (not Edge)
- Go to Settings > Internet Options > Security
- Pick Trusted Sites and manually add the URL https://auth.gfx.ms as a trusted site.
- Unload the OneDrive desktop client
- Reload the OneDrive desktop client and you should get the usual LiveID login and all is good
- Interestingly – if you remove the trusted site setting the login still appears to work, but for how long I don’t know. I assume something is being cached.
So it appears there have been a few security changes whilst I have been away.