BM-Bloggers

The blogs of Black Marble staff

Deploying FileZilla with Configuration Manager 2012 R2

Some time ago I created a System Center Configuration Manager (SCCM) application to deploy FileZilla for users. Recently I created an updated application that upgraded FileZilla (using the usual ‘uninstall the old version, then install the new version’ method), which gave me an error when the application was upgraded:

Application Upgrade Failed

Unable to make changes to your software

with the reported error being ‘The software change returned error code 0x87D00325 (-2016410843)’.

In addition, attempting to uninstall the application using the SCCM Software Center on the client machine gave a ‘Removal Failed’ with the same error details:

FileZilla Removal Failed

Digging through the AppEnforce log file on the client, an error was reported when attempting to uninstall the application as part of the change:

Prepared command line: "C:\Program Files (x86)\FileZilla FTP Client\uninstall.exe" /S
Post install behavior is BasedOnExitCode
Waiting for process 1616 to finish.  Timeout = 120 minutes.
Process 1616 terminated with exitcode: 0
Looking for exit code 0 in exit codes table...
Matched exit code 0 to a Success entry in exit codes table.
Performing detection of app deployment type FileZilla 3.13.0 x86
Discovered application
App enforcement completed (0 seconds) for App DT "FileZilla 3.13.0 x86"

This indicates that the uninstall process completed, but that the application was detected as still being installed immediately afterwards. This is down to the way the uninstallation is performed: the uninstall process triggered by SCCM spawns another process to actually perform the uninstallation, while the original process terminates. The process that SCCM is waiting on therefore completes almost immediately, while the actual work of the uninstallation is still ongoing, which is why SCCM then detects the application as still present on the client system.

To get around the issue, we need to wait for a period of time after the uninstallation is triggered, to allow it to complete before SCCM performs its detection. To do this I’m using the ‘sleep.exe’ program that comes with the Windows 2003 Resource Kit Tools (available at http://www.microsoft.com/en-us/download/details.aspx?id=17657).

The items that need to be copied to an appropriate location for deployment are:

  • The FileZilla installer
  • The sleep.exe program
  • A batch file to perform the uninstallation
  • An SCCM application that uses the batch file

The batch file should contain two lines:

"%ProgramFiles%\FileZilla FTP Client\uninstall.exe" /S
Sleep 20

This runs the uninstaller as before, then pauses for 20 seconds to allow the system to catch up. 20 seconds is long enough for our client machines; YMMV.
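
If you would rather not depend on the Resource Kit sleep.exe, a PowerShell equivalent of the batch file is a minimal sketch like the following (this assumes a 32-bit FileZilla install on 64-bit Windows and the same 20 second delay; adjust both to suit):

# Run the FileZilla uninstaller silently; the launched process exits almost
# immediately as it hands the real work off to a spawned process
Start-Process -FilePath "${env:ProgramFiles(x86)}\FileZilla FTP Client\uninstall.exe" -ArgumentList '/S' -Wait
# Give the spawned uninstall process time to finish before SCCM runs detection
Start-Sleep -Seconds 20

The SCCM uninstall program would then call this script via powershell.exe -ExecutionPolicy Bypass -File rather than the batch file.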

The SCCM application should then be created as normal:

  1. Manually specify the application information.
  2. Provide a name for the application and optionally a publisher, version etc.
  3. Provide appropriate information for the Application Catalog.
  4. Click ‘Add…’ to add a Deployment Type to the application.
  5. Select ‘Manually specify the deployment type information’.
  6. Provide a deployment type name and optionally administrator comments.
  7. For the content page of the wizard, specify the content location. For the installation program, specify ‘FileZilla_X.XX.X_win32-setup.exe /S’, and for the uninstall program, specify the batch file that was created previously.
  8. For the detection rules, we used the file version of the ‘filezilla.exe’ program file that gets installed, which seemed to work well (see the version check after this list).
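
If you want to confirm the version string the detection rule should compare against, something like this run on a machine that already has FileZilla installed will report it (again assuming a 32-bit install on 64-bit Windows):

# Report the file version of the installed FileZilla executable
(Get-Item "${env:ProgramFiles(x86)}\FileZilla FTP Client\filezilla.exe").VersionInfo.FileVersion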

In addition, if required, supersedence information can be added to the application.

The procedure above provided a robust SCCM application that can be installed, uninstalled and upgraded to a later version without issue. It should be noted that this method also works for other applications with the same sort of uninstallation routine, e.g. Notepad++.

ALM Rangers guidance on migrating from RM Agent based releases to the new VSTS release system

Vijay in the Microsoft Release Management product group has provided a nice post on the various methods you can use to migrate from the earlier versions of Release Management to the new one in Visual Studio Team Services.

An area where you will find the biggest change in technology is when moving from on-premises agent-based releases to the new VSTS PowerShell-based system. To help in this process the ALM Rangers have produced a command line tool to migrate assets from RM server to VSTS; it exports all your activities as PowerShell scripts that are easy to re-use in a vNext Release Management process, or in Visual Studio Team Services’ Release tooling.

So if you are using agent-based releases, why not have a look at the tools and see how easy they make it to migrate your process to the newer tooling.

Visual Studio Dev Essentials announced at Connect() with free Azure time each month

One announcement I missed at Connect() last week was that of Visual Studio Dev Essentials. I only heard about this one whilst listening to RadioTFS’s news from Connect() programme.

Visual Studio Dev Essentials is mostly a re-packaging of all the tools that were already freely available from Microsoft, e.g. Visual Studio Community Edition, Team Foundation Server Express etc.; but there are some notable additions* (some coming soon):

  • Pluralsight  (6-month subscription)—limited time only
  • Xamarin University mobile training— coming soon
  • WintellectNOW  (3-month subscription)
  • Microsoft Virtual Academy
  • HackHands Live Programming Help  ($25 credit)
  • Priority Forum Support
  • Azure credit  ($25/month for 12 months)—coming soon
  • Visual Studio Team Services account with five users
  • App Service free tier
  • PowerBI free tier
  • HockeyApp free tier
  • Application Insights free tier

*Check the Visual Studio Dev Essentials site for the detailed T&C

So if you, or a student/hobbyist you know, need great development tools, sign up at Visual Studio Dev Essentials.

Upgrading to SonarQube 5.2 in the land of Windows, MSBuild and TFS

SonarQube released version 5.2 a couple of weeks ago. This enables some new features that really help if you are working with MSBuild, or just on a Windows platform in general; these are detailed in the SonarQube release posts.

The new ability to manage users with LDAP is good, but one of the most important changes for me is the way 5.2 eases configuration with SQL in integrated security mode. This is mentioned in the upgrade notes; basically it boils down to the fact that you get better JDBC drivers, with better support for SQL clustering and security.

We found the upgrade process mostly straightforward:

  1. Downloaded SonarQube 5.2 and unzipped it
  2. Replaced the files on our SonarQube server
  3. Edited the sonar.properties file with the correct SQL connection details for an integrated security SQL connection (see the example settings after this list). As we wanted to move to integrated security we did not need to set the sonar.jdbc.username setting.
    Important: One thing that was not that clear: if you want to use integrated security you do need the sqljdbc_auth.dll file in a folder on the search path (C:\windows\system32 is an obvious place to keep it). You can find this file on MSDN.
  4. Once the server was restarted we browsed to http://localhost:9000/setup and it upgraded our DBs
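
For reference, the relevant sonar.properties entries for an integrated security connection end up looking something like this (the server and database names are examples only; check the comments in sonar.properties itself for the exact options for your environment):

# SQL Server connection using integrated security – note that no
# sonar.jdbc.username or sonar.jdbc.password entries are set
sonar.jdbc.url=jdbc:sqlserver://localhost;databaseName=sonar;integratedSecurity=true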

And that was it for the upgrade. We could then use the standard SonarQube update features to upgrade our plug-ins and to add new ones such as the LDAP plug-in.

Once the LDAP plug-in was in place (and the server restarted) we were automatically logged into SonarQube with our Windows AD accounts, so that was easy.

However, we hit a problem with the new SonarQube 5.2 architecture and LDAP. With 5.2 there is now no requirement for the upgraded 1.0.2 SonarQube MSBuild runner to talk directly to the SonarQube DB; all communication is via the SonarQube server. Obviously the user account that makes the call to the SonarQube server needs to be granted suitable rights. That is fairly obvious; the point we tripped up on was ‘who is the runner running as?’ I had assumed it ran as the build agent account, but this was not the case. As the connection to SonarQube is a TFS managed service, it has its own security credentials. Prior to 5.2 these credentials (other than the SonarQube server URL) had not mattered, as the SonarQube runner made its own direct connection to the SonarQube DB. Post 5.2, with no DB connection and with LDAP in use, these service credentials become important. Once we had set them correctly, to a user with suitable rights, we were able to do new SonarQube analysis runs.

One other item of note: the change in architecture with 5.2 means that more work is being done on the server as opposed to in the client runner. The net effect of this is that there is a short delay between a run completing and the results appearing on the dashboard. Once you expect it, it is not an issue, but it was a worry the first time.

Finding it hard to make use of Azure for DevTest?

Announced at Connect() today were a couple of new tools that could really help a team with their DevOps issues when working with VSTS and Azure (and potentially other scenarios too).

  • DevTest Labs is a new set of tooling within the Azure portal that allows the easy creation and management of test VMs, as well as providing a means to control how many VMs team members can create, thus controlling cost. Have a look at Chuck’s post on getting started with DevTest Labs
  • To aid general deployment, have a look at the new Release tooling now in public preview. This is based on the same agents as the vNext build system and can provide a great way to formalise your deployment process. Have a look at Vijay’s post on getting started with the new Release tools

Chrome extension to help with exploratory testing

One of the many interesting announcements at Connect() today was that the new Microsoft Chrome Extension for Exploratory Testing is available in the Chrome Store.

This is a great tool if you use VSO, sorry VSTS, allowing an easy way to ‘kick the tyres’ on your application, logging any bugs directly back to VSTS as Bug work items.

Best of all, it makes it easy to test your application on other platforms with the link to Perfecto Mobile. Just press the device button, log in, and you can launch a session on a real physical mobile device to continue your exploratory testing.

The only downside I can see is that if, like me, you would love this functionality for on-premises TFS, you need to wait a while; this first preview only supports VSTS.

Why you need to use vNext build tasks to share scripts between builds

Whilst doing a vNext build from a TFVC repository I needed to map both my production code branch and a common folder of scripts that I intended to use in a number of builds, so my build workspace was set to:

  • Map – $/BM/mycode/main – my production code
  • Map – $/BM/BuildDefinations/vNextScripts – my shared PowerShell scripts I wish to run in different builds, e.g. assembly versioning

As I wanted this to be a CI build, I also set the trigger to $/BM/mycode/main

The problem I found was that, with my workspace set as above, the associated changes for the build included anything checked into $/BM and below. Also, the source branch was set as $/BM.

To fix this problem I had to remove the mapping to the scripts folder; once this was done the associated changes shown were only those for my production code area, and the source branch was listed correctly.

But what to do about running my script?

I don’t really want to have to copy a common script to each build; there is huge potential for error there, and versioning issues if I want a common script in all builds. The best solution I found was to take the PowerShell script, in my case the sample assembly versioning script provided for VSO, and package it as a vNext build task. It took no modification, just the addition of a manifest file. You can find the task in my vNextBuild GitHub repo.
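
For anyone wondering what the manifest involves, it is a task.json file that describes the task and points at the script to run. A minimal sketch is shown below; the GUID, names and script file name are purely illustrative, so look at the actual file in the vNextBuild repo for the real thing.

{
  "id": "00000000-1111-2222-3333-444444444444",
  "name": "VersionAssemblies",
  "friendlyName": "Version Assemblies",
  "description": "Applies the version number from the build number to the AssemblyInfo files",
  "author": "Example",
  "category": "Build",
  "version": { "Major": 0, "Minor": 1, "Patch": 0 },
  "instanceNameFormat": "Version Assemblies",
  "execution": {
    "PowerShell": {
      "target": "$(currentDirectory)\\ApplyVersionToAssemblies.ps1"
    }
  }
}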

This custom task could then be uploaded to my TFS server and used in all my builds. As it picks up its variables from environment variables it requires no configuration, extracting the version number from the build number format.
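
As a rough illustration of what the task does (this is the idea rather than the exact code – the real script is in the repo), it reads the build number from the environment and pulls a four-part version out of it:

# Read the build number the vNext build exposes as an environment variable,
# e.g. 'MyBuild_2015.11.19.3', and extract a four-part version from it
$buildNumber = $Env:BUILD_BUILDNUMBER
$match = [regex]::Match($buildNumber, '\d+\.\d+\.\d+\.\d+')
if ($match.Success) {
    $version = $match.Value
    Write-Verbose "Applying version $version to the AssemblyInfo files" -Verbose
    # ...the task then updates the AssemblyVersion/AssemblyFileVersion attributes with $version...
}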

If you wish to use this task, you need to follow the same instructions to set up your development environment as for the VSO tasks, then:

  1. Clone the repo https://github.com/rfennell/vNextBuild.git
  2. In the root of the repo run gulp to build the task
  3. Use tfx to upload the task to your TFS instance (see the example commands after this list)
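
In practice that looks something like the following; the task folder name and collection URL are examples, and the exact tfx authentication options will depend on how your TFS instance is set up:

# Get the source and restore the node dependencies (if the environment setup has not already done so)
git clone https://github.com/rfennell/vNextBuild.git
cd vNextBuild
npm install
# Build the task
gulp
# Upload the built task to the TFS collection
tfx build tasks upload --task-path .\<TaskFolderName> --service-url http://yourtfs:8080/tfs/DefaultCollection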

Installing Windows 10 RSAT Tools on EN-GB Media-Installed Systems

This post is an aide memoir so I don’t have to suffer the same annoyance and frustration at what should be an easy task.

I’ve now switched to my Surface Pro 3 as my only system, thanks to the lovely new Pro 4 Type Cover and Surface Dock. That meant that I needed the Remote Server Administration Tools installing. Doing that turned out to be much more of an odyssey than it should have been, and I’m writing this in the hope that it will allow others to quickly find the information I struggled to.

The RSAT tools download is, as before, a Windows Update that adds the necessary Windows Features to your installation. The trouble is, that download is EN-US only (really, Microsoft?!). If, like me, you used the EN-GB media to install, you’re in a pickle.

Running the installer appears to work – it proceeds with no errors, albeit rather quickly – but the RSAT features were unavailable. I already had a US keyboard in my config (my Pro keyboard is US), but that was obviously not enough. I added the US language, but still couldn’t get the installer to work.

I got more information on the problem by following the steps described in a TechNet article on using DISM to install Windows Updates. That led me to a pair of articles on the SysadminTips site about the installation problem, and how to fully add the US language pack to solve it.
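
For reference, the DISM route boils down to extracting the RSAT update package and adding it with DISM, which reports far more detail than the update installer does. The file and folder names below are examples from memory, so double-check them against the articles before running anything:

# Extract the Windows 10 RSAT update package (KB2693643) to a temporary folder
mkdir C:\Temp\RSAT
expand -f:* WindowsTH-KB2693643-x64.msu C:\Temp\RSAT
# Add the extracted .cab with DISM (use whichever .cab file name expand actually produces)
dism /online /add-package /packagepath:C:\Temp\RSAT\WindowsTH-KB2693643-x64.cab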

It turns out that the EN-GB media doesn’t install the full EN-US language pack files, so when you add the US language it doesn’t add enough stuff into the OS to allow the RSAT tools to install. Frankly, that’s a mess and I hope Microsoft deals with the issue by releasing multi-language RSAT tools.