BM-Bloggers

The blogs of Black Marble staff

Out with the Band, in with the Garmin

I have been using the Microsoft Band (both the Band1 and the Band2) since they came out, and have been reasonably happy. However, a year or so on, my issues with it have remained the same:

  • Poor battery life. I can live with charging it each day, but even with GPS power-saver mode on I can’t track any exercise over about 4 hours (a bit of an issue for longer bike rides)
  • It is not waterproof, so no swimming (and I worry when doing the washing up)

Also, there seem to be some build issues with the robustness of the Band2. I had to get mine replaced because it stopped accepting a charge, and the forums seem to report people suffering problems with the wrist strap splitting. That said, the warranty service seems excellent, no complaints there; mine was swapped without any issue in a couple of days.

In the end, however, I decided it was time to check out alternatives and picked the Garmin Vivoactive HR; basically the Garmin equivalent to the Band in feature set and price (it is a little more expensive in the UK).

 


 

I have to say, a couple of weeks in, I am very pleased. It fixes those two major issues for me. Most importantly, I seem to need to charge it only about every 5 days or so, and that is with an hour or two of full activity tracking each day. The specs claim 10+ hours of full activity tracking on a charge. Also, it is waterproof and allows activity tracking for pool-based swimming (swim mode is lap-based and has no GPS enabled, so it is of less use for open water).

That all said, there are still issues:

  • The Bluetooth link to my Windows Phone 10 is a little temperamental for things like notifications and sync – a restart usually fixes everything (but hey, it fully supports Windows Phone 10, not just Android and iPhone!)
  • It is a shame they disable the heart rate monitor for swimming (the signal is not reliable enough, it seems, unless you pair a chest strap)
  • Lack of open water swimming tracking (see above – but if you want full multisport tracking look at the Garmin 920XT, their top-of-the-range watch; it does it all)

But I think these are all minor issues for me, and the third-party app store for the device helps, for example by adding triathlon support that attempts HR monitoring for swimming, without needing to upgrade to the 920XT.

So a good alternative to the Band2?

For me, yes; it addresses my key issues. The Band2 is a good fitness tracker with unique styling, but if swimming or longer activities are your thing I think the Garmin Vivoactive HR has it.

New Build Management VSTS tasks

Just published a new VSTS extension with a couple of tasks in it. The aim is to help formalise the end of a release process. The tasks:

  • Allow you to set the retention ‘keep forever’ flag on a build (or all builds linked to a release)
  • Increment a build variable, e.g. all or part of a version number, in a build (or all builds linked to a release)

The first just replicates functionality I used to have in-house for builds.

The second one is important to me, as once I have released a version of a product to production I never want to generate another build with the same base version number. For example, we version-stamp all our DLLs/builds with a version number in the form:

$(Major).$(Minor).$(year:yy)$(dayofyear).$(rev:r)     e.g. 1.2.16170.2

Where $(Major) and $(Minor) are build variables we set manually (we decide when we increment a major or minor release) and the last two blocks guarantee a unique build number every time. It is too easy to forget to manually increment the Major or Minor build variable during a release. This task means I don’t need to remember to set the value of one or both of these: I can either set an explicit value or just get it to auto-increment. I usually auto-increment the Minor value as a default, doing a manual reset of both the Major and Minor if it is a major release.
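
To illustrate the idea behind the second task (a minimal sketch, not the extension’s actual code), a build definition variable can be read, incremented and written back via the VSTS REST API. The account, project, definition id and variable name below are all made up, and $pat is assumed to hold a personal access token with rights to edit build definitions:

$pat = "<personal access token>"   # placeholder - generate one in VSTS
$token = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
$headers = @{ Authorization = "Basic $token" }
$uri = "https://myaccount.visualstudio.com/DefaultCollection/MyProject/_apis/build/definitions/42?api-version=2.0"

# Read the definition, increment the Minor variable, then write it back
$definition = Invoke-RestMethod -Uri $uri -Headers $headers
$definition.variables.Minor.value = ([int]$definition.variables.Minor.value) + 1
Invoke-RestMethod -Uri $uri -Headers $headers -Method Put -Body (ConvertTo-Json $definition -Depth 10) -ContentType "application/json"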

NOTE: You do have to add some permissions to the build service account, else this second task fails with a 403 permission error – so read the wiki.

Web Application Proxy Failure Following Outage

Following a ‘hiccup’ involving a Web Application Proxy (WAP) server, internal services were no longer being published to the outside world.

After some investigation, both the ADFS and WAP services showed as stopped on the server. Attempting to start the ADFS service from the services console produced the following error:

Windows could not start the Active Directory Federation Service service on Local Computer.
Error 1064: An exception occurred in the service when handling the control request.

Under the System section of the Windows Event Log, the following error was shown:

Event ID: 7023
The Active Directory Federation Services service terminated with the following error:
An exception occurred in the service when handling the control request.

Followed a few moments later by the following error:

Event ID: 7023
The Web Application Proxy Service terminated with the following error:
A certificate is required to complete client authentication

Looking in the ‘AD FS’ section of the Event Log (under ‘Applications and Services Logs’), the following errors were shown (note that the first error was generally shown multiple times, followed by a single instance of the second error):

Event ID: 383
The Web request failed because the web.config is malformed.
User Action:
Fix the malformed data in the web.config file.
Exception details:
Root element is missing (C:\Windows\ADFS\Config\microsoft.identityServer.proxyservice.exe.config)
Root element is missing.

Followed by:

Event ID: 199
The federation server proxy could not be started.
Reason: Error retrieving proxy configuration from the Federation Service.
Additional Data
Exception details:
An error occurred when attempting to load the proxy configuration.

Checking the file at C:\Windows\ADFS\Config\microsoft.identityServer.proxyservice.exe.config showed that while the file size was still indicated as 2k, the file was blank.

I’ve seen a number of reports online indicating that WAP seems happy to chew up the contents of this configuration file following an outage, although I can find no information on why this might happen. If you have a backup of the file in question, it should be a simple matter to restore it and restart the ADFS and WAP services to restore service. If you don’t, and have no other example server from which you can pull a similar copy of the file, then the following steps must be taken (a rough PowerShell equivalent is sketched after the list):

  1. Remove the Web Application Proxy role from the server. Once this is complete, a reboot will be required.
  2. Re-add the Web Application Proxy role to the server.
  3. Once this is complete, initiate the configuration wizard.
  4. Use the same configuration parameters as you used when configuring the service initially, namely the federation service name (e.g. federation.domain.com), the local admin details for the federation server and the federation certificate (unless you’ve replaced the certificate, in which case you should obviously use the new certificate details); you noted those down during initial configuration, right?
  5. Once configuration is complete, the Remote Access Management Console should open automatically. All of your publishing rules should still be in place, and your published services should be available immediately.
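
For those who prefer to script the recovery, here is a rough PowerShell sketch of the equivalent steps; the federation service name and certificate thumbprint are the same placeholders used in the sample config below, and the credential prompt expects the federation server’s local admin details:

# Remove the role, then reboot before continuing
Uninstall-WindowsFeature Web-Application-Proxy
Restart-Computer

# After the reboot: re-add the role and re-run the configuration
Install-WindowsFeature Web-Application-Proxy -IncludeManagementTools
Install-WebApplicationProxy -FederationServiceName "federation.domain.com" `
    -FederationServiceTrustCredential (Get-Credential) `
    -CertificateThumbprint "1234567890ABCDEF1234567890ABCDEF12345678"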

For reference, here’s a sample config file, from which you should be able to reconstruct an appropriate file for your service:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="microsoft.identityServer.proxyservice" type="Microsoft.IdentityServer.Management.Proxy.Configuration.ProxyConfiguration, Microsoft.IdentityServer.Management.Proxy, Version=6.3.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL" />
  </configSections>

  <microsoft.identityServer.proxyservice>
    <congestionControl latencyThresholdInMSec="8000" minCongestionWindowSize="64"
      enabled="true" connectionTimeoutInSec="60" />
    <connectionPool connectionPoolSize="200" scavengeInterval="5" />
    <diagnostics eventLogLevel="15" />
    <host tlsClientPort="49443" httpPort="80" httpsPort="443" name="federation.domain.com" />
    <proxy address="" />
    <trust thumbprint="1234567890ABCDEF1234567890ABCDEF12345678"
      proxyTrustRenewPeriod="21600" />
  </microsoft.identityServer.proxyservice>
  <!-- <system.serviceModel>
    <diagnostics>
      <messageLogging logEntireMessage="true"
              logMessagesAtServiceLevel="true"
              logMessagesAtTransportLevel="true">
      </messageLogging>
    </diagnostics>
  </system.serviceModel> -->
</configuration>

 

Life gets better in Visual Studio Code for PowerShell

I have been using Visual Studio Code for PowerShell development, but got a bit behind on reading release notes. Today I realised I can make the integrated terminal in Code a PowerShell instance.

In File > Preferences > User Settings (settings.json), enter the following:

 

// Place your settings in this file to overwrite the default settings
{
     // The path of the shell that the terminal uses on Windows.
    "terminal.integrated.shell.windows": "C:\\windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe"
}

Now my terminal is a PowerShell instance, and you can see it has loaded my profile, so posh-git is working as well.

 


 

So I think we have reached the ‘goodbye PowerShell ISE’ point.

Gotchas when developing VSTS build extensions

I recently posted on my development process for VSTS extensions; it is specifically the PowerShell-based build tasks I have been working on. During this development I have come across a few more gotchas that I think are worth mentioning.

32/64 bit

The VSTS build agent launches PowerShell 64-bit (as does the PowerShell command line on a dev PC), but VSCode launches it 32-bit. Whilst working on my StyleCop extension this caused me a problem, as it seems StyleCop can only load dictionaries for spell-checking-based rules when in a 32-bit shell. So my Pester tests for the extension worked in VSCode but failed at the command line and within a VSTS build.

After many hours my eventual solution was to put some guard code in my scripts to force a relaunch in 32-bit mode:

param
(
    [string]$treatStyleCopViolationsErrorsAsWarnings,
    [string]$maximumViolationCount,
    … other params
)

if ($env:Processor_Architecture -ne "x86")
{
    # Collect the values of the parameters this script was called with
    $args = $myinvocation.BoundParameters.GetEnumerator() | ForEach-Object {$($_.Value)}
    write-warning 'Launching x86 PowerShell'
    # Relaunch this same script under the 32-bit (SysWOW64) PowerShell host,
    # passing the original argument values through, then exit the 64-bit copy
    &"$env:windir\syswow64\windowspowershell\v1.0\powershell.exe" -noprofile -executionpolicy bypass -file $myinvocation.Mycommand.path $args
    exit
}
write-verbose "Running in $($env:Processor_Architecture) PowerShell"

... rest of my code

 

The downside of this trick is that you can’t pass return values back, as you have swapped execution process. For the type of things I am doing with VSTS tasks this is not an issue, as the important data has usually been dropped to a file which is accessible to everything, such as test results.
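
As a trivial sketch of the pattern (the file name and the $violations variable are made up for illustration):

# In the relaunched 32-bit instance: drop the results to a shared location
$resultsFile = Join-Path $env:BUILD_STAGINGDIRECTORY 'StyleCop.Results.xml'
$violations | Export-Clixml -Path $resultsFile

# Back in the 64-bit instance (or a later build step): read them back in
$violations = Import-Clixml -Path $resultsFile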

For a worked sample of production code and Pester tests see my GitHub repo.

Using Modules

In the last post I mentioned the problem when trying to run Pester tests against scripts: the script content is executed. I stupidly did not mention the obvious solution of moving all the code into functions in a PowerShell module. This makes it easier to write tests for all bar the outer wrapper .PS1 script that is called by the VSTS agent.

Again, see my GitHub repo for a good sample. Note how I have split out the files (a minimal sketch of this layout follows the list) so that I have:

  • A module that contains the functions I can test via Pester
  • A .PS1 script called by VSTS (this will run 64-bit) where I deal with interaction with VSTS/TFS
  • An inner .PS1 script that we force into 32-bit mode as needed (see above)
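
Here is a minimal sketch of that layout; the file and function names are illustrative, not the repo’s actual ones:

# MyTask.psm1 - all the logic lives in functions, so Pester can exercise it
function Invoke-MyTaskLogic
{
    param([string]$maximumViolationCount)
    # ... the real work happens here ...
}

# MyTask.ps1 - the thin wrapper script the VSTS agent actually calls
Import-Module "$PSScriptRoot\MyTask.psm1"
Invoke-MyTaskLogic -maximumViolationCount $maximumViolationCount

# MyTask.Tests.ps1 - the Pester tests target the module function directly
Import-Module "$PSScriptRoot\MyTask.psm1" -Force
Describe 'Invoke-MyTaskLogic' {
    It 'accepts a violation count without throwing' {
        { Invoke-MyTaskLogic -maximumViolationCount '10' } | Should Not Throw
    }
}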

Hacking around on your code

You always get to the point, I find, when developing things like VSTS build tasks, that you want to make some quick change to try something without the full development/build/release cycle. This is in effect the local development stage; it is just that build task development makes it awkward. It is hard to fully test a task locally; it needs to be deployed within a build.

A way I have found to help here is to use a local build agent: you can then get at the deployed task and edit the .PS1 code. The important bit to note is that the task will not be redeployed, so your local ‘hack’ can be tested within a real TFS build without having to increment the task’s version and redeploy.
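
If you are hunting for where the agent has cached a task so you can edit it, something like the following can help; note that C:\agent is an assumed install location and the task cache layout may vary between agent versions:

# Look for the deployed copy of a task's scripts under the agent's task cache
Get-ChildItem -Path 'C:\agent\tasks' -Recurse -Filter '*.ps1' |
    Where-Object { $_.FullName -match 'StyleCop' }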

Hacky but handy to know.

You do, of course, need to make sure your hacked code is eventually put through your formal release process.

And maybe somethings or nothings…

I may have seen these issues, but have not got to the bottom of them, so they may not be real issues:

  • The order parameters are declared in the task.json file seems to need to match the order they are declared in the .PS1 file call. I had thought they were associated by name not order, but in one task they all got transposed until I fixed the order.
  • The F5 dev/debug cycle is still a little awkward with VSCode; sometimes it seems to leave stuff running and you get high CPU utilisation – just restart VSCode, the old fix!
  • If using the 32-bit relaunch discussed above, write-verbose messages don’t always seem to show up in the VSTS log. I assume a –verbose parameter is being lost somewhere, or it is the spawning of another PowerShell instance that causes the problem.

So again, I hope these tips help with your VSTS extension development.

Running TSLint within SonarQube on a TFS build

I wanted to add some level of static analysis to our TypeScript projects, TSLint being the obvious choice. To make sure it got run as part of our build/release process I wanted to wire it into our SonarQube system; this meant using the community TSLintPlugin, which is still pre-release (0.6 preview at the time of writing).

I followed the installation process for the plugin without any problems, setting the TSLint path to match our build boxes:

C:\Users\Tfsbuild\AppData\Roaming\npm\node_modules\tslint\bin\tslint

Within my TFS/VSTS build I added three extra tasks:


  • An npm install task to make sure that TSLint was installed in the right folder, by running the command ‘install -g tslint typescript’
  • A pre-build SonarQube MSBuild task to link to our SonarQube instance
  • A post-build SonarQube MSBuild task to complete the analysis

Once this build was run with a simple Hello World TypeScript project, I could see SonarQube attempting to do the TSLint analysis but failing with the error:

2016-07-05T11:36:02.6425918Z INFO: Sensor com.pablissimo.sonar.TsLintSensor

2016-07-05T11:36:07.1425492Z ##[error]ERROR: TsLint Err: Invalid option for configuration: tslint.json

2016-07-05T11:36:07.3612994Z INFO: Sensor com.pablissimo.sonar.TsLintSensor (done) | time=4765ms

The problem was that the sonar-project.properties file generated by the build task did not contain the path to the tslint.json file. In the current version of the TSLint plugin this file needs to be managed manually; it is not generated from the SonarQube ruleset. Hence it is a file in the source code folder on the build box, a path that the SonarQube server cannot know.

The Begin Analysis SonarQube for MSBuild task generates the sonar-project.properties file, but only adds the entries for MSBuild (as its name suggests). It does nothing related to the TsLint plugin or any other plugins.

The solution was to add the required setting via the advanced properties of the Begin Analysis task, i.e. point to the tslint.json file under source control, using a build variable to set the base folder:

/d:sonar.ts.tslintconfigpath=$(build.sourcesdirectory)\tslint.json
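
If you need a starting point for the tslint.json file itself, a minimal version looks something like the following; the rules shown are arbitrary examples rather than a recommended set:

{
  "rules": {
    "no-eval": true,
    "no-var-keyword": true,
    "no-debugger": true
  }
}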


Once this setting was added I could see the TSLint rules being evaluated and the results showing up in the SonarQube analysis.

Another step to improving our overall code quality through consistent analysis of technical debt.

SharePoint Crawl Rules Appears to Ignore Some URL Protocols

I recently came across an issue relating to crawling people information in SharePoint and the use of crawl rules to exclude certain content.

The issue revolved around a requirement to exclude content contained within people’s MySites, but include user profile information so that people searches could still be conducted. The following crawl rule had been configured and was successfully excluding MySite content, but was also excluding the user profile data (crawled using the sps3s:// protocol):

URL                            Exclude or Include
https://mysite.domain.com/*    Exclude

Using the crawl rule test facility indicated that while SharePoint treats http:// and https:// differently, https:// and sps3s:// appear to be treated the same as far as crawling is concerned. So if the above crawl rule is in place, items in the MySite root site collection, with both an https:// and an sps3s:// prefix, will not be crawled, and therefore user profile data and people search will not be available:

[Screenshot: crawl rule test]

[Screenshot from a lab SharePoint 2010 system; however, the same tests have been performed against SharePoint 2013 and 2016 with the same results]

In fact, what is happening is that the sps3s:// prefix tells SharePoint which connector to use. In the case of people search, this is translated into a call to a web service at the host specified, i.e. https://mysite.domain.com/_vti_bin/spscrawl.asmx, so the final call that is made is in fact to an https:// prefix, hence the reason the people data is not crawled.

Replacing the above crawl rule with the following rule corrects the issue, allowing people data stored in the MySite root site collection to be indexed and therefore be available for users to search:

URL                                     Exclude or Include
https://mysite.domain.com/personal/*    Exclude
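
If you prefer to script the change, one possible approach from the SharePoint Management Shell is sketched below; the search service application name is an assumption, so match it to your farm:

# Create the corrected exclusion rule against the search service application
$ssa = Get-SPEnterpriseSearchServiceApplication "Search Service Application"
New-SPEnterpriseSearchCrawlRule -SearchApplication $ssa `
    -Path "https://mysite.domain.com/personal/*" -Type ExclusionRule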

Scroll bars in MTM Lab Center had me foxed – User too stupid error

I thought I had a problem with our TFS Lab Manager setup: 80% of our environments had disappeared. I wondered if it was rights – was it just showing environments I owned? No, it was not that.

Turns out the issue was a UX/scrollbar issue.

I had MTM full screen in ‘Test Center’ mode, with a long list of test suites, so long that a scroll bar was needed, and I had scrolled to the bottom of the list.

I then switched to ‘Lab Center’ mode. This list was shorter, not needing a scrollbar, but the pane listing the environments (which had been showing the test suites) was still scrolled to the bottom. The need for the scrollbar was unexpected and I just missed it visually (in my defence, it is light grey on white). Exiting and reloading MTM had no effect; the scroll did not reset on a reload or a change of Test Plan/Team Project.

In fact, I only realised the solution to the problem when it was pointed out by another member of our team, after I asked if they were experiencing issues with Labs; the same had happened to them. Between us we wasted a fair bit of time on this issue!

Just goes to show how you can miss standard UX signals when you are not expecting them.