Sunday 12 October 2014

Acid Air Pollution on the Cowley Road, Oxford, UK

The Long Wall which Long Wall Street is named for always has a light line of white sand at its base, and nearly always has ongoing masonry works to repair it.

These pictures are taken at another Oxford road junction.

On the eastern edge of Oxford another set of traffic lights, at the junction of Cowley Road and Between Towns Road, again causes stationary traffic.

The local limestone is not very hard, and is constantly eroded, with the damage to the mortar being worse than that to the stone.

The damage is worst near the ground, from street level up to about one metre, and is caused by acidic exhaust fumes.

Here all the lichen has been killed and the brick looks as though it has been cleaned with brick acid.

We know that breathing in small particles from diesel exhaust is dangerous, as the particle size is small enough to pass deep into the lungs. The pattern of the effect on the walls shows that the concentration is particularly strong under one metre in height. There are two primary schools, one on either side of this junction.

Iffley Village Oxford Rag Stone

The quality of Oxford stone can be pretty shaky, and the stone used for field and road boundary walls was presumably of lower quality than that used for houses, as good quality stone is scarce on the clay. This wall in Iffley is a charming example, but note what is happening to the bottom metre.

Who should we be claiming financial damages from? Shell? BP? Exxon?

Tuesday 30 September 2014

Add new lines to end of files with missing line ends

A Sonar rule, Files should contain an empty new line at the end, encodes a common convention:
Some tools such as Git work better when files end with an empty line.

To add a new line to all files without one, place the following in a file called newlines:

FILES="$@"
for f in $FILES
do
  c=$(tail -c 1 "$f")
  if [ "$c" != "" ]
  then
    echo "$f No new line"
    echo "" >> "$f"
  fi
done

Then invoke:

$ chmod +x newlines
$ find . -name '*.java' | xargs ./newlines
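The same tail -c 1 trick can report offenders without modifying them. A small sketch (the missing_newline helper is my own name, not part of the script above; the demo files are throwaway):

```shell
# Report files lacking a trailing newline, without modifying them.
# Uses the same "tail -c 1" trick as the newlines script above.
missing_newline() {
  # command substitution strips a trailing newline, so the result is
  # empty exactly when the file already ends with one
  [ -s "$1" ] && [ "$(tail -c 1 "$1")" != "" ]
}

# demonstration against two throwaway files
tmp=$(mktemp -d)
printf 'ends with newline\n' > "$tmp/good.txt"
printf 'no newline'          > "$tmp/bad.txt"
for f in "$tmp"/*.txt; do
  if missing_newline "$f"; then
    echo "$f: no new line"
  fi
done
rm -r "$tmp"
```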

Monday 1 September 2014

Setting up a mac

Plug in, turn on, update, allow an hour!

Ensure you do not accept the default user details, or your admin user will be timpizey rather than timp.

Install homebrew from http://brew.sh/. Ruby is installed already. This process will install devtools.

Install Chrome; its font size setting is under Web Content.

The System Font cannot be altered! It is used by all native Apple applications such as iPhoto and iStore. This is a little annoying (EN_US tr: infuriating and probably illegal). For more general, well-written Unix applications the fonts can be altered one by one.

Wednesday 30 July 2014

How to mount the Nexus 4 storage SD card on Linux systems

Taken from How to mount the Nexus 4 storage SD card on Linux systems and comments there.

Reproduced here so that I can find it again!

Enable Developer Mode

Go to Settings >> About Phone, then tap seven times on Build Number.

Now, from the Developer Options menu enable USB Debugging.

sudo apt-get install mtp-tools mtpfs
sudo gedit /etc/udev/rules.d/51-android.rules

Note: use straight quotes, not the smart quotes shown in the article:

#LG – Nexus 4
SUBSYSTEM=="usb", ATTR{idVendor}=="1004", MODE="0666"
sudo chmod +x /etc/udev/rules.d/51-android.rules 
sudo service udev restart
sudo mkdir /media/nexus4
sudo chmod 755 /media/nexus4

Next, connect your Google Nexus 4 to your Ubuntu computer using the USB cable. The MTP option has to be enabled.

sudo mtpfs -o allow_other /media/nexus4

To unmount:

sudo umount /media/nexus4

Friday 20 June 2014

Rename Selected Jenkins Jobs

Using jenkinsapi it is easy to rename some jobs:

from jenkinsapi.jenkins import Jenkins
J = Jenkins('http://localhost:8080')

for j in J.keys():
  if (j.startswith('bad-prefix')):
    n = j.replace('bad-prefix', 'good-prefix')
    J.rename_job(j, n)

Thursday 5 June 2014

Multiple git identities

You may wish to keep your work and personal git identities separate but use them both on the same machine.

One way to do this is to use aliases in your ssh and git configs.

Generate a second key and upload it to github.

In ~/.ssh/config

# Opensource github user
Host githubAlias
  HostName github.com
  User git
  IdentityFile ~/.ssh/id_rsa_opensource

In the non-default case, Melati in this example, we now use the alias within the project configuration and override user.name and user.email in .git/config:

[core]
 repositoryformatversion = 0
 filemode = true
 bare = false
 logallrefupdates = true

[user]
 email = timp@paneris.org
 name = Tim Pizey

# NOTE use of remote alias defined in ~/.ssh/config
[remote "origin"]
 fetch = +refs/heads/*:refs/remotes/origin/*
 url = git@githubAlias:Melati/Melati.git
[branch "master"]
 remote = origin
 merge = refs/heads/master
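With that in place, clone through the ssh alias so the opensource key is used, then pin the per-repository identity. A sketch (demonstrated against a throwaway repository rather than a real clone, so it can be run anywhere; the name and address are the examples above):

```shell
# Real usage would be:
#   git clone git@githubAlias:Melati/Melati.git && cd Melati
# Here a throwaway repository stands in for the clone:
repo=$(mktemp -d)
git init -q "$repo"

# pin the identity for this repository only; these write into
# .git/config, exactly as in the [user] section shown above
git -C "$repo" config user.name  "Tim Pizey"
git -C "$repo" config user.email "timp@paneris.org"

# the local setting overrides any global identity
git -C "$repo" config user.email

rm -rf "$repo"
```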

Swapping

At some point you will forget, or slip up in some other way, and commit with the wrong identity. The following git filter-branch script rewrites the offending author and committer details:

#!/bin/sh
 
git filter-branch --env-filter '
 
an="$GIT_AUTHOR_NAME"
am="$GIT_AUTHOR_EMAIL"
cn="$GIT_COMMITTER_NAME"
cm="$GIT_COMMITTER_EMAIL"
 
if [ "$GIT_COMMITTER_EMAIL" = "timp@paneris.org" ]
then
    cn="Tim Pizey"
    cm="timp21337@paneris.org"
fi
if [ "$GIT_AUTHOR_EMAIL" = "timp@paneris.org" ]
then
    an="Tim Pizey"
    am="timp21337@paneris.org"
fi
 
export GIT_AUTHOR_NAME="$an"
export GIT_AUTHOR_EMAIL="$am"
export GIT_COMMITTER_NAME="$cn"
export GIT_COMMITTER_EMAIL="$cm"
'
Then git push -f origin master
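To check the rewrite took, list every distinct identity remaining in the history; the old address should no longer appear. (Run inside the repository.)

```shell
# List each distinct author/committer identity in the current branch.
git log --format='%an <%ae> / %cn <%ce>' | sort -u
```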

Friday 16 May 2014

Using your SSD for caches

You have an SSD mounted on /scratch. Your home directory however is still on the spinning disk.

mv .cache /scratch/home/timp/

In your ~/.xsessionrc:

XDG_CACHE_HOME=/scratch/home/timp/.cache
export XDG_CACHE_HOME
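Some applications ignore XDG_CACHE_HOME and write to ~/.cache regardless; leaving a symlink behind covers those too. A sketch with temporary stand-ins for the home directory and the SSD mount (the real paths are the ones above):

```shell
home=$(mktemp -d)   # stands in for /home/timp
ssd=$(mktemp -d)    # stands in for /scratch/home/timp
mkdir "$home/.cache"

# move the cache onto the "SSD", then leave a symlink behind for
# applications that hard-code ~/.cache
mv "$home/.cache" "$ssd/"
ln -s "$ssd/.cache" "$home/.cache"

ls -ld "$home/.cache"   # now a symlink pointing at the SSD copy
rm -rf "$home" "$ssd"
```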

Wednesday 12 March 2014

Installing and initialising Jenkins JNLP slaves using runit

Run all jobs on slaves

A possible configuration for Jenkins is to run all jobs through JNLP slaves. These can be housed on the same machine as the master or on different machines. The benefit is that the slaves can be run as different users, and so cannot overwrite the master's configuration files.

Creating a jnlp slave
mkdir /srv/jenkins-slaves
adduser  --home /srv/jenkins-slaves/jslave jslave
Download http://localhost:21337/jnlpJars/slave.jar to /srv/jenkins-slaves. Then, as shown on the slave's start page:
cd /srv/jenkins-slaves/jslave

java -jar slave.jar -jnlpUrl http://localhost:21337/computer/jnlp/slave-agent.jnlp \
   -secret 96742108603d1c4f19a7fe52133f7410d75a7287f9686d9e97276e3c1eae10d7

This can then be run under runit.

Add the following to /etc/sv/jslave/run

#!/bin/sh
set -e
exec 2>&1
export LANG=en_GB.UTF8
export LANGUAGE=en_GB:en
export LC_ALL=en_GB.UTF8
export HOME=/srv/jenkins-slaves/jslave

cd /srv/jenkins-slaves/jslave
# Secret and url copied from http://localhost:8081/computer/Runner/
chpst -u jslave \
 java -jar slave.jar -jnlpUrl http://localhost:8081/computer/Runner/slave-agent.jnlp \
  -secret 96742108603d1c4f19a7fe52133f7410d75a7287f9686d9e97276e3c1eae10d7

Add the following to /etc/sv/jslave/log/run

#!/bin/bash
set -e
exec svlogd /var/log/jslave

Then add a symlink into /etc/service and start the service:

ln -s /etc/sv/jslave /etc/service/
/usr/bin/sv start /etc/service/jslave

Now jobs can be configured through the Jenkins interface to run only on the slave runner.

You can extend this to many different slaves, each running a different class of job.

Monday 10 March 2014

Git objects/pack grows enormous, fills disk

Running Jenkins server with git as the source repository I came in to find my primary disk full.

The first thing to do was to find where the files were being used:

$ du -h / | sort -h 

The git checkout directory inside my Jenkins workspace is 406G!!

$ du -h .git/objects/pack/
  406G .git/objects/pack/

Easily fixed:

$ git gc --prune=now
$ du -h .git/objects/pack/
  1.2G .git/objects/pack/
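To keep an eye on a busy checkout without re-walking the directory with du, git count-objects reports the same totals (run inside the repository; size-pack is the packed size in KiB):

```shell
# Summarise repository object storage; size-pack is the pack total.
git count-objects -v
```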

Friday 21 February 2014

A Continuous Integration Train Smash

If you are doing it right then very soon your Jenkins server will become essential to the functioning of every element of your business.

Jenkins will handle the testing, measurement, packaging and deployment of your code.

We have one Jenkins installation; it has grown in capability as it has in importance.

Like most organisations we grew our CI infrastructure organically. Developers from Android, iOS, Java core, front end and sysadmin teams all added jobs and plugins to support them. Some jobs are built on the server and some on slaves.

No record was kept of who installed which plugins, when, or why.

We were aware that we needed to back up this crucial element of infrastructure, though we never did a recovery dry run.

We decided to add the configuration of the server to git, initially manually and then using the SCM Sync configuration plugin, however we did not test restoring from this.

After a while we noticed errors, in the Jenkins logs and on screen, about git. The errors in the logs came from some bad git config files, which we fixed manually. The problems with the SCM Sync configuration plugin were worse, and were tracked down to Renaming job doesn't work with Git. The workaround given does work, but the plugin breaks in a very nasty, hard-to-fix way which requires the server to be restarted. We decided to remove the plugin, even after fixing the current symptoms.

All was good to go: we had a working server, no errors in the logs, all clean and up to date.

Snatching defeat from the jaws of victory

Prior to the restart I had updated all plugins to their latest versions. This is something I have done many times over the last five years and it has never failed. As the first question one is asked in forums is "Have you updated to the latest version?", it is a step I have, until now, taken for granted.

After running a few builds, the following day, Jenkins froze.

The last build to be run was a new, complex, important one, involving Virtual Machines, written by my boss.

I restarted the server, taking the opportunity to update three more plugins as I went.

Again it limped on for a while then the UI froze.

We disabled all plugins (a very competent colleague had joined me) by creating a .disabled file in the plugins directory:

for x in *.jpi; do touch $x.disabled; done

Then we set about re-enabling them, one letter at a time (repeat for a–z):


rm -v a*.jpi.disabled
sudo /etc/init.d/jenkins restart

This revealed that the problem was in a plugin starting with t, one of:

tasks.jpi
thinBackup.jpi
throttle-concurrents.jpi
translation.jpi
token-macro.jpi

Whilst it looked like it might be token-macro.jpi, it then appeared not to be; meanwhile the restarts were taking an increasing length of time.

At this point we decided that it would be better to revert to a backup.

The sysadmin team initiated the restore from backup, then discovered that there was still a spinning, 100% CPU, process and that it was from the throttle-concurrent plugin.

A quick google led to JENKINS-21044, a known Blocker issue. On the wiki this is flagged:

Warning!
The version has a "blocker" issue caused by threads concurrency. See JENKINS-21044 for more info.

It was however too late to stop the restore. The restore failed at 8.00pm.

By 7.00pm the following evening, Friday, after a day of configuration by most of the developers, we were back to where we had been on Wednesday night.

The long tail of the event continues through Monday.

Friday 14 February 2014

Increasing Cucumber Test Speed using Annotations

Cucumber is a neat framework for writing English language specifications, sometimes called Behaviour Driven Development or Specification By Example.

We are using Cucumber to test an API, and so we have perhaps strayed from the true path and used language which is not necessarily at the business logic level, however it does serve well as documentation for its intended audience.

One of the features of Cucumber is a test template, called a Scenario Outline, which enables one to parameterize a test with values:


  Scenario Outline: Calling by code gives full title
    Given a thing with code <code> and title <title>
    When calling "/rest/thing/<code>"
    Then expect json:
    """
      {"thing":{"code":"<code>", "title": "<title>"}}
    """
  Examples:
    |code|title    |
    |a   |Apples   |
    |b   |Bananas  |
    |c   |Cucumbers|

This is very powerful and enables one to add a lot of tests very quickly.

If your test setup takes non-negligible time, such as creating a clean new database, then you will quickly create a test suite whose running time is heading towards hours.

We addressed this by only recreating the database when necessary. We know when that is: it is when the test alters the database in such a way as to interfere with it, or other tests, being run again. Initially I thought of this quality as idempotency, but I realised what I was actually talking about was repeatability. If your test creates a record with a field which has a unique constraint, then you will need to delete that record before that test can be repeated.

We can use the Scenario Outline functionality to repeat a test, if it is not already parameterized, by simply adding an unused parameter.


  Scenario Outline: Create a thing
    When putting "/rest/thing/b/Banana"
    Then expect json:
    """
      {created:{"thing":{"code":"b", "title": "Banana"}}}
    """
    Then remove thing b
  Examples:
    |go|
    |one|
    |two|

We now know that this test can be run twice, so we are justified in tagging it @repeatable.

Initialisation when needed

If a test is @repeatable then we know that we do not need to create a clean database.

The Java hooks below mark the database as DIRTY after every scenario unless it is tagged @repeatable and passed; the database is then recreated before a scenario only when DIRTY is set (the @reuseCurrentDb tag skips recreation for that scenario).

  private static boolean DIRTY = true;

  @Before(value = "@reuseCurrentDb", order = 1)
  public void reuseCurrentDb() {
    DIRTY = false;
  }
  @Before(order = 2)
  public void resetDB() {
    if (DIRTY) {
      try {
        if (CONNECTION == null) {
          CONNECTION = init(dataSource_.getConnection());
        }
        importDB(CONNECTION, dataSet_, DatabaseOperation.CLEAN_INSERT);
      }
      catch (Exception e) {
        throw new RuntimeException("Import failed before test run", e);
      }
    }
  }


  @After(order = 2)
  public void afterAll() {
    DIRTY = true;
  }


  @After(value = "@repeatable", order = 1)
  public void afterRepeatable(Scenario scenario) {
    if (!scenario.isFailed()) {
      DIRTY = false;
    }
  }

In retrospect it looks as though we should have inverted this, with a tag @dirties, as now almost every test is tagged @repeatable. However that does not take into account the development sequence: the test is first made to run once and then is made @repeatable.

Anticipated and Unanticipated Wins

This was intended to speed up our tests and did: from 48 minutes to 6 minutes.

The unintended win was that by focussing on cleanup we discovered three database tables which did not have correct deletion cascading.

This approach may surface other hidden problems, either with your tests or with the system under test; this is a good thing.

Update (2014-06-19)

By chipping away at the cascades, adding methods to capture ids from returned JSON, and adding deletion methods, one is able to reduce the database creations to one; any dirtying of the database throws an exception.

761 Scenarios (761 passed)
10364 Steps (10364 passed)
1m40.157s

  private static boolean DIRTY = true;

  private static HashMap<String, Integer> rowCounts_;

  private HashMap<String, Integer> rowCounts() {
    if (rowCounts_ == null) {
      rowCounts_ = setupTableRowCounts();
    }
    return rowCounts_;
  }

  @Before(order = 1)
  public void resetDB() {
    if (DIRTY) {
      try {
        if (I_DB_CONNECTION == null) {
          I_DB_CONNECTION = init(dataSource_.getConnection());
        }
        importDB(I_DB_CONNECTION, dataSet_, DatabaseOperation.CLEAN_INSERT);
      }
      catch (Exception e) {
        throw new RuntimeException("Import failed before test run", e);
      }
    }
  }


  private HashMap<String, Integer> setupTableRowCounts() {
    HashMap<String, Integer> tableRowCounts = new HashMap<String, Integer>();
    try {
      String[] normalTables = { "TABLE" };
      DatabaseMetaData m = I_DB_CONNECTION.getConnection().getMetaData();
      ResultSet tableDescs = m.getTables(null, null, null, normalTables);
      while (tableDescs.next()) {
        String tableName = tableDescs.getString("TABLE_NAME");
        tableRowCounts.put(tableName, I_DB_CONNECTION.getRowCount(tableName));
      }
      tableDescs.close();
    }
    catch (SQLException e) {
      throw new RuntimeException(e);
    }
    return tableRowCounts;
  }

  private void checkTableRowCounts(Scenario scenario) {
    try {
      DatabaseMetaData m = I_DB_CONNECTION.getConnection().getMetaData();
      String[] normalTables = { "TABLE" };
      ResultSet tableDescs = m.getTables(null, null, null,
          normalTables);
      String problems = "";
      while (tableDescs.next()) {
        String tableName = tableDescs.getString("TABLE_NAME");
        int old = rowCounts().get(tableName);
        int current = I_DB_CONNECTION.getRowCount(tableName);
        if (old != current) {
         problems += " Table " + tableName + " was " + old 
                     + " but now is " + current + "\n";
        }
      }
      tableDescs.close();
      if (!problems.equals("")) {
        problems = "Scenario " + scenario.getName() + " :\n" + problems;
        throw new RuntimeException(problems);
      }
    }
    catch (SQLException e) {
      throw new RuntimeException(e);
    }
  }


  @After(order = 2)
  public void afterAll(Scenario scenario) {
    if (scenario.isFailed()) {
      DIRTY = true;
    }
    else {
      DIRTY = false;
      try {
        checkTableRowCounts(scenario);
      }
      catch (RuntimeException e) {
        DIRTY = true;
        throw e;
      }
    }
  }