Saturday, 7 July 2012

Next subroutine.....

I've almost forgotten what 'subroutines' do!
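(Quick refresher for my own benefit - a subroutine is just a named block of code you CALL to do a job and then return from. A toy example;

program hello
  call greet('world')
end program hello

subroutine greet(name)
  character(len=*) :: name
  print *, 'hello, ', name
end subroutine greet

- ah, it's coming back to me now!)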

However, I've managed to load onto GitHub the first routine called by BKGFIT, the ROBFIT background-fitting code detailed in previous posts.

The code BKLINK is now available for viewing. Though I think I should have put more comments into the code!

Anyway - this is another of the routines used in the background fitting process - essentially an input routine that reads user-defined values from a file called BKGFIT.MNU - which I still need to find!

Why bother fitting the background when the idea is to look for small peaks?

The code was developed mainly around the fitting of gamma-ray spectra, but it can be used on any data set that requires the identification of peaks among significant background 'noise'. Once you have identified what the background looks like, and have represented it mathematically, the search for small variations from that representation is made easier. Exactly like the identification of the signal for the Higgs Boson reported this week. Blimey, I am topical - it wasn't planned!

Essentially the operation of the ROBFIT code follows a sequence;


  1. Read in data required to be analysed
  2. Fit the background (this can be a separate file or the code can be run 'all-up' with it fitting background and peaks)
  3. Search for 'channels' above a cutoff level
  4. Search for peak regions
  5. Identify peaks in these regions
  6. Refit all peaks in the regions
  7. Update the peak list
which seems a fairly straightforward sequence.
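As a flavour of steps 2 and 3, here is a minimal sketch in Fortran - my own toy version, not the actual ROBFIT source (the flat fake data, the mean as the background 'fit' and names like NCHAN and CUTOFF are all made up for illustration);

program peaksearch
  implicit none
  integer, parameter :: nchan = 256
  real :: counts(nchan), bkg(nchan)
  real, parameter :: cutoff = 3.0    ! flag channels this many sigma up
  integer :: i

  counts = 100.0                     ! fake flat background of 100 counts
  counts(128) = 160.0                ! plus one small 'peak' at channel 128

  bkg = sum(counts) / nchan          ! stand-in background 'fit': just the mean

  do i = 1, nchan                    ! step 3: search above the cutoff
     if (counts(i) - bkg(i) > cutoff * sqrt(bkg(i))) then
        print *, 'candidate peak at channel', i
     end if
  end do
end program peaksearch

The sigma comes from counting statistics - for a Poisson process the noise on a background of B counts is roughly sqrt(B) - and the real code has to do all of this robustly, without knowing the background shape in advance.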

Except it gets a bit more complicated......more on that later!





Saturday, 30 June 2012

Tracking the good stuff..

Bit of a diversion from programming but worth it.

Thanks to a bizarre combination of events - watching a programme about Caledonian Road (N9, London) on the BBC (excellent viewing for me), Tweeting that I had lived there for a few years, and Eleonora http://eleonoraschinella.wordpress.com replying with a Tweet about a way of tracking tweets and storing them as a story - I discovered an excellent new online tool for corralling all this fleeting internet information.


It's called Storify http://storify.com/ - absolutely worth checking out! So I'm using it to pull together the threads of 'knowledge' that I pick up from various sources such as Twitter, Google Reader etc. So far I have gone mad and have 4 'stories' running - 2 can be found on http://storify.com/grandwizz - one is an Alan Turing tribute and the other relates to the collation of innovation ideas! The other 2 stories are in draft and are just 'knowledge' dumps - I'll probably post them sometime this week anyway.

So the online tools are shaping up like this for me;


  1. Twitter - full of junk until you can filter what you want through a trusted network, then it's excellent access to current thinking!
  2. LinkedIn - interfacing with Twitter to share knowledge and for deeper discussions - the negative is it's a bit glacial in response time; the positive is that the feedback you get from a network is fantastic, a living encyclopaedia.
  3. Github - repo for all coding - best bar none for this type of thing!
  4. Blogger - this - effectively my diary.
  5. Storify - repository for pulling things together and building themes.
  6. My website (yes, I am calling it that, much to the disdain of my lads) https://sites.google.com/site/oldbam/ - for keeping track of where all this is located!


Now I just need a good Fortran compiler - the one I have (it must be free, you see) is a bit too retro even for me ;)





Friday, 22 June 2012

First module loaded!

Result - the first ROBFIT module has been loaded onto Github!

It only took me 4 hours to do that, mind!

Setting up the ROBFIT repo (got to get the jargon right - that's short for 'repository' on GitHub) was more of a lesson in retro. It's all command-line control once you have downloaded the Git tools,

mkdir robfit
cd robfit 
git init

type stuff - not sure whether to be pleased that I can still follow it or not - I thought there would be some slick modern version where you clickety-click and Bob's your uncle, it's all done for you. But no - maybe the advanced version comes with a green screen too ;)

Anyway;

git add BKGFIT
git commit -m "first module"
git remote add origin https://github.com/grandwizz/robfit.git
git push origin master

seemed to load the background fitting programme BKGFIT onto the site.

GitHub - looks very, very useful by the way - once you have got up the learning curve!

Check it out.......more to come.

Friday, 15 June 2012

Reading my own words.

Bit odd - I have had to resort to reading the ROBFIT book to accelerate the learning.

Feels like I'm cheating - a bit.

So I have;

BKGFIT - fits the background alone
FSPFIT - fits the complete spectrum
RAWDD - displays the raw data
XCALIBER - calibrates the x-axis to energy
FSPDIS - displays the full spectrum
STGEN - generates a standard peak

The order of events for the way the code works is as follows;


  1. Read and display the raw data (RAWDD)
  2. Generate a 'standard' peak from the peak data (STGEN)
  3. Fit the background (BKGFIT)
  4. Search for regions where there may be peaks (FSPFIT)
  5. Add a peak to the region (FSPFIT)
  6. Repeat 3, 4 and 5 until no further peak regions are identified
  7. Convert the x-axis channel numbers into energies (XCALIBER)
  8. Display the fitting results (FSPDIS)
During the fitting, the user has full control over how far the code will go in identifying and attempting to fit smaller and smaller peaks.
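The heart of it is the loop around steps 3 to 6. As a sketch of the control flow only (the stand-in region count is my invention - the real tests live inside BKGFIT and FSPFIT);

program fitloop
  implicit none
  integer :: pass, nregions, npeaks

  npeaks = 0
  do pass = 1, 100
     ! step 3: refit the background (BKGFIT)
     ! step 4: search for candidate peak regions (FSPFIT)
     nregions = 0
     if (pass < 3) nregions = 2      ! stand-in: pretend the search dries up
     if (nregions == 0) exit         ! step 6: stop when nothing new is found
     npeaks = npeaks + nregions      ! step 5: one new peak per region, refit
     print *, 'pass', pass, '- peaks so far:', npeaks
  end do
  print *, 'converged with', npeaks, 'peaks'
end program fitloop

The reason it loops rather than doing everything in one shot is that each newly fitted peak changes what counts as 'background' for the next search.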

Can't believe we did this on machines available at the time!





Saturday, 9 June 2012

Back to ROBFIT and the Fortran for a while!

So I have agreed with Bob that posting the code on GitHub is a good idea. However, getting a version of the code off the floppy discs proved to be an exercise in itself, involving clearing the loft to try and find an old machine that could read the 3 1/2 inch discs. Found the machine and - a miracle occurred - it fired up without any trouble, the drive worked, and I managed to copy a total of 30 discs with various versions onto a modern disc drive. Result!

On a roll, I selected the most recent version, copied it onto the machine I am typing this on (a Toshiba Satellite laptop - conveniently named for analysing Supernova data, I thought), then found a 'read.me' file - another result! Can't for the life of me remember writing any of this build stuff - maybe Bob did it ;)

Tried to follow the instructions, which are;

"Welcome to Robfit


Book reference "The Theory and Operation of Spectral Analysis
                       Using ROBFIT". AIP 1991 ISBN 0-88318-941-0
        Robert L. Coldwell and Gary J. Bamford Univ. of Fla
             ROBFIT@NERVM.NERDC.UFL.EDU


The disks have a mk...hd.bat file in the root directory
First insert Essential and run MKESSHD.BAT
This creates the robfit directory and various subdirectories
with a set of test cases in them.
   Next insert the appropriate coproexe or nocoproexe disk
(depending on whether you do or do not have a coprocessor)
and enter the command RUNABLE.BAT.  This creates the subdirectory
runable under robfit on the hard disk.  Enter this subdirectory and
enter the command ROBFIT and read the book.  The test cases are labelled
ZTCASE1.SP (the data file) through ZTCASE8.SP (supernova data).
It is supposed to be obvious what to do next.  (...dis for display),
(...fit to fit).

.................."

Er - nope - that didn't happen. Guess after 20 years things have changed. I am blaming Bob for not making it future-proof ;))

So the plan now is to take the Fortran files one by one and try to figure out how to recompile them for new machines.
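The obvious free candidate is gfortran, which has a legacy mode for old FORTRAN. Assuming the sources carry a .FOR extension (a guess until I dig them out), checking them one at a time would look something like;

gfortran -std=legacy -c BKLINK.FOR

and then, once they all compile cleanly, linking the lot into an executable;

gfortran -std=legacy -o bkgfit *.FOR

We'll see how much has rotted in 20 years!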

I am enjoying this, aren't I??

Saturday, 2 June 2012

Escalation modelling.....

Back to the offshore QRA modelling...

This is what we are trying to get our heads around at the moment!

This is the escalation-modelling element of the QRA code. The intention is to extract this module so that we can reuse it in future applications.

We now just need a slick way of setting this up for future use that doesn't involve the kind of tortured sequences of ASCII characters that the spreadsheet models utilise - mind-bending!


Saturday, 26 May 2012

Thought leadership!

Twitter update.

Just started to collate information on Twitter - which is a bit of a nightmare given the volume of traffic from the 80 people and organisations I follow.

However, I have discovered the beauty of 'Lists' - you can allocate people/organisations to a List, or to a number of Lists, that you define yourself. So I have set up 4 Lists, which I am calling my Twitter Libraries; at the moment I have the following,


  • Science Library
  • Computing Library
  • Management Library
  • Coffee Library

These seem to be a good grouping of my areas of interest, and they are proving very effective at filtering out the dross without losing a post in all of the traffic. So well done, Twitter!

I am also using #arctki to record all of my tweets related to technical knowledge and innovation, which again is a great way of tracking and sharing information.

Everyone has visibility of the Libraries and #arctki - not sure if anyone is checking them on a regular basis but a store of information is building up.