Saturday 30 June 2012

Tracking the good stuff...

Bit of a diversion from programming but worth it.

Thanks to a bizarre chain of events - watching a BBC programme about Caledonian Road (N9, London; excellent viewing for me), tweeting that I had lived there for a few years, then Eleonora (http://eleonoraschinella.wordpress.com) replying with a tweet about a way of tracking tweets and storing them as a story - I discovered an excellent new online tool for corralling all this fleeting internet information.


It's called Storify (http://storify.com/) - absolutely worth checking out! I'm using it to pull together the threads of 'knowledge' that I pick up from various sources such as Twitter, Google Reader etc. So far I have gone mad and have four 'stories' running - two can be found on http://storify.com/grandwizz - one is an Alan Turing tribute and the other is a collation of innovation ideas. The other two stories are in draft and are just 'knowledge' dumps - I'll probably post them anyway sometime this week.

So the online tools are shaping up like this for me:


  1. Twitter - full of junk until you filter what you want through a trusted network; then it's excellent access to current thinking!
  2. LinkedIn - interfaces with Twitter for sharing knowledge and for deeper discussions - the negative is that it's a bit glacial in response time; the positive is that the feedback you get from a network is fantastic, a living encyclopaedia.
  3. Github - repo for all coding - best bar none for this type of thing!
  4. Blogger - this blog - effectively my diary.
  5. Storify - repository for pulling things together and building themes.
  6. My website (yes, I am calling it that, much to the disdain of my lads) https://sites.google.com/site/oldbam/ - for keeping track of where all this is located!


Now I just need a good Fortran compiler - the one I have (it must be free, you see) is a bit too retro even for me ;)





Friday 22 June 2012

First module loaded!

Result - the first ROBFIT module has been loaded onto Github!

It only took me 4 hours to do that, mind!

Setting up the ROBFIT repo (got to get the jargon right - that's short for repository on Github) was more of a lesson in retro. It's all command-line control once you have downloaded Git:

mkdir robfit
cd robfit 
git init

type stuff - not sure whether to be pleased that I can still follow it or not. I thought there would be some slick modern version where you clickety-click and Bob's your uncle, it's all done for you. But no - maybe the advanced version comes with a green screen too ;)

Anyway:

git add BKGFIT
git commit -m "Add BKGFIT"
git remote add origin https://github.com/grandwizz/robfit.git
git push origin master

seemed to load the background fitting programme BKGFIT onto the site.
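For anyone following along, the same sequence can be rehearsed end to end in a throwaway directory. The identity settings, file contents and commit message below are my own placeholders, and the remote/push lines are commented out since they need real credentials - but note that a `git commit` is needed between the add and the push, or there is nothing to push:

```shell
# Rehearsal of the repo setup in a scratch directory; the identity,
# file contents and commit message are placeholders of my own.
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "demo@example.com"   # placeholder identity
git config user.name "Demo"
echo "background fitting module" > BKGFIT  # stand-in for the real file
git add BKGFIT
git commit -q -m "Add BKGFIT background fitting module"
# git remote add origin https://github.com/grandwizz/robfit.git
# git push origin master
git log --oneline
```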

Github - looks very, very useful by the way, once you have got up the learning curve!

Check it out... more to come.

Friday 15 June 2012

Reading my own words.

Bit odd - I have had to resort to reading the ROBFIT book to accelerate the learning.

Feels like I'm cheating - a bit.

So I have;

BKGFIT - which fits the background alone
FSPFIT - which fits the complete spectrum
RAWDD - which displays the raw data
XCALIBER - which calibrates the x-axis to energy
FSPDIS - which displays the full spectrum
STGEN - which generates a standard peak

The order of events when the code runs is as follows:


  1. Read and display the raw data (RAWDD)
  2. Generate a 'standard' peak from the peak data (STGEN)
  3. Fit the background (BKGFIT)
  4. Search for regions where there may be peaks (FSPFIT)
  5. Add a peak to the region (FSPFIT)
  6. Repeat 3,4,5 until there are no further peak regions identified 
  7. Convert the x-axis channel numbers into energies (XCALIBER)
  8. Display the fitting results (FSPDIS)

During the fitting, the user has full control over the level to which the code will identify and attempt to fit smaller and smaller peaks.
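
The repeat in steps 3-6 is the heart of ROBFIT, so here is a stub sketch of that control flow in shell. The function names mirror the module names above, but every body is a placeholder - the real modules are interactive executables, and the three-region count is invented purely to drive the loop:

```shell
# Stub driver mirroring the fitting sequence above; all bodies are
# placeholders standing in for the real (interactive) ROBFIT modules.
rawdd()    { echo "1. read and display raw data: $1"; }
stgen()    { echo "2. generate standard peak"; }
bkgfit()   { echo "3. fit background"; }
fspfit()   { echo "4/5. search regions and add a peak"; }
xcaliber() { echo "7. convert channel numbers to energies"; }
fspdis()   { echo "8. display fitting results"; }

rawdd ZTCASE1.SP
stgen
regions=3                       # pretend the first search finds 3 peak regions
while [ "$regions" -gt 0 ]; do  # step 6: repeat 3-5 until none remain
    bkgfit
    fspfit
    regions=$((regions - 1))
done
xcaliber
fspdis
```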

Can't believe we did this on machines available at the time!





Saturday 9 June 2012

Back to ROBFIT and the Fortran for a while!

So I have agreed with Bob that posting the code on GITHUB is a good idea. However, getting a version of the code off the floppy discs proved to be an exercise in itself! It involved clearing the loft to try and find an old machine that could read the 3 1/2-inch discs. Found the machine, and a miracle occurred: it fired up without any trouble - the drive worked - and I managed to copy a total of 30 discs with various versions onto a modern disc drive. Result!

On a roll, I selected the most recent version and copied it onto the machine I am typing this on (a Toshiba Satellite laptop - conveniently named for analysing supernova data, I thought) - then found a 'read.me' file - another result! Can't for the life of me remember writing any of this build stuff - maybe Bob did it ;)

Tried to follow the instructions, which are:

"Welcome to Robfit


Book reference "The Theory and Operation of Spectral Analysis
                       Using ROBFIT". AIP 1991 ISBN 0-88318-941-0
        Robert L. Coldwell and Gary J. Bamford Univ. of Fla
             ROBFIT@NERVM.NERDC.UFL.EDU


The disks have a mk...hd.bat file in the root directory
First insert Essential and run MKESSHD.BAT
This creates the robfit directory and various subdirectories
with a set of test cases in them.
   Next insert the appropriate coproexe or nocoproexe disk
(depending on whether you do or do not have a coprocessor)
and enter the command RUNABLE.BAT.  This creates the subdirectory
runable under robfit on the hard disk.  Enter this subdirectory and
enter the command ROBFIT and read the book.  The test cases are labelled
ZTCASE1.SP (the data file) through ZTCASE8.SP (supernova data).
It is supposed to be obvious what to do next.  (...dis for display),
(...fit to fit).

.................."

Er - nope, that didn't happen. I guess after 20 years things have changed. I am blaming Bob for not making it future-proof ;))

So the plan now is to take the Fortran files one by one and figure out how to recompile them for modern machines.

I am enjoying this aren't I??

Saturday 2 June 2012

Escalation modelling.....

Back to the offshore QRA modelling...

This is what we are trying to get our heads around at the moment!

This is the escalation modelling element of the QRA code. The intention is to extract this module so that we can reuse it in future applications.

We now just need a slick way of setting this up for future use that doesn't involve the tortured sequences of ASCII characters that the spreadsheet models use - mind-bending!