Saturday, 29 December 2012

What's worked well?

So with all the social-freemium activity I've had a play with this year, which applications stand out from the rest in terms of usefulness?

Clear winner:

Skype - what can I say: free phone calls, a view of who is 'live' online, conference calls and a messaging system, all at the click of a button!

Runner up:

Trello - great medium for developing ideas, managing projects and developing marketing plans!

Honourable mention:

Twitter (inc Tweetdeck) - thought this was going to be useless - but it's very dynamic and feels alive compared with the very slow response you get from the other networking applications.

Good but must try harder:

LinkedIn - going in the wrong direction - removing links to other media - seems to be reverting to recruitment site mode.

Storify - I like this one personally but it hasn't been of great interest to others. You need to make sure it doesn't end up as a dumping ground - the stuff you put in the loft thinking it will be of use someday!

Saturday, 22 December 2012

Christmas programming activity for when you get bored!

Short and sweet one this week - a bit of festive activity courtesy of 'bambofy' ;)

Hopefully everyone is now familiar with Dropbox - one of my 2012 favourite freemium apps, by the way - well, it has now been enhanced to the point where you can use it to build your own web site!

Or as the web site says, 'absurdly simple web site hosting' - check it out - it lets you build a site from your Dropbox folder structure!

Here is my very feeble first attempt at it - I'll be looking for some instruction from my two trusted advisors over the holiday.

Another one for all those interested in managing their activity on all this social media stuff - and a mechanism for planning your 2013 online campaign if you are so inclined - check out

  • Twitter
  • GitHub
  • emails
  • etc 

all tracked for you - and put onto your own personal dashboard - bit dangerous really!

That's it for now - Merry Christmas - enjoy whatever projects you have running!!

Saturday, 15 December 2012

Just when you thought it was safe to come out!

LinkedIn keeps removing functionality!!!

I planned to wax lyrical about richly functional, free, open source applications that can be a powerful tool for business use. However, the past two weeks have illustrated the dangers of relying too much on these 'free' platforms. I should have heeded the messages in a number of the Tweets I have sent out recently.

So what is it all about?

The last couple of posts have been about the journey into 'gamification' and the thinking we have been doing in-house on setting up a company implementation of skill measurement 'badges'. This week's note was going to say that the easiest implementation for us would be to use the LinkedIn platform of 'endorsements': creating an in-house set of badges for a tbd set of skills. Company members of LinkedIn would then be able to vote for your company skill, thus providing a global platform for these ratings without too much involvement from our management. A peer-rated set of skills - a much more powerful measure than a list of the courses you have sat through!

Well - this has now all been thrown up in the air. LinkedIn have recently taken a unilateral decision - as far as I can see - to remove a couple of similar features from your profile. One was the link to GitHub (useful if you are into coding) and the other was a link to Blogger (i.e. this site). I'm now very concerned about putting anything remotely critical to our business on the site. Who knows what they will decide to remove next!! In fact, why bother even using the endorsements at all - if nobody bothers they will more than likely be terminated! Yes, yes, yes, social pressures blah, blah, blah.

We need a re-evaluation.

Get your own site up and running as quickly as possible ..... xxxx you have been warned!

Saturday, 8 December 2012

Gamification cont.

So - following on from the previous post what have been the thoughts on what to actually allocate badges for? Seemed like an easy one at first but became a bit mind bending pretty quickly.

Do you allocate badges for tangible activities/achievements - such as these, which the team came up with off the top of their heads;

  • Training quizzes passed (1st, 5th, every 10 thereafter) 
  • Completion of a training series 
  • Billable hours worked (1st, 40th, 100th, 500th, every 500 thereafter) 
  • Employee grade 
  • H&S TIPs completed (1st, 5th, every 10 thereafter) 
  • Completion of Project Management training 
  • Revenue oversight for PMs ($100K, $500K, every $500K thereafter? no idea on this one) 
  • Company Profile completed (Mysite) 
  • Professional Licensure 

or link them to actual company certificates - such as;

  • Advanced Management Programme Certificate 
  • European Networking Programme Certificate 
  • IOSH Certificate 
  • PM Certificate 
  • Risk Management Certificate 
  • Webinar Presenter 
  • Quest Achiever 
  • SuperQuest Achiever 

The advantage of doing this is ease of coordination and management by the business. These activities are already tracked and monitored as part of internal staff development.

However, then the question is: what is the point of the badge? They are then simply measuring achievements in a simple tick-box manner. Is that wrong? A question still to be debated. 
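The tangible, threshold-style rules in the first list (1st, 5th, every 10 thereafter) do at least have the virtue of being trivial to automate. A minimal sketch - the function name and my reading of 'every 10 thereafter' are assumptions, not anything the team has specified:

```python
# Hypothetical sketch of the threshold badge rules above.
# Reading '1st, 5th, every 10 thereafter' as: award on the 1st and 5th
# completion, then on every multiple of 10 (10th, 20th, 30th, ...).

def badge_earned(completions: int) -> bool:
    """Return True if this completion count triggers a new badge."""
    if completions in (1, 5):
        return True
    return completions >= 10 and completions % 10 == 0
```

Which rather makes the tick-box point for itself: if a badge can be awarded by four lines of code, is it measuring anything worth measuring?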

OR - should badges be allocated based more around skill or capability measures as judged by a community, peer groups, clients, competitor networks etc? This would have the advantage that it is more of a true representation of capabilities, rather than ticking the box for a given training course. You may not have your PhD in Particle Physics (or railway signalling) but you can be acknowledged for your participation and contribution to the field (Honorary Doctorate style) by other leaders in the field. This to me seems a much better use of badge allocation. The second question - still to be debated.  

As an example of how this would work, LinkedIn has recently introduced 'endorsements' - where your network of contacts can 'endorse' the skills you say you have as part of your LinkedIn profile. It's fascinating seeing this grow - it's only been put in place recently and is still in the phase where people are finding out how it works and what it all means.

Anyway, the fun continues.....

Saturday, 1 December 2012

An emergent property!

Over the past few weeks - through my various meanderings on Twitter and LinkedIn - something that I have been seeking has emerged! 

A topic that I came across in a Tweet by one of my GURU's (that's Grandwizz Useful Research Unit) was taken and re-entered into one of the company 'thought leadership' groups on LinkedIn. Nothing Earth-shattering in that process; however, it has been fascinating seeing how this topic has sparked interest among a 'self-organising' group of people. No need to send emails around the various operating units around the globe to canvass for support - usually resulting in getting someone nominated who is not fully engaged - the Diamond Dogs have formed. 

Comprising me, plus;

  • Ben
  • Cam
  • Eric
  • Ian
  • Marcelo
  • Paul
  • Tom - who set the challenge!
Many thanks to all - you know who you are - for the input so far, by the way.

The topic we are thinking through is around the use of 'gamification' to help support and grow staff development - which sounds pretty boring when you say it like that. Essentially the use of game and token incentives - like collecting game points to show your 'power' to others. Or possibly hotel points in my case, given the number of schemes I seem to be enrolled in! Given that this was completely new to me a couple of weeks ago, I am now seeing these sorts of things everywhere: LinkedIn 'profile % complete' and 'endorsements', Twitter followers etc.

The challenge has resulted from Tom's use of FourSquare (I'd not used that either - just to let you know how far behind the drag curve I am on these things), where 'badges' can be gained for visiting certain places - badge collection resulting in gamification of travel. I'm still struggling with FS if truth be told, but I can understand the concept of incentives for visiting places - still feels a bit boy scout-ish to me though. The concept of using badges for training and development purposes is well established - a few sites are listed below for those interested - and widely used in the education field. Our challenge was: could we not apply these concepts to our internal activities? Seems like a very reasonable task - how to use non-monetary public recognition awards within the business to help raise staff engagement. 

Some of the key requirements - given the topics of previous posts I had to put a few of these down ;)
  1. must be easy for staff to 'sign-up' to the scheme
  2. must be accessible to all staff - no inner-circles
  3. must be recognised across the business
  4. must be easy to implement
  5. must be publicly viewable
  6. must be linked to tangible benefits (e.g. enhanced peer recognition)
  7. must be cheap (if not free)!
See how we get on in future posts ;)  .....

Badge collection site links

A few links if you want to explore further: which provides some basic open-source tools to accomplish intrinsic badge reward set-ups.

Saturday, 24 November 2012

Management of requirements management!

Quote this week from Bambofy - "you deal with the boring end of software development".

I think I agree.

This week has taken a bizarre twist in that it's been a week of 'requirements management' (RQM) issues. Two areas emerged: the first around how to specify requirements appropriately and the second on reuse of requirements. You have to admit that sounds pretty boring, doesn't it!

But when you try to get your head round these things, the situation rapidly gets complicated. A problem emerges around the sheer number of 'requirements' that can be generated if you don't have a strategy for RQM. Let me try and illustrate.

Even for a simple system there is an exponential increase in the number of requirements the more you need to partition things. Let's not use a software example, as they tend to be a bit obtuse, but take a house build instead. Hopefully we can all relate to that a bit better. I'm assuming in all this that everyone is signed up to undertaking some form of RQM as part of the design, of course! The first decision is how you are going to represent the 'systems' involved, as you will need to be able to allocate the requirements throughout the house in some manner. If you don't get this bit correct you have already increased the gradient of the requirements growth curve. In our house example you could take each room as a 'system', or each major element of infrastructure as a 'system', or one of many other variations. Let's take the infrastructure view, as this is more akin to what you would do for more complex assets: railways, oil platforms, power plants etc.

So off we go doing our requirements capture exercise - don't worry I'm not going to do the whole thing - even I'm not that sad!

There are at least say 10 major areas to consider, e.g. 1 water, 2 electrical, 3 heating, 4 lighting, 5 civil structure, 6 plumbing, 7 waste treatment, 8 accessibility, 9 safety, 10 useability ....... etc.

Each of these areas breaks down into at least 10 further sub-areas, e.g. for  1 water these could be 1.1 sinks, 1.2 baths, 1.3 toilets, 1.4 hot water, ..... etc.

Even for this relatively simple example we already have 10x10 or 100 sub-areas to allocate requirements to. We could then easily envisage coming up with say 10 main requirements for each of these sub-areas and at least a further 10 sub-requirements for each main requirement. You can see where this is going - we now have 100 (sub-areas) x 10 (main) x 10 (sub-main) or 10,000 requirements to allocate and track. On top of this it is likely that we would need to allocate a set of 'attributes' to each requirement, so that we could also track certain types of requirement rather than just which area they are allocated to - attributes like environment, performance, safety, quality etc., which could again easily add up to 10 minimum. So - are you still awake? - in total, without even trying, we have got ourselves into a situation where we are reporting and tracking 100,000 items - just for a house!

Serious problem eh - if you are not careful this is also serious job creation!

This number assumes also that you can clearly specify your requirements in the first place - if not you could easily start with (I have seen this) 100 top-level requirements leading to 1,000,000 items to manage - good luck with that one.
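The arithmetic behind those numbers can be checked in a few lines - a sketch of the worked example above, nothing more:

```python
# Requirements growth for the house example: each level of partitioning
# multiplies the item count by roughly 10.
areas = 10       # water, electrical, heating, ...
sub_areas = 10   # e.g. water -> sinks, baths, toilets, hot water, ...
main_reqs = 10   # main requirements per sub-area
sub_reqs = 10    # sub-requirements per main requirement
attributes = 10  # environment, performance, safety, quality, ...

requirements = areas * sub_areas * main_reqs * sub_reqs
tracked_items = requirements * attributes
print(requirements, tracked_items)  # 10000 100000

# Start with 100 poorly specified top-level areas instead of 10 and:
print(100 * sub_areas * main_reqs * sub_reqs * attributes)  # 1000000
```

Five factors of ten, and you are tracking a hundred thousand items before a single brick is laid.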

That is why it is imperative that you have a rationale for management of your requirements management. And, no, you don't just have to purchase a requirements management software package.

You then have to ask yourself, if you tick all the requirement boxes, is your built system the one you wanted - would you want a builder to manage the build of your house in this way - or would you rather have the build project overseen by a civil engineer?

In the overall scheme of things it's still pretty boring - but critical to get right!

Now some of these requirements can surely be reused on the next house - but which ones ;)

Saturday, 17 November 2012

Analytical taxonomies - appropriate analysis

Having had a pop at approaches to 'Big Data Analytics' based around spreadsheets in the last post, the question has to be "so what does appropriate analysis look like?"

In my various internet wanderings this week I came across a couple of articles that for me give a glimpse into what the future should look like.

The first is by Jim Sinur in an entry on applying analytics to processes and not just data, follow the link for more detail;

In fact, thinking through exactly what you are expecting your 'processes' to deliver rather than simply feeding the process, is key, as is 'unmanned' optimising and management of interactions between them!  

The figure below illustrates some of the analytical taxonomy that could be used.

As well as the process analytics elements outlined above the sheer volume of data to work through will also require new computing techniques. The second article I came across by Rick Merritt in EETimes illustrates the type of computing power that will be available;

which by the sounds of it is 40,000 processors working in a parallel configuration using neural net and fuzzy logic techniques to crank out 5.2 tera-operations per second!

So the Big Data Analytics future, for me, contains complexity in both analysis techniques and computational systems. A bit more than a few unconnected spreadsheets.

Looks exciting eh!!

Sunday, 11 November 2012

Big Data Analytics - inappropriate analysis

I thought I wasn't going to rant about this again for a while but three encounters this week have fanned the flames again.

I don't know how many Twitter and LinkedIn posts I have made on Big Data + Analytics over recent months, but it's definitely an area on an increasing trend in the business world. However, the reality is most of the business world struggles to extract any meaningful 'knowledge' from all of the 'data' that is 'collated' from business activities.

Why is that? Because the main analysis tools used are spreadsheets - and in particular - Excel. Now don't get me wrong, Excel is a phenomenal software package - but in my view, in some instances it is being used to produce models that are way outside its domain of appropriateness.

What do I mean by that? Well - three events this week have highlighted the tip of the iceberg for me. All of these are being addressed, I hasten to add, but I don't think I am alone in my experiences.

1 The first was when I was sat in a meeting looking at the projected image of some analysis using Excel, upon which we were making decisions that affected the direction of the business. One of a myriad of cells was being concentrated on - and the value in the cell was 'zero'. Everyone in the room knew that wasn't right, so we all sat there for 5 minutes discussing why this was so. Now this could have been a simple mistake somewhere on one of the supporting sheets, but the effect it had was to throw the whole of the analysis into question. How could we then believe any of the other numbers? Therein lies the first 'rant fact' - it is difficult to manage traceability in these sorts of tools.

2 The second was when I was asked to comment and add to a sheet for some supporting data input into a model. Someone was collating data to help build up a spreadsheet model and was emailing around for changes to the data sheet. Of course no one person holds all of this in their head so people were passing on the sheet for refinement. The version that came to me for input was 'blah filename - Copy - Copy - Copy'. Therein lies the second 'rant fact' - if not part of some wider process, configuration and version control can get out of hand pretty quickly.

3 The third, and for me the most serious, came from checking through to try and understand a part of a model that didn't appear to be functioning as expected (see 'rant fact' 1). When I looked into the sheets in question - without even going into any of the equation set being used - I found one sheet with 100 columns and 600 rows of manually input data entries - that's 60,000 places for making an input error on that sheet alone, and there were more sheets! Therein lies the third 'rant fact' - data quality within this environment is difficult to control.
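To put 'rant fact' 3 into perspective: 100 columns by 600 rows is 60,000 manual entries on one sheet, and even a crude automated check would catch the worst of them. A sketch only - the validation rule here is entirely invented, real checks would depend on what the model expects:

```python
# Hypothetical sanity check for a manually-filled data sheet.
# Rule (invented for illustration): every cell must be a non-negative number.

def cell_ok(value: str) -> bool:
    """A cell passes if it parses as a number >= 0; blanks and text fail."""
    try:
        return float(value) >= 0
    except ValueError:
        return False

def suspect_cells(rows):
    """Return (row, column) coordinates of every cell that fails the rule."""
    return [(r, c)
            for r, row in enumerate(rows)
            for c, value in enumerate(row)
            if not cell_ok(value)]

sheet = [["1.5", "2.0"],
         ["oops", "-3"]]  # a typo and a rogue negative value
print(suspect_cells(sheet))  # [(1, 0), (1, 1)]
```

Ten lines of checking code against 60,000 chances to mistype: that trade seems worth making.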

The issue is that Excel in particular is so easy to fire up and start bashing away at, that we forget that we are in some cases building complex calculation engines. In some instances these engines are not using any 'design' process at all. There is no overarching systems design process and even at a simplistic level there is no recognition of fundamental modelling techniques that would improve modelling and therefore output quality, namely, consideration of the following;

1 Functional model development - what is the sheet set up to do - even a simple flowchart would help, never mind some functional breakdown of the calculation set.

2 Data model development - what data, where from, what format - these views force thinking about quality control of data; a database maybe!

3 Physical model of the hardware - how does the overall 'system', including data input, connect, store and process the information.  Maybe using email and collating feedback on a laptop is not the best system configuration.

All these activities add time and cost to model development and because their results are intangible and difficult to measure can get left out in the rush to get the answer out. However, the question is, would you put your own money at risk on the basis of this 'answer'?

What is the solution? Well, certainly don't let learner drivers loose in the F1 racing car for a start - but there must also be some way of providing an easily accessed development environment that can be used to translate formulae into readable and understandable code - formula translation - now that could catch on (sorry, couldn't resist!).
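As a taste of what that 'formula translation' might look like in practice: lift an opaque cell formula into a named, documented, testable function. The spreadsheet formula and the overtime rule below are invented purely for illustration:

```python
# Hypothetical spreadsheet version: =IF(B2>40, B2*C2*1.5, B2*C2)
# The same calculation translated into readable, testable code:

def weekly_pay(hours: float, rate: float) -> float:
    """Pay at time-and-a-half on all hours once the week exceeds 40."""
    if hours > 40:
        return hours * rate * 1.5
    return hours * rate

print(weekly_pay(38, 10.0))  # 380.0
```

The point is not the rule itself, but that the name, the docstring and a couple of test cases survive in a way that cell B2 never will.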

Saturday, 3 November 2012

To blog or to curate - that is the question?

More a thought for the day this one.

You definitely need a strategic approach to get the most out of all of this social media capability. There is so much to go at you can quite easily become social app weary. Not to mention spending your whole life trawling through the various information feeds!

Check out Guy Kawasaki's view on the following link for a more 'rounded' assessment

Which is great, but what are you going to do with all this 'networking' data and information, just leave it all hanging out there?

That is why I believe you need some sort of strategic goal - something that all of the collating and curating works towards. Currently, myself and one of my trusted advisors (JB, you know who you are) are having a go at feeding and filtering information related to setting up a consultancy business. It is something I have lived through, so I can do the curating, and something JB is interested in, so he can do the editing. The ultimate goal is to produce a book which effectively has been filtered through the process we are following.

The process at the moment goes like this;

  1. Curate the material on
  2. Select relevant entries for inclusion in the book
  3. Do an initial organisation of the information based upon business content
  4. Enter these into Storify story repository
  5. Arrange the content in Storify
  6. Produce the narrative around this content
  7. Flesh out the narrative to produce the book
  8. Publish as an eBook


Who knows what it will produce - then the question is - can you automate this and produce a book automatically - what!! process link -

Storify process link - work in progress don't forget -

Or is life too short ..... at least its fun trying?

Saturday, 27 October 2012

Quality of the software quality standard!

Just for completeness below are the current set of 'metrics' recommended from the software quality standard.

You can try and demonstrate 'progress' using these and some smoke and mirrors (sorry - project review document) but most folk would start to glaze over.

 ISO 9126-3:2003 (factors/criteria) properties/sub properties referenced from
You are probably thinking I haven't put any of these in place for my coding - join the club - we have probably been distracted by actually trying to solve the problem ;)

End of software progress rant.....

Friday, 19 October 2012

One year on.

Can't believe it but have been doing this for a year!

What I thought would be a pretty old school re-introduction to Fortran computing has turned into a fantastic kaleidoscope of programming, networking and cloud computing activities. It's not exactly been the structured journey originally planned, but rather a meander around various avenues as and when I came across them.

One frightening realisation was that it didn't take long to get back into a Fortran 'flow' - very enjoyable - mathematical routines the lot. Which was a great re-starter! However, the most exciting part of the past year has been the introduction to new media for sharing and networking of 'knowledge'. The brilliant thing is that this part of the revival has cost absolutely nothing in terms of application costs. The biggest cost was in the time invested in development of the content and network. This however, is part of the attraction of course!

Quick summary of online things, including an estimate of the 'value' of each;

  1. LinkedIn - essential, nuff said, 10/10
  2. Storify - excellent repository and for thread building - 7/10
  3. - very easy for collation of things - 7/10
  4. Google+ - I like this as it connects me to all other G-things - 9/10
  5. Google Sites - brilliant free web-site builder - free to anyone - 10/10
  6. Google Reader - great collator of news feeds - 6/10
  7. Google analytics - great for monitoring web-site activity - 6/10
  8. Twitter - original thought that this would be rubbish but turned out to be fantastic for current news - 10/10
  9. Tweetdeck - what a dashboard - 9/10
  10. Trello - came out of the blue - project/activity management tool - now essential 9/10
  11. Corkboard - the one that fell by the wayside - reminder post-it note site - 4/10
  12. Blogger - without which you wouldn't be reading this year's worth of diary - 10/10
  13. GitHub - code repository - essential - one with great potential for the future 8/10
  14. Dropbox - absolutely brilliant - 10/10
  15. Photobucket - saved me hours copying photos between machines - 6/10

Wow - that's a pretty exhaustive list but still only scratching the surface - you definitely need a 'strategy' for managing all this - otherwise burnout will ensue. In my view the trick is to have an 'approach' for each of these and for how they fit together - more of that in future posts.

With thanks to all - named in past posts - for pointing me in the direction of these new worlds!!

Saturday, 13 October 2012

A million dollar idea!

Cont' ..... from last blog post.

Having mulled over the problem of measuring software development for a week, and having performed extensive literature searches (that's Google and Wikipedia), I think I've got one - a million dollar idea that is!

Maybe I shouldn't tell .... hey ho - won't be the first time.

So the best the extensive research could come up with was, essentially, that you need to define some 'metrics' that you then monitor, for example;

  1. metric - number of lines of code written - issue, doesn't this just measure the efficiency of the coding team?
  2. metric - number of bugs corrected - issue, same as 1 above?
  3. metric - structured processes in place - issue, is anyone following them?
  4. metric - verification and validation testing - issue, not bad but does it measure progress?
  5. metric - timelined project planning - issue, will you (or maybe you will) plateau at 90% complete?
  6. metric [for 'agile' development processes such as RAD, DSDM, and other HP (Hackers Paradise ;) processes] - small programme elements, and the novel concept of speaking to the project team! - issue, again not bad but am I going to tell you I have a stuck bolt and it's going to take me at least a week to fix it?
  7. etc.
Note to self: must remember the 'agile' terminology for future reference and use on Spreadsheet development projects!

All this is still pretty intangible for the 'manager' who just wants to know if it's going OK and whether things will be finished on time. Not an unreasonable question. So - there must be something better and more user friendly than all this metric stuff (which is useful/essential, don't get me wrong). Being software geeks, surely there is some software available that does this for you? I've not come across anything more than apps that mechanise the above 'metrics', which is not the right answer in my view.

So I started thinking about how a civil engineering project goes about doing this - not that I am a civil engineer - but that's the point! Say the project was the construction of a house: even I could walk along to the site and have a look. Is it still a hole in the ground, have the foundations been put in place, is the roof on? These are all tangible milestones that can be viewed by anyone. You could look at the project plan and progress charts, measure time on the job, ensure the contractor has processes in place for delivery - all metrics like the software ones. However, there is no substitute for going and having a look at the site! So what is the software equivalent of 'viewing the site' then?

Taking the civil engineering analogy a step further - and probably stretching it a bit - could we not present things a little differently, say;

  • hole in the ground - SW equivalent,  project plan and development team in place,
  • foundations in place - SW equivalent, detailed design completed,
  • topping out - party time - SW equivalent, major functional components completed and VnV'd,
  • final fitout - SW equivalent, all input/output routines completed,
  • etc

What would then be needed is a slick piece of software that puts this into some 'house build' type view of the project. The metrics would then be linked to something more tangible, something that anyone can understand. If you don't have the roof on, for example, then you could have major delays if the main functional requirements have not been coded. There is no point in fiddling about with the fitout if the roof still needs completing (critical path analogy).
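A first cut of that view needn't be slick at all. A few lines mapping each 'house build' milestone to its software equivalent already give something anyone can read - milestones taken from the list above, completion flags invented for the example:

```python
# Sketch of a 'house build' progress view for a software project.
# Each entry: (house milestone, software equivalent, done?).
milestones = [
    ("hole in the ground", "project plan and dev team in place", True),
    ("foundations in place", "detailed design completed", True),
    ("topping out", "major functional components completed and VnV'd", False),
    ("final fitout", "input/output routines completed", False),
]

def progress_report(items):
    """Render a checklist plus an overall milestone count."""
    done = sum(1 for _, _, finished in items if finished)
    lines = [f"[{'x' if finished else ' '}] {house} -> {software}"
             for house, software, finished in items]
    lines.append(f"progress: {done}/{len(items)} milestones")
    return "\n".join(lines)

print(progress_report(milestones))
```

Even this crude checklist answers the manager's question ("is it going OK?") in terms a non-programmer can inspect, which is the whole point of the analogy.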

Et voila - you have a visualisation that anyone can view and understand.

Well something like that anyway. Now, where do I collect my money .....

Sunday, 7 October 2012

Invisibility cloak.

I have been hit with a couple of software development questions this week. Given what has happened with the West Coast Main Line franchise rebid I am even more concerned about the level of development and checking processes within software related projects. Is software quality being compromised due to the current climate of quick turnaround on jobs and the need to keep costs to a bare minimum?

The West Coast issue seems to be related to spreadsheet use - of which past posts give my view on that so I won't bang on about those any more here. However, this week, a couple of other queries have also raised alarm bells, and these are related to main stream software projects on critical infrastructures!

Both questions were effectively related to how can clients assure themselves that the right level of progress is being made with complex software development. One issue was around ensuring the right level of reliability of a particular code and the other was around assurances to meet delivery timescales.

How do you measure software development 'progress' in a consistent manner? Particularly with the variety of development methods that can be called up, for example, a list taken from Wikipedia states;

"..., the software development methodology is an approach used by organizations and project teams to apply the software development methodology framework (noun). Specific software development methodologies (verb) include:
Or some homemade variant thereof!"

So - just how do you visualise progress on a software development activity? Using verification and validation monitoring, ensuring timelined activities, timesheets of developers or a variety of other metrics? So many processes with so many variations of 'progress'.

to be continued.....

Friday, 28 September 2012

Another world opens up!

Just when I thought I was doing really well having built a web site with Google's Sites application (see previous posts), one of my trusted advisors introduced me to Drupal. Having just returned from Brussels it sounded a bit like a Belgian beer, so it had to be tried out. This is professional web site development made easy - and open source - i.e. free - which is critical to getting a high score on my marking scheme.

Check out the link to the Drupal site

It's a little bit more involved than Google site building - though Google also hosts your site for you - free - which is another major plus - but the URL you get assigned isn't so memorable. I don't really care about such niceties though.

Sitting with someone who knew what they were doing, Drupal enabled the building of a brand new professional-looking site in literally a couple of hours.

I believed you had to be crashing out HTML for months to get a site up and running. Technology, who'd have thought it!

Saturday, 22 September 2012

Speed test.

Spent this week at the Innotrans premier rail conference in Berlin, which has meant limited time for any extra-curricular computing nerdery, but I did test something which turned out to be a bit of a shock.

Some of the stands at this event were works of art - and some were almost the size of small towns, complete with coffee shop, bar, bleacher seating and live entertainment. God knows how much these things cost but they are well received by the attendees. Anyway, a thought came to me while wandering around - there should be a competition for the best stand. To the best of my knowledge there wasn't an official one organised - it would probably be far too much effort and just end up cheesing off some stand owners. So I thought I would do my own, and try and record it using my new-found social media tool set!

Before I go any further I must point out that this survey was NOT scientific in ANY way - entrants were selected simply along my timeline through the event and purely based on my personal 'gut' reaction at the time of passing the stand.

So the plan was to photograph stands as I wandered around, then select a stand for an award based upon what I had come across. Photos and awards were then Tweeted to #innotrans and later Storified into an award ceremony in a Storify story! The whole process probably only took me, say, 5 minutes per award to manage. The compiled and 'published' award ceremony can be found on the following link;

Well the strange thing was that I started off doing this as a bit of fun and to see if I could generate some 'e-traffic'. Fortunately I kept the awards 'sensible' and 'clean'  - could have taken a completely different route for some of the stands if I had been so inclined believe you me.

Anyway, the 'award ceremony' was carried out on the last day, with me Tweeting the final winners and then the link to the Storify site. All done from the Innotrans business lounge - fitting for an award ceremony I thought (what a great place - free wifi and all that). The shocking thing was that within seconds the story had been re-tweeted by @EURAILmag, @OfficialHitachi, @thalesgroup and @LR_Rail to over 6000 potential readers! A slight panic set in - until very positive comments came back - phew!

Quite a buzz!

Friday, 14 September 2012


OK so I've not taken the time to check out some of the web programming tools so far. Too busy trying to get back into the swing of real computing! But I got a flea in the ear from one of my trusted advisors - the Lancaster computing one - and was 'advised' to check out a site called w3schools, as I was asking some dumb questions on web page design.

Anyway - I did just that one spare lunchtime - what a revelation!

HTML looks just like the old WordStar editor I used to use. Shock, horror - a markup language, Ctrl-B and all that. Well at least I could see myself getting my head around the simple formatting elements of the language. Nothing new under the Sun eh?

So now the situation is 'Cloud computing' looks a lot like an old mainframe setup and HTML turns out to be an old editor. There is hope yet for us ancients!

The w3schools site is pretty useful for all those questions you wanted to ask about the multitude of languages but couldn't be bothered to - I'm sure I'll be using it again!

Saturday, 8 September 2012

Information overload ....

This week has been more of a maintenance and upgrade week on the revival path. I am advancing to the next level on the computing and social networking fronts.

On the networking front I have discovered Tweetdeck - why didn't someone tell me about this before! This is the dashboard of dashboards that blows your mind with all the information that can be displayed. No doubt our Human Factors team would tell me off for my set-up - cognitive block and all that. But wow, it definitely gives you the feel that you are looking into the networks you have set up. It's also free!!

On the serious computing front I have re-energised the ROBFIT GitHub repo by uploading the next routine BREAD.FOR that is used for reading the raw data - so not that stunning but essential! I have also discovered that you can link GitHub to your LinkedIn profile. LinkedIn is becoming more useful by the day at this rate - it could well become the hub at the centre of all my various meanderings across the web.

Data dump - information reset ....

Friday, 31 August 2012

Communication breakdown.

Progress on the swimming waveform analysis front is grinding to a halt as communication between the various team members has become difficult. A combination of time zone differences (12 hours between the 'thought leader' and the 'development team'), recovery from jet lag and unfamiliarity with the code sharing site have all contributed to the situation.

How best to get a grip on things again?

The main issue has been the way development has had to be carried out using a 'rapid prototyping' approach. Essentially get something up and running, review what it looks like, then tweak (sometimes TWEAK), then review, then tweak again ..... continue ..... until the right answer. Which all worked fine when all parties were together in one place. However, now that the review and development cycle has become protracted, the 'thought leader' is struggling to see the code run and the 'development team' is wondering what happens next.

If you read the text on this software engineering practice everything works seamlessly and Bob's your aunt, you end up with the perfect code (sorry - 'app' - fully documented too!). We are attempting to use an approved approach that will keep us out of the hacking arena. However, what it's not doing is helping much on progressing around the development cycles or on homing in on the final solution. You could end up going round and round and round and round ...... just another variation please.

What we need is a means of capturing emerging requirements and/or questions from the 'thought leader' that the 'development team' can then work on and reply to. Done in a simple and visible way so that all parties can immediately understand the level of progress, identify any issues, and know who is in the seat for resolving them. Progress around the cycle can then be maintained - hopefully.

Just need to get everyone using Trello properly now then ..... ;)

Saturday, 25 August 2012

Languages suck .... computer ones that is ;)

More a thought for the day, this one....

I have been both impressed and confused over the last few months by the sheer number of software languages that you appear to need to keep abreast of these days. I am easily into double figures on the count and I'm not even trying. What is going on?

In the good old days you would get by with BASIC, FORTRAN and possibly C but only if you were a real geek. Oh and then there was JCL (my god - I'd almost forgotten about that). These days the number has ballooned and careers have been made upon the back of the various mutations of operating systems and their associated languages.

In past blogs - as you can read - even the production of a simple package (sorry 'app') to analyse a waveform has required quite a bit of soul searching on which language to use. Surely what is needed is some overarching operating system of languages to act as a master controller - hey maybe nobody has thought of that one!

With the IoT (Internet of Things) emerging, what is the best way forward - or are we doomed to even greater mushrooming of languages?

The balloon (or my mind) must pop at some point.....

Saturday, 18 August 2012

Data presented!

Well, not much progress last week as the holiday return trip home got in the way - it was a long way back!

There was a little bit of movement though - the week was spent checking the code. I was pleased to note that even with all the new programming language bells and whistles we still ended up debugging using the good old print statement. Which was actually very simple in Python - unlike my memory of Fortran 'WRITE' statements - though I'm sure all that has changed by now - at least I hope so.
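For anyone who never saw the Fortran side of this, here is roughly what that print-statement debugging looks like in Python - a sketch only, with made-up variable names rather than anything from the real analysis code:

```python
# Quick-and-dirty print debugging, the way we checked the waveform code.
# (Illustrative only - the data and names here are invented.)

samples = [0.0, 1.2, 3.4, 2.1, 0.5]  # a pretend pressure waveform

for i, value in enumerate(samples):
    # One line in Python...
    print("channel %d: pressure = %.3f" % (i, value))

# ...versus the Fortran-era equivalent, roughly:
#       WRITE(6,100) I, VALUE
#   100 FORMAT(' CHANNEL ', I5, ': PRESSURE = ', F8.3)
```

No format statements to hunt down three hundred lines away - that alone is progress.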

Anyway - here is a print out of 5 right (top) and 5 left (bottom) hand pressure waveforms extracted from the first dataset.
The work going forward is to start to analyse the structure of these to try and come up with an optimum shape that will deliver maximum power for the swimmer. A swimmer can then use this information in real time to make adjustments to their technique.
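To give a flavour of the sort of per-stroke numbers we will be after - purely a sketch with invented data, not the real dataset or method - peak pressure and the impulse (area under the curve) of one stroke might be computed like this:

```python
# Sketch: per-stroke statistics from a pressure waveform.
# The times/pressures below are invented for illustration.

def stroke_stats(times, pressures):
    """Return (peak pressure, approximate impulse) for one stroke waveform."""
    peak = max(pressures)
    # Trapezoidal integration gives the impulse (area under the curve).
    impulse = sum(
        0.5 * (pressures[i] + pressures[i + 1]) * (times[i + 1] - times[i])
        for i in range(len(times) - 1)
    )
    return peak, impulse

times = [0.0, 0.1, 0.2, 0.3, 0.4]
pressures = [0.0, 2.0, 5.0, 3.0, 0.0]
peak, impulse = stroke_stats(times, pressures)
print(peak, impulse)  # peak 5.0, impulse 1.0
```

The real 'optimum shape' question is obviously harder than this, but numbers like these are the starting point for comparing strokes.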

I don't know much about swimming but this is interesting stuff - the appliance of science!

Friday, 10 August 2012

Data extracted!

Well, much to my amazement Bambofy has managed to build a code to pull out the swimming pressure data from the hand sensors. Not that I doubted he could do it, just that I didn't think Python was appropriate for doing this extraction. Just goes to show - old dogs and new tricks can happen.

My view of Python - from looking over the shoulder - is that it seems pretty flexible for string manipulation, good for object coding, simple to produce plots (we would still be loading the plotting library in Fortran - the Python variant was knocked up in 10 mins, no lie!) and it can add up. Not sure what editor is being used to build the code but it puts out a great psychedelic screen for doing the edits!
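As an example of that string flexibility - the record format here is invented, I don't know the real sensor output layout - parsing a line takes a couple of lines, where Fortran would have needed a carefully crafted FORMAT:

```python
# Sketch: parsing one line of (hypothetical) sensor output.
# The "hand, time, pressure" layout is made up for illustration.

line = "LEFT, 0.13, 4.72"

hand, t, pressure = [field.strip() for field in line.split(",")]
print(hand, float(t), float(pressure))
```

Split, strip, convert - done. That is the sort of thing that won me over from looking over the shoulder.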

Not sure how it will get on with the next phase which will involve a bit more complex mathematics - we will see......

Friday, 3 August 2012

Data extraction!

Not by me - Prof Jefferies has appointed Bambofy to help sort out the analysis of the swimming data.

First up is the data extraction so that we can pick out the waveforms associated with each hand. Given the way the data is stored even doing that is turning out to be a bit of a trauma. This is being done using Python code - so that's the end of me - PJ was going to do this in Fortran (same code era as me you see) but we have been overruled.

The monkey is now on Bambofy's back to split the waveforms into left and right hand sets - then the fun starts.....
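Something like this is what I imagine the splitting boils down to - assuming, purely for illustration, that each record carries a tag saying which hand it came from (I don't know the actual file layout):

```python
# Sketch: splitting glove records into left- and right-hand waveform sets.
# The records and their ("L"/"R", pressure) layout are invented for illustration.

records = [
    ("L", 1.2), ("R", 1.1), ("L", 3.4), ("R", 2.9), ("L", 0.7), ("R", 0.8),
]

left = [p for hand, p in records if hand == "L"]
right = [p for hand, p in records if hand == "R"]
print(left, right)
```

Two list comprehensions and the monkey is off the back - in principle, anyway.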

Sunday, 29 July 2012

Holiday computing!

All a bit sporadic for the next few weeks while holidaying takes precedence.

Think I may have a new application for Robfit fittery - nothing to do with gamma rays this time but swimming related!

The essence of the fitting is based around analysis of swimmer force profiles which have been measured using a 'glove' worn by swimmers. Analysing the waveforms from these gloves is being used to improve swimming technique. A better explanation of the physics behind all this is provided in an article by friends Stuart and Colleen in the Journal of International Society of Swimming Coaching;

Full reference;

The Effect of Real-Time Feedback on Swimming Technique (page 41)

"We examine a new approach for accelerating the learning of efficient stroke mechanics: using a flume equipped to deliver multi-perspective live video footage and force analysis data simultaneously to the swimmer and the coach. A preliminary study of the effectiveness of this approach with a small group of age group swimmers shows gains in ability to generate force of around 20% and to improve swim velocity with only two hours of application."

where you can see the profiles produced by the gloves.

May take a few beers to come up with the optimal way of analysing the waveforms mind...;)

Saturday, 21 July 2012

ROBFIT it works!

So how does it work? Just realised I launched into GitHub loading without setting out how the code operates.

ROBFIT - that's the Fortran code used to find peaks within a complex spectrum, such as a gamma ray spectrum where the code originated. The idea behind the code is that it is designed to find the very smallest peaks (signals) in a spectrum and it does that by employing a ROBust FITing technique.

There are many spectral analysis packages on the market; however, these tend to require the spectrum to be broken into small sections, each of which is then fitted separately. This creates a couple of major problems. One is that if you have a large spectrum then substantial user intervention is required, so the fitting takes longer to complete. Secondly, and more importantly, splitting the background into sections may misrepresent the background continuum. Small details in the spectrum could therefore be missed.

What's the solution?

Use ROBFIT - sorry - but yes do - the code gets round these problems by separating spectra into two functions: background and foreground. The background contains slowly varying features and the foreground contains high-frequency content (peaks). Accurate separation of these functions allows the code to detect small peaks and decompose multiple-peak structures. ROBFIT iterates on background and foreground fitting to move smaller peaks from the background to the foreground.

A critical feature is that the code fits the background over the entire spectrum as a set of cubic splines with adjustable knots - a knot being a place where two cubic splines meet. More on this in a later post. Fitting over the whole spectrum range allows the background features to be continually fitted with fewer constants, resulting in a more accurate representation than is possible when fitting in small sections.

Two algorithms make this operation possible. The first is a data compression algorithm which uses a robust averaging technique to reduce the contributions to the background from peaks and spurious high points. The second is a minimisation algorithm (the SMSQ routine) that minimises chi-square with respect to the constants of the background and foreground. With the background represented as a smoothly varying function, peaks can be identified as regions of the spectra that lie above this background curve - simples!
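To make the idea concrete, here is a toy Python version of the background/foreground split - nothing like ROBFIT's cubic-spline and chi-square machinery, just a crude robust moving-median background, so that peaks show up as channels sitting well above it (the spectrum data is invented):

```python
# Toy background/foreground split. A moving median is robust in the sense
# that an isolated peak barely shifts it, so the peak stands proud of the
# background estimate. ROBFIT's real background is a whole-spectrum
# cubic-spline fit - this is only the flavour of the idea.

def moving_median(data, half_width):
    """Crude robust background: median over a sliding window."""
    background = []
    for i in range(len(data)):
        lo = max(0, i - half_width)
        hi = min(len(data), i + half_width + 1)
        window = sorted(data[lo:hi])
        background.append(window[len(window) // 2])
    return background

spectrum = [10, 11, 10, 12, 50, 12, 11, 10, 11, 40, 11, 10]
background = moving_median(spectrum, 2)

# Foreground = channels sitting well above the background estimate.
peaks = [i for i, (y, b) in enumerate(zip(spectrum, background)) if y - b > 10]
print(peaks)  # the two spikes at channels 4 and 9
```

Note how the spikes at channels 4 and 9 barely disturb the median, which is exactly the 'robust' property ROBFIT exploits on a much grander scale.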

So now you know.....

Saturday, 14 July 2012

Momentum ..... increasing.

Update of the Twitter social universe side of the revival.

Approaching 100 Twitter followers has been an interesting journey. The 100, which is 'pulsating' all the time, is homing in on an excellent set of communities. Having been directed to the use of Lists by @sheffters I now have a way through the noise of Twitter land. Though I thought I was listing these contacts for my own use I found out this week that the person who I have listed also gets informed - not a problem really, in fact had a few nice messages from people as a result!

These contacts have taken a bit of a tortured route though, which makes me a bit suspicious that I am being spoon-fed by the Twitter machine. I started off with lots of 'follows' from nice young ladies - at least that's what they looked like - which I didn't follow back, I will have you know! Welcome to Twitter. However, these soon 'unfollow'. If (like me) you have a plan for the use of Twitter then things start to get organised pretty quickly - well, it's taken a few months to get to this stage. Soon the follows become more relevant, subject area wise that is. If you then have a rationale for who you do follow back you end up with a pretty focussed set of information feeds. I'm probably driving my contacts on LinkedIn mad by posting links that I come across there too, but that's all part of that sharing thing.

The question now is what happens next, what happens with the focussed group, how can I ask a question of this set of individuals and not be lost in the noise?

Onward and upward as they say.....

Saturday, 7 July 2012

Next subroutine.....

I've almost forgotten what 'subroutines' do!

However, I managed to load onto GitHub the first routine called by the ROBFIT background fitting code BKGFIT detailed in previous posts.

The code BKLINK is now available for viewing. Though I think I should have put more comments into the code!

Anyway - this is another of the routines used in the background fitting process - it is essentially an input routine that reads user-defined values from a file called BKGFIT.MNU - which I still need to find!

Why bother fitting the background when the idea is to be looking for small peaks?

The code has been mainly developed around the fitting of gamma-ray spectra but can be used on any data set which requires the identification of peaks in among significant background 'noise'. Once you have identified what the background looks like and have represented it mathematically, the search for small variations from this representation is made easier. Exactly like the identification of the signal for the Higgs Boson reported upon this week. Blimey, I am topical - it wasn't planned!

Essentially the operation of the ROBFIT code follows a sequence;

  1. Read in data required to be analysed
  2. Fit the background (this can be a separate file or the code can be run 'all-up' with it fitting background and peaks)
  3. Search for 'channels' above a cutoff level
  4. Search for peak regions
  5. Identify peaks in these regions
  6. Refit all peaks in the regions
  7. Update the peak list
which seems a fairly straightforward sequence.
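The sequence can be sketched as a driver loop. Be warned: every function name and the flat background below are placeholders I've invented to show the shape of the flow - they are not the real ROBFIT routines:

```python
# Skeleton of the ROBFIT-style fitting sequence. All names are invented
# placeholders; the real code's background fit, peak search and refit are
# far more sophisticated than these stand-ins.

def fit_background(data):
    # Placeholder background: flat at the minimum value of the data.
    return [min(data)] * len(data)

def group_into_regions(channels):
    # Group consecutive channel numbers into (start, end) regions.
    regions, start = [], None
    for i, ch in enumerate(channels):
        if start is None:
            start = ch
        if i + 1 == len(channels) or channels[i + 1] != ch + 1:
            regions.append((start, ch))
            start = None
    return regions

def find_peaks_in_region(data, region):
    # Placeholder peak finder: the highest channel in the region.
    start, end = region
    return [max(range(start, end + 1), key=lambda i: data[i])]

def run_fit(data, cutoff):
    background = fit_background(data)                         # step 2
    channels = [i for i, y in enumerate(data)
                if y - background[i] > cutoff]                 # step 3
    peak_list = []
    for region in group_into_regions(channels):               # step 4
        peak_list.extend(find_peaks_in_region(data, region))  # steps 5-7
    return peak_list

print(run_fit([10, 10, 30, 10, 10, 25, 26, 10], cutoff=10))
```

Even this toy version shows why the sequence matters: a decent background first, then regions, then peaks within regions.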

Except it gets a bit more complicated......more on that later!

Saturday, 30 June 2012

Tracking the good stuff..

Bit of a diversion from programming but worth it.

Thanks to a bizarre combination of watching a programme on Caledonian Road (N9 London) on the BBC (excellent viewing for me), Tweeting that I had lived there for a few years, then Eleonora replying with a Tweet about a way of tracking Tweets and storing them as a story - I discovered an excellent new online tool for corralling all this fleeting internet information.

It's called Storify - absolutely worth checking out! So I'm using it to pull together the threads of 'knowledge' that I pick up from various sources such as Twitter, Google Reader etc. So far I have gone mad and have 4 'stories' running - 2 can be found on - one is an Alan Turing tribute and the other relates to collation of Innovation ideas! The other 2 stories are in draft and are just 'knowledge' dumps - probably post them anyway this week sometime.

So the online tools are shaping up like this for me;

  1. Twitter - full of junk until you can filter what you want through a trusted network then excellent access to current thinking!
  2. LinkedIn - interfacing with Twitter to share knowledge and for deeper discussions - the negative is it's a bit glacial in response time - the positive is that the feedback you get from a network is fantastic, a living encyclopaedia.
  3. Github - repo for all coding - best bar none for this type of thing!
  4. Blogger - this - effectively my diary.
  5. Storify - repository for pulling things together and building themes.
  6. My website (yes I am calling it that, much to the disdain of my lads) - for keeping track of where all this is located!

Now just need a good Fortran compiler - the one I have (must be free you see) is a bit too retro even for me;)

Friday, 22 June 2012

First module loaded!

Result - the first ROBFIT module has been loaded onto Github!

It only took me 4 hours to do that mind!

Setting up the ROBFIT repo (got to do the jargon right - that's short for repository on GitHub) was more of a lesson in retro. All command line control once you have downloaded Git,

mkdir robfit
cd robfit 
git init

type stuff - not sure whether to be pleased that I still follow this stuff or not - thought there would be some slick modern version to clickety-click and bob's your uncle it's all done for you. But no - maybe the advanced version comes with a green screen too ;)


git add BKGFIT
git remote add origin https://github.com/grandwizz/robfit.git
git push origin master

seemed to load the background fitting programme BKGFIT onto the site.

GitHub looks very, very useful by the way, once you have got over the learning curve!

Check it out.......more to come.

Friday, 15 June 2012

Reading my own words.

Bit odd - I have had to resort to reading the ROBFIT book to accelerate the learning.

Feels like I'm cheating - a bit.

So I have;

BKGFIT - which fits the background alone
FSPFIT - which fits the complete spectrum
RAWDD - displays the raw data
XCALIBER - x-axis calibration to energy
FSPDIS - display the full spectrum
STGEN - generate a standard peak

The order of events for the way the code works is as follows;

  1. Read and display the raw data (RAWDD)
  2. Generate a 'standard' peak from the peak data (STGEN)
  3. Fit the background (BKGFIT)
  4. Search for regions where there may be peaks (FSPFIT)
  5. Add a peak to the region (FSPFIT)
  6. Repeat 3,4,5 until there are no further peak regions identified 
  7. Convert the x-axis channel numbers into energies (XCALIBER)
  8. Display the fitting results (FSPDIS)
During the fitting the user has full control over the level to which the code will identify and attempt to fit smaller and smaller peaks.

Can't believe we did this on machines available at the time!

Saturday, 9 June 2012

Back to ROBFIT and the Fortran for a while!

So I have agreed with Bob that posting the code on GitHub is a good idea. However, getting a version of the code off the floppy discs proved to be an exercise in itself! It involved clearing the loft to try and find an old machine that could read the 3 1/4 discs. Found the machine and a miracle occurred - it fired up without any trouble - the drive worked - and I managed to copy a total of 30 discs with various versions onto a modern disc drive. Result!

On a roll, I selected the most recent version - copied it onto the machine I am typing this on (a Toshiba Satellite laptop - conveniently named for analysing Supernova data, I thought) - then found a '' file - another result! Can't for the life of me remember writing any of this build stuff - maybe Bob did it ;)

Tried to follow the instructions, which are;

"Welcome to Robfit

Book reference "The Theory and Operation of Spectral Analysis
                       Using ROBFIT". AIP 1991 ISBN 0-88318-941-0
        Robert L. Coldwell and Gary J. Bamford Univ. of Fla

The disks have a mk...hd.bat file in the root directory
First insert Essential and run MKESSHD.BAT
This creates the robfit directory and various subdirectories
with a set of test cases in them.
   Next insert the appropriate coproexe or nocoproexe disk
(depending on whether you do or do not have a coprocessor)
and enter the command RUNABLE.BAT.  This creates the subdirectory
runable under robfit on the hard disk.  Enter this subdirectory and
enter the command ROBFIT and read the book.  The test cases are labelled
ZTCASE1.SP (the data file) through ZTCASE8.SP (supernova data).
It is supposed to be obvious what to do next.  (...dis for display),
( to fit).


Er - nope - that didn't happen. Guess after 20 years things have changed. I am blaming Bob for not making it future proof ;))

So the plan now is to take the Fortran files one by one and try and figure out how to re-compile them for new machines.

I am enjoying this aren't I??

Saturday, 2 June 2012

Escalation modelling.....

Back to the offshore QRA modelling...

This is what we are trying to get our heads around at the moment!

This is the escalation modelling element of the QRA code. The intention is to extract this module so that we can reuse it in future applications.

We now just need a slick way of setting this up for future use that doesn't involve the sort of tortured sequence of ASCII characters that the spreadsheet models utilise - mind bending!