Saturday 24 November 2012

Management of requirements management!

Quote this week from Bambofy - "you deal with the boring end of software development".

I think I agree.

This week has taken a bizarre twist in that it's been a week of 'requirements management' (RQM) issues. Two areas emerged, the first around how to specify requirements appropriately and the second around reuse of requirements. You have to admit that sounds pretty boring, doesn't it!

But when you try to get your head round these things, the situation rapidly gets complicated. A problem emerges around the sheer number of 'requirements' that can be generated if you don't have a strategy for RQM. Let me try and illustrate.

Even for a simple system, the number of requirements grows exponentially with each level of partitioning. Let's not use a software example as they tend to be a bit abstract, but take a house build instead; hopefully we can all relate to that a bit better. I'm assuming in all this that everyone is signed up to undertaking some form of RQM as part of the design, of course! The first decision is how you are going to represent the 'systems' involved, as you will need to be able to allocate the requirements throughout the house in some manner. If you don't get this bit right you have already increased the gradient of the requirements growth curve. In our house example you could take each room as a 'system', or each major element of infrastructure as a 'system', or one of many other variations. Let's take the infrastructure view as this is more akin to what you would do for more complex assets: railways, oil platforms, power plants etc.

So off we go doing our requirements capture exercise - don't worry I'm not going to do the whole thing - even I'm not that sad!

There are, say, at least 10 major areas to consider, e.g. 1 water, 2 electrical, 3 heating, 4 lighting, 5 civil structure, 6 plumbing, 7 waste treatment, 8 accessibility, 9 safety, 10 usability ....... etc.

Each of these areas breaks down into at least 10 further sub-areas, e.g. for 1 water these could be 1.1 sinks, 1.2 baths, 1.3 toilets, 1.4 hot water, ..... etc.

Even for this relatively simple example we already have 10 x 10, or 100, sub-areas to allocate requirements to. We could then easily envisage coming up with say 10 main requirements for each of these sub-areas, and at least a further 10 sub-requirements for each main requirement. You can see where this is going - we now have 100 (sub-areas) x 10 (main) x 10 (sub) or 10,000 requirements to allocate and track. On top of this it is likely that we would need to allocate a set of 'attributes' to each requirement so that we could also track certain types of requirement rather than just which area they are allocated to - attributes like environment, performance, safety, quality ..... etc., which could again easily add up to 10 minimum. So - you still awake? - in total, without even trying, we have got ourselves into a situation where we are reporting and tracking 100,000 items - just for a house!

Serious problem eh - if you are not careful this is also serious job creation!

This number also assumes that you can clearly specify your requirements in the first place - if not, you could easily start with (I have seen this) 100 top-level requirements, leading to 1,000,000 items to manage - good luck with that one.
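If you want to play with those numbers yourself, here is a minimal sketch of the arithmetic (purely illustrative - the breakdown factors are the 10s assumed in the house example above, not data from any real project):

# Back-of-the-envelope sketch of how requirement counts multiply.
# The factors are the illustrative 10s from the house example, not real project data.

def tracked_items(major_areas, sub_areas=10, main_reqs=10, sub_reqs=10, attributes=10):
    """Return (requirements, total tracked items) for a given breakdown."""
    requirements = major_areas * sub_areas * main_reqs * sub_reqs
    return requirements, requirements * attributes

for areas in (10, 100):  # 100 corresponds to the '100 top-level' horror story above
    reqs, items = tracked_items(areas)
    print(f"{areas} major areas -> {reqs:,} requirements, {items:,} tracked items")

# Output:
# 10 major areas -> 10,000 requirements, 100,000 tracked items
# 100 major areas -> 100,000 requirements, 1,000,000 tracked items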

That is why it is imperative that you have a rationale for the management of your requirements management. And, no, the answer is not simply to purchase a requirements management software package.

You then have to ask yourself, if you tick all the requirement boxes, is your built system the one you wanted - would you want a builder to manage the build of your house in this way - or would you rather have the build project overseen by a civil engineer?

In the overall scheme of things it's still pretty boring - but critical to get right!

Now some of these requirements can surely be reused on the next house - but which ones ;)

Saturday 17 November 2012

Analytical taxonomies - appropriate analysis

Having had a pop at spreadsheet-based approaches to 'Big Data Analytics' in the last post, the question has to be "so what does appropriate analysis look like?"

In my various internet wanderings this week I came across a couple of articles that for me give a glimpse into what the future should look like.

The first is by Jim Sinur, in an entry on applying analytics to processes and not just data - follow the link for more detail;

http://blogs.gartner.com/jim_sinur/2012/11/13/is-unmanned-process-management-a-pipe-dream/

In fact, thinking through exactly what you are expecting your 'processes' to deliver, rather than simply feeding the process, is key - as is 'unmanned' optimisation and management of the interactions between them!

The figure below illustrates some of the analytical taxonomy that could be used.


As well as the process analytics elements outlined above, the sheer volume of data to work through will also require new computing techniques. The second article I came across, by Rick Merritt in EETimes, illustrates the type of computing power that will be available;

http://www.eetimes.com/electronics-news/4401143/Startup-demos-new-road-to-computing-at-SC-12

which, by the sounds of it, is 40,000 processors working in a parallel configuration, using neural net and fuzzy logic techniques to crank out 5.2 tera-operations per second!


So the Big Data Analytics future, for me, contains complexity in both analysis techniques and computational systems. A bit more than a few unconnected spreadsheets.

Looks exciting eh!!

Sunday 11 November 2012

Big Data Analytics - inappropriate analysis

I thought I wasn't going to rant about this again for a while but three encounters this week have fanned the flames again.

I don't know how many Twitter and LinkedIn posts I have made on Big Data + Analytics over recent months, but it's definitely an area on an upward trend in the business world. However, the reality is that most of the business world struggles to extract any meaningful 'knowledge' from all of the 'data' that is 'collated' from business activities.

Why is that? Because the main analysis tools used are spreadsheets - and in particular - Excel. Now don't get me wrong, Excel is a phenomenal software package - but in my view, in some instances, it is being used to produce models that are way outside its domain of appropriateness.

What do I mean by that? Well - three events this week have highlighted the tip of the iceberg for me. All of these are being addressed, I hasten to add, but I don't think I am alone in my experiences.

1 The first was when I was sat in a meeting looking at the projected image of some analysis in Excel, upon which we were making decisions that affected the direction of the business. One cell out of a myriad was being concentrated on - and the value in that cell was 'zero'. Everyone in the room knew that wasn't right, so we all sat there for 5 minutes discussing why. Now this could have been a simple mistake somewhere on one of the supporting sheets, but the effect was to throw the whole of the analysis into question - how could we then believe any of the other numbers? Therein lies the first 'rant fact' - it is difficult to manage traceability in these sorts of tools.

2 The second was when I was asked to comment on and add to a sheet of supporting data for input into a model. Someone was collating data to help build up a spreadsheet model and was emailing around for changes to the data sheet. Of course no one person holds all of this in their head, so people were passing the sheet on for refinement. The version that came to me for input was 'blah filename - Copy - Copy - Copy'. Therein lies the second 'rant fact' - if it is not part of some wider process, configuration and version control can get out of hand pretty quickly.

3 The third, and for me the most serious, came from checking through a part of a model that didn't appear to be functioning as expected (see 'rant fact' 1). When I looked into the sheets in question - without even going into the equation set being used - I found one sheet with 100 columns and 600 rows of manually entered data - that's 60,000 places to make an input error on that sheet alone, and there were more sheets! Therein lies the third 'rant fact' - data quality within this environment is difficult to control.

The issue is that Excel in particular is so easy to fire up and start bashing away at that we forget we are, in some cases, building complex calculation engines. In some instances these engines are built without any 'design' process at all. There is no overarching systems design process, and even at a simplistic level there is no recognition of the fundamental modelling techniques that would improve the modelling and therefore the output quality, namely consideration of the following;

1 Functional model development - what is the sheet set up to do? Even a simple flowchart would help, never mind a functional breakdown of the calculation set.

2 Data model development - what data, where from, in what format and of what type? Views like these force thinking about quality control of the data - a database maybe! (See the sketch after this list.)

3 Physical model of the hardware - how does the overall 'system', including data input, connect, store and process the information? Maybe using email and collating feedback on a laptop is not the best system configuration.
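To make point 2 a bit more concrete, here is a minimal sketch of what even a basic scripted data quality check could look like (purely illustrative - the column names and rules are invented for the example, not taken from any real model):

import csv

# Hypothetical rules for the example: every row needs a non-empty 'asset_id'
# and a numeric 'monthly_cost' in a sensible range. Neither column name comes
# from a real model - they are made up for the illustration.
REQUIRED_COLUMNS = {"asset_id", "monthly_cost"}

def check_data(path):
    """Read a CSV export of the input sheet and report every suspect cell with its row number."""
    errors = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
        if missing:
            return [f"missing columns: {sorted(missing)}"]
        for row_num, row in enumerate(reader, start=2):  # row 1 is the header
            if not (row["asset_id"] or "").strip():
                errors.append(f"row {row_num}: blank asset_id")
            try:
                cost = float(row["monthly_cost"])
                if not 0 <= cost <= 100_000:
                    errors.append(f"row {row_num}: monthly_cost {cost} out of range")
            except (TypeError, ValueError):
                errors.append(f"row {row_num}: monthly_cost '{row['monthly_cost']}' is not a number")
    return errors

if __name__ == "__main__":
    for problem in check_data("input_sheet.csv"):
        print(problem)

Even that much gives you a repeatable record of what was checked and exactly which rows are suspect - the traceability that went missing in 'rant fact' 1 - rather than 60,000 hand-filled cells and a hunt for the rogue zero.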

All these activities add time and cost to model development and, because their results are intangible and difficult to measure, can get left out in the rush to get the answer out. However, the question is, would you put your own money at risk on the basis of this 'answer'?

What is the solution? Well, certainly don't let learner drivers loose in an F1 racing car for a start - but there must also be some way of providing an easily accessed development environment that can be used to translate formulae into readable and understandable code - formula translation - now that could catch on (sorry, couldn't resist!).

Saturday 3 November 2012

To blog or to curate - that is the question?

More a thought for the day this one.

You definitely need a strategic approach to get the most out of all of this social media capability. There is so much to go at that you can quite easily become social app weary. Not to mention spending your whole life trawling through the various information feeds!

Check out Guy Kawasaki's view at the following link for a more 'rounded' assessment;

http://www.socialmediaexaminer.com/blogs-books-and-social-how-the-world-has-changed/

Which is great, but what are you going to do with all this 'networking' data and information, just leave it all hanging out there?

That is why I believe you need some sort of strategic goal - something that all of the collating and curating works towards. Currently, one of my trusted advisors (JB, you know who you are) and I are having a go at feeding and filtering information related to setting up a consultancy business. It is something I have lived through, so I can do the curating, and something JB is interested in, so he can do the editing. The ultimate goal is to produce a book that has effectively been filtered through the process we are following.

The process at the moment goes like this;

  1. Curate the material on Scoop.it
  2. Select relevant entries for inclusion in the book
  3. Do an initial organisation of the information based upon business content
  4. Enter these into Storify story repository
  5. Arrange the content in Storify
  6. Produce the narrative around this content
  7. Flesh out the narrative to produce the book
  8. Publish as an eBook

Simples!

Who knows what it will produce - then the question is - can you automate this and produce a book automatically - what!!
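For what it's worth, the eight steps above are really just a pipeline, so the skeleton of an automated version could look something like this (a pure thought experiment - every function here is a hypothetical placeholder, not a real Scoop.it or Storify API call):

# Thought-experiment skeleton of the 8-step curation-to-book process above.
# Every function body is a hypothetical placeholder - no real Scoop.it or
# Storify API is being called.

def curate_material():
    """Step 1: pull in the curated links/notes (placeholder data only)."""
    return [{"title": "Pick a legal structure", "topic": "setup"},
            {"title": "Find your first client", "topic": "sales"}]

def select_relevant(items):
    """Step 2: keep only entries worth including in the book."""
    return [item for item in items if item.get("topic")]

def organise_by_content(items):
    """Step 3: group entries by business topic."""
    grouped = {}
    for item in items:
        grouped.setdefault(item["topic"], []).append(item)
    return grouped

def draft_narrative(grouped):
    """Steps 4-7: arrange the content and wrap placeholder narrative around it."""
    chapters = []
    for topic, items in grouped.items():
        body = "\n".join(f"- {item['title']}" for item in items)
        chapters.append(f"# {topic.title()}\n\n{body}\n")
    return "\n".join(chapters)

def publish_ebook(text, path="draft_book.md"):
    """Step 8: 'publish' - here just write a draft file to disk."""
    with open(path, "w") as f:
        f.write(text)

if __name__ == "__main__":
    publish_ebook(draft_narrative(organise_by_content(select_relevant(curate_material()))))

The hard bits - the selecting, arranging and narrative - are exactly the human filtering steps, which is probably the real answer to the 'can you automate this' question.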

Scoop.it process link - http://www.scoop.it/t/how-to-set-up-a-consulting-services-business

Storify process link - work in progress don't forget - http://storify.com/grandwizz/for-those-who-want-to-quickly-set-up-in-business

Or is life too short ..... at least it's fun trying?