All things computing, software engineering and physics related! Now also including retirement stuff.
Friday, 15 March 2013
I've registered on Foursquare - shock horror - I have no idea why - I got bored and was playing with it!
It reminds me of clocking on at the mill - yes, I did work in a mill in a previous life, when we still had some in the UK that is!
However, I now have powers - I am the Mayor of the Travelodge Milton Keynes Central - wow! I can see how these badges become addictive. I'm going to try and become the mayor of my local ASDA next. Seeing as I seem to meet most of my work colleagues there doing the weekly Saturday shop, this could end up as a cross-company badge competition!
It's all a bit scary - you can see what others have been up to and others can see what you have been up to - useful for filling out your timesheet! Though I have discovered that connecting up with a work colleague I wouldn't normally have much to do with has created an odd relationship. I know how he gets to work, what he does for lunch and when he arrives home for the weekend. Is this a good thing? Not entirely sure. There is something there, but it feels a little voyeuristic to be honest.
I can see that by 'checking in' you could also meet up with new contacts - useful on a work front as well as socially. I now feel obliged to check in at my mayoral residence(s) and, weirdly, I do feel a sense of responsibility to these places!
Obviously something that needs a bit more investigation.....
Sunday, 10 March 2013
Big Data and Fractals all in one post!
Data fragmentation was the topic of the last post - and this week's meandering thoughts have also been on data fragmentation and measures of its complexity - now that is a bit of a mind bender - and on whether the advent of Cloud Computing (aka mainframes) will help in sorting the fragmentation mess out.
The problem, as I see it, is that everything starts with a plan of having a central 'Big Data' repository (aka Computing Centre) from which all decision-making analysis can be driven. However, in reality - out in the field - individuals need some local, specific analysis to be performed to help them do their job. So they take a data extract from the 'Big Data' and do what they need to do. The problem is that these extracts, over time, can take on a life of their own, along with the growth of all sorts of other associated ecosystems. This cycle of events can continue right down to individual spreadsheet level!
Aside: I have to come clean and confess that I have made extensive use of Excel (filter functions) this week - given my panning of Excel programming this does feel a little hypocritical - however, they have proven very useful - just illustrating the ease with which you can get drawn into this! It's not been real coding though - so I think I am still OK ;)
So, where is all this going? The question is, is it possible to measure the complexity of this fragmentation using some measure of the fractal dimension of the data sets - that's a thought from the MOOC I'm taking! Can this be used to estimate the amount of effort required to consolidate the fragmented data? In fact, how do you calculate the dimension of a dataset? Will Cloud Computing help solve some of these problems going forward? The root cause of the fragmentation is people wanting something that corporate locked-down systems do not provide - will the new Cloud systems give people the freedom to build (under proper supervision) what they need locally, or will it end up in this non-virtuous cycle again? What is the probability of the fragmentation occurring again?
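On the "how do you calculate the dimension of a dataset?" question, here is a minimal box-counting sketch in Python - the data, the box sizes and the whole set-up are invented by me for illustration, not anything from the course or from a real project:

```python
import numpy as np

def box_counting_dimension(points, box_sizes):
    """Estimate the box-counting (fractal) dimension of a 2-D point set.

    For each box side s, count how many grid cells of that size contain at
    least one point, then fit log(count) against log(1/s); the slope of the
    fit is the dimension estimate.
    """
    counts = []
    for s in box_sizes:
        occupied = {tuple(cell) for cell in np.floor(points / s).astype(int)}
        counts.append(len(occupied))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope

# Invented example data: points scattered along a wiggly curve.
rng = np.random.default_rng(42)
x = rng.random(5000)
points = np.column_stack([x, 0.5 * np.sin(8 * np.pi * x)])

print(box_counting_dimension(points, box_sizes=[0.2, 0.1, 0.05, 0.025, 0.0125]))
# A set hugging a curve should come out near 1; a space-filling cloud nearer 2.
```

A fragmented data estate isn't literally a point cloud, of course, but you could imagine embedding the extracts in some feature space and asking the same question.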
Need to watch the next lecture on the course - maybe there is no connection!!
Obviously more questions than answers here - the revival continues ......
Sunday, 3 March 2013
Big Data fragmentation function ....
Got involved in 'Big Data' type activities this week, then some 'physics from the past' emerged out of random thought processes!
The Big Data Project (BDP - names withheld to protect the innocent) started with 'this is the system diagram', i.e. someone draws a big system diagram with loads of connections. Holy smoke, how am I going to get my head around this one, was the overriding thought! We are talking about a massive company with massive data requirements - one definition of 'Big Data'. Data has been replicated, re-used and added to across geographic and functional boundaries, not to mention individual personal modifications down at Excel (yuk, yuk, yuk) level.
BDP's goal is to try and specify the core functionality of all of this. Well, we have started to plug away at unpicking it using process maps, system diagrams and data flows, so the fog is starting to clear.
The question in my mind, though, was how did it all get into this position in the first place? All of the above was done for the right reasons - to get the day job done. Each core data element seems to have spawned a few siblings, which in turn have spawned more. It would be useful to know if there were some measure of the 'robustness' of each and every data repository, and what their history has been.
Data stores exploding into many fragments, which then exploded even further like a palm firework - those were the images in my mind. That bizarrely made a connection to my particle physics past! This seemed a bit like the tracks we used to trawl through from the JADE central detector - on night shifts - burned forever into my memory bank!
Which then led me on to thinking about fragmentation functions - essentially how you characterise the cascade of particles from the central annihilation - electron and positron in the JADE case.
In summary (ish);
"Fragmentation functions represent the probability for a parton to fragment into a particular hadron carrying a certain fraction of the parton's energy. Fragmentation functions incorporate the long distance, non-perturbative physics of the hadronization process in which the observed hadrons are formed from final state partons of the hard scattering process and, like structure functions, cannot be calculated in perturbative QCD, but can be evolved from a starting distribution at a defined energy scale. If the fragmentation functions are combined with the cross sections for the inclusive production of each parton type in the given physical process, predictions can be made for the scaled momentum, xp, spectra of final state hadrons. Small xp fragmentation is significantly affected by the coherence (destructive interference) of soft gluons, whilst scaling violation of the fragmentation function at large xp allows a measurement of ."
(ref; http://ppewww.ph.gla.ac.uk/preprints/97/08/gla_hera/node5.html)
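In symbols - and from memory, so treat this as a sketch rather than gospel - if $D_i^h(x_p, Q^2)$ is the probability for parton $i$ to produce hadron $h$ carrying momentum fraction $x_p$ at energy scale $Q$, then to leading order the observed hadron spectrum is roughly

$$\frac{1}{\sigma_{\mathrm{tot}}}\,\frac{d\sigma}{dx_p}\bigl(e^+e^-\to h+X\bigr)\;\approx\;\sum_i \frac{\sigma_i}{\sigma_{\mathrm{tot}}}\,D_i^h(x_p, Q^2),$$

with the $Q^2$ dependence (the scaling violation mentioned in the quote) governed by the DGLAP evolution equations.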
so now you know!
I'm sure the data we have now started off as 'Big Data' in some form prior to fragmentation. So, is there an analogy between particle fragmentation and data fragmentation, and thus a means of potentially predicting the fragmentation of new Big Data repositories within an organisation?
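Purely as a thought experiment - every number below is invented - you could caricature the idea as a branching process: the central repository spawns local extracts, each extract spawns further extracts, and so on down to spreadsheet level. A minimal sketch:

```python
import random

def simulate_fragmentation(spawn_probability=0.6, max_children=3, generations=5, seed=1):
    """Toy branching process for data fragmentation.

    Each store in a generation can spawn up to `max_children` local extracts,
    each appearing with probability `spawn_probability`. Returns the number
    of stores in each generation, starting from the single central repository.
    """
    random.seed(seed)
    population = [1]  # generation 0: the central 'Big Data' repository
    for _ in range(generations):
        children = 0
        for _ in range(population[-1]):
            children += sum(random.random() < spawn_probability for _ in range(max_children))
        population.append(children)
    return population

print(simulate_fragmentation())  # fragment count per 'generation' of extracts
```

The fragmentation-function analogy would come in through choosing a spawn probability per 'type' of data store, rather than one flat number.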
Oh well it was nice thinking about it anyway.....
Saturday, 23 February 2013
"Simples"....
This week's missive relates to software design processes - again!
The question that has been put forward is: what is an appropriate level of use of CASE tools? These are packages that help you design and build software. To illustrate the 'issue', here is the list of CASE tool types from Wikipedia - and there are probably even more than this knocking around in some small corner of the office!
Ref: Wikipedia - "Types of tools are:
- Business process engineering tools
- Process modeling and management tools
- Project planning tools
- Risk analysis tools
- Project management tools
- Requirement tracing tools
- Metrics management tools
- Documentation tools
- System software tools
- Quality assurance tools
- Database management tools
- Software configuration management tools
- Analysis and design tools
- PRO/SIM tools
- Interface design and development tools
- Prototyping tools
- Programming tools
- Web development tools
- Integration and testing tools
- Static analysis tools
- Dynamic analysis tools
- Test management tools
- Client/Server testing tools
- Re-engineering tools"
If all you are doing is building some simple application - e.g. a web site, some remote interface control, analysis of some business data - what do you do? You could easily spend all your time evaluating which tool to use rather than getting on with the job!
Well, Occ-Bam this week has shown me how you can use a simple tool like PowerPoint to help capture requirements, prototype the design and develop the user guide for a simple app. It's so simple even I could do it, without having to fork out thousands of pounds on one or more of the above packages - and I mean thousands and thousands in some cases (sorry, couldn't resist).
All you do is;
- storyboard the key user requirements on separate PP slides
- build sample user input and output slides for the step 1 storyboard slides
- construct sample user interface slides based on step 2
- document in the notes on each slide the functionality that sits behind that particular slide
- document in the notes on each slide what the user needs to do to interact
- iterate steps 2 to 5 until happy!
et voilà - you have created a requirements repository, a functional model of the code and a set of use cases, defined the user interface, prototyped the design and created a user guide, all in one go.
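If you ever wanted to bootstrap that slide skeleton automatically, something like the snippet below would do it - it assumes the third-party python-pptx package and an invented pair of requirements, so it's a sketch of the idea rather than part of the recipe above:

```python
from pptx import Presentation

# Invented example requirements - one storyboard slide per requirement (step 1).
requirements = [
    ("REQ-1: Capture weekly sales figures", "User pastes in the weekly CSV extract."),
    ("REQ-2: Show the top ten products", "App displays a sorted top-ten table."),
]

prs = Presentation()
layout = prs.slide_layouts[1]  # the built-in 'Title and Content' layout

for title, behaviour in requirements:
    slide = prs.slides.add_slide(layout)
    slide.shapes.title.text = title
    slide.placeholders[1].text = "Sketch the inputs, outputs and interface for this requirement here (steps 2-3)."
    # Steps 4-5: the functionality and user-interaction notes live in the slide notes.
    slide.notes_slide.notes_text_frame.text = behaviour

prs.save("storyboard_skeleton.pptx")
```

The iterating in step 6 stays a human job, of course.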
What a brilliant use of PowerPoint I say, simples ;)
Saturday, 16 February 2013
More pieces of the jigsaw ....
Well, my advisors have been keeping me busy this week. The revival is gaining momentum, I feel - just hope I can hang on.
Two areas of development for me emerged out of the chaos this week.
The first was back to basics on software engineering processes, courtesy of Occ-Bam, who has reintroduced me to the virtues of a simple approach to software development. You can make a meal of this documentation stuff, can't you - and it's tough to keep a 'linear' record of what you are doing up to date when you are iterating code development at a rapid pace! 'Just go away and let me code or nothing will work', will it. Keeping the documentation as integrated into the development activity as possible is the key. Also, keeping the 'stages' of development at a fairly high level not only helps you better follow the 'story' of the development but also stops you getting lost describing a very nice looking weed!
- project initiation
- requirements capture
- systems design
- coding - with version control and design information integrated into the code
- testing
- final documentation - including user manual
These are all you need for most simple projects to ensure you are 'building the right thing, rather than just building something right', as a colleague recently pointed out. Steps 1 to 3 basically make sure you know what it is you are coding and stop you diving straight into hacking stuff out. Step 4 lets you get stuck in and document as you progress the code. Steps 5 and 6 are a check to see whether it turned out as originally planned (or not ;).
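As a purely illustrative sketch of what 'design information integrated into the code' (step 4) can look like - the module, function and requirement IDs below are all made up - the design notes can live as docstrings, with the examples doubling as step 5 tests:

```python
"""Stock re-order calculator (illustrative only).

Design notes kept with the code so the documentation moves when the code does:
- R1: flag an item for re-order when stock falls below its threshold.
- R2: never suggest a negative re-order quantity.
Version history lives in the version control log, not in this file.
"""


def reorder_quantity(stock_level: int, threshold: int, batch_size: int) -> int:
    """Return how many units to order, or 0 if stock is still at or above the threshold.

    The examples below double as step 5 tests via doctest:
    >>> reorder_quantity(stock_level=3, threshold=10, batch_size=25)
    25
    >>> reorder_quantity(stock_level=12, threshold=10, batch_size=25)
    0
    """
    if stock_level >= threshold:
        return 0                   # R1: only re-order below the threshold
    return max(batch_size, 0)      # R2: quantity can never be negative


if __name__ == "__main__":
    import doctest
    doctest.testmod()              # step 5: run the embedded examples as tests
```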
The second area of development for me has opened up another Pandora's box of stuff to consider. Bambofy has introduced me to Quora http://www.quora.com/ - well, what to do with this? It's like a living Wikipedia: you can post questions on topics and get responses from 'others'. I have been on a bit of a quest for something like this for a while - to really give some depth to the LinkedIn group-type discussions - how do you broaden the thinking on topics of discussion? Well, this site seems to offer that facility. I am just starting to play with it and have posted some Quality Assurance tester questions - so far I have been impressed.
Got to stop myself wading into answering physics questions though .....
Saturday, 9 February 2013
MOOCing about.
Think I have now seen both ends of the spectrum of online learning! (MOOC = Massive Open Online Course, by the way.)
I started on Monday with an internal online training course - which will remain nameless to protect the innocent. This was a one-hour course on one of the company business process support systems - newly implemented. I was sat at home, logged onto the VPN, coffee in hand, first thing in the morning (my best time), ready to be educated (my best skill). I fired up the training pack and then spent an hour trying not to eat my face. It was the most uninspiring hour I have probably ever had - an exaggeration, I must have blocked the others out. The worst thing was I had to run the videos to their conclusion - monitored, you see. What was so wrong with it? Well, it turned out essentially to be an hour of how to fill out forms, "this is where your name goes" type of thing. There was a bit on which order to fill them out in, which was mildly relevant, but on balance not worth an hour of time - they could have had another form that you filled out to show the order you need to fill out the other forms, if you know what I mean! On top of that, when you actually get round to using these forms in real life they need to be printed out and filled in - using a pen - shock horror!
This was an online 'course' that should have just been an instruction pack. It wasn't for the lack of polish when it came to the presentation side of things either. There had obviously been a lot of effort put into recording the course and in making it available.
As the first online course I have ever logged onto, it has kind of left me with a feeling of trepidation with respect to online training.
HOWEVER
I was saved by the Santa Fe institute - thank you, thank you!
In my re-entry into all things computing I thought I needed to get re-skilled on where current thinking is on complex system dynamics. Big data, knowledge management, social networks, blah blah - what is the thinking on modelling these types of systems? So I signed up for the 'Introduction to Complexity' course (even though I think I am living through complexity, so may not need the introduction ;).
Follow the link for details http://www.complexityexplorer.org/
The course is run by Melanie Mitchell - I have only done the first few modules but it is light years ahead of the form-filling course, mainly because the content has clearly been well thought through. Getting the level of detail right, on what is a difficult subject, in a simple manner, is a skill in itself. I have now sat through an hour of this training and it has felt like 5 minutes - the power of engagement!
As a result of my first encounter, however, I still have this niggling worry that future modules are going to turn into some trivial "complex systems are complex" format, but I doubt it!
So what does all this mean?
Well, I guess you can whack out YouTube videos to your heart's content, but when it comes to online training - Delivery is important but Content is King!
Sunday, 3 February 2013
The end of Gamification!
Gamification - just hate that word - has been the subject of some thinking recently, around whether it can be used to promote internal company 'experience' qualifications, thereby helping drive training and development activities and fostering more personal responsibility for building up skills.
Previous posts have covered some of the issues that have come up. What I want to do here is summarise where we have got to and park the subject until some of these issues have been addressed.
So here we go on the issues;
- Is collecting badges really a game? Why will people bother collecting them, other than some of us OCD types who will collect any old rubbish - who will be energised to collect? The answer to this will need to be addressed within the company. The initiative will need some serious backing from the most senior management and will need to be followed up with 'proper' marketing. Leaving it to grow 'organically' is not, I believe, an option - see the comments below on LinkedIn, which is much more visible to people but has still had difficulties.
- Extensive research, i.e. me just watching what has happened to my LinkedIn skill endorsements, has shown that there are some who are just 'too cool to play'. Some people - engineer-techno-scientific'y types, I must add - seem to find it impossible to click the endorse button. Probably some deep psychological reason here which I won't even bother trying to understand. Possibly throwing some confusion into the badge construction and selection arena?
- How best to make the badges visible: internal company-only sites, or external and visible to all, a la LinkedIn? External is good for infrastructure - it's all in place and maintained for you. In fact, using LinkedIn skill sets would be a very easy way to start to roll out the badges. Using specific badge-collecting sites would impose too much of a burden on people, as it would involve everyone signing up for yet another site, whereas almost everyone in the company has a LinkedIn account. However, these external sites are subject to the whim of the designer, and we have seen recently unilateral changes to the functionality of these sites and the removal of some facilities. Do you really want to rely on these sites for what would be a key business function? Internal sites would of course be under full control of the business but would require quite a bit of 'maintenance' - which would of course also cost money and is therefore at odds with my primary directive! I think the answer lies in both internal AND external recording. An internal central record could be kept, using the external site to advertise - if the external site goes pear shaped then at least there is a backup. Internal - a simple record and an authorised rating body for the skill; external - visible and rateable by the community - is the best way forward.
- Could others outside the business mess up the ratings? Well, I have left an internal training badge (the AMP badge) on my LinkedIn site for a few months and nobody has tinkered with it. So the chances are this route for advertising badges will work - people tend only to rate skills that they are personally aware of, which of course is what gives credibility to the community ratings, but also means that they are less inclined to rate, and therefore muck about with, internal training course badges.
For further reading on the upside, check out;
http://www.forbes.com/sites/nextavenue/2013/01/11/how-to-make-the-most-of-linkedin-endorsements/
and the downside
http://mashable.com/2013/01/03/linkedins-endorsements-meaningless/
So, in summary, it looks viable to select a set of badges for internal skills and make attainment of them visible on an individual's LinkedIn site. The mechanics are therefore fairly straightforward for getting something running. The difficult part will be raising the profile of this initiative within the business and getting buy-in from senior management.
Which is the next step in this journey!