[r-t] Blue Line Difficulty
tuftyfrog at gmail.com
Tue Aug 29 01:34:54 UTC 2017
> On 29 Aug 2017, at 00:29, Graham John <graham at changeringing.co.uk> wrote:
> […]On its own, the count of unique pieces of work is already starting to rank methods in rough order of difficulty, but is probably insufficient on its own. However it does suggest that any refinement needs to be subtle. One way to do this is to score each of the unique pieces of work and sum the scores as a complexity index[…]
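Graham's scheme — score each unique piece of work and sum — could be sketched roughly as follows. The work names and weights here are purely illustrative assumptions, not a proposed scale:

```python
# Rough sketch of the complexity index described above: give each piece
# of work a weight and sum over the *distinct* works that appear in a
# method's blue line. All names and weights here are made up.

# Hypothetical per-work difficulty weights.
WORK_SCORES = {
    "plain hunt": 1.0,
    "dodge": 1.5,
    "place": 1.5,
    "treble bob": 2.0,
    "point": 2.5,
}

def complexity_index(works):
    """Sum the scores of the unique pieces of work in a method."""
    return sum(WORK_SCORES.get(w, 2.0) for w in set(works))

# Example: repeated dodges only count once.
print(complexity_index(["plain hunt", "dodge", "dodge", "place"]))  # 4.0
```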
I’m convinced that difficulty hasn’t really got a useful meaning outside of the context of a specific person and their way of thinking. Many methods will be harder/easier depending on your method of learning them. Grid learning allows you to absorb a whole method in no time at all, whereas blue-line learning lends itself to methods made up of simple chains of familiar bits of ‘work’.
I’d have thought that a more useful measure of difficulty would be to let the user specify which methods they can already ring, or have them rank some ‘standard’ methods from easiest to hardest, and then rank all the other methods based on their similarity to this sample set using a ‘work’-based analysis or something similar. This would capture the fact that, for example, a familiar piece of work like a dodge can be much harder depending on its context: people generally find the 4-5 dodges in Glasgow much trickier than any of the work in Cambridge; the long sections of identical work in Cray or Derwent cause many more trips than the much shorter work in Yorkshire.
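As a rough sketch of that idea: rank unfamiliar methods by how closely their work matches the ringer's known repertoire. The similarity function here is a stand-in assumption (simple set overlap); a real version would come from a proper ‘work’-based analysis:

```python
# Sketch: rank unfamiliar methods by similarity to a ringer's known set.
# similarity() is a placeholder; a real version would compare the pieces
# of work in each method's blue line in context, not just as sets.

def similarity(a, b):
    """Jaccard similarity of the sets of works in two methods (0..1)."""
    wa, wb = set(a), set(b)
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def rank_by_familiarity(known_methods, candidates):
    """Order candidate methods from most to least similar to the known
    repertoire, scoring each candidate by its best match."""
    def best_match(works):
        return max(similarity(works, k) for k in known_methods)
    return sorted(candidates, key=lambda c: best_match(candidates[c]),
                  reverse=True)

# Hypothetical data: two known methods, two candidates.
known = [{"plain hunt", "dodge"}, {"dodge", "place"}]
candidates = {
    "Method A": {"dodge", "place"},       # matches something known exactly
    "Method B": {"point", "treble bob"},  # nothing in common
}
print(rank_by_familiarity(known, candidates))  # ['Method A', 'Method B']
```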
Symmetry should also play a big part in reducing the difficulty scores of some methods: I don’t think that pieces of work should really be considered distinct from their mirror images, since a high degree of symmetry often makes things simpler.
A similarity-based consideration would also be useful in ranking spliced compositions: a model based on a Gaussian distribution should work quite well. Very similar methods are more likely to get confused, and very dissimilar methods require greater effort to remember; the easiest peals would strike a balance between the two.
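The Gaussian idea might look something like the following: for any pair of methods in a splice, ease peaks at some moderate similarity and falls off towards either extreme, so pairwise difficulty is one minus a Gaussian bump. The sweet spot and width are arbitrary illustrative choices:

```python
import math

# Sketch of the Gaussian model: methods that are too similar get
# confused, methods that are too different take more effort to
# remember, so pair difficulty is lowest at a moderate similarity.
# MU and SIGMA are arbitrary assumed values, not calibrated.

MU, SIGMA = 0.5, 0.2  # assumed "sweet spot" similarity and spread

def pair_difficulty(similarity):
    """1 minus a Gaussian bump centred on the sweet spot."""
    return 1.0 - math.exp(-((similarity - MU) ** 2) / (2 * SIGMA ** 2))

def splice_difficulty(sim):
    """Average pairwise difficulty over all method pairs in a splice,
    given a symmetric similarity matrix."""
    pairs = [(i, j) for i in range(len(sim)) for j in range(i + 1, len(sim))]
    return sum(pair_difficulty(sim[i][j]) for i, j in pairs) / len(pairs)

print(pair_difficulty(0.5))                           # 0.0 (the sweet spot)
print(pair_difficulty(0.95) > pair_difficulty(0.6))   # True: extremes are harder
```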
Some other things which would be useful to consider:
How is the coursing order affected throughout the lead?
How many changes in direction are there — does it have a stable period (like Avon points) or is it less regular?
Is the hunting predominantly right, wrong, or a mixture of both?
Do you do a lot of work with the same bells?
Is the work synchronised [i.e. for the benefit of handbell ringers]?
How many runs are there [in the plain course]?
Is the method part of any popular spliced collections?
Are there any published resources on how to ring it?
How many times has the method been pealed in the last decade (i.e. how popular is it and how much opportunity will you have to practise it)?
Is the method false/unmusical enough to require splitting the tenors to get good compositions?
Ultimately, though, I’m not sure what purpose such a scoring system actually serves, other than being a way to ‘prove’ one way or another that a certain method is harder, and therefore more impressive, to ring! People do, in general, already know what they find difficult — and a score does nothing to tell people why a method might be tougher to ring, or how they can go about learning it!