[r-t] A new Spliced Surprise Major canon

Mark Davies mark at snowtiger.net
Sat Mar 9 11:55:34 UTC 2013


GACJ writes,

> Not sure what is meant by the "correct" answer, as Don's assertion is
> correct.

Hmm, yes, scotch that idea then! Actually I've now carried out some 
upgrades that make multi-spliced a lot more feasible anyway. The key 
improvement was to abandon false node tables and instead work with 
leads. That needs much less memory, and means the table build is only 
O(n^2) in the number of different leads rather than the number of 
nodes - a far smaller number. In theory it makes checking a node for 
falseness slower, but in practice the total number of different leads 
is small enough that a bit set representation is reasonable, giving a 
falseness check which runs in similar time. Good!
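To make this concrete, here is a minimal Python sketch of the 
lead-based table, assuming each distinct lead has been given a small 
integer index and that false_against is a hypothetical predicate 
testing mutual falseness between two leads; Python's arbitrary-width 
integers stand in for the bit sets:

    def build_false_table(leads, false_against):
        # One bitmask per lead: bit j is set if lead i is false
        # against lead j.  O(n^2) in the number of distinct leads.
        n = len(leads)
        table = [0] * n
        for i in range(n):
            for j in range(n):
                if i != j and false_against(leads[i], leads[j]):
                    table[i] |= 1 << j
        return table

    def is_true(composition, table):
        # Accumulate the leads rung so far as a bit set; the
        # composition is false if it repeats a lead or includes one
        # from a used lead's false set.
        used = 0
        for lead in composition:
            if used & ((1 << lead) | table[lead]):
                return False
            used |= 1 << lead
        return True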

> My spliced software was developed over 20 years ago, so needs a complete
> rewrite, but the iterative tuning functions it contained worked in a similar
> way to that you describe.

Yes, I was partly inspired in this current project by your work, Graham. 
You are one of the few to have made significant progress in the field of 
one-part spliced in recent times.

Availability of large amounts of RAM allows greedy algorithms such as 
the one you describe to work better on modern machines. Rather than 
processing one node at a time and keeping only the single best 
result, my deterministic algorithm looks at all nodes in a given 
pass, and keeps the best 1000 results (the number is configurable) as 
the seed for the next pass. Keeping a bigger set between passes means 
you are less likely to get stuck in a local "greedy" minimum. 
(Actually, the main constraint here is processing time, not memory - 
the more compositions you keep, the more you have to process in the 
next pass.)
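As a rough illustration - not the actual code - a pass of this kind 
amounts to a beam search; neighbours and score below are hypothetical 
stand-ins for the node-variation and scoring machinery:

    import heapq

    BEAM_WIDTH = 1000   # configurable; 1000 is the usual setting

    def one_pass(seeds, neighbours, score):
        # Expand every node of every seed composition, then keep only
        # the best BEAM_WIDTH results as seeds for the next pass.
        candidates = []
        for comp in seeds:
            candidates.extend(neighbours(comp))
        return heapq.nlargest(BEAM_WIDTH, candidates, key=score)

Keeping 1000 compositions between passes rather than one is exactly 
what blunts the greediness.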

However, in general the stochastic algorithms do work better than the 
greedy/deterministic method. It's not always true, but more than half 
the time they seem to make faster progress and get better results.

The current star of the show in my little suite is a tunneling 
algorithm, which uses a series of short simulated annealing runs to 
explore the local area around the best minimum it has found so far. 
This makes rapid initial progress as it homes in on good minima, but 
eventually runs out of steam when the barriers surrounding the 
current solution are too high. When this happens, the K value is 
gradually increased to try and tunnel across the barrier. Eventually 
a "full K" simulated annealing run is made, and if that fails to find 
anything better, the current best is taken as the solution. This 
seems to work well, presumably because the best minima are much 
closer to easily-found but merely "good" minima than they are to the 
much poorer starting point.
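Purely as a sketch of the strategy, with K playing the usual 
annealing temperature role and with hypothetical neighbours and cost 
functions (lower cost is better), the tunneling loop might look like 
this:

    import math, random

    def anneal(start, k, steps, neighbours, cost):
        # One short simulated annealing run at temperature k,
        # returning the best composition encountered.
        current = best = start
        for _ in range(steps):
            cand = random.choice(neighbours(current))
            delta = cost(cand) - cost(current)
            if delta <= 0 or random.random() < math.exp(-delta / k):
                current = cand
                if cost(current) < cost(best):
                    best = current
        return best

    def tunnel(start, neighbours, cost, k0=1.0, k_full=64.0,
               steps=5000):
        best, k = start, k0
        while k <= k_full:
            result = anneal(best, k, steps, neighbours, cost)
            if cost(result) < cost(best):
                best, k = result, k0   # progress: re-centre, reset K
            else:
                k *= 2                 # stuck: raise K to tunnel out
        return best                    # even the "full K" run failed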

> Truth is an interesting score, apparently binary. I found that by counting
> the false rows, I could substitute a false block, then see if refining the
> rest of the composition would improve (reduce) the falseness score. If it
> comes back to zero, keep the result.

That is interesting. I may investigate a "falseness" score rather 
than treating truth, as I currently do, as a binary divide crossable 
only via my deterministic bridge.
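A minimal version of such a score, assuming a hypothetical rows_of 
function that expands a lead into its rows, would simply count the 
surplus rows, so that zero means true and larger values measure how 
far from truth the composition lies:

    from collections import Counter

    def falseness_score(composition, rows_of):
        # Zero for a true composition; otherwise the number of
        # repeated (surplus) rows across the whole composition.
        counts = Counter()
        for lead in composition:
            counts.update(rows_of(lead))
        return sum(c - 1 for c in counts.values())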

> One of the things to note is that this process is very sensitive to the
> scoring you use.

Yes! I think I've said this before, but it's quite startling how you 
can use the scoring function to specify the type of composition you 
want, and the search will go and find it. Change the scoring, and you 
get a totally different set of compositions out.
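For instance, a composite score along these lines - the ingredients 
and weights are illustrative only - is enough to steer the search 
toward quite different outputs:

    def make_score(music, com, balance, weights=(1.0, 0.2, 0.5)):
        # music, com and balance are hypothetical scoring ingredients
        # (say musical rows, changes of method, and method balance);
        # re-weighting them redefines "best", and hence what the
        # search converges on.
        w_music, w_com, w_balance = weights
        def score(comp):
            return (w_music * music(comp)
                    + w_com * com(comp)
                    + w_balance * balance(comp))
        return score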

> A further problem is that compositions typically contain courses that are
> split up across the composition. This means that trying to substitute one
> block of leads will be limited because the rest of the course is elsewhere
> and the calls constrain the structure.

I didn't really see this as a problem, but I guess you're right: it 
could be a constraint for algorithms that take single steps, varying 
just one node at a time. In some senses it is just a special case of 
falseness between nodes, but perhaps it is one that is worth dealing 
with. I think the best approach would be to combine nodes which visit 
the same course into a single node, and vary it as one. You could 
make sure in the table-build phase that the set of leads for each 
variation of the node was true and consistent with respect to the 
underlying calling. I will try that.
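As a sketch of that combining step, again with a hypothetical 
course_of helper, the nodes sharing a course could be grouped at 
table-build time so the search varies each group as a unit:

    from collections import defaultdict

    def merge_course_nodes(nodes, course_of):
        # Group node indices by the course they visit; each group
        # becomes a single compound node, varied as one, whose
        # variations can be pre-checked for internal truth and for
        # consistency with the underlying calling.
        groups = defaultdict(list)
        for idx, node in enumerate(nodes):
            groups[course_of(node)].append(idx)
        return list(groups.values())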

MBD



