[r-t] A new Spliced Surprise Major canon
mark at snowtiger.net
Wed Mar 6 23:20:13 UTC 2013
> Does this replacement have to be by a method with the same lead end order?
> And presumably "doubled-up" is more than just every place bell appears
> twice, since you need two disjoint sets of leads, each of which is atw, right?
> Or am I missing something?
Sorry, yes, same LH order was the idea. I figured that by the time I got
to five or more methods it was OK, and even quite friendly in a way, to
have some duplicates. And yes, by "doubled up" I meant that the array of
(bell, PB) counts needs to have a number >= 2 in every cell.
I haven't got round to trying this out yet, though. Not having children
in bed until 9:30 and then being knackered does not help the 21st
> I'm not sure it's entirely new. Something I did a decade ago was
> vaguely similar, though neither as thorough nor leading to anything as
> impressive as your results.
That looks very similar to my idea, indeed. I actually started with a
selective brute-force search which only examined one node (the leads
between two calls) at a time, and searched for all possible variations
of the methods within this, before moving on to start afresh with the
next node. The total set of results was then pruned to the best 1000 or
so compositions, and these formed the input to another set of runs of
the same algorithm.
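That node-by-node procedure is essentially a beam search: extend each
surviving partial composition by every method choice at the current node,
then prune back to the best few before moving on. A minimal Python sketch
of the idea (all names, the toy score function and the beam width are my
own illustrations, not the actual search code):

```python
def greedy_node_search(n_nodes, method_choices, score, beam_width=1000):
    """Node-by-node beam search: at each node, try every method choice
    for every surviving partial composition, then prune to the best
    `beam_width` candidates and use those as input to the next pass."""
    beam = [()]  # each entry is a tuple of method choices, one per node
    for _ in range(n_nodes):
        candidates = [partial + (m,)
                      for partial in beam
                      for m in method_choices]
        candidates.sort(key=score, reverse=True)
        beam = candidates[:beam_width]  # keep only the best partials
    return beam

# Toy example: score a composition by how many distinct methods it uses.
best = greedy_node_search(4, ["C", "Y", "S"],
                          score=lambda comp: len(set(comp)),
                          beam_width=10)
```

With a pruning limit of ~1000 this matches the "best 1000 or so" cut
described above; the greediness is in never revisiting a pruned partial.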
Using a greedy breadth-first algorithm like this worked surprisingly
well, given the right methods to splice. Things certainly hotted up when
I started adding the stochastic algorithms into the mix, but sometimes
the original deterministic method was able to make progress where they
couldn't, so I didn't ditch it. Not quite so sexy of course, so I
haven't discussed it as much. :-)
My first dabbling in purely stochastic algorithms, prompted by Paul
Bibilo, was an implementation of simulated annealing
within SMC32. This is used in the table-build phase if the "falsebits"
optimisation is enabled. Its job is to reorder node numbers so that the
average size (in words) of the false-node bit arrays for each node is
minimized. This was instrumental in bringing the Cambridge "Full Monty"
search under an hour, though it is probably of limited theoretical interest.
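The reordering step can be sketched roughly as follows. This is a minimal
illustration in Python, not SMC32's actual code: I've assumed a cost
function counting the distinct machine words touched by each false-node
set, and all names, parameters and the cooling schedule are my own
inventions.

```python
import math
import random

def words_needed(false_sets, order, word_bits=32):
    """Average number of `word_bits`-bit words spanned by each node's
    false-node bit array under the renumbering `order` (order[i] is
    the new number of node i). Illustrative cost, not SMC32's."""
    total = 0
    for fs in false_sets:
        total += len({order[n] // word_bits for n in fs})
    return total / len(false_sets)

def anneal_numbering(false_sets, n, steps=2000, t0=1.0,
                     word_bits=32, seed=1):
    """Simulated annealing over node numberings: swap two numbers,
    keep the swap if it doesn't worsen the cost, otherwise keep it
    with probability exp(-delta / temperature)."""
    rng = random.Random(seed)
    order = list(range(n))
    cost = words_needed(false_sets, order, word_bits)
    best_cost, best_order = cost, order[:]
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9  # linear cooling schedule
        i, j = rng.randrange(n), rng.randrange(n)
        order[i], order[j] = order[j], order[i]
        new_cost = words_needed(false_sets, order, word_bits)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
            if cost < best_cost:
                best_cost, best_order = cost, order[:]
        else:
            order[i], order[j] = order[j], order[i]  # revert the swap
    return best_order, best_cost

# Toy instance with tiny 4-bit "words": even nodes are mutually false,
# as are odd nodes, so packing evens and odds together shrinks the cost.
false_sets = [set(range(0, 8, 2)) if i % 2 == 0 else set(range(1, 8, 2))
              for i in range(8)]
order, cost = anneal_numbering(false_sets, 8, word_bits=4)
```

The point of minimising the span is that the false-node bit arrays then
fit in fewer words, which is where the table-build saving comes from.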
I have also recently tried adding a stochastic element to an otherwise
standard tree search, in order to try and "skim" the full, very large,
search space. The idea was that, even if a branch was true, you'd only
pursue it if a random function was below a certain threshold. This may
hold some promise for getting the measure of what a large search space
contains, but it didn't produce anything helpful for the particular
example I was exploring. (I think the reason was that my space was just
too big, and true compositions just too rare.)
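The "skim" idea reduces to one extra test in an otherwise ordinary tree
search: even a true branch is only pursued when a uniform random draw
falls below the threshold. A self-contained sketch, with a toy search
space standing in for the real one (all names here are illustrative):

```python
import random

def skim_search(extend, is_complete, root, keep_prob, rng):
    """Tree search that randomly abandons branches: each child, even
    when it is true (i.e. extendable), is only pursued when a random
    draw falls below `keep_prob`. Returns the completed nodes found."""
    found = []
    stack = [root]
    while stack:
        node = stack.pop()
        if is_complete(node):
            found.append(node)
            continue
        for child in extend(node):
            if rng.random() < keep_prob:  # the stochastic element
                stack.append(child)
    return found

# Toy search space: all bit strings of length 10 (1024 leaves).
extend = lambda s: [s + "0", s + "1"]
is_complete = lambda s: len(s) == 10
full = skim_search(extend, is_complete, "", 1.0, random.Random(0))
skim = skim_search(extend, is_complete, "", 0.8, random.Random(0))
```

With `keep_prob` at 1.0 this is just the full search; below 1.0 each
pruned branch discards a whole subtree, which is why, as noted above, a
space that is too big with true compositions too rare can easily yield
nothing at all.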