[r-t] Scala, part one

Mark Davies mark at snowtiger.net
Wed Jul 4 06:55:27 UTC 2012


In my previous email I introduced the programming language Scala, and 
promised to show some examples of its utility in the field of composition.

This first article will focus on the functional programming style made 
possible in Scala. Functional programming will be familiar to those who 
have used languages such as Lisp, Forth, Clojure or Erlang, but is often 
alien to those of us brought up on C and Java. However, it is 
beautifully suited to many composing problems.

For instance, my first task in the Helixoid project was to generate some 
Helixoid methods. This is effectively a tree search, and it's very 
simple to write some functional Scala code to do the job.

First I created a function (just like a method in e.g. Java) called 
"search", like so:

def search(row: Row, pnSoFar: List[PN], rowsUsed: Set[Row]):
	List[Method] = { ... }

Variables are defined "back-to-front" compared to Java, so the search 
function here has three parameters, called "row", "pnSoFar", and 
"rowsUsed", which have types of "Row", "List[PN]" and "Set[Row]" 
respectively. The search method returns "List[Method]", which is nothing 
more than a List containing objects of type Method, and corresponds to a 
Java generic type such as List<Method>.
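For readers coming from Java, here is a minimal, self-contained sketch of the name-then-type declaration style (the names here are invented for illustration):

```scala
// Scala puts the name first, then the type, for both variables and parameters
val halfLeadLength: Int = 56

// a Java signature like "String mask(String row, int keep)" becomes:
def mask(row: String, keep: Int): String = row.take(keep)

// the return type (String here) comes after the parameter list
assert(mask("12345678", 3) == "123")
```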

However, one key difference between the two languages is that, as in 
most functional languages, the default collection types are immutable. 
Hence, neither the set of place notation examined so far (pnSoFar) nor 
the set of rows produced in the lead so far (rowsUsed) can be modified.

This seems not very useful! But in fact it is exactly what we need in 
our search, to make sure separate branches of the search tree cannot 
overwrite each other's values. It means we have no need to "backtrack", 
as we must do with mutable data-structures, which dramatically 
simplifies our algorithm.
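A tiny sketch of this property, using plain Ints in place of rows: each branch of a search extends its own copy of an immutable Set, so the parent's value never needs to be restored.

```scala
val visited = Set(1, 2)          // state at the current search node
val branchA = visited + 3        // one child branch extends it...
val branchB = visited + 4        // ...and so does a sibling branch
// the parent's set is untouched, so there is nothing to "undo":
assert(visited == Set(1, 2))
assert(branchA == Set(1, 2, 3))
assert(branchB == Set(1, 2, 4))
```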

We can now implement the search using a simple recursive call:

def search(row: Row, pnSoFar: List[PN], rowsUsed: Set[Row]): List[Method] =
{
	if (rowsUsed.contains(row))
		return Nil
	else if (pnSoFar.size==HalfLeadLength-1)
		return List(new Method(pnSoFar.reverse))
	else
		AllPN.toList.flatMap{ (pn)=> search(row.apply(pn),
			pn::pnSoFar, rowsUsed+row) }
}

We carry out a search for Helixoids by triggering the first recursive 
call, like so:

	search(Row("XXX....."), List(), Set())

And there we have a working tree-search algorithm. I hope you will agree 
this seems too simple to be true! Obviously we are relying on a little 
bit of support code: in particular classes Row and PN, which represent 
rows and place notations. However the main search is entirely embodied 
in the search function above.

Here's how it works.

The first thing to note is our initial row value: not rounds, but the 
masked string "XXX.....". A Helixoid must contain all 56 (=8x7) positions 
of bells (123) within the 56-change half-lead, up to rotation of (123). 
Starting the search from "XXX....." and keeping track of which "rows" 
have been generated from this pattern is an easy way of enforcing the 
Helixoid rule. In fact, this starting pattern is the only thing which 
separates our search from a generic search for any method type.

The "rowsUsed" Set keeps track of which (XXX-style) rows we have 
generated in the current search branch. Every time we generate a new 
row, we create a new set (rowsUsed+row) and pass it to the recursive 
call of the search() function. Similarly, the pnSoFar List tracks the 
place notations generated in the current search branch, and we prepend 
the next pn before the recursive call, using the syntax pn::pnSoFar.

Because the Set and List instances are immutable, we create new 
instances on each call, and this means we don't have to worry about 
backtracking as we would normally need to do in a procedural language 
with mutable data structures. But wait! What about performance? Surely 
Scala is creating millions of new objects and cloning large 
data-structures on each call?

In fact, the immutable collection classes, which are part of Scala's 
standard library, are very clever and efficient. For instance, a Scala 
List is much more like a Lisp list than a java.util.List. Because it is 
singly-linked, very efficient head, tail and prepend operations are 
possible. If we already have a series of PN such as x18.1458.36, this is 
represented as a list of linked objects: x->18->1458->36->Nil. Scala can 
then add a new PN on the front simply by linking to the first "x", 
meaning the entire list does not have to be cloned. Millions of 
immutable list instances can be formed, each sharing the original tail 
of the list.
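This sharing is easy to demonstrate with plain Strings standing in for PN objects: two lists built by prepending different heads share the identical tail object.

```scala
val rest = List("18", "1458", "36")   // 18 -> 1458 -> 36 -> Nil
val a = "x" :: rest                   // prepend just links a new head node
val b = "14" :: rest
assert(a == List("x", "18", "1458", "36"))
// "eq" tests reference identity: both lists share the very same tail
assert(a.tail eq b.tail)
```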

[Of course, we end up with the place notation for the half-lead the 
"wrong way round", with the final PN at the head of the list; but it's 
easy to reverse the list if we do find a valid Helixoid method, as you 
can see in the code above: new Method(pnSoFar.reverse).]

Another aspect of functional programming which is invaluable for our 
search is the idea of a "higher-order function". This is a function 
which takes as a parameter another function. Scala's collection classes 
use this idea heavily, for instance to implement map and filter 
operations on sets, lists and other collection types.

For example, suppose we had a set of rows (variable type Set[Row]) and 
wished to apply the same place notation to all of them. In Java we would 
need to write a loop to do this. In Scala, we just apply a "map" 
operation, like so:

	def applyPN(row: Row): Row = row.apply(pn)
	var newRows = oldRows.map( applyPN )

Here, the map() function returns a new set, comprised of every member of 
the old set, with the applyPN function applied to each.

We don't even need to explicitly declare the applyPN function. Scala 
supports "anonymous functions", which we can declare as follows:

	var newRows = oldRows.map( (row:Row) => row.apply(pn) )

The anonymous function is created by declaring its parameters in 
brackets (row:Row), then writing the => arrow, and finally giving 
the function body (row.apply(pn)).

Where the function passed has only one parameter, we can simplify this 
further, removing the parameter list and using "_" in place of the 
parameter name:

	var newRows = oldRows.map( _.apply(pn) )

Sometimes the code is clearer if we use the alternative brackets {} 
around the "higher-order" parameter:

	var newRows = oldRows.map{ _.apply(pn) }
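Since Row and PN are helper classes whose code is not shown here, the following self-contained sketch uses plain Strings to show that all these forms of map are equivalent:

```scala
val oldRows = Set("XXX.....", "XX.X....")
def applyPN(row: String): String = row.take(3)      // stand-in for row.apply(pn)

val a = oldRows.map(applyPN)                        // named function
val b = oldRows.map((row: String) => row.take(3))   // anonymous function
val c = oldRows.map(_.take(3))                      // underscore shorthand
val d = oldRows.map{ _.take(3) }                    // brace syntax
assert(a == b && b == c && c == d)
```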

I have used a higher-order function call in the search() function given 
above. Search() has three conditional branches, the first two of which 
are fairly self-explanatory, but here they are again with some comments:

	// Terminate recursion with a Nil (empty list) if we've already
	// had this (masked) row.
	if (rowsUsed.contains(row))
		return Nil
	// Otherwise, if we have a full half-lead's worth of PN, return
	// a List with a single Method in it, constructed from the PN.
	else if (pnSoFar.size==HalfLeadLength-1)
		return List(new Method(pnSoFar.reverse))

It is the third branch of the condition where the clever stuff happens:

AllPN.toList.flatMap{ (pn)=> search(row.apply(pn), pn::pnSoFar, rowsUsed+row) }

"AllPN" is the set of all PN types we are interested in - for example, 
the minimal Set("x", "14", "58", "18") could be used, and does actually 
produce some useful Helixoids.

The "flatMap" call is a higher-order function defined on all collection 
types (including sets and lists), which works a little bit like the map 
function described above: the difference is that flatMap expects the 
function passed to it to return *another* collection of objects, and the 
results are "flattened" into a single master collection.

In diagrammatic form, if the old set and the function parameter have 
these types:

	var oldSet: Set[X]
	def fn(x: X): Set[X]

then oldSet.map(fn) would give us a set of sets - Set[Set[X]] - but 
oldSet.flatMap(fn) gives us a plain old Set[X], because the individual 
sets returned from each call to fn() are combined.
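A concrete version of this diagram, with Int in place of X:

```scala
val oldSet = Set(1, 2, 3)
def fn(x: Int): Set[Int] = Set(x, x + 10)

val nested = oldSet.map(fn)       // Set[Set[Int]]: three little sets
val flat   = oldSet.flatMap(fn)   // Set[Int]: the same elements, combined
assert(nested == Set(Set(1, 11), Set(2, 12), Set(3, 13)))
assert(flat == Set(1, 2, 3, 11, 12, 13))
```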

This is just what we need for our search, because each recursive call 
can return zero, one or many valid Methods, and the flatMap operation 
will merge these together into a single list.

What about the function passed to the flatMap? It is an anonymous 
function, declared as:

	(pn)=> search(row.apply(pn), pn::pnSoFar, rowsUsed+row)

This takes a PN instance (one of those in the AllPN set) and then 
recursively calls the search function, with (a) a new row calculated by 
applying the pn to the current row; (b) a new pn list, created by 
prepending the current pn to the old list as described above, and (c) a 
new rowsUsed set, created by adding the old row to the old set.

[Notice how the anonymous function does not need to declare the type of 
the parameter, pn - Scala, although strongly-typed, has a sophisticated 
type inference system which means we can often leave it up to the 
compiler to work out the correct type. If in doubt, you can always 
specify the type, which we would do here by writing (pn:PN)=>... ]
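A quick illustration of the inference at work:

```scala
val xs = List(1, 2, 3)
// the compiler infers n: Int from the element type of xs
val doubled = xs.map(n => n * 2)
// an explicit annotation is always allowed, and means the same thing
val doubled2 = xs.map((n: Int) => n * 2)
assert(doubled == List(2, 4, 6))
assert(doubled == doubled2)
```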

Hence, the flatMap operation on AllPN effectively iterates through every 
PN possibility, recursively calling the search function on the results 
of each, and collating the results, all in a very neat, concise, 
easy-to-code way. Job done!

Well, not quite. With a 56-change half-lead, the search space for 
Helixoids is reasonably large. The algorithm above needs some 
optimisation before it'll run in reasonable time. Again, Scala helps us 
here - we'll see more in the subsequent articles, where I hope to cover 
other interesting areas of the Helixoid project, but for now, let's keep 
looking at that recursive search function.

There are several things to note about the run-time behaviour of our search:

1. First, Scala code compiles down to (Java VM) bytecode. So, with a 
good modern JIT and heap manager, it's damn quick. I'm certain we could 
do still better with hand-optimised assembler or C, but I surely don't 
want to do that for a rapid, exploratory-style project such as this.

2. With one exception, we shouldn't be working the heap manager very 
hard with this code. Our maximum recursive depth is 56 (the length of 
the Helixoid half-lead), and we are generating a low number of 
relatively small objects at each stage, each of which will be free for 
collection once the call which allocated them has completed.

3. The exception is the result list. This is a major problem with the 
simple implementation given above: we are collating all our results into 
a big List, which we won't get to look at until the search is finished. 
We'll be waiting a long time, and there's a real danger we'll run out of 
memory trying to store all the results.

4. We're possibly doing a bit more work than we should in the "inner 
loop" of the recursion. For instance, we are calculating explicit PN 
permutations between rows; a standard composition technique is to 
pre-build a table of nodes, representing the entire search space, in 
order to avoid this overhead. By using a "masked" row of XXX.... we have 
avoided some of the cost - we only have a single "row" to generate in 
each node - but not all.

5. Finally, and most importantly, there are ways to prune the branches 
of our search tree. Pruning is the single most important optimisation 
for any tree-search algorithm. Here, we can note that, because of our 
use of a "masked" row, at each step in the recursion there may be 
several PN possibilities which give us the same effective result row. 
For example, from "rounds" (XXX.....) the place notations x, 56, 58, 
1256 and 1258 all give us the same result (XX.X....). These place 
notations will certainly generate different methods, but they will all 
have the same pattern for bells (123) and (45678). In other words, they 
will be trivial variations, and we really only need to list (and search 
for) one of them. There are plenty of other pruning possibilities, too: 
we can restrict the list of place notation (AllPN), and we could look 
for more restrictive method types, such as right-place methods, or 
Double methods.

In the remainder of this article we'll look at points (3) and (5). I'll 
discuss the node table idea in point (4) in the next article.

So, point (3) is the easiest to deal with. My initial design for the 
search() function was intended to show off some functional programming 
with ideas like the flatMap higher-order function. But Scala allows us 
to mix procedural programming with the functional stuff, so we can do 
something really simple here: let's not return anything from our 
function, and simply print each method we find. We get this:

def search(row: Row, pnSoFar: List[PN], rowsUsed: Set[Row])
{
	if (rowsUsed.contains(row))
		return
	else if (pnSoFar.size==HalfLeadLength-1)
		println(new Method(pnSoFar.reverse))
	else
		AllPN.foreach{ (pn) => search(row.apply(pn),
			pn::pnSoFar, rowsUsed+row) }
}

Here the return type (and the =) of the function definition have been 
removed; we use println (the concise Scala equivalent of Java's 
System.out.println) to output methods as we find them; and flatMap() is 
replaced by foreach(), another higher-order function which simply 
applies a function to all members of a collection, discarding any results.

We could have stayed "pure functional" by, for example, passing in a 
function to be called when each Helixoid is found; however, for our 
quick-and-dirty search a hard-wired println() is fine.

Scala does have a very neat and powerful "for" loop, which can be used 
to rephrase the higher-order functions such as filter, map and foreach, 
so we could write the following code instead, which might look more 
familiar than the "foreach":

	for (pn <- AllPN)
		search(row.apply(pn), pn::pnSoFar, rowsUsed+row)

Here we have two ways of expressing the same thing, but it is a nice 
demonstration of how naturally Scala combines the procedural and 
functional paradigms.
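To see the equivalence concretely (with Strings standing in for PN objects), the for loop below visits exactly the same elements as the foreach call:

```scala
import scala.collection.mutable.ListBuffer

val pns = List("x", "14", "58", "18")
val viaFor = ListBuffer[String]()
for (pn <- pns) viaFor += pn           // "for" desugars to a foreach call
val viaForeach = ListBuffer[String]()
pns.foreach { pn => viaForeach += pn }
assert(viaFor.toList == viaForeach.toList)
```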

Now to address point (5). If we can solve this, we've got some chance of 
exhausting the search space in a reasonable length of time.

The first thing to do is to avoid place notations that give a trivial 
variation of a notation we have already tried. One way to do this is to 
use a mutable Set to keep track of all the result rows we have seen so 
far, thus:

	// Declare an empty mutable set of Rows
	val rowsSeenThisTime = mutable.Set[Row]()
	for (pn <- AllPN)
	{
		val newRow = row.apply(pn)
		if (!rowsSeenThisTime.contains(newRow))
		{
			rowsSeenThisTime+= newRow
			search(newRow, pn::pnSoFar, rowsUsed+row)
		}
	}

Alternatively we could have used an immutable Set, and instead of using 
a "constant" variable (defined with "val", and the equivalent of Java's 
"final" keyword), we could use a re-assignable variable (defined with 
"var"):

	// Declare an empty immutable set of Rows
	var rowsSeenThisTime = Set[Row]()
	for (pn <- AllPN)
	{
		val newRow = row.apply(pn)
		if (!rowsSeenThisTime.contains(newRow))
		{
			rowsSeenThisTime+= newRow
			search(newRow, pn::pnSoFar, rowsUsed+row)
		}
	}

Note how the rest of the code does not change at all! The operator += is 
the equivalent of saying
	rowsSeen = rowsSeen+newRow
if rowsSeen is an immutable Set, but becomes a call to the method +=
	rowsSeen.+=(newRow)
if it is mutable. [Observe that "+=" is a method, which does exactly the 
same as the "add" method when applied to a mutable Set:
	rowsSeen.add(newRow)
Scala does not in fact have operators: everything is a method! We can 
define methods with symbolic names, like + or ::, and use them with 
either infix notation, a + b, or dotted notation, a.+(b). We can also 
use the infix syntax for "normal" methods with one parameter: a add b. 
This flexibility may seem frightening to start with - but fear not!]
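A few self-contained examples of the method/operator equivalence (the extra brackets around the Int literals keep the parser happy in older Scala versions, where 1. would read as a Double):

```scala
// infix and dotted notation invoke the very same method
assert((1 + 2) == (1).+(2))
// "normal" one-parameter methods work infix too
assert((3 max 7) == (3).max(7))
// + on an immutable Set is likewise just a method call
assert((Set(1, 2) + 3) == Set(1, 2).+(3))
```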

Whether we code this with mutable or immutable sets, we are likely to 
end up with something pretty quick, since Scala's collection library 
provides specialised, highly efficient implementations for collections 
with only a few elements.

One final improvement to make to the algorithm in this article: let's 
implement a right-place search. To do this, we simply need to filter the 
list of PN candidates to ensure, if a cross was rung in the last change, 
places are made in the next, and vice versa. This little helper function 
sums that up:

	def isRightPlace(thisPN: PN, prevPN: PN): Boolean =
		thisPN.isCross != prevPN.isCross

Here we are relying on the "isCross()" method from the PN class, which 
we can assume is a library routine. The interesting thing to spot here 
is that, if a function takes no parameters, the () brackets are 
optional, so we write "pn.isCross" instead of "pn.isCross()".
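The PN class itself is support code not shown in this article, but a minimal hypothetical sketch of it illustrates the parameterless-method syntax:

```scala
// a hypothetical, cut-down PN - not the actual class used in the search
class PN(val notation: String) {
	// declared without (), so it is called without () too
	def isCross: Boolean = notation == "x"
}

val cross  = new PN("x")
val places = new PN("14")
assert(cross.isCross)
assert(!places.isCross)
```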

We can now apply some more functional magic by filtering our AllPN list 
using the higher-order filter() method of Scala's collection library:

	for (pn <- AllPN.filter{ isRightPlace(_, pnSoFar.head) })

We have made a slight modification to just one line of code, and as if 
by magic a right-place search algorithm has popped out!

Well, we do need to be a little careful here: we probably want to make 
sure the search starts with an x change, and you may have noted the call 
"pnSoFar.head", which might not work if the pnSoFar list is initially empty.

These objections could be handled within the search loop, but in fact it 
is easier just to modify the way we initiate the recursion. Since there 
is no choice of first PN, let's jump immediately to the second change:

	search(Row("XX.X...."), List(PN("x")), Set(Row("XXX.....")))

Here is the complete implementation of our search (with a small bit of 
refactoring to make it neater; in particular we now expect the rowsUsed 
set to contain the row parameter already, and in doing so the 
rowsUsed.contains check can be combined with rowsThisTime.contains):

import scala.collection.mutable

object HelixoidSearcher
{
	val HalfLeadLength = 56
	val AllPN = Set(PN("x"), PN("14"), PN("58"), PN("18"))

	def main(args: Array[String])
	{
		val rounds = Row("XXX.....")
		val startPN = PN("x")
		val row2 = rounds.apply(startPN)
		search(row2, List(startPN), Set(rounds, row2))
	}

	def isRightPlace(thisPN: PN, prevPN: PN): Boolean =
		thisPN.isCross != prevPN.isCross

	def search(row: Row, pnSoFar: List[PN], rowsUsed: Set[Row])
	{
		if (pnSoFar.size==HalfLeadLength-1)
		{
			println(new Method(pnSoFar.reverse))
			return
		}
		val rowsThisTime = mutable.Set[Row]()
		for (pn <- AllPN.filter{isRightPlace(_, pnSoFar.head)})
		{
			val newRow = row.apply(pn)
			if (!rowsThisTime.contains(newRow) &&
				!rowsUsed.contains(newRow))
			{
				rowsThisTime+= newRow
				search(newRow, pn::pnSoFar,
					rowsUsed+newRow)
			}
		}
	}

}

With the four place notations specified, the program above runs to 
completion in a few seconds, searching half a million possible 
rows/place notations, and producing the following output:

-14-18-14-18-58-18-58-58-58-18-18-14-14-18-58-58-18-18-14-14-14-18-14-18-58-18-58- 
-14-18-14-18-58-18-58-18-18-18-58-18-14-18-14-14-14-18-18-18-18-18-18-14-14-18-18- 
-18-14-14-58-58-14-58-18-18-14-18-58-18-18-18-14-18-58-18-18-14-58-14-14-58-58-18- 
-18-14-14-18-18-14-14-18-18-18-14-58-14-18-58-14-58-18-18-18-58-58-18-18-58-58-18- 
-18-14-14-18-18-18-18-14-18-14-18-18-58-18-14-18-14-18-18-18-14-18-58-18-58-14-18- 
-18-58-14-18-14-18-58-18-18-18-58-18-58-18-14-18-18-58-18-58-18-18-18-18-58-58-18- 
-18-18-58-58-18-18-18-18-18-18-58-58-58-18-58-18-14-18-18-18-14-18-14-18-58-18-58- 


Hopefully this introductory article has demonstrated some of the power 
and expressiveness of Scala. It really is a very quick and elegant way 
of bolting together composing programs.

I'll examine its application to some more sophisticated algorithms in 
the next article.

MBD



