Friday, August 22, 2014

Akka Master-Slave Design

This page describes and implements the Master-Slave design for distributed computing using Akka actors.

Overview
Traditional multi-threaded applications rely on accessing data located in shared memory. The mechanism relies on synchronization monitors such as locks, mutexes or semaphores to avoid deadlocks and inconsistent mutable state. Such applications are difficult to debug because of race conditions and incur the cost of a large number of context switches.
The Actor model addresses these issues by using immutable data structures (messages) and asynchronous (non-blocking) communication. The actor model has already been described in the previous post "Scala Share-nothing Actors". This post focuses on the simple master-worker model using the Akka framework 2.3.4.

Master-slave Model
In this design, the "slave" or "worker" actors are initialized and managed by the "master" actor which is responsible for controlling the iterative process, state, and termination condition of the algorithm. The orchestration of the distributed tasks (or steps) executing the algorithm is performed through message passing:
* Activate from master to workers to launch the execution of distributed tasks
* Complete from workers to master to notify completion of tasks and return results.
* Terminate from master to terminate the worker actors.

The first step is to define the immutable messages.

sealed abstract class Message(val id: Int)

case class Activate(i: Int, xt: Array[Array[Double]]) extends Message(i)
case class Completed(i: Int, xt: Array[Array[Double]]) extends Message(i)
case class Start(i: Int = 0) extends Message(i)

The Start message is sent to the master by the client code (external to the master-worker communication) to launch the computation.
The following sequence diagram illustrates the management of the workers' tasks by the master actor through immutable, asynchronous messages.


The next step is to define the key attributes of the master. The constructor takes 4 arguments:
* A time series xt (line 5)
* A transformation function fct (line 6)
* A data partitioner (line 7)
* A method aggr to aggregate the results from all the worker actors (line 8)

 1  type DblSeries = Array[Array[Double]]
 2  type DblVector = Array[Double]
 3
 4  abstract class Master(
 5    xt: DblSeries,
 6    fct: DblSeries => DblSeries,
 7    partitioner: Partitioner,
 8    aggr: (List[DblSeries]) => immutable.Seq[DblVector]) extends Actor {
 9
10    val workers = List.tabulate(partitioner.numPartitions)(n =>
11        context.actorOf(Props(new Worker(n, fct)),
12            name = s"worker_${String.valueOf(n)}"))
13
14    workers.foreach( context.watch(_) )
15    ...
16  }

The master actor creates the list of worker actors, workers, using the higher-order method tabulate (line 10). The master registers each worker actor with context.watch so that it is notified of their termination (line 14).
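The Partitioner class is not defined in this post. Here is a minimal sketch, assuming it merely breaks the range of indices of the time series into contiguous segments of roughly equal size (the class layout and the partition method are illustrative assumptions, not the author's implementation):

  // Hypothetical sketch: breaks the indices [0, numDataPoints) into
  // numPartitions contiguous segments of roughly equal size.
class Partitioner(val numPartitions: Int) {
  def partition(numDataPoints: Int): Array[Range] = {
    val segSize = math.ceil(numDataPoints.toDouble / numPartitions).toInt
    Array.tabulate(numPartitions)(n =>
      n * segSize until math.min((n + 1) * segSize, numDataPoints))
  }
}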

In the implementation of the master's event handler receive below, the Start message triggers the partitioning of the original dataset through a split function (line 3).
Upon completion of their tasks, the workers emit a Completed message to the master (line 6). The master counts the number of workers that have completed their tasks. Once all the workers have completed (condition aggregator.size >= partitioner.numPartitions-1), the master computes the aggregated value (line 8), aggr, then stops all the workers through its context, workers.foreach( context.stop(_) ) (line 9).

 1  override def receive = {
 2      // Sent by client to master to initiate the computation
 3    case s: Start => split
 4
 5      // Sent by workers on completion of their computation
 6    case msg: Completed => {
 7      if( aggregator.size >= partitioner.numPartitions-1) {
 8         val aggr = aggregate.take(MAX_NUM_DATAPOINTS).toArray
 9         workers.foreach( context.stop(_) )
10      }
11      aggregator.append(msg.xt)
12    }
13
14      // Sent by the actor system when a watched worker terminates
15    case Terminated(sender) => {
16        // Wait until the current execution of all workers completes
17      if( aggregator.size >= partitioner.numPartitions-1) {
18         context.stop(self)
19         context.system.shutdown
20      }
21    }
22  }

The Terminated message (line 15), delivered once a watched worker stops, shuts down the master and the global context for all the actors, context.system.shutdown (lines 18 & 19).
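The split method invoked on the Start message is not shown in this post. A minimal sketch, defined inside the Master class and assuming the hypothetical Partitioner.partition method introduced earlier, sends one slice of the time series to each worker through an Activate message:

  // Hypothetical sketch: partition the time series and fire one
  // Activate message per worker with its slice of the data.
def split: Unit = {
  val slices = partitioner.partition(xt.size)
  workers.zip(slices).zipWithIndex.foreach {
    case ((worker, range), n) => worker ! Activate(n, range.map(xt(_)).toArray)
  }
}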

The next step consists of defining the tasks for the worker actors. A worker actor is fully specified by its id and the data transformation fct (lines 2 & 3).

 1  final class Worker(
 2       id: Int,
 3       fct: DblSeries => DblSeries) extends Actor {
 4
 5    override def receive = {
 6       // Sent by master to start execution
 7      case msg: Activate =>
 8        val msgId = msg.id + id
 9        val output = fct(msg.xt)
10        sender ! Completed(msgId, output)
11    }
12  }

The event loop processes only one type of message, Activate (line 7), which executes the data transformation fct (line 9) and returns the result to the master through a Completed message (line 10).

The last step is the implementation of the test application. Let's consider the case of the cancellation of noise on a very large dataset xt executed across multiple worker actors. The dedicated master actor of type NoiseRemover partitions the dataset using an instance of Partitioner and distributes the cancellation algorithm cancelNoise to its worker (or slave) actors. The results aggregation function aggr has to be defined for this specific operation.

def cancelNoise(xt: DblSeries): DblSeries
 
class NoiseRemover(
    xt: DblSeries,
    partitioner: Partitioner,
    aggr: List[DblSeries] => immutable.Seq[DblVector])
 extends Master(xt, cancelNoise, partitioner, aggr)
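The cancelNoise function is only declared above. A minimal sketch, assuming a simple moving average over a window of 3 data points is used to smooth each series (the smoothing scheme is an illustrative assumption, not the author's algorithm):

  // Hypothetical noise cancellation: smooth each series with a
  // simple moving average over a window of 3 data points.
def cancelNoise(xt: DblSeries): DblSeries =
  xt.map(series =>
    series.indices.map(i => {
      val lo = math.max(0, i - 1)
      val hi = math.min(series.size - 1, i + 1)
      series.slice(lo, hi + 1).sum / (hi - lo + 1)
    }).toArray
  )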


The Akka actor context ActorSystem is initialized (line 1). The test driver implements a very simple results aggregation function, aggregate, passed as a parameter of the noise remover master actor, controller (line 4). The reference to the controller is generated by the Akka actor factory method ActorSystem.actorOf (line 8).

 1  val actorSystem = ActorSystem("System")
 2
 3    // Specifies the aggregator used in the master
 4  def aggregate(aggr: List[DblSeries]): Seq[DblVector] =
 5      aggr.transpose.map(_.map(_.sum).toArray).toSeq
 6
 7    // Create the Akka master actor
 8  val controller = actorSystem.actorOf(
 9     Props(new NoiseRemover(xt, partitioner, aggregate)), "Master"
10  )
11
12  controller ! Start(1)

Finally, the execution is started with a "fire and forget" message, Start (line 12).

Friday, August 8, 2014

Bloom Filter in Scala

A brief introduction to the Bloom filter and its implementation in Scala using a cryptographic digest.

Overview
The Bloom filter became a popular probabilistic data structure for membership queries (does an object x belong to a set Y?) a couple of years ago. Its main benefit is to reduce memory consumption by avoiding the allocation of the objects themselves in memory, as a HashSet or hash table would. The compact representation comes with a trade-off: although the filter does not allow false negatives, it does not guarantee that there are no false positives. In other words, a query returns:
- a very high probability that an object belongs to the set
- certainty that an object does not belong to the set
A Bloom filter is quite often used as a front end to a deterministic algorithm.

Note: For the sake of readability of the implementation of algorithms, all non-essential code such as error checking, comments, exceptions, validation of class and method arguments, scoping qualifiers or imports is omitted.

Theory
Let's consider a set A = {a0, ..., an-1} of n elements for which membership queries are executed. The data structure consists of a bit vector V of m bits and k completely independent hash functions, each mapping an element to a position in the bit vector. The assignment (or mapping) of hash functions to bits has to follow a uniform distribution. The diagram below illustrates the basic mechanism behind the Bloom filter. The set A is defined by the pair a1 and a2. The hash functions h1 and h2 map the elements to bit positions (bits set to 1) in the bit vector. The element b has one of its positions set to 0 and therefore does not belong to the set. The element c belongs to the set because its associated positions have bits set to 1.

However, the algorithm does not prevent false positives: a bit may have been set to 1 during the insertion of previous elements, so a query may erroneously report that an element belongs to the set.
The insertion of an element depends only on the k hash functions, so the time needed to add a new element is proportional to k (the number of hash functions) and independent of the size of the bit vector: asymptotic insertion time = O(k). However, the filter requires k bits for each element and is less effective than a traditional bit array for small sets.
The probability of false positives decreases as the number n of inserted elements decreases and the size of the bit vector m increases. The number of hash functions that minimizes the probability of false positives is k = (m/n).ln 2.
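As an illustration (not part of the original implementation), the optimal number of hash functions and the corresponding false positive probability p ≈ (1 - e^(-kn/m))^k can be estimated with a couple of helper functions:

  // Estimates of Bloom filter parameters for a bit vector of m bits
  // and n inserted elements.
object BloomFilterMath {
    // Number of hash functions minimizing the false positive rate: k = (m/n).ln 2
  def optimalNumHash(m: Int, n: Int): Int =
    math.round((m.toDouble / n) * math.log(2.0)).toInt

    // Approximate probability of a false positive for k hash functions
  def falsePositiveRate(m: Int, n: Int, k: Int): Double =
    math.pow(1.0 - math.exp(-k * n.toDouble / m), k)
}

For instance, m = 1,000 bits and n = 100 inserted elements give k = 7 and a false positive rate of roughly 0.8%.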

Implementation in Scala
The implementation relies on the MessageDigest Java library class to generate the unique hash values. Ancillary methods and conditions on method arguments are omitted for the sake of clarity.
The first step is to define the BloomFilter class and its attributes
  • length Number of entries in the filter (line 2)
  • numHashs Number of hash functions (line 3)
  • algorithm Hashing algorithm with SHA1 as default (line 4)
  • set Array of bytes for entries in the Bloom filter (line 6)
  • digest Digest used to generate hash values (line 7)

 1  class BloomFilter(
 2    length: Int,
 3    numHashs: Int,
 4    algorithm: String = "SHA1") {
 5
 6    val set = new Array[Byte](length)
 7    val digest = Try(MessageDigest.getInstance(algorithm))
 8
 9    def add(elements: Array[Any]): Int = { ... }
10    final def contains(el: Any): Boolean = { ... }
11
12    private def hash(value: Int): Int = { ... }
13    private def getSet(el: Any): Array[Int] = { ... }
14  }

The digest is instantiated through the java.security.MessageDigest class of the Java library, wrapped in a Try to capture a potential failure.
The next step consists of defining the methods to add a single generic element, add(any: Any) (line 8), and an array of elements, add(elements: Array[Any]) (line 2).

 1  // Add an array of elements to the filter
 2  def add(elements: Array[Any]): Int = digest.map(_ => {
 3     elements.foreach(el => getSet(el).foreach(idx => set(idx) = 1))
 4     elements.size
 5   }).getOrElse(-1)
 6
 7  @inline
 8  def add(any: Any): Boolean = this.add(Array[Any](any)) > 0
 9
10  final def contains(any: Any): Boolean =
11     digest.map( _ => !getSet(any).exists(idx => set(idx) != 1))
12         .getOrElse(false)

The method contains (line 10) evaluates whether an element is contained in the filter. The method returns
  • true if the filter very likely contains the element
  • false if the filter DOES NOT contain this element
The contains method relies on accessing the entries of the bit set through the recursive getSet method.

 1  private def getSet(any: Any): Array[Int] = {
 2    val newSet = new Array[Int](numHashs)
 3    newSet.update(0, hash(any.hashCode))
 4    getSet(newSet, 1)
 5    newSet
 6  }
 7
 8  @scala.annotation.tailrec
 9  private def getSet(values: Array[Int], index: Int): Unit =
10    if( index < values.size) {
11      values.update(index, hash(values(index-1)))
12      getSet(values, index+1) // tail recursion
13    }


Similarly to the add method, the getSet method has two implementations:
  • Generate a new set of hash values from any new element (line 1)
  • A recursive call that fills the remaining entries of the array of integers (line 9)
The hash method is the core of the Bloom filter: it computes the index of an entry in the bit set.

def hash(value: Int) : Int = digest.map(d => {
  d.reset
  d.update(value)
  Math.abs(new BigInteger(1, d.digest).intValue) % (set.size -1)
}).getOrElse(-1)

The instance of the MessageDigest class, digest, generates a hash value using either the MD5 or SHA-1 algorithm. Tail recursion is used as an alternative to an iterative process to generate the set.

The next code snippet implements a very simple implicit conversion from Int to Array[Byte] (line 5).

 1  object BloomFilter {
 2   val NUM_BYTES = 4
 3   val LAST_BYTE = NUM_BYTES - 1
 4
 5   implicit def int2Bytes(value: Int): Array[Byte] =
 6      Array.tabulate(NUM_BYTES)(n => {
 7        val offset = (LAST_BYTE - n) << LAST_BYTE
 8        ((value >>> offset) & 0xFF).toByte
 9      })
10  }

The conversion relies on the manipulation of bits from a 32-bit integer to 4 bytes (lines 6 - 8). Alternatively, you may consider a conversion from a Long value to an 8-byte array.
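A possible sketch of such a Long conversion, following the same pattern (this variant is not part of the original code):

  // Hypothetical variant: convert a 64-bit Long into an array of 8 bytes,
  // most significant byte first.
implicit def long2Bytes(value: Long): Array[Byte] =
  Array.tabulate(8)(n => ((value >>> ((7 - n) << 3)) & 0xFF).toByte)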

Usage
This simple test consists of checking whether a couple of values are indeed contained in the set. The filter is guaranteed to accept 23, which was inserted, and will very likely, although not certainly, reject 22. If the objective is to confirm beyond doubt that a value belongs to the set, then a full-fledged hash table would have to be used.

val filter = new BloomFilter(100, 100, "SHA")
final val newValues = Array[Any](57, 97, 91, 23, 67,33)  
                                
filter.add(newValues)

println( filter.contains(22) )
println( filter.contains(23) )

Performance evaluation
Let's look at the behavior of the Bloom filter under load. The test consists of adding 100,000,000 new random values, then testing whether the filter contains a value, 10,000 times. The test is run 10 times after a warm-up of the JVM.

final val newValues = Array[Any](57, 97, 91, 23, 67,33)                                  
  // Measure average time to add a new data set
filter.add(Array.tabulate(size)(n => Random.nextInt(n + 1)))

  // Measure average time to test for a value.
filter.contains(newValues(Random.nextInt(newValues.size)))
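The timing harness itself is not shown in the post. A minimal sketch of how the average execution time of each operation could be measured (the warm-up count and the helper name are assumptions):

  // Hypothetical helper: execute a block several times after a warm-up
  // and return the average duration in milliseconds.
def averageTimeMs(iters: Int, warmUp: Int = 2)(block: => Unit): Double = {
  (0 until warmUp).foreach(_ => block)
  val start = System.nanoTime
  (0 until iters).foreach(_ => block)
  (System.nanoTime - start) * 1e-6 / iters
}

averageTimeMs(10) { filter.contains(newValues(Random.nextInt(newValues.size))) }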

The first performance test evaluates the average time required to insert a new element into a Bloom filter whose size ranges from 100 million to 1 billion entries.
The second test evaluates the average search/query time for Bloom filters within the same range of sizes.




As expected, the average time to load a new set of values and to check whether the filter contains a specific value is fairly constant.


References
Bloom filter - Wikipedia
github.com/prnicolas
The Scala Programming Language - M. Odersky, L. Spoon, B. Venners - Artima, 2007