Tuesday, January 5, 2016

Simple class versioning in Scala

This post describes a very simple library versioning scheme using class loaders and jar files identified by their URLs.
Versioning is a common challenge in commercial software development. The most common technique to support multiple versions of a library, executable or framework in Java or Scala relies on the class loader.
A library can be easily versioned by creating multiple jar files, one for each version.

Simple implementation
Let's consider a simple case in which a library is self-contained in a jar file and published in two versions (a hypothetical sketch of such a class follows the list):
  • ArrayIndex1.jar
  • ArrayIndex2.jar
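
The original post does not show the contents of these jars. As a minimal sketch, each jar might package a single class along the following lines; the class name ArrayIndex is taken from the loading code below, while the find method and its behavior are purely illustrative assumptions.

// Hypothetical source compiled and packaged into ArrayIndex1.jar;
// ArrayIndex2.jar would contain a newer revision of the same class.
// The find method below is an illustrative assumption, not part of the original post.
class ArrayIndex {
  // Return the index of the first occurrence of value, or -1 if absent
  def find(values: Array[Int], value: Int): Int = values.indexOf(value)
}

Each version would then be compiled and packaged into its own jar (for instance with scalac followed by jar cf ArrayIndex1.jar ArrayIndex*.class), so that the two versions can coexist on disk and be selected at run time.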

import java.net.{URL, URLClassLoader}
import java.io.File
import java.lang.reflect.Method

// Version of the library required by the client code
val version = 1.0

// One class loader per versioned jar file, each chained to the
// context class loader of the current thread
val cl1 = new URLClassLoader(
  Array[URL](new File("ArrayIndex1.jar").toURI.toURL),
  Thread.currentThread().getContextClassLoader()
)
val cl2 = new URLClassLoader(
  Array[URL](new File("ArrayIndex2.jar").toURI.toURL),
  Thread.currentThread().getContextClassLoader()
)

// Select the class loader, and therefore the version of the library,
// that matches the version required by the client
val compatibleClass: Class[_] =
  if (version > 0.9)
    cl1.loadClass("ArrayIndex")
  else
    cl2.loadClass("ArrayIndex")

// Instantiate the loaded class and list its methods through reflection
val obj = compatibleClass.newInstance
val methods: Array[Method] = compatibleClass.getMethods
println(methods.map(_.getName).mkString(","))

The first step consists of loading the two versions of the library by converting each jar file name to a URL and instantiating a class loader of type URLClassLoader for each of them. The jar files are loaded within the current thread, using its context class loader as parent; a more efficient approach would consist of creating a Future to load the classes asynchronously.
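
Below is a minimal sketch of that asynchronous variant, assuming the same jar files and class name as above; the loadAsync helper is not part of the original post.

import java.io.File
import java.net.{URL, URLClassLoader}
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical helper: load a class from a versioned jar without blocking
// the calling thread
def loadAsync(jarName: String, className: String): Future[Class[_]] = {
  // Capture the parent class loader on the calling thread
  val parent = Thread.currentThread().getContextClassLoader()
  Future {
    val loader = new URLClassLoader(
      Array[URL](new File(jarName).toURI.toURL),
      parent
    )
    loader.loadClass(className)
  }
}

// Usage: select the jar according to the required version
val futureClass = loadAsync("ArrayIndex2.jar", "ArrayIndex")
futureClass.foreach(cls => println(s"Loaded ${cls.getName}"))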
The class to be used depends on the variable version. Once the appropriate class is loaded, an instance and its methods are readily available for the client to invoke.
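
As a final illustration, the loaded class can be exercised through the reflection API. The sketch below assumes the hypothetical find(Array[Int], Int) method from the earlier ArrayIndex sketch.

// Retrieve the hypothetical find method by name and parameter types,
// then invoke it on an instance of the dynamically loaded class
val instance = compatibleClass.newInstance.asInstanceOf[AnyRef]
val findMethod: Method =
  compatibleClass.getMethod("find", classOf[Array[Int]], classOf[Int])

// Reflection passes boxed arguments and returns the result as AnyRef
val index = findMethod.invoke(instance, Array(3, 7, 11), Int.box(7))
println(s"Index of 7: $index")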
