Harvesting Values: a workshop at SAFIM

Last month, I attended the workshop on Harvesting Values by SAFIM (Sri Aurobindo Foundation of Integral Management) at the Sri Aurobindo Society, Pondicherry. It was a great workshop and I can say for sure that I learnt a lot. To enumerate the learnings:
  1. I had my aha moments when I discovered how values, as expressed in behaviour and conduct, map back to Universal Ideals.
  2. I am now equipped with a set of tools and guidelines with which I can cross-examine the decisions (positive or negative) I made or make, see whether they stemmed from positive or negative values, and develop an eye for observing the shifts in consciousness.
  3. I also learnt how to understand my psychological dispositions in terms of ideals and use that to better my life: to see which ideals I want to work upon, and to take up work where the psychological content of the job matches my inner dispositions, so that it allows me to express myself fully and thus helps me progress.
  4. Finally, I learnt that I can improve my personal retros using a more comprehensive set of questions - it contains some questions I have never asked, and some that have always been pushed down my priority list by more pressing ones that needed to be addressed at the time.

The workshop started out with the usual ice-breaker to get the participants introduced to each other, and we were divided into two groups. The session began with the necessity of values and the difference between values and ethics, as these two terms generally get used interchangeably. It then proceeded to defining values - just jotting down what I captured in my notes -

Values are stable inner dispositions of a human being which nurture good human beings from within.

Saikat Sen further elaborated that moral conduct is only an expression and sign of the soul-state, and that the general understanding of morality is bound by outward conduct.

We were then shown a clip from the Hindi movie Corporate. In the clip, a corporate meeting is in progress to make a go/no-go decision on the launch of a soft drink the company had made; it was found to contain pesticides and had not received FDA approval. In that top-level meeting various members voiced their concerns - one put his foot down and said it could not happen - but the boss made an executive decision to launch it despite the problems.

After watching the clip, we were asked to analyse each character in terms of the values they exhibited in their conduct during the meeting and, based on that, whether they took a positive or a negative decision on the go/no-go for the product launch. Everybody was then asked to share their analysis with the rest.

We were then introduced to positive values (for example - humility, gratitude, sincerity, courage, self-scrutiny, trust etc…) and negative values (greed, envy, anger, fear, cunning, fault-finding, backbiting, jealousy, vindictiveness etc...) and the decisions one makes based on them. Saikat gave a nice graphical representation of it, here it goes -

                     ^                                
                     |  (+ve)                         
                     |  Values                        
                     |                                
                     |                                
                     |                                
                     |                                
                     |                                
  <------------------+------------------>
(-ve)                |                    (+ve)    
Decisions            |                    Decisions
                     |                                
                     |                                
                     |                                
                     |                                
                     |  (-ve)                         
                     v  Values                        

 

Just to elaborate the above quadrants - it is not surprising to see individuals with positive values making positive decisions and individuals with negative values making negative decisions. What is really surprising is to see the other two diagonally opposite quadrants -

  1. Individuals with positive values making negative decisions, and
  2. Individuals with negative values making positive decisions

Example for #1
To help grasp the first, one classic example of individuals with positive values making a negative decision comes from the chapter Peeps into the Mythology - Mahabharata (Foundations of Managerial Work: Contributions from Indian Thought by S. K. Chakraborty): the character of Dronacharya, the mentor of the Pandavas and Kauravas. After they finished their training under him, he gave Arjuna the supreme Brahmastra. Ashwatthama, Drona's own son and a Kauravite, was denied this. But under severe pressure from Ashwatthama, he gave away another piece of the Brahmastra to his son. Initially, Drona could discriminate in favour of Arjuna and against Ashwatthama on the ground of values. As a mentor he was aware that his son was on par with Arjuna in terms of skill, but that his value-system was in a mess, and he knew that such powerful tools in the hands of people with weak values but strong skills are bound to be used destructively. Though the mentor in Drona initially snubbed and bridled the father in him, later the man of will power and wisdom succumbed to familial emotions. He could not retain the objectivity of values in decision making and got caught in the quicksands of subjectivity.

Talking of similar situations in present-day life, there are the Satyam and WorldCom scams. These scams have nothing to do with IQ; in fact, the people behind them had very high IQs. The quality of a decision is determined not by IQ but by consciousness.

Example for #2
Read the section Early life on http://en.wikipedia.org/wiki/Valmiki. As a robber, when he asked his family to bear the burden of his sins, the family refused, and it dawned on him that he needed to change his life. This is a case of negative values and a positive decision.

Using the above graphical representation, we were introduced to the session on “Conflict of Values”. We were then shown another clip, from the Hindi movie Rocket Singh, where the protagonist goes on his first sales assignment and is asked for kick-backs by the manager. We were asked to analyse it in terms of values and the decision made. Further, each participant was then presented with several case studies and asked to classify them. It was quite an interesting day.

One of the things that I distilled out of this was: if I used the above as a tool and applied it to every decision I made, then over a period of time I would find myself dotted across all the quadrants. This trending, I believe, will be an eye-opener, as it will help me see how my values and decisions shift over time. Generally, every one of us (barring exceptions) has both negative and positive values and, based on how conscious we are at that moment, takes a decision that could be either negative or positive.

That is precisely why Sri Aurobindo says -

To be able to do the right thing in the right way, in each case and at every moment, one must be in the right consciousness.

Day 2:

It started with a small meditation (as usual) and then we were shown another Hindi movie clip - Do Dooni Chaar. The distillation of it was this: Professor Duggal did not realise that his consciousness shifted when a demand was placed on him by his sister and wife to buy a car, despite it being out of budget. He decides to pass a failing student in exchange for money. But when he is about to do that, he meets a past student who appreciates him and expresses gratitude for changing his life and making him a great person. The Professor is quite perplexed, as he has now been pulled into the higher part of his consciousness by his student. He immediately decides not to take the money, walks out of the room clean, and is now very clear about his purpose in life.

Quoting straight from the notes -

The quality of a person's consciousness depends on which part of the consciousness s/he lives in. There are two parts to our consciousness. The first is the lower physical-vital being, driven predominantly by self-interest, material needs and sensuous desires, quite often degenerating into greed. The second is the higher mental, moral and spiritual being, seeking truth, beauty, goodness, harmony and unity. For values to be truly effective and enduring, they have to be based on this higher part of our human nature or consciousness.

We were then introduced to Universal Ideals… well, what are they? Ideals are a set of values, and these are universal. This is where I had my aha moment: I could see the direct mapping of these values as expressed in behaviour and conduct. For example, if in a person we consistently observe that (s)he is honest, transparent, and deals with other people or situations according to the deeper truth in them, then one can say that the individual has the ideal of Truth manifested.

I further believe that an individual is born with a few of the universal values (an imprint of nature) and some are to be cultivated along the journey of life through experiences. In order to make our life more meaningful, we need to live those values on a day-to-day basis and express them in our field of work and in our personal life alike. I reject the notion that professional life and personal life are different; they are not, as it is the same person taking part in both. Because it is the same person, how can the ideals be different? If they are, the person will be torn apart.

The next session was quite amazing; it was entitled “Ordinary Persons - Extraordinary Values”. We were shown video clips of various people; I am embedding only 3 of the many clips -

  • Dashrath Manjhi (http://en.wikipedia.org/wiki/Dashrath_Manjhi)
  • Rangaswamy - Auto Rickshaw Driver
  • Suhasini Mistry - Vegetable Seller
If you observe the videos carefully, you will realise that they all had a common constraint - resources - yet they managed to achieve what they set out to do. We were then asked to identify the values with which they sustain what they are doing:
  1. Courage
  2. Perseverance
  3. Sincerity
  4. Selflessness
  5. Service/Contribution to the progress
The above stand out very clearly in all the cases. They are expressions of the ideals of Strength and Force, and of Unity and the wholeness of life.

The workshop concluded by stressing the need for the inner cultivation of values. This is because, again from my notes…

We realise more and more that no amount of fiddling with outer structures and systems is going to solve the problems. The root causes are within. Only a change in our consciousness, attitudes and values can bring about a lasting solution.

In my view, imbibing values is an osmotic process; they cannot be preached. If preached, they just remain abstract in the upper layers of the mind, giving rise to hypocrisy, pretension and duplicity. What we need is to internalise values, and that requires us to embrace a psychological discipline that cultivates them... and, alongside, to re-orient and integrate the whole being around values. We need to constantly retrospect, change and adapt. In the final session, Inner Cultivation of Values, we were given a comprehensive set of questions that would help an individual conduct personal retros very holistically.

I stayed back for 2 more days after the workshop to award myself some quiet time to reflect on the workshop proceedings. During this quiet time, I could trace a few ideals right back to childhood and was glad to observe that I have preserved them to date. To my surprise, I found that these universal ideals were expressed in my professional work (creating software), or that I was striving to express them. I also found quite a few ideals that I am far away from, and I recognise the need to cultivate them… so now I know what I’d like to see in myself next :-)


Functional Conf 2014 Call for Papers...

Delighted to announce the first Functional Programming conference in Asia. Functional Conf will be hosted in Bangalore, India on Oct 9-11th. This is your golden opportunity to meet the Functional Programming community.

For over 35 years, functional programming has been a hot research topic. However, in the last 5 years, driven by the need to build massively concurrent systems and to handle big data, we've seen a rapid adoption of functional programming concepts by diverse companies, ranging from tech start-ups to financial institutions.

These days, functional programming is at the heart of every new-generation programming technology. Companies are employing functional programming to enable more effective, robust and flexible software development. This has given birth to a very vibrant community of functional programmers, who are constantly exploring ways to bring functional programming concepts to the world of enterprise software development.

Functional Conf is designed to bring the growing community of functional programmers together under one roof. At Functional Conf:
  • participants can understand the fundamental concepts behind functional programming,
  • they can learn how others are using functional programming to solve real world problems,
  • practitioners can meet peers and exchange their experience,
  • experts can share their expertise on practical usage and gotchas in functional programming concepts.
If you are interested in presenting at Functional Conf, please submit your proposals at http://confengine.com/functional-conf-2014. To know more about the conference, please visit http://functionalconf.com.


Scanning Annotations in Classpath and Sealed Factories

To give a bit of context: on my earlier project, Midas, we developed transformation functions (that's what we called them) that quite closely resemble MongoDB's aggregation-projection framework functions for arithmetic operations (add, subtract, ...), string operations (concat, tolower, ...), etc... Here is an example.
db.orders.transform('totalAmount', '{ $subtract: ["$totalAmount", { $multiply: ["$totalAmount", 0.2] } ] }')

We parse the above using Scala's Parser Combinators and convert them to domain model objects which are sub-types of Expression. Each expression is either a Literal, a Field expression or a Function. We have a fairly wide and sufficiently deep hierarchy of Functions, which is depicted below.

                           +----------+
                           |Expression|
                           +----------+
                      ______/   |    \_____
                     /          |          \
                +-------+   +-------+   +--------+
                |Literal|   | Field |   |Function|
                +-------+   +-------+   +--------+
                        ________________/    |   \_________________
                       /                     |                     \
              +------------------+        +--------------+      +------------+
     _________|ArithmeticFunction|        |StringFunction|      |DateFunction|
    /         +------------------+        +--------------+      +------------+
   /       ______/   |    \___  \_____         |   |____ \_______
  /       /          |        \       \        |        \        \
+---+ +--------+ +------+ +--------+ +---+  +-------+ +-------+ +------+
|Add| |Multiply| |Divide| |Subtract| |Mod|  |ToLower| |ToUpper| |Concat|
+---+ +--------+ +------+ +--------+ +---+  +-------+ +-------+ +------+
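A minimal Scala sketch of the top of this hierarchy (my reconstruction for illustration only - the actual Midas definitions may differ; the Function branch appears later in this post) could look like:

import org.bson.BSONObject

// Illustrative sketch: every Expression evaluates against a document to a Literal.
sealed trait Expression {
  def evaluate(document: BSONObject): Literal
}

// A constant value, e.g. 0.2 in the example above
case class Literal(value: Any) extends Expression {
  def evaluate(document: BSONObject) = this
}

// A reference to a document field, e.g. "$totalAmount"
case class Field(name: String) extends Expression {
  def evaluate(document: BSONObject) = Literal(document.get(name))
}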
In our case, we need to be able to support various arithmetic functions, string functions, date functions and so on. We also do not know how many more functions we will need to support in the future. Obviously, adding that functionality would manifest as classes added to the hierarchy, which in turn means that I have to modify the factory function each time a new class gets added to the Function hierarchy. The factory function looks something like this.
def fn: Parser[Expression] = fnName~":"~fnArgs ^^ {
  case "add"~":"~args      =>  Add(args: _*)
  case "subtract"~":"~args =>  Subtract(args: _*)
  case "multiply"~":"~args =>  Multiply(args: _*)
  case "divide"~":"~args =>  Divide(args: _*)
  case "concat"~":"~args   =>  Concat(args: _*)
  ...
  ...
}
So, in other words, the factory method is not sealed against these kinds of changes. The goal is to get to a place where the case statements don't exist. As pointed out many years back in one of my earlier blog posts, Achieving Explicit Closure for a Simple Factory using Annotations, I decided to seal the factory method so that this concern is taken away from the developer.

The approach is to mark the behaviour-implementing sub-types of Function, pick them up from the classpath and store them in a cache so that they can be instantiated when required. As we are on the JVM, we use a custom annotation, @FunctionExpression, to mark the Function sub-types.

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface FunctionExpression {
    Class value();
}
Here is how it is applied to a Function sub-type.
@FunctionExpression
final case class ToLower(expression: Expression) extends StringFunction(expression) {
  def evaluate(document: BSONObject) = {
    val string = value(expression.evaluate(document))
    Literal(string.toLowerCase)
  }
}
Next, we need to scan the classpath for all the classes annotated with the @FunctionExpression annotation. It turns out that using reflection to check for the presence of the annotation is not only expensive time-wise, but also loads the class into memory, causing the heap to grow.
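For contrast, a naive reflection-based check (a sketch, not the Midas code) would look something like this - note how Class.forName has to load every candidate class just to inspect it:

// Naive approach, shown only for comparison: loads each class into the JVM.
def annotatedClassesViaReflection(classNames: Set[String]): Set[Class[_]] =
  classNames
    .map(Class.forName(_))                                        // loads the class, growing the heap
    .filter(_.isAnnotationPresent(classOf[FunctionExpression]))   // runtime-retained, so visible here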

ASM from ObjectWeb is a bytecode manipulation and analysis library that does not need to load the class into the JVM and is very performant at doing such an analysis. There are also many open-source frameworks, like Scannotation or Reflections, that use ASM to scan annotations on the classpath. Scannotation is not under active development, and I did not need most of the features of Reflections for such a small piece of work. So instead of using these frameworks, I just wrote a custom AnnotationScanner that uses ASM's ClassVisitor.

class AnnotationScanner(pkg: String, annotationClass: Class[_]) {
  private val fsSlashifiedPkg = fsSlashify(pkg)

  private val slashifiedPkg = slashify(pkg)

  private val slashifiedAnnotation = slashify(annotationClass.getName)

  private val classLoader = AnnotationScanner.this.getClass.getClassLoader

  private val pkgURI = classLoader.getResource(slashifiedPkg).toURI

  private var startDir: Path = null

  val pkgURIString = pkgURI.toString
  if(pkgURIString.startsWith("jar")) {
    val (jar, _) = pkgURIString.splitAt(pkgURIString.indexOf("!"))
    val jarUri = URI.create(jar)
    import scala.collection.JavaConverters._
    FileSystems.newFileSystem(jarUri, Map[String, AnyRef]().asJava)
  }
  startDir = Paths.get(pkgURI)

  private val dirWalker = new DirWalker(startDir, Pattern.compile(".*\\.class$"))

  private def fsSlashify(string: String) = string.replaceAllLiterally(".", File.separator)

  private def slashify(string: String) = string.replaceAllLiterally(".", "/")

  private def classesInPackage: Set[String] = {
    dirWalker.walk map { file =>
      val index = if(pkgURIString.startsWith("jar"))
                    file.indexOf(slashifiedPkg)
                  else
                    file.indexOf(fsSlashifiedPkg)

      val className = file.substring(index)
      className.replaceAllLiterally(".class", "")
    }
  }

  private def hasAnnotation(annotationClass: Class[_], className: String): Boolean = {
    val slashifiedClassName = fsSlashify(className)
    var foundAnnotation = false
    val cv = new ClassVisitor(Opcodes.ASM4) {
      // Invoked when a class level annotation is encountered
      override def visitAnnotation(desc: String, visible: Boolean): AnnotationVisitor = {
        val annotation = desc.substring(1, desc.length - 1)
        if (annotation == slashifiedAnnotation)
          foundAnnotation = true
        super.visitAnnotation(desc, visible)
      }
    }
    val in = classLoader.getResourceAsStream(slashifiedClassName + ".class")
    try {
      val classReader = new ClassReader(in)
      classReader.accept(cv, 0)
    } catch {
      case _: Throwable =>
    } finally {
      in.close()
    }
    foundAnnotation
  }

  private def dotify(string: String) = string.replaceAllLiterally("/", ".").replaceAllLiterally("\\", ".")

  def scan = {
    val classes = classesInPackage
    classes.filter(className => hasAnnotation(annotationClass, className)).map(dotify)
  }
}
Here is the DirWalker, implemented using Java 7's Paths API, which recursively walks down from a start directory.
class DirWalker (startDir: Path, collectFilesRegex: Pattern)  {
  private val files = scala.collection.mutable.Set[String]()

  private val visitor = new SimpleFileVisitor[Path] {
    override def visitFile(path: Path, mainAtts: BasicFileAttributes) = {
      val file = path.toAbsolutePath.toString
      val matcher = collectFilesRegex.matcher(file)
      if(matcher.matches) {
        files += file
      }
      FileVisitResult.CONTINUE
    }

    override def visitFileFailed(path: Path, exc: IOException) = {
      log.info(s"Continuing Scanning though visiting File has Failed for $path, Message ${exc.getMessage}")
      FileVisitResult.CONTINUE
    }
  }

  def walk = {
    files.clear
    Files.walkFileTree(startDir, visitor)
    files.toSet
  }
}
Finally, I need a place from where I can use the AnnotationScanner. What better place than under Function itself, as a singleton factory for creating its sub-types.
sealed abstract class Function(expressions: Expression*) extends Expression {
  override def toString = s"""${getClass.getSimpleName}(${expressions mkString ", "})"""
}

object Function {
  lazy val functions = new AnnotationScanner("com.ee.midas", classOf[FunctionExpression])
    .scan
    .map { className =>
      val clazz = Class.forName(className).asInstanceOf[Class[Function]]
      clazz.getSimpleName.toLowerCase -> clazz
    }
    .toMap
    .withDefaultValue(classOf[EmptyFunction])

  def apply(fnName: String, args: Expression*): Function = {
    val fnClazz = functions(fnName.toLowerCase)
    val constructor = fnClazz.getConstructor(classOf[Seq[Expression]])
    log.debug(s"Instantiating Class $fnClazz...")
    constructor.newInstance(args)
  }
}
Eventually, the sealed parser now looks like this -
  def fn: Parser[Expression] = fnName~":"~fnArgs ^^ { case name~":"~args => Function(name, args: _*) }
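With this in place, the factory no longer changes when the hierarchy grows. A new function - ToTrim below is a hypothetical example I am making up purely for illustration, not something in Midas - would just be another annotated sub-type that the classpath scan discovers:

// Hypothetical new function: no change to the Function factory is needed,
// the annotation scan picks it up automatically.
@FunctionExpression
final case class ToTrim(expression: Expression) extends StringFunction(expression) {
  def evaluate(document: BSONObject) = {
    val string = value(expression.evaluate(document))
    Literal(string.trim)
  }
}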


Memoization to the Rescue

Context:

On my previous project, we wanted to transform a MongoDB document based on what the user specified. Say, for example, we are offering a 20% discount on totalAmount; then we'd expect the user to say something like -

# Our Custom DSL:
#
# Transform a field by using a built-in Arithmetic Operators.
# It takes in an input field and an expression containing built-in 
# arithmetic function and returns the result by setting it on the
# output field.
#
# syntax: 
# db.things.transform("outputFieldName", "{ $add: ["$age", 1] }")
##################################################################

db.orders.transform('totalAmount', '{ $subtract: ["$totalAmount", { $multiply: ["$totalAmount", 0.2] } ] }')

Apart from the transform operation, there are many other operations that we offer to the user. For each operation we produce a corresponding lambda expression. Many such operations specified by the user result in a collection of lambda expressions that are applied sequentially, one at a time, to transform the existing document (see the sketch below).
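As a rough sketch (assuming the org.bson.BSONObject type used throughout this post), this sequential application is just a left fold of the document through the list of lambdas:

import org.bson.BSONObject

// Each operation is a BSONObject => BSONObject; applying them one at a time
// is a fold of the document through the pipeline.
def applyAll(transforms: Seq[BSONObject => BSONObject], document: BSONObject): BSONObject =
  transforms.foldLeft(document)((doc, transform) => transform(doc))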

The lambda expression for the above transform operation looked something like this.
  def transform(outputField: String, expressionJson: String) : BSONObject => BSONObject = {
    ((document: BSONObject) => {
      try {
        val expression: Expression = parse(expressionJson)
        val literal = expression.evaluate(document)
        document + (outputField, literal.value)
      } catch {
        case t: Throwable => injectException(document, outputField, t)
      }
    })
  }
As you can see in the above code, we return a lambda expression that goes from document to document (BSONObject). Look at the line val expression: Expression = parse(expressionJson), where we parse expressionJson, a String, into an Expression tree; the tree is then evaluated by consuming the document. This happens each time the lambda is called. This is a serious problem... why? Because,
  1. Each time the lambda is invoked, parsing the expression JSON string into an expression tree is an expensive computation
  2. Parsing errors are not known until run-time; ideally I would like to know them when my DSL is parsed
Solution:

I could move the parse call outside the lambda and address both the above problems; that is, parsing would happen once and the closure would close over the expression. This means the effectively cached expression tree would be available each time and there would be no performance penalty for parsing. It would also throw parse errors at DSL compile time, where I map a user operation to a lambda expression by calling the above transform function.

Forces:
  1. Preventing Force

    In order for me to move to such a solution, there were impediments - we allow users to change the DSL at run-time, which means the new transformation lambdas are regenerated on-the-fly; the way the application worked this out was to generate a textual Scala file, compile it and hot-deploy the code into the running JVM. So doing the parsing outside the body of the returned lambda meant losing the reference to the expression tree generated during DSL compilation. The only option was to keep the parsing call within the body of the lambda.

  2. Pushing Force

    The obvious pushing force is performance: parsing again and again for every document in the collection is an expensive computation to repeat. Imagine thousands of documents and many such transform operations - this would make it hellishly slow to respond to requests.

Which Force to tackle?:

Well, it turns out that the Pushing Force was easier to nullify than the long-term solution needed to resolve the Preventing Force. This is where memoization comes to the rescue. Simply put, if you have a pure function (read: side-effect free), then calling it again and again with the same set of inputs results in the same output. This means we can optimise the program by calling the function once, doing the computation and caching its result; for future calls, the result is returned directly from the cache, avoiding that expensive computation. Thus, memoization gives us the ability to compute once and use many times - a nice technique that trades space for time.

So below is the same function after applying memoization to it.
  def transform(outputField: String, expressionJson: String) : BSONObject => BSONObject = {
    ((document: BSONObject) => {
      try {
        val expression: Expression = Memoize(parse)(expressionJson)
        val literal = expression.evaluate(document)
        document + (outputField, literal.value)
      } catch {
        case t: Throwable => injectException(document, outputField, t)
      }
    })
  }
As Scala does not offer memoization out of the box, you can easily hand-roll one or use libraries like Scalaz.
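For instance, a quick sketch with Scalaz (assuming Scalaz 7's scalaz.Memo helpers) makes memoizing a function a one-liner; the hand-rolled version we actually used follows below.

import scalaz.Memo

// Memoized square: the first call computes and caches, later calls hit the cache.
val memoizedSquare: Int => Int = Memo.mutableHashMapMemo { x: Int => x * x }

memoizedSquare(4) // computed: 16
memoizedSquare(4) // served from the cache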

In the code below, I use a simple mutable map to cache the result of evaluation, using the function argument as the key. Memoize is a function, as it extends Function1, taking a single argument T1 and returning a result R. It is a class at the same time, and its apply method first checks for the presence of the argument in the cache: if the argument is found, it returns the result of the previously stored computation; else it evaluates the function, caches its result and then returns it to the caller.

import scala.collection.mutable.Map

class Memoize[-T1, +R] private (fn: T1 => R) extends (T1 => R) {
  private[this] val cache = Map[T1, R]()

  def apply(arg: T1): R = {
    if(cache.contains(arg)) {
      cache(arg)
    } else {
      val result = fn(arg)
      cache += ((arg, result))
      result
    }
  }
}

object Memoize {
  def apply[T1, R](fn: T1 => R) = new Memoize[T1, R](fn)
}
Below are the tests that were used to flesh it out. We used the Specs2 framework.
@RunWith(classOf[JUnitRunner])
class MemoizeSpecs extends Specification {
  "Memoizer" should {

    def square(x: Int) = x * x

    "evaluate a function correctly" in {
      //Given
      val memoizedSquare = Memoize(square)

      //When
      val x = 2

      //Then
      memoizedSquare(x) mustEqual square(x)
    }

    "ensures it calls a function with same arguments just once and memoizes for later use" in {
      //Given
      var called = 0
      def aFunction(x: Int): (Int, Int) = {
        called += 1
        (x, called)
      }

      val memoizedFunction = Memoize(aFunction)

      //When
      val x = 1
      val resultOnce = memoizedFunction(x)
      val resultTwice = memoizedFunction(x)

      //Then
      resultOnce mustEqual (x, 1)
      resultTwice mustEqual (x, 1)
    }

    "calls a function with different arguments just once and memoizes for later use" in {
      //Given
      var called = 0
      def aFunction(x: Int): (Int, Int) = {
        called += 1
        (x, called)
      }

      val memoizedFunction = Memoize(aFunction)

      //When
      val x = 1
      val resultXOnce = memoizedFunction(x)
      val resultXTwice = memoizedFunction(x)
      
      val y = 2
      val resultYOnce = memoizedFunction(y)
      val resultYTwice = memoizedFunction(y)

      //Then
      resultXOnce mustEqual (x, 1)
      resultXTwice mustEqual (x, 1)

      resultYOnce mustEqual (y, 2)
      resultYTwice mustEqual (y, 2)
    }
  }
}
So, in the Memoize code above, you may have wondered what the use of private[this] is on the cache declaration. Why can't I just use private here?

When I use private here, I get the following two compile errors:

  1. contravariant type T1 occurs in invariant position in type => scala.collection.mutable.Map[T1,R] of value cache
    private val cache = Map[T1, R]()
  2. covariant type R occurs in invariant position in type => scala.collection.mutable.Map[T1,R] of value cache
     private val cache = Map[T1, R]()

To be honest, I had to fight this one out; the compiler would simply not accept private, and googling did not help much either. Eventually, I could not find a better explanation than the one in Martin Odersky's book - Programming in Scala, Chapter 19, Section 19.7. I am inlining it here for reference:

....You might wonder whether this code passes the Scala type checker. After all, queues now contain two reassignable fields of the covariant parameter type T. Is this not a violation of the variance rules? It would be indeed, except for the detail that leading and trailing have a private[this] modifier and are thus declared to be object private.

As mentioned in Section 13.5, object private members can be accessed only from within the object in which they are defined. It turns out that accesses to variables from the same object in which they are defined do not cause problems with variance. The intuitive explanation is that, in order to construct a case where variance would lead to type errors, you need to have a reference to a containing object that has a statically weaker type than the type the object was defined with. For accesses to object private values, however, this is impossible.

Scala's variance checking rules contain a special case for object private definitions. Such definitions are omitted when it is checked that a type parameter with either a + or - annotation occurs only in positions that have the same variance classification. Therefore, the code below compiles without error.

On the other hand, if you had left out the [this] qualifiers from the two private modifiers, you would see two type errors:...

So, private[this] is really needed here, because the mutable Map[T1, R] is invariant in both its type parameters.
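As an aside (my own observation, not from the book): if you do not need Memoize to be contravariant in T1 and covariant in R, dropping the variance annotations altogether also sidesteps the issue, because an invariant type parameter may appear freely in the type of a plain private field.

import scala.collection.mutable.Map

// Invariant variant: no +/- annotations, so a plain `private` cache compiles fine.
class InvariantMemoize[T1, R] private (fn: T1 => R) extends (T1 => R) {
  private val cache = Map[T1, R]()

  def apply(arg: T1): R = cache.getOrElseUpdate(arg, fn(arg))
}

object InvariantMemoize {
  def apply[T1, R](fn: T1 => R) = new InvariantMemoize[T1, R](fn)
}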

Just to complete this post: eventually we tackled the Preventing Force as well, and we no longer hot-deploy code by generating a text file, compiling it and hot-deploying the bytecode into the JVM. Now the transform method finally looks like -

def transform(outputField: String, expressionJson: String) : BSONObject => BSONObject = {
  val expression: Expression = Try { parse(expressionJson) } match {
    case scala.util.Success(expr) => expr
    case scala.util.Failure(failure) => throw failure
  }
  ((document: BSONObject) => {
    try {
      val literal = expression.evaluate(document)
      document + (outputField, literal.value)
    } catch {
      case t: Throwable => injectException(document, outputField, t)
    }
  })
}


Alienating Atmosphere

About 3 weeks back, I was at the Agile India 2014 conference in Bangalore. I heard Martin Fowler deliver his keynote "Software Design in the 21st Century", which he broke into a series of three different talks. In the last talk, he left the audience with thoughts to ponder. Among the things he touched upon was "Dark Patterns" - as a developer developing software, one needs to become a user advocate and push back on the dark patterns product owners ask us to implement. It is the moral duty and social responsibility of the developer to do that.

Finally, he talked about the Alienating Atmosphere - a title he chose to give towards the end of the talk. One of the examples he gave was the gender discrimination that we see at the workplace. I then started to think about other forms of this that I have seen, and continue to see, in my career.

  • Sexism - for example, in some organizations, to go up the hierarchy, the men take the elevators and the women take the stairs or vice-versa.
  • Favouritism - selective treatment of people because they readily align with your preferences or behaviour.
  • Communalism - preferring to work with people that have a similar communal/linguistic background. In meetings, at lunch tables, etc., using one's native language when communicating with a select few rather than using the business language - this is what I call conveniently ignoring the other participant(s).
  • Punditism - Veterans! Rein-in your horses.
  • Colorism - very self-explanatory, isn't it?

I think the root cause of an alienating atmosphere is affinities. People tend to align themselves by affinities. These affinities create fractures on the face of the collective, and they have effects worse than we can imagine. Affinities are responsible for disrespectful behaviour - what my wife calls "silent insults" - towards the other person. These "silent insults", when hurled, leave an indelible impression on the person at the receiving end. While the individual is already a victim, it may create micro-fracture(s) in the collective, polluting the harmony. These fractures can grow wide and deep when not taken care of right from the beginning, and organisations will not be able to realise their full potential.

At an individual level, depending on the nature of the person at the receiving end, one can pardon it once in a while, but if it is frequent it becomes intolerable. This can result in either of two outcomes - retaliate (like a revolutionary) or abandon the collective. In some situations, abandoning is not an option. At a collective level, it either bonds the entire organisation or causes its gradual disintegration.

What can be the possible solutions? I don't have all the answers, but one that comes right off the bat is - Inclusiveness. Inclusiveness at all levels. If you come across any other, please ping back.


Midas - On-the-fly schema migration tool for MongoDB Released!

Indeed, I feel happy to announce that we have released Midas, an on-the-fly schema migration tool for MongoDB. This is a moment of rejoicing for my team and myself here at EqualExperts, where I am currently heading the R&D efforts. This is the second open-source product after Tayra, an incremental backup and restore tool for MongoDB, which was released last year.

So what does Midas do? In a nutshell: applications have to hand-roll their own schema migration infrastructure or use some third-party tool, and it is difficult to migrate TBs of data without downtime (unacceptable from an SLA standpoint!). This is where Midas fills the gap. It intercepts responses at the MongoDB protocol level and upgrades or downgrades the document schema in transit. As Midas works at the protocol level, it is agnostic of language-specific MongoDB drivers (Ruby, Python, C# and Java drivers) and their versions within those languages. Further, Midas is agnostic of MongoDB configurations like standalone, replica sets and sharded environments.

For further information, you can view the slides below:
Slides: Midas - on-the-fly schema migration tool for MongoDB, by Dhaval Dalal.

It is open source, so feel free to send us a pull request at https://github.com/EqualExperts/Midas. I welcome feedback; please let us know your experiences as well. Cheers!


Leveraging Groovy's MOP to create embedded DSL

To give some context: we have created an embedded DSL for declaring the kind of transformations that the user wants applied to a MongoDB document to get the final expected document. Here is an excerpt of such a DSL.

// Sample Delta file 
use test
db.customers.add("{'city' : 'Please Set City', 'pin':  'Please Pin code' }")

//Increment age field by 1
db.customers.transform('age', "{ $add: ["$age", 1] }")

// Set context to transactions db
use transactions

// add to orders collection a nested document
db.orders.add('{"dispatch" : { "status" : "Default", "address": { "city": "someCity" }}}')

If you have used MongoDB and its shell, you will immediately recognise the resemblance to this DSL. The tool that we are building is intended to be used by developers and DevOps folks. This particular DSL is dev-centric, so it makes sense for us to stay very close to the MongoDB lingo; this will cause less friction in adoption as it has virtually no learning curve. The objective of the DSL is to allow developers to specify the transformations that they want to apply to documents in a collection and get the transformed documents.

The use keyword sets the context of the DB in which you issue commands. The transform and add commands specify the type of transformation to be applied to the customers collection and the orders collection respectively. For example, transform takes in a built-in function add, adds 1 to the age field in the document and stores the result back into the age field, as illustrated below.
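For instance, on a made-up customer document (purely illustrative), the age transform above would turn

{ "name" : "John", "age" : 30 }

into

{ "name" : "John", "age" : 31 }

leaving the rest of the document untouched.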

The tool also allows multiple files that contain such transformations; we call these files delta files. Obviously, I need to read all these files, parse them and create a data structure that will be consumed by the tool itself to make sense of the information stored in there. So the question before me was: how do I achieve this using Groovy? Before I describe how I approached it using Groovy's Meta-Object Protocol, let's take a detour and understand some concepts elaborated by Terence Parr in his book Language Implementation Patterns (thanks to my friend Aslam Khan for pointing this book out to me). In it, Terence gives an excellent big picture of how to implement this. He breaks down the whole process into a multi-stage pipeline (I have modified the diagram below to suit my context while preserving the spirit) that analyzes and manipulates the input stream. The output of this pipeline is a useful internal data structure, which he calls the Intermediate Representation (IR). It is this IR that is consumed by Generators, which generate output based on it.

                               T R A N S L A T O R
       +---------------------------------------------------------------+
       |                                                               |
       +                                                               +
         +------------+         +-------------+         +------------+
         |            |         |             |         |            |
input -->|   Reader   |--> IR ->|   Analyzer  |--> IR ->| Generator  |--> output
         |            |         |             |         |            |
         +------------+         +-------------+         +------------+
       +                                         +
       |                                         |
       +-----------------------------------------+
                   I N T E R P R E T E R

                          M U L T I S T A G E    P I P E L I N E

  • It is the responsibility of the Reader to build this data structure - the Intermediate Representation - from the input stream. It is really concerned with the Syntax of the input and ensures that it is honoured.
  • The job of the Analyzer is to perform Semantic Analysis, that is, to figure out what the input means; as Terence Parr puts it - "anything beyond Syntax is called the Semantics".
  • The job of the Generator is to consume this IR and produce output; it does so by walking the IR.
  • The Translator is really an entity composed of the above three and does the overall translation of the input, whether it is text or binary, converting it to another consumable form - say, translating Markdown to HTML.
  • Finally, the Interpreter is composed of the Reader and the Analyzer; it reads, decodes and executes instructions, much like the Java interpreter.

For me, the above was very helpful in breaking a big problem down into bite-sized problems. So, jumping straight into the implementation: the Reader reads in the delta files and hands each one to the Parser, and it is the Parser's responsibility to build the IR for consumption by the Generator. The Translator is responsible for the entire translation job and does so by delegating appropriately to the Reader and the Generator. Here is some code to help grasp this.

public class Translator<T> {
  private final Reader reader
  private final Generator<T> generator

  public Translator(Reader reader, Generator<T> generator) {
    this.reader = reader
    this.generator = generator
  }

  public T translate(final TransformType transformType, final List<File> deltaFiles) {
    def tree = reader.read(deltaFiles)
    generator.generate(transformType, tree)
  }
}
In the above code the reader produces a Tree, an Intermediate Representation (IR), which is consumed by the generator. The Generator is just a Strategy, of which I can have a variety. In our case, the concrete one is a ScalaGenerator that produces lambda expressions for each of the transforms specified in the delta files. So the Generator walks the Tree and produces a converted output.
public interface Generator<T> {
  T generate(TransformType transformType, Tree representation)
}
Here is the Reader, which consumes the delta files given to it. For each delta file, I create a new GroovyShell and tell it to evaluate the code (the delta file text). The result of the shell evaluation is an object that is passed to my parser. It is here that the parser gets nodes where GStrings have already been converted to Strings and 'use' has already been converted to the 'using' method name. Please see my earlier post where I explain in depth how this is achieved.
public class Reader {
  private GroovyShell createNewShell() {
    ...
    ...
    def configuration = new CompilerConfiguration()
    configuration.addCompilationCustomizers(secureCustomizer)
    new GroovyShell(configuration)
  }

  public Tree read(final List deltaFiles) {
    def parser = new Parser()
    deltaFiles.each { deltaFile ->
      def deltaFileAbsoluteName = deltaFile.absolutePath
      log.info("Reading $deltaFileAbsoluteName")
      def dsl = deltaFile.text
      def code = """{-> $dsl}"""
      //shell evaluates once, hence create new each time
      def shell = createNewShell()
      def delta = shell.evaluate(code, deltaFileAbsoluteName)
      try {
        use (FileExtension) {
          parser.parse(deltaFile.changeSet(), delta)
        }
      } catch (Throwable t) {
        throw new InvalidGrammar("$deltaFileAbsoluteName --> ${t.message}")
      }
      shell = null
    }
    parser.ast()
  }
}
Now the real fun starts. I know it took us quite long to get here, but in my view it was important to see the overall big picture and how every element contributes towards the final output in the scheme of things. Let us now look at the Parser.
@Slf4j
class Parser {
  private Tree tree = new Tree()

  @CompileStatic
  def getProperty(String name) {
    log.debug("property name is: $name")
    if(name == 'db') {
      return tree.currentDB()
    }
    tree.using(name)
  }

  def using(db) {
    log.info "Changing db context $db"
  }

  public Tree parse(Long changeSet, Closure closure) {
    tree.updateCS(changeSet)
    def cloned = closure.clone()
    cloned.delegate = this
    cloned.resolveStrategy = Closure.DELEGATE_FIRST
    cloned()
    tree
  }

  def ast() {
    tree
  }
}

When the Parser's parse method starts its work, the first line it encounters is use test. This invokes the using(db) method on the Parser class. But before Groovy can invoke the using method, it needs to resolve the property test. As it does not find a test property anywhere on the Parser class, it invokes the getProperty method. getProperty is a part of the Groovy MOP: if a property is not found, Groovy throws a MissingPropertyException, but as we have provided a getProperty implementation, Groovy first invokes it and gives it a chance to handle the property. In the body of getProperty, I check whether the name of the property is 'db'. If it is, the Tree is told to return the current database via tree.currentDB(). If it is not 'db', then I know that the user wants to either:

  • Create a new database (in our case - test)
  • Or use an existing database (test - in the example above)
The above behaviour is similar to what one would see on the MongoDB shell. The Tree is told to do this by invoking tree.using(db) on it. The Tree's using(String dbName) is self-explanatory, implementing the above two bullet points. Below is the Tree object.

@Slf4j
public class Tree {
  @Delegate
  private final Context ctx = new Context()
  private final Map<String, Database> databases = [:]

  @CompileStatic
  def using(String dbName) {
    def database = createOrGetDatabase(dbName)
    updateDB(database)
    database
  }

  @CompileStatic
  private Database createOrGetDatabase(String name) {
    if (databases.containsKey(name)) {
      log.info("Using database $name")
      databases[name]
    } else {
      log.info("Creating Database $name")
      databases[name] = new Database(name, ctx)
    }
  }

  @CompileStatic
  def eachWithVersionedMap(TransformType transformType, Closure closure) {
    databases.each { name, Database database ->
      database.eachWithVersionedMap(transformType, closure)
    }
  }
}
While acting as an Aggregate Root for the Intermediate Representation, the Tree also holds the Context. What is this Context all about? The Context is just an object that holds the database on which the transformations are being applied - hence it is DatabaseAware - and it also needs to know which ChangeSet the transformation belongs to - hence the Context is also ChangeSetAware. Take a look below.
class Context implements ChangeSetAware, DatabaseAware {
  private Long cs = 0
  private Database db

  @Override
  def updateCS(Long cs) {
    this.cs = cs
  }

  @Override
  def resetCS() {
    cs = 0
  }

  @Override
  def currentCS() {
    cs
  }

  @Override
  def currentDB() {
    db
  }

  @Override
  def updateDB(Database db) {
    this.db = db
  }
}

Now for the next bit: once the Database object is returned (either existing or fresh), the collection property ('customers' in our example) on it gets invoked. So we are at

db.customers.add("{'city' : 'Please Set City', 'pin':  'Please Pin code' }")
in our journey. Again, this needs to be tackled in a fashion similar to how use was; that is, I don't know beforehand which collection the user wants this transformation applied to. To make this happen, I again resort to the Groovy MOP and provide a getProperty implementation. If the collection 'customers' existed before, I return it; else I create a new Collection object. Take a look below.

@ToString
@Slf4j
class Database {
  final String name
  private final Map collections = [:]
  private final Context ctx

  @CompileStatic
  Database(String name, Context ctx) {
    this.name = name
    this.ctx = ctx
  }

  @CompileStatic
  def getProperty(String name) {
    if(collections.containsKey(name)) {
        log.debug("Using Collection with $name")
        collections[name]
    } else {
        log.info("Creating Collection $name")
        collections[name] = new Collection(name, ctx)
    }
  }

  @CompileStatic
  def eachWithVersionedMap(TransformType transformType, Closure closure) {
     def dbName = this.name
     collections.each { String name, Collection collection ->
        closure(dbName, name, collection.asVersionedMap(transformType))
     }
  }

  def String toString() {
    "${getClass().simpleName}: $name $collections"
  }
}
Now for the final bit - the add method that users write in the DSL. So, in the journey of DSL statement resolution, we are at
db.customers.add("{'city' : 'Please Set City', 'pin':  'Please Pin code' }")
Again, the Groovy MOP comes to the rescue. Such a method is not present on the Collection object, and I want to keep the DSL open to future extensions so that I can accommodate more command Verbs in the future. invokeMethod is available on any GroovyObject and you can do a lot of stuff with it - one of the use cases is to implement method interception, like AOP, or to synthesize methods that never existed on the class before. Here I am using it neither to synthesize methods nor as AOP, but for adding the series of transformations specified by the DSL writer to be applied to that collection. So, when Groovy encounters
add("{'city' : 'Please Set City', 'pin':  'Please Pin code' }")
on the customers collection, it tries to locate that method. But as this method is not present, it calls invokeMethod. Please take a look at Using invokeMethod and getProperty documentation for details.
@Slf4j
class Collection {
  final String name
  private final Map<Double, Tuple> versionedExpansions = [:] as LinkedHashMap
  private final Map<Double, Tuple> versionedContractions = [:] as LinkedHashMap
  private Double curExpansionVersion = 1
  private Double curContractionVersion = 1
  private final Context ctx

  Collection(String name, Context ctx) {
    this.name = name
    this.ctx = ctx
  }

  def invokeMethod(String name, args) {
      log.info("${this.name} invokeMethod: Operation $name with $args")

      Verb verb = asVerb(name)
      def parameters = args? args as List<String> : []
      verb.validate(parameters)
      def changeSet = ctx.currentCS()
      if (verb.isExpansion()) {
         log.info("${this.name} Adding Expansion $verb with $args to changeSet $changeSet")
         versionedExpansions[curExpansionVersion++] = new Tuple(verb, args, changeSet)
         return
      }
      if (verb.isContraction()) {
        log.info("${this.name} Adding Contraction $verb with $args to changeSet $changeSet")
        versionedContractions[curContractionVersion++] = new Tuple(verb, args, changeSet)
        return
      }
  }

  @CompileStatic
  private Verb asVerb(String token) {
    try {
       Verb.valueOf(token)
    } catch (IllegalArgumentException iae) {
      throw new InvalidGrammar("Sorry!! Midas Compiler doesn't understand $token")
    }
  }

  @CompileStatic
  def asVersionedMap(TransformType transformType) {
    Map<Double, Tuple> versionedTransforms = null
    if(transformType == EXPANSION) {
      versionedTransforms = versionedExpansions
    }
    if(transformType == CONTRACTION) {
      versionedTransforms = versionedContractions
    }
    versionedTransforms
  }
}
In invokeMethod, I convert the method name to a Verb object and tell the verb to validate its parameter count and each parameter's type. Based on how the Verb is classified, i.e. either as an Expansion or a Contraction verb, I put it into the appropriate map in the Collection object, along with the changeSet and the expansionVersion or contractionVersion, as deemed by the TransformType. It is in this invokeMethod that each Verb is analysed for its meaning, so the Collection object also doubles up as a Semantic Analyser.

So the domain model for the Tree looks like this - pretty much mirroring MongoDB's model, where a server contains several databases and each database contains several collections.

 +------+      +----------+      +------------+      +------+
 | Tree |+---->| Database |+---->| Collection |+---->| Verb |
 +------+ 1  * +----------+ 1  * +------------+ 1  * +------+

Now, to allow any client - in our case the Generator object - to walk the Tree, I have provided eachWithVersionedMap(TransformType transformType, Closure closure), which takes in a closure (allowing the client to do what it needs) and does internal iteration without breaking encapsulation, i.e. without leaking internal structures like the Database, Collection and Verb objects to the outside world.

For the curious, here is what the Verb looks like.
public enum Verb {
  @Expansion @ArgsSpecs(ArgType.JSON)
  add,

  @Expansion @ArgsSpecs({ ArgType.Identifier, ArgType.Identifier })
  copy,

  @Expansion @ArgsSpecs({ ArgType.Identifier, ArgType.String, ArgType.JSON })
  split,

  @Expansion @ArgsSpecs({ ArgType.JSON, ArgType.String, ArgType.Identifier })
  merge,

  @Expansion @ArgsSpecs({ ArgType.Identifier, ArgType.JSON })
  transform,

  @Contraction @ArgsSpecs(ArgType.JSON)
  remove;

  private Annotation getAnnotation(final Class<? extends Annotation> annotationClass) {
    try {
      return Verb.class
        .getField(name())
        .getAnnotation(annotationClass);
    } catch (NoSuchFieldException e) {
      return null;
    }
  }

  public boolean isExpansion() {
    return getAnnotation(Expansion.class) != null;
  }

  public boolean isContraction() {
    return getAnnotation(Contraction.class) != null;
  }

  public void validate(final List<String> args) {
    ArgsSpecs annotation = (ArgsSpecs) getAnnotation(ArgsSpecs.class);
    if (annotation == null) {
      throw new InvalidGrammar("You seem to have forgotten @ArgsSpecs on verb " + name());
    }
    ArgType[] types = annotation.value();
    validateArgsLength(args, types);
    validateArgsValues(args, types);
  }

  private void validateArgsValues(final List<String> args,
       final ArgType[] types) {
    for (int index = 0; index < types.length; index++) {
      types[index].validate(args.get(index));
    }
  }

  private void validateArgsLength(final List<String> args, final ArgType[] types) {
    if (types.length != args.size()) {
      final String errMsg = "Wrong number of arguments supplied for %s, Required %d, Found %d";
      throw new InvalidGrammar(String.format(errMsg,
               name(),
               types.length,
               args.size()));
    }
  }
}
Phew, that was a long post... but I hope it was helpful.


Using Groovy AST Transformations for DSL Manipulation

To give a context on this problem: on my current project, I have created an embedded DSL that uses Groovy as the host language. This DSL closely resembles the MongoDB lingo.

An example would be:

// Sample Delta file 
use test
db.customers.add("{'city' : 'Please Set City', 'pin':  'Please Pin code' }")

//Increment age field by 1
db.customers.transform('age', "{ $add: ['$age', 1] }")

// Set context to transactions db
use transactions

// add to orders collection a nested document
db.orders.add('{"dispatch" : { "status" : "Default", "address": { "line1" : "Road", "city": "City" }}}')

Like the Mongo shell, I wanted to support command arguments that can be wrapped in either a single or a double quoted String, the same as JavaScript, where you can use quotes inside a string as long as they don't match the quotes surrounding the string. When I tried to do that, I hit two problems right away:

  1. use is a DefaultGroovyMethods method for "pimping your library" that is used with Groovy Categories, quite similar to implicit conversions in Scala and extension methods in C#.
  2. Double-quoted strings for arguments to functions like add and transform are GStrings in Groovy, which support string interpolation using the $ sign - as they say in the Groovy world, "You need a $ in GString ;)". A GString evaluates the expression following the dollar sign and substitutes the result in its place in the output string. GStrings are lazily evaluated, that is, they are not evaluated until toString() is called on them or until they are passed as parameters to a function call that forces the evaluation (see the small snippet after this list). As you can see in the above example, $age will cause problems when the GString is evaluated by the parser that parses this: it won't know where to get the value of $age during GString evaluation and will throw a fit.
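
To see the interpolation in isolation (this is plain Groovy, nothing to do with the DSL itself):

def age = 30
def greeting = "next year you will be ${age + 1}"   // GString: the ${...} expression is interpolated
assert greeting.toString() == 'next year you will be 31'

def literal = 'no interpolation of $age here'        // single quotes give a plain String
assert literal.contains('$age')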

Well, I could come up with a hack: let's not use - use - and instead choose a different verb, say, using. But for the second problem, how would I stop the user from entering double quoted strings in function arguments? Putting a caveat in the documentation would mean being non-proactive and demanding a disciplined user, so this one cannot be hacked around. Both problems sounded like they needed to be tackled at the compiler level in some form or the other. Here is how I solved them, much like killing two birds with one stone!

Groovy offers a way to visit the Abstract Syntax Tree (AST) and transform it. An AST is an intermediate representation that the compiler generates during compilation; it is this AST that gets used to produce another translation or the bytecode. Groovy provides a hook in the form of the ASTTransformation interface that allows us to add to or modify this tree during a specific compiler phase. A class that implements this interface must be annotated with @GroovyASTTransformation so that Groovy knows which compile phase to run it in. As I am dealing with a global AST transformation, the visit method is called once for the sourceUnit, i.e. the actual source code, and I'll ignore the first and second entries in the ASTNode[] array. Here is my ASTTransformation code.

@Slf4j
@GroovyASTTransformation
public class StatementTransformation implements ASTTransformation {
  private def transformations = ['use' : 'using']

  @Override
  void visit(ASTNode[] nodes, SourceUnit source) {
    log.info("Source name = ${source.name}")
    ModuleNode ast = source.ast
    def blockStatement = ast.statementBlock

    blockStatement.visit(new CodeVisitorSupport() {
      void visitConstantExpression(ConstantExpression ce) {
        def name = ce.value
        if (transformations.containsKey(name)) {
          def newName = transformations[name]
          log.debug("Transform Name => $name -> $newName")
          ce.value = newName
        } else {
          log.debug("Skip Name => $name")
        }
      }

      public void visitArgumentlistExpression(ArgumentListExpression ale) {
        log.debug("Arg List $ale.expressions")
        def expressions = ale.expressions
        expressions.eachWithIndex { expr, idx ->
          if(expr.getClass() == GStringExpression) {
            log.debug("Transform GString => String ($expr.text)")
            expressions[idx] = new ConstantExpression(expr.text)
          }
        }
        log.debug("Transformed Arg List $ale.expressions")
        super.visitArgumentlistExpression(ale)
      }
    })
  }
}

In the code above:

  1. visitConstantExpression(...) gets called when a constant like use, db, customers, add, transform, or a function parameter is encountered. Based on what is defined in the transformations map, a transformation is applied by simply assigning the new name to the value field of the ConstantExpression.
  2. visitArgumentlistExpression(...) gets called when there is a function call. In my case db.customers.transform(...) and db.customers.add(...) are function calls, and the entire argument list gets passed to this method. It is here that I inspect each argument for an occurrence of a GStringExpression and convert it to a ConstantExpression.

Here is how you would then use the above transformation.

The Reader reads the DSL files; in my case, we call them delta files. For each delta file, I create a new GroovyShell and tell it to evaluate the code (the delta file text). This shell is configured with my custom AST transformer - StatementTransformation. The result of the shell evaluation is an object that is passed to my parser. The parser therefore receives nodes where GStrings have already been converted to Strings and 'use' has already been converted to the 'using' method name.

@Slf4j
public class Reader {
  private def createNewShell() {
    def secureCustomizer = new SecureASTCustomizer()
    secureCustomizer.with {
      methodDefinitionAllowed = false // user will not be able to define methods
      importsWhitelist = [] // empty whitelist means imports are disallowed
      staticImportsWhitelist = [] // same for static imports
      staticStarImportsWhitelist = []
      ....
    }

    def astCustomizer = 
      new ASTTransformationCustomizer(new StatementTransformation())
    def config = new CompilerConfiguration()
    config.addCompilationCustomizers(secureCustomizer, 
                          astCustomizer)
    new GroovyShell(config)
  }

  public Tree read(final List<File> deltas) {
    def parser = new Parser()
    deltas.each { delta ->
      def deltaName = delta.name
      def dslCode = """{-> $delta.text}"""
      //shell evaluates once, hence create new each time
      def shell = createNewShell()
      def deltaObject = shell.evaluate(dslCode, deltaName)
      try {
        parser.parse(deltaObject)
      } catch (Throwable t) {
        throw new InvalidGrammar("$deltaName --> ${t.message}")
      }
      shell = null
    }
    parser.ast()
  }
}

Here is the Parser code. In here is the using(db) method that gets called after the custom transformation is applied. An astute reader may have noticed how I intercept property access using the getProperty method (part of the Groovy MOP - Meta-Object Protocol) to change the database context.

@Slf4j
class Parser {
  private Tree tree = new Tree()
  private def dbContext

  @CompileStatic
  def getProperty(String name) {
    log.debug("property name is: $name")
    if(name == 'db') {
      return dbContext
    }
    tree.using(name)
  }

  def using(db) {
     log.info "Setting db context to ${db.toString()}"
     dbContext = db
  }

  public Tree parse(Closure closure) {
    def cloned = closure.clone()
    cloned.delegate = this
    cloned.resolveStrategy = Closure.DELEGATE_FIRST
    cloned()
    tree
  }

  def ast() {
    tree
  }
}


Fun with Scala Implicits

On this current project of mine, I wanted to provide operations like adding/removing a field, or adding/removing multiple fields, to a BSONObject (from the BSON library - org.bson.BSONObject), along with operations like toBytes on it. Instead of creating a utils class that contains all these behaviors (please see my earlier post - Kill That Util Class!), I resorted to Scala's implicits. With that, I can write more readable code like:

  import DocumentOperations._

  val document: BSONObject = ...

  // add a field with value
  document + ("field" ,  value)

  // remove a field from document
  document - "field"

  val otherDocument: BSONObject = ...
  // add all fields of otherDocument to document
  val aggregate = document ++ otherDocument

  // remove existing fields from document present in otherDocument
  val remaining = document -- otherDocument

If we compiled the previous code without importing DocumentOperations, the Scala compiler would complain that the methods +, -, ++ and -- do not exist on document. With the import and the definition of the DocumentOperations class in place, we tell Scala how to convert a BSONObject to DocumentOperations, and that is what makes the above operations possible. As this conversion takes place without any explicit syntax or method call, it is called an implicit type conversion. With it, it appears as if the original BSONObject had this behavior, while in reality a type conversion from BSONObject to DocumentOperations takes place. Using such an implicit type conversion we can augment objects with our own domain-specific syntax.

The Scala compiler automatically applies all the conversions available in the current and imported scopes. When we import DocumentOperations, the implicit def apply(document: BSONObject) inside object DocumentOperations comes into scope, and the conversion from BSONObject to DocumentOperations is effected.

import org.bson.{BasicBSONEncoder, BSONObject}
import scala.collection.JavaConverters._

class DocumentOperations private (document: BSONObject) {
  def + [T] (field: String, value: T): BSONObject = {
    document.put(field, value)
    document
  }

  def - (name: String): BSONObject = {
    document.removeField(name)
    document
  }

  def ++ (fields: BSONObject) : BSONObject = {
    document.putAll(fields)
    document
  }
  
  def -- (fields: BSONObject) : BSONObject = {
    fields.toMap.asScala.foreach { case(index, value) =>
      val name = value.asInstanceOf[String]
      document.removeField(name)
    }
    document
  }

  def toBytes: Array[Byte] = 
    DocumentOperations.ENCODER.encode(document)
}

object DocumentOperations {
  private val ENCODER = new BasicBSONEncoder()
  implicit def apply(document: BSONObject) = 
    new DocumentOperations(document)
}

Below is another example, where I got rid of a factory to create a more visually appealing DSL that explicitly shows the wiring. Before using Scala implicits, the code looked something like this:
import PipesFactory._

val requestPipe = createRequestPipe(client, server)
val responsePipe = createResponsePipe(client, server)
...
val duplexPipe = createDuplexPipe(client, server)
...

// PipesFactory.scala
object PipesFactory {
   def createRequestPipe(from: Socket, to: Socket) = {
     val fromIn = from.getInputStream
     val toOut = to.getOutputStream
     new SimplexPipe("-->", fromIn, toOut)
   }

  def createResponsePipe(from: Socket, to: Socket) = {
    val toIn = to.getInputStream
    val fromOut = from.getOutputStream
    new SimplexPipe("<--", toIn, fromOut)
  }

  def createDuplexPipe(client: Socket, server: Socket) = {
     val requestPipe = createRequestPipe(client, server)
     val responsePipe = createResponsePipe(client, server)
     new DuplexPipe(requestPipe, responsePipe)
  }
}

After refactoring to Scala implicits and renaming PipesFactory to SocketConnector, the code looks like this:
import SocketConnector._

val requestPipe = client ==> server
val responsePipe = client <== server
...
val duplexPipe = client <==> server
...
//SocketConnector.scala
import java.net.Socket

class SocketConnector private (source: Socket) {

  def <==> (target: Socket) = 
    new DuplexPipe(==> (target), <== (target))
 
  def ==> (target: Socket) = {
    val srcIn = source.getInputStream
    val tgtOut = target.getOutputStream
    new SimplexPipe("==>", srcIn, tgtOut)
  }

  def <== (target: Socket) = {
    val tgtIn = target.getInputStream
    val srcOut = source.getOutputStream
    new SimplexPipe("<==", tgtIn, srcOut)
  }
}

object SocketConnector {
  implicit def apply(source: Socket) = new SocketConnector(source)
}
So that was some fun with Scala's Implicit Type conversion.


Global Day of CodeRetreat 2013

Last Saturday was the Global Day of CodeRetreat and I really enjoyed facilitating it. This was the 4th CodeRetreat that I have facilitated. Thanks to Corey Haines for such a wonderful idea, and to the CodeRetreat team for organizing it (Jim Hurne, Alissa Conaty, Adi Bolboaca, and Martin Klose).

All in all, there were 25 participants, and we took on the usual Conway's Game of Life problem. This time my agenda was to nudge people gently towards functional programming. The first session was pretty much a warm-up, and the second was more warm-up with TDD, with folks wrapping their heads around the problem.

Third session onwards, I started introducing constraints:

  • Session 3 - No Boolean Operators - no conjunction (and), disjunction (or), or negation (not) operators
  • Session 4 - No Loops - no 'for', 'while', 'do'...'while', or 'repeat'...'until' constructs (see the sketch after this list)
  • Session 5 - No Instance Variables
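
As a tiny illustration of what the No Loops constraint pushes people towards, here is a quick Groovy sketch (purely my own, not any participant's session code) that counts a cell's live neighbours with collection combinators instead of loops:

// Sketch: count live neighbours of cell (x, y) without for/while loops.
// 'alive' is a Set of [x, y] coordinate pairs.
def liveNeighbours(Set alive, int x, int y) {
  def deltas = [-1, 0, 1]
  deltas.collectMany { dx -> deltas.collect { dy -> [x + dx, y + dy] } }
        .findAll { it != [x, y] }
        .count { alive.contains(it) }
}

assert liveNeighbours([[0, 0], [0, 1], [1, 1]] as Set, 0, 0) == 2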

People really enjoyed it. At the end of each session, they shared what they observed and learnt. Here are the links to some pictures. Thanks to EqualExperts for sponsoring the event in Pune.
