A software blog by Joseph Junker
JavaScript and TypeScript developers often look down on the original Gang of Four design patterns as irrelevant. Some (though not all) of the patterns are rendered unnecessary by the flexibility that JS provides over C++, and TS preserves most of that flexibility. This post focuses on one of the patterns which remains relevant: the Visitor pattern. The Visitor pattern enables a beneficial mixing of functional and object-oriented programming and has a deep and rigorous connection to type theory. Its idiomatic expression in JS is much more lightweight than in C++, and in fact is already common (under different names) in FP-oriented JS and TS codebases. This post will explore these topics.
Code samples for this post are given in TypeScript. The same code is idiomatic and relevant to JavaScript; the Visitor pattern does not exist solely to work around the constraints of a type system. The abbreviation “ADT” will be used to refer to algebraic data types, not abstract data types. The reader is expected to have some familiarity with ADTs, and will benefit from (optional) knowledge of Church and Scott encodings. Finally, the concepts of operational, representational, and data variant extensibility will be discussed. I cover the relevant information on extensibility here, but interested readers may further acquaint themselves via my prior posts on the subject, or via the paper “On Understanding Data Abstraction, Revisited”.
In addition to William Cook’s work cited above, this post draws heavily from the paper “The VISITOR Pattern as a Reusable, Generic, Type-Safe Component”.
There is an (informal) duality between functional and object-oriented approaches to data and operations, and this duality continues to exist in the absence of side-effects, mutable state, and/or inheritance.
When programming in a functional style, data is generally unencapsulated. Exposing the concrete structure of the data enables developers to continually write new functions which operate on that data in a modular fashion. Take, for example, a binary tree with a signature such as type ADTTree<A> = ADTNode<A> | ADTLeaf<A>. The components of the sum type, ADTNode<A> and ADTLeaf<A>, are referred to as “data variants”. We can modularly write functions which operate on a tree; adding a new tree-consuming function doesn’t affect any of the existing tree-consuming functions. However, when programming in this style we cannot add new variants to a data type without potentially breaking all existing code which operates on that type. Were we to modify ADTTree<A> to have the definition ADTNode<A> | ADTLeaf<A> | ADTList<A>, we would also have to modify potentially every function in our codebase which consumes trees to handle the additional case. Functional programming is thus biased toward making it easy to write new functions while making it hard to add new data variants. To use some definitions from the literature, algebraic data types enable operational extensibility (the addition of new operations on data) but prevent data variant extensibility (the addition of new data variants).
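As a quick sketch of this trade-off (countLeaves is a hypothetical function, and the type definitions mirror the ADT tree used later in this post):

type ADTTree<A> = ADTNode<A> | ADTLeaf<A>;
type ADTNode<A> = { tag: "ADTNode", left: ADTTree<A>, right: ADTTree<A> };
type ADTLeaf<A> = { tag: "ADTLeaf", value: A };

// A new tree-consuming function can be added without touching existing code.
function countLeaves<A>(tree: ADTTree<A>): number {
  if (tree.tag === "ADTNode") {
    return countLeaves(tree.left) + countLeaves(tree.right);
  }
  return 1;
}

// But adding an ADTList<A> variant to ADTTree<A> would force countLeaves
// (and every other tree-consuming function) to be updated to handle it.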
On the flip side of the coin, the purest essence of OOP is encapsulation. Data is defined via an interface, and objects conform to this interface. When observing the tenets of OOP, this encapsulation should not be broken by the use of features like instanceof or other manual inspection. When this encapsulation is respected, the concrete representation of data within individual objects is invisible to the rest of the program. Importantly, this concrete representation may not exist at all; when operating at their full power, objects can completely abandon the use of concrete data and can instead represent data purely as behavior. For example, take this interface for immutable sets of numbers:
interface NumberSet {
  has(n: number): boolean;
  union(s: NumberSet): NumberSet;
}
We can write implementations of NumberSet which use concrete data internally:
class ArraySet implements NumberSet {
  #arr: number[];

  public constructor(arr: number[]) {
    this.#arr = arr;
  }

  public has(n: number): boolean {
    return this.#arr.includes(n);
  }

  public union(s: NumberSet): NumberSet {
    return new SetUnion(this, s);
  }
}

class SetUnion implements NumberSet {
  #left: NumberSet;
  #right: NumberSet;

  public constructor(left: NumberSet, right: NumberSet) {
    this.#left = left;
    this.#right = right;
  }

  public has(n: number) {
    return this.#left.has(n) || this.#right.has(n);
  }

  public union(s: NumberSet): NumberSet {
    return new SetUnion(this, s);
  }
}
We can also write a NumberSet whose internal implementation contains no data:
class Odd implements NumberSet {
  public constructor() {}

  public has(n: number): boolean {
    return n % 2 !== 0;
  }

  public union(s: NumberSet): NumberSet {
    return new SetUnion(this, s);
  }
}
In the pure OOP world there is not a primitive notion of “data variants”, because different implementations of NumberSet cannot (should not) be distinguished after they are constructed. Instead, there are classes which allow the construction of different “representations” of the NumberSet type. We can write new representations without breaking old ones; were we to write an Intersection class we would not need to modify the classes we defined above. However, if we were to add a new intersect method to the NumberSet interface we would break all existing implementations. Using more formal terms, encapsulated objects provide representational extensibility and prevent operational extensibility.
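For instance, here is a sketch of what that hypothetical Intersection class could look like (it is not defined elsewhere in this post); it can be added without touching ArraySet, SetUnion, or Odd:

class Intersection implements NumberSet {
  #left: NumberSet;
  #right: NumberSet;

  public constructor(left: NumberSet, right: NumberSet) {
    this.#left = left;
    this.#right = right;
  }

  // A number is in the intersection only if both sets contain it.
  public has(n: number): boolean {
    return this.#left.has(n) && this.#right.has(n);
  }

  public union(s: NumberSet): NumberSet {
    return new SetUnion(this, s);
  }
}

None of the existing classes need to change; only new code is added.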
Put together, and glossing over the (subtle but important) distinction between “data variants” and “data representation”, this is roughly the situation: functional programming makes it easy to define new functions, and hard to add new data. Object oriented programming makes it easy to define new data, and hard to add new functions.
The first crux of this post is that this restriction is not always desirable. Sometimes we want to represent data variants in OOP, making it easy to add new functions. Sometimes we want to encapsulate data in FP, making it easy to add new data representations. The second crux of this post is that, in the absence of additional language features, the technique for adding data variants to OOP and the technique for adding data encapsulation to FP are the same technique.
Let’s say we want to implement a Tree type in an object-oriented codebase. A naive implementation might look like this:
interface Tree<T> {};

class Node<T> implements Tree<T> {
  #left: Tree<T>;
  #right: Tree<T>;

  public constructor(left: Tree<T>, right: Tree<T>) {
    this.#left = left;
    this.#right = right;
  }
}

class Leaf<T> implements Tree<T> {
  #value: T;

  public constructor(value: T) {
    this.#value = value;
  }
}
The use of an empty interface is problematic in TypeScript, but we will amend the situation shortly. More importantly, the classes we’ve defined are completely useless: they do not provide any means of accessing their contents. Even if they were to expose their contents (a questionable decision if we’ve decided to do OOP), consumers would have to implement involved logic or lean on advanced TypeScript features in order to know which fields are available to read in a type-safe way. TypeScript’s flexibility makes this possible, if inconvenient; other OOP languages have no way to resolve the situation without unsafe downcasts. We could make our classes usable by adding the operations we want to perform on them as members of the Tree interface, but this brings us back to the problems listed in the previous section. If we want operational extensibility we will have to take a different tack.
This is where the Visitor pattern comes in. The pattern lets us provide an ADT-like view of objects, without breaking their encapsulation. We can add a traditional implementation of the pattern in three steps.
First, we declare a Visitor interface, which defines the data variants which we want to simulate:
interface TreeVisitor<TContents, TResult> {
  visitNode(left: TResult, right: TResult): TResult;
  visitLeaf(value: TContents): TResult;
}
We will be describing trees in terms of nodes and leaves, and so we have added a method for each. (The signature of these methods will become more clear shortly.)
Next, we add an accept method to our Tree interface. This method will take a Visitor, use it to compute a result, and then return the result of the Visitor:
interface Tree<T> {
  accept<S>(visitor: TreeVisitor<T, S>): S;
}
Finally, we implement the accept method inside our Leaf and Node classes:
class Node<T> implements Tree<T> {
  #left: Tree<T>;
  #right: Tree<T>;

  public constructor(left: Tree<T>, right: Tree<T>) {
    this.#left = left;
    this.#right = right;
  }

  public accept<S>(visitor: TreeVisitor<T, S>): S {
    return visitor.visitNode(
      this.#left.accept(visitor),
      this.#right.accept(visitor));
  }
}

class Leaf<T> implements Tree<T> {
  #value: T;

  public constructor(value: T) {
    this.#value = value;
  }

  public accept<S>(visitor: TreeVisitor<T, S>): S {
    return visitor.visitLeaf(this.#value);
  }
}
The Node and Leaf classes choose to present themselves to the visitor as nodes and leaves, and do so by calling the corresponding visitor methods. Here, they provide the data which they contain to the visitor.
Now we can write algorithms on trees by making instances of our TreeVisitor interface. These implementations contain the same logic as the algorithms we’d write on ADTs, but instead of using conditional logic or pattern matching to differentiate cases we place the logic for each branch into different methods. For example, here’s a TreeVisitor that sums all of the numbers in a tree:
class SumTree implements TreeVisitor<number, number> {
  visitNode(left: number, right: number): number {
    return left + right;
  }

  visitLeaf(value: number): number {
    return value;
  }
}
Compare this to the corresponding implementation on ADTs:
type ADTTree<T> = ADTNode<T> | ADTLeaf<T>;

type ADTNode<T> = {
  tag: "ADTNode",
  left: ADTTree<T>,
  right: ADTTree<T>
}

type ADTLeaf<T> = {
  tag: "ADTLeaf",
  value: T
}

function sumTree(tree: ADTTree<number>): number {
  if (tree.tag === "ADTNode") {
    return sumTree(tree.left) + sumTree(tree.right);
  }
  return tree.value;
}
In both versions we have one location which declares the logic for a node, and a separate location which declares the logic for a leaf. Visitors separate these locations by putting them in separate classes, while functions operating on ADTs separate them by putting them in different branches of a conditional.
As a second example of an operation, here’s a TreeVisitor which detects whether any strings in a tree are empty:
class DetectEmpty implements TreeVisitor<string, boolean> {
  visitNode(left: boolean, right: boolean): boolean {
    return left || right;
  }

  visitLeaf(value: string): boolean {
    return value.length === 0;
  }
}
Here’s how these visitors are used:
const numTree = new Node(
  new Leaf(1),
  new Node(
    new Leaf(2),
    new Leaf(3)));

const six = numTree.accept(new SumTree());

const strTree = new Node(
  new Leaf(""),
  new Leaf("asdf"));

const tru = strTree.accept(new DetectEmpty());
There are a few important points regarding this approach. The handling of recursion is a mixed blessing: the two operations defined above both inherently involve recursion over the entire tree, so it was convenient not to have to make recursive calls manually inside the visitor implementations. This does not apply to every operation, though. Say we wanted to optimize our DetectEmpty operation to stop executing as soon as an empty leaf is found. In that case the visitor itself needs to be in control of the recursion, so that it can decide at each step whether to continue recursing.
When accept methods encapsulate recursion the pattern is sometimes referred to as “internal visitors”. When recursion is done explicitly in the visitors the pattern is referred to as “external visitors”. We can modify our implementation to use external visitors instead:
interface ExternalVisitor<TContents, TResult> {
  visitNode(left: Tree<TContents>, right: Tree<TContents>): TResult;
  visitLeaf(value: TContents): TResult;
}

interface Tree<T> {
  acceptExternal<S>(visitor: ExternalVisitor<T, S>): S;
}

class Node<T> implements Tree<T> {
  #left: Tree<T>;
  #right: Tree<T>;

  public constructor(left: Tree<T>, right: Tree<T>) {
    this.#left = left;
    this.#right = right;
  }

  public acceptExternal<S>(visitor: ExternalVisitor<T, S>): S {
    return visitor.visitNode(this.#left, this.#right);
  }
}

class Leaf<T> implements Tree<T> {
  #value: T;

  public constructor(value: T) {
    this.#value = value;
  }

  public acceptExternal<S>(visitor: ExternalVisitor<T, S>): S {
    return visitor.visitLeaf(this.#value);
  }
}

class DetectEmptyOptimized implements ExternalVisitor<string, boolean> {
  visitNode(left: Tree<string>, right: Tree<string>): boolean {
    return left.acceptExternal(this) || right.acceptExternal(this);
  }

  visitLeaf(value: string): boolean {
    return value === "";
  }
}

const strTree = new Node(
  new Leaf(""),
  new Leaf("asdf"));

const hasEmpty = strTree.acceptExternal(new DetectEmptyOptimized());
A few notes on this implementation: visitNode now receives the left and right subtrees themselves, as Tree<T> values, rather than already-computed TResult values, and acceptExternal simply hands the visitor the data it holds without recursing. Recursion instead happens inside the visitor: DetectEmptyOptimized calls acceptExternal on each subtree itself, and because || short-circuits, it stops traversing as soon as an empty string is found. visitLeaf is unchanged, since leaves contain no subtrees to recurse into.
So now we have two verbose and non-idiomatic implementations of visitors. Next, let’s see how we can turn them into lightweight, idiomatic TypeScript.
I used classes to create visitor objects above because this keeps with the traditional presentation of visitors in OO languages. We don’t need to go to this trouble in TypeScript, though; we can much more easily use anonymous inline objects containing inline functions. TypeScript’s structural typing allows us to do so without modifying the interfaces we declared previously. Here’s what that looks like:
const sum = numTree.accept<number>({
  visitNode: (x, y) => x + y,
  visitLeaf: x => x
});

const containsEmpty = strTree.acceptExternal<boolean>({
  visitNode: function(left, right) {
    return left.acceptExternal(this) || right.acceptExternal(this)
  },
  visitLeaf: x => x === ""
});

// Alternatively, for those who prefer to avoid using `this`:
const detectEmpty: ExternalVisitor<string, boolean> = {
  visitNode: (left, right) =>
    left.acceptExternal(detectEmpty) || right.acceptExternal(detectEmpty),
  visitLeaf: x => x === ""
};

const containsEmpty2 = strTree.acceptExternal(detectEmpty)
This is significantly nicer than declaring multiple classes. The internal visitor (demonstrated by sum) is the lightest weight, as it does not have to deal with recursion. The external visitors are a bit heavier, due to the inherent complexity of handling recursion: we either have to use this, which is slightly unidiomatic in functional code and in anonymous objects, or we have to declare the visitor out-of-band, as detectEmpty does. I find this slightly nicer here, but using it requires care: the accept methods must call the visitor’s methods through the visitor object rather than destructuring them off of it, or the this binding is lost, as sketched below.
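To illustrate that pitfall, here is a hypothetical, deliberately broken class (CarelessNode is not part of this post’s code):

class CarelessNode<T> {
  #left: Tree<T>;
  #right: Tree<T>;

  public constructor(left: Tree<T>, right: Tree<T>) {
    this.#left = left;
    this.#right = right;
  }

  public acceptExternal<S>(visitor: ExternalVisitor<T, S>): S {
    // Destructuring detaches visitNode from the visitor object, so `this`
    // inside the `this`-based anonymous visitor above no longer refers to
    // the visitor, and left.acceptExternal(this) fails at runtime.
    const { visitNode } = visitor;
    return visitNode(this.#left, this.#right);
  }
}

Calling the method through the visitor, as Node does, keeps the binding intact.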
Next, let’s improve these names a bit. Using the “visit” prefix everywhere is unnecessary, and accept also obscures the logic behind the pattern. Here’s how I would declare the Tree interface instead:
interface Tree<TContents> {
  // Previously "accept"
  reduce: <TResult>(
    { node, leaf }: TreeReduceArgs<TContents, TResult>
  ) => TResult

  // Previously "acceptExternal"
  match: <TResult>(
    { node, leaf }: TreeMatchArgs<TContents, TResult>
  ) => TResult,
}

interface TreeReduceArgs<TContents, TResult> {
  node: (left: TResult, right: TResult) => TResult,
  leaf: (value: TContents) => TResult
}

interface TreeMatchArgs<TContents, TResult> {
  node: (left: Tree<TContents>, right: Tree<TContents>) => TResult,
  leaf: (value: TContents) => TResult
}
We declare TreeReduceArgs and TreeMatchArgs separately from the Tree interface to aid in readability and improve error messages.
Our implementations of Node and Leaf are the same as before, but with the method names changed; they are shown below for completeness.
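Here is a sketch of those implementations under the new names, assuming both reduce and match now live on the same classes:

class Node<T> implements Tree<T> {
  #left: Tree<T>;
  #right: Tree<T>;

  public constructor(left: Tree<T>, right: Tree<T>) {
    this.#left = left;
    this.#right = right;
  }

  // Internal visitor: recursion happens here; the caller only sees results.
  public reduce<TResult>(options: TreeReduceArgs<T, TResult>): TResult {
    return options.node(
      this.#left.reduce(options),
      this.#right.reduce(options));
  }

  // External visitor: hand the subtrees over and let the caller recurse.
  public match<TResult>(options: TreeMatchArgs<T, TResult>): TResult {
    return options.node(this.#left, this.#right);
  }
}

class Leaf<T> implements Tree<T> {
  #value: T;

  public constructor(value: T) {
    this.#value = value;
  }

  public reduce<TResult>(options: TreeReduceArgs<T, TResult>): TResult {
    return options.leaf(this.#value);
  }

  public match<TResult>(options: TreeMatchArgs<T, TResult>): TResult {
    return options.leaf(this.#value);
  }
}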
Here’s an example of how we use reduce and match:
const sum = numTree.reduce<number>({
  node: (x, y) => x + y,
  leaf: x => x
});

const isEmpty = strTree.match<boolean>({
  node: function(left, right) {
    return left.match(this) || right.match(this);
  },
  leaf: x => x === ""
});
These match and reduce functions are relatively common in TypeScript code which leans towards functional programming. The first simulates pattern matching, while the second performs a fold. These are two very fundamental functional programming constructs.
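To make the correspondence to a fold concrete, here is a sketch of the same fold written against the ADT form of the tree from earlier (foldTree is a hypothetical helper, not part of the original code):

// Folds over the ADT tree; reduce on our Tree objects plays the same role.
function foldTree<T, R>(
  tree: ADTTree<T>,
  node: (left: R, right: R) => R,
  leaf: (value: T) => R
): R {
  return tree.tag === "ADTNode"
    ? node(
        foldTree(tree.left, node, leaf),
        foldTree(tree.right, node, leaf))
    : leaf(tree.value);
}

// numTree.reduce({ node: (x, y) => x + y, leaf: x => x }) corresponds to
// calling foldTree on the ADT form of the same tree with those two functions.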
To be clear, this is still the Visitor pattern. We’ve tweaked the names and leaned on some nice JavaScript features which Java lacks, but anything we can write in the Java-ish version we can trivially write in the JS-ish version and vice versa. Most importantly, the extensibility characteristics of the two approaches are the same. We’ve written operations in a functional style by declaring logic based on data variants, without giving up encapsulation.[1]
It may not be immediately clear why it’s important to maintain encapsulation here. After all, we have two variants and a class for each, which seems like a direct translation of the FP approach. It’s important to remember, though, that instances of these classes should be completely opaque to the operations that are written on them. There’s no reason we have to stick to “one variant, one class”.
As an exercise to demonstrate why encapsulation is still valuable even in the presence of data variants, say we want to construct perfectly balanced trees whose leaves all contain the same contents. Naively, we could do so with a function written solely in terms of our existing Node and Leaf types:
function fillTree<T>(value: T, depth: number): Tree<T> {
  return depth === 0
    ? new Leaf(value)
    : new Node(
        fillTree(value, depth - 1),
        fillTree(value, depth - 1));
}
This general approach is idiomatic when simulating ADTs in TypeScript. We have a class (or function) for making data representing each variant, and we call them to produce a concrete data structure.
We can do much better, though. Here’s a class which takes advantage of OO-style encapsulation to represent a filled tree:
class FillTree<TContents> implements Tree<TContents> {
  readonly #value: TContents;
  readonly #depth: number;

  constructor(value: TContents, depth: number) {
    this.#value = value;
    this.#depth = depth;
  }

  public match<TResult>(
      options: TreeMatchArgs<TContents, TResult>): TResult {
    if (this.#depth === 0) {
      return options.leaf(this.#value);
    }
    return options.node(
      new FillTree(this.#value, this.#depth - 1),
      new FillTree(this.#value, this.#depth - 1))
  }

  public reduce<TResult>(
      options: TreeReduceArgs<TContents, TResult>): TResult {
    const value = this.#value;

    function recursive(depth: number): TResult {
      return depth === 0
        ? options.leaf(value)
        : options.node(
            recursive(depth - 1),
            recursive(depth - 1));
    }

    return recursive(this.#depth);
  }
}

const anotherTree = new Node(
  new Leaf("foo"),
  new FillTree("bar", 3));
FillTree gives us a third way to create trees, which we can intermingle with calls to the Node and Leaf constructors. When consuming such trees using match and reduce our operations will not be aware that anything funny is going on, because match and reduce still expose the same data variants at their interface.
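For example, a leaf-counting reduce (a hypothetical usage sketch) treats the mixed tree exactly like one built entirely from Node and Leaf:

// Counts leaves without knowing (or caring) how the tree was constructed.
const leafCount = anotherTree.reduce<number>({
  node: (left, right) => left + right,
  leaf: () => 1
});
// 1 leaf from new Leaf("foo") plus 2 ** 3 = 8 leaves from FillTree("bar", 3),
// so leafCount is 9.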
Internally, though, FillTree is a completely different beast. Like the Odd class in the introduction, the implementation of FillTree does not contain anything resembling a tree data structure. This allows it to perform much better than our naive solution: constructing a FillTree stores only a value and a depth, no matter how large the tree it represents, while fillTree must allocate 2 ** depth leaf objects and 2 ** depth - 1 node objects up front.
This example is somewhat contrived, but hopefully it serves to illustrate that there is no need to have a 1:1 relationship between classes and data variants. A single class can simulate having multiple variants, and multiple classes can simulate the same variants.
If you’ve read my previous post on The Church and Scott encodings of algebraic data types then the surprise was likely spoiled for you several paragraphs ago. The fields we place on the visitor’s signature are the variants of a simulated ADT. The match function is the Scott encoding of this ADT, and the reduce function is the corresponding Church encoding. In that post I mention that the Church encoding of a non-recursive ADT is the same as the Scott encoding of that ADT. This fact is mirrored by the equivalence of internal and external visitors on non-recursive data variants.
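To make that claim concrete, here is a sketch (these type aliases are not from the original post) of the two encodings written as plain function types; ScottTree mirrors the signature of match, and ChurchTree mirrors the signature of reduce:

// Scott encoding: recursive positions are the data type itself.
type ScottTree<A> = <R>(
  node: (left: ScottTree<A>, right: ScottTree<A>) => R,
  leaf: (value: A) => R
) => R;

// Church encoding: recursive positions are replaced by the result type.
type ChurchTree<A> = <R>(
  node: (left: R, right: R) => R,
  leaf: (value: A) => R
) => R;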
Taking one step further: the Visitor pattern, Church encodings, and Scott encodings are not limited to bringing FP-style data variants into OOP. They also bring OOP-style encapsulation into the lambda calculus. The mechanisms by which Church encodings can optimize list operations, for example, are possible because the implementations of these encodings have the same level of encapsulation as objects in OOP. Functions are a fundamental unit of abstraction, after all. When functions are used to represent data, implementing multiple functions with the same type signature is equivalent in power to implementing multiple objects with the same interface. The ergonomics may vary, but the ability to extend code without breaking existing implementations remains the same.
I hope this post has convinced you that the Visitor pattern is useful in TypeScript code, regardless of whether you prefer an FP or OOP style. If not, I hope you at least found the concepts to be an enjoyable curiosity. I’ve wanted to write about the Visitor pattern for a long time now, but I felt that I first had to write my previous posts on Church and Scott encodings to do the topic full justice. If you found this post worthwhile you may like them as well. The intersection of FP and OOP is a powerful one, and I would love to see more developers put it to use.
All code samples in this post are released into the public domain under a Creative Commons 0 license. Code samples and full license text may be found here.
[1] As an added bonus, I find this pseudo pattern matching interface more readable than the conditional logic in the ADT-based sumTree function.