Tradeoffs of highly-expressive types

Or, hypertyping considered sometimes harmful, sometimes beneficial, it's complicated really

I’ve had three different things on my mind recently:

  1. I read the excellent blog post “Hyper-Typing”.
  2. I watched a proponent of dynamically typed languages claim that state machines can’t be represented in statically typed languages.
  3. I’ve noticed a trend of posts on the r/typescript subreddit, in which posters make heavy use of advanced TypeScript features to enforce invariants that could be handled more easily with a small amount of runtime logic.

I link to the hypertyping post because I think it is well written and well argued, and I agree with much (but not all) of it. It is very short (at least by the standards of my own blog) so I recommend reading it before you continue. I won’t be linking to the other two because I’m not seeking conflict with any specific individuals. Rather, I’m aiming to comment on a question which I think is implied by all three of these: How restrictive should one make the types in their programs, and what factors influence this?


I am going to set up a strawman and immediately knock it down. This is generally not a productive way to argue; however, it will give me a chance to tug on what I think is a common unstated assumption in static-vs-dynamic typing wars. Here is the position which I plan to argue against:

One cannot reasonably statically type a state machine. It is possible to use union types to specify the set of possible states and the set of state transitions, but ruling out invalid transitions through static types alone is either impossible or impractically difficult.

I’m going to set aside the host of Haskell libraries which do exactly this. I believe the original author considers these libraries impractical, and I don’t know Haskell well enough to attempt a rebuttal. I have a decent grasp of TypeScript, however, and I will say that in TypeScript I would absolutely use a union type to represent a state machine.
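To sketch what I mean (the traffic-light machine and its names are my own invention, not taken from the original discussion), a discriminated union enumerates the states, and a transition function enumerates the legal moves between them:

```typescript
// Each state is a variant of one union; the `kind` tag discriminates.
type State =
  | { kind: "red" }
  | { kind: "green" }
  | { kind: "yellow" };

// The only transitions that exist are the ones written here. An invalid
// transition isn't "rejected" so much as there being no code path that
// produces it.
function next(state: State): State {
  switch (state.kind) {
    case "red":
      return { kind: "green" };
    case "green":
      return { kind: "yellow" };
    case "yellow":
      return { kind: "red" };
  }
}
```

If the union later gains a new state, the switch stops being exhaustive, and with strict compiler settings TypeScript flags `next` accordingly, which is much of the practical value.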

Let’s take this sort of argument to the extreme.

One cannot statically type a function which returns a prime number when given an odd input and a composite number when given an even input.

How would you approach this problem?

For the odd/even distinction I might attempt to represent natural numbers in unary form, using zero and successor constructors: zero is even, the successor of an even number is odd, and the successor of an odd number is even. For the prime/composite distinction, one might represent natural numbers as arrays of their prime factors, and state that a prime number is one where this array has a length of 1…
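Just to show the flavor (and to be clear, this is a toy I would not ship), the unary encoding of parity can be written directly in TypeScript’s type language:

```typescript
// Unary naturals at the type level: a distinguished Zero, and a
// successor wrapper that holds its predecessor.
type Zero = { isZero: true };
type Succ<N> = { pred: N };

// Parity flips with each successor; the two conditional types recurse
// into each other until they hit Zero.
type IsEven<N> = N extends Zero
  ? true
  : N extends Succ<infer M>
    ? IsOdd<M>
    : never;
type IsOdd<N> = N extends Zero
  ? false
  : N extends Succ<infer M>
    ? IsEven<M>
    : never;

// Three successors of Zero, so the compiler resolves this to `true`.
type Three = Succ<Succ<Succ<Zero>>>;
type ThreeIsOdd = IsOdd<Three>;
```

All of this evaluates at compile time; nothing here exists at runtime.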

I have a better solution, though. The type of this function is (input: number) => number. While maximally typing the original problem might be a fun puzzle, its challenge comes from surmounting the fact that TypeScript is really, really not meant to do this. The fact that we might be able to encode a notion of divisibility into its type system does not mean that doing so would be practical. Even if we could do it, I don’t believe it would be desirable.

Looking outside the realm of TypeScript, Lean is a dependently typed language which does have the facilities to express arbitrary properties in type signatures. You absolutely could write the full type signature in Lean if you wanted. It’s not idiomatic to give Lean functions maximally restrictive types, though. A Lean developer would likely give this function a type from numbers to numbers, and then use Lean’s proof facilities to write a proof of the relevant properties of this function. The function and proof exist separately, and the function may be used without its associated proof.
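For a sense of what this separation looks like, here is a minimal Lean sketch using a much simpler property than primality (double and double_even are my own toy names, not from any real codebase):

```lean
-- The function gets a plain type from numbers to numbers…
def double (n : Nat) : Nat := 2 * n

-- …and the property lives in a separate theorem. Callers may use
-- `double` freely; the proof is only relevant where the property matters.
theorem double_even (n : Nat) : ∃ k, double n = 2 * k :=
  ⟨n, rfl⟩
```

The theorem can be strengthened or refactored later without touching the function’s signature or any of its call sites.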

There may be multiple reasons why one would make this design decision in Lean, but I’d like to focus on one which is strongly applicable to TypeScript: functions with expressive types apply pressure to the code which provides their inputs, in order to provide guarantees to the code which uses their outputs. The more detailed the input types are, the more the function constrains its callers, and the less composable the function is. In some cases this tradeoff is an unmitigated good; a function which squares a number shouldn’t be composed with a function that provides a string. When you get into a more complex type like the oddness of an integer or the dependency graph of a state machine it’s less clear that imposing these constraints is worth the cost. This is even assuming that the downstream guarantees provided by our function’s return type are beneficial and necessary. For instance, as excessive as our prime number example already is, the situation would be even worse if the next caller simply discarded the prime factors encoding and converted the result back into a float. The effort needed to determine and type a correct-by-construction encoding is wasted if the resultant encoding isn’t usable by downstream code.

A middle ground does exist here. If a developer wanted to track these sorts of properties within their program they could do so using runtime parsing and tagging. Consider the following:

type Even = {
  tag: "even",
  value: number
}

type Odd = {
  tag: "odd",
  value: number
}

function makeEven(n: number): Even | null {
  return n % 2 === 0 ? {
     tag: "even",
     value: n
  } : null;
}

function makeOdd(n: number): Odd | null {
  // In JS, n % 2 is -1 for negative odd integers, so compare the absolute value
  return Math.abs(n % 2) === 1 ? {
     tag: "odd",
     value: n
  } : null;
}

Here we’re using a combination of static and dynamic behavior. The actual parity checks occur at runtime but we use the information we’ve gained to produce statically typed values, enabling a hybrid approach. This mindset should be familiar to those who have read Alexis King’s excellent Parse, Don’t Validate.
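The payoff of this style is that downstream code can demand an already-parsed value and skip re-checking. Restating makeEven so the snippet stands alone (halve is my own example name):

```typescript
type Even = {
  tag: "even",
  value: number
}

function makeEven(n: number): Even | null {
  return n % 2 === 0 ? { tag: "even", value: n } : null;
}

// halve can assume evenness: the runtime check already happened,
// exactly once, inside makeEven.
function halve(e: Even): number {
  return e.value / 2;
}

const parsed = makeEven(10);
if (parsed !== null) {
  // Inside this branch, parsed is narrowed from Even | null to Even.
  console.log(halve(parsed)); // 5
}
```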

My point in all this is: the choice between static and dynamic checks is an engineering decision. I personally almost always prefer static types, and I love that TypeScript’s type system is sufficiently expressive to use hybrid solutions like the one above. This does not mean that stricter types are universally better, more “pure”, or even necessarily safer than laxer ones. Complexity comes at a cost, including an increased likelihood of bugs within complex code. Type-level code is still code. When writing application code, if your type-level logic is hard for you to trace after not looking at it for an hour, I recommend leaning into a hybrid static/dynamic solution.

I’d be remiss not to also consider libraries, given that they are what the hypertyping post focuses on. Application code and library code face significantly different pressures and requirements. The complexity threshold for library code is significantly higher than for application code; after all, the purpose of libraries is to concentrate and encapsulate the challenges inherent in solving a specific, difficult problem exceptionally well. Should this extend to type-level complexity?

Well… sometimes? The danger of “hypertyping” is a worsened developer experience when using a library in unexpected ways. This can be due to users struggling with the reduced composability of a library’s exceedingly strict types. Problems also arise when developers wish to specify explicit types in their application code while working with libraries which rely heavily on type inference, or when developers have to debug large and cryptic type errors when they misuse a library’s components. All of these have been covered before. What I think needs to be considered, though, is that these kinds of problems are not unique to type-level code.

High quality libraries are harder to write than high quality application code, because they solve a harder problem. Optimally a library provides an embedded domain-specific language; a way of fully describing a problem space using high-level, safe, composable components, without leaking implementation details or enforcing unnecessary limitations. Most libraries fall far short of this ideal. We take it for granted when libraries have non-compositional behavior, poorly documented errors, messy stack traces that require digging through the code of the library itself, and general friction when one strays too far from the beaten path. These are frequently written off as an inherent cost of using someone else’s code.

Ironically, expressive types are one tool which can help mitigate these issues. At their best they can improve tooling support and catch errors at compile time rather than at debugging-someone-else’s-stack-trace time. Sometimes they can even make composability issues more obvious early on, helping library authors to address them. If a library provides a domain specific language, expressive types provide the opportunity for a domain-specific type system. That is the dream, at least.

My personal take is that most of the problems with “hypertyped” libraries come from friction between the possibilities of a custom type system and the reality of TypeScript’s abstraction limitations. TypeScript doesn’t provide good ways to create opaque encapsulated types. Providing custom type errors relies on hacks involving objects with literal string types as error messages. Type representation is second-class; library authors cannot control how deeply types get unfolded by default before they are shown to users. Type-level programming in TypeScript involves working in a first-order, quasi-functional, quasi-declarative language with an unusual and restrictive syntax and feature set. And the mechanisms of subtype checking and type inference are sometimes opaque and never extensible. There are GitHub issues for most of these, and some of them could probably be tractably added to the language with enough support. (Please do not use my blog post as motivation for being rude to the TypeScript team.) Dreaming larger, macro support through compiler extensions could more fully liberate libraries’ interfaces from the constraints of the language.
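To make the custom-error hack concrete, here is one common shape of the literal-string trick (ErrorMessage, RequireNonEmpty, and greet are all my own illustrative names):

```typescript
// A fake "error type" whose only job is to carry a message in its
// literal string parameter.
type ErrorMessage<M extends string> = { __error: M };

// If the string is empty, substitute the error type; otherwise pass
// the string through unchanged.
type RequireNonEmpty<S extends string> = S extends ""
  ? ErrorMessage<"string must be non-empty">
  : S;

function greet<S extends string>(name: S & RequireNonEmpty<S>): string {
  return `hello, ${name}`;
}

console.log(greet("world")); // "hello, world"
// greet("");  // rejected: the compile error surfaces
//             // ErrorMessage<"string must be non-empty">
```

It works, but the “message” only appears embedded in a larger assignability error, which is exactly the kind of second-class experience I mean.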

From this perspective the solution to the “problem” of hypertyping is to double down. I encourage authors to look at their libraries as languages in their own right and ask what type system they’d want. This can illuminate areas where safety can be enhanced, or places where we should ask more of our programming language, or yes, places where maybe the types don’t need to be quite so restrictive.


There’s no single simple answer to questions about type design. It’s an engineering problem, with all of the messy, quasi-subjective, and often unsatisfying constraint-solving that software engineering entails. Type design does bring some constraints of its own: tradeoffs between composability and specificity, impedance mismatches with the underlying type system, and an unusual type-level programming model to work within. Still, the problems of type design are not significantly different in kind from the problems of runtime code design, and it’s commonly held that there are multiple valid ways to approach coding problems. Whether in the context of application or library code, I think it would be an astonishing coincidence if there were a globally optimal level of usage of TypeScript’s advanced features. I would find it even more astonishing if that optimal level were either “zero” or “as much as possible”. It’s uncomfortable not having an absolute rule, but that dissatisfaction is the place where our skills and tools can grow.