# My Existential Crisis

People have long told me that being good at math is important for a computer programmer, and I have long maintained “No, really, it isn’t.” (This goes back at least as far as the third grade, where “programming” to me meant BASIC, and “math” to me meant “arithmetic.”)

For the most part, I’ve been able to survive just fine despite my aversion to math, and I’ve happily focused on “practical” programming. However, one very practical consideration is this: programs have bugs. Most people seem happy to accept this, but personally I want to do a Picard facepalm whenever one of my coworkers says “I ran the program once, on my machine, and it seemed to do what I thought it should, so that means it works.” Of course, this situation can be greatly improved by having a comprehensive test suite and running it regularly, but I still find that to be unsatisfyingly haphazard. Perhaps a perfect test suite could catch all bugs, but experience has shown that, in practice, the test suites for complex software don’t catch some really horrifying bugs. And in fairness, I think it’s much easier to test that software *does* do what it *should* than to test that it *doesn’t* do what it *shouldn’t*.

Even if one trots out the aphorism “there’s no such thing as a bug-free program,” I believe that we can and should write software that’s *orders of magnitude* less buggy than typical software currently is. Therefore, I’ve recently gotten interested in dependently-typed languages, and languages that allow proofs to be written for code. (And I admit to being ignorant enough that I’m not even sure if those two are the same thing. Can one have proofs without dependent types? Are there dependently-typed languages that don’t support proofs?) I briefly looked at Agda and Idris, but both seemed to have fairly impenetrable documentation, not to mention that both failed to build with the infamous `ExitFailure 139` (apparently a segfault?).

So, I’ve decided that ATS2 is the dependently-typed language I want to learn. It seems to be aimed at actually writing dependently-typed programs in the real world, rather than just being an academic curiosity. And it has very accessible documentation, for the most part. I had issues getting it to work on my Ubuntu 12.04 machine, but I got it to work on my Mac laptop, so that’s good enough for now.

I was able to digest the ATS documentation fairly easily, up until the chapter on dependent types. (Which, of course, is the main event, since my whole goal here was to learn to program in a dependently-typed language.) When it started talking about universal and existential quantification, that’s where I started to have trouble. Now, part of it is that these are scary-sounding things… one of my complaints about math is that it likes to use scary-sounding words. But that’s a fairly superficial complaint, and I *do* know what universal and existential quantification mean, at least at a very basic level. I know they correspond to the Unicode symbols ∀ and ∃ and mean “for all” and “there exists.” And I grasp what they would mean for numbers: it makes sense to say “x + x = 2 * x, for all x” or “there exists an integer x where x * x = 25.”
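(Those two statements can even be written down and machine-checked, e.g. in Lean — a sketch, assuming Mathlib is available for the `ring` and `norm_num` tactics:

```lean
import Mathlib.Tactic

-- “x + x = 2 * x, for all x”
example : ∀ x : ℤ, x + x = 2 * x := by intro x; ring

-- “there exists an integer x where x * x = 25” — proved by
-- supplying a witness, namely 5
example : ∃ x : ℤ, x * x = 25 := ⟨5, by norm_num⟩
```

Note the asymmetry: the ∀ statement is proved by reasoning about an arbitrary x, while the ∃ statement is proved by simply exhibiting one witness.)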

But, what do these mean for *types*? That’s where it gets tricky for me. I sort of understand what universal quantification means for types: it’s just “generics” or “parametric polymorphism.” For example, I can have a list type which can hold any type. (Or more formally, “list” is a “type constructor” which can take any type as an argument.) But what does existential quantification mean for types? Part of what confuses me is that, at least in Haskell, existential types are created using the `forall` keyword. I mean, that just seems really messed up to me. But, even ignoring that, what exactly *is* an existential type? In Haskell, it seems like existential types are used to get runtime polymorphism (like Java interfaces), as opposed to the compile-time polymorphism of Haskell typeclasses. But this seems far enough from any intuitive understanding of the phrase “there exists” that it makes my head want to explode.
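Here’s the kind of thing I mean — a minimal sketch of a Haskell existential (`AnyShow` is a made-up name of my own, not a standard library type):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- The type variable a appears only on the right-hand side, so each
-- AnyShow value wraps *some* type a that has a Show instance —
-- “there exists an a such that Show a holds.”
data AnyShow = forall a. Show a => AnyShow a

-- Pattern matching recovers only what the constraint promises (show).
-- This is the runtime-polymorphism flavor: like a Java interface,
-- callers know the operations but not the concrete type.
showIt :: AnyShow -> String
showIt (AnyShow x) = show x

main :: IO ()
main = mapM_ (putStrLn . showIt)
             [AnyShow (42 :: Int), AnyShow True, AnyShow "hi"]
```

As for the `forall` spelling: the constructor’s type is `AnyShow :: forall a. Show a => a -> AnyShow` — it works *for all* `a`, which is exactly why each resulting value holds *some* `a`. The two quantifiers are duals of each other, which makes the keyword choice slightly less messed up, though only slightly.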

And does any of what I sort-of-know-but-don’t-really-understand about existential types in Haskell apply to ATS? The documentation says:

> In order to unleash the expressiveness of dependent types, we need to employ both universal and existential quantification over static variables. For instance, the type `Int` in ATS is defined as follows:
>
> ```
> typedef Int = [a:int] int (a) // for unspecified integers
> ```
>
> where the syntax `[a:int]` means existential quantification over a static variable `a` of the sort `int`. Essentially, this means that for each value of the type `Int`, there exists an integer I such that the value is of the type `int(I)`. Therefore, any value that can be given the type `int` can also be given the type `Int`. A type like `Int` is often referred to as an existentially quantified type.
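My tentative mental model, translated into Haskell terms (the names `SInt` and `SomeInt` are my own, and this surely glosses over a lot): ATS’s `int(a)` is a type indexed by the *value* it contains, and `Int` packs one up while hiding the index.

```haskell
{-# LANGUAGE GADTs, DataKinds, KindSignatures, ExistentialQuantification #-}

-- A rough analogue of ATS's int(a): SInt n is a type indexed by
-- the number it represents, so SInt ('S ('S 'Z)) can only hold 2.
data Nat = Z | S Nat

data SInt (n :: Nat) where
  SZ :: SInt 'Z
  SS :: SInt n -> SInt ('S n)

-- A rough analogue of ATS's Int = [a:int] int(a): wrap up an SInt n
-- while hiding the index n. The value still *has* a definite index —
-- we just no longer know statically which one (“there exists an n…”).
data SomeInt = forall n. SomeInt (SInt n)

toInt :: SInt n -> Int
toInt SZ     = 0
toInt (SS k) = 1 + toInt k

someToInt :: SomeInt -> Int
someToInt (SomeInt s) = toInt s
```

If that reading is right, the difference between `int` and `Int` is that `int(a)` pins down exactly which integer a value is at the type level, while `Int` is the “I know it’s some specific integer, I just don’t know which” version.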

So, what does that really mean? How is the type `Int` different from the type `int`? I don’t get it. It then goes on to say:

> As another example, the type `Nat` in ATS is defined as follows:
>
> ```
> typedef Nat = [a:int | a >= 0] int (a) // for natural numbers
> ```
>
> where the syntax `[a:int | a >= 0]` means existential quantification over a static variable `a` of the sort `int` that satisfies the constraint `a >= 0`.

So that makes it sound like “existential quantification” means “specifying a constraint.” Is that what it means? And if so, it’s an entirely different thing from what it means in Haskell. Damn mathematicians.
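Although — poking around afterwards, GHC’s own base library does ship a constrained existential with roughly the shape of ATS’s `Nat`, in `GHC.TypeLits`. So maybe the two worlds aren’t *entirely* different after all. A small sketch (only `describe` is my own name; the rest is the real `GHC.TypeLits` API):

```haskell
-- GHC.TypeLits defines, in effect:
--
--   data SomeNat = forall n. KnownNat n => SomeNat (Proxy n)
--   someNatVal :: Integer -> Maybe SomeNat
--
-- The KnownNat constraint travels inside the existential package,
-- much like [a:int | a >= 0] bundles the constraint with the witness —
-- and someNatVal enforces the “>= 0” part by returning Nothing.
import GHC.TypeLits (SomeNat (..), natVal, someNatVal)

describe :: Integer -> String
describe i = case someNatVal i of
  Just (SomeNat p) -> "a natural: " ++ show (natVal p)
  Nothing          -> "not a natural"
```

So `describe 5` reports a natural while `describe (-3)` doesn’t — the constraint is checked when the existential package is built, and anyone who later unpacks it can rely on it having held.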