Dependent Types in Prolog

Wednesday, 7th May, 2014

or, “Had we but world enough, and time …”

** intro

I’d been occasionally looking into programming languages with dependent types when a recent tweet gave me pause:


I got interested in dependent types because SO TOLD ME TO^W^W^W^W they seemed necessary to encode some constraints on types for some toy projects I was thinking of, namely:

  • for a Bayesian network: represent probabilities (positive floating point number less than or equal to 1.0) and discrete probability distributions (a {T, Probability} mapping where the sum of probabilities = 1.0);
  • for fast Fourier transformation: represent a list whose length must be a power of 2.

Naively, I thought I might be able to do this kind of thing in Haskell. I don’t like the cult hype around Idris. I thought I might try it with ATS, which I like the look of a lot, but it’s proving slow getting into.
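For the probability type at least, plain Haskell already gets some of the way with a smart constructor: no dependent types, just a runtime check at construction (the names Prob and mkProb below are mine, not from any library):

```haskell
-- Probability as a newtype whose only sensible entry point checks the
-- invariant 0 <= x <= 1. The check happens at runtime, not in the type
-- checker, which is exactly the gap dependent types are meant to close.
newtype Prob = Prob Double deriving (Show, Eq, Ord)

mkProb :: Double -> Maybe Prob
mkProb x
  | x >= 0 && x <= 1 = Just (Prob x)
  | otherwise        = Nothing
```

The distribution constraint (values summing to 1.0) can be policed the same way, but a bad literal is only caught when the program runs, not when it compiles.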

** prolog

In the meantime, here are the types represented in Prolog:

% some "dependent" types

%%%% X is a power of 2

pow2(1).
pow2(X):-
    X > 1,
    Y is X / 2,
    pow2(Y).

%%%% X is a list whose length is a power of 2

pow2list(X):-
    length(X, L),
    pow2(L).

%%%% X is a probability

prob(X):-
    X >= 0,
    X =< 1.

%%%% X is a discrete probability distribution

% n.b. sum_list/2 computes the sum, then unifies it with 1.0:
% an exact floating-point comparison.
dist(X):-
    tuplist(X),
    seconds(X, Y),
    maplist(prob, Y),
    sum_list(Y, 1.0).

%%%% helpers

seconds([], []).
seconds([[_, Y] | Z], [Y | Z2]):-
    seconds(Z, Z2).

tuplist([]).
tuplist([[_X, _Y] | Z]):-
    tuplist(Z).

** next


I might yet try this with ATS.

On the other hand, I might try it with Mercury, which seems to be (among other things perhaps) a typed Prolog.

** p.s.

In case the above is a bit downbeat on ATS: I do think ATS is a very interesting language. If any reader can give a quick sketch of one of the simpler types above in ATS, I am definitely up for having a go at the others.

Building and installing ATS on FreeBSD

Wednesday, 30th April, 2014

ATS is a statically typed functional (eager) programming language with dependent types and linear types. It “can be as efficient as C/C++ both time-wise and memory-wise”.

Building and installing on FreeBSD is mostly a matter of following the documentation, but three things tripped me up:

GNU make vs BSD make

The Makefiles assume GNU make and won’t run with BSD make. However, on FreeBSD, “make” is BSD make, and GNU make is “gmake”. The way the Makefiles are written (e.g., make is called explicitly from within the Makefiles) seemed to rule out just aliasing gmake as make, so I (as root) hid (BSD) make and symlinked “make” to gmake.

which compiler

By default the Makefiles compile with gcc, although there is an option to compile with clang. My FreeBSD setup had both, with clang as the default compiler (at cc). However, although gcc was installed, it was at “gcc47” and there was no symlink from “gcc”. I added the symlink.


the install -m flag

In the install_files sections of the top-level Makefile, the install commands have a space between the -m flag and the actual mode, e.g.:

    install_files_1: bin/patscc ; \
      $(INSTALL) -T -m 755 $< $(PATSLIBHOME2)/bin/patscc && echo $<

This tripped up install, which reads that 755 as a separate argument. This is fixed by removing the space, i.e.:

    install_files_1: bin/patscc ; \
      $(INSTALL) -T -m755 $< $(PATSLIBHOME2)/bin/patscc && echo $<

There are five points which need to be changed.

(n.b.: the Makefile may have been updated by the time you read this.)


With all that done, and a “Hello world!” program compiled and run, I am ready to adventure into dependent and linear types!

Review: Beginning Haskell

Saturday, 26th April, 2014

book details

“Beginning Haskell: A Project-Based Approach”
Alejandro Serrano Mena


This book could have been very good: the author has the enthusiasm, the knowledge, and the focus. Unfortunately he has been very very badly let down by his publisher. As it is, the book is almost unbelievably horrible to try and read.

The comments below are based only on the first four chapters, which was as much as I could stomach. The contents of the remainder look interesting, so I may read on, … but I don’t really think I shall.


The book has all the usual dysfluencies, typos, inaccuracies, and errors we have become used to in programming language publishing, but they come very thick and fast.

As well as the author, the book boasts a Lead Editor, an Editorial Board (of twenty people), a Coordinating (sic) Editor, a Copy Editor, an Indexer, and a Technical Reviewer (who is so important he warrants a photograph). That’s at least twenty-five people “helping” the author. The editing and indexing in the book is so awful, and the accuracy of the technical content so variable, that I can’t imagine what any of them did for their money. The book could hardly have been worse if they’d conspired together to wreck it on purpose.

The index was actually useless. Of the five terms I looked up, *none* were in the index: case, fold, map, monoid, set.

Most pages are cluttered with dysfluent English, grammatical mistakes and typos. These are so thick they make the text quite difficult to read. Each textual error trips up the reader’s eye. It’s like walking along a street when the pavement is covered in dogshit: you can’t enjoy the scenery, you can’t think about where you’re going, all you can do is pick your way through the shit.

Worst are the code-related errors, some of which beggar belief. Code which is not Haskell:

let name = match companyName client of
	       Just n -> n

As far as I know “match” is not a Haskell keyword. The book does not disabuse me of that impression. Neither does ghci.
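The construct the fragment presumably wanted is case … of. A minimal, self-contained correction (companyName here is a stand-in returning Maybe String, not the book's actual record field):

```haskell
-- "case … of", not "match … of". A Nothing branch is added so the
-- pattern match is total; the book's fragment handled only Just.
companyName :: Maybe String
companyName = Just "Black Hole Inc."

name :: String
name = case companyName of
         Just n  -> n
         Nothing -> "unknown"
```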

There’s code that will not compile, or doesn’t do what the text says it does (in one case the code does the exact opposite of what the text claims it does).

Page 58 has a “wonderful” code example which manages not only to be inconsistent with the text, but also inconsistent with itself. It manages to do this in three lines of code!


The text says the code imports filter and reverse; the code actually imports permutations and subsequence; but the code calls (i.e., expects to have been imported) filter and permutations.

Twenty-five editors. And the book has a full page dedicated to praising the Technical Reviewer.

All in all, reading this book was an unpleasant, even nauseating, experience. Because of the book’s good points (see below) I persevered, and worked through many of the exercises.

The last straw was exercise 4-3 (p. 89).

Earlier the book had introduced a data type Client i, where i is a polymorphic type used as an id (p. 49), and where a client can be one of three subtypes (Government, Company or Individual). Exercise 4-3 is to write a function that takes a list of Clients and returns a map from these subtypes to sets of clients, or:

classifyClients :: [Client Integer] -> Map ClientKind (Set (Client Integer))

The exercise has two parts, each suggesting an alternative implementation, and then says:

It’s interesting that you create a very large client list and run the two implementations to compare which one behaves better in speed.

Compare how exactly? Later (p. 122), we are told about how to build a cabal project with profiling options that will profile a Main module.

So let’s just forget about comparison for now and write these two functions.

It turns out that Set (Client Integer) will not work:

  *Main> let  cs = S.fromList [Gov 1, Gov 2, Edu 3, Edu 4, Ind 5, Ind 6]

      No instance for (Ord (Client i0))
        arising from a use of `S.fromList'
      Possible fix: add an instance declaration for (Ord (Client i0))
      In the expression: S.fromList [Gov 1, Gov 2, Edu 3, Edu 4, ....]
      In an equation for `cs':
          cs = S.fromList [Gov 1, Gov 2, Edu 3, ....]

Needless to say none of Ord, Set or deriving are in the index, and nothing in the text on Sets indicates that there is a constraint that members of sets must be of type/class Ord. Type classes, including Ord and Eq and deriving, are introduced a bit further on (p. 97). The Client type is defined with just deriving Show (p. 49).

I could have taken this as an extra challenge — interpret those error messages, fix the data type: I did do both those things after all — and in another book I would have. With this book though, I decided it was just more dogshit, and I decided I just didn’t care any more.
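For anyone stuck at the same point, the fix is to derive Ord (and Eq, which Ord requires) on the Client type. A cut-down reconstruction, using the constructor names from the ghci session above (the book's real Client carries more data than a bare id):

```haskell
import qualified Data.Set as S

-- Data.Set needs Ord on its elements, so deriving Show alone (as the
-- book's Client does on p. 49) is not enough; Eq and Ord fix it.
data Client i = Gov i | Edu i | Ind i
  deriving (Show, Eq, Ord)

cs :: S.Set (Client Integer)
cs = S.fromList [Gov 1, Gov 2, Edu 3, Edu 4, Ind 5, Ind 6]
```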

What a horrible book.


All of the textual errors I encountered would have been picked up by an averagely competent editor doing their job. Similarly all of the code errors would have been picked up on a first sweep by an editor with a passing acquaintance with programming languages in general.

The bad points of the book, which make it unusable I think, and certainly unrecommendable, are all down to the publisher and the editorial team.

On the other hand, the author has written something which could have been a very good book, and could certainly have been the best practical introduction to Haskell.

The author’s enthusiasm is palpable and infectious. As well as the exercises, I was motivated to try out code from the text (admittedly, often just to see if it was correct Haskell) and some variations of my own. e.g., inspired by a Haskell function for 3x + 7(x + 2) (p. 57), I tried one for ax^2 + bx + c:

  dup x = (x,x)
  f *** g = \ (x,y) -> (f x, g y)

  abc a b c = (uncurry (+)) . (((*a) . (^2)) *** ((+c) . (*b))) . dup

  *Main> let f = abc 1 2 3
  *Main> f 5
  38
  *Main> f 10
  123
  *Main> let f = abc 3 1 2
  *Main> f 5
  82
  *Main> f 10
  312

The author is ambitious and wide-ranging — covering parsing, parallelism, DSLs, Idris(!); the Haskell ecosystem including things like GHC extensions, cabal, Hoogle, Haddock, HUnit, and more — but by tying everything to a concrete project (the usual web app) keeps things focussed, keeps the pace up, and introduces advanced topics in a natural way.


If this book had the appearance of having been edited competently, it could have become the definitive practical introduction to Haskell: the kind of book a programmer could use for self-study, or a technical manager could give to starting team members.

As it is, I would be embarrassed to recommend this to anybody.

The Fibonacci sequence as an unfold

Wednesday, 9th April, 2014

import Data.List

f :: [Integer]
f = unfoldr ( \x -> case x of
                      (0,0) -> Just (0, (1,0))
                      (a,b) -> Just (a+b, (b, a+b)) ) (0,0)


> head f
0
> take 5 f
[0,1,1,2,3]
> take 15 f
[0,1,1,2,3,5,8,13,21,34,55,89,144,233,377]

All the info is on this thread:

gpointing-device-settings is a braindead gui from Gnome. Worst of all, it doesn’t remember your settings, so every time you boot up, you have to fiddle about with the gui again.

The post linked to shows how to use xinput to get/set configurations for peripherals, and how to wrap xinput commands in scripts.

A morning with elixir

Saturday, 1st June, 2013

After reading Joe Armstrong’s recent blog post on elixir, and the ensuing discussion there and on twitter, I’ve thought a bit about why I don’t like the language. I can’t spend a week with elixir, so the three observations below are after a morning’s reading and watching.

My overall impression is that elixir is a language that makes it easy for people with a ruby background to dabble with erlang. However, after a morning I’ve found at least three things about elixir that would make my own programming life less pleasant (see below). Elixir contains apparently powerful new features (e.g., macros and protocols), but the examples I’ve seen (e.g. in José Valim’s Erlang Factory talk) are not very exciting.

word-like forms as delimiters

  # elixir
  [1]  def name ( arglist ) do code end

  % erlang
  [2]  name ( arglist ) -> code .

I fail to see how [1] is an improvement on [2]. [2] is shorter by a handful of characters but, mainly, [1] uses word-like forms as delimiters, which I think is a very bad idea.

A human reader will read those word-like forms as words (e.g., the English words “do” and “end”), which they’re not. The delimiters “def”, “do”, and “end” delimit the function definition in the same way that “(” and “)” delimit the argument list.

Using word-like forms as delimiters will (a) slow down the human reader as they read these forms as words, and/or (b) make the code harder to read, as the human reader must explicitly remember not to interpret certain word-like forms as words.

cf. also lc and inlist or inbits, used to delimit comprehensions.

n.b.: the same arguments apply to erlang’s use of “end” as a delimiter for fun and case blocks. Presumably erlang’s excuse was that they ran out of delimiters borrowed from English punctuation, and they didn’t want to overload “.”. I wonder whether some kind of list delimiter might be appropriate (for case at least), e.g.:

  R = case f() of
        [this -> 1;
         that -> 2],

The pipeline operator |>

The pipeline operator strikes me as being halfway between useful and troublesome. It would probably be very useful in cases similar to the example Joe gives (repeated below for convenience) — where a series of functions each take and produce a single item of data.

  capitalize_atom(X) ->
    V1 = atom_to_list(X),
    V2 = list_to_binary(V1),
    V3 = capitalize_binary(V2),
    V4 = binary_to_list(V3),
    V5 = list_to_atom(V4),
    V5.

  def capitalize_atom(X) do
    X |> atom_to_list
      |> list_to_binary
      |> capitalize_binary
      |> binary_to_list
      |> binary_to_atom
  end

However, I think if any of the functions in the series take more than one argument, things could quickly get cumbersome. Witness the discussion on Joe’s blog, Joe’s recent tweets suggesting improvements, and the caveat in the elixir documentation.

The simple case above could be done in erlang with a fold:

  capitalize_atom(X) ->
    lists:foldl(fun(F, Acc) -> F(Acc) end, X,
                [fun atom_to_list/1,
                 fun list_to_binary/1,
                 fun capitalize_binary/1,
                 fun binary_to_list/1,
                 fun binary_to_atom/1]).

Granted, this is possibly even yuckier than Joe’s version, but using a fold or, more generally, writing a function that walks through a list of funs, gives the possibility of handling errors, using some kind of context dictionary as the passed argument, etc.

I think elixir’s “|>” looks more general than it usefully is.


marking variables and atoms

In erlang, variables must begin with a capital letter; atoms need special marking only if they start with a capital letter, or contain certain characters, in which case the atom is enclosed in single quotes. In erlang, module names and function names are atoms.

In elixir, variables don’t seem to need any special marking. Atoms /sometimes/ need to be marked with an initial colon. Unless the atom is a function or module name. Unless the module/function is from the erlang standard library. I might have that wrong.

Learn You a Haskell describes the two type classes Ord and Enum like this:

Ord is for types that have an ordering.

Enum members are sequentially ordered types.

I didn’t find this especially clear.

Real World Haskell failed (yet again). It [the printed book] gives no definition of Enum, although it seems to think it does (p. 472).

Mostly for my own benefit, this is what I’ve found:

The GHC documentation describes the two type classes like this (emphasis added):

The Ord class is used for totally ordered datatypes.

Class Enum defines operations on sequentially ordered types.

So both classes represent ordered types (Ord’s compare even returns a value of the Ordering data type). The difference is in the kind of ordering.

A clue is in the methods provided: Ord provides >, <, max and min; Enum provides succ and pred (and, interestingly, nothing similar to max and min).

Finally, the best explanation I found was in Chapter 8: Standard Haskell Classes of A Gentle Introduction to Haskell:

8.2 The Enumeration Class

Class Enum has a set of operations that underlie the syntactic sugar of arithmetic sequences; for example, the arithmetic sequence expression [1,3..] stands for enumFromThen 1 3. We can now see that arithmetic sequence expressions can be used to generate lists of any type that is an instance of Enum. This includes not only most numeric types, but also Char, so that, for instance, ['a'..'z'] denotes the list of lower-case letters in alphabetical order. Furthermore, user-defined enumerated types like Color can easily be given Enum instance declarations. If so:

[Red..Violet] => [Red, Green, Blue, Indigo, Violet]

Note that such a sequence is arithmetic in the sense that the increment between values is constant, even though the values are not numbers. Most types in Enum can be mapped onto integers; for these, the fromEnum and toEnum convert between Int and a type in Enum.

Note however, that for some reason the Ord type class includes “All Prelude types except IO, (->), and IOError”, so a simple mathematical interpretation won’t do.


So! Actually I’m going to ignore that last caveat.

I think the comfiest way for me to understand these type classes is using the available methods: Ord types are those for which it makes sense to talk about relationships like “greater than” and “less than”; Enum types are those for which it makes sense to talk about relationships like “next” and “previous”. Days of the week would be Enum; Dates would be Ord.
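That reading can be sketched with a home-made type (Day and its constructors are mine):

```haskell
-- Ord supplies the "greater/less than" relationships, Enum the
-- "next/previous" ones plus the arithmetic-sequence sugar.
data Day = Mon | Tue | Wed | Thu | Fri | Sat | Sun
  deriving (Show, Eq, Ord, Enum, Bounded)
```

So succ Mon is Tue, pred Sun is Sat, [Mon .. Fri] is the working week, and Mon < Fri holds.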

