home

malbolge: hello world

update - since writing this a couple of people (lorenzo, and someone whose email i've lost - sorry) have written to me claiming to have produced hello world programs with little more than pencil and paper (and i believe them, although i've not tested their programs). so this is yet another example of resorting to computing when thinking alone would have given a quicker solution. ah well...

update 2 - matthias ernst sent me a detailed description of how he finds malbolge programs (his search is much more efficient because instead of blindly trying "all" programs, he has added enough "smarts" so that he's just effectively searching for each letter - the printing part is already there). his code is here, together with a short description.

this malbolge program generates:

HEllO WORld

it's not perfect - i ignored case to make the problem simpler (completion left as an exercise for the reader - it should be possible).

when i finally got a decent algorithm worked out (i've been playing around with this for the best part of a month - see below), it took a few hours to generate the program on a 500mhz nt box with 96mb memory (the code was written in lisp - i started with clisp on suse linux and then changed to corman lisp on nt) (more numbers here).

incidentally, i've come to hate malbolge. no doubt that was the author's intention (i'm not sure if it was also their intention to write an interpreter whose results can be unpredictable because of memory access errors, but it adds to the general flavour - grrr.....).

the notes below sketch out how this was done:

normalization

a normalised malbolge program is one that has had the initial mapping removed. if you look at the malbolge source you'll see that the input program is translated via a (random?) lookup table. a normalised program is one which would run directly if that translation were removed (you can generate an ordinary program later by doing the translation backwards).

the advantage of normalising is that the commands (characters) in the program don't change meaning with position, allowing them to be generated/combined easily. in other words, a program contains only the eight command characters (j, i, *, p, <, /, v, o).
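
for example, a minimal sketch of the backwards translation (i'm quoting the lookup table from the reference interpreter's source from memory, so check it against malbolge.c before trusting it):

    (defparameter *xlat1*
      ;; the load-time translation table from the reference interpreter
      ;; (xlat1 in malbolge.c) - verify against your copy of the source
      "+b(29e*j1VMEKLyC})8&m#~W>qxdRp0wkrUo[D7,XTcA\"lI.v%{gJh4G\\-=O@5`_3i<?Z';FNQuY]szf$!BS/|t:Pn6^Ha")

    (defun denormalize (normalized)
      "invert the translation: at position i the interpreter reads command
    (aref *xlat1* (mod (+ (- (char-code c) 33) i) 94)), so solve for c."
      (with-output-to-string (out)
        (loop for cmd across normalized
              for i from 0
              do (write-char (code-char (+ 33 (mod (- (position cmd *xlat1*) i)
                                                   94)))
                             out))))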

breeding programs

with normalisation, it looked as though a simple genetic algorithm (ga) would be able to synthesize a hello-world program. this meant generating candidate programs at random, scoring their output, and merging programs with high scores to generate new programs.
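
the skeleton of that loop is roughly this (a toy sketch - score-fn and breed-fn are hypothetical stand-ins for running/scoring a candidate program and for crossover):

    (defun evolve (population score-fn breed-fn
                   &key (generations 100) (keep 10))
      "toy ga loop: keep the highest-scoring programs, breed the rest."
      (dotimes (g generations (first population))
        (let ((best (subseq (sort (copy-list population) #'> :key score-fn)
                            0 keep)))
          (setf population
                (append best
                        (loop repeat (- (length population) keep)
                              collect (funcall breed-fn
                                               (elt best (random keep))
                                               (elt best (random keep)))))))))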

to avoid worrying about the padding function i used malbolge programs of the maximum length (if you supply a program with a shorter length to the malbolge interpreter, the rest of memory is filled with values calculated almost at random).

without thinking too hard (i already had some ga code, so it seemed like a quick and easy approach) i played around with various scoring functions (scoring characters in the correct position, ordered pairs, etc.), but after a week or so (it was taking a day or two to breed anything interesting) it was clear that merging programs wasn't working.
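
the simplest of those scoring functions just counts characters in the right position (a sketch; char-equal conveniently ignores case, matching the simplification above):

    (defun position-score (output target)
      "count output characters matching the target at the same position."
      (loop for i below (min (length output) (length target))
            count (char-equal (char output i) (char target i))))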

running a few malbolge programs through an instrumented interpreter i could see that the order in which sections of a program are executed is important. so merging a fragment from one program into another stopped a later section of the program from working correctly.

rather than giving up on the ga approach completely, i then tried scoring programs only by how much of the string was correct from the left (ie "h", then "he", then "hel" etc) and, later, ring-fencing memory locations used to generate successful output (allowing merging to affect the programs only after they had generated the initial string).
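
scoring from the left is even simpler - the score is just the length of the matching prefix, something like:

    (defun prefix-score (output target)
      "length of the initial run of output matching target, ignoring case
    (mismatch returns nil when the strings are equal)."
      (or (mismatch output target :test #'char-equal)
          (length output)))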

the best result at this point was "hello wor" (mixed case), but it couldn't extend further because jumps within the program meant that immediately after the code that printed "r" was data used to print an earlier section(!).

searching

a break from playing around with genetic algorithms (my father-in-law came to visit) gave me space to reflect. a little thought showed that by ring-fencing memory and scoring from the left i had reduced my breeding to a kind of randomized search.

a direct search would probably be more efficient.

adapting the simple (elegant) code outlined in norvig's book (paradigms of ai programming: case studies in common lisp) i tried a best-first search, generating a new node in the search path whenever memory was accessed.

this appeared to be working, but was storing too much state, making the computer page continually. since i was worried about the disk (the keyboard was already warm over the cpu - now it was heating up over the disk too) i reduced the size of the stored state (beam width in norvig's terminology).
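
the core of such a search is small (a sketch in the spirit of norvig's code, not his actual tree-search): keep the frontier sorted by cost, expand the cheapest node, and truncate the frontier to the beam width:

    (defun beam-search (start goal-p successors cost-fn beam-width)
      "best-first search with a bounded frontier: always expand the
    cheapest node, never keep more than beam-width candidates."
      (let ((frontier (list start)))
        (loop while frontier
              do (let ((node (pop frontier)))
                   (when (funcall goal-p node)
                     (return node))
                   (let ((sorted (sort (append (funcall successors node)
                                               frontier)
                                       #'< :key cost-fn)))
                     (setf frontier
                           (subseq sorted 0
                                   (min beam-width (length sorted)))))))))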

this gave two parameters to select an appropriate search: first, the beam width, and second, the score for a matching character (each memory access had a cost of "1"; a typical reward for matching a character was 10, so getting "h" after 5 memory accesses scores 5 - 10 = -5; the search followed lowest-scoring paths first). these values interact: a large reward requires a large beam width if the search may need to roll back to before the match, for example.
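
in code the scoring above is just (a sketch - how accesses and matched characters are counted is up to the interpreter instrumentation):

    (defun path-cost (accesses matched &key (reward 10))
      "each memory access costs 1, each correct character earns reward;
    the search expands the lowest-cost node first."
      (- accesses (* reward matched)))

    ;; printing "h" after 5 memory accesses:
    ;; (path-cost 5 1) => -5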

the first solution i found (the one given above) used a beam width of 10,000 and a score for matching a character of 10, after searching just over 57,800 nodes (programs), at a depth (ply) of 62 memory fetches (this is not the same as the program length, as not all the memory accesses are contiguous - some of the program is just nop padding).

other solutions can be found with different parameters: "hello World" is found after searching 28,300 nodes if the beam width is 1,000 (reward still 10), but has more memory accesses (71). it's not surprising that a wider beam width finds shorter (in number of memory accesses) programs, but needs more time. a beam width of 100 states fails to find a solution.

(my use of a wide beam width and a lot of memory/state in the first solution i found was driven by a subtle bug in my code that - until i spotted and squashed it - made finding a solution more difficult).

at each point in the search (given sufficient beam width) i store the program state (the a register and the data and instruction pointers), an a-list of accessed memory, and the output so far in a struct. at each iteration of the search eight new copies are made of the best node so far and each is stepped forwards: on its first new memory access each copy is supplied a different value (one of the eight commands, inverted through the initial lookup); on the second new memory access the program state is saved and the interpreter routine exits. each new struct is then scored and added to the stored state.
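
a sketch of the stored node and the expansion step (step-interpreter is a hypothetical stand-in for running the instrumented interpreter until the next fresh memory access):

    (defstruct node
      a c d      ; the a register and the code and data pointers
      memory     ; a-list of (address . value); tail shared with the parent
      output     ; characters printed so far
      cost)      ; score used to order the frontier

    (defparameter *commands* "ji*p</vo")

    (defun expand (node)
      "one child per command: each copy gets a different command at its
    first fresh memory access, then runs until the next fresh access."
      (loop for cmd across *commands*
            ;; copy-node is shallow, so the memory a-list's tail is
            ;; shared with the parent - exactly the chaining described
            ;; below; step-interpreter is hypothetical
            collect (step-interpreter (copy-node node) cmd)))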

since all memory reads are recorded in the struct as an a-list it's easy to reconstruct the successful program on completion (writes from within malbolge are prepended to the a-list, leaving the initial values for this reconstruction). note, finally, that the a-list of memory accesses is not stored as a new copy for each node, but chains back up the search tree (so each node only has to store one additional entry). this saves (a lot of) memory, but distributes the structure throughout memory (corman lisp seemed better than clisp at handling this, with disk thrashing apparently occurring later).
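
the memory handling is then ordinary a-list manipulation (a sketch using the struct above):

    (defun mem-read (node address)
      "newest binding wins because writes are prepended."
      (cdr (assoc address (node-memory node))))

    (defun mem-write (node address value)
      "prepend, leaving older bindings (including the values supplied
    at load time) further down the list; the parent's list is untouched."
      (push (cons address value) (node-memory node)))

    (defun initial-values (node)
      "recover the value supplied for each address when it was first
    accessed (the oldest binding), for reconstructing the program."
      (let ((result '()))
        (dolist (pair (reverse (node-memory node)) result)
          (unless (assoc (car pair) result)
            (push pair result)))))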

turing complete?

i don't have a cs degree, so feel free to correct the following speculations.

the docs for malbolge suggest that it may be turing complete. i'm not so sure.

first, despite executing many millions of instructions in a malbolge interpreter, i have never had a program that enters an infinite loop - not overwhelming evidence, but worrying (it's the kind of thing that tends to turn up when you generate programs at random in most turing complete languages).

second, it's not clear how to extend malbolge to have unbounded memory - addresses are absolute rather than relative and any "arithmetic" is limited to operations that keep values within a fixed range. there is no increment or decrement operation to give relative addressing and it's not clear that one can be written (even for arithmetic within a limited range of values).

more info

why lisp?

i suspect many people haven't used lisp - since i think it's a pretty good language for this kind of thing, and since you're reading this page, you might be interested...

lisp is a high level language with a long history. unlike java or c/c++ you don't need to declare types (although you can, if you want safer / more efficient code). nor do you need to worry about memory allocation, but - unlike scripting languages like perl or python - lisp can be compiled to run quickly. it also has a huge range of useful built-in features (everything from support for oop to case-insensitive string comparison).

perhaps best of all, it can also run in an interpreter and it doesn't "crash". if something goes wrong the program stops at the point the error occurred and you can examine variables, change the program etc (using lisp itself, not a special debug program) and continue. this allowed me to fix bugs quickly and generate useful data at the same time.

although lisp is an ancient language, as these things go, it has evolved continually. most modern lisps follow an ansi standard and are called "common lisp".

for some reason many people think lisp is slow - maybe this used to be the case, but these days lisp compilers are pretty good. there's an article somewhere on the net comparing lisp with c++ which found that, for a particular problem that many people tried to solve, the average lisp program was faster, but the fastest solution was in c++. that makes sense to me: lisp is easier to use and runs pretty fast, but coding in c++ can get you closer to the machine, allowing wizards to make optimizations that are still beyond the ability of compilers (check out the report before you decide you're wizard quality; most c++ programs were slower than those in lisp).

i used clisp because it came with suse linux. it's a pretty solid implementation, but not as fast as some others (it's an exception to what i said above - it only compiles to byte code, like java). when it became clear i needed more memory than my own laptop had (32mb) i "borrowed" my work's nt machine (96mb) and switched to corman lisp because it was faster and clisp seemed to be having a problem with large data sets. corman lisp doesn't implement quite as much of the standard as clisp (full standard implementations do exist, but the missing bits aren't used much anyway) and is win32 only (it includes a very nice interface to win32 dlls). if you're thinking of starting with lisp, either would be a good choice - see www.lisp.org for more details (if you're on win32 and would like to access c libraries or want a nice gui (ide), i'd recommend corman lisp, but the gui isn't free).

ps. [note added later] i deleted the lisp code when updating the OS on my computer. before that i had generated a properly punctuated "Hello world", but never saved the code. so i guess this will remain the only non-trivial malbolge program....