
© 2006-2017 Andrew Cooke (site) / post authors (content).

Lepl 4 Preview - Simpler, Faster, Easier

From: andrew cooke <andrew@...>

Date: Thu, 11 Mar 2010 21:55:49 -0300

Here's a draft of the page from the Lepl docs that describes the new features.
This is *not* released so, at the time of writing, it does *not* describe the
current library or web site (but if you're impatient, everything you read
here is available for download from the source repo!).



Lepl 4 - Simpler, Faster, Easier
================================

I've made Lepl simpler to use.  For example, if a parser fails to match the
input, you get an exception with the location of the problem.  If that's not
what you want, it can be disabled by calling `.config.no_full_match()`
(configuration got simpler too!).

Another example: it's easier to add new matchers.  Before, you needed to
subclass a complex class.  Now, you can add a decorator to a simple function.

Even debugging is simpler.  If you want to understand what the parser is
doing, add `with TrackVariables()` and the progress of the match will be
printed to your screen.  The display includes the variable names that you used
in the code, so it's easy to understand.

Often when software is made simpler to use, it becomes slower.  The reverse is
true for Lepl --- the new, simpler approach supports new optimisations and
makes fixing bugs easier.  In my tests, parsers using the default
configuration are up to 10 times faster.

Below I'll explain all these new features in much more detail, but if you want
to get started with Lepl now, installation instructions are on the (new,
simpler) front page.


A Simpler API
-------------

Configuration
~~~~~~~~~~~~~

Matchers are now configured via methods on the `.config` attribute.  There's no
need to hunt round the documentation looking for rewriters -- everything is
right at your fingertips::

   >>> matcher = Any()
   >>> dir(matcher.config)
   [...]
   >>> help(matcher.config.left_memoize)
   
   Help on method left_memoize in module lepl.core.config:
   
   left_memoize(self) method of lepl.core.config.ConfigBuilder instance
       Add memoization that can detect and stabilise left-recursion.  This
       makes the parser more robust (so it can handle more grammars) but
       also significantly slower.

Each configuration option has two methods --- one to turn it on, and one to
turn it off.  These changes are relative to the default configuration [TODO
reference] unless you first call `.clear()` (which removes all options).

So, for example::

  >>> matcher.config.no_lexer()

removes lexer support from the default configuration, while::

  >>> matcher.config.clear().lexer()

gives a configuration that *only* has lexer support.
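The on/off pairing described above can be illustrated with a minimal sketch.  This is *not* Lepl's implementation --- `ConfigBuilder`, the option names, and the `DEFAULTS` set below are all hypothetical stand-ins --- but it shows the pattern: every option has an enabling method and a matching `no_*` method, and `clear()` empties the active set so you can build up from nothing::

```python
# A minimal sketch (hypothetical, not Lepl source) of the paired
# on/off configuration pattern: each option has an enabling method
# and a matching no_* method, and clear() removes all options.

class ConfigBuilder:
    """Illustrative stand-in for Lepl-style .config chaining."""

    DEFAULTS = {'lexer', 'full_match'}   # assumed default options

    def __init__(self):
        self.options = set(self.DEFAULTS)

    def clear(self):
        self.options.clear()             # start from an empty configuration
        return self                      # return self so calls can chain

    def lexer(self):
        self.options.add('lexer')
        return self

    def no_lexer(self):
        self.options.discard('lexer')
        return self

config = ConfigBuilder()
config.no_lexer()                        # defaults minus the lexer
assert config.options == {'full_match'}

config = ConfigBuilder()
config.clear().lexer()                   # *only* the lexer
assert config.options == {'lexer'}
```

Returning `self` from each method is what makes chained calls like `.clear().lexer()` read naturally.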


Full Match
~~~~~~~~~~

Often, particularly with a simple parser, you expect all the input to be
matched.  If it isn't, something went wrong, and you'd like to know where.

In Lepl 4 you get all that by default::

  >>> matcher = Any()[5]
  
  >>> try:
  ...     matcher.parse('1234567')
  ... except FullMatchException as e:
  ...     print(str(e))
  The match failed at '67'.

And if you use a more specific parse method, you get a more detailed error::
   
  >>> try:
  ...     matcher.parse_string('1234567')
  ... except FullMatchException as e:
  ...     print(str(e))
  The match failed at '67',
  Line 1, character 5 of str: '1234567'.

Of course, you can disable this with `.config.no_full_match()`.

For more details, see the manual [TODO - reference].
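The idea behind the default check is simple enough to sketch in a few lines.  This is assumed behaviour, not Lepl's code --- `parse_exactly_five` is a toy stand-in for `Any()[5]` --- but it shows the shape of the feature: if the parser matches a prefix and input remains, raise an exception pointing at the leftover text::

```python
# A rough sketch (assumed behaviour, not Lepl source) of the default
# full-match check: a successful prefix match with unconsumed input
# raises an exception naming the leftover text.

class FullMatchException(Exception):
    pass

def parse_exactly_five(text):
    """Toy stand-in for Any()[5]: match exactly five characters."""
    matched, rest = text[:5], text[5:]
    if rest:                                   # unconsumed input left over
        raise FullMatchException("The match failed at %r." % rest)
    return list(matched)

try:
    parse_exactly_five('1234567')
except FullMatchException as e:
    assert str(e) == "The match failed at '67'."
```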


Multiple Matches, Parsers
~~~~~~~~~~~~~~~~~~~~~~~~~

The new `.parse_all()` method (and related `.parse_string_all()`, etc) returns
a generator of all possible matches.  This is similar to the old `.match()`
method (which still exists), but without the remaining streams (which were
usually not interesting).  If you need multiple matches you'll probably find
that `.parse_all()` simplifies your code.
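The difference is easy to show with a toy model (illustrative only --- these are not Lepl's internals): `.match()` yields `(result, remaining_stream)` pairs, while `.parse_all()` yields just the results::

```python
# A toy illustration (hypothetical, not Lepl code) of match() versus
# parse_all(): the former pairs each result with the remaining stream,
# the latter drops the (usually uninteresting) streams.

def match(text):
    """Yield every non-empty prefix of the input with what remains."""
    for i in range(1, len(text) + 1):
        yield (list(text[:i]), text[i:])

def parse_all(text):
    """Like match(), but keep only the results."""
    for result, _stream in match(text):
        yield result

assert list(match('ab')) == [(['a'], 'b'), (['a', 'b'], '')]
assert list(parse_all('ab')) == [['a'], ['a', 'b']]
```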

Also, parsers are now cached (this isn't strictly new - it was also present in
later Lepl 3 versions).  This means that you can call `.parse()` repeatedly
without worrying about wasting time re-compiling the parser.

Cached parsers and configuration interact like you would expect --- changing
the configuration clears the cache so that a new parser is compiled with the
new settings.  If you want to keep a copy of the parser with the old settings
(useful in tests) then try `.get_parser()`.
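The caching behaviour can be sketched as follows (assumed semantics, not Lepl's implementation --- `Matcher`, `configure` and `compile_count` are invented for illustration): the compiled parser is cached, and any configuration change clears the cache so the next `.parse()` recompiles::

```python
# A sketch (assumed semantics, not Lepl source) of parser caching:
# repeated parse() calls reuse one compiled parser, and a configuration
# change invalidates the cache.

class Matcher:
    def __init__(self):
        self._parser = None
        self.compile_count = 0

    def configure(self, **options):
        self._parser = None           # config change invalidates the cache
        self._options = options

    def get_parser(self):
        if self._parser is None:
            self.compile_count += 1   # expensive compilation happens here
            self._parser = lambda text: list(text)
        return self._parser           # a reference freezes these settings

    def parse(self, text):
        return self.get_parser()(text)

m = Matcher()
m.parse('ab'); m.parse('cd')
assert m.compile_count == 1           # second call reused the cache
old = m.get_parser()                  # snapshot with the old settings
m.configure(lexer=False)
m.parse('ef')
assert m.compile_count == 2           # recompiled after the config change
assert old is not m.get_parser()      # but the snapshot still exists
```

Holding a reference from `get_parser()` is why the "keep a copy with the old settings" trick works.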


Upgrading from Lepl 3
~~~~~~~~~~~~~~~~~~~~~

Lepl 4.0 is a major release, which means that it contains changes to
often-used methods.  If you have code from a previous version the changes
described here are important, because they will probably cause your program to
fail.  The good news is that the parts of the API with most changes are those
that are called only once (configuration, creating the parser, etc).  So
updating your code should be relatively easy.  In particular, the way that the
grammar is specified is unchanged.


Easier to Extend
----------------

Roll Your Own Matcher
~~~~~~~~~~~~~~~~~~~~~

Adding a new matcher to Lepl is now as easy as writing a function::

  >>> from string import ascii_uppercase
  >>> @function_matcher
  ... def Capital(support, stream):
  ...     '''A matcher for capital letters.'''
  ...     if stream[0] in ascii_uppercase:
  ...         return ([stream[0]], stream[1:])
  ...
  >>> Capital.config.no_full_match()
  >>> Capital.parse('ABC')
  ['A']

If the matcher supports multiple results then it should `yield` them::

  >>> @sequence_matcher
  ... def Digit(support, stream):
  ...     '''Provide all possible telephone keypresses.'''
  ...     digits = {'1': '',     '2': 'abc',  '3': 'def',
  ...               '4': 'ghi',  '5': 'jkl',  '6': 'mno',
  ...               '7': 'pqrs', '8': 'tuv',  '9': 'wxyz',
  ...               '0': ''}
  ...     if stream:
  ...         digit, tail = stream[0], stream[1:]
  ...         yield ([digit], tail)
  ...         if digit in digits:
  ...             for letter in digits[digit]:
  ...                 yield ([letter], tail)
  ...
  >>> list(Digit()[3, ...].parse_all('123'))
  [['123'], ['12d'], ['12e'], ['12f'], ['1a3'], ['1ad'], ['1ae'], ['1af'], 
  ['1b3'], ['1bd'], ['1be'], ['1bf'], ['1c3'], ['1cd'], ['1ce'], ['1cf']]

Note how these matchers inherit the full functionality of Lepl!

For more information, including support for matchers that process other
matchers, or that can be configured in the grammar, see [TODO].


General Transformations
~~~~~~~~~~~~~~~~~~~~~~~

Lepl has always supported functions that transform results, but the underlying
implementation is now significantly more powerful.  For example, a function may
add alternative matches, or abort the matching early.

This functionality is unlikely to be used in grammars, but will make adding
cool new features easier.
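To make that concrete, here is a hedged sketch of what such a transformation could look like (illustrative only --- the function names and the generator protocol below are assumptions, not Lepl's API): instead of mapping each result one-to-one, the function receives the stream of matches and may inject alternatives or stop early::

```python
# An illustrative sketch (not Lepl's API) of a "general transformation":
# it consumes a generator of matches and may add derived alternatives
# or abort matching early.

def matches():
    yield ['a']
    yield ['b']
    yield ['c']

def add_upper_and_stop_at_b(results):
    """Add an upper-case alternative for each match; abort after 'b'."""
    for result in results:
        yield result
        yield [r.upper() for r in result]     # an extra, derived match
        if result == ['b']:
            return                            # abort matching early

assert list(add_upper_and_stop_at_b(matches())) == \
    [['a'], ['A'], ['b'], ['B']]
```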


Easier Debugging
----------------

The `Trace()` functionality in Lepl has never been easy to understand, for two
reasons.  First, it tracks *every* matcher.  Second, it's unclear which
matcher corresponds to which part of the grammar.

Normally, when we debug a program, things are simpler because we can see the
*variables*.  So I have added that to Lepl.  The implementation has some rough
corners, because it uses parts of Python that were not intended to be used in
this way, but I think you'll agree that the result is worth the effort.

Here's an example.  The variables that will be displayed must be defined
inside `with TrackVariables()`::

  >>> with TrackVariables():
  ...     word = ~Lookahead('OR') & Word()
  ...     phrase = String()
  ...     with DroppedSpace():
  ...         text = (phrase | word)[1:] > list
  ...         query = text[:, Drop('OR')]
  ...
  >>> query.parse('spicy meatballs OR "el bulli restaurant"')
        phrase failed                             stream = 'spicy meatballs OR...
          word = ['spicy']                        stream = ' meatballs OR "el...
        phrase failed                             stream = 'meatballs OR "el b...
          word = ['meatballs']                    stream = ' OR "el bulli rest...
        phrase failed                             stream = 'OR "el bulli resta...
          word failed                             stream = 'OR "el bulli resta...
        phrase failed                             stream = ' OR "el bulli rest...
          word failed                             stream = ' OR "el bulli rest...
          text = [['spicy', 'meatballs']]         stream = ' OR "el bulli rest...
        phrase = ['el bulli restaurant']          stream = ''
        phrase failed                             stream = ''
          word failed                             stream = ''
          text = [['el bulli restaurant']]        stream = ''
  [['spicy', 'meatballs'], ['el bulli restaurant']]
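The "rough corners" mentioned above come from inspecting the caller's stack frame.  This sketch shows the basic trick under loud assumptions: it is *not* Lepl's code, it is CPython-specific (`sys._getframe`), and the names are invented --- but it demonstrates how a context manager can discover, on exit, which variable names were bound inside its block::

```python
# A simplified, CPython-specific sketch (not Lepl source) of variable
# tracking: on exit, the context manager compares the caller's locals
# against those recorded on entry to learn which names were bound
# inside the with-block.

import sys

class TrackVariables:
    def __enter__(self):
        # remember which locals existed before the block ran
        frame = sys._getframe(1)
        self._before = set(frame.f_locals)
        return self

    def __exit__(self, *exc):
        frame = sys._getframe(1)
        # names bound inside the block are the new locals
        # (excluding the tracker itself, bound by the as-clause)
        self.names = {name: value
                      for name, value in frame.f_locals.items()
                      if name not in self._before and value is not self}
        return False

def demo():
    with TrackVariables() as tracked:
        word = 'a matcher'
        phrase = 'another matcher'
    return tracked.names

assert set(demo()) == {'word', 'phrase'}
```

Using frame locals this way is exactly the sort of "parts of Python that were not intended to be used in this way" the text refers to: it works in practice, but nothing in the language guarantees it.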



Faster Parsers
--------------

Faster Defaults
~~~~~~~~~~~~~~~

I spent some time profiling, experimenting with different configurations, and
have tweaked the default settings so that, on average, parsers are faster.  In
particular, memoisation is used only to detect left-recursive loops (if you
do want full memoisation you can still configure it, of course, with
`.config.auto_memoize(full=True)`).
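The detection idea can be sketched briefly (a toy, not Lepl's algorithm --- the grammar and helper names below are invented): remember which positions a rule is currently being evaluated at; re-entering one of them means the parse is looping, so that branch fails instead of recursing forever::

```python
# A toy sketch (not Lepl's algorithm) of left-recursion detection:
# track the positions at which a rule is already in progress, and
# fail any re-entrant call at the same position.

def left_recursion_guard(rule):
    in_progress = set()
    def guarded(text, pos):
        if pos in in_progress:
            return None               # loop detected: fail this branch
        in_progress.add(pos)
        try:
            return rule(text, pos)
        finally:
            in_progress.discard(pos)
    return guarded

# expr := expr '+' digit | digit   (a left-recursive grammar)
def expr(text, pos):
    left = guarded_expr(text, pos)    # would loop forever unguarded
    if left is not None and left < len(text) and text[left] == '+':
        right = digit(text, left + 1)
        if right is not None:
            return right
    return digit(text, pos)

def digit(text, pos):
    if pos < len(text) and text[pos].isdigit():
        return pos + 1                # return the position after the match
    return None

guarded_expr = left_recursion_guard(expr)

# The guard stops the infinite loop and matches the first digit; a
# full implementation would also grow the match iteratively ("seed
# growing"), which this sketch omits.
assert guarded_expr('1+2', 0) == 1
assert guarded_expr('x', 0) is None
```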


No Trampolining
~~~~~~~~~~~~~~~

Lepl is unique (I believe) in using trampolining and co-routines to implement
recursive descent.  This has several advantages, but introduces some
overhead.

I have measured the overhead, and it's surprisingly small, but even so it
seems silly to have it when it's not needed.  But the problem has always been:
when is it not needed?  The ability to define matchers via functions,
described above, finally gave an answer to that question.

Matchers that are defined as functions are simpler than a completely general
matcher.  So Lepl exploits this to remove trampolining when they are used.
And, of course, matchers provided by Lepl are implemented this way when
possible.

The end result is that trampolining is removed when the grammar is unlikely to
need it.  If you disagree, you can add it back through the configuration
(`.config.no_direct_eval()`).
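The contrast between the two evaluation styles can be shown in miniature (a toy model, not Lepl's internals --- the matcher protocol below is invented): a trampolined matcher is a coroutine that *yields* requests for sub-matchers to a driver loop, while a simple matcher is just a function you call directly::

```python
# A toy contrast (illustrative only, not Lepl source) between direct
# evaluation (plain function calls) and trampolined evaluation (a
# coroutine that yields sub-matcher requests to a driver loop).

def digit_direct(text, pos):
    """A simple matcher: no trampoline needed, just call it."""
    if pos < len(text) and text[pos].isdigit():
        return pos + 1
    return None

def pair_trampolined(text, pos):
    """A matcher written as a coroutine: it yields sub-matchers and
    receives their results, rather than calling them itself."""
    first = yield (digit_direct, pos)
    if first is None:
        yield None
        return
    second = yield (digit_direct, first)
    yield second

def trampoline(matcher, text, pos):
    """Drive a coroutine matcher, running sub-matchers on its behalf."""
    gen = matcher(text, pos)
    request = next(gen)
    while isinstance(request, tuple):
        sub, at = request
        request = gen.send(sub(text, at))   # run the sub-matcher directly
    return request

assert digit_direct('12', 0) == 1              # direct: one function call
assert trampoline(pair_trampolined, '12', 0) == 2
assert trampoline(pair_trampolined, '1x', 0) is None
```

The driver loop's bookkeeping is the overhead the text describes; when a matcher is as simple as `digit_direct`, calling it directly skips all of it.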


Better Memoisation
~~~~~~~~~~~~~~~~~~

Sometimes memoisation is a *big* win.  It's not enabled by default, so you
still need to experiment to find out when to use it.  But until now it had a
stupid bug that made it less likely to work.  That bug is now fixed, so when
you need memoisation, it will be there for you.
