With the release of ANTLR 2.7.5, you can now generate your Lexers, Parsers and TreeParsers in Python. This feature extends the benefits of ANTLR's predicated-LL(k) parsing technology to the Python language and platform.
To be able to build and use the Python language Lexers, Parsers and TreeParsers, you will need to have the ANTLR Python runtime library installed in your Python path. The Python runtime model is based on the existing runtime model for Java and is thus immediately familiar. The Python runtime and the Java runtime are very similar, although there are a number of subtle (and not so subtle) differences. Some of these result from differences in the respective runtime environments.
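Once installed, a quick way to verify that the runtime can be found is to import it; the runtime support module for Python is named antlr. A minimal check (the printed path will of course differ on your system):

    import antlr                # the ANTLR Python runtime support module
    print antlr.__file__        # shows where the runtime was found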
ANTLR Python support was contributed (and is to be maintained) by Wolfgang Haefelinger and Marq Kole.
The ANTLR Python runtime source and build files are completely integrated in the ANTLR build process. The ANTLR runtime support module for Python is located in the lib/python subdirectory of the ANTLR distribution. To enable the installation of the Python runtime support you will have to provide the --enable-python option for the configure script, for instance:
./configure --enable-python --prefix=$HOME
With Python support enabled the current distribution will look for the presence of a python executable of version 2.2 or higher. If it has found such a beast, it will generate and install the ANTLR Python runtime as part of the overall ANTLR building and installation process.
If the python distribution you are using is at an unusual location, perhaps because you are using a local installation instead of a system-wide one, you can provide the location of that python executable using the --with-python=<path> option for the configure script, for instance:
./configure --enable-python --prefix=$HOME --with-python=$HOME/bin/python2.3
Also, if the python executable is at a regular location, but has a name that differs from "python", you can specify the correct name through the $PYTHON environment variable.
export PYTHON=python2.3
./configure --enable-python --prefix=$HOME --with-python=$HOME/bin/python2.3
All the example grammars can be built and run in one go by running make in the examples/python subdirectory of the ANTLR distribution.
# Build all examples and run them
cd examples/python ; make

# Clean all examples
make clean
You can instruct ANTLR to generate your Lexers, Parsers and TreeParsers using the Python code generator by adding the following entry to the global options section at the beginning of your grammar file.
options {
    language="Python";
}
After that, things are pretty much the same as in the default Java code generation mode. See the examples in examples/python for some illustrations.
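For instance, assuming a grammar file calc.g (a hypothetical name) that contains the options entry above, code generation is invoked just as for the Java target, and one Python module is emitted per grammar in the file:

    java antlr.Tool calc.g    # emits e.g. calcLexer.py and calcParser.py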
One particular issue that is worth mentioning is the handling of comments in ANTLR Python. Java, C++, and C# all use the same lexical structures to define comments: // for single-line comments, and /* ... */ for block comments. Unfortunately, Python does not handle comments this way. It only knows about single-line comments, which start with a # symbol.
Normally, all comments outside of actions are actually comments in the ANTLR input language. These comments, both block comments and single-line comments, are translated into Python single-line comments.
Secondly, all comments inside actions should be comments in the target language, Python in this case. Unfortunately, if the actions contain ANTLR action symbols, such as $getText, the code generator seems to choke on Python comments, as the # sign is also used in tree construction. The solution is to use Java/C++-style comments in all actions; these will be translated into Python comments by ANTLR as it checks these actions for the presence of predefined action symbols such as $getText.
So, as a general issue: all comments in an ANTLR grammar for the Python target should be in Java/C++ style, not in Python style.
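A small sketch of what this looks like inside a lexer rule (the rule and its action are made up for illustration):

    ID  : ('a'..'z')+
          {
              // Java/C++-style comments in actions are translated
              // into Python comments by the code generator
              print "matched an identifier:", $getText
          }
        ;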
import directives
You can instruct the ANTLR Python code generator to import additional Python packages in your generated Lexer/Parser/TreeParser by adding code to the header section, which must be the first section at the beginning of your ANTLR grammar file, apart from any other header sections.
header {
    import os, sys
}
__init__ method
You can instruct the ANTLR Python code generator to include additional Python code in your generated Lexer/Parser/TreeParser by adding code to the init header section, which must be the first section at the beginning of your ANTLR grammar file, apart from any other header sections. The code in the header is appended to the end of the __init__ method.
header "init" { self.message = "This is the default message" }
If your grammar file contains both a Lexer and a Parser (or any other multiple of definitions), the code in the init header will be reproduced in the __init__ methods of all of these definitions without change. If you really want to update only one of the definitions, for instance, the __init__ method of the Lexer class you are creating, use
header "<LexerGrammar>.init" { self.message = "This is the default message" }
where <LexerGrammar> is the name of the Lexer grammar. The same construction also works with the Parsers and TreeParsers, of course.
In case both a generic init header and a grammar-specific header are present, the grammar-specific one will override the generic one.
You can instruct the ANTLR Python code generator to add additional Python code at the end of your generated Lexer/Parser/TreeParser, that is, after the class definition itself, by adding code to the main header section, which must be the first section at the beginning of your ANTLR grammar file, apart from any other header sections.
header "main" { print "You cannot execute this file!" }
If your grammar file contains both a Lexer and a Parser (or any other multiple of definitions), the code in the main header will be reproduced at the end of all of the generated class definitions. If you really want to add code after only one of the definitions, for instance, after the Lexer class, use
header "<LexerGrammar>.main" { print "You cannot execute this file!" }
where <LexerGrammar> is the name of the Lexer grammar. The same construction also works with the Parsers and TreeParsers, of course.
In case both a generic main header and a grammar-specific header are present, the grammar-specific one will override the generic one. If no main headers are present and the grammar is for a Lexer, automated test code for that lexer is automatically added at the end of the generated module. This can be prevented by providing an empty main header.
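For instance, a sketch of an empty main header that suppresses the generated test code:

    header "main" {
    }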
A related option is className, which replaces the default class name in the generated module:

options {
    className="Scanner";
}

If you are using the className option in conjunction with the Python-specific header options, there will be no collisions. The className option changes the class name, while the main headers require the use of the grammar name, which will become the module name after code generation.
header "ParrotSketch.init" { self.state = JohnCleese.select("dead", "pushing up daisies", \ "no longer", "in Parrot Heaven") print "This parrot is", self.state } class ParrotSketch extends Lexer; options { className="Scanner"; }
As the handling of modules (packages in Java speak) in Python differs from that in Java, the current approach in ANTLR of naming both the file and the class it contains after the grammar is kind of awkward. Instead, a different approach has been chosen that better reflects the handling of modules in Python. The name of the generated Python file is still derived from the name of the grammar, but the name of the class is fixed to the particular kind of grammar: a lexer grammar will be used to generate a class Lexer; a parser grammar will be used to generate a class Parser; and a treeparser grammar will be used to generate a class Walker.
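In client code this naming scheme looks as follows (a sketch; the grammar names calcLexer and calcParser and the start rule expr are made up for illustration):

    import sys
    import calcLexer        # from "class calcLexer extends Lexer;"
    import calcParser       # from "class calcParser extends Parser;"

    lexer = calcLexer.Lexer(sys.stdin)   # the class is always "Lexer"
    parser = calcParser.Parser(lexer)    # the class is always "Parser"
    parser.expr()                        # invoke the start rule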
The following example grammar illustrates this layout:

header {
    // gets inserted in the Python source file before any
    // generated declarations
}

options {
    language = "Python";
}

{
    // global code stuff that will be included in the 'MyParser.py' source
    // file just before the 'Parser' class below
    ...
}
class MyParser extends Parser;
options {
    exportVocab=My;
}
{
    // additional methods and members for the generated 'Parser' class
    ...
}

... generated RULES go here ...

{
    // global code stuff that will be included in the 'MyLexer' source file
    // just before the 'Lexer' class below
    ...
}
class MyLexer extends Lexer;
options {
    exportVocab=My;
}
{
    // additional methods and members for the generated 'Lexer' class
    ...
}

... generated RULES go here ...

{
    // global code stuff that will be included in the 'MyTreeParser' source
    // file just before the 'Walker' class below
    ...
}
class MyTreeParser extends TreeParser;
options {
    exportVocab=My;
}
{
    // additional methods and members for the generated 'Walker' class
    ...
}

... generated RULES go here ...

The version number in parentheses shows the tool version used to develop and test. It may work with older versions as well. Python 2.2 or better is required, as I'm using some recent Python features (like super(), for example).
Known Bugs and Limitations
- E0004
- ANTLR requires that a rule's return statement always contains a type and an identifier. Python does not know about types, so the type information is ignored, but it still needs to be present. Further, ANTLR's API does not provide access to the identifier. Therefore the variable 'r' is used for returning values, no matter what identifier is listed.

For example:

expr returns [float f]
{ r = 0 }
    : #(EXPR r = multexpr())
    ;

Note that 'r' is used even if 'f' is given!
- L0001
- There's no documentation available other than the source code.
- L0002
- Performance should be improved
Miscellaneous Notes
- The API of the generated lexers, parsers, and treeparsers is supposed to be similar to the Java ones. However, calling a lexer is somewhat simplified:
### class "calcLexer extends Lexer" will generate python ### module "calcLexer" with class "Lexer". import calcLexer ### read from stdin .. L = calcLexer.Lexer() ### read from file "test.in" .. L = calcLexer.Lexer("test.in") ### open a file and read from it .. f = file("test.in", "r") L = calcLexer.Lexer(f) ### this works of course as well import sys L = calcLexer.Lexer(sys.stdin) ### use a shared input state L1 = calcLexer.Lexer(...) state = L1.inputState L2 = calc_l.Lexer(state)- The loop for the lexer to retrieve token by token can be written as:
lexer = calcLexer.Lexer()   ### create a lexer for calculator
for token in lexer:
    ## do something with token
    print token

or even:

for token in calcLexer.Lexer():   ### create a lexer for calculator
    ## do something with token
    print token

- However, writing this lexer loop is rarely necessary, as it is generated by default in each generated lexer. Just run
python calcLexer.py < calc.in

to test the generated lexer.
- Symbolic token numbers, the table of literals, bitsets, and bitset data functions are generated at file (module) scope instead of class scope. For example:

import calc_l        # import calc lexer module
calc_l.EOF_TYPE      # prints 1
calc_l.literals      # {';': 11, 'end': 12, 'begin': 10}

- Comments in actions should be in Java/C++ formats, i.e. // and /* ... */ are valid comments. However, make sure that you put a comment before or after a statement, but not within. For example, this will not work:
x = /* one */ 1

The reason is that Python only supports single-line comments. Such a Python comment skips everything till end-of-line. Therefore, in the translation of the comment, a newline will be introduced on reaching */. The code above would end up as

x = # one
1

which is probably not what you want.
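By contrast, a comment placed before or after the whole statement translates without harm (a sketch):

    /* one */
    x = 1       // a trailing comment is fine as well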
- The lexer actions $newline, $nl and $skip have been introduced as language-independent shortcuts for calling self.newline() ($newline, $nl) and for setting _ttype = SKIP ($skip).
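A sketch of how these shortcuts might appear in a lexer rule (the whitespace rule itself is made up for illustration):

    WS  : ( ' '
          | '\t'
          | '\n'  { $newline }   // keep the line counter up to date
          )+
          { $skip }              // drop the whitespace token
        ;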
- Unicode is supported, but it is easy to run into errors if your terminal (output device) is not able to handle Unicode characters.
Here are some rules when using Unicode input:
- You need to wrap your input stream in a stream reader which translates bytes into Unicode characters. This usually requires knowledge about your input's encoding. Assuming, for example, that your input is 'latin1', you would do this:

import sys, codecs

### replace stdin with a wrapper that spits out
### unicode chars.
sys.stdin = codecs.lookup('latin1')[-2](sys.stdin)

Here reading from stdin gets wrapped.

- When printing tokens etc. containing Unicode characters, it appears to be best to translate explicitly to a unicode string before printing. Consider:
for token in unicode_l.Lexer():
    print unicode(token)   ## explicit cast

The need for this explicit cast appears to be due to a bug in Python found during development (discussion still in progress).
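Similarly, if your terminal expects a specific encoding, you can wrap sys.stdout in the matching stream writer before printing; a sketch, again assuming 'latin1':

    import sys, codecs

    ### replace stdout with a wrapper that encodes unicode
    ### chars into latin1 bytes on the way out
    sys.stdout = codecs.lookup('latin1')[-1](sys.stdout)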