The parser code lives in its own package, in the bakefile/src/bkl/parser directory.
The parser defines its own tree adaptor class, _TreeAdaptor(CommonTreeAdaptor), and TOKENS_MAP associates tokens with Node classes. So after parsing, the nodes of the tree are of specific types.
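To illustrate the idea, here is a self-contained sketch (with hypothetical classes and token constants, not bkl's actual code) of how a token-to-node-class table lets a parser produce typed tree nodes instead of generic ones:

```python
# Sketch of the TOKENS_MAP idea: the tree adaptor consults a
# token-type -> AST-class table, so the parse tree is built from
# typed nodes rather than one generic node class.

class Node:
    def __init__(self, text):
        self.text = text
    def __repr__(self):
        return "%s(%r)" % (type(self).__name__, self.text)

class LiteralNode(Node): pass
class IdNode(Node): pass

# stand-ins for ANTLR's integer token-type constants
LITERAL, ID = 1, 2

TOKENS_MAP = {
    LITERAL: LiteralNode,
    ID: IdNode,
}

def create_node(token_type, text):
    # fall back to the generic Node for unmapped token types
    return TOKENS_MAP.get(token_type, Node)(text)

print(create_node(ID, "foo"))      # IdNode('foo')
print(create_node(99, "mystery"))  # Node('mystery')
```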
_ast_dispatch in builder.py maps node classes to their handlers.
The top-level functions are parse_file and parse in __init__.py. get_parser builds the ANTLR v3 parser:
""" Prepares Bakefile parser for parsing given Bakefile code from string argument passed in. The optional filename argument allows specifying input file name for the purpose of errors reporting. """if code and code[-1] !="\n":
cStream = antlr3.StringStream(code)
lexer = _Lexer(cStream)
lexer.filename = filename
tStream = antlr3.CommonTokenStream(lexer)
parser = _Parser(tStream)
parser.filename = filename
parser.adaptor = ast._TreeAdaptor(filename)
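The trailing-newline guard at the top of get_parser is worth noting: a grammar whose statements are newline-terminated would otherwise choke on a file missing its final newline. A standalone sketch of just that check:

```python
# Standalone version of the trailing-newline guard seen in get_parser.
def ensure_trailing_newline(code):
    if code and code[-1] != "\n":
        code += "\n"
    return code

print(ensure_trailing_newline("a = 1"))  # "a = 1\n"
print(ensure_trailing_newline(""))       # "" (empty input is left alone)
```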
(TODO: trim the error processing below for clarity.) The parse function builds the parser and then gets the tree; this is typical ANTLR v3 usage.
def parse(code, filename=None, detect_compatibility_errors=True):
    """
    Reads Bakefile code from string argument passed in and returns parsed
    AST. The optional filename argument allows specifying input file name
    for the purpose of errors reporting.
    """
    try:
        parser = get_parser(code, filename)
        return parser.program().tree
    except ParserError as err:
        if not detect_compatibility_errors:
            raise
        # Report usage of bkl-ng with old bkl files in user-friendly way:
        if code.startswith("<?xml"):
            raise ParserError("this file is incompatible with new Bakefile versions; please use Bakefile 0.2.x to process it",
                              ...)  # (remaining arguments elided in these notes)
        # Another possible problem is that this version of Bakefile may be
        # too old and doesn't recognize some newly introduced syntax. Try
        # to report that nicely too.
        code_lines = code.splitlines()
        for idx in xrange(0, len(code_lines)):
            ln = code_lines[idx]
            try:
                ...  # (elided: try to parse the line for version requirements)
            except VersionError as e:
                e.pos.filename = filename
                e.pos.line = idx+1
                raise
            except ParserError as e:
                pass
        raise err
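The per-line compatibility check uses a common error-reporting pattern: catch a location-less exception, stamp it with the filename and 1-based line number, and re-raise. A self-contained sketch (Position, VersionError and check_line here are hypothetical stand-ins for bkl's classes):

```python
# Sketch of the "annotate then re-raise" pattern from parse() above.

class Position:
    def __init__(self, filename=None, line=None):
        self.filename = filename
        self.line = line

class VersionError(Exception):
    def __init__(self, msg):
        super().__init__(msg)
        self.pos = Position()

def check_line(ln):
    # hypothetical per-line check standing in for the real version test
    if "requires version" in ln:
        raise VersionError("this file requires a newer Bakefile")

def check_file(code, filename):
    for idx, ln in enumerate(code.splitlines()):
        try:
            check_line(ln)
        except VersionError as e:
            # stamp the error with a location before re-raising, so the
            # user sees a "file:line: message" style report
            e.pos.filename = filename
            e.pos.line = idx + 1
            raise
```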
The interpreter then calls process on the returned tree.
"""Like :meth:`process()`, but takes filename as its argument."""self.process(parse_file(filename))
This ends up calling add_module with the AST, which calls module = b.create_model(ast, parent). That in turn calls self.handle_children(ast.children, self.context), which loops for n in children: and looks up each node's handler with func = self._ast_dispatch[type(n)].
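The dispatch pattern can be sketched in isolation (the node classes and handler names below are invented for illustration; only the _ast_dispatch/handle_children shape is taken from the notes above):

```python
# Minimal sketch of the builder's type-keyed dispatch table: each AST
# node class maps to a handler, and handle_children looks the handler
# up by the node's exact type.

class AssignmentNode: pass
class SubmoduleNode: pass

class Builder:
    def __init__(self):
        self._ast_dispatch = {
            AssignmentNode: self.on_assignment,
            SubmoduleNode: self.on_submodule,
        }
        self.seen = []

    def handle_children(self, children, context):
        for n in children:
            func = self._ast_dispatch[type(n)]
            func(n, context)

    def on_assignment(self, node, context):
        self.seen.append("assignment")

    def on_submodule(self, node, context):
        self.seen.append("submodule")

b = Builder()
b.handle_children([AssignmentNode(), SubmoduleNode()], context=None)
print(b.seen)  # ['assignment', 'submodule']
```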
""" Interprets input file and generates the outputs. :param ast: AST of the input file, as returned by :func:`bkl.parser.parse_file`. Processing is done in several phases: 1. Basic model is built (see :class:`bkl.interpreter.builder.Builder`). No optimizations or checks are performed at this point. 2. Several generic optimization and checking passes are run on the model. Among other things, types correctness and other constraints are checked, variables are substituted and evaluated. 3. The model is split into several copies, one per output toolset. 4. Further optimization passes are done. 5. Output files are generated. Step 1 is done by :meth:`add_module`. Steps 2-4 are done by :meth:`finalize` and step 5 is implemented in :meth:`generate`. """self.add_module(ast, self.model)