Bakefile Internals (4/6)

TargetNode Class
Figure 1: TargetNode Class

TODO: Parsing uses ANTLR v3

The parser code lives in its own package, in the bakefile/src/bkl/parser directory.

The parser defines its own tree adaptor class, _TreeAdaptor(CommonTreeAdaptor), and TOKENS_MAP associates tokens with Node classes, so after parsing the nodes of the tree have specific types.
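The idea can be illustrated with a minimal sketch. The class and function names below (other than TOKENS_MAP itself) are hypothetical stand-ins, not Bakefile's actual ones: each token type is mapped to a Node subclass, and the tree adaptor instantiates the matching subclass when it creates a tree node.

```python
class Node:
    """Generic AST node; subclasses carry the specific meaning."""
    def __init__(self, text):
        self.text = text

class IdNode(Node): pass
class LiteralNode(Node): pass
class AssignmentNode(Node): pass

# Analogous to TOKENS_MAP: token type constants mapped to Node subclasses.
ID, LITERAL, ASSIGN = 1, 2, 3
TOKENS_MAP = {ID: IdNode, LITERAL: LiteralNode, ASSIGN: AssignmentNode}

def create_node(token_type, text):
    """What a custom tree adaptor does when building a node: pick the class
    for this token type, falling back to the generic Node."""
    cls = TOKENS_MAP.get(token_type, Node)
    return cls(text)

node = create_node(ID, "toolsets")
print(type(node).__name__)   # IdNode
```

This is why later stages can dispatch on node classes instead of inspecting raw token types.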

_ast_dispatch maps node classes to their handlers.
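The pattern is plain class-keyed dispatch. In this minimal illustration the node classes and handlers are hypothetical, not Bakefile's real ones:

```python
class AssignmentNode: pass
class SubmoduleNode: pass

def on_assignment(node):
    return "handled assignment"

def on_submodule(node):
    return "handled submodule"

# Analogous to _ast_dispatch: concrete node class -> handler function.
_ast_dispatch = {
    AssignmentNode: on_assignment,
    SubmoduleNode: on_submodule,
}

def handle(node):
    # Look the handler up by the node's exact class and invoke it.
    func = _ast_dispatch[type(node)]
    return func(node)

print(handle(AssignmentNode()))   # handled assignment
```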

The top-level entry points are the parse_file and parse functions.

get_parser builds the ANTLR v3 parser:

def get_parser(code, filename=None):
    """
    Prepares Bakefile parser for parsing given Bakefile code from string
    argument passed in. The optional filename argument allows specifying input
    file name for the purpose of errors reporting.
    """
    if code and code[-1] != "\n":
        code += "\n"

    cStream = antlr3.StringStream(code)
    lexer = _Lexer(cStream)
    lexer.filename = filename

    tStream = antlr3.CommonTokenStream(lexer)
    parser = _Parser(tStream)
    parser.filename = filename
    parser.adaptor = ast._TreeAdaptor(filename)

    return parser

TODO: remove error processing to clarify. The parse function builds the parser and then retrieves the parse tree; this is typical ANTLR v3 usage.

def parse(code, filename=None, detect_compatibility_errors=True):
    """
    Reads Bakefile code from string argument passed in and returns parsed AST.
    The optional filename argument allows specifying input file name for the purpose
    of errors reporting.
    """
    parser = get_parser(code, filename)
    try:
        return parser.program().tree
    except ParserError as err:
        if not detect_compatibility_errors:
            raise
        # Report usage of bkl-ng with old bkl files in user-friendly way:
        if code.startswith("<?xml"):
            raise ParserError("this file is incompatible with new Bakefile versions; please use Bakefile 0.2.x to process it")
        # Another possible problem is that this version of Bakefile
        # may be too old and doesn't recognize some newly introduced
        # syntax. Try to report that nicely too.
        code_lines = code.splitlines()
        for idx in xrange(0, len(code_lines)):
            ln = code_lines[idx]
            if "requires" in ln:
                try:
                    parse(ln, detect_compatibility_errors=False)
                except VersionError as e:
                    e.pos.filename = filename
                    e.pos.line = idx + 1
                    raise e
                except ParserError:
                    pass
        raise err
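The line-scanning fallback is easy to isolate. The helper below is a hypothetical simplification, not Bakefile's actual code: when parsing fails, it finds the lines containing "requires" so a version mismatch can be reported with the right line number.

```python
def find_requires_lines(code):
    """Return the 1-based line numbers of lines mentioning 'requires',
    mirroring the idx + 1 arithmetic in the except branch above."""
    return [idx + 1
            for idx, ln in enumerate(code.splitlines())
            if "requires" in ln]

sample = "toolsets = gnu;\nrequires 1.1;\n"
print(find_requires_lines(sample))   # [2]
```

Each candidate line is then re-parsed on its own, so a VersionError raised for it can be attributed to that exact position in the original file.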

The interpreter then calls process on the returned tree.

def process_file(self, filename):
        """Like :meth:`process()`, but takes filename as its argument."""
        self.process(parse_file(filename))

This ends up calling add_module with the AST, which calls module = b.create_model(ast, parent). That in turn calls self.handle_children(ast.children, self.context), which iterates over the children (for n in children: self._handle_node(n)) and dispatches each node to its handler via func = self._ast_dispatch[type(node)].
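The chain of calls above boils down to a recursive walk with class-based dispatch. This toy builder (hypothetical node classes and handlers, not Bakefile's real ones) shows the shape of handle_children and _handle_node:

```python
class AssignmentNode: pass
class TargetNode: pass

class ToyBuilder:
    """Greatly simplified analogue of bkl.interpreter.builder.Builder."""

    def __init__(self):
        self.log = []
        # Analogous to Builder._ast_dispatch: node class -> bound handler.
        self._ast_dispatch = {
            AssignmentNode: self.on_assignment,
            TargetNode: self.on_target,
        }

    def handle_children(self, children):
        # Walk the child nodes, dispatching each one.
        for n in children:
            self._handle_node(n)

    def _handle_node(self, node):
        func = self._ast_dispatch[type(node)]
        func(node)

    def on_assignment(self, node):
        self.log.append("assignment")

    def on_target(self, node):
        self.log.append("target")

b = ToyBuilder()
b.handle_children([AssignmentNode(), TargetNode()])
print(b.log)   # ['assignment', 'target']
```

A handler for a compound node (such as a target) would call handle_children again on that node's own children, which is how the whole tree gets translated into the model.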

def process(self, ast):
        """
        Interprets input file and generates the outputs.

        :param ast: AST of the input file, as returned by
                    :func:`bkl.parser.parse`.

        Processing is done in several phases:

        1. Basic model is built (see :class:`bkl.interpreter.builder.Builder`).
           No optimizations or checks are performed at this point.

        2. Several generic optimization and checking passes are run on the
           model.  Among other things, types correctness and other constraints
           are checked, variables are substituted and evaluated.

        3. The model is split into several copies, one per output toolset.

        4. Further optimization passes are done.

        5. Output files are generated.

        Step 1 is done by :meth:`add_module`. Steps 2-4 are done by
        :meth:`finalize` and step 5 is implemented in :meth:`generate`.
        """
        self.add_module(ast, self.model)
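The five phases in the docstring can be sketched as a toy pipeline. All the helper functions here are hypothetical stand-ins for the real add_module, finalize, and generate machinery:

```python
import copy

def build_model(ast):                      # phase 1: Builder / add_module
    return {"targets": list(ast.get("targets", []))}

def run_generic_passes(model):             # phase 2: checks and substitution
    model["checked"] = True

def run_toolset_passes(model, toolset):    # phase 4: per-toolset optimization
    model["toolset"] = toolset

def generate(model, toolset):              # phase 5: write the output files
    return "%s output for %s" % (toolset, ", ".join(model["targets"]))

def process(ast, toolsets):
    model = build_model(ast)
    run_generic_passes(model)
    outputs = {}
    for ts in toolsets:                    # phase 3: one model copy per toolset
        m = copy.deepcopy(model)
        run_toolset_passes(m, ts)
        outputs[ts] = generate(m, ts)
    return outputs

print(process({"targets": ["hello"]}, ["gnu", "vs2010"]))
```

The deepcopy step is the important design point: each toolset gets its own copy of the model, so toolset-specific passes can mutate it freely without affecting the others.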


Copyright(c) 2006-2017 Xavier Leclercq