Parse an assignment target. Because Jinja2 allows assignments to tuples, this function can parse all allowed assignment targets. By default assignments to tuples are parsed; this can be disabled by setting with_tuple to False. If only assignments to names are wanted, name_only can be set to True. The extra_end_rules parameter is forwarded to the tuple parsing function.
parse_expression(with_condexpr=True)
Parse an expression. By default all expressions are parsed; if the optional with_condexpr parameter is set to False, conditional expressions are not parsed.
parse_statements(end_tokens, drop_needle=False)
Parse multiple statements into a list until one of the end tokens is reached. This is used to parse the body of statements, as it also parses template data if appropriate. The parser first checks if the current token is a colon and skips it if there is one. Then it checks for the block end and parses until one of the end_tokens is reached. By default the active token in the stream at the end of the call is the matched end token. If this is not wanted, drop_needle can be set to True and the end token is removed.
Works like parse_expression, but if multiple expressions are delimited by a comma a Tuple node is created. This method can also return an ordinary expression node instead of a tuple if no commas were found.
The default parsing mode is a full tuple. If simplified is True only names and literals are parsed. The no_condexpr parameter is forwarded to parse_expression().
Because tuples do not require delimiters and may end in a bogus comma, an extra hint is needed to mark the end of a tuple. For example, for loops support tuples between for and in. In that case the extra_end_rules is set to ['name:in'].
explicit_parentheses is True if the parsing was triggered by an expression in parentheses. This is used to figure out whether an empty tuple is a valid expression or not.
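The effect of tuple parsing can be observed through the public Environment.parse() API: a for loop with two comma-separated loop targets yields a single Tuple node as the target. A minimal sketch:

```python
from jinja2 import Environment, nodes

env = Environment()
# The two comma-separated names before "in" are parsed as one tuple
# target; tuple parsing stops at "in" via extra_end_rules=['name:in'].
ast = env.parse("{% for key, value in items %}{{ key }}{% endfor %}")

for_node = ast.find(nodes.For)
print(type(for_node.target).__name__)  # Tuple
```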
A token stream is an iterable that yields Tokens. The parser however does not iterate over it but calls next() to go one token ahead. The current active token is stored as current.
current
The current token.
eos
Are we at the end of the stream?
expect(expr)
Expect a given token type and return it. This accepts the same argument as jinja2.lexer.Token.test().
look()
Look at the next token.
next()
Go one token ahead and return the old one.
next_if(expr)
Perform the token test and return the token if it matched. Otherwise the return value is None.
push(token)
Push a token back to the stream.
skip(n=1)
Go n tokens ahead.
skip_if(expr)
Like next_if() but only returns True or False.
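A TokenStream can be built directly from Token instances to see these methods in action. Note that this is internal API, so details can vary between Jinja2 versions (for example, in modern versions the stream is advanced with Python's built-in next() rather than a next() method); a sketch against Jinja2 3.x:

```python
from jinja2.lexer import Token, TokenStream

tokens = [Token(1, "name", "foo"), Token(1, "assign", "="), Token(1, "integer", 42)]
stream = TokenStream(iter(tokens), None, None)

print(stream.current.value)      # "foo" -- the active token
print(stream.look().type)        # "assign" -- peek without consuming
old = next(stream)               # go one token ahead, returns the old token
print(old.value)                 # "foo"
print(stream.skip_if("assign"))  # True -- the test matched, token consumed
tok = stream.expect("integer")   # returns the matching token or raises
print(stream.eos)                # True -- only the synthetic eof token is left
```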
class jinja2.lexer.Token
Token class.
lineno
The line number of the token.
type
The type of the token. This string is interned, so you can compare it to arbitrary strings using the is operator.
value
The value of the token.
test(expr)
Test a token against a token expression. This can either be a token type or 'token_type:token_value'. This can only test against string values and types.
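Both forms of token expression can be tried on a hand-built token; a minimal sketch:

```python
from jinja2.lexer import Token

tok = Token(1, "name", "foo")
print(tok.test("name"))      # True  -- matches the type alone
print(tok.test("name:foo"))  # True  -- matches type and value
print(tok.test("name:bar"))  # False -- value differs
print(tok.test("string"))    # False -- type differs
```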
Base class for all Jinja2 nodes. There are a number of nodes available of different types. There are four major types:
Stmt: statements
Expr: expressions
Helper: helper nodes
Template: the outermost wrapper node
All nodes have fields and attributes. Fields may be other nodes, lists, or arbitrary values. Fields are passed to the constructor as regular positional arguments, attributes as keyword arguments. Each node has two attributes: lineno (the line number of the node) and environment. The environment attribute is set at the end of the parsing process for all nodes automatically.
find(node_type)
Find the first node of a given type. If no such node exists the return value is None.
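find() is usually applied to the Template node returned by Environment.parse(); a sketch:

```python
from jinja2 import Environment, nodes

env = Environment()
ast = env.parse("{% for item in seq %}{{ item.name }}{% endfor %}")

# find() walks the tree and returns the first node of the given type,
# or None if the tree contains no such node.
print(ast.find(nodes.For) is not None)  # True
print(ast.find(nodes.If) is None)       # True -- no if statement here
```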