|
This behavior was useful when starting the implementation over
("take-2") where the whole test suite was failing. This made it easy
to focus on one test at a time and get each working.
More recently, we got the whole suite working, so we don't need this
feature anymore. And in the previous commit, we regressed a couple of
tests, so it's nice to be able to see all the failures with a single
run of the suite.
|
|
content."
This reverts commit 7db2402a8009772a3f10d19cfc7f30be9ee79295.
It doesn't revert the new test case from that commit, just the
extremely ugly second-pass implementation.
|
|
Recently I'm seeing cases where "gcc -E" mysteriously omits blank
lines, (even though it prints the blank lines in other very similar
cases). Rather than trying to decipher and imitate this, just get rid
of the blank lines.
Using sed to strip the blank lines before the diff is better than
using "diff -B", since when there is an actual difference, the blank
lines won't make the diff harder to read.
|
|
To do this correctly, we change the lexer to lex integers as string values,
(new token type of INTEGER_STRING), and only convert to integer values when
evaluating an expression value.
Add a new test case for this, (which does pass now).
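To illustrate the motivation, (with an invented macro name), consider:

#define NUM 007
NUM
#if NUM == 7
int ok;
#endif

The text line must print back exactly as "007", so the lexer keeps the
original spelling; only when the #if expression is evaluated is that
string converted to an integer value.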
|
|
Along with a passing test to verify that this works.
|
|
For this we always add a new argument to the argument list as soon as
possible, without waiting until we see some argument token. This does
mean we need to take some extra care when comparing the number of
arguments with the number of expected arguments. In addition to
matching numbers, we also support one (empty) argument when zero
arguments are expected.
Add a test case here for this, which does pass.
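That is, (with an invented macro name), a case like:

#define foo() bar
foo()

Here 'foo' expects zero arguments, but since we create an argument as
soon as the invocation's parentheses are seen, the parser finds one
(empty) argument. The count check treats that as a match.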
|
|
This just makes these functions easier to understand all around. In
the case of _token_list_append_list this is an actual bug fix, (where
appending an empty list onto a non-empty list would previously scramble
the tail pointer of the original list).
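A minimal sketch of the fix, (assuming a simple head/tail list; these
are not the project's actual declarations):

typedef struct token_node {
    struct token_node *next;
    /* ... token data ... */
} token_node_t;

typedef struct token_list {
    token_node_t *head;
    token_node_t *tail;
} token_list_t;

static void
_token_list_append_list(token_list_t *list, token_list_t *tail)
{
    /* Appending an empty list must leave list->tail untouched. */
    if (tail == NULL || tail->head == NULL)
        return;

    if (list->head == NULL)
        list->head = tail->head;
    else
        list->tail->next = tail->head;

    list->tail = tail->tail;
}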
|
|
That is, a function-like invocation foo(x) is valid as a
single-argument invocation even if 'x' is a macro that expands into a
value with a comma. Add a new COMMA_FINAL token type to handle this,
and add a test for this case, (which passes).
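For example, (macro names invented here):

#define pair a, b
#define first(x) x
first(pair)

The invocation has a single argument, 'pair'. When that argument is
expanded it produces 'a, b', but the comma coming from the expansion
is a COMMA_FINAL token rather than an argument separator, so 'first'
still sees exactly one argument.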
|
|
the content.
That is, the following case:
#define foo(x) (x)
#define bar
bar(baz)
which now works with this (ugly) commit.
I definitely want to come up with something cleaner than this.
|
|
This adds three new pieces of state to the parser, (is_control_line,
newline_as_space, and paren_count), and a large amount of messy
code. I'd definitely like to see a cleaner solution for this.
With this fix, the "define-func-extra-newlines" now passes so we put
it back to test #26 where it was originally (lately it has been known
as test #55).
Also, we tweak test 25 slightly. Previously this test was ending a
file function-like macro name that was not actually a macro (not
followed by a left parenthesis). As is, this fix was making that test
fail because the text_line production expects to see a terminating
NEWLINE, but that NEWLINE is now getting turned into a SPACE here.
This seems unlikely to be a problem in the wild, (function macros
being used in a non-macro sense seems rare enough---but more than
likely they won't happen at the end of a file). Still, we document
this shortcoming in the README.
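For example, (with an invented macro), the following invocation now
works, with the embedded newline treated as a space:

#define add(a, b) (a + b)
add(1,
    2)

This expands to (1 + 2).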
|
|
This is what I get for using a non-type-safe hash-table implementation.
|
|
To do this we have split the existing "HASH_IF expression" into two
productions:
First is HASH_IF pp_tokens which simply constructs a list of tokens.
Then, with that resulting token list, we first evaluate all DEFINED
operator tokens, then expand all macros, and finally start lexing from
the resulting token list. This brings us to the second production,
IF_EXPANDED expression
This final production works just like our previous "HASH_IF
expression", evaluating a constant integer expression.
The new test (54) added for this case now passes.
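For example, (with an invented macro name):

#define FEATURE 1
#if defined FEATURE && FEATURE > 0
int have_feature;
#endif

First "defined FEATURE" is evaluated, (to 1), on the raw token list;
then the remaining tokens are macro expanded, turning FEATURE into 1;
finally the expanded list is lexed again and evaluated as the constant
integer expression 1 && 1 > 0.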
|
|
Simply need to move the rule for IDENTIFIER to be after "defined" and
everything is happy.
With this change, tests 50 through 53 all pass now.
|
|
With this change, tests 41 through 49 all pass. (The defined operator
appears to be somehow broken so that test 50 doesn't pass yet.)
|
|
|
|
Which makes test 40 now pass.
|
|
Now that we no longer have nested for loops with 'i' and 'j' we can
use the 'node' that we already have.
|
|
All the code referencing these was removed some time ago.
|
|
With this fix, tests 37 - 39 now pass.
|
|
None of these are fundamental---just a few things that haven't been
implemented yet.
|
|
Always better to use proper grammar in our grammar.
|
|
As required by the C99 specification of the preprocessor.
With this fix, tests 33 through 36 now pass.
|
|
This doesn't change any functionality here, but will allow us to make
future changes that were not possible with direct printing.
Specifically, we need to expand macros within macro arguments before
performing argument substitution. And *that* expansion cannot result
in immediate printing.
|
|
With this fix, test 32 no longer recurses infinitely, but now passes.
|
|
Supporting embedded newlines in a macro invocation is going to be
tricky with our current approach to lexing and parsing. Since this
isn't really an important feature for us, we can defer this until more
important things are resolved.
With this test out of the way, tests 27 through 31 are passing.
|
|
This trailing whitespace was coming from macro definitions and from
macro arguments. We fix this with a little extra state in the
token_list. It now remembers the last non-space token added, so that
these can be trimmed off just before printing the list.
With this fix, test 23 now passes. Tests 24 and 25 are also passing,
but they probably would have before this fix---just that they weren't
being run earlier.
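A minimal sketch of the idea, (invented names, not the actual code):

typedef struct token {
    struct token *next;
    /* ... token data ... */
} token_t;

typedef struct token_list {
    token_t *head;
    token_t *tail;
    token_t *non_space_tail;   /* last non-space token appended */
} token_list_t;

static void
_token_list_trim_trailing_space(token_list_t *list)
{
    if (list->non_space_tail) {
        list->non_space_tail->next = NULL;
        list->tail = list->non_space_tail;
    }
}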
|
|
We're no longer using the expansion stack, so its functions can go
along with most of the body of glcpp_parser_lex that was using it.
|
|
We weren't including this left parenthesis in the argument's token
list, so the nested function invocation was not being recognized.
With this fix, tests 21 and 22 now pass.
|
|
This causes test 16 to pass. Tests 17-20 are also passing now, (though
they would probably have passed before this change and simply weren't
being run yet).
|
|
This makes tests 16 - 19 pass.
|
|
This is what gcc does, and it's actually less work to do
this. Previously we were having to save the contents of space tokens
as a string, but we don't need to do that now.
We extend test #0 to exercise this feature here.
|
|
This simply ensures that spaces in the input line are preserved.
|
|
This makes test 15 pass and also dramatically simplifies the lexer.
We were previously using a CONTROL state in the lexer to only emit
SPACE tokens when on text lines. But that's not actually what we
want. We need SPACE tokens in the replacement lists as well. Instead
of a lexer state for this, we now simply set a "space_tokens" flag
whenever we start constructing a pp_tokens list and clear the flag
whenever we see a '#' introducing a directive.
Much cleaner this way.
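Roughly, (with invented names, not the actual lexer code), the
whitespace action reduces to something like:

enum { NO_TOKEN = 0, SPACE };

typedef struct {
    int space_tokens;   /* set when a pp_tokens list begins,
                           cleared when '#' starts a directive */
} parser_state_t;

static int
lex_horizontal_space(parser_state_t *parser)
{
    if (parser->space_tokens)
        return SPACE;
    return NO_TOKEN;   /* otherwise, just discard the whitespace */
}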
|
|
This supports function-like macro invocation but without any argument
substitution. This now makes tests 11 through 14 pass.
|
|
Which is as simple as copying the former action back from the git
history.
Now all tests through test 11 pass.
|
|
For this we add an "active" string_list_t to the parser. This makes
the current expansion_list_t in the parser obsolete, but we don't
remove that yet.
With this change we can now start passing some actual tests, so we
turn on real testing in the test suite again. I expect to implement
things more or less in the same order as before, so the test suite now
halts on first error.
With this change the first 8 tests in the suite pass, (object-like
macros with chaining and recursion).
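For example, recursion is stopped by the active list in a case like
this, (macro names invented):

#define foo bar
#define bar foo
foo

Expanding 'foo' yields 'bar'; expanding 'bar' yields 'foo' again, but
'foo' is on the active list at that point, so it is left alone and the
final output is simply 'foo'.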
|
|
With this change, we can recreate the original text-line input
exactly. Previously we were inserting a space between every pair of
tokens so our output had a lot more whitespace than our input.
With this change, we can drop the "-b" option to diff and match the
input exactly.
|
|
Still not doing any macro expansion just yet. But it should be fairly
easy from here.
|
|
This pernicious stuff managed to sneak in on us.
|
|
Yet another talloc wrapper that should come in handy.
|
|
This is a fresh start with a much simpler approach for the flex/bison
portions of the preprocessor. This isn't functional yet, (produces no
output), but can at least read all of our test cases without any parse
errors.
The grammar here is based on the grammar provided for the preprocessor
in the C99 specification.
|
|
These operators were already supported, but were not yet covered by
existing tests. So this test passes already.
|
|
This new test covers several features from the last few commits.
This test passes already.
|
|
In addition to the decimal literals which we already support. Note
that we use strtoll here to get the large-width integers demanded by
the specification.
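A sketch of the conversion, (not the project's actual code): base 0
lets strtoll accept decimal, octal (leading 0), and hexadecimal (0x)
spellings in a single call, and long long gives the width we need.

#include <stdlib.h>

static long long
literal_value(const char *text)
{
    return strtoll(text, NULL, 0);
}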
|
|
This is what the C99 specification demands. And the GLSL specification
says that we should follow the "standard C++" rules for #if condition
expressions rather than the GLSL rules, (which only support a 32-bit
integer).
|
|
This was simply missing before, (and unnoticed since we had no test of
the '~' operator).
|
|
The operator coverage here is quite complete. The one big thing
missing is that we are not yet doing macro expansion in #if
lines. This makes the whole support fairly useless, so we plan to fix
that shortcoming right away.
|
|
So far the only expression implemented is a single integer literal,
but obviously that's easy to extend. Various things including nesting
are tested here.
|
|
This is *very* easy to implement now that macro arguments are pre-expanded.
|
|
Previously, we were using the same lexing stack as we use for macro
expansion to also expand macro arguments. Instead, we now do this
earlier by simply recursing over the macro invocation's replacement
list and constructing a new expanded list, (and pushing only *that*
onto the stack).
This is simpler, and also allows us to more easily implement token
pasting in the future.
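For example, (with invented macros):

#define VALUE 4
#define twice(x) ((x) + (x))
twice(VALUE)

The argument is expanded before substitution, so what is substituted
for 'x' is the already-expanded token list '4', and the result is
((4) + (4)).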
|