I am setting up some large-width integer types, and I am making heavy use of macros so that the types are usable, as much as possible, like the basic integer types. An issue I keep running into is that the most straightforward, easily implemented solutions make generous use of _Generic expressions in my macro expansions, as opposed to minimizing _Generic use and relying more on macros and possibly multiple versions of each operation.
So, my question is whether _Generic expressions are, in practice, the same as macros. Are there optimizer issues with _Generic that are not a problem with macros? Are there differences in compilation through some other mechanism?
What makes me nervous is that _Generic is conceptually almost identical to a macro, so why is it syntactically an expression?
I understand that the answer to this question is compiler-related, but I imagine all reasonable compilers will have similar behaviour.
Some responses suggest I should explain how _Generic and macros are similar with regard to my question. Both replace their invocation with code specific to a circumstance. A macro has more general rules for producing its replacement, whereas a _Generic must select one of several specific expressions based on the type of its controlling expression. The point is that both are conceptually preprocessing ideas, in that they determine what code is actually compiled.
Answer 0 (score: 0)
If _Generic and macros are similar for you because they can be evaluated before run-time, do you also think that sizeof (except for VLAs) or constant expressions are similar to macros?
Remember that at preprocessing time, the implementation has no knowledge of types, keywords, etc., and is only processing tokens.