I have a very large C++ project and I'm trying to decrease build times. I've been religious about forward declaring and only including files that are used. However, I have many preprocessor defines which are set up in the project files (i.e. not in .cpp or header files) so that I can keep them all in a central location for my different build configurations. Is it possible that this could increase build times? Would moving the preprocessor defines to a header file, so they are only included where needed, speed up the build?
It seems like the build would be faster if the preprocessor didn't need to scan for and replace the defines in files that don't use any of them, but I'm not sure if it's worth the effort.
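For concreteness, here is a made-up sketch of the two setups I am comparing; the macro names are purely illustrative:

```cpp
// Current setup: the defines live in the project/build files, so every
// translation unit is effectively compiled with something like
//   g++ -DENABLE_FEATURE_X -DLOG_LEVEL=2 -c foo.cpp
// (ENABLE_FEATURE_X and LOG_LEVEL are invented names for this example).

// Alternative I am considering: a central config.h, included only by the
// files that actually use one of these macros.
#ifndef CONFIG_H
#define CONFIG_H

#define ENABLE_FEATURE_X
#define LOG_LEVEL 2

#endif // CONFIG_H
```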
Unused macros that are specified on the compiler command line (through -D options) should not have any measurable impact on build times. The parsing of command-line arguments is insignificant compared to the time it takes to compile even a smallish source file, and the compiler can't skip the preprocessing step anyway, because there will invariably be some #includes to process and some macros to expand.
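You can check this yourself by running only the preprocessor and comparing output sizes with and without the defines. A rough check, assuming GCC or Clang and a hypothetical foo.cpp with hypothetical macro names:

```
# -E stops after preprocessing; the two line counts will be virtually
# identical when the macros are unused, because the bulk of the work
# comes from the #include-d headers, not from scanning for macro names.
g++ -E -DENABLE_FEATURE_X -DLOG_LEVEL=2 foo.cpp | wc -l
g++ -E foo.cpp | wc -l
```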
My experience with build time is pretty simple: it is roughly linear in the number of translation units. So if you want to reduce it, either paste some sources together or do a "unity build": #include groups of .cpp files into one file and compile that instead of the originals. It certainly means static symbols get shared across the grouped files; collisions between static variables are not that hard to detect, but for static functions it might be trickier.
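A minimal sketch of such a unity translation unit, with placeholder file names standing in for your own sources:

```cpp
// unity_group_1.cpp -- compile this single file instead of the three
// originals: one compiler invocation, one pass over the shared headers.
#include "parser.cpp"
#include "lexer.cpp"
#include "ast_builder.cpp"

// Caveat: file-scope `static` symbols from the grouped files now share
// one translation unit. Two files each defining, say,
//   static int cache_size = 16;
// will collide with a clear redefinition error, which is easy to fix.
// Same-named static helper functions with different signatures can fail
// in subtler ways, e.g. overload resolution silently picking the wrong one.
```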
A source file that is only 100 LOC, or a few thousand at most, typically pulls in 100k to 1M LOC through included headers, which explains those figures.
Other things to explore are obviously precompiled headers and parallel builds. With PCH the rules change significantly: the fight to avoid including things and to stick to forward declarations can be dropped.
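As a concrete starting point, a minimal PCH setup for GCC might look like the following (Clang is similar; MSVC uses /Yc and /Yu instead). The file name pch.h is just a convention:

```cpp
// pch.h -- collect the expensive, rarely-changing headers in one place.
#include <string>
#include <vector>
#include <map>
#include <algorithm>

// Precompile it once (the flags must match the later compilations):
//   g++ -x c++-header -O2 pch.h -o pch.h.gch
// Any source that starts with `#include "pch.h"` then picks up the
// .gch automatically. Combine with a parallel build, e.g.
//   make -j"$(nproc)"
```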
I doubt the #defines themselves would cause any measurable impact.
Certainly, a lot depends on which compiler you use.