Hello.
This is my first post here, so hopefully it won't be off topic or contain other issues.
I am certain most if not all of you already know what I'll say in the first lines, but I think it is better to (try to) define the context.
When writing programs, especially in compiled languages, one often has to apply sets of rules to sets of files, producing new files to which, in turn, further sets of rules may be applied.
Probably the best-known example is the generation of an executable binary from C code: usually, ".c" "text" files are compiled into ".o" object files, which are then linked together into a binary. But one could also mention the gettext translation system, where source code is searched for patterns, which are extracted to generate a catalog (a ".pot" file); translators copy it for the language they want to support and start translating (producing a ".po" file). When the translation is done, it is compiled into a final ".mo" file, which is used by the application.
So, no, it's not limited to binary programs (and there are other uses, too, for example code generators like flex, yacc, coco/R...).
This has to be done by two kinds of people: those who create the program (I will call them "devs") and those who package or install it (I will call them "users").
Two different usages, and different needs, too.
The "users" just want the final binary, without being annoyed by too many steps. They most often just do complete builds, and may even delete the source code once they have the binary, since they no longer need it.
Fast build times are merely a convenience here, while *not* having to tinker is an important feature.
The "devs", on the other hand, need to repeatedly rebuild the code in order to hunt bugs, try fixes, or implement new features.
Build times here are critical, and usually being able to tinker easily with the build system is also quite important.
Since speed matters, projects are usually not rebuilt from scratch; instead, tools try to rebuild only the files that were changed, or that depend on changed files.
Now that there is some context, let me name some players in the area, the ones I have had to work with (or sometimes, against):
- home-made build scripts. Good enough for very small projects and code snippets; only mentioned because it's possible... I do that often, tbh, for my snippets and other junk code.
- make (Makefiles): the old one. One name, multiple implementations, each one different. It has implicit rules that may match your needs (but often won't, while still getting in the way when you ask it to be verbose; they also depend on the implementation). The common feature set cannot easily handle multiple outputs from a single rule. Makefiles are sometimes written by hand, but most often people use generators to work around those problems (and others). "Devs" can't easily ask make to put intermediate files into specific directories, which pollutes the source directories. The same goes for having multiple configurations (debug build, release build, static analysis, etc.). Diagnostics are poor, notably because if you ask for verbosity, make will go through all the implicit rules, bloating the log.
- GNU autotools... another old one, previously the de-facto standard for C and C++ on unix-like systems, I believe; it generates Makefiles. What to say... it's easy for "users" (as long as everything works flawlessly, of course), but every time I tried to contribute to a project using that thing and had to change the build system, I gave up. It's complex, bloated, and has a really bad reputation on Windows, which is often a target. I'm not sure whether it can handle multiple configurations easily; I *really* try to avoid messing with projects that use it... I regularly got missing commands in the middle of the build, and good luck finding which one, since the diagnostics are really poor, being a combination of make and shell tools.
- CMake. Again, a generator. One of its strengths is that it is easy to grasp the basics when hacking on someone else's code. Other strengths, in no particular order: it is *not* limited to generating Makefiles, it has good diagnostics, and it is damn easy to have multiple build configurations. Still, its users tend to try to reinvent pkg-config on unix-like systems, which *SUCKS*. It is often considered the current de-facto standard for C and C++. Oh, I forgot: even if it's better than autotools, the syntax is still ugly, with a lot of syntactic sugar.
- scons... I have not really tried it, just used it as a user. It claims to be a make replacement, is built in Python, and its configuration is also written in Python, which is why I don't even want to try it until I'm paid to: for one, I don't like languages that force a particular code style; and second, while other tools use (or strongly favor) a declarative, non-Turing-complete syntax, which is relatively easy to hack, this one encourages adding complexity to what should be simple. I quote them: "Configuration files are Python scripts--use the power of a real programming language to solve build problems." Stupid. Oh, and it tries to do the whole job, too: it replaces both make *and* autotools/CMake/whatever. Meh.
- meson: another one I haven't tried. It has interesting selling points, despite being written in a language I don't like (but if I don't have to work with it, that's fine). Honestly, I think it may be a good alternative to CMake, except that it seems to only be able to generate build configs for ninja.
- ninja. This one is a make replacement, the one I am slowly but surely starting to love. Why? Because unlike make, it has very clear and concise documentation. It has no default rules that will bite you. It does one thing, but does it correctly. Unlike make, it supports two types of outputs (each of which can consist of multiple files) and three types of dependencies, including a feature to automatically discover the headers a file depends on (well, actually, it understands the compiler's output). Oh, and it has *versioning*, too. Its approach still has some "problems", of course: you must specify files one by one (unlike make, which has globbing) and you must write all rules yourself. Now, this is intended, as they say: "You should generate your ninja files using another program." That's probably true for big projects, really. But for projects with only a handful of files, it's actually pretty nice to write them by hand. TBH, I even think it would be possible to write a build.ninja file that extracts the list of files to compile from git (or another (D)VCS) to generate target-specific build configurations.
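For illustration, here is roughly what a tiny hand-written build.ninja could look like (file names, paths, and flags are made up for the example); the `depfile`/`deps` lines are the automatic header-dependency discovery based on the compiler's output:

```ninja
# Hypothetical two-file C project, built with gcc.
cflags = -Wall -O2

rule cc
  command = gcc -MD -MF $out.d $cflags -c $in -o $out
  depfile = $out.d
  deps = gcc
  description = CC $out

rule link
  command = gcc $in -o $out
  description = LINK $out

# Files are listed one by one, as noted above; no globbing.
build obj/main.o: cc src/main.c
build obj/util.o: cc src/util.c
build hello: link obj/main.o obj/util.o

default hello
```

Note how intermediate files can trivially go into their own directory (obj/ here), one of the things I complained about with make.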
And you, what do you use, and what do you think about the current state?