The role of distributions &/or Unix flavors, where does package management stand - Psychology, Philosophy, and Licenses
jkl: I'd say 2 signs, combined:

when a package.lock has more than 10 entries and the author says "minimal dependencies"
when it is common practice for a library to pull in dependencies on yet another library.

It is not necessarily a bad thing, as long as you do not end up with string-padding libraries. Tools are not good or bad on their own, after all.
One tip for doing static linking :

It is terribly hard to configure, so instead of configuring it, it is possible to /not/ build the shared libraries at all, and only expose the libsomething.a to the compiler, which will pick it up.

The compiler flags are exactly the same! The linker just picks whatever is available to fulfill -lsomething among what is there. And given that ./configure and the other autotools work by launching cc and checking the outcome, rather than stat-ing the files themselves, the libsomething.a will survive through all the piping of the autotools and the craziest of makefiles.

Yes, you have to configure this from the library's package rather than through the program's package, but it works _nicely_!

$ ldd $(which curl)
        (0x00007ffd9b5f0000)
        => /lib/x86_64-linux-gnu/ (0x00007f5fc033b000)
        => /lib/x86_64-linux-gnu/ (0x00007f5fc031a000)
        => /lib/x86_64-linux-gnu/ (0x00007f5fc0159000)
        /lib64/ (0x00007f5fc0694000)

$ ldd $(which gpg2)
        (0x00007fff5f738000)
        => /lib/x86_64-linux-gnu/ (0x00007f2baafad000)
        /lib64/ (0x00007f2bab4d8000)

Yes, these are not "true static binaries", but look at the pile of dependencies needed to compile gpg2, and look-ma-no-configure-flags, it still has all of those libraries statically linked!

Next step is to rm Hmm, I'll wait a bit if you do not mind. ;)

[edit] BTW, the ./configure flags in autotool style are --enable-shared=no --enable-static=yes [/edit]
(08-04-2020, 04:41 PM)josuah Wrote: BTW, the ./configure flags in autotool style are --enable-shared=no --enable-static=yes

Just because the flag is there does not mean it works. I used that to build around ~50 pieces of software, and over half of them ended up being dynamically linked.
@z3bra: yes, the state of static linking is not so bright, and from what you say, you have the background to say that.

But these flags are really meant for the library's package, not for the binary program's package:

If no .so ever gets built, and find / -name '*.so' -o -name '*.so.*' | wc -l reports 0, then any binary that comes out will not be a dynamic one.
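That invariant is easy to check mechanically. A small sketch, where the prefix is a throwaway temporary directory standing in for a static-only install root:

```shell
# Count shared objects under a prefix; if zero, the linker can only
# ever pick .a archives from there. An empty temporary directory
# stands in for a real static-only install prefix here.
prefix=$(mktemp -d)
count=$(find "$prefix" \( -name '*.so' -o -name '*.so.*' \) | wc -l)
if [ "$count" -eq 0 ]; then
    echo "no shared libraries under $prefix"
fi
```

Run this against your actual install prefix before building the program, rather than against / as in the one-liner above, to keep the scan fast.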
(08-04-2020, 03:43 PM)jkl Wrote: Define “dependency-heavy”?

Another definition: when it takes more than 500 MB of memory to compile one binary:

josuah Wrote: fatal error: runtime: out of memory

But this is normal: this is matterbridge, which makes the greatest effort to support as many protocols as possible, so *obviously* it takes on a lot of dependencies.

I really like running out of memory on a single step: a hard limit due to the algorithm and not to the load.

It "reminds" me of an era I did not live through, but got a sense of, where resources were limited and you had to work around them (P.S.: heh, no, I don't do low-level video graphics, and I am merely as dumb as when I first joined this forum).
i use slackware, which has no out-of-the-box package management. several platforms exist - slackbuilds, most notably - but there is no dependency resolution amongst the different platforms. so, i absolutely think twice about building anything big unless i feel like sitting around for a few hours and taking care of the whole dependency tree.
(08-04-2020, 03:43 PM)jkl Wrote: Define “dependency-heavy”?

I think he meant something like "uses code from external sources (other than what is provided by the language's standard library or libraries)".

As opposed to "batteries included".

If I understand correctly, updating a statically-built program to incorporate changes in any of its dependencies still requires a rebuild. Otherwise it will still use the old code. With dynamic linking the updates can be transparent (unless the major version changes, usually because the API does), and the using program doesn't always need to be recompiled.

It does put the burden on the static program's maintainer to keep track of changes in the dependencies, but personally I prefer getting the opportunity to test their effects on my code rather than having them deployed "behind my back".

(Edit: Somehow I had missed page 3 of this thread when I wrote this reply. Oh well.)

(Edit2: actually the maintainer of a dynamically-linked program ought to also be tracking changes in the libraries it depends on and testing that they don't adversely impact his program. In my time as a FreeBSD port maintainer I learned that the ports management team only checks that an upgrade of a library doesn't break the build of the programs which depend on it; as far as I know they don't do even the most rudimentary "smoke" test of those programs, let alone more detailed functional or other tests. I expect the same is true for the various package management systems. So the difference between static and dynamic linking is really a wash as far as upgrades go, assuming conscientious maintainers in both cases. Which might be a BIG assumption.)
