Why is modular code so hard to implement? - Programming On Unix
I wrote a YouTube downloader recently and made an observation: while it is rather easy to load libraries dynamically at runtime, writing statically linked software that makes it easy to add new “libraries” (classes) is rather complicated. There seems to be no obvious way in most programming languages to automatically include every file in a certain directory in the build.
My original plan was to simulate the default behavior of Go: adding a new site to the list of supported sites would only require adding one file, and the compiler would pick it up automatically. In Rust (yes, I am a software masochist, why do you ask?), this won’t work: I still need to declare the new module in another file. (Arguably, this is already a relatively easy approach, but it took me a while to figure it out.) So I wonder why most languages don’t have a “plug-in” compilation model like Go’s that wouldn’t require touching additional files. Is this really such an unusual use case?
-- <mort> choosing a terrible license just to be spiteful towards others is possibly the most tux0r thing I've ever seen
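Roughly what I ended up with looks like the sketch below. The `Site` trait and the names are placeholders, not my actual downloader code: every extractor lives in its own file, but `sites/mod.rs` still has to be told about it twice, once as a `mod` item and once in the registration list.

```rust
// sites/mod.rs -- the one file that still has to know about every extractor.
mod youtube;
// mod vimeo;   // <- adding a site means adding a new file *plus* this line...

/// Hypothetical common interface every supported site implements.
pub trait Site {
    fn name(&self) -> &'static str;
    fn can_handle(&self, url: &str) -> bool;
}

/// Central registration list; the compiler will not fill this in for us.
pub fn all_sites() -> Vec<Box<dyn Site>> {
    vec![
        Box::new(youtube::Youtube),
        // Box::new(vimeo::Vimeo),  // ...plus this one. Nothing picks it up for you.
    ]
}

// sites/youtube.rs -- one self-contained extractor per file.
use super::Site;

pub struct Youtube;

impl Site for Youtube {
    fn name(&self) -> &'static str { "youtube" }
    fn can_handle(&self, url: &str) -> bool { url.contains("youtube.com") }
}
```

So the per-site code is nicely isolated, but the central list never goes away.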
It's sometimes called the microkernel architecture: it relates to how easily you can add modules, and to whether the compiler/preprocessor will automatically pick them up when they are added.
I think it comes down to the phases of the compiler/interpreter pipeline and to early design decisions in the language. It's still easier to do this in Rust than in many other languages, as you mention. In the Java world it is popular to do it through dependency injection (DI), and as I'm discovering now, there are a bunch of languages built precisely around the concept. DI isn't strictly necessary for automatic plug-in discovery, but it's one of the few approaches I can think of that makes it automatic.
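For what it's worth, the closest thing to automatic discovery I've seen in Rust is link-time registration rather than DI, e.g. the `inventory` crate. A rough sketch, with made-up names and assuming I remember the crate's API correctly:

```rust
// Cargo.toml: inventory = "0.3"
// Sketch of registration-based discovery; SiteEntry and its fields are invented here.

pub struct SiteEntry {
    pub name: &'static str,
    pub host: &'static str,
}

// Declare a registry that any file in the crate can submit entries to.
inventory::collect!(SiteEntry);

// In each extractor's own file -- no central list to edit:
inventory::submit! {
    SiteEntry { name: "youtube", host: "youtube.com" }
}

fn main() {
    // Iterate over everything that was submitted anywhere in the crate.
    for site in inventory::iter::<SiteEntry> {
        println!("supported: {} ({})", site.name, site.host);
    }
}
```

Each extractor file submits its own entry, so the central list disappears; the trade-off is that this leans on linker tricks instead of a plain language feature.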