Global AUR equivalent - GNU/Linux
Steph
(06-12-2018, 09:12 PM)eduarch42 Wrote: and kind of "chain" them in order to maintain all of them updated.

This sounds nice and all, but it flies in the face of the way packages work. The linuxverse is crazy in that so many distros ship with so many different versions of packages and different packages that depend on other packages.
Did I say packages enough times? It's a mess.
jkl
If the Linuxers had defined one standard package management system, they could have had it so easy...

Dumb kids.
Dworin
(06-12-2018, 09:12 PM)eduarch42 Wrote: Example: I submit my own window manager to AUR, then a script detects there is a new package in the AUR, and creates a new package for another package manager.
I think this solution would be easier to maintain and more approachable, because it doesn't involve creating another package manager, only "mirror" repositories of the AUR.
What do you guys think?

You can probably make some helper scripts, but unsupervised uploading of untested 'translated' packages to repos doesn't seem like a good idea. You'd still need to build and test on each target distro.

How about a kind of clean chroot system that lets you choose which distro is used for building a package?
z3bra
(06-12-2018, 11:56 PM)Dworin Wrote: How about a kind of clean chroot system that lets you choose which distro is used for building a package?

I had such an idea in mind for automated package builds.
Start with an empty directory, and install only the dependencies. This way you can ensure that you got all the dependencies right, and that your software is gonna work correctly.
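A rough shell sketch of that clean-room build idea, using throwaway containers so only the declared dependencies exist in the build root. This assumes Docker is available; the image names and the `make` invocation are illustrative, not part of any real tool:

```shell
#!/bin/sh
# Sketch: build a source tree inside a throwaway container for a chosen
# distro, so the build sees only that distro's base system plus whatever
# dependencies the recipe installs.

# Print the command that would run the build for a given distro image.
build_cmd() {
    distro=$1   # e.g. archlinux, debian, alpine (illustrative image names)
    srcdir=$2   # path to the source tree on the host
    printf 'docker run --rm -v %s:/src -w /src %s sh -c "make && make install"\n' \
        "$srcdir" "$distro"
}

# Show what would be run for an Arch-based build of the current directory.
build_cmd archlinux "$PWD"
```

Running the same source tree through several such images would catch missing-dependency bugs per distro without polluting the host.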

Now to add on top of the cross-distro idea, the best solution would be, IMO, to go with static linkage.
This would make your binaries work on any host with the same architecture, no matter the libc or libraries installed.
This is also faster to run, and definitely not bigger than, say, snap or Flatpak "apps", which ship the full runtime dependencies along with each application.

For the repository, I prefer going with how crux handles it, rather than arch: maintainers host the repositories themselves, instead of pushing to a central repo.
This has the advantage of giving more responsibility to maintainers and preventing orphaned packages. Web hosting is now cheap, and you could even use online "hubs" like http://repo.or.cz to host your port tree.
z3bra
So is this idea dead already?
Steph
(12-12-2018, 06:38 AM)z3bra Wrote: So is this idea dead already?

I feel like you summed it up well when you described the difficulties of creating such a system.

If such a thing was feasible I would bet it would have already been implemented.
z3bra
(12-12-2018, 11:35 AM)Steph Wrote:
(12-12-2018, 06:38 AM)z3bra Wrote: So is this idea dead already?

I feel like you summed it up well when you described the difficulties of creating such a system.
If such a thing was feasible I would bet it would have already been implemented.

Such a thing already exists, I just didn't want to spoil the idea with pre-made options, to see what we would come up with.
If the idea dies with the first difficulty though, we won't go far indeed :)

Distributed user repositories:
  • https://crux.nu/portdb
  • ???? - some linux ports that can be built on any distro, POSIX shell only (can't remember the name!)

Package manager wrappers / Cross-platform
eduarch42
(12-12-2018, 06:38 AM)z3bra Wrote: So is this idea dead already?
It's been a long time, and I took the time to think about how it could be implemented.

What about a package manager with no repositories or packages of its own, following a scheme like Uber's (the largest taxi company without owning any cars)? It wouldn't host packages because all of them would live in their respective GitHub/GitLab repositories.

Let's say I want to install a package; I'd just run "palo install venam/2bwm". The package manager would then look up the repository on GitHub/GitLab and clone it. After that it would look for a Makefile in the repository and run it.

I know the idea isn't very concrete, but at least it avoids the need to add packages to a repository, and it could compete with package managers that only carry a limited number of packages.

What do you guys think?

Notes:
- (palo is just a name I came up with for the package manager; new ideas are accepted)
- (Instead of using a Makefile, we could create a global makepkg or makefile exclusively for the package manager)
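The flow described above could be sketched in a few lines of shell. Everything here is hypothetical ("palo" is the proposed name, the GitHub-first lookup is an assumption, and a real tool would also need a GitLab fallback and error handling):

```shell
#!/bin/sh
# Sketch of the "palo" idea: no central repo, packages live in their
# own forge repositories, identified as "user/repo".

# Map a "user/repo" spec to a clone URL. Trying GitLab as a fallback
# would need a network check, omitted here.
resolve_url() {
    spec=$1
    printf 'https://github.com/%s\n' "$spec"
}

# Clone the repository and run its Makefile, as the post describes.
palo_install() {
    spec=$1
    dest="/tmp/palo/${spec#*/}"
    git clone "$(resolve_url "$spec")" "$dest" &&
        ( cd "$dest" && make && make install )
}

# Usage: palo_install venam/2bwm
```

This keeps the tool repo-less, at the cost of trusting every upstream Makefile to behave.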
Dworin
I just stumbled on the following: http://www.rastersoft.com/programas/multipackager.html . It can be added to z3bra's list, I guess. It works by automating the creation of VMs with environments for a number of distros. I haven't actually tried it or checked it out.
z3bra
@eduarch42: Even though I like this idea, it has two main flaws:

First, not all software relies on Git as its VCS, and not all Git-based projects are hosted on GitHub/GitLab. If you want to build a drop-in solution, it should adapt to every setup out there.
A solution to that would be to use some kind of "driver" system, based on the protocol used to fetch a project: http://, ftp://, git://, gopher://, ...
It would have some corner cases, but could be more generic!
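That driver idea could be as simple as a dispatch on the URL scheme. A minimal sketch, where the chosen fetch commands are illustrative defaults:

```shell
#!/bin/sh
# Sketch: pick a fetch "driver" based on the protocol in the URL,
# printing the command a real tool would run.

fetch_driver() {
    url=$1
    case $url in
        git://*|*.git)      echo "git clone $url" ;;
        http://*|https://*) echo "curl -LO $url" ;;
        ftp://*)            echo "curl -O $url" ;;
        gopher://*)         echo "curl -o - $url" ;;
        *) echo "no driver for: $url" >&2; return 1 ;;
    esac
}

fetch_driver git://git.2f30.org/mkbuild
fetch_driver https://example.org/pkg.tar.gz
```

New protocols become one more `case` branch, which is what makes the driver approach extensible.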

Second problem (which is huge IMO): there is no standard build system, and even among the existing build systems, they don't all compile the same way.
From my experience, you cannot assume all makefiles have an "install" target, for example. It means that you'll have to patch the makefile at some point, and for that you need a way to keep track of your patches, and some kind of recipe to apply them.
The idea you propose (makepkg or a global makefile) is how all package managers already work. They have an internal build recipe to create packages that the package manager can install/track.

Some other implementations try to completely "replace" any makefile by providing a generic drop-in build system: http://git.2f30.org/mkbuild/log.html
I love the idea but this looks too complex for me...

I went with a different approach myself, and decided to keep the compilation process out of the package manager. You compile the software manually, install it into a chroot, and generate a tarball out of that chroot. This is your package, and the package manager can install it!
It leaves the user free to use or create any automated build system they want, and makes use of an easy-to-understand and portable package format: a tarball.
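The whole lifecycle of such a tarball package fits in a handful of commands. A sketch with illustrative paths (a real tool would also record the file list for uninstalling), where the staged binary stands in for the output of `make DESTDIR=$stage install`:

```shell
#!/bin/sh
# Sketch: stage an install into a clean root, archive it as the
# package, then "install" by extracting onto the target root.
set -e
stage=$(mktemp -d)   # clean staging root (the chroot in the post)
root=$(mktemp -d)    # stand-in for / on the target system

# Pretend "make DESTDIR=$stage install" put a binary here:
mkdir -p "$stage/usr/bin"
printf '#!/bin/sh\necho hi\n' > "$stage/usr/bin/hello"
chmod +x "$stage/usr/bin/hello"

# Creating the package is just archiving the staged tree.
tar -C "$stage" -czf pkg.tar.gz .

# Installing the package is extracting the tarball onto the root.
tar -C "$root" -xzf pkg.tar.gz
"$root/usr/bin/hello"
```

Since the format is plain tar, any standard tool can inspect, ship, or unpack a package, which is the portability argument above.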



