venam
Administrators
Hello fellow nixers,
In this thread, let's discuss how we share files and media with others.

This follows up on last week's newsletter entry, "Dropbox is dropping support", in issue 89. I thought I'd open the topic of how everyone handles file sharing, especially media files. It's not uncommon to find ourselves having to sync, or at least hand over, a file that resides on our machine to someone else. So what do you do in that scenario?

From Dropbox, to rsync and FTP, to Samba shares, to a NAS (network attached storage), to putting files on your HTTP server's index and displaying them prettily with h5ai, to a centralized home media center on a Raspberry Pi (Kodi, XBian, or others), to any other media center solution, to a mini uploader like paste.xinu.at, to sharing physical media like USB drives and DVDs, to your own solution.

In my case, I use a simple uploader/paster for quick small files. When it comes to big files I use a physical medium, usually a hard disk or USB drive, as my internet speed can't handle sending enormous files. The disadvantage is that I can't share files with people who aren't next to me.

What machine or device do you use to share media, what is your solution, how do you handle this, and what are the downsides of your approach?
jkl
Long time nixers
I usually use ShareX on Windows, but of course that's not the ultimate solution for everyone. Generally, simple uploaders are awesome unless the data is sensitive. Framapic is sadly pausing; I loved it, but I'm too lazy to set up my own ...

--
<mort> choosing a terrible license just to be spiteful towards others is possibly the most tux0r thing I've ever seen
venam
Administrators
(25-08-2018, 12:13 PM)jkl Wrote: Framapic is sadly pausing
Nice, it's from the Framasoft team. Maybe someone here can set an instance up.
acg
Members
My classmates use Google Drive, so that's what I used with them, and with Dropbox's news I've been looking for other solutions. The first that comes to mind is ownCloud, which I've set up before on a VPS but never really tried as my main hosting/sharing platform.
venam
Administrators
(25-08-2018, 10:37 PM)acg Wrote: with Dropbox's news I've been looking for other solutions
I'm also using Dropbox to share ASCII art files with the group I'm part of. I think I'll keep using it, as I'm on a non-encrypted ext4 filesystem and support for that isn't being dropped. I assume you're using an encrypted partition or some other type of filesystem.
oda
Members
When I need to share small files I use 0x0.st. To that end, I have a short function in my zshrc so I can use it easily:
Code:
0x0() { curl -F"file=@$1" https://0x0.st; }
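For example (the returned URL is illustrative, not a real upload):
Code:
$ 0x0 screenshot.png
https://0x0.st/abcd.png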

For larger files / more complicated permissions / etc, I use Seafile on my own server.
pkal
Long time nixers
I'm surprised that nobody has mentioned Syncthing yet. I use it to synchronize text documents between my devices automatically, without any external server. It works just as well whether all devices are on the same network or I'm away from home. As soon as I tidy up my pdf and image collections, I'll add them too, albeit without version control.

The major downsides would be:
  • It requires at least two nodes to be active to synchronize (this wasn't much of an issue for me, since I have one node that's always on)
  • It's still under development, so bugs and changes pop up from time to time
  • The Android client is very disappointing
  • One has to put some thought into setting the network up to avoid dead or duplicate nodes
Dworin
Members
I use Dropbox for some stuff, Google Drive for others, and send files over Telegram or WhatsApp. It depends on who I'm working with.

Syncthing looks interesting.
prx*
Members
+1 for syncthing.

For family or rookies, I use framadrop.org or https://transfer.sh/

Code:
transfer.sh() {
  if [ $# -eq 0 ]; then
    echo -e "No arguments specified. Usage:\ntransfer.sh /tmp/test.md\ncat /tmp/test.md | transfer.sh test.md"
    return 1;
  fi
  tmpfile=$( mktemp -t transferXXXXXXX );
  if tty -s; then
    # a file was given as argument: sanitize its name for the URL
    basefile=$(basename "$1" | sed -e 's/[^a-zA-Z0-9._-]/-/g')
    curl --progress-bar --upload-file "$1" "https://transfer.sh/$basefile" >> "$tmpfile"
  else
    # data is piped on stdin: $1 is the remote file name
    curl --progress-bar --upload-file "-" "https://transfer.sh/$1" >> "$tmpfile"
  fi
  cat "$tmpfile"; rm -f "$tmpfile"
  echo ""
}
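Usage looks like this (the service prints back an ephemeral download URL):
Code:
$ transfer.sh notes.md
$ tar cz project/ | transfer.sh project.tar.gz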
z3bra
Grey Hair Nixers
(15-12-2018, 03:26 PM)thuban Wrote: +1 for syncthing.

For family or rookies, I use framadrop.org or https://transfer.sh/

I didn't know transfer.sh finally managed to stay up. That's good news!
Sharing files has always been a puzzle to me... There are so many ways to do it, yet there is no "easy" solution that works for all use cases!

I use http://p.iotek.org for ephemeral shares (screenshots, pieces of code, tiny tarballs, ...). For anything long-term that's public, I tend to put it on my website, though it's far from ideal.
When it comes to data synchronisation between servers, or for my own use, rsync is fine (especially because I have it on my phone as well!). I wrote a wrapper to sync files between multiple nodes.
Syncthing is nice, but it has trouble keeping up with lots of data. It also has bugs, so I'd rather not risk corrupting my data with it. I also find it too complex to use for the simple thing it is supposed to do.

For sharing data "privately" (e.g. family pictures or whatever), I have set up an FTP server. It is definitely not ideal though, for multiple reasons.
The first being that it's not encrypted (though I'm working on that; I just need to sort my shit out when it comes to certificates).
Second, ordinary people can't use a computer, and have trouble connecting to an FTP server without assistance.
Third, FTP is not good for "online" viewing, for example going through pics of your latest travel with your family.
Finally, my server currently runs Alpine, which lacks some PAM modules needed to correctly implement virtual users with vsftpd. That is only contextual though, and I plan to wipe this install clean and run an OpenBSD node instead.

It kills me that, so close to 2019, it is still so complex to have someone share a bunch of big files with you...
atbd
Members
I recently found a pretty useful service with a CLI tool: http://push.tf
I made a Gentoo ebuild (https://github.com/spnngl/gentoo/blob/ma...999.ebuild) if anyone is interested.
z3bra
Grey Hair Nixers
(18-12-2018, 06:41 AM)atbd Wrote: I recently found a pretty useful service with a CLI tool: http://push.tf
I made a Gentoo ebuild (https://github.com/spnngl/gentoo/blob/ma...999.ebuild) if anyone is interested.

The following script will cover 90% of the use cases (note that the API may change):
Code:
# ask the service for a fresh "id:token" pair
IDTOK=$(curl -s http://push.tf/id)
# upload $1 (or stdin if no argument is given) under that id
curl -F "id=${IDTOK%%:*}" -F "token=${IDTOK##*:}" -F "filename=$(basename ${1:-stdin})" -F "upload_file=@${1:--}" http://u.push.tf
# print the resulting download URL
printf 'http://push.tf/%s\n' "${IDTOK%%:*}"

I'm a passionate C advocate, but you should know where to draw the line between wrapping a tool and writing a new one :)
It looks over-engineered to me, but that's only my opinion!

Some things bother me with this service though...
Quote:[...]
But there are many cases where push.tf will be the fastest and easiest solution to share or backup files.
[...]
We don't sell you the classical speech that our service is secure because we use encryption (we encrypt nothing actually). Recent security holes have proven that no one should rely on this argument. By encrypting your data before sending, you can be sure that no one will be able to read them.
[...]
you can set the expiration time in hours (max 336 hours)

http://push.tf
http://u.push.tf
Code:
$ curl -I https://push.tf
curl: (7) Failed to connect to push.tf port 443: Connection refused

When you advocate that your service is suitable for backups or "anonymous" transfers, you can't get away with not providing a fully encrypted version of it.
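Their suggested workaround is at least easy to apply client-side, e.g. with gpg in symmetric mode (the file name is an example):
Code:
$ gpg -c backup.tar                    # prompts for a passphrase, writes backup.tar.gpg
$ gpg -d backup.tar.gpg > backup.tar   # decrypt on the other side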
prx*
Members
Interesting.
Instead of syncthing or external services, I'm using my own self-hosted server more and more.

Via sftp (or scp), I copy my files to the server, into a directory that is auto-indexed by httpd. It gives this: https://yeuxdelibad.net/DL/
If necessary, an htpasswd file is enough to password-protect the access.
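For the curious, the auto-index part is only a few lines, assuming OpenBSD's httpd(8) (server name and paths are examples):
Code:
server "example.com" {
	listen on * port 80
	root "/htdocs"
	location "/DL/*" {
		directory auto index
		# authenticate "Downloads" with "/htpasswd"   # optional password protection
	}
}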
z3bra
Grey Hair Nixers
sftp is nice, but it requires giving ssh access to the people you want to be able to retrieve files. Even if chrooted, it still bothers me a bit.
I do like the directory listing approach (and that is actually how I do it today). This, however, doesn't allow pushing any data. I wish there was a simple way for browsers to issue PUT requests without having to implement a script server-side... Like, you right-click on a page, choose "send files to link location...", then select some files in the explorer and click OK. Browsers have been there for years, HTTP is the main protocol for network communication, yet we can still barely use it!
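Outside the browser it's already a one-liner, assuming the server is configured to accept PUT (URL is an example):
Code:
$ curl -T photos.tar.gz https://example.com/incoming/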
prx*
Members
To retrieve files, people use the http listing (auto-index) mentioned above. sftp is just for me to put files there, and most of my servers have ssh access anyway.
sebsauvage has a list of sharing services: https://sebsauvage.net/wiki/doku.php?id=...e_fichiers
atbd
Members
(18-12-2018, 08:34 AM)z3bra Wrote: When you advocate that your service is suitable for backups or "anonymous" transfers, you can't get away with not providing a fully encrypted version of it.

You're right; that's why I only use it for non-sensitive data and quick transfers with colleagues. Old habits die hard.
prx*
Members
To let friends send me files, I hosted Jirafeau for a while, which is quite simple: https://jirafeau.net/
https://gitlab.com/mojo42/Jirafeau
venam
Administrators
The topic of synchronizing/replicating/sharing files between machines is back in vogue. There's a lot of discussion on the web about it.

OpenBSD is doing a rewrite of rsync, OpenRsync.

Other discussions are about alternatives to Puppet, Ansible, and company, with tools such as rdist:
http://johan.huldtgren.com/posts/2019/rdist
https://chargen.one/obsdams/rdist-1-when...s-too-much
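For anyone who's never looked at rdist(1), the whole configuration is a Distfile along these lines (hosts and paths are examples):
Code:
HOSTS = ( web1 web2 )
FILES = ( /etc/nginx /var/www )
${FILES} -> ${HOSTS}
	install ;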

What's rocking your way? Anything to add to the conversation?
jkl
Long time nixers
I find the Openrsync project funny. The issue tracker is filled with Linux users who complain that their OS is too insecure to make a port viable.

--
<mort> choosing a terrible license just to be spiteful towards others is possibly the most tux0r thing I've ever seen
z3bra
Grey Hair Nixers
Synchronizing files is an interesting topic, as there are so many ways to do it, and even more use cases!

I think there are three types of tools to sync files between multiple hosts: one-way, two-way, or hybrid.

One-way sync is when a single host pushes to all others at the same time, and no change happens while the transfer is running (in theory). This assumes that you only modify one host at a time, and that you always have the latest changes when you edit files on any host.
Tools in this vein are rsync(1), rdist(1), ftp(1) or even git(1). They work in a push/pull manner, and you have no way to ensure that what you have is the latest version, because there is no "synchronisation" state between your hosts. You either push (force the latest change to be what you push), or pull (assume your remote host has the latest version).
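A concrete illustration of push vs pull with rsync (host and paths are examples):
Code:
rsync -a ~/docs/ remote:docs/    # push: force the remote to match your local copy
rsync -a remote:docs/ ~/docs/    # pull: assume the remote has the latest version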

Two-way sync works in real time. Whenever a change happens, it gets pushed to all other hosts. If two hosts are modified in parallel, their mutual changes have to be merged; if the same part of a file changed on both, you get a conflict. That is the price of two-way sync.
You get a more reliable synchronization, but it is easier to corrupt your data. Two-way sync tools are also forced to act as daemons, always watching your files so they can push changes in real time. Example tools are unison, syncthing and dropbox.

Finally, here comes the shameless plug: synk! (README)
It is what I find the best of both worlds. It is a one-way sync, but it first tries to find the most recent copy using the mtime of your files. This requires good time synchronization between your hosts.
Note that this is only a draft, and there are design problems, like concurrency issues if you fire it on multiple hosts at the same time. But it does the job quite well!
It is not a daemon, so you fire it up when you see fit: manually, with cron, entr, wendy, fswatch, inotifywatch, ... whatever. When started, it connects to all hosts, fetches all mtimes locally, and then spawns rsync(1) processes from the host with the highest mtime to push the file to all other hosts!
I would like to use bittorrent internally instead of the rsync algorithm, to be even faster, but that is another topic ;)
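In shell terms, the core idea is roughly this (not synk's actual code; hosts and path are examples):
Code:
hosts="alpha beta gamma"
file="$HOME/notes.txt"

# find the host holding the most recent copy
newest=""; newest_mtime=0
for h in $hosts; do
	m=$(ssh "$h" stat -c %Y "$file")   # GNU stat; 'stat -f %m' on BSD
	if [ "$m" -gt "$newest_mtime" ]; then
		newest_mtime=$m; newest=$h
	fi
done

# push that copy to everyone else
for h in $hosts; do
	[ "$h" = "$newest" ] || ssh "$newest" rsync -a "$file" "$h:$file"
done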

What do you guys think? Does that fill a need? Do you see many flaws in the design?
wolf
Members
(18-12-2018, 08:34 AM)z3bra Wrote: but you should know where to draw the line between wrapping a tool and writing a new one :)

Wise words. Now I get the point.
venam
Administrators
With remote work, this topic is more important than ever. Have you changed anything in your previous practices and file sharing/syncing setup?
TheAnachron
Members
I'm using syncthing to share stuff between my laptop, phone, and RPi.
For trusted people on my local network, I've set up a Samba share on the RPi.
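A share like that is only a few lines in smb.conf (share name, path, and users are examples):
Code:
[media]
	path = /srv/media
	read only = no
	valid users = alice bob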


For everything else I'm considering using https://min.io/ again; here is a short overview of a simple local file share: https://www.youtube.com/watch?v=dIQsPCHvHoM

It may be overkill for simple use cases, but I've come to like it; it's reliable and fast.
jkl
Long time nixers
Why is it called “min” when it uses Kubernetes?

--
<mort> choosing a terrible license just to be spiteful towards others is possibly the most tux0r thing I've ever seen
TheAnachron
Members
It supports Kubernetes, but it doesn't require it. I, for example, used it on a local partition. You can build very powerful applications on cloud file storage APIs, or just use a simple local bucket. It's quite flexible.
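Running it against a local directory is just a couple of commands (credentials and paths are examples; variable names and flags may differ between MinIO releases):
Code:
export MINIO_ROOT_USER=admin MINIO_ROOT_PASSWORD=changeme
minio server /srv/minio                           # serves an S3 API on :9000
mc alias set local http://localhost:9000 admin changeme
mc mb local/share && mc cp bigfile.tar local/share/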