Sharing Media
---
Hello fellow nixers,
In this thread, let's share how we share files and media with others, in relation to last week's newsletter entry "Dropbox is dropping support" in issue 89. I thought I'd open the topic of how everyone handles file sharing, especially media files. It's not uncommon to find ourselves having to sync, or at least hand over to someone else, a file that resides on our machine. So what do you do in that scenario? From Dropbox, to rsync and FTP, to Samba shares, to a NAS (network-attached storage), to putting files on your HTTP server's index and displaying them prettily with h5ai, to a centralized home media center on a Raspberry Pi such as Kodi or XBian, to any other media center solution, to a mini uploader like paste.xinu.at, to sharing physical media like USB drives and DVDs, to your own solution.

In my case, I use a simple uploader/paster for quick small files. When it comes to big files I use a physical medium, usually a hard disk or USB drive, as my internet connection isn't fast enough to send enormous files. The disadvantage is that I can't share files with people who aren't next to me.

What machine or device do you use to share media? What is your solution, how do you handle it, and what are the downsides?
---
I usually use ShareX on Windows, though of course that's not the ultimate solution for everyone. Generally, simple uploaders are awesome unless the data is sensitive. Framapic is sadly pausing; I loved it, but I'm too lazy to set up my own ...
---
My classmates use Google Drive, so that's what I used with them, and with Dropbox's news I've been looking for other solutions. The first that comes to mind is ownCloud, which I've set up on a VPS before but never really tried as my main hosting/sharing platform.
---
(25-08-2018, 10:37 PM)acg Wrote: with Dropbox's news I've been looking for other solutions

I'm also using Dropbox, to share ASCII art files with the group I'm part of. I think I'll continue using it, as I'm on an unencrypted ext4 filesystem and support for that isn't being dropped. I assume you're using an encrypted partition or some other type of filesystem.
---
When I need to share small files I use 0x0.st. To that end, I have a short function in my zshrc so I can use it easily:

Code: 0x0() { curl -F "file=@$1" https://0x0.st; }

For larger files, more complicated permissions, and so on, I use Seafile on my own server.
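For illustration, a call looks like this (the file name and the returned URL are made up, not a real upload):

Code:
$ 0x0 ~/screenshot.png
https://0x0.st/abcd.png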
---
I'm surprised that nobody has mentioned Syncthing yet. I use it to synchronize text documents between my devices, automatically, without any external server. It works just as well whether all devices are on the same network or I'm away from home. As soon as I tidy up my PDF and image collections, I'll add them too, albeit without version control.
The major downsides would be ...
---
I use Dropbox for some stuff and Google Drive for others, and I send files over Telegram or WhatsApp; it depends on who I'm working with.
Syncthing looks interesting.
---
+1 for Syncthing.
For family or rookies, I use framadrop.org or https://transfer.sh/ with a small shell helper:

Code:
# body reconstructed from transfer.sh's documented curl usage;
# the original snippet was cut off after the first line
transfer.sh() {
	curl --progress-bar --upload-file "$1" "https://transfer.sh/$(basename "$1")"
	echo
}
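Usage then looks like this (the file name and returned link are illustrative):

Code:
$ transfer.sh ./pics.tar.gz
https://transfer.sh/XyZ12/pics.tar.gz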
---
(15-12-2018, 03:26 PM)thuban Wrote: +1 for Syncthing.

I didn't know transfer.sh finally managed to stay up. That's good news!

Sharing files has always been a puzzle to me... There are so many ways to do it, yet there is no "easy" solution that works for all use cases!

I use http://p.iotek.org for ephemeral shares (screenshots, pieces of code, tiny tarballs, ...). For anything long-term that's public, I tend to put it on my website, though that's far from ideal.

When it comes to data synchronisation between servers, or for my own use, rsync is fine (especially because I have it on my phone as well!). I wrote a wrapper around it to sync files between multiple nodes. Syncthing is nice, but it has trouble keeping up with lots of data. It also has bugs, so I'd rather not risk corrupting my data with it. I also find it too complex to use for the simple thing it is supposed to do.

For sharing data "privately" (e.g. family pictures), I have set up an FTP server. It is definitely not ideal though, for multiple reasons. The first is that it's not encrypted (I'm working on that; I just need to sort my shit out when it comes to certificates). The second is that ordinary people can barely use a computer, and have trouble connecting to an FTP server without assistance. The third is that FTP is no good for "online" viewing, for example going through the pictures of your latest trip with your family. Finally, my server currently runs Alpine, which lacks some PAM modules needed to correctly implement virtual users with vsftpd. That is only contextual though, and I plan to wipe this install clean and run an OpenBSD node instead.

It kills me that, so close to 2019, it is still so complex to have someone share a bunch of big files with you...
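Back to the rsync wrapper mentioned above: this is not the actual wrapper, just a sketch of the core of such a thing, with hostnames and paths as placeholders:

Code:
#!/bin/sh
# push one local directory to several nodes with rsync(1)
NODES="alpha.example.org beta.example.org"
for node in $NODES; do
	rsync -az --delete "$HOME/sync/" "$node:sync/"
done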
---
I recently found a pretty useful service with a CLI tool: http://push.tf
I made a Gentoo ebuild (https://github.com/spnngl/gentoo/blob/ma...999.ebuild) if anyone is interested.
---
(18-12-2018, 06:41 AM)atbd Wrote: I recently found a pretty useful service with a CLI tool: http://push.tf

The following snippet covers 90% of the use cases (note that the API may change):

Code: IDTOK=$(curl -s http://push.tf/id)

I'm a passionate C advocate, but you should know where to draw the line between wrapping a tool and writing a new one :) The CLI tool looks over-engineered to me, but that's only my opinion!

Some things bother me with this service though...

Quote: [...] http://push.tf http://u.push.tf

Code: $ curl -I https://push.tf

When you advocate that your service is suitable for backups or "anonymous" transfers, you cannot get away with not providing a fully encrypted version of it.
---
Interesting.
Instead of Syncthing or external services, I use my own self-hosted server more and more. Via sftp (or scp), I copy my files to the server, into a directory "auto-indexed" by httpd. It gives this: https://yeuxdelibad.net/DL/
If necessary, an htpasswd file is enough to password-protect access.
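As a sketch, the whole flow is two commands; the hostname and paths are placeholders, and htpasswd(1) here is assumed to be OpenBSD's:

Code:
# copy a file into the auto-indexed directory served by httpd
scp ./video.mkv user@example.org:/var/www/htdocs/DL/
# create a basic-auth credential file (prompts for a password)
htpasswd /var/www/htdocs/DL/.htpasswd guest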
---
sftp is nice, but it requires giving SSH access to the people you want to be able to retrieve files. Even chrooted, it still bothers me a bit.

I do like the directory listing approach (and that is actually how I do it today). This, however, doesn't allow pushing any data. I wish there was a simple way for browsers to make PUT requests without having to implement a script server-side... Like you right-click on a page, choose "Send files to link location...", then select some files in the explorer and click OK. Browsers have been there for years, HTTP is the main protocol for network communication, but we can still barely use it!
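To show the gap: outside the browser, pushing to a PUT-enabled endpoint (WebDAV, for instance) is a curl one-liner. The URL here is hypothetical:

Code:
# curl -T issues an HTTP PUT with the file as the request body;
# works only if the server accepts PUT on that path
curl -T ./photo.jpg https://example.org/DL/photo.jpg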
---
To retrieve files, people use the HTTP listing (auto-index) mentioned above. sftp is just for me to put files there, and many servers have SSH access anyway.
sebsauvage keeps a list of sharing services: https://sebsauvage.net/wiki/doku.php?id=...e_fichiers
---
(18-12-2018, 08:34 AM)z3bra Wrote: When you advocate that your service is suitable for backups or "anonymous" transfers, you cannot get away with not providing a fully encrypted version of it.

You're right; that's why I only use it for non-sensitive data and quick transfers with colleagues. Habits are hard to break.
---
To let friends send me files, I hosted Jirafeau for a while, which is quite simple: https://jirafeau.net/
https://gitlab.com/mojo42/Jirafeau
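Setup is roughly a clone into the web root, then finishing the install wizard in a browser (the path is a placeholder; a web server with PHP is assumed):

Code:
git clone https://gitlab.com/mojo42/Jirafeau.git /var/www/htdocs/jirafeau
# then point the web server at the directory and open it in a browser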
---
The topic of synchronizing/replicating/sharing files between machines is back in vogue; there's a lot of discussion on the web about it.
OpenBSD is working on a rewrite of rsync, openrsync. Other discussions are about alternatives to puppet, ansible, and company, with tools such as rdist:
http://johan.huldtgren.com/posts/2019/rdist
https://chargen.one/obsdams/rdist-1-when...s-too-much
What's rocking your way? Anything to add to the conversation?
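As a taste of rdist, by the way, its -c mode makes a one-way push a single command (the host and path here are made up):

Code:
# equivalent to a tiny Distfile: push ~/public to the remote host
rdist -c ~/public web1.example.org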
---
I find the openrsync project funny. The issue tracker is filled with Linux users who complain that their OS is too insecure to make a port viable.
---
Synchronizing files is an interesting topic, as there are so many ways to do it, and even more use cases!

I think there are three types of tools to sync files between multiple hosts: one-way, two-way, or hybrid.

One-way sync is when a single host pushes to all the others at the same time, and no change happens while the transfer is running (in theory). This assumes that you only modify one host at a time, and that you always have the latest changes when you edit files on any host. Tools in this vein are rsync(1), rdist(1), ftp(1) or even git(1). They work in a push/pull manner, and you have no way to ensure that what you have is the latest version, because there is no "synchronisation" state between your hosts. You either push (force the latest version to be what you push) or pull (assume your remote hosts have the latest version).

Two-way sync works in real time. Whenever a change happens, it gets pushed to all other hosts. If two hosts are modified in parallel, their mutual changes have to be merged, and if the same part of a file is changed, a conflict happens. That is the price of two-way sync: you get a more reliable synchronization, but it is easier to corrupt your data. Two-way sync tools are also forced to act as daemons, always watching your files so they can push changes in real time. Example tools are unison, syncthing and dropbox.

Finally, here comes the shameless plug: synk! (README) It is what I find the best of both worlds. It is a one-way sync, but it first tries to find the most recent copy using the mtime of your files. This requires good time synchronisation between your hosts. Note that this is only a draft, and there are design problems, like concurrency issues if you fire it on multiple hosts at the same time. But it does the job quite well! It is not a daemon, so you fire it up when you see fit: manually, or with cron, entr, wendy, fswatch, inotifywatch, ... whatever. When started, it connects to all hosts, fetches all the mtimes locally, and then spawns rsync(1) processes from the host with the highest mtime to push the file to all the other hosts. I would like to use bittorrent internally instead of the rsync algorithm, to be even faster, but that is another topic ;)

What do you guys think? Does that fill a need? Do you see many flaws in the design?
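To make the design concrete, here is a toy sketch of the idea, not the real synk: hostnames are placeholders, GNU stat(1) is assumed for the mtime query, and the nodes are assumed to be able to ssh to each other.

Code:
#!/bin/sh
# toy version of the synk idea: find the node holding the newest copy
# of FILE by mtime, then rsync from that node to all the others
FILE="notes.txt"
NODES="alpha beta gamma"   # hypothetical SSH hosts

newest=""
newest_mtime=0
for node in $NODES; do
	# GNU stat prints seconds since the epoch with -c %Y; 0 if absent
	mtime=$(ssh "$node" stat -c %Y "$FILE" 2>/dev/null) || mtime=0
	mtime=${mtime:-0}
	if [ "$mtime" -gt "$newest_mtime" ]; then
		newest_mtime=$mtime
		newest=$node
	fi
done

[ -z "$newest" ] && exit 1

for node in $NODES; do
	[ "$node" = "$newest" ] && continue
	# run rsync on the newest node, pushing straight to the other node
	ssh "$newest" rsync -a "$FILE" "$node:$FILE"
done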
---
With remote working, this topic is more important than ever. Have you changed anything from your previous practices and file-sharing/syncing setup?
---
I'm using Syncthing for sharing stuff between my laptop, phone and RPi.
For trusted people on my local network I've set up a Samba share on the RPi. For everything else I'm considering using https://min.io/ again; here is a short overview of a simple local file share: https://www.youtube.com/watch?v=dIQsPCHvHoM

It may be overkill for simple use, but I've come to like it; it's reliable and fast.
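For the simple local case it really is a one-liner; a minimal sketch, assuming the minio server binary is installed, with the data directory as a placeholder:

Code:
# serve a local directory as an S3-compatible object store;
# minio prints its listen address and credentials on startup
minio server /srv/minio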
---
Why is it called “min” when it uses Kubernetes?
---
It supports Kubernetes; it doesn't require it. I, for example, used it on a local partition. You can create very powerful applications that use cloud file-storage APIs, or just a simple local bucket. It's quite flexible.