I recently found a pretty useful service with a cli tool:
I made a Gentoo ebuild, if anyone is interested
(18-12-2018, 06:41 AM)atbd Wrote: I recently found a pretty useful service with a cli tool:
I made a Gentoo ebuild, if anyone is interested

The following script will cover 90% of the use cases (note that the API may change):
IDTOK=$(curl -s
curl -F id=${IDTOK%%:*} -F token=${IDTOK##*:} -F "filename=$(basename ${1:-stdin})" -F "upload_file=@${1:--}"
printf '\n' ${IDTOK%%:*}
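Since the upload URLs above were elided, here is just the parameter-expansion trick the script relies on, with a made-up id:token value; in the real script IDTOK comes from the first curl call:

```shell
#!/bin/sh
# IDTOK normally comes from the service, e.g. IDTOK=$(curl -s "$SERVICE_URL"),
# where "$SERVICE_URL" is a placeholder for the endpoint elided above.
IDTOK="3fc9a2:8d1e4b"   # made-up "id:token" pair for illustration
id=${IDTOK%%:*}         # strip the longest suffix matching ':*' -> "3fc9a2"
token=${IDTOK##*:}      # strip the longest prefix matching '*:' -> "8d1e4b"
printf '%s %s\n' "$id" "$token"
```

This prints `3fc9a2 8d1e4b`; the same expansions feed the `-F id=` and `-F token=` fields of the upload request.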

I'm a passionate C advocate, but you should know where to draw the line between wrapping a tool and writing a new one :)
This looks over-engineered to me, but that's only my opinion!

Some things about this service bother me though...
But there are many cases where it will be the fastest and easiest solution to share or backup files.
We don't sell you the classical speech that our service is secure because we use encryption (we encrypt nothing actually). Recent security holes have proven that no one should rely on this argument. By encrypting your data before sending, you can be sure that no one will be able to read them.
you can set the expiration time in hours (max 336 hours)
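The "encrypt before sending" advice quoted above is a one-liner in practice; a sketch with openssl(1), where the filenames and passphrase are made up (any symmetric tool such as gpg -c works just as well):

```shell
#!/bin/sh
# Encrypt locally before uploading; only the .enc file ever leaves the machine.
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -in notes.txt -out notes.txt.enc -pass pass:s3cret
# Whoever downloads it decrypts with the same passphrase.
openssl enc -aes-256-cbc -pbkdf2 -d \
    -in notes.txt.enc -out notes.dec.txt -pass pass:s3cret
```

Passing the passphrase on the command line is only fine for a quick demo; for real use, prefer `-pass file:` or an interactive prompt.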
$ curl -I
curl: (7) Failed to connect to port 443: Connection refused

When you advocate that your service is suitable for backups, or "anonymous" transfers, you can't get away with not providing a fully encrypted version of your service.
Instead of syncthing or external services, I'm using my own self-hosted server more and more.

Via sftp (or scp), I copy my files to the server, into a directory that is auto-indexed by httpd. It gives this :
If necessary, an htpasswd file is enough to password-protect the access.
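For completeness, one way to wire up that htpasswd protection, assuming the httpd here is Apache with AllowOverride enabled (paths are made up; other servers have equivalent directives):

```apache
# .htaccess dropped into the shared directory
Options +Indexes                   # keep the auto-generated listing
AuthType Basic
AuthName "Restricted files"
AuthUserFile /var/www/.htpasswd    # created with: htpasswd -c /var/www/.htpasswd user
Require valid-user
```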
sftp is nice, but it requires giving SSH access to the people you want to be able to retrieve files. Even if it's chrooted, that still bothers me a bit.
I do like the directory listing approach (and that is actually how I do it today). This, however, doesn't allow pushing any data. I wish there were a simple way for browsers to issue PUT or POST requests without having to implement a script server-side... Like you right-click on a page, choose "send files to link location...", then select some files in the explorer and click OK. Browsers have been there for years, HTTP is the main protocol for network communication, and yet we can still barely use it!
To retrieve files, people use the HTTP listing (auto-index) mentioned above. sftp is just for me to put files there, and most servers have SSH access anyway.
sebsauvage has a list of sharing services :
(18-12-2018, 08:34 AM)z3bra Wrote: When you advocate that your service is suitable for backups, or "anonymous" transfers, you can't get away with not providing a fully encrypted version of your service.

You're right, that's why I only use it for non-sensitive data and quick transfers with colleagues. Habits are hard to break.
To let friends send files, I hosted Jirafeau for a while, which is quite simple :
The topic of synchronizing/replicating/sharing files between machines is back in vogue. There's a lot of discussion on the web about it.

OpenBSD is doing a rewrite of rsync, OpenRsync.

Some other talks are about alternatives to puppet, ansible, and company with tools such as rdist:

What's rocking your way, anything to add to the conversation?
I find the Openrsync project funny. The issue tracker is filled with Linux users who complain that their OS is too insecure to make a port viable.
Synchronizing files is an interesting topic, as there are so many ways to do it, and even more use-cases!

I think that there are 3 types of tools to sync files between multiple hosts: one-way, two-way or hybrid.

One-way sync is when a single host pushes to all the others at the same time, and no change happens while the transfer is going on (in theory). This assumes that you only modify one host at a time, and that you always have the latest changes when you edit files on any host.
Tools in this vein are rsync(1), rdist(1), ftp(1) or even git(1). They work in a push/pull manner, and you have no way to ensure that what you have is the latest version, because there is no "synchronisation" state between your hosts. You either push (force the latest change to be what you push), or pull (assume your remote host has the latest version).

Two-way sync works in real-time. Whenever a change happens, it gets pushed to all other hosts. If two hosts are modified in parallel, their mutual changes have to be merged. If the same part of a file is changed on both, a conflict happens. That is the price of two-way sync.
You get a more reliable synchronization, but it is easier to corrupt your data. Two-way sync tools are also forced to act as daemons, always watching your files so they can push changes in real time. Example tools are unison, syncthing and dropbox.

Finally, here comes the shameless plug: synk! (README)
It is what I find the best of both worlds: it is a one-way sync, but it first tries to find the most recent copy using the mtime of your files. This requires good time synchronization between your hosts.
Note that this is only a draft, and there are design problems, like concurrency issues if you fire it on multiple hosts at the same time. But it does the job quite well!
It is not a daemon, so you fire it up when you see fit: manually, with cron, entr, wendy, fswatch, inotifywatch, ... whatever. When started, it connects to all hosts, fetches all the mtimes locally and then spawns rsync(1) processes from the host that has the highest mtime to push the file to all other hosts!
I would like to use bittorrent internally instead of the rsync algorithm, to make it even faster, but that is another topic ;)
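The "pick the copy with the highest mtime" step can be sketched in plain sh; here three local files stand in for the copies on three hosts (names made up), where the real tool would stat over SSH and then spawn rsync from the winner:

```shell
#!/bin/sh
# Three stand-in copies with distinct mtimes (touch -t is POSIX: CCYYMMDDhhmm).
touch -t 201901010000 /tmp/hostA.copy
touch -t 201906010000 /tmp/hostB.copy
touch -t 201903010000 /tmp/hostC.copy
newest= newest_m=0
for f in /tmp/hostA.copy /tmp/hostB.copy /tmp/hostC.copy; do
    m=$(stat -c %Y "$f" 2>/dev/null || stat -f %m "$f")  # GNU stat, then BSD
    if [ "$m" -gt "$newest_m" ]; then newest_m=$m; newest=$f; fi
done
echo "push from: $newest"   # -> push from: /tmp/hostB.copy
```

This also shows the clock-sync caveat: if one host's clock runs fast, its stale copy can masquerade as the newest one.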

What do you guys think? Does that fill a need? Do you see many flaws in the design?
