Nixers project: Bittorrent library
i never trust HDT
always russian hax0rz in that pool
Then that's a good way to prove this library is strong as a grizzly!
A typo, he meant DHT (distributed hash table)
It could be good to have a small client along with it, just like the BearSSL library has brssl. At least for testing whether it works across the whole chain.
If libgbt needs other libraries, this opens a library chain: someone writes a wrapper library over libgbt to make client development more "dev friendly", which is in turn used in a GUI widget to stream videos, itself used in a larger program. What about an open and free implementation of sha1.c and md5.c in the repo? Maybe it need not support MD5 at all, as it is quite deprecated in the RFC.
Having a client shipped with it is indeed a good idea. The library is too young yet though.
It's funny you're talking about a free implementation of SHA1, as I just started exporting sha1.c from libtomcrypt! For MD5, we could do the same if needed.
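For illustration, here is a minimal sketch of how the extracted sha1.c could be used for piece verification, assuming it keeps libtomcrypt's sha1_init()/sha1_process()/sha1_done() interface; the header name and the piecevalid() function are made up, not libgbt code. BitTorrent checks each downloaded piece against the 20-byte SHA1 digests concatenated in the metainfo's "pieces" value, which is why SHA1 is the one digest the library really needs.

Code:
#include <string.h>
#include "sha1.h" /* the sha1.c/sha1.h extracted from libtomcrypt (hypothetical header name) */

/* check a downloaded piece against its expected 20-byte SHA1 digest */
static int
piecevalid(const unsigned char *piece, unsigned long len,
           const unsigned char *expected)
{
	hash_state md;
	unsigned char digest[20];

	sha1_init(&md);
	sha1_process(&md, piece, len);
	sha1_done(&md, digest);
	return memcmp(digest, expected, 20) == 0;
}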
(05-08-2017, 06:18 PM)z3bra Wrote: The library is too young yet though.
A tool to produce a bencoded .torrent metainfo file is already useful, and would already permit testing bencoding and sha1 against other software. First little reward. :)

(30-07-2017, 08:45 PM)z3bra Wrote: "ben" as a prefix for everything
The "ben" prefix is fine for me; would maintaining a list of prefixes in comments make it easy for devs new to the project?

[EDIT]: .torrent metainfo file != torrent file
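As a point of reference for testing against other software, here is roughly what a minimal single-file .torrent metainfo looks like once bencoded. Indentation and line breaks are added for readability only (a real file has none), the tracker URL and file name are made up, and the "pieces" value is raw binary (one 20-byte SHA1 digest per piece, here 16 pieces of 16384 bytes for a 262144-byte file).

Code:
d
  8:announce40:http://tracker.example.org:6969/announce
  4:info
  d
    6:lengthi262144e
    4:name8:file.bin
    12:piece lengthi16384e
    6:pieces320:<16 * 20 raw SHA1 bytes>
  e
e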
r4ndom is working on the bencode() function right now. I wrote a small program to dump a torrent's content to stdout, but it's not really useful. For such tools, I don't think it's a good idea to include them in the repo, as we'll end up with many tools for no practical reason. I prefer keeping them outside the tree until the lib is feature-complete.
As for the prefixes, I settled on simply 'b' for bencoding-related functions. And no, a list of prefixes in a comment shouldn't be needed, as it should be obvious when reading existing code. That's also why the first patches someone submits should be reviewed by existing contributors.
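Since bencode() is still in the works, here is only a hypothetical sketch of what 'b'-prefixed helpers could look like; the names bint() and bstr() and their signatures are invented for illustration and are not libgbt's API. It just shows the two encoding rules everything else builds on: integers as "i<n>e" and strings as "<length>:<bytes>".

Code:
#include <stdio.h>

/* write a bencoded integer, e.g. 42 -> "i42e" */
static int
bint(FILE *f, long n)
{
	return fprintf(f, "i%lde", n);
}

/* write a bencoded string: length prefix, ':', then the raw bytes */
static int
bstr(FILE *f, const char *s, size_t len)
{
	if (fprintf(f, "%zu:", len) < 0)
		return -1;
	return fwrite(s, 1, len, f) == len ? 0 : -1;
}

int
main(void)
{
	/* emits the one-key dictionary "d8:announce9:localhoste" */
	fputc('d', stdout);
	bstr(stdout, "announce", 8);
	bstr(stdout, "localhost", 9);
	fputc('e', stdout);
	fputc('\n', stdout);
	return 0;
}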
(06-08-2017, 07:10 AM)z3bra Wrote: As for the prefixes, I settled on simply 'b' for bencoding-related functions
Perfect :)

(06-08-2017, 07:10 AM)z3bra Wrote: to include them in the repo
Maybe some could be turned into tests.

It may be a bit early to think about it, but this came up on its own while reading about bittorrent: once we start to transfer data from a peer, how do we store it? Here is a proposition.

One approach is to store the parts into files and directories: parts get downloaded to a memory buffer, and once one is complete, it gets saved to disk as <hash of torrent>/<hash of the part>:

Code:
|-- 2072a695613e5103d9ac03c2885c5e2656cb5ff0 # hash of the torrent #1

Advantages:
Disadvantage: a part shared by several torrents gets downloaded and stored once per torrent.

To overcome this, it would be possible to store every single part in a unique directory shared by all the torrents, but then a race condition could occur: if two processes/threads download the same part at the same time, the first one writes it to the disk. Instead, before starting a download, a worker could look in the parts directories of the other torrents to see whether the part already exists. I would go for the simplest way, with one directory per torrent, which still permits optimizations (a sketch of this per-torrent layout is given below).

[EDIT] torrent != parts of a torrent
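Here is a minimal sketch of that one-directory-per-torrent layout, under the assumption that both hashes are 20-byte SHA1 digests stored as 40-character hex names; hex() and savepart() are made up for illustration, and error handling is kept short.

Code:
#include <stdio.h>
#include <sys/stat.h>

/* render a 20-byte digest as 40 lowercase hex characters */
static void
hex(char out[41], const unsigned char h[20])
{
	int i;

	for (i = 0; i < 20; i++)
		sprintf(out + 2 * i, "%02x", h[i]);
	out[40] = '\0';
}

/* store a completed part as <hash of torrent>/<hash of the part> */
static int
savepart(const unsigned char torrenthash[20], const unsigned char parthash[20],
         const unsigned char *buf, size_t len)
{
	char thex[41], phex[41], path[84];
	FILE *f;

	hex(thex, torrenthash);
	hex(phex, parthash);
	mkdir(thex, 0755); /* one directory per torrent; ignore EEXIST */
	snprintf(path, sizeof(path), "%s/%s", thex, phex);

	if ((f = fopen(path, "wb")) == NULL)
		return -1;
	if (fwrite(buf, 1, len, f) != len) {
		fclose(f);
		return -1;
	}
	return fclose(f);
}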
After talking a bit on ##bittorrent @freenode, I learned how clients seem to implement it:
Some put the parts in multiple files in some way or another (like above), but most put the parts directly in the torrent file:
1 - Write parts at the beginning of the torrent file (the full data blob, not the .torrent metainfo file) and sort them as they come.
2 - Or allocate storage for the file up front (such as an empty 2GB file) and fill it with the parts as they come, writing each one at its correct offset. This way is much simpler: as you have a list of which part goes where, there is no sorting involved: read where the part should go, and you know where to read it back (a sketch of this offset-based approach is given below). With this latter approach, in the case of multifile torrents, the files form one contiguous byte stream, so a part can overlap a file boundary and must be split across the files at the right offsets.
These two approaches (1 and 2) have an advantage: no need to keep the part files (which cost a lot of storage [EDIT] and inodes). On the other hand, if the final file is moved, it cannot be seeded anymore. If it were me, I would still do one file per part, but you are not me. :)
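For completeness, here is a sketch of approach 2 under the same caveat: writepart() and the file name are invented for illustration, and "preallocation" is simulated by simply creating the file and letting writes at arbitrary offsets extend it.

Code:
#include <stdio.h>

/* write one completed part at its fixed offset: index * part length */
static int
writepart(FILE *blob, long partlen, long index,
          const unsigned char *buf, size_t len)
{
	if (fseek(blob, index * partlen, SEEK_SET) != 0)
		return -1;
	return fwrite(buf, 1, len, blob) == len ? 0 : -1;
}

int
main(void)
{
	FILE *blob;
	unsigned char part[16384] = { 0 };

	/* the destination blob; parts can then arrive in any order */
	if ((blob = fopen("file.bin", "wb")) == NULL)
		return 1;
	writepart(blob, sizeof(part), 3, part, sizeof(part));
	writepart(blob, sizeof(part), 0, part, sizeof(part));
	fclose(blob);
	return 0;
}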