What software have you made to improve your workflow or rice? - Desktop Customization & Workflow
Anyway, for me, https://github.com/halfwit/dsearch is my main utility for any query, from opening a file to sending a link to the correct program. It uses dmenu as a backend, and many bespoke pieces are tightly integrated, which makes it really unfortunate to try to use with other setups. That's why I'm working on https://github.com/halfwit/searchfs, which presents a 9p filesystem you can serve on your home network and mount on client PCs with 9pfuse or even a bare `mount -t 9p`. From it you can simply read a file, like /path/to/where/i/mounted/youtube/channel/craz3bra to get a list of videos, or /path/to/where/i/mounted/google/image/chickadee to get a list of URLs pointing to images of chickadees, etc.

The benefit of this approach is twofold. First, you can trivially cache the results client-side, as most 9p client implementations have caching baked in. Second, a system with relatively light resources can leverage the more powerful host system and its better (ethernet instead of wifi) latency to the internet. It's also simple to pipe it through something like `cat /path/to/where/i/mounted/youtube/search/chickens | dmenu | plumb` should I require something transient like I had with dsearch, but much more stable, and simple to install and use on any arbitrary system; quite the contrary of dsearch.
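To make the "a query is just a file read" idea concrete, here's a minimal sketch of what a client-side setup could look like. The mount point, server name, and the `query_path` helper are all hypothetical, not part of searchfs itself; only the engine/verb/term path layout comes from the description above.

```shell
#!/bin/sh
# Hypothetical searchfs client setup. MNT and SRV are placeholders.
MNT="${MNT:-$HOME/n/search}"   # where the 9p filesystem gets mounted
SRV="${SRV:-fileserver}"       # host on the LAN running searchfs

# Build the path for a query: engine / verb / term under the mount.
query_path() {
    printf '%s/%s/%s/%s' "$MNT" "$1" "$2" "$3"
}

# One-time setup (kernel 9p client shown; 9pfuse would work similarly):
#   mkdir -p "$MNT"
#   mount -t 9p -o trans=tcp "$SRV" "$MNT"

# A query is then just a file read, pipeable straight into dmenu:
#   cat "$(query_path youtube search chickens)" | dmenu | plumb
```

The nice part is that nothing here is searchfs-specific on the client: it's all stock tools operating on ordinary-looking files.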
Other than that, I've written hwwm, which is deprecated and pending a complete rewrite, both to abstract it from X11 (I want to also use it with Plan 9's Rio) and to clean up the configuration heavily, as well as to move much of the logic into a monolithic binary for the sake of abstracted state management (currently I rely heavily on X11 atoms).

I also did a rewrite of Plan 9's plumber in shell that understood remote URLs by content-type, but since then I've gone back to the original plumber from Plan 9 (basically xdg-open on crack, for the uninitiated) and instead rewrote the client binary to understand remote URLs, which is a much more flexible and clean approach.

Tangential to the plumber thing was a dsearch target called `store`, which took an arbitrary resource, mostly URLs, and ran a specific handler based on the content-type. For a PDF, it would run a script that fetched artwork, set in the author's name and publishing year, and created a launcher entry for a dmenu build with pango image support. Storing a YouTube video meant appending it to an m3u playlist; for an RSS feed, an entry was added to my aggregator; for a GitHub URL, I could clone into my src directory or store it as a bookmark. Super handy and generally useful. Since then, I've realised I could use Plan 9's plumber to achieve the same thing, with a very clean syntax and greater granularity than otherwise possible (seriously, it's so clean!). (The code for this is still available in dsearch, under https://github.com/halfwit/dsearch/blob/...dlers/save, but again it's terribly bespoke to my own needs and not very generally useful for others.)

Basically everything I've made in the past on Linux was re-evaluated when I went to Plan 9, and I came up with better solutions overall.
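For anyone curious what that `store`-style dispatch looks like in practice, here's a rough sketch of the shape of it. The handler names and URL patterns are hypothetical placeholders based on my description above, not the actual dsearch code; the real thing dispatched on content-type, whereas this toy version just pattern-matches the URL.

```shell
#!/bin/sh
# Toy sketch of store-style dispatch: map a resource (URL) to a handler.
# Handler names are placeholders; the real handlers were bespoke scripts.
store_handler() {
    case "$1" in
        *youtube.com/watch*|*youtu.be/*) echo append-m3u ;;          # playlist
        *github.com/*)                   echo clone-or-bookmark ;;   # src dir
        *.pdf)                           echo pdf-launcher ;;        # dmenu entry
        *.rss|*/feed*)                   echo add-to-aggregator ;;   # RSS
        *)                               echo bookmark ;;            # fallback
    esac
}

# e.g. store_handler "https://github.com/halfwit/dsearch"
```

A plumber rule set expresses the same mapping declaratively, one rule block per resource type, which is exactly why I ended up preferring it.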