vain
(23-06-2020, 02:23 PM)ckester Wrote: rawdog doesn't do anything to the feed content except wrap it up in div tags for inclusion in a local webpage.

Oh, I see. I misunderstood then. I was assuming it only showed the title and maybe a few lines, so you'd *always* have to go visit the original page.
Saos
Love them, will use them whenever possible as an alternative to following people on YouTube/Twitter/podcasting platforms. Newsboat (and the built-in podboat) are terminal-based and easy to use once you have the XML.
twee
I love feeds; I access them via email, with rss2email doing the fetching in the background. My only problem with the program is that it doesn't do anything about feeds that only provide summaries or links; my hackish solution is to not follow those websites. Previously I used Newsboat, which had an option to automatically download the full page for feeds that provided only summaries; I'm not sure how it guessed which feeds those were, though.
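My guess (and it is only a guess, not Newsboat's actual logic) is a heuristic along these lines: if entries lack a full content element and their bodies are short, treat the feed as summary-only. A sketch in Python with feedparser:

```python
# Pure guesswork at how a reader might flag summary-only feeds;
# this is NOT Newsboat's actual logic, just one plausible heuristic.
import feedparser  # third-party: pip install feedparser

def looks_summary_only(url, min_chars=500):
    d = feedparser.parse(url)
    if not d.entries:
        return False
    substantial = 0
    for e in d.entries:
        # Atom feeds put full bodies in 'content'; RSS feeds often
        # only have 'summary' (mapped from <description>).
        if "content" in e and e.content:
            body = e.content[0].value
        else:
            body = e.get("summary", "")
        if len(body) >= min_chars:
            substantial += 1
    # Fewer than half the entries with a real body => summary-only.
    return substantial < len(d.entries) / 2
```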

I also get things like some YouTube channels via RSS (I don't have a Google account), and run the URLs the feed provides through a script to queue them for downloading at some point.
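The script is nothing elaborate. A minimal sketch of the idea (the channel-feed URL pattern is YouTube's; the plain-text queue file and dedup logic here are placeholders, not my exact setup):

```python
# Sketch of the "queue feed entries for later download" idea.
# CHANNEL_ID and the queue file name are placeholders.
import feedparser  # third-party: pip install feedparser

FEED = "https://www.youtube.com/feeds/videos.xml?channel_id=CHANNEL_ID"
QUEUE = "download-queue.txt"

def queue_new_videos():
    try:
        with open(QUEUE) as f:
            seen = {line.strip() for line in f}
    except FileNotFoundError:
        seen = set()
    d = feedparser.parse(FEED)
    with open(QUEUE, "a") as f:
        for entry in d.entries:
            if entry.link not in seen:
                f.write(entry.link + "\n")

if __name__ == "__main__":
    queue_new_videos()
```

Something like `yt-dlp -a download-queue.txt` can then drain the queue whenever convenient.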

Like a few others in this thread, I like blogrolls as a way of discovering sites. I'd be interested in a piece of software which automatically converts an OPML file into an HTML page, or something.
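It wouldn't take much code, either. A rough standard-library sketch, assuming the usual `xmlUrl`/`htmlUrl`/`text` attributes on the `outline` elements:

```python
# Rough sketch of the OPML-to-HTML blogroll idea. Folder nodes are
# just outlines without an xmlUrl, so they get skipped.
import sys
import xml.etree.ElementTree as ET
from html import escape

def opml_to_html(path):
    items = []
    # iter() picks up nested outlines (folders) as well.
    for node in ET.parse(path).iter("outline"):
        feed = node.get("xmlUrl")
        if not feed:
            continue  # a folder, not a subscription
        site = node.get("htmlUrl") or feed
        title = node.get("title") or node.get("text") or feed
        items.append('<li><a href="%s">%s</a></li>'
                     % (escape(site, quote=True), escape(title)))
    return "<ul>\n%s\n</ul>" % "\n".join(items)

if __name__ == "__main__":
    print(opml_to_html(sys.argv[1]))
```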

(23-06-2020, 01:08 PM)vain Wrote: I was naïvely hoping to get recommendations along the lines of Liferea – but, yes, SaaS is probably what normal users want. (Maybe even that is too much work. I have no idea. I live in a different world. :-))

There's a world out there. I started out with QuiteRSS, and have also enjoyed Rawdog, Elfeed, and Gnus (with Gwene, or another similar thing whose name has entirely escaped me). They're all local, and they all provide a slightly different experience, depending on what tickles you.
bouncepaw
Been using RSS for a couple of years. I recently moved from Feedly to a Miniflux instance a friend hosts for us both.

RSS makes me happy. It is easy to read good content from many places without actually visiting them. Best of all, I don't have to go to LiveJournal (it barely loads on my 4 GB RAM phone!).

Though I still consume most of my content through social media, mostly Telegram (awesome in all ways) and Twitter (cool idea, bad implementation).

It's sad how hard it is to access content on social media nowadays. Take Reddit, for example: I have no desire to see the READ IN OUR APP popup twice every time I click anything. I even have the app installed; it's awful.

RSS is peaceful. I recommend everyone give it a try.
acg
Recently I've changed my social media behavior; I mostly used Twitter. I unfollowed many people in hopes of curating the content, but even the accounts I know post quality content (I keep them in Lists) share other stuff from time to time, which ends up contaminating my feed.

After that I tried doing the same with Mastodon, but it wasn't really a solution either. The content is better, but it also depends on the instance.

Finally (starting June 2020) I opened an old Newsboat `urls` file I had and started from there. I enjoyed the experience much more, but I usually read while commuting, and pulling out my laptop on public transportation is a no-go. So, as mentioned by @bouncepaw, I also set up Miniflux; from the web it's... decent. The Android apps I've tried that use the Fever API, though, either haven't been maintained for some time or are still buggy (some can't even mark items as read).

I'm still looking for alternatives; I'll keep you updated, since Miniflux feels promising and is just a binary (with PostgreSQL required) that one can easily host.
twee
(01-07-2020, 09:11 AM)acg Wrote: I'm still looking for alternatives

What features are you looking for? Do you like the ability to sort by feed? As mentioned by a few people, rawdog is great for river-of-news feeds; just stick it in cron every hour or so and output to a web page you can access. It's written in Python and so obviously isn't just a binary. If that's a dealbreaker for you, you might be interested in hacking something similar together with sfeed. There are probably other things that would suit you, but I don't know of them.
acg
(01-07-2020, 09:40 AM)twee Wrote: As mentioned by a few people, rawdog is great for river-of-news feeds; just stick it in cron every hour or so and output to a web page you can access. It's written in Python and so obviously isn't just a binary.

I only looked at rawdog for a moment. As a river of news it looks great, and since I already use a different approach for saving links, that's basically all I need. The whole "just a binary" thing is mostly about reducing dependencies and keeping my server clean, but I could also benefit from rawdog's static output and lack of a database.

Should be easy to get that running. From what I gather, it only generates a webpage with the aggregated feeds, so there's no way to sync for offline reading, right? Not crucial.

EDIT: rawdog doesn't support Python 3.*, which keeps me from trying it.
ckester
Yeah, rawdog is still based on Python 2.7 and there doesn't seem to be any work on migrating it to 3. That's why moving to sfeed has been a backburner project of mine for quite a while now. Maybe I should get to work on that.

sfeed lends itself to a pipes-and-filters approach that I find congenial. I'm not sure I like the way sfeed_update "merges" feeds, but I haven't looked too deeply at that yet. I might end up using only the XML parser from sfeed and writing my own code to build the river-of-news pages.
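For example, the river page itself could be a small filter over sfeed's TAB-separated output. A rough sketch (the field order, UNIX timestamp then title then link, is my reading of sfeed(1); verify it against your version):

```python
# Filter sketch: read sfeed's TAB-separated output on stdin, write a
# river-of-news HTML list to stdout. sfeed escapes tabs/newlines
# inside fields, so a plain split on TAB should be safe.
import sys
import time
from html import escape

items = []
for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if len(fields) < 3:
        continue
    ts, title, link = fields[0], fields[1], fields[2]
    items.append((int(ts or 0), title, link))

items.sort(reverse=True)  # newest first, river-of-news style

print("<ul>")
for ts, title, link in items:
    when = time.strftime("%Y-%m-%d %H:%M", time.localtime(ts))
    print('<li>%s <a href="%s">%s</a></li>'
          % (when, escape(link, quote=True), escape(title)))
print("</ul>")
```

Something like `sfeed < feed.xml | python3 river.py > index.html` (with a hypothetical river.py holding the sketch above) would then be one stage in the pipeline.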

A more serious concern with sfeed_update is that it doesn't seem to check whether a feed has actually been updated since the last time it was fetched. It looks like it uses curl(1) to download the whole feed each and every time. Not cool, generating unnecessary traffic like that. As I mentioned before, some sites include the entire article content in their feed.

(I need to go back and look at how rawdog is dealing with unchanged feeds: is it just a matter of making a conditional GET with If-Modified-Since? I seem to recall that the pertinent date info is stored in the rawdog db for each feed...and I don't see anything like that in the sfeedrc or generated files. But this is just a first impression and I could be wrong.)

UPDATE: yes, looking at the fetch() function in rawdog.py and the arguments it passes to feedparser, I see that rawdog uses feedparser's etag and/or last-modified support to save bandwidth if and when the publisher supports them. With some changes to the sfeedrc file, or the creation of a db similar to rawdog's, it shouldn't be too hard to add the same feature to sfeed_update's use of curl(1). So there's item #1 on the TODO list. ;)
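For reference, the feedparser side of it is tiny. A sketch (the shelve cache is my own stand-in, not rawdog's actual db format):

```python
# Conditional GET via feedparser: pass back the etag and
# last-modified values from the previous fetch and let the server
# answer 304 Not Modified instead of resending the whole feed.
import shelve
import feedparser  # third-party: pip install feedparser

def fetch(url, cache_path="feedcache"):
    with shelve.open(cache_path) as cache:
        etag, modified = cache.get(url, (None, None))
        d = feedparser.parse(url, etag=etag, modified=modified)
        if getattr(d, "status", None) == 304:
            return None  # unchanged since last fetch; nothing to do
        cache[url] = (getattr(d, "etag", None),
                      getattr(d, "modified", None))
        return d
```

On the curl side, -z/--time-cond covers If-Modified-Since, and if I remember right newer curl versions also have --etag-save and --etag-compare, which would map naturally onto sfeed_update.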
ckester
This got posted recently and was updated today. It might interest readers of this thread:
https://codemadness.org/sfeed_curses-ui.html
Dworin
I've been using newsboat for a while, just for the general things I know I want to browse. If I want to pursue something, I'll switch to the browser.
