z3bra
(19-12-2015, 02:31 PM)pranomostro Wrote: But we are not using Gopher
and Markdown for websites,
we are using CSS3, HTML5 and Javascript

Gopher is only a protocol to serve files, like http. The content you serve over a protocol doesn't really matter. So a graphical gopher browser could download an index.html, default.css, script.js, and render everything as expected.

It would just be simpler to retrieve elements using gopher, as per the specification.
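To show how thin the protocol is, here is a minimal sketch in C: a Gopher request is a single selector line ending in CRLF, and the response is just the raw file. The host and selector below are only examples, not anything from this thread.

```c
/* Minimal Gopher fetch: send one selector line, print what comes back.
   gopher.floodgap.com is only an example public server. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints = { .ai_socktype = SOCK_STREAM }, *res;
    if (getaddrinfo("gopher.floodgap.com", "70", &hints, &res) != 0)
        return 1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0)
        return 1;

    /* the whole request: "<selector>\r\n" -- here the root menu */
    const char *req = "/\r\n";
    write(fd, req, strlen(req));

    /* the response is the file (or menu) itself; just dump it */
    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, n, stdout);

    close(fd);
    freeaddrinfo(res);
    return 0;
}
```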
pranomostro
Yes, I am sorry I didn't make clear that I like neither http nor the structure of websites.

>The content […] doesn't really matter
I would disagree here: the content is the most important thing. If the content is easy to parse with an ad-hoc parser written in 'awk|sed|grep|perl', I see that as a positive. If the content has a very complex structure, it is harder to process, which really matters to me, because I like to automate things, and processing websites shouldn't be an exception.
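For instance, a Gopher menu is just tab-separated lines, so an ad-hoc parser that pulls out every link fits in a handful of lines. Here is a sketch in C (an awk one-liner splitting on the same tabs would do just as well); the layout assumed is the standard menu line `type+display TAB selector TAB host TAB port`.

```c
/* Tiny ad-hoc parser in the spirit of the awk|sed|grep one-liners
   above: read a Gopher menu on stdin, print host+selector of every
   entry that has all its fields. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[1024];
    while (fgets(line, sizeof line, stdin)) {
        /* menu line: <type+display> \t <selector> \t <host> \t <port> */
        char *display  = strtok(line, "\t");
        char *selector = strtok(NULL, "\t");
        char *host     = strtok(NULL, "\t");
        if (display && selector && host)
            printf("%s%s\n", host, selector);
    }
    return 0;
}
```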
z3bra
I do agree that the content matters for users. But we were talking about the protocols here. So whether you're sharing a text, JSON, HTML or C source file, it doesn't make any difference with gopher. You just share a text file. That was my point.

I think we can both agree that HTML sucks, and that given the trend of static HTML blogs, Markdown would be enough in most cases.
XcelQ
@z3bra my site already does that: statically generated HTML, but the posts are in Markdown, thanks to Hugo. Maybe I should just ditch Hugo and do everything in Markdown?
Pr0Wolf29
The only things I can think of that aren't files are databases. I'm not sure everything being a file is necessarily a good thing. Some things should only be accessed between programs, without the user ever touching them, and that's fine. When you have thousands of atomic reads and writes going to a database, direct access from the outside could mess that up greatly.
rain1
That's such a cool idea, being able to mount a site and copy stuff out of it. It's a shame they don't usually provide the information needed to do ls or glob.

I found that ftp clients are usually really awkward; it only occurred to me from reading this thread that if they just mounted the remote filesystem, it would be so much easier! Does anyone know/use an ftp client like that?

Also sshfs rocks.
venam
(19-04-2016, 08:48 AM)rain1 Wrote: I found that ftp clients are usually really awkward; it only occurred to me from reading this thread that if they just mounted the remote filesystem, it would be so much easier! Does anyone know/use an ftp client like that?

I recently, and finally, tried Freenet.
The basic idea of Freenet is a secure peer-to-peer network of files.

It goes like this: you browse it like you browse the internet, but you don't request webpages from a main server; you request them from a host you are connected to. If this host doesn't have the page, it requests it from someone else.
And so on, until someone has the file/webpage (or it's declared not found); the file then travels back to the user and is stored on all the nodes that happened to be on its way.

With Freenet you are actually storing webpages and files locally and in a distributed manner.

PS: What I presented is a very "dumbed-down" view of how Freenet works; for accurate details, check their website.
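To make the caching idea concrete, here is a toy model in C of that "dumb" description: on a miss, a node asks its neighbour, and the answer is cached on the way back, so the file ends up stored on every node along the path. The node layout and cache size are invented for illustration; real Freenet routing is far smarter.

```c
/* Toy Freenet-style lookup: miss -> ask neighbour -> cache the
   answer locally on the way back. All names are invented. */
#include <stdio.h>
#include <string.h>

#define MAX_KEYS 8

struct node {
    const char *name;
    struct node *neighbour;        /* next hop on a miss */
    const char *keys[MAX_KEYS];    /* local cache: keys ... */
    const char *vals[MAX_KEYS];    /* ... and their contents */
    int nkeys;
};

static const char *lookup(struct node *n, const char *key)
{
    if (n == NULL)
        return NULL;               /* nobody had it: not found */
    for (int i = 0; i < n->nkeys; i++)
        if (strcmp(n->keys[i], key) == 0)
            return n->vals[i];     /* hit: the answer flows back */
    /* miss: ask the next node, then store the result locally,
       so every node on the path ends up keeping the file */
    const char *v = lookup(n->neighbour, key);
    if (v != NULL && n->nkeys < MAX_KEYS) {
        n->keys[n->nkeys] = key;
        n->vals[n->nkeys] = v;
        n->nkeys++;
    }
    return v;
}

int main(void)
{
    struct node c = { "c", NULL, { "page" }, { "<html>...</html>" }, 1 };
    struct node b = { "b", &c, { 0 }, { 0 }, 0 };
    struct node a = { "a", &b, { 0 }, { 0 }, 0 };

    printf("%s\n", lookup(&a, "page"));          /* fetched via b and c */
    printf("a now caches %d key(s)\n", a.nkeys); /* stored on the way back */
    return 0;
}
```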
pranomostro
From your description, it seems a lot like IPFS.
josuah
[EDIT] I was confusing text files and files in general. I'll keep the post here, but I can move or remove it if needed.

Web

In 'everything is a file' I first saw configuration files and file descriptors (I mean like /dev/*). But I forgot about how some files are inaccessible, like webpages that must be parsed to reveal their content, and the other files related to them (css, js, assets, but also other pages...).

Aaron Swartz brought the fun back to HTML source and made this: https://github.com/aaronsw/html2text

config.h

There is some software written in C that uses a config.h file for its configuration. That makes configuration easy for the developer, and the software light, but once compiled, the configuration is not a text file anymore; it is part of the software. So if the software is provided as a binary package, there is no way to edit the configuration. That is why I think source distributions are a step further toward EIAF.
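A hypothetical sketch of what that looks like, in the suckless style (dwm and st are well-known examples of this approach); the two values below stand in for the contents of a config.h and end up baked into the executable:

```c
/* Hypothetical compile-time configuration: the values below stand
   in for what a config.h would contain. */
#include <stdio.h>

/* --- what config.h would hold --- */
static const char *font   = "monospace:size=10";
static const int   border = 2;   /* window border, in pixels */

int main(void)
{
    /* once compiled, these values live inside the binary: a user of
       a binary package cannot change them; a user of the source
       edits config.h and rebuilds */
    printf("font=%s border=%dpx\n", font, border);
    return 0;
}
```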

Lisp philosophy

A sibling approach I discovered with Emacs is "everything is code" (and code as data): all information is stored as Lisp code files, with vertical integration, making it possible to get information about every node of the system interactively: the key being pressed, the function being executed, the element under the cursor, processes, buffers...
And for all of those, the following are available in three or four keystrokes:
- Documentation: often from docstrings (automatically formatted and grammar-checked while writing them);
- The corresponding source code;
- States and return values;
- Live debugging, like http://www.pythontutor.com/visualize.html

Then, as _everything_ can be obtained by calling a function, there is no need for any text file anymore, and no need to ever parse anything: just call one function with some arguments and you have your value. In the days of Lisp machines, the whole system could be passed through the debugger.

On the other hand, this requires system information to always be accessed through a running Lisp instance populating all the variables, which raises the minimal complexity required to run a system. And it is not flexible toward anything other than Lisp. For that, text files are needed.



