Everything is a file - Psychology, Philosophy, and Licenses

venam
Administrators
Hello fellow nixers,
This thread is about Unix philosophy and files.

Quote:Everything is a file

I hadn't really understood the principle behind this until yesterday.
I suddenly woke up in the middle of the night wondering:
Quote:But what's not a file?

Like someone who has lived in a cocoon for too long, I was so accustomed to having everything as a file that I completely forgot about the alternatives.

I opened the Wiki and refreshed my mind. Then I casually browsed the internet and found threads such as this one.

I had forgotten about operating systems where all the configuration is stored inside the program itself, where to access the hardware or interact with it you have to fight through a mountain of documentation, and where most of the important state lives only in volatile memory.

Let's uphold this philosophy and fight against any software in the Unix community that doesn't respect it!

Bump this thread with your opinion on files and your own stories.
sulami
Members
I think EIAF is an incredibly elegant solution to a problem that only became really relevant decades later. I cannot believe people are still building systems that expose super-simple stuff via complicated syscalls and other APIs. It is (one of) the bases for the effectiveness of shell scripting, one of the huge benefits of UNIX, enabling easy access to the heart of the machine, both software and hardware. It promotes choice of tools, because there is no need for elaborate client-side libraries to exist beforehand.
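For instance, poking at the heart of the machine is just file I/O; the exact paths vary by kernel and driver, so take these as illustrative:
Code:
# read CPU details with no special API, just a file read
cat /proc/cpuinfo
# flip a kernel setting by writing to a file
echo 1 > /proc/sys/net/ipv4/ip_forward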
ninjacharlie
Members
I really think it's cool how easy it is to create your own FUSE filesystem for whatever wacky way you want to expose your code through files. One of my recent ideas (I haven't had a chance to implement it yet) was a FUSE filesystem where you would mount a video file and get access to each frame as a file. Then you could do standard image editing stuff with whatever software you wanted *on each frame*. I haven't quite worked out how everything would fit together, but I thought it was a cool idea.
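Roughly, using it might look like this (the videofs command and the frame layout are invented here, since nothing like this exists yet):
Code:
# hypothetical session with the imagined filesystem
mkdir /tmp/frames
videofs movie.mp4 /tmp/frames            # mount the video as a directory of frames
mogrify -modulate 120 /tmp/frames/*.png  # run ImageMagick over every frame
fusermount -u /tmp/frames                # unmount when done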
Houseoftea
Long time nixers
(17-12-2015, 10:26 AM)ninjacharlie Wrote: Then you could do standard image editing stuff with whatever software you wanted *on each frame*. I haven't quite worked out how everything would fit together, but I thought it was a cool idea.

very cool!
And people say that Linux doesn't have powerful video editors.
z3bra
Grey Hair Nixers
Well, it doesn't exist yet :P
pranomostro
Long time nixers
One thing that bugs me is that computer networks (and especially the internet) are not filesystems on modern unices.

What if I want to download all the images from an Instagram account? It is unfortunately not as simple as
Code:
mount www.instagram.com/testaccount /net/insta
and then just use
Code:
cp *.png *.jpg /tmp/img
to copy the images.
Instead, I can do two things: spend an hour and a half reading the wget manpage to find out how to do this, or write a small script with an ad-hoc parser using curl. But if I want to reuse this another time, I will have to write another parser.
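The ad-hoc route looks something like this sketch, which depends entirely on the page markup and breaks as soon as it changes (the URL and the grep pattern are just illustrations):
Code:
# fetch the page, grep out image URLs, download each one
curl -s https://www.instagram.com/testaccount |
    grep -o 'https://[^"]*\.jpg' | sort -u |
    while read -r url; do curl -sO "$url"; done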

My main complaint here is that we could treat data on servers as filesystems that are easily mounted into the local filesystem (this is, obviously, what Plan 9 does).

One could argue that there are FUSE filesystems for that, but that argument doesn't hold. Why? Because FUSE filesystems are specialised and can mostly be used for only one website; for another website, somebody will have to write another one. I can't accept this as a viable solution.

Imagine what e-mail would look like in a world where you can mount servers as filesystems. You log into that server, ls your inbox, mount another server, go to your friend's directory, edit a mail for him, save it on the other server, unmount the two servers again, and proceed. I don't know how you feel about this, but I wish so much that this could come true.
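In shell terms the whole exchange might look like this; every command and path here is hypothetical, since no such mail filesystem exists on modern unices:
Code:
# imagined session with mail servers mounted as filesystems
mount mail.example.org /n/mymail     # attach my mail server
ls /n/mymail/inbox                   # read mail with plain ls and cat
mount mail.friend.org /n/friend      # attach my friend's server
cp reply.txt /n/friend/inbox/new     # "send" by writing a file
umount /n/friend /n/mymail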
z3bra
Grey Hair Nixers
This is what gopher is for.
pranomostro
Long time nixers
Gopher or 9P.

But we are not using Gopher and Markdown for websites; we are using CSS3, HTML5, and JavaScript, things that are not content but style and behaviour descriptions, and which have to be parsed out.

There are simpler solutions, but we are not using them.
Wildefyr
Long time nixers
That's because the 'simple solutions' are not what consumers want nowadays for the web.
apk
Long time nixers
(19-12-2015, 05:56 PM)Wildefyr Wrote: That's because the 'simple solutions' are not what consumers want nowadays for the web.
User friendliness always has and always shall supersede simplicity.
What's easier to drive, an automatic or a stick?
Yet such concepts should become a reality, since people like us could benefit greatly from them.
z3bra
Grey Hair Nixers
(19-12-2015, 02:31 PM)pranomostro Wrote: But we are not using Gopher and Markdown for websites; we are using CSS3, HTML5, and JavaScript

Gopher is only a protocol to serve files, like HTTP. The content you serve over a protocol doesn't really matter, so a graphical gopher browser could download an index.html, default.css, and script.js, and render everything as expected.

It would just be simpler to retrieve the elements using gopher, as per the specification.
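The protocol is simple enough to speak by hand: a client sends a selector followed by CRLF and reads back the file (the hostname here is made up):
Code:
# fetch a file over gopher with nothing but nc
printf '/index.html\r\n' | nc gopher.example.org 70 > index.html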
pranomostro
Long time nixers
Yes, I am sorry I didn't make it clear that I like neither HTTP nor the structure of websites.

>The content […] doesn't really matter
I would disagree here; the content is the most important thing. If the content is easy to parse with an ad-hoc parser written in awk, sed, grep, or perl, I see that as a positive. If the content has a very complex structure, it is harder to process, which really matters to me, because I like to automate things, and processing websites shouldn't be an exception.
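With line-oriented content, extraction stays a single pipe stage (the file and header name are just for illustration):
Code:
# pull one field out of line-oriented content; no parser needed
awk -F': ' '/^Subject:/ { print $2 }' mailbox.txt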
z3bra
Grey Hair Nixers
I do agree that the content matters for users. But we were talking about the protocols here. So whether you're sharing a text, JSON, HTML, or C source file, it doesn't make any difference with gopher: you just share a text file. That was my point.

I think we can both agree that HTML sucks, and that, given the trend of static HTML blogs, markdown would be enough in most cases.
XcelQ
Members
@z3bra my site already does that: statically generated HTML, but the posts are in markdown, thanks to Hugo. Maybe I should just ditch Hugo and do everything in markdown?
Pr0Wolf29
Members
The only things I can think of that aren't files are databases. I'm not sure everything being a file is necessarily a good thing. There are some things that should only be accessed between programs, things the user never touches. That's fine. When you have thousands of atomic reads and writes going through a database, direct access from the outside could mess that up greatly.
rain1
Members
That's such a cool idea, being able to mount a site and copy stuff out of it. It's a shame sites don't usually provide the information needed to do an ls or a glob.

I've found that FTP clients are usually really awkward; it only occurred to me from reading this thread that if they just mounted the remote filesystem, it would be so much easier! Does anyone know/use an FTP client like that?

Also sshfs rocks.
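For example, mounting a remote home directory over ssh and then using ordinary tools on it (host and paths are placeholders):
Code:
# mount, browse with normal commands, unmount
sshfs user@host.example.org:/home/user /mnt/remote
ls /mnt/remote
fusermount -u /mnt/remote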
venam
Administrators
(19-04-2016, 08:48 AM)rain1 Wrote: I've found that FTP clients are usually really awkward; it only occurred to me from reading this thread that if they just mounted the remote filesystem, it would be so much easier! Does anyone know/use an FTP client like that?

I recently, and finally, tried Freenet.
The basic idea of Freenet is a secure peer-to-peer network of files.

It goes like this: you browse it like you browse the internet, but you don't request webpages from a central server; you request them from a host you are connected to. If that host doesn't have the page, it requests it from someone else, and so on until someone has the file/webpage (or it's not found). The file then circuits back to the user and is stored on all the nodes that happened to be on its way.

With Freenet you are actually storing webpages and files locally and in a distributed manner.

PS: What I presented is a very "dumb" approach to how Freenet works; for accurate details, check their website.
pranomostro
Long time nixers
From your description it sounds a lot like IPFS.
josuah
Long time nixers
[EDIT] I was confusing text files and files. I'll keep the post here, but I can move or remove it as well.

Web

In 'everything is a file' I first saw configuration files and device files (I mean like /dev/*). But I forgot about how some files are inaccessible, like webpages that must be parsed to reveal their content, along with the other files related to them (CSS, JS, assets, but also other pages...).

Aaron Swartz brought the fun back to HTML source and made this: https://github.com/aaronsw/html2text
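Piping a page through it gets readable text back out (the exact invocation may differ between the original script and the packaged versions):
Code:
# strip a page down to its text content
curl -s http://example.com/ | html2text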

config.h

There is some software written in C that uses a config.h file for its configuration. That makes configuration easy for the developer, and the software light, but once compiled, the configuration is not a text file anymore; it is part of the binary. So if the software is provided as a binary package, there is no way to edit the configuration. That is why I think source distributions are a step further toward EIAF.
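Concretely, changing the configuration means rebuilding, as in the typical suckless workflow:
Code:
# edit the configuration, then recompile to bake it into the binary
$EDITOR config.h
make clean install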

Lisp philosophy

A sibling approach I discovered with Emacs is "everything is code" (and code as data): all information is stored as lisp code files, with a vertical integration that lets you get information about every node of the system interactively: the key being pressed, the function being executed, the element under the cursor, processes, buffers...
And for all of those, the following are available in three or four keystrokes:
- Documentation: often from docstrings (automatically formatted and grammar-checked while writing them);
- The corresponding source code;
- States and return values;
- Live debugging, like http://www.pythontutor.com/visualize.html

Then, as _everything_ can be obtained by calling a function, there is no need for any text file anymore, and no need to ever parse anything: just call one function with some arguments and you have your value. At the time of lisp machines, the whole system could be passed through the debugger.
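Even from a shell, a value is one function call away rather than one file-parse away (assuming Emacs is installed):
Code:
# ask the lisp for a value instead of parsing a file
emacs --batch --eval '(princ (emacs-version))'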

On the other hand, this requires the system information to always be accessed through a running lisp instance populating all the variables, which raises the minimal complexity required to run a system. And it is not flexible toward anything other than lisp. For that, text files are needed.
venam
Administrators
sshbio: I think you confused text files with files.

Text files are just a subset of files.
josuah
Long time nixers
(21-04-2016, 12:31 AM)venam Wrote: Text files are just a subset of files.

Oh, yes, indeed I did! Thank you for the clarification.