Creating a knowledge base for all things related to a server - Servers Administration, Networking, & Virtualization

Hello everyone!

So, as I have been discussing in the IRC channel, the idea is to create an accurate, centralized knowledge base about everything related to a server. It doesn't matter if it's your personal home lab, a university, or even a data center! I (and I'm pretty sure you too) am interested in the technology, the people, and even the environment involved in keeping it all running!

Off the top of my head, I'm interested in how the following are designed, implemented, managed, and maintained, including but not limited to all the software and hardware (bonus points for including specifications) involved in your server(s):
<li> Networking Architecture</li>
<li> Electrical Infrastructure</li>
<li> Cooling Systems (if any)</li>
<li> Security Measures</li>
<li> Virtualization Infrastructure</li>
<li> OS Deployment</li>
<li> Storage Management</li>
<li> Backup Systems</li>
<li> Configuration Management</li>
<li> Migration Scenarios (tools and people involved)</li>

So that's just a very basic overview, and I'm 99.99% sure I missed a lot of details, but that's what you awesome people are for ;)
You may want to fully define every one of those terms, as it's not obvious what needs to be said about each of them.

An example would be great.
Yes, sure. Just wanted to get the few topics I had in mind out there. But I do have a basic example coming up !
Grey Hair Nixers
As far as I'm concerned, I find the following network architecture to be pretty good, common practice:
  • 1 public IP per physical site (more if you want backup lines)
  • VPN between all the physical sites (to interconnect offices)
  • 1 internal network per "service" (development, testing, benching, ...)

This architecture is easy to maintain and understand. The hardest part here is figuring out the subnet mask for each internal network.
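One way to sketch that subnet plan is to carve a private /16 per site into one /24 per service. The prefix and service names below are illustrative, not from the thread:

```shell
#!/bin/sh
# Hypothetical plan: one /24 per "service" network inside a site's 10.20.0.0/16.
SITE_NET="10.20"   # site prefix (10.20.0.0/16)
i=0
for svc in development testing benching; do
    echo "$svc: ${SITE_NET}.${i}.0/24"
    i=$((i + 1))
done
```

With a /24 per service you get 254 usable hosts each, which is usually plenty, and the third octet alone tells you which service a host belongs to.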

For the electrical/cooling systems, I must say I'm no expert. All I know is that our server room has its own electrical circuit, independent from the rest of the building, and we have 3 air conditioners in the room to keep it at around 20°C.

For OS deployment, well, it depends on the purpose of the machine. We actually install the OS by hand every time, be it a Windows computer, a CentOS server, ESXi, or whatever we need.
For most Linux servers, we then use a configuration management system to "bootstrap" the machine and give it a purpose.
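That bootstrap step could even be folded into the install itself, e.g. via a kickstart `%post` section. This is only a sketch assuming Ansible as the configuration management tool; the repository URL and playbook name are hypothetical:

```shell
# Hypothetical kickstart %post fragment for a CentOS install:
# install the CM agent, then pull and apply this host's role.
%post
yum -y install ansible git
# ansible-pull clones the repo and runs the playbook locally
ansible-pull -U https://git.example.com/infra.git site.yml
%end
```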

Regarding storage, you don't have much choice beyond buying some here and there, or relying on an outside service that handles it for you.
I personally (like, at home) use 3 backup methods:

For my desktop, everything is both on my actual drive, on a second hard drive INSIDE the computer, and on a USB drive sitting on my desk. It means that if my house burns down, I'll lose my data...
The problem is that for the amount of data I have, online backup solutions are too expensive... My plan was to buy cheap boxes to run in the houses of people I know, or to build a "circle of trust" with people online to back up my things in multiple places.
I'm still "working" on that part...

For servers, I use Tarsnap to back up my servers' configuration and data. The service is pretty cheap, and I trust the guy behind it.
My main server (the one with the most disk space) fetches data via rsync from its peers over a VPN into a local folder, then sends the data to the Tarsnap servers weekly.
I then get a report of the uploaded data by mail, as a reminder that my backups are working.
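The pull-then-archive flow above could be wired up with a crontab along these lines. The peer names, paths, and schedule are hypothetical; note that `%` must be escaped in crontab entries:

```shell
# Hypothetical crontab for the main backup server.
# Nightly: pull data from each peer over the VPN into a local folder.
0 3 * * *  rsync -az root@peer1.vpn:/srv/data/ /srv/backups/peer1/
0 3 * * *  rsync -az root@peer2.vpn:/srv/data/ /srv/backups/peer2/
# Weekly (Sunday): push a dated archive to Tarsnap.
# Cron mails the command's output, which doubles as the backup report.
0 5 * * 0  tarsnap -c -f "weekly-$(date +\%Y-\%m-\%d)" /srv/backups
```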

That's all folks!
This one is pretty neat:
Long time nixers
Wouldn't a wiki make sense?
