More OpenSolaris home fileserver setup

I am writing this blog post on paper, in the car while Tien drives, in preparation for NaNoWriMo this year, during which I hope to write my entire novel with pen and paper.

I think I want to try to re-learn a cursive style of handwriting, primarily for the supposed speed advantage.

I recently finished setting up the hardware of the new home fileserver. I had previously had trouble using more than one memory DIMM. That was bad enough when I had two 512MB DIMMs (a fact I only recently rediscovered, having assumed the 512MB of usable RAM was a pair of 256MB DIMMs), but far less tolerable with the two 1GB ECC DIMMs I just purchased. I began to suspect that the problem (the machine either not POSTing or not recognizing more than the first DIMM) might somehow be caused by an incompatibility between the 13-year-old PCI video card (an S3 Vision 968 chipset with the full 4MB of VRAM) and a large amount of system RAM, for example due to a design limitation in the video BIOS. How many PCs in 1995 had over 1GB? I see that near the end of 1996, Dell was selling servers that you could upgrade to 1GB. IA Wayback Machine link

Anyway, now that I’m using the old and new RAM, I have 2.5GB of ECC RAM available to OpenSolaris, and I can do pretty much all I want with the upgraded fileserver.

The only parts of the new fileserver's setup that I have completed but not yet written about are the transfer of data, the setup of automatic snapshots, and scrubbing. I also ran into a weird issue with the CIFS server included in SXCE; not having time to fix it, I switched to the Samba server included in /usr/sfw.
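The Samba switch can be sketched as follows. The smb.conf path, share names, and the SMF service name are all assumptions based on where SXCE installs the bundled Samba; verify them on your own build before relying on this.

```shell
# Assumed paths and service name for the SXCE-bundled Samba (/usr/sfw);
# the share definition below is purely illustrative.
cat > /etc/sfw/smb.conf <<'EOF'
[global]
   workgroup = HOME
   security = user

[media]
   path = /tank/media
   read only = yes
EOF

# Start smbd/nmbd under SMF and confirm the service came online.
svcadm enable svc:/network/samba
svcs svc:/network/samba
```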

I had previously copied the data from the old fileserver's disks into a format that both Linux and Solaris can reliably handle on a local disk: a FAT32 filesystem holding GNU tar archives, split into sub-4GB chunks, with SHA-1 checksums of the archives for verification. This is awkward, and doubly so over a slow USB disk and with lots of hard links (a BackupPC pool and an rsnapshot pool). But it worked fine once I had my large-file (>2GB) issues on the Linux side worked out ("openssl sha1" gave correct results; "md5sum" did not).
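The scheme above can be sketched with a tiny end-to-end example. The file names and the chunk size are illustrative (3900m keeps each chunk safely under FAT32's 4GB file-size limit); the point is the tar-split-checksum-verify round trip.

```shell
#!/bin/sh
# Sketch of the pack-and-verify scheme described above; names are illustrative.
set -e
workdir=$(mktemp -d)
mkdir "$workdir/data"
echo "example file" > "$workdir/data/file.txt"

# Pack the tree into a tar stream and split it into sub-4GB chunks.
tar -C "$workdir" -cf - data | split -b 3900m - "$workdir/backup.tar."

# Record a SHA-1 checksum per chunk for later verification.
( cd "$workdir" && sha1sum backup.tar.* > backup.sha1 )

# On the restore side: verify the chunks, then reassemble and extract.
( cd "$workdir" && sha1sum -c backup.sha1 )
cat "$workdir"/backup.tar.* | tar -C "$workdir" -xf -
```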

After creating a hierarchy of filesystems to reflect the old data (ZFS encourages lots of filesystems by making "zfs create" easy and fast), I restored everything and took snapshots of the initial state. Snapshots are a copy-on-write mechanism that makes it easy to roll back to an earlier state of the filesystem.
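A sketch of that step, with a hypothetical pool name ("tank") and dataset names; the real hierarchy followed the old data's layout.

```shell
# Datasets are cheap in ZFS, so one per logical collection is reasonable
# (names here are illustrative, not my actual layout).
zfs create tank/backup
zfs create tank/backup/backuppc
zfs create tank/media
zfs create tank/home

# Recursively snapshot the freshly restored state; snapshots are
# copy-on-write references, so this costs almost nothing up front.
zfs snapshot -r tank@initial-restore
```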

I had turned on compression for everything but the "media" filesystem, which holds already-compressed music, video, and so on. At this point, I ran regular (once- or twice-daily) scrubs of the storage pool to stress-test the new disks; a scrub is effectively a read verification of all the used storage in the pool.
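In ZFS terms, that looks roughly like the following; "tank" and the dataset name are placeholders as before.

```shell
# Compression is an inherited property, so set it once at the pool root
# and opt the already-compressed media dataset out.
zfs set compression=on tank
zfs set compression=off tank/media

# A scrub reads every allocated block and verifies it against its
# checksum, repairing from redundancy where possible.
zpool scrub tank
zpool status tank   # shows scrub progress and any errors found
```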

Once I had things reasonably organized, I set up zfs-auto-snapshot. This provides a nice way to recover from operator error, even though it isn't a real backup; I will address that need later on, though all my really critical data (e.g. the pictures I've taken) is backed up elsewhere.
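A sketch of enabling it, assuming the per-interval SMF instances and the com.sun:auto-snapshot dataset property that the zfs-auto-snapshot service uses; instance names may differ between builds.

```shell
# Opt the pool's datasets in; the property is inherited by children
# ("tank" is again a placeholder pool name).
zfs set com.sun:auto-snapshot=true tank

# Enable the snapshot schedules you want, one SMF instance per interval.
svcadm enable svc:/system/filesystem/zfs/auto-snapshot:hourly
svcadm enable svc:/system/filesystem/zfs/auto-snapshot:daily
```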

I also set up a monthly "zpool scrub" of both the root and data pools; I think this should be frequent enough to avoid data loss, without introducing excess wear on the drives.
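As root crontab entries, that might look like the following; "rpool" and "tank" are assumed pool names, and the 03:00 start time is arbitrary.

```shell
# crontab -e as root; scrub both pools at 03:00 on the 1st of each month.
0 3 1 * * /usr/sbin/zpool scrub rpool
0 3 1 * * /usr/sbin/zpool scrub tank
```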
