Monthly Archives: October 2008

BLT Salad

This is a rather low-carb meal and, depending on how bad pork fat really is (and on whether your bacon is cured with nitrates/nitrites), possibly a rather healthy one.

This is for a single portion.

Ingredients: salad greens, tomatoes, ground pepper, balsamic vinegar, bacon. (Optional: raw almonds, avocado.)

Cut two big strips (or three smaller strips) of bacon into pieces no larger than 1-2 cm^2. Lay them flat in a frying pan and apply low to moderate heat so that you get nicely cooked bits of bacon and plenty of grease.

While that’s cooking, fill a fairly big (but still single-serving) bowl about two-thirds full with salad greens. Cut tomatoes into bite-sized pieces and lay a nice layer of them on top. Grind some pepper onto this. If you’re adding avocado, add the pieces of a small avocado at the same time.

When the bacon is done cooking, cut off the heat and, optionally, stir in some raw almonds: just enough to cook them slightly, coat them with grease, and cool the grease down a little. Spoon it all (including the bacon fat!) onto the greens, tomatoes, etc. Sprinkle on a tablespoon or two of balsamic vinegar. Stir it around a bit. Enjoy immediately, while it still has that warm, slightly wilted, but very enjoyable texture!

If you really don’t like greasy food, you may want to halve the quantity of bacon fat you add to the salad, but be warned that it may then not be filling enough to be a complete meal on its own.

Consolidating mailservers, leaving gmail, sup, mutt, and what I want in a mail reader

I finally moved the second of the two domains I actually receive mail on from my old co-located server to my new virtual server instance. This simplified things enough that I could easily leave gmail and switch back to reading mail on a unix box where I control everything. I’d been wanting to do this for a long time, not because Google has turned evil (yet), nor because I dislike gmail in general (I love it). However, it offers very little ability to tune the spam filter, and it seems that the extra Received headers from forwarding mail through my own server were resulting in an average of one or two false positives per day. Because of this, I desperately wanted to go back to my own spamassassin setup.
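For the curious, the kind of tuning gmail wouldn’t let me do is a few lines of SpamAssassin config. This is just a sketch of the sort of thing I mean; the network address and rule score below are placeholder values, not my actual settings:

    # Append to ~/.spamassassin/user_prefs (per-user SpamAssassin config).
    cat >> ~/.spamassassin/user_prefs <<'EOF'
    # Trust my own forwarding server, so the Received headers it adds
    # don't count against the message. (Placeholder address.)
    trusted_networks 192.0.2.10
    # Overall spam threshold.
    required_score 5.0
    # Soften an individual rule that misfires on legitimate mail.
    score RDNS_NONE 0.5
    EOF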

I accomplished this yesterday. I also tried out sup, a mail reader I’d been wanting to try for months. In many ways it’s great; it does search a lot better than traditional unix mail readers. However, as part of its design, it only makes changes to its own index; it doesn’t modify the mail sources. This means that if I use it for a long time but then decide to switch away, I’m going to have to find or write a tool to write all that state back to the mail folders. After using it for a couple of hours, I decided that was not tolerable to me at present. I might still consider using it for searching, and only searching; I’m not sure.

So, I switched back to mutt, with a couple of config tweaks over what I used to do. I’ve been converging on a workflow where I bring everything except moderate-to-high-traffic mailing lists into one inbox, and from there, when I check email, I process each message as one of the following:
– mark it as spam and move it out of the inbox
– read it (and possibly act on it), then move it to an archived/old-mail folder
– leave it in the inbox if I really have to defer it
– delete it (actually, move it to a trash folder) if it’s just random crap like cron emails
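A couple of the mutt tweaks that support this look roughly like the following. The folder names and keybindings here are illustrative, not necessarily exactly what I use:

    # Excerpt from ~/.muttrc: Maildir folders, one key per processing outcome.
    set mbox_type=Maildir
    set folder=~/Mail
    set spoolfile=+inbox
    # S: mark as spam and move it out of the inbox
    macro index S "<save-message>+spam<enter>"
    # A: read (and acted on), so archive it
    macro index A "<save-message>+old<enter>"
    # d: random crap like cron mail goes to the trash folder
    macro index d "<save-message>+trash<enter>"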

But, this got me to thinking about what I really want in a mail reader:
– MH/nmh-like separate-commands command-line interface (see the sketch after this list)
– Maildir support (this means nmh won’t work)
– header caching for performance (depending on the overhead of opening/closing the db, this might be a daemon that the user programs connect to)
– modern MIME handling
– fast search engine
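To make the first item concrete, here is the kind of session I have in mind. None of these commands exist; they are a purely hypothetical sketch of an nmh-style interface, but over Maildir:

    mscan +inbox          # list messages in the inbox, like nmh's scan(1)
    mshow 12              # display message 12 with sane MIME handling
    mrefile 12 +old       # move it to an archive folder, like refile(1)
    msearch from:cron     # fast indexed search across all folders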

More Opensolaris home fileserver setup

I am writing this blog post on paper, in the car while Tien drives, in preparation for NaNoWriMo this year, in which I hope to write my entire novel with pen and paper.

I think I want to try to re-learn a cursive style of handwriting, primarily for the supposed speed advantage.

I recently finished setting up the hardware of the new home fileserver. I previously had trouble using more than one memory DIMM. That was bad enough when I had two 512MB DIMMs (a fact I only recently rediscovered, having thought that the 512MB of usable RAM was two 256MB DIMMs), but it was far less tolerable with the two 1GB ECC DIMMs I just purchased. I began to suspect that the problem (the machine either not POSTing or not recognizing more than the first DIMM) might somehow be caused by an incompatibility between the 13-year-old PCI video card (an S3 Vision 968 chipset with the full 4MB of VRAM) and large system RAM, for example due to a design limitation in the video BIOS. (How many PCs in 1995 had over 1GB?) I see that near the end of 1996, Dell was selling servers that you could upgrade to 1GB (IA Wayback Machine link).

Anyway, now that I’m using the old and new RAM, I have 2.5GB of ECC RAM available to OpenSolaris, and I can do pretty much all I want with the upgraded fileserver.

The only parts of the new fileserver’s setup that I have completed but haven’t written about yet are the transfer of data, the setup of automatic snapshots, and scrubbing. Also, I had a weird issue with the CIFS server included in SXCE, and not having time to fix it, I switched to the Samba server (included in /usr/sfw).
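Getting the bundled Samba going was roughly the following. The config path and SMF service name are as I remember them from the /usr/sfw packaging, so verify them on your build, and the share definition is only an example:

    # Minimal /etc/sfw/smb.conf sharing one ZFS filesystem.
    cat > /etc/sfw/smb.conf <<'EOF'
    [global]
        workgroup = HOME
        security = user

    [media]
        path = /tank/media
        read only = no
    EOF
    svcadm enable network/samba   # start Samba via SMF
    svcs network/samba            # confirm it came online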

I had previously copied the data from the old fileserver’s disks into a format that both Linux and Solaris can reliably handle on a local disk: a FAT32 filesystem, with the data in GNU tar archives split into sub-4GB chunks, plus SHA-1 checksums of the archives for verification. This is awkward, and doubly so over a slow USB disk and with lots of hard links (a BackupPC pool and an rsnapshot pool). But it worked fine, once I had my large-file (>2GB) issues on the Linux side worked out (“openssl sha1” gave correct results; “md5sum” did not).
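Concretely, packing and restoring looked something like this (paths and archive names are illustrative; the chunk size is chosen to stay under FAT32’s 4GB per-file limit):

    # On the old (Linux) side: pack a tree into sub-4GB chunks, then checksum.
    tar -cpf - /srv/backuppc | split -b 4000m - /mnt/usb/backuppc.tar.
    openssl sha1 /mnt/usb/backuppc.tar.* > /mnt/usb/backuppc.sha1

    # On the Solaris side: verify the chunks, then unpack. Hard links survive
    # because they are encoded inside the single logical tar stream.
    openssl sha1 /mnt/usb/backuppc.tar.*   # compare against backuppc.sha1
    cat /mnt/usb/backuppc.tar.* | gtar -xpf - -C /tank/backuppc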

After creating a hierarchy of filesystems to reflect the old data (ZFS encourages lots of filesystems by making “zfs create” easy and fast), I restored everything and made some snapshots of the initial state; a snapshot is a copy-on-write mechanism that makes it easy to roll back to an earlier state of the filesystem.
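A condensed version of those steps, with example pool and filesystem names:

    zfs create tank/home
    zfs create tank/home/joe
    zfs create tank/media
    # One recursive snapshot of the freshly restored state:
    zfs snapshot -r tank@initial-restore
    # Undoing a later mistake is then a single command, e.g.:
    #   zfs rollback tank/home/joe@initial-restore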

I had turned on compression for everything but the “media” filesystem (already-compressed music, video, etc.). At this point, I ran regular (once- or twice-daily) scrubs of the storage pool to stress-test the new disks; a scrub is effectively a read verification of all the used storage in the pool.
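Both of those are one-liners; since ZFS properties are inherited, I only had to override compression on the one filesystem (names are examples again):

    zfs set compression=on tank          # inherited by everything under tank
    zfs set compression=off tank/media   # already-compressed music and video
    zpool scrub tank                     # read-verify all used storage
    zpool status -v tank                 # watch scrub progress and errors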

Once I had things in a reasonable organization, I set up zfs-auto-snapshot. This provides a nice way to recover from operator error, even though it isn’t a real backup; I will address that need later on, though all my really critical data (e.g. the pictures I’ve taken) is backed up elsewhere already.
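On SXCE the auto-snapshot service is driven by SMF; my setup was along these lines, though the exact instance names can vary between zfs-auto-snapshot versions:

    # Opt the data pool in to automatic snapshots.
    zfs set com.sun:auto-snapshot=true tank
    # Enable the schedules I care about.
    svcadm enable auto-snapshot:daily
    svcadm enable auto-snapshot:weekly
    # See what it has taken so far.
    zfs list -t snapshot -r tank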

I also set up a monthly “zpool scrub” of both the root and data pools; I think this should be frequent enough to avoid data loss, without introducing excess wear on the drives.
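The scheduling itself is plain cron; something like these two entries in root’s crontab, with the dates staggered so the scrubs don’t overlap (the pool names are examples):

    0 3 1 * * /usr/sbin/zpool scrub rpool
    0 3 15 * * /usr/sbin/zpool scrub tank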