Browser Wars

After using Chrome for years, I figured I’d give Firefox another try, just to give it a fair shake. Chrome still works well enough for me, but it’s a major memory hog that quickly sucks up all the RAM on my laptop, and I’m a bit concerned about privacy issues with it as well.

Unfortunately, Firefox still has a quirk that really annoyed me back in the day: when I open a forum thread with a lot of images, it jumps to the last-unread-post anchor immediately, but it doesn’t hold that relative position as the images load in, so the page effectively scrolls back up out from under me. Quite often I go to read the new posts in a thread and end up positioned somewhere among posts I’ve already read, not at the actual first new post. And I read an awful lot of forum threads like this…

It also has trouble with Twitch streams, which seem a lot choppier under Firefox and sometimes get into a state where the audio becomes staticky and stays that way until I reload the tab.

These are annoying enough that I’m probably going to wind up going back to Chrome, alas. I can at least live with having to restart it more frequently to free up memory.

Who Needs Blue Teeth Anyway

I’ve needed to upgrade some audio equipment; my trusty old Sony MDR-CD380 headphones lasted for ages, but have been cutting out in the right ear, and the cable’s connection feels a bit flimsy now. I also needed a proper microphone to replace the ancient webcam that I’d still been using as a “mic” long after its video drivers had stopped working with modern OSes.

I normally anguish for ages over researching models, trying to find the perfect one, but I cut that research short this time. A lot of the “best” gear is out of stock pretty much everywhere, and I don’t want to rely on ordering from Amazon too much. Instead, I figured I’d look at what was in stock in stores in town and try to get something good that was actually available locally. So, after a bit of stock-checking and some lighter research, I finally left my neighbourhood for the first time in this pandemic and headed to a Best Buy.

For the microphone, I picked up a Blue Yeti Nano. Not the best mic ever, but readily available and perfectly adequate for my needs. From some quick tests, it already sounds waaaay better than that old webcam I’d been using. Clearer and crisper, and almost no background hiss, which had been awful with the webcam. It doesn’t have any of the advanced pickup patterns, just cardioid and omnidirectional, but it’s highly unlikely that’ll ever matter to me. It’s not like I’m doing interviews in noisy settings, where you’d really want the “bidirectional” pattern, for example.

For the headphones, I wound up picking up the Sennheiser HD 450BT. I wasn’t really originally considering Bluetooth headphones, since I didn’t want to worry about pairing, battery lifetime, etc., but this model appealed to me as the best of both worlds, as we’ll see in a bit.

I am actually a bit disappointed with the Bluetooth aspect of it. It mostly works…except that there’s a tiny bit of lag on the audio. Not really noticeable most of the time, except when you’re watching someone on Youtube and you can definitely notice a bit of desync between their mouth and what you’re hearing. I suspect my particular really-old Mac hardware/OS combination doesn’t support the low-latency Bluetooth mode, but it’s hard to verify. That wasn’t all, though. If I paused audio for a while, it would spontaneously disconnect the headphones, requiring me to manually reconnect them in the Bluetooth menu before resuming playback, which is really annoying. Playback also becomes really choppy when the laptop gets memory and/or CPU-starved, which happens fairly easily with Chrome being a huge memory hog. None of these are really the fault of the headphones themselves, it’s more the environment I’m trying to use them in, so I don’t think any other model would have done any better.

But, fortunately, I’m not entirely reliant on Bluetooth. The other major feature of these headphones is that you can still attach an audio cable and use them in a wired mode, not needing Bluetooth at all. They still sound just as good, and don’t even consume any battery power in this mode, so I’ll probably just use them this way with the laptop and desktop. I’ll leave the Bluetooth mode for use with my phone and TV, which should work far more reliably.

Speaking of my phone though, the other disappointment is that some of the features of the headphones like equalizer settings can be managed via a mobile app…which requires a newer version of iOS than I have. I could upgrade, but I’ve been reluctant to because that would break all the 32-bit iOS games I have. Dangit. I’ll probably have to upgrade at some point, but I don’t think this will be the tipping point just yet; the headphones still work fine without the app.

These headphones also have active noise cancellation, but I haven’t really had a chance to test it yet. Just sitting around at home, it’s hard to tell whether it’s even turned on or not.

So, overall I’m pretty happy with them so far. The Bluetooth problems aren’t really their fault and aren’t fatal, they sound pretty good, and they’ve been comfortable enough (not quite as comfy as the old Sonys, but those were much bigger cups).

Angry At Clouds

I’ve had a Youtube Premium subscription for a while now and it’s definitely nice not having ads on videos anymore, but I mainly wanted it to check out the Youtube Music service for my music streaming needs.

I have my own music library of ripped CDs and other files, of course, but I’d been turning into one of those old farts who still mainly listen to their old music from 20 years ago and have no clue about much outside that comfort zone. YT Music has a “Discover Mix” feature where it’ll recommend new music to you based on what tracks you’ve marked ‘liked’, and after tagging a bunch of my regular music, the recommendations so far have been pretty good and I’ve found a lot of good, newer music. It is kind of electronic-heavy though, which might be some kind of feedback loop where having tagged a bunch of a genre starts biasing what it presents, which then biases how many of them you tag as ‘liked’, which further biases what it presents… I’ll have to see if manually finding and tagging some more stuff like industrial and rock balances things out.

However, the big problem with it is the interface. It’s a web site, so of course you have to keep it open in a web browser, closing the browser stops the music, it can get choppy if the browser’s heavily loaded, etc. All the usual drawbacks of being a web app.

It’s also glitchy, though, with new glitches appearing and disappearing all the time. At one point, my ‘liked’ playlist was filled with non-music Youtube videos I’d also happened to hit ‘like’ on. Songs are often left with “ghost” pause or like/dislike buttons on their row when they’re not the selected row. Most recently, anytime I started playing a song, the usual song information and playback and volume controls at the bottom of the page would appear for a split second and then vanish, leaving no way to control it other than starting a different song from the start. I’m often left wondering “okay, what’s going to break this week…”

But right now, my biggest frustration is probably with trying to manage my collection. You have a “Library” with all of the artists, albums, and individual songs you’ve manually added to it, but when you’ve been using it the way I have, hitting ‘like’ on a bunch of tracks it recommends, most of your music is going to be in the “Your likes” playlist. After you’ve been doing this for a while, that list gets unwieldy. There’s no way to sort the list. There’s no way to filter or search just a particular artist or album or song name. Scrolling through the list takes forever as it regularly pauses for 4 or 5 seconds to load the next chunk of songs. You can’t even invert the order of the list, so the songs you liked early on are buried deeply in it. You can click on the controls at the bottom to pop up the album art for the current song, but closing the art puts you at the top of the list of songs, not where you left off, so now you have to scroll back and scroll and scroll… You can’t add the songs to your main library from this list individually, let alone in bulk; you have to go to the three-dot menu, select Go To Album/Artist which takes you to a new page, and then add them from there. (Update: They did change this so the artist of a song you mark as ‘liked’ is automatically added to your library, but I’m not sure I want all of them in there either.) You can make playlists, but there’s a complete lack of “smart playlists” that would let me play my overall favourites by playcount, songs I haven’t played recently, grouped by genre, etc., like I can do in Clementine.

I guess it would be fine if I were to put it on shuffle mode and never worry about even trying to “manage” the list, but I do get these moods for some song or cluster of songs from a few months ago and then I have to dig through the list and it’s just awful.

Man, now I miss WinAmp…

To end on a positive note, here are a few of the songs I’ve discovered through YT Music:

Back In The Stone Age

Woke up to a dead router this morning (RIP ASUS Black Knight, you served well) but at least I was able to find an old one in the old pile o’ parts to sub in for now. Even though it only has 100Mb ports and too-insecure-to-actually-use-802.11b…

I’m not sure what to do now though, since I’ve been thinking of upgrading my internet service, and I think either of the potential options is going to force its own all-in-one DOCSIS/Fibre router on me anyway. I’ve always been kind of wary of those, since I’ve always used custom firmware for advanced features like static DHCP, dynamic DNS updates, QoS, bandwidth monitoring, etc., and having to use their router might mean giving some of that up. Time to do some research.

From C to Shining C++

I’ve had some extra time to do a bit of code cleanup at work, and I decided a good refactoring of string usage was way overdue. Although this code has been C++ for a long time, the original authors were either allergic to the STL or at least uncomfortable with it and hardly ever used it anywhere; this was originally written a good 15-20 years ago, so maybe it was still just too ‘new’ to them.

In any case, it’s still riddled with old C-style strings. They aren’t inherently bad, but they do tend to lead to sloppy code like:

char tempMsg[256];
setMsgHeader(tempMsg);
sprintf(tempMsg + strlen(tempMsg),
        "blah blah (%s) blah...", foo);

That’s potentially unsafe, so it really should use snprintf instead:

char tempMsg[256];
setMsgHeader(tempMsg);
snprintf(tempMsg + strlen(tempMsg),
         sizeof(tempMsg) - strlen(tempMsg),
         "blah blah (%s) blah...", foo);

Ugh, that’s ugly. Even after making it safer, it’s still imposing limits and using awkward idioms prone to error (quick, is there a fencepost error in the strlen calculations?). So I’ve been trying to convert a lot of this code to use std::string instead, like:

std::string tempMsg = getMsgHeader();
tempMsg += strprintf("blah blah (%s) blah...", foo); 

which I feel is far clearer, and is also length-safe (strprintf is my own helper function which acts like sprintf but returns an std::string).
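For anyone curious, here’s a minimal sketch of one way to build such a helper on top of vsnprintf (this isn’t my exact implementation; a boost::format-based version shows up further down):

#include <cstdarg>
#include <cstdio>
#include <string>
#include <vector>

// printf-style formatting that returns an std::string (sketch)
std::string strprintf(const char *format, ...)
{
    va_list args, args_copy;
    va_start(args, format);
    va_copy(args_copy, args);

    // First pass: ask vsnprintf how long the formatted output would be.
    int len = vsnprintf(nullptr, 0, format, args);
    va_end(args);
    if (len < 0) {              // formatting error
        va_end(args_copy);
        return std::string();
    }

    // Second pass: format into a correctly sized buffer.
    std::vector<char> buf(len + 1);
    vsnprintf(buf.data(), buf.size(), format, args_copy);
    va_end(args_copy);

    return std::string(buf.data(), len);
}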

It’s usually fairly straightforward to take a C string and convert it to the appropriate C++ equivalent. The main difficulty in a codebase like this is that strings don’t exist in isolation. Strings interact with other strings, get passed to functions, and get returned by functions, so anytime you change a string, it sets off a chain reaction of other things that need to be changed or at least accommodated. The string you just changed to an std::string is also an output parameter of the function? Welp, now the calling function needs to be converted to use std::string. Going to use .c_str() to call a function that takes plain C strings? Oops, it takes just ‘char *’ and now you gotta make it const-correct…

To try and keep things manageable, I’ve come up with the following guidelines for my conversion:

Don’t try and convert every string at once; triage them.

A lot of strings will be short and set once, used, and then discarded, and they’re not really so much of a risk. Instead, focus on the more complex strings that are dynamically assembled, and those that are returned to a caller. These are the cases where you really need to be wary of length limits with C strings, and you’ll gain a lot of safety by switching to std::string.

Functions that take an existing ‘const char *’ string pointer and only work with that can be considered lower-priority. Since they’re not altering the string, they’re not going to be the source of any harm, and changing them could risk introducing regressions.

Use wrapper functions to ‘autoconvert’ strings.

If you have functions that already take ‘const char *’ strings as parameters and work perfectly fine, you don’t necessarily need to convert the whole thing right away, but it can help to have a wrapper function that takes an std::string and converts it, so that you can use std::strings in the callers.

void MyFunc(const char *str)
{
    ...
}

void MyFunc(const std::string& str)
{
    MyFunc(str.c_str());
}

Now functions that call MyFunc can be converted to use std::string internally and still call MyFunc without having to convert MyFunc as well, or having to pepper the caller with .c_str() calls.

This can get tricky if you have functions that take multiple string parameters and you still want to be able to mix C-style and std::string as parameters. If it’s only a couple of parameters you can just write all possible combinations as separate wrappers, but beyond that it gets unwieldy. In that case you could use template functions and overloading to convert each parameter.

#include <cstdio>
#include <string>

void MyFunc(const char *foo1, const char *foo2, const char *foo3, const char *foo4)
{
    printf("%s %s %s %s\n", foo1, foo2, foo3, foo4);
}

const char *ToC(const char *x) { return x; }
const char *ToC(const std::string& x) { return x.c_str(); }

template<typename S1, typename S2, typename S3, typename S4>
void MyFunc(S1 foo1, S2 foo2, S3 foo3, S4 foo4)
{
    MyFunc(ToC(foo1), ToC(foo2), ToC(foo3), ToC(foo4));
}

int main()
{
    MyFunc("foo", "bar", "baz", "blah");
    MyFunc("aaa", std::string("fdsfsd"), "1234", "zzzz");
    MyFunc(std::string("111"), "222", std::string("333"), std::string("444"));
    return 0;
}

It’s still kind of awkward since it needs helper overload functions polluting the namespace (I’d prefer lambdas or something nested function-ish, but I can’t see a good way to do it that isn’t ugly and difficult to reuse), but it avoids having to write 2^N separate wrappers.

Don’t get too fancy while converting

std::string will let you be more flexible when it comes to string manipulation, and you’ll probably no longer need some temporary string buffers, some string assembly steps can be combined, sanity checks can be removed, other string operations can be done more efficiently in a different way, etc. But if you try and make these changes at the same time you’re converting the string type, you could risk messing up the logic in a non-obvious way.

Make the first pass at string conversion just a straight mechanical one-to-one conversion of equivalent functionality. Once that’s done and working, cleaning up and optimizing the logic can be done in a second pass, where it’ll be clearer if the changes maintain the intent of the code without being polluted by all the type change details as well.

There are some other caveats to watch out for, too:

Beware of temporaries.

Wherever you still need C-style strings, remember that a returned .c_str() pointer only remains valid as long as the std::string still exists and is unchanged, so beware of using it on temporary strings.

const char *s = (boost::format("%s: %s") % str1 % str2).str().c_str();
SomeFunc(s); // WRONG, 's' is no longer valid

However, the following would be safe, since the temporary string still exists until the whole statement is finished, including the function call:

SomeFunc((boost::format("%s: %s") % str1 % str2).str().c_str());

Of course, most people already knew this, but the odds of inadvertently introducing an error like this go way up when you’re changing a whole bunch of strings at once, so it’s worth keeping in mind.
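And the simplest way to stay safe is just to give the formatted string a name, so it outlives any use of the pointer:

std::string s = (boost::format("%s: %s") % str1 % str2).str();
SomeFunc(s.c_str()); // fine: 's' is still alive during the call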

Variadic functions are annoying

Unfortunately, ye olde variadic functions don’t work well with std::string. They’re most often used for printf-style functions, but an std::string doesn’t automatically convert to a C-style string when passed as a variadic parameter, so you have to remember to use .c_str() whenever passing in an std::string.
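For example, this won’t do what you want; GCC and Clang both reject it outright, and it’s undefined behaviour regardless:

std::string name = "world";
printf("Hello %s\n", name);          // wrong: passes the std::string object itself through '...'
printf("Hello %s\n", name.c_str());  // right: pass the underlying C string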

There are non-variadic alternatives like boost::format that you might want to prefer over printf-style formatters, but if you’re stuck with printf-style functions for now, make sure you enable compiler warnings about mismatched printf parameters and set function attributes like GCC’s __attribute__ ((format …)) on your own printf-style functions.

If you have boost::format available, and a C++17 compiler, but don’t want to convert a bajillion printf-style parameter lists, you could use a wrapper like this with variadic templates and fold expressions:

template<class... Args>
std::string strprintf(const char *format, Args&&... args)
{
    return (boost::format(format) % ... % std::forward<Args>(args)).str();
}

With this you can then safely pass in either C-style or std::string values freely. Unfortunately then you can’t use printf-format-checking warnings, but boost::format will be able to do a lot more type deduction anyway.
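For instance, with that one wrapper, all of these just work:

std::string who = "world";
std::string a = strprintf("hello %s", "there");            // plain C string
std::string b = strprintf("hello %s", who);                // std::string, no .c_str() needed
std::string c = strprintf("%s: attempt %d of %d", who, 2, 5);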

Performance is a concern…or is it?

C-style strings have pretty minimal overhead, and with std::strings you’ve now got extra object memory overhead, reallocations, more temporary objects, generated template functions, etc. that might cause some performance loss. Sticking with C-style strings might be better if you’re concerned about performance.

But…with modern optimizers and move constructors, return value copy elision, small string optimizations in the runtime, etc., the performance penalty might not be as bad as you think. Unless these strings are known to be performance-critical, I think the safety and usability value of std::strings still outweighs any slight performance loss. In the end, only profiling can really tell you how bad it is and it’s up to you to decide.

Ah, C++…

(edit: Of course it wasn’t until after writing this that I discovered variadic macros, available in C++11…)

While working on some old code, I ran into a bunch of rather cumbersome printf-style tracing macros where you had to make sure the name of the macro matched the number of parameters you wanted to use. E.g., use TRACE1(“foo: %s”, s) for one format parameter, use TRACE2(“foo: %s %d”, s, x) for two parameters, etc. It was always annoying having to make sure you picked the right name, corrected it whenever the parameters changed, and so on.
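Roughly, the pattern was a family of macros like this (a sketch, not the exact originals), each forwarding to the real tracing function mentioned below:

#define TRACE1(fmt, a1)         GlobalTraceObj::CustomTraceFunc(fmt, a1)
#define TRACE2(fmt, a1, a2)     GlobalTraceObj::CustomTraceFunc(fmt, a1, a2)
#define TRACE3(fmt, a1, a2, a3) GlobalTraceObj::CustomTraceFunc(fmt, a1, a2, a3)
// ...and so on, one macro per argument count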

I can understand why someone created these macros in the first place. They exist as a shorthand for another variadic tracing function with a much longer name, to keep tracing calls short and snappy, or to redirect them to nothing if you want tracing disabled, but that presents a few challenges. An inline function would preserve the preprocessor-controlled redirection, but you can’t simply have a function with a shorter name call the other function because variadic functions can’t pass on their arguments to another variadic function. You could just create a #define to map the short name to the longer name, like “#define TRACE GlobalTraceObj::CustomTraceFunc”, but that risks causing side effects in other places that use ‘TRACE’ as a token, and doesn’t let you eliminate the call if desired. A parameterized macro avoids that problem, but only works for a specific number of parameters, and hence you wind up needing a different macro for each possible number of arguments.
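(This is exactly the problem the variadic macros from the edit note above would have solved; something like the following keeps the short name, forwards any number of arguments, and can still compile tracing out entirely:)

#ifndef NOTRACE
#define TRACE(...) GlobalTraceObj::CustomTraceFunc(__VA_ARGS__)
#else
#define TRACE(...) ((void)0)
#endif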

I figured there had to be a better way and hey, now that I have newer compilers available, why not try C++11 variadic templates? They do let you pass variadic arguments on to another variadic function, which is exactly what’s needed, and it can still be conditional on whether tracing is enabled or not!

template<typename... Args>
inline void TRACE(const char *format, Args... args)
{
#ifndef NOTRACE
    GlobalTraceObj::CustomTraceFunc(format, args...);
#endif
}

And it worked perfectly; now I could just use TRACE(…) regardless of the number of parameters.
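For example (the variables here are made up, just to show the shape of it):

TRACE("loading config from %s", path.c_str());   // an std::string still needs .c_str()
TRACE("retrying %s (%d of %d)", host, attempt, maxAttempts);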

Except, I got greedy… Another nice thing is to have the compiler check the printf-style format string against its arguments for consistency, which you can do with compilers like GCC by putting a function attribute on the main tracing function:

static void CustomTraceFunc(const char *format, ...)
#ifdef __GNUC__
    __attribute__ ((format (printf, 1, 2)))
#endif
    ;

I wanted that same checking on the TRACE wrapper function, but it turns out that at least at the moment, you can’t apply that same function attribute against a variadic template function; GCC just doesn’t recognize the template arguments as the right kind of variadic parameters that this attribute works on. Oh well.

I really wanted that consistency checking though, so in the end I abandoned the variadic template approach and just wrote TRACE as a plain old variadic function, which meant having to modify the other tracing functions to use va_lists instead, but that wasn’t too big a deal. If I didn’t also have control over those other functions, I would have been stuck again.

#ifdef __GNUC__
    __attribute__ ((format (printf, 1, 2)))
#endif 
inline void TRACE(const char *format, ...) {
#ifndef NOTRACE
    va_list args;
    va_start(args, format);
    GlobalTraceObj::CustomTraceFuncV(format, args);
    va_end(args);
#endif
}

void GlobalTraceObj::CustomTraceFuncV(const char *format, va_list args)
...

Wrestling Hercules

For some reason I got it into my head last weekend to set up a Linux s390x instance, using the Hercules emulator. We do some mainframe stuff at work, and we have a couple instances of z/Linux already, but they’re in Germany and managed by another team, so maybe it would be neat to have our own local instance, even if emulated? Although I’ve done some work on the mainframe before (mainly on the USS side), I’m hardly an expert on it, but how hard could it be?

So fine, I set up a Fedora Server 30 VM as the emulator host and installed Hercules onto it and set up a basic configuration, per various guides. There are a handful of s390x Linux distros but I figured that Debian would make for a nice, generic baseline instance, so I grabbed the Debian 9.9 s390x install DVD image.

Problem #1: It wouldn’t boot. IPLing from the DVD image just spit out a few lines of output and then hung. After some digging, this had been noted on the Debian bug mailing list, but with no resolution or workaround.

Figuring that maybe the distro was too new for the emulator, I grabbed a Debian 7 install DVD (there was no listing for an s390x version of 8 on the main download pages) and hey, it actually booted and started going through the install process.

Problem #2: It doesn’t actually install from the DVD. Even though it has all of the packages on it, the s390x installer still goes and gets them from the network, and the networking wasn’t working. It could ping out to other networks, but DNS and HTTP wouldn’t work. After way too much fiddling around, I finally figured out it was an iptables ordering problem: ‘iptables -A …’ appends rules to the end of the chain, after the default reject rule where they never get hit, while ‘iptables -I …’ inserts them at the top. Switching the forwarding commands to -I worked around that and got networking going.

Problem #3: The mirrors didn’t have Debian 7. Unfortunately I didn’t realize beforehand that the Debian 7 packages were no longer available on the mirror sites, so the installer couldn’t proceed. With a bit of searching, I found that there actually was a Debian 8 disc for s390x though, so I got that and gave it a try.

Problem #4: The mirrors don’t really have Debian 8, either. At least not the s390x packages, just the other main platforms. At this point it looked like there just wasn’t a path to get any version of Debian working, so I started trying some of the other distros.

Problem #5: The other distros aren’t quite as easy to install. Newer Fedora releases hung, older Fedora releases didn’t quite behave as expected and it was unclear how to make installation proceed, and openSUSE was still experimental and unclear how to install it. I even tried Gentoo, which seemed to work for a while after starting up before hanging at a point where it was unclear if it was grinding away at something intensive or not, and I let it sit there for two days before giving up on it. So yeah, not much luck with the other distros either.

Searching around for more info, I found that there were some newer versions and forks of Hercules that potentially fixed the hang problem, so it was time to give Debian 9.9 another try, using the Hyperion fork of Hercules.

Problem #6: Hyperion’s still a bit buggy. It compiled and installed just fine, but some of the permissions seemed incorrect and I had to run it as root. Even before IPLing it was extremely sluggish (sending the load average up to over 8), and trying to IPL the Debian disc just froze in an even earlier spot. So much for that.

Then I gave the ‘spinhawk’ fork of Hercules a try, and…hallelujah, everything’s gone smoothly since. It IPLed from the Debian image fine, it could find the mirrors and download packages, partition the disk, etc., and I now have a fully installed and working s390x Linux system.

Was it worth the hassle? Eh, probably not, I’m still better off doing any coding for work on our actual non-emulated z/Linux systems. It was interesting just to experiment and play around with for a bit, though.

Lack of Mac

I’m still using a 2010 Macbook Pro as my main day-to-day system, and I’ve been meaning to upgrade for a while now since both RAM and disk space have been getting tight, and although I could slap a bigger hard drive in it, the RAM can’t be upgraded any further. May as well just upgrade the whole shebang at once anyway.

Except…I haven’t been too happy with the available choices lately. The newer MBPs have a new type of ‘butterfly’ keyboard that’s widely hated and fairly fragile, I don’t know if I’d like the lack of a physical ESC key (especially as a ‘vi’ user), you need dongles for fairly common connection types now, etc. But, perhaps worst of all, they’re just really friggin’ expensive!

I paid about $2200 for my current MBP, and then upgraded the memory and hard drive later on, but they’re non-upgradable now, so you have to buy the long-term specs you want right up-front, and those upgrades are ludicrously expensive. I currently have a 500GB hard drive, but I’m always running out of space and cleaning things up, so I’d like to go to 1TB for a new system. Bumping the storage up to 1TB adds $720 to the price, even though a decent 1TB M.2 module costs around $300; that’s a hell of a markup!

Putting together a new MBP that would actually be a one-step-up upgrade for RAM and hard drive, the total price starts at $3900. If I drop down to a 13″ screen (which I’d rather not do for a system I use so heavily), it’s still around $3400.

That’s just too much for me right now, especially if it’s going to have an awful keyboard. It’s not like they’re going to get any cheaper in the future though, so I’m still not sure whether to just suck it up, wait even longer, or just start looking at Windows laptops. There are plenty of Lenovos with half-decent specs in the $2200-$2500 range…

100% Guaranteed Genuine Drivel

After putting it off way too long, I finally have a TLS certificate set up for all the web stuff I host here, courtesy of Let’s Encrypt and Certbot.

It’s not like I’m particularly worried about attackers hijacking the site (because it has so much influence…), but the winds are blowing towards HTTPS-everywhere, and no point in getting left behind. Certbot makes it pretty painless, at least, though I’m sure there are some older broken images or links that I’ll have to clean up over time. Any non-HTTPS links on or to the site should automatically redirect to the appropriate HTTPS URL.

Time To Kill My Brain

Well, after 10+ years of using my dinky little computer monitor as a “TV” (meant to be a temporary measure after my old tube TV broke), I finally have a proper TV again. It’s three weeks late thanks to shipping shenanigans, but I’d rather not dwell on that…

It’s not OLED, which is still kinda pricey, but it is 55″, 4K HDR, and has local dimming, so the quality’s really nice and it should remain futureproof for quite a while yet. I may have to do some calibration, but God of War already looks amazing on it.

Right now I have the cable box, PS3, and PS4 hooked up to it, and I also want to hook up the PC so I can play PC games on it, but I’ll need to get a longer HDMI cable for that. I don’t have the 360 hooked up since I don’t have the right cables (it’s one of the early non-HDMI 360s), and I could hook up the Wii with component cables, but they’re not a priority right now. If I run out of HDMI ports, I have an HDMI auto-switcher so some of the consoles could share a port, if needed.

It also finally let me clean up a whole mess of audio cabling. I used to have to split out the audio signals from all the boxes, route them through a switcher, and then through a PC, resulting in a ton of RCA audio cables and proprietary console A/V connectors strewn all over, but now everything’s just HDMI so all that cabling is gone.

The only quibble so far is that the UI (Android TV) is kinda clunky and slow, it can take up to 5 seconds for some of these menus to come up, but hopefully I won’t have to use it too much. Oh, and the motion smoothing they enable by default is total garbage (Mad Max Fury Road happened to be on TV, and it feels so weird with it enabled), but it’s easily disabled.

Now maybe I’ll actually watch more TV and movies now that they don’t have to fight with the PC…

Internal, External, Who Cares

The Dell I ordered to replace my old Linux box arrived, and after spending a few hours setting it up, it’s amazingly tiny, whisper-quiet, and works pretty well! It spooked me a bit when it suddenly started making sound, as I didn’t realize it has a speaker built right into the case.

It also doesn’t have room for the 8TB internal storage drive. Whoops! I probably misread the storage configuration when I ordered it; it comes with two internal 2.5″ bays and no 3.5″ bay. The bays were super easy to get at, though, so I took the old SSD and added it alongside the one the system came with.

I think I can still work with this, though. It’s so small that there’s plenty of space to put the storage drive in an external enclosure instead. Just from testing with the two external drives I have right now, I can get 150MB/s out of the USB3 ports (versus the 15-20 I was getting on the old system), so speed certainly won’t be a problem. It’ll just be another wall-wart to deal with…

And after using Ubuntu for probably close to 10 years, I’m giving Fedora a try instead. Just for something different, and I’d been seeing recommendations for it in a few places now. I’ll still mostly be using it via SSH, so there probably won’t be too much difference in practice.

The Anti-Climax

Well, the rescue finished, and after letting e2fsck have its way with the recovered data, it looks like only a handful of files were affected: a Windows ISO I can redownload from MSDN if needed (it was Windows 8.1, so…not likely), a couple of game soundtrack files, and five ripped DVDs, all of which were fairly recently ripped ones so the discs are actually still on hand and hadn’t been packed away in the closet yet. The game soundtrack was the only one I actually needed to restore from backup.

Still, I’m a bit paranoid about corruption, so I’m also running a big ‘diff’ between the recovered data and the backups, just to make sure there isn’t some silent corruption in them. That’ll only take another few days to run. Then I have a script I can use to test the integrity of various file types. And then I need to clone all this so that this isn’t my only copy of the recovered data lest another drive fail… (Update: It has found at least a few more corrupted files, so far. Paranoia works!)

This is all still on external drives though, so I still need to get a new internal storage drive. But I’ve also been meaning to replace this old Linux box, as it’s just a bunch of old parts cobbled together as a temporary workaround when a motherboard failed in the old system. It’s just been ‘temporary’ for a few years now… I’m not feeling too keen on assembling my own system right now though, so instead I ordered a Dell T3420 SFF system. It’s a couple years old, but I don’t need anything new and fancy for this role, and it was reasonably priced enough. The small case size (7.8L) will be a nice change from the Antec P180 behemoth (54L!) currently sitting on my desk. I wish you could order them without any hard drives at all though, as I’ll be replacing them with my own anyway.

Cleanup

The ‘ddrescue’ is still running (current ETA: 2.5 more days), and there’s some good news in that it’s stopped logging errors and so far there’s only around 60MB worth of missing data out of 4TB. I’ve also received a couple of 8TB external drives so I have some spare room now to start parts of the recovery process, like getting data out of the ‘dar’ backups.

Fortunately, test runs on the backup drive show that the backups are not affected by the bad block I found on that drive. Unfortunately, I also realized that since the backup was in progress when the drive failed, I’m not sure if maybe that backup run picked up and stored some corrupted data as it was running.

So, I have the following data to work with:

  • A fully-intact base backup that dates from July 2017.
  • The partial, might-possibly-have-some-corruption differential backup from just before the failure.
  • The data salvaged from the failing drive.

My plan so far is that once the ‘ddrescue’ is done, I take the bad blocks list and run e2fsck against the salvaged data. This should get me a list of files that were hit by the bad blocks and could not be recovered. For each of those files (hopefully the list isn’t too long!), try and find a copy in the base backup. If it’s not in the base backup, or I really need a newer version than that one, get it from the differential backup and try and eyeball it for corruption. If any of the ripped DVDs gets hit (quite likely, they took up a lot of the space), note which one and get ready to re-rip it.

Too bad this is all going to happen at USB2 speeds though…

Being Less Clever

So, while my drive recovery grinds away (current ETA: 8.5 days) and I wait for the arrival of new drives, I need to step back and rethink how I do my data storage and backup to begin with.

Previously, I thought I had a fairly decent setup: one big storage drive that did automatic weekly differential backups via ‘dar’ to an external USB drive, rotating and starting a new full backup once a month while still keeping one previous backup set around, and every once in a while I would physically swap the external drive with an offsite one, for disaster recovery.

That sounds alright, and it worked well for a while, but the problem was that I let the backup solution fall behind my main storage needs, and started compromising on things. As the amount of data I had grew, I started letting it run less often so that the backup drive wouldn’t fill up so quickly. Soon there just wasn’t room for both the previous and current backup sets, so I got rid of the previous one. I had to do full backups from scratch more often because there just wasn’t much room left for the differentials. I got lazier about how frequently I did the offsite drive swap. And eventually, I’d upgraded my storage drive to 4TB but the backup drive was still a mere 1.5TB, so I had to start excluding stuff like the ripped DVDs from the backup because they just wouldn’t fit. This all left my backups in a more fragile state than I’d realized, leading to my current hassles.

Once this is all cleaned up, I obviously need something better. Something simpler, less prone to error and laziness and compromise.

Right now, my basic line of thought is:

  • Still have one big internal storage drive, an external backup drive, and an offsite replica.
  • The backup drives must always be at least as large as the storage drive.
  • Instead of differentials, the backup drive is a straight mirror of the storage drive.
  • The mirroring is done automatically on a schedule via ‘rsync’, instead of my horribly convoluted wrapper script around ‘dar’.

This should avoid most of the aforementioned problems except perhaps the offsite swapping laziness, which I’m just going to have to do better at. The data is guaranteed to fit, there’s no need to compromise or exclude anything, the syncing can be done automatically safely, and in the event of drive failure, the backup drive can swap straight in as the storage drive. The downside is that plain mirroring will also mirror any mistakes I make if I don’t realize it and catch them before the next sync occurs, but hopefully most mistakes will be heat-of-the-moment ones I can restore immediately, and in the worst case I can still resort to the offsite drive for an older copy.

Rescue Me

Well the plain ‘dd’ on my failing drive was a bust, as a day later the transfer rate had dropped to tens of kB/s, and it would have taken forever at that rate.

So, instead, I’m using ‘ddrescue’, which is better suited to this kind of rescue as it can skip ‘slow’ areas of the disk, retry bad blocks multiple times from different directions, and will pick up where it left off if interrupted. I can’t compress the output from it though, since it’s no longer just a stream of bytes, so I had to completely move everything else off of the external drive I’m writing to so I could write to it at the raw partition level instead. It’s making good progress though, with an overall transfer rate of 3 MB/s after a day.

What’s becoming obvious from the log is that disk errors are occurring scattered all over the place, not just in one spot, and the concern here is that this may wind up leaving ‘holes’ in random files. Since I’m primarily trying to recover a collection of ripped DVDs, I’d prefer not to find out that a movie is corrupt when I’m halfway through watching it!

Fortunately, as documented here, it looks like I can use the list of bad blocks found by ‘ddrescue’ and feed it to e2fsck when it comes time to try to repair the rescued image, and it should be able to figure out which files are damaged.

When It Rains, It Pours…

Oh goodie. A couple weeks ago, the backup drive for my gaming PC failed, which isn’t too big a deal; I’d been meaning to upgrade it to a larger one anyway.

Then this morning, I noticed that my Linux box’s HD activity light was stuck on. Turned out that my main storage drive was failing, and it was stuck on accessing the hard drive. I have a separate backup drive for this system, but the failure started while it was mid-backup, so the backup is incomplete. And then, for the final kick in the pants, I noticed that there are also bad blocks on the backup drive, so who knows how that’ll affect trying to restore from it.

Fortunately I’m paranoid and I have a second backup drive that I rotate with this one and store offsite, so that backup drive should still be fully intact. Unfortunately, I am also lazy, and it’s been a while since I swapped drives, so it’s going to be an old backup. This is primarily just a ‘media’ drive and all my important personal stuff is on a different drive and largely duplicated on my laptop, at least. There are also a lot of files on the failing drive that weren’t included in the backup since they took up too much space and could be re-obtained if needed. Those are primarily DVD rips, but it’s going to be super-annoying if I have to dig my DVDs out of the closet and re-rip them.

I’d still like to recover as much as I can, though. So, recovering from this is going to be an exercise in trying to reconcile a) an intact but fairly old backup, b) a recent but incomplete and potentially corrupt backup, and c) what I can scrape off the failing drive before it completely gives out. Fun.

I started copying some files off the failing drive and got a bunch of stuff including my music collection and non-Windows gaming files (mainly a lot of Minecraft servers and worlds, and emulation stuff), but then an error hit and I can no longer access the root directory of that partition, so I can’t even get at any of the intact files anymore. So, now I’m ‘dd’ing and compressing the raw partition of the failing drive onto a spare external drive so I can go at it with more in-depth tools later on. Though at its current rate of 364 kB/s, it’ll only take another, uh, 127 days or so. (Hopefully it’ll get past the damaged part and speed up soon.)

Except I don’t really have anywhere I can un-‘dd’ it to right now, since it’s a rather large drive. Guess it’s time to start buying large hard drives in bulk…

Know Yourself As Well As Google Does

Privacy on the Internet has been a big topic lately, and one I’m certainly concerned about, so I figured I’d take advantage of Google’s “Takeout” feature to get an archive containing (supposedly) all of the data related to you that Google keeps on their servers. It took a couple of days for Google to prepare it, but I now have a 380MB zip file containing everything Google knows about me.

A lot of the contents aren’t that surprising, since it’s data that you’d expect Google to have. My gmail messages, my contact list, my map bookmarks, etc. And some stuff that makes sense but I’d forgotten about, like some quick-n-dirty spreadsheets I’d slapped together in Google Docs, or that one +1 I’d given in Google+ before never touching it again.

Where the really surprising stuff is though, is in the activity tracking. This includes:

  • All of the web sites I’d visited where I’d seen a Google Ad, for up to five years back, way longer than my regular browser history.
  • Every time I even just opened an app on my Android tablet, and what that app was, for up to three years back.
  • My GPS location history, which you can also see here. Fortunately this is taken from my tablet, which has barely budged from its charging location in years now, but it would be a lot more invasive if I had an Android phone.
  • Anything I’d even just browsed in the various Google stores (books, apps, movies, etc.)
  • Which specific images I picked to view from an image search.
  • Not just the map locations, but directions to places obtained through Google Maps.
  • Search history going back for five years, longer than I’d expect based on autocomplete history.
  • Oddly enough, it doesn’t include my Youtube history or activity, presumably because my Youtube account is a linked one from back before Google bought Youtube, so it’s still treated separately. They’re certainly still tracking my Youtube data, it’s just not included here.

It’s certainly eye-opening to finally see this kind of data, especially since I’m somewhat of a paranoid, private person who thought he was already being careful, and I’ll have to take another pass through various settings to see what can be tightened up a bit. It’s not like I have anything particularly nefarious to hide, but given a long enough history, somebody could probably spin a sinister-sounding story out of anyone’s data.

This is just one piece of the big picture, too, as there’s also data being collected by Apple, Facebook, Microsoft, Twitter, Amazon, etc. It’s a battle between privacy and convenience, though, and all too often, convenience wins.

Facebook Redux

Well, thanks to a combination of the recent crap and other longstanding concerns, I’ve deactivated my Facebook account (again). Just in case anyone’s wondering where I’d disappeared to. I just don’t want to feel complicit in their shenanigans anymore.

I’ve been using Twitter a bit more even though I’m not entirely happy with them either, but I probably need to entirely rethink how I communicate anyway. It’s not like I’ve been particularly active here either…

Dammit, Logitech

We got new desktop systems at work a while back, and I hooked my webcam back up to the new system, but never bothered setting up the software for it since it rarely gets used. Well I’ll need to do a demo soon where it’ll be useful, so I find the installer and run it and…”This product requires a USB port. Please make sure your USB port is working before launching setup again.” Um, I’m pretty sure the USB ports work on this thing. Some searching reveals that it’s probably because this new system is UEFI, and this is an old driver package (circa 2009) that probably doesn’t know how to check for USB ports properly. I double-check that ‘Legacy USB’ support is enabled in the BIOS, but that doesn’t help. There’s a newer 2.x version of the webcam software and it installs just fine, but doesn’t recognize this old model, and support suggests that I specifically need that old 1.x package.

Well maybe I can just install the drivers manually, I figure. I extract the files out of the installer, notice that there’s a “RequireUSB=1” setting in the setup.ini file, try setting that to 0, and rerun the setup, and yay, it no longer gets blocked at that USB check. It starts to go through the normal install dialogs, except…there’s no ‘Next’ button on any of the screens even though it tells me to press ‘Next’. Hitting enter works to proceed to the next dialog until I get to the EULA screen, where I have to specifically click the ‘Agree’ radio button and then I can’t tab back to the invisible Next button, so I’m stuck again. Trying various compatibility modes doesn’t help.

The setup program just seems to be a launcher for various MSI files in the package though, so I go to the ‘Drivers’ and ‘Software’ subdirectories, run the .msi files there, and those all seem to install just fine. I run the webcam UI program, and…no webcam detected. Check device manager, and the webcam model name does now show up, but only as a ‘controller’, not an ‘imaging device’, and there’s still an ‘Unknown device’ whose USB ID matches that of the webcam. Somehow it’s managed to identify the specific webcam model as a USB hub, and the built-in mic, but not as an actual camera. The ‘Installing New Hardware’ systray notification also has an error about not being able to install the driver, but I change the settings there to also search Windows Update. It goes off and grinds on that for a while…and still fails to find a driver to install.

Cue several rounds of uninstalling and reinstalling the packages and trying different ports and other things, with no luck. Finally, I go to Device Manager, notice the ‘Update Driver Software’ option there, give it a try on the unknown device…and now it successfully finds and installs a driver, even though it couldn’t via Windows Update (I’d assumed it would have been the same process). But it does all work now, at least.

Webcams always seem to be among the worst devices for long-term support, and I wouldn’t be surprised if it’s at least partly forced obsolescence because otherwise there wouldn’t be as much of a reason to upgrade to a new marginally-better webcam…

A Thing That Still Exists

I received my renewal notice for my Flickr Pro account this morning, and my first thought was “I have a Flickr Pro account?”

I haven’t touched it in literally years now, as my photo-taking habits have gone from an elaborate ritual of hauling my point-and-shoot camera around, transferring pics off the card, meticulously tagging and uploading them via the Flickr client, and arranging them in galleries on the site, to “eh, I should sync these off my phone sometime, I guess I’ll just paste this one into iMessage for now…”