Don’t Call It A Pad

Well, I gave in to my techno-curiosity about this whole Android thing (until now I’d only had an iPhone) and picked up a Nexus 7 last night. It’s pretty cheap for a tablet while still being fairly high quality, so it lets me indulge my curiosity without feeling like a big commitment.

The build quality is pretty great, the screen is awesome, and I’ve had no problems at all with the touch interface. The only real concerns: the camera is pretty low-res and grainy in my living room light, but it’s clearly just meant for Skype and such, and people who try to take photos with tablets are evil anyway; and when holding it in landscape mode to watch a video, the sound distractingly comes from just one side, so keep a good set of headphones handy. It’s also a little heavier than expected, and my arm got a bit tired of holding it up, but that’s really just because I’m not exactly in great shape…

I checked out some comics, and although a bit cramped at the full-page-fit level, they were still reasonably legible. That was admittedly with some DC/Marvel-style bright, large-text pages, though, and I can see comics with denser art and text needing more fiddling with zooming in and out. I’d say it’s ‘adequate’: obviously not as good as a full-size tablet, but I’d certainly rather view them on this than on my iPhone.

Google Play seems nice enough, though I haven’t really probed the depth of their selections yet. The major omission in Canada is the music store and sync/streaming service, so for us it’s just not going to be a very good music device unless you set up your own DLNA server and use something like Subsonic to do your own streaming. The Google integration is nice in that I recently switched to Chrome on the desktop, so it’s convenient already having bookmarks and such shared.

And I’m still exploring the whole Android ecosystem, but it does please the geek inside me that I can get fairly low-level with terminals, file management, etc. even without having rooted it. The app selection may be smaller, but it hasn’t really felt like anything’s missing yet (hell, I even found a .mod music player). I still have to check out the gaming selection in more depth.

I still do way too much fiddly stuff on my laptop for this to ever be a replacement for it, but I’ll just have to keep using it and see what kind of role it naturally falls into. It’s definitely way easier to use than the laptop while lying in bed…

(Though one thing that really bugged me while setting it up was that I had to log in to my Google account four times, typing out my whole crazy-long strong password each time. The first attempt fails because of two-factor authentication, so it redirects you to a web page to sign in again and enter the mobile code, but then I misclicked something and it took me to another web page with no obvious navigation or gesture controls to get back to the code entry. I wound up having to hard power it off and go through the whole setup process again, which required another two password prompts. At least it was only a one-time process… The paranoid side of me also isn’t crazy about having yet another way that essential credentials, like my Google account, could be leaked, but hopefully a strong PIN on the device suffices.)

It’s Shinier Too

(Yikes, it’s been a while…)

Well, it finally did it. I’ve been a staunch Firefox user ever since it was in beta, because I liked using the same browser across multiple platforms, but it’s gotten to the point where it’s just too glitchy to tolerate. Pages not responding to the refresh button properly, unexplained choppiness in video playback, the tab bar not scrolling all the way and leaving some tabs accessible only through the tab list, crashes when I use the Flash player full-screen, the tab list missing some tabs at the bottom, high CPU usage on OS X even when lightly loaded, the Firefox window suddenly losing focus so I have to click on it again, passwords not being remembered on some pages where they should be, and so on. Little things, but they add up.

So, I’m going to give Chrome a try for a while. There’s no guarantee that I won’t find a bunch of things about it that’ll annoy me just as much, but it’s worth a shot. Now I just have to find a set of equivalent extensions…

The Settings Are A Bit Off

The offset sizes were another area I could experiment with a bit. Originally I had three different offset lengths, a short-range one, a medium-range one, and a long-range one, on the theory that shorter offsets might occur more often than longer offsets, and could be stored in fewer bits. If a buffer size of ‘n’ bits was specified, the long-range offset would be ‘n’ bits, the medium-range offset would be ‘n-1’ bits, and the short-range offset would be ‘n-2’ bits.

Some experimentation showed that having these different ranges was indeed more efficient than having just a single offset length, but it was hard to tell just what the optimal sizes were for each range. I kept it to only three different ranges because initially I didn’t want the number of identifier symbols to be too large, but after merging the identifiers into the literals, I had a bit more leeway in how many more ranges I could add.

So…why not add a range for every bit length? I set it up so that symbol 256 corresponds to a 6-bit offset, 257 to a 7-bit offset, 258 to an 8-bit offset, and so on, all the way up to 24-bit offsets. This also had the property that, except for the bottom range, an ‘n’-bit offset could be stored in ‘n-1’ bits, since the uppermost bit would always be ‘1’ and could be thrown away (if it were ‘0’, it wouldn’t be considered an ‘n’-bit offset, since it would fit in a smaller range). Some testing against a set of data files showed that this did indeed improve the compression efficiency and produced smaller files.
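
In rough C++ terms, the mapping works out to something like this (just a sketch with made-up names, not the actual code):

    #include <cassert>
    #include <cstdint>

    // Symbol 256 marks a 6-bit offset, 257 a 7-bit offset, and so on up
    // to 274 for a 24-bit offset.
    struct OffsetCode {
        int      symbol;    // merged-tree symbol to Huffman-encode (256..274)
        uint32_t extra;     // raw bits written after the symbol
        int      extraBits; // n bits for the bottom range, n-1 otherwise
    };

    static int bitLength(uint32_t offset) {
        int n = 0;
        while (offset >> n) ++n;    // smallest n such that offset < 2^n
        return n < 6 ? 6 : n;       // small offsets all fall into the 6-bit range
    }

    static OffsetCode encodeOffset(uint32_t offset) {
        int n = bitLength(offset);
        assert(n <= 24);
        OffsetCode c;
        c.symbol = 256 + (n - 6);
        if (n == 6) {
            // Bottom range: the top bit isn't guaranteed to be 1, so keep all 6 bits.
            c.extra     = offset;
            c.extraBits = 6;
        } else {
            // The top bit of an n-bit offset is always 1, so throw it away.
            c.extra     = offset - (1u << (n - 1));
            c.extraBits = n - 1;
        }
        return c;
    }

An offset of 592, for instance, is a 10-bit value, so it goes out as symbol 260 plus 9 raw bits (592 minus 512), which is the kind of ‘10-bit offset’ line that shows up in the examples below.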

With all of these possible bit values and lengths though, there was still the open question of what should be considered reasonable values for things like the default history buffer size and match length. Unfortunately, the answer is that it…depends. I used a shell script called ‘explode’ to run files through the compressor with all possible combinations of a set of buffer sizes and match lengths to see which would produce the smallest files, and the results varied a lot depending on the type and size of input file. Increasing the match length did not necessarily help, since it increased the average size of the length symbols and didn’t necessarily find enough long matches to cancel that out. Increasing the buffer size generally improved compression, but greatly increased memory usage and slowed down compression. After some more experimentation with the ‘explode’ script, I settled on defaults of 17 bits for the buffer size, and a match length of 130.

Another idea I’d remembered hearing about was that the best match at the current byte might not necessarily be the most efficient one: it can be better to emit the current byte as a literal if the next byte is the start of an even longer match. It was only an intuition though, so I implemented and tested it, and it did indeed seem to give a consistent improvement in compression efficiency. As an example, in one text document the phrase ‘edge of the dock’ was compressed like so:

Literal: 'e' (101) (4 bits)
Literal: 'd' (100) (6 bits)
Literal: 'g' (103) (8 bits)
10-bit offset: 544   Length: 3 'e o' (16 bits)
 8-bit offset: 170   Length: 6 'f the ' (17 bits)
10-bit offset: 592   Length: 3 ' do' (16 bits)
Literal: 'c' (99) (6 bits)
Literal: 'k' (107) (7 bits)

but with the new test, it generated the following instead:

Literal: 'e' (101) (4 bits)
Literal: 'd' (100) (6 bits)
Literal: 'g' (103) (8 bits)
Literal: 'e' (101) (4 bits) (forced, match len=3)
 8-bit offset: 170   Length: 8 ' of the ' (19 bits)
10-bit offset: 592   Length: 3 ' do' (16 bits)
Literal: 'c' (99) (6 bits)
Literal: 'k' (107) (7 bits)

The ‘forced’ literal normally would have been part of the first match, but by emitting it as a literal instead it was able to find a more efficient match and only two offset/length tokens were needed instead of three, for a difference of 80 bits for the original versus 70 bits for the improved match. Doing these extra tests does slow down compression a fair bit though, so I made it an optional feature, enabled on the command line.
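
The lookahead check itself boils down to something like this (a sketch with stand-in helpers, not the real implementation):

    #include <cstddef>
    #include <cstdint>

    struct Match { uint32_t offset; int length; };   // length == 0 means no match found

    Match findLongestMatch(const uint8_t* data, size_t pos, size_t size); // stand-in
    void  emitLiteral(uint8_t byte);                                      // stand-in
    void  emitMatch(const Match& m);                                      // stand-in

    void compressWithLookahead(const uint8_t* data, size_t size) {
        size_t pos = 0;
        while (pos < size) {
            Match here = findLongestMatch(data, pos, size);
            if (here.length < 3) {              // too short to be worth an offset/length token
                emitLiteral(data[pos++]);
                continue;
            }
            // Peek one byte ahead: if skipping this byte exposes a longer match,
            // emit a "forced" literal now and let the longer match win.
            Match next = findLongestMatch(data, pos + 1, size);
            if (next.length > here.length) {
                emitLiteral(data[pos++]);       // forced literal
            } else {
                emitMatch(here);
                pos += here.length;
            }
        }
    }

(A fancier version could compare the actual encoded bit costs of the two options rather than just the raw match lengths, but the simple length comparison is all the example above needed.)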

At this point though, it’s getting harder and harder to extract gains in compression efficiency, as it starts devolving into a whole bunch of special cases. For example, increasing the buffer size sometimes makes compression worse, as in the following example:

'diff' output between two runs:
 17-bit offset: 87005   Length: 10 'with the t' (26 bits)
 14-bit offset: 10812   Length: 3 'arp' (18 bits)
-13-bit offset: 7705   Length: 3 ', w' (17 bits)
-13-bit offset: 5544   Length: 8 'ould you' (19 bits)
+18-bit offset: 131750   Length: 4 ', wo' (41 bits)
+13-bit offset: 5544   Length: 7 'uld you' (19 bits)
 16-bit offset: 50860   Length: 7 '?  You ' (22 bits)
 17-bit offset: 73350   Length: 10 'take that ' (26 bits)

The compressor looks for the longest matches, and in the ‘+’ run it found a longer match, but at a larger offset than in the ‘-’ run. In this case, 18-bit offsets are rare enough that their symbol has been pushed low in the Huffman tree and the bitstring is very long, making it even less efficient to use a long offset, and in the end a whopping 24 bits are completely wasted. Detecting these kinds of cases requires a bunch of extra tests though, and this is just one example.

So, I think that’s about all I’m going to do for attempting to improve the compression efficiency. How does it do overall? Well, that 195kB text file that originally compressed to 87.4kB and then made it down to 84.2kB can now be compressed down, with harder searching on and optimal buffer and match length sizes determined, to 77.9kB. That’s even lower than ‘gzip -9’ at 81.1kB!

It’s not all good news, though. If I take the Canterbury Corpus and test against it, the original total size is 2810784 bytes, ‘gzip -9’ reduces them to a total of 730732 bytes (26.0%), and at the default settings, my compressor gets…785421 bytes (27.9%). If I enable the extra searching and find optimal compression parameters for each file via ‘explode’, I can get it down to 719246 bytes (25.6%), but that takes a lot of effort. Otherwise, at the default settings, some of the files are smaller than gzip and others are larger; typically I do worse on the smaller files where there hasn’t really been much of a chance for the Huffman trees to adapt yet, and the Excel spreadsheet in particular does really poorly with my compressor, for some reason I’d have to investigate further.

But I’m not going to. No, the main remaining problem was one of speed…

I Ain’t No Huffman

In terms of compression efficiency, I knew there were some obvious places that could use improvement. In particular, my Huffman trees…weren’t even really Huffman trees. The intent was for them to be Huffman-like, in that the most frequently seen symbols would be closest to the top of the tree and thus have the shortest bitstrings, but the construction and balancing method was completely different. Whenever a symbol’s count increased, I compared it to its parent’s parent’s other child, and if the current symbol’s count was now greater, the two nodes were swapped, a new branch was inserted where the updated node used to be, and the other child was pushed down a level.

Unfortunately, that method led to horribly imbalanced trees, since it only considered nearby nodes when rebalancing, even though changing the frequency of a symbol can affect the relationships of symbols on relatively distant parts of the tree as well. As an example, here’s what a 4-bit length tree wound up looking like with my original adaptive method:

Lengths tree:
    Leaf node 0: Count=2256 BitString=1
    Leaf node 1: Count=1731 BitString=001
    Leaf node 2: Count=1268 BitString=0001
    Leaf node 3: Count=853 BitString=00001
    Leaf node 4: Count=576 BitString=000001
    Leaf node 5: Count=405 BitString=0000001
    Leaf node 6: Count=313 BitString=00000001
    Leaf node 7: Count=215 BitString=000000000
    Leaf node 8: Count=108 BitString=0000000011
    Leaf node 9: Count=81 BitString=00000000101
    Leaf node 10: Count=47 BitString=000000001001
    Leaf node 11: Count=22 BitString=00000000100001
    Leaf node 12: Count=28 BitString=0000000010001
    Leaf node 13: Count=15 BitString=000000001000000
    Leaf node 14: Count=9 BitString=000000001000001
    Leaf node 15: Count=169 BitString=01
    Avg bits per symbol = 3.881052

If you take the same data and manually construct a Huffman tree the proper way, you get a much more balanced tree without the ludicrously long strings:

    Leaf node 0: Count=2256 BitString=10
    Leaf node 1: Count=1731 BitString=01
    Leaf node 2: Count=1268 BitString=111
    Leaf node 3: Count=853 BitString=001
    Leaf node 4: Count=576 BitString=1100
    Leaf node 5: Count=405 BitString=0001
    Leaf node 6: Count=313 BitString=11011
    Leaf node 7: Count=215 BitString=00001
    Leaf node 8: Count=108 BitString=000001
    Leaf node 9: Count=81 BitString=000000
    Leaf node 10: Count=47 BitString=1101000
    Leaf node 11: Count=22 BitString=110100110
    Leaf node 12: Count=28 BitString=11010010
    Leaf node 13: Count=15 BitString=1101001111
    Leaf node 14: Count=9 BitString=1101001110
    Leaf node 15: Count=169 BitString=110101
    Avg bits per symbol = 2.969368

That’s nearly a bit per symbol better, which may not sound like much but with the original method there was barely any compression happening at all, whereas a proper tree achieves just over 25% compression.

So, I simply dumped my original adaptive method and made it construct a Huffman tree in the more traditional way, repeatedly pairing the two lowest-count nodes in a sorted list. To keep it adaptive, it still does the count check against the parent’s parent’s other child, and when that crosses the threshold it simply rebuilds the entire Huffman tree from scratch based on the current symbol counts. This involves a lot more CPU work, but as we’ll see later, performance bottlenecks aren’t necessarily where you think they are…
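
The from-scratch rebuild itself is just the textbook construction, roughly like this (a sketch with my own names, not the actual code):

    #include <cstdint>
    #include <queue>
    #include <string>
    #include <vector>

    // Repeatedly merge the two lowest-count nodes until one root remains,
    // then read each leaf's bitstring off the path from the root.
    struct HuffNode {
        unsigned  count;
        int       symbol;            // -1 for internal nodes
        HuffNode* left  = nullptr;
        HuffNode* right = nullptr;
    };

    struct ByCount {
        bool operator()(const HuffNode* a, const HuffNode* b) const {
            return a->count > b->count;   // makes the priority_queue a min-heap on count
        }
    };

    HuffNode* buildHuffmanTree(const std::vector<unsigned>& counts) {
        std::priority_queue<HuffNode*, std::vector<HuffNode*>, ByCount> heap;
        for (int s = 0; s < (int)counts.size(); ++s)
            heap.push(new HuffNode{counts[s], s});
        while (heap.size() > 1) {
            HuffNode* a = heap.top(); heap.pop();
            HuffNode* b = heap.top(); heap.pop();
            heap.push(new HuffNode{a->count + b->count, -1, a, b});
        }
        return heap.top();   // a real version would also manage the nodes' memory
    }

    // 'out' must be sized to counts.size(); call with the root and an empty prefix.
    void assignBitStrings(const HuffNode* n, const std::string& prefix,
                          std::vector<std::string>& out) {
        if (n->symbol >= 0) { out[n->symbol] = prefix; return; }
        assignBitStrings(n->left,  prefix + "0", out);
        assignBitStrings(n->right, prefix + "1", out);
    }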

My trees also differ from traditional ones in that they’re prepopulated with every possible symbol at a count of zero, whereas usually you only insert a node into a Huffman tree once its count is greater than zero. This is slightly suboptimal, but it avoids a chicken-and-egg problem: the decoder wouldn’t know which symbol a bitstring corresponds to if that symbol isn’t in the tree yet because it’s being seen for the first time.

Knowing that, and with the improved Huffman trees, another thing became clear: using Huffman trees for the offsets wasn’t really doing much good at all. With most files the offset values are too evenly distributed, and many are never used at all; all of those zero-count entries get pushed down the tree and assigned longer strings, so the first time an offset was used, its bitstring would often be longer than its plain bit length, causing file growth instead of compression. So I just ripped those trees out and emitted plain old integer values for the offsets instead.

The way I was constructing my trees also had another limitation: the total number of symbols had to be a power of two. With the proper construction method, an arbitrary number of symbols could be specified, and that allowed another potential optimization: merging the identifier tree and the literals tree. The identifier token in the output stream guaranteed that there would always be at least one wasted non-data bit per token, and often two. Merging it with the literals would increase the size of the literal symbols, but the expectation was that the larger literal symbols would still, on average, be smaller than the identifier symbols and smaller literal symbols combined, especially as more ‘special case’ symbols were added. Instead of reading an identifier symbol and deciding what to do based on that, the decoder would read a ‘literal’ symbol; if it was in the range 0-255, it was indeed a literal byte value and interpreted that way, but if it was 256 or above, it would be treated as having a following offset/length pair.
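
In decoder terms, the main loop ends up looking roughly like this (placeholder function names; exactly how a symbol of 256 or above picks its offset width is for the next entry):

    #include <cstddef>
    #include <cstdint>

    int      readMergedSymbol();              // merged literal/identifier tree
    uint32_t readOffsetFor(int symbol);       // plain integer offset for this symbol's range
    int      readLength();                    // lengths tree
    void     outputLiteral(uint8_t byte);
    void     copyFromHistory(uint32_t offset, int length);

    void decodeStream(size_t originalSize) {  // assumes the header records the original size
        size_t produced = 0;
        while (produced < originalSize) {
            int sym = readMergedSymbol();
            if (sym < 256) {                  // 0..255: a literal byte
                outputLiteral((uint8_t)sym);
                ++produced;
            } else {                          // 256 and up: an offset/length pair follows
                uint32_t offset = readOffsetFor(sym);
                int length = readLength();
                copyFromHistory(offset, length);
                produced += (size_t)length;
            }
        }
    }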

The range of offsets to handle would also have to change, but that’s for next time… With the Huffman tree improvements, my 195kB test file that compressed to 87.4kB before now compressed to 84.2kB. Still not as good as gzip, but getting there.

Compressing History

While sorting through some old files of mine, I happened upon the source code to a compression engine I’d written 18 years ago. It was one of the first things I’d ever written in C++, aside from some university coursework, and I worked on it in the evenings during the summer I was in Cold Lake on a work term, just for fun. Yes, I am truly a nerd, but there wasn’t really much else to do in a tiny town like that, especially when you only get 3 TV channels.

Looking at it now it’s kind of embarrassing, since of course it’s riddled with inexperience. No comments at all, leaving me mystified at what some of the code was even doing in the first place, unnecessary global variables, little error checking, poor header/module separation, unnecessary exposure of class internals, poor const correctness, and so on. It kind of irks my pride to leave something in such a poor state though, so I quickly resolved to at least clean it up a bit.

Of course, I have to understand it first, and I started to remember more about it as I looked over the code. It’s a fairly basic combination of LZ77 pattern matching and Huffman coding, like the ubiquitous Zip method, but the twist I wanted to explore was making the Huffman trees adaptive, so that symbols would shift around the tree and automatically adjust as their frequency changed within the input stream. There were two parameters that controlled compression efficiency: the history buffer size and the maximum pattern length. The history size controlled how far back it would look for matching patterns, and the length controlled the upper limit on the length of a match that could be found.

Compression proceeded by moving through the input file byte by byte, looking for the longest possible exact byte match between the data ahead of the current position and the data in the history buffer just behind the current position. If a match could not be found, it would emit the current byte as a literal and move one byte ahead, and if a match was found, it would emit a token with the offset and length of the match in the history buffer. To differentiate between these cases, it would first emit an ‘identifier’ token with one of four possible values: one for a literal, which would then be followed by the 8-bit value of the literal, and three for offset and length values, with three different possible bit lengths for the offset so that closer matches took fewer bits. Only matches of length 3 or longer were considered, since two-byte matches would likely have an identifier+offset+length string longer than just emitting the two bytes as literals. In summary, the four possible types of bit strings you’d see in the output were:

    | ident 0 | 8-bit literal |

    | ident 1 | 'x'-bit offset    | length |

    | ident 2 | 'y'-bit offset        | length |

    | ident 3 | 'z'-bit offset            | length |

And then I used a lot of Huffman trees. Each of these values was then run through a Huffman tree to generate the actual symbol emitted to the output stream, with separate trees for the identifier token, the literals, the lengths, and the three offset types. HUFFMAN TREES EVERYWHERE! The compression parameters were also written to a header in the file, so the decoder would know what history buffer size and maximum match length to use.
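
Skeletally, the per-byte loop boiled down to something like this (stand-in names, helpers, and range cutoffs for illustration; the original code was, as mentioned, considerably messier):

    #include <cstddef>
    #include <cstdint>

    struct Match { uint32_t offset; int length; };   // length == 0 means no match found

    enum Tree { ID_TREE, LITERAL_TREE, LENGTH_TREE,
                OFFSET_TREE_X, OFFSET_TREE_Y, OFFSET_TREE_Z };

    Match findLongestMatch(const uint8_t* data, size_t pos, size_t size); // stand-in
    void  putSymbol(Tree tree, unsigned value);  // run a value through that Huffman tree

    void compress(const uint8_t* data, size_t size,
                  uint32_t shortRange, uint32_t mediumRange) {
        size_t pos = 0;
        while (pos < size) {
            Match m = findLongestMatch(data, pos, size);
            if (m.length < 3) {                  // too short to be worth an offset/length token
                putSymbol(ID_TREE, 0);           // ident 0: a literal follows
                putSymbol(LITERAL_TREE, data[pos++]);
            } else {
                // Pick the identifier (1, 2, or 3) by how far back the match is,
                // then send the offset through the matching offset tree.
                if (m.offset < shortRange) {
                    putSymbol(ID_TREE, 1);
                    putSymbol(OFFSET_TREE_X, m.offset);
                } else if (m.offset < mediumRange) {
                    putSymbol(ID_TREE, 2);
                    putSymbol(OFFSET_TREE_Y, m.offset);
                } else {
                    putSymbol(ID_TREE, 3);
                    putSymbol(OFFSET_TREE_Z, m.offset);
                }
                putSymbol(LENGTH_TREE, (unsigned)m.length);
                pos += (size_t)m.length;
            }
        }
    }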

It worked…okay… I’ve lost my original test files, but on one example text file of 195kB, my method compresses it down to 87.4kB, while ‘gzip -9’ manages 81.1kB. Not really competitive, but not too bad for a completely amateur attempt either. There’s still plenty of room for improvement, which will come…next time.

But It Still Doesn’t Remember Where I Left My Keys

Yesterday the memory upgrade for my laptop arrived and I installed it as soon as I got home, taking it from 2GB to 4GB. Fortunately, upgrading the memory on an MBP is fairly easy, only requiring the removal of three standard screws underneath the battery.

It was mainly meant for a (now postponed) trip so I could use VMware Fusion effectively, but the difference was immediately noticeable when I went to fire up WoW as well. Normally, running WoW on my laptop grinds and chugs and stutters a lot, mainly because I always have Firefox open as well, and together the two just use up too much memory. Now though, it’s smooth as silk, with WoW loading only a little bit slower than it does on my desktop machine.

About Time

I recently upgraded my server to Ubuntu 9.10, and it finally fixed one thing that had been bugging me ever since I built this system: the audio drivers. The default drivers that came with Ubuntu wouldn’t properly set the line-in volume, so I had to go and get a newer version from Realtek’s site. But every time a system update refreshed the driver modules, I’d have to reinstall the newer drivers and reboot again. Fortunately, the default drivers work perfectly fine as of this release, so I’ll hopefully never need to build them separately again.

It also updated MythTV, which was a bit of a surprise, and I needed to go get a newer build of the OS X frontend. That took a while to get working because it kept quitting immediately after launch, until I figured out that I had to run the main executable directly with a ‘-r’ option to reset which theme it wanted to use.

You Got Your Mac In My Windows!

All of the upgrades and reinstallation are done, and I now have a zippy essentially-new machine running Windows 7.

The most obvious change in 7 is the taskbar. It uses large icons instead of a small icon and window title, all open windows of the same program are always consolidated into a single entry on the taskbar (previously it would only consolidate them once it started running out of room), and you can pin a running program to the taskbar in order to launch it again later. It’s basically a lot more like OS X’s Dock now.

The Explorer has also changed a bit. There’s no more ‘Explore’ option off the computer icon’s context menu, which is kind of annoying. And they’ve removed the tree view from the Explorer windows (but you can reenable it in the folder options), instead using the sidebar to emphasize a bunch of standard locations like your home directory, your music directory, network servers, etc. Which is also a lot like how Finder works…

Otherwise, things have gone fairly smoothly, and I haven’t really had any problems that I can attribute directly to Windows 7 itself. I still have to poke around and explore what else might be new, though.

Impatience

With Windows 7 still downloading at the office, I decided to do the hardware upgrades tonight, even though I didn’t quite hit my goal of finishing King’s Bounty first.

It actually went a bit smoother than expected, with only two major hiccups. The first came when I went to install a new 120mm fan to improve airflow and suddenly realized that I didn’t know which direction it was supposed to face… Fortunately it’s the same style as a couple of other fans in the system, so I was able to deduce the direction from those, and confirmed it with a sheet of paper. The second was getting the BIOS settings right, since this motherboard doesn’t seem to do a very good job of autodetecting them. It only took a little bit of fiddling to get it up to the proper 2.83GHz speed, though.

It’s practically an entirely new machine at this point, with a new CPU (Q9550 quad-core, replacing an E6600 dual-core), more memory (from 2 to 6GB), bigger drives (1.25TB total), and a new video card (Radeon 4870). It is actually missing something, though — I yanked out the Audigy 2 sound card I’d been using before. Creative’s support for newer OSes with older cards has been a bit lacking, so I’ll take a chance on the onboard sound for now.

Now I just have to put some OSes back on it…

The Anticipation Is Killing Me

Woo, the new parts I ordered arrived today, much earlier than expected. But I can’t actually install them yet.

I only want to crack the case open once, so I have to install everything at the same time. But if I install the other hard drives, I’ll lose the OS and have to reinstall it. But Win7 isn’t ready yet for me, so I’d have to either reinstall Vista or restore the old install from the current drive, and there’s not much point in wasting time on that when I’d have to do a new install in a week now anyway.

So, for now the parts all sit on my kitchen table, taunting me, tempting me…

The End Of An Era

After talking with Shaw support, they’ve decided that it’s probably best if they just replace my cable modem with a newer one. They can’t really say if it’s the specific cause of my problems, but it’s a Motorola CyberSurfr, a positively ancient model at this point, and they’ve been trying to get people off of them anyway. I’ve been using this one for over ten years now, and the tech was surprised I’d still had one for that long, since most people experience problems and swap it out long before that point.

So, tomorrow I say goodbye to my old friend as I drop him off downtown. It may be old, but it served me well for an awfully long time.

Where Did 2008 Go?

With Windows 7 being released soon (less than two weeks away now, for us MSDN members), I figured it was time to consider upgrades for my main PC, so that I don’t have to mess with hardware changes post-install. Some upgrades are already essentially here — with the recent order of my backup drives, I’ve got a couple spare drives with plenty of space for games and apps, and I have another 4GB of RAM that I’d snuck into that same order.

I’d thought about upgrading the video card, but it felt a bit early since it wasn’t so long ago that I’d installed this one and my previous card lasted almost four years. But then I realized that, um, April 2007 was over two years ago, dumbass… It doesn’t feel like I’ve had it that long though, for some reason. The CPU is also starting to be a bottleneck in some cases too (I’m looking at you, GTA4), so it could use a bit of a bump as well.

I don’t really feel like going all-out with a completely new system though, especially since a Core i7 CPU would also require a new motherboard and expensive memory, so this is only an interim upgrade and the next one will be the big one. I’m going for good performance/price ratios rather than raw performance, so I finally settled on getting a Q9550 CPU and a Radeon 4870 video card. They should easily tide me over at least a couple more years.

I’m also going to try to add another 120mm fan into the case. The drives run a little bit warm, and these new parts aren’t going to make things any cooler…

Latency Killed The Video Star

As I briefly mentioned before, streaming video has been the main victim of my recent network problems. It’s been an interesting opportunity to examine just how the different services are handling it:

YouTube: Videos load more slowly than usual, and I can’t start watching them right away. Given enough time, though, it does eventually load the whole thing, so I just have to pause it and wait until a decent amount is preloaded. A.

Google Video: Likewise, it’s slow to load but eventually gets there, though a bit slower overall than YouTube, I think. It just suffers its usual usability and quality problems, being the abandoned orphan of Google’s video services. A-.

Viddler: The loading bar sometimes stops and gives up in the middle of a video, causing playback to stop when it gets there. You can get it started again by clicking near that spot on the bar, though, and skipping around like that is fairly robust in general, so there’s at least a workaround. B-.

Dailymotion: Unfortunately, the loading bar stops frequently here, and seeking around its progress bar isn’t nearly as robust. Trying to click outside the already-loaded areas usually just gets me a “There were technical problems, reload this page” error. To watch the video, I’d need the entire thing to load in one shot, and I failed to achieve that in what must have been at least a dozen tries on a short, 4-minute video. For not even letting me get through a significant chunk of the video, they get an F.

Lazy Packets

My Internet performance at home has had these occasional bizarre hiccups lately. In the above example, not a single packet in a string of 100 pings between me and the cable modem head end was lost, but just look at the latencies. There’s no physical-level problem with the data getting through, but the gateway’s holding on to packets for up to five seconds?! Good luck playing WoW under conditions like that…

Never Enough Space

After having bought a pair of 1TB drives for my new Linux box, I now have a set of three 1.5TB drives on the way to me. Damn, that’s a lot of storage.

I was actually waiting for 2TB drives to come down in price, but two 1.5TB drives together are still cheaper than a single 2TB drive. These ones will be used to complete the rest of my backup plan — right now I only have a 500GB external drive for my Linux box’s backups, and it’s 95% full. And that’s doing a straight mirror, without any room for daily differentials and rotating sets. Two drives will be used for that so I can keep one offsite, and the third drive will be for the Windows box’s backups.

Then, I can take the current backup drives and swap them into the gaming PC at the same time I upgrade to Windows 7, which should take me from 480GB to 1.2TB on there. Then I’ll never have to uninstall anything ever again…for a couple years, at least…

A Good Router Is Hard To Find

It’s a good thing I gave my mother my old router, because the new one hasn’t been working out as well as hoped. It works fine for a while, but then eventually it suddenly loses all of its settings, reverting to defaults and making me restore them from a backup. If it ever happens while I’m away, it’ll leave my wireless network completely open until I can get back and fix it.

It’s hard to tell whether it’s a problem with DD-WRT or with the router hardware, though. I’m leaning towards the latter, as apparently one possible cause of symptoms like this is if the flash memory goes bad, but it’s still hard to prove what the problem really is. And it’s not like Linksys will support a third-party firmware under their warranty, and I really don’t want to shell out another $130 just to test on another router that might well have the same problem.

For now I think I’ll revert back to the official firmware and see if it has trouble as well. Tomato was extremely reliable on my old router, but it doesn’t support this model.

My God, It’s Full Of Pixels

Dell had its regular end-of-quarter sale recently, and I couldn’t resist picking up their 2408WFP monitor. It’s normally fairly expensive, but at around 40% off with the sale, it was a better deal than a lot of plain old mid-level monitors. It also fulfills a few needs of mine, as not only is it bigger (24″ versus 20″), but it has HDCP support and an HDMI input, and also two DVI inputs and a set of component inputs. Too many of my consoles were languishing on sub-optimal inputs already.

I just got it and set it up today, and so far it’s just as good as I’d hoped. It’s about as big as I’d want for the distance I sit away from it since it already pretty much fills my view, the PS3 looks amazing on the HDMI input, I can put both PCs on separate DVI inputs, the 360 can get the VGA input to itself and not go through the KVM, and the Wii can finally use component instead of crummy old S-video.

The only caveat so far is that for the Wii, I have to set the monitor’s scaling mode to ‘Fill’ in order to get a proper widescreen display. But I don’t want it set to that for the PS3, or it scales the 1920×1080 mode up to 1920×1200, stretching things vertically a bit, so the PS3 has to be set to 1:1 or ‘Aspect’ mode. Making sure it’s on the right mode is a minor annoyance, but I can leave it on Aspect 99% of the time since I haven’t been using the Wii much lately anyway.

Edit: Hmmm, I can see some backlight bleed in the corners on the right-hand side when the screen is dark. I don’t think I’ll do anything about it, though; I’ve heard of people returning their screens six or seven times in a row before they got one that didn’t bleed at all, and it’s only really noticeable when the screen is completely dark, so it’s not really that big a deal.

Getting Dirty

Ubuntu 9.04 was just released, so I upgraded over the weekend, and it went fairly smoothly except for two old friends: the sound drivers, which had to be rebuilt from Realtek’s source again like last time, and Amarok.

Oh, Amarok… This Ubuntu release includes the 2.0 version for the first time, but as far as I can tell, it’s actually a huge step back. There’s an all-new, prettier interface, but a lot of functionality seems to be missing, or is so well hidden that I couldn’t figure out how to use it. In particular, all of my carefully crafted smart playlists were gone, with no apparent way to recreate them. It also didn’t help that it kept crashing on me, especially while trying to import my old collection.

I was disappointed enough in it that I tried out some other programs as well, like RhythmBox, but they didn’t even recognize my iPod, since 4th-gen Nano support apparently hasn’t been added to the library they use yet.

In the end I removed Amarok 2 entirely and went back and completely rebuilt Amarok 1.4.10 from source. It’s literally been years since I last compiled a major program like this by hand (just minor utilities here and there), and it took a while just to figure out what it required and to track down the packages needed to satisfy all of the dependencies, but I finally seem to have Amarok working again.

A Sneaky Bastard

I’ll be trying to set up Internet access for my mother soon, so I went out today to buy a wireless router for her. But as I was researching, I wondered: hey, why should she get a better router than me, when most routers are Wireless N and gigabit nowadays and mine wasn’t? So I ended up buying a new one for myself instead, and she can have my old one. Hey, she won’t know the difference…

I wound up picking up the Linksys WRT310N, since it’s the easiest one to get hold of around here that’s still hackable. I would have preferred something like the WRT610N, with its dual radios and USB support, but it’s still a work-in-progress for custom firmwares. The 310N’s not supported in Tomato though, so I’m back to using DD-WRT instead. It doesn’t really matter now that DD-WRT has bandwidth monitoring as well, since that was why I switched to Tomato way back when.

The performance is definitely improved over the old one. Wireless, I can get around 40-50 Mbps, versus maybe 25-30 before. And wired I can do 160-200 Mbps, which isn’t coming close to maxing out the Ethernet speed like I could before, but is still a decent improvement over the old 100 Mbps. I might actually be bottlenecked by the SSH encryption speed there. It’ll take a while to see how the reliability is, though. It’s not on the 5 GHz band since I need compatibility with 11g, so I’m still subject to all the same old possible interference. It’ll be nice if I can reliably stream MythTV…