Archive for the 'Tech' Category

The joys of 2 meter simplex

Monday, March 2nd, 2009

I’m up in Parsippany, New Jersey at the moment on business travel. That in itself wouldn’t be anything special, except that the eastern seaboard was just rocked by a huge snowstorm. I had to leave a day early to ensure that I made it here for an important meeting Monday morning, only for that meeting to be canceled while I was en route and incommunicado. To add insult to injury, none of the client employees I work with even showed up for work today, and my car died at the hotel this morning, so I walked to the client site in the falling snow. And just for some added excitement, at one point I had to run to escape the torrent of snow thrown up by an oncoming snowplow.

The drive up here was no picnic either. About an hour in, it started raining, which quickly turned to snow. Thankfully none of it started sticking to the road until I arrived at the hotel four hours and many wrong turns later (not the best time to try a new route). I saw a surprising number of other vehicles driving in the snowstorm without their lights on, including one semi-trailer that kept disappearing into and re-emerging from the mist of snow in a terrifying fashion. Even my high beams didn’t provide nearly enough illumination to see the road ahead of me. This was made worse by the constant glow of headlights shining over the Jersey barrier from vehicles headed the opposite direction, like some dividing line across the horizon, lighting up the entire sky from about six feet above the road on up. By contrast, the road itself seemed even darker and harder to see.

The only thing that hasn’t sucked about this trip so far is ham radio. Sunday night is an excellent time to work the ham bands, which is what I spent my whole commute doing. Repeater contacts have become passé for me these days because they are so easy; at any random point along the I-95 corridor, you can hear at least a couple of simultaneous conversations on various nearby repeaters. As such, I focus mostly on making simplex (direct) contacts, which at least provides something of a challenge, especially while operating mobile. I was mostly using the national calling frequency on 2 meters, which is 146.520 MHz, though I did talk to one man on another simplex frequency while idly scanning the band.

I made more simplex contacts during this trip than I ever have before. At one point I was talking with two or three people simultaneously, a feat I’ve never managed outside of pre-arranged simplex nets while operating stationary. I had some pretty long conversations with stationary operators, as well as some shorter conversations with other mobile operators (mobiles tend to be a lot more limited in terms of antenna size, elevation, and, to a lesser extent, transmitting power).

But the neatest point in the trip was when I briefly became the best ham radio station in the whole area.

I had been talking with a stationary operator for around fifteen minutes. The signal went from bad to good to bad as I-95 took me closer to and then farther from his position. His signal was never stronger than S-5 (S-meters give a measure of signal strength from 1 to 9, on a logarithmic scale). About ten minutes after we said our good-byes and he faded into the radio-frequency mist, I arrived at the foot of the Delaware Memorial Bridge.

All of a sudden, the stationary operator I had been talking to earlier came in again. And his signal strength just kept getting better and better. We excitedly traded signal reports in a rapid-fire series of transmissions, each remarking on how much the other’s signal was improving by the second. My S-meter kept on climbing until it pegged at S-9, still 50 feet shy of the apex of the bridge. The other operator’s signal was full-quieting, meaning that his signal was so strong that not only could I hear him perfectly, but even the lulls between the words of his transmission were perfectly silent (his carrier was strong enough to overwhelm the ambient radio-frequency noise).

Then as I reached the apex of the bridge, some 200 feet in elevation above the ground and quite the enviable radio location, something really cool happened.

I was able to raise a contact from even earlier in the drive, much farther away than the operator who had only just come back into range. And in between the gaps in our conversation, I heard a multitude of other voices rising above the static, a chorus of conversations on the calling frequency many miles distant in all directions on the compass rose. So many things were being said at once that I couldn’t make sense of any individual transmission. I could only hear it all as a collective murmur. All of these people out there, each holding separate conversations — and unlike any of them, I could hear it all at once.

As I crested the apex of the bridge, the signal strength from my primary contact rapidly faded back down the S-meter, and with one last hurried transmission, we said good-bye. Then he, along with everyone else, was lost to the static, and I was alone again.

I caught the Twitter bug

Tuesday, February 24th, 2009

Sigh. A lot of other people at work were using Twitter, so now I am too. If I join anything else, I’ll need to think of a good way to organize all of my web presences. I guess this blog can be the mothership, and contain links to everything else.

So far I seem to be using Twitter as a dumping ground for my Google Talk status messages, so they are no longer lost to the mists of the intertubes when I switch to a new one. I don’t ever foresee myself updating it on the go from a mobile phone — I just don’t have that much of a desire to remain connected. Being off the grid can be good sometimes.

Firefox continues gaining market share, software flaws

Tuesday, February 3rd, 2009

Excellent news! My favorite web browser, Mozilla Firefox, has gained market share yet again and now commands 21.53% of the market. That’s a far cry from several years ago, when Firefox was just coming out and Internet Explorer was by far the dominant browser. I still remember all of those sites that only worked in Internet Explorer; because alternative browsers weren’t very popular, companies got away with it. Now that Internet Explorer “only” has 67.55% of the market, no one dares make a site that requires it and thereby alienate nearly a third of their potential customers. I can’t even remember the last time I saw an IE-only site.

Unfortunately, while Firefox keeps gaining market share, the software itself keeps gaining more and more problems. Firefox crashing has become a daily occurrence for me. I remember when it used to stay alive for months at a time (if only barely). At least it now saves the list of open tabs and lets you restore them when you restart, but you still lose lots of logged-in sessions and it’s just a big hassle. And while it is true that most Firefox crashes can be traced back to Adobe’s Flash plugin, there’s no excuse for Firefox allowing a bug in a plugin to take down the whole application. Google Chrome solved this by running each tab as a separate process. Firefox needs to do the same, or else it won’t keep gaining market share for much longer. As much as I hate to admit it, Firefox has some pretty significant flaws.

How browser security exploits hinder exploration of the web

Monday, December 22nd, 2008

It’s important to be able to feel safe while browsing the web, both in terms of what your software protects you against and what your own “web street smarts” protect you against. Users who don’t feel safe will restrict themselves to big sites run by recognizable companies and the sites they already visit regularly — still a perfectly valid way to use the web, sure, but one of the quirky charms of the web is all of the weird stuff that can exist only in this medium, and if you never venture out to see it, you’re missing out. An even worse category of user is one who feels safe but isn’t, thus exposing themselves to viruses, malware, and even identity theft. Unfortunately, it appears that everyone who uses Internet Explorer is in this category.

In the latest in a long line of Microsoft failings, yet another Internet Explorer bug has been discovered that allows pretty much arbitrary malicious control over your computer simply by viewing an infected website. This critical vulnerability was patched recently, but keep in mind that millions of computer users patch their software irregularly, and millions more never patch at all. The number of computer users vulnerable to this one exploit thus remains in the tens of millions, at least. Using Internet Explorer simply isn’t safe, and most people know it. The worst knock-on effect is that it causes people to adjust their browsing accordingly, treating the web as a shady inner-city neighborhood to be avoided rather than a beautiful vista that demands exploration.

Switching to Mozilla Firefox is a no-brainer. But even with Firefox, as long as you’re still running Windows, you’re still quite vulnerable. It’s possible for even the experienced web user to get caught by what appears to be a trial download of a legitimate piece of software that is actually a virus. This is one of the many reasons why I choose GNU/Linux as my operating system. I browse the web with impunity, journeying where most others dare not, because I have taken the necessary steps to truly protect myself. And the view from way up here is amazing.

Fixing ordering bias of U.S. presidential election candidates on Wikipedia

Monday, November 3rd, 2008

Today, upon getting home from work, one of the first things I did was check the Main Page of the English Wikipedia. It always has interesting content on there, and today was no exception. For the first time ever, two articles were featured on the front page: those of John McCain and Barack Obama. Except there was one little niggling problem: John McCain was listed first. Granted, his last name does come first alphabetically … but still. This is the Internet. We don’t have the limitations of printed paper ballots; there’s no reason the candidates have to be displayed in a static order. And I happen to be an administrator on the English Wikipedia, so I can edit any page on the site, including the main page and the site-wide JavaScript. So I fixed the ordering, presumably much to the delight of all of the people who had been complaining about bias on the talk page.

I took some JavaScript that was previously used in the Wikimedia Foundation Board elections, where the ordering of the several dozen candidates had proved to be a significant source of bias in previous elections, and added it to the English Wikipedia. Then I modified the main page slightly to use the JavaScript and, boom, the candidates now appear in a random order on each page load. I figure if this solution was good enough for WMF Board elections then it ought to be good enough for the United States presidential election, right?

So if you go to the main page of Wikipedia now, you should see either Barack Obama or John McCain on top, with a 50% probability of each (if you’re not seeing this behavior, flush your browser’s cache). Considering how many people view Wikipedia each day, I like to think this will make some kind of difference.

How to prevent Firefox from lagging badly when dragging selected text

Tuesday, October 28th, 2008

This past week I upgraded my system from Ubuntu 8.04 to Ubuntu 8.10. The upgrade was pretty smooth, with nothing much to report except that my system now boots without requiring the all_generic_ide kernel parameter, which is nice. One problem I immediately started seeing, however, was that my system would freeze up terribly whenever I selected more than a few words in Mozilla Firefox and tried dragging them anywhere. Depending on how large the block of text was, my entire system could freeze for minutes at a time as it spent several seconds drawing each frame of the dragged text block’s movement.

Well, I’d had enough of it, and I went looking for a solution. Firefox didn’t always render the entire contents of the selection being dragged-and-dropped; it used to just display a little icon next to the cursor. Here’s how to restore that functionality and remove the lag from the fancy but ultimately unnecessary fully rendered dragging:

  1. Type about:config into Firefox’s location bar and hit Return.
  2. In the filter text edit box at the top of the window, type nglayout.
  3. Double-click on the nglayout.enable_drag_images row to change its value to false.
  4. That’s it! Firefox will no longer try to render the contents of the selection to the screen as you drag words around. For older systems or systems with poor graphical support (like mine, apparently), this is pretty much a mandatory change. Enjoy your new, faster Firefox!

Web comics authors: Please stop using HTML image attributes

Monday, October 20th, 2008

I like XKCD. Everyone I know who’s heard of XKCD likes it as well. But there’s one little annoying thing its author, Randall Munroe, does that I wish he would stop: putting additional commentary about the strip into an HTML attribute on the comic’s image. Specifically, I’m referring to the title attribute, which is often incorrectly called the alt attribute (the name of the strip is actually what goes into the alt attribute). The contents of the title attribute are displayed when you hover your mouse over the image. The worst annoyance with the title attribute, namely that Firefox 2 wouldn’t display it in full unless you right-clicked on the image and opened the Image Properties dialog, has been fixed in Firefox 3, but there are still many other problems with the customary use of the image’s title attribute for displaying additional text commentary.

The main problem with the use of the image title attribute to inject additional humor is that it is not obvious from a user interface standpoint. I read the entire backlog of 200 or so XKCD strips when I first found out about the comic, only to then discover that I had completely missed out on the “hidden” joke on each one. And since it was such a big backlog, I never even bothered going back to check out the jokes. Simply placing them as text beneath the comics, as a sort of caption postscript, would have worked much better.

More recently, when I found out about the excellent web comic Daisy Owl, I again read the entire backlog without realizing there was additional content on each comic in the form of a title attribute. The use of the image title attribute is spreading like a malevolent virus! Now, it’s gotten to the point that I hover my mouse cursor over every web comic image for fear of missing anything, even though the vast majority thankfully don’t use this feature. Now that’s just a waste of my time.

Also, using the image title attribute for these purposes simply isn’t good according to web accessibility standards. The title attribute is meant to hold advisory information about an element (on a linked image, for example, a label for the link’s destination), while the alt attribute is meant to provide a text alternative for the image itself. Neither is intended to carry commentary that appears nowhere else on the page, and the pop-up text box most browsers show on mouse-over is merely common behavior, not something any standard requires. It doesn’t make sense to use an attribute against its intended purpose just because browsers happen to display it that way. Needless to say, using image attributes in “creative” ways also confuses the screen reader programs used by the blind, which rely on the image attributes actually being what they say they are.

So Randall, I love your strip, but please just put the additional commentary as plain text somewhere on the page below the image. The trick with the title attribute was cute at first, but is now just annoying, and I’m afraid it’s spreading across the blagosphere, with new web comics authors feeling compelled to put something in their image title attributes as well.

Review of Antec skeleton case neglects to mention RFI issues

Sunday, October 19th, 2008

I will admit to being fascinated by Antec’s latest case. It’s more of a skeleton than an enclosure, providing mounting points for all of a computer’s components, a few fans, and nothing else. I especially like how up to four additional hard drives (beyond the two it fits “internally”) can be clipped onto the outside. Despite the case’s goofy novelty, this really is something I could get into. I tinker with my computers a lot, often running them with the sides off in between swapping out hardware, so this wouldn’t be that much of a stretch. Heck, I’ve run computers with IDE ribbon cables connected to “external” hard drives sitting on top of the case; the skeleton case’s mounting option would’ve been really nice. And above all, I just like the idea of being able to see the components in my computer (which I paid quite a bit of money for) at all times.

But I’m also a bit of a realist. There is a good reason that all other consumer-level computer cases are, well, cases: it makes sense to put your computer’s delicate vitals inside of an enclosure. The case helps keep dust out. It also keeps objects from falling onto the computer’s components. Drop a sizable object onto a normal computer case and the worst that will likely happen is a large dent in the case. But even dropping a coin into the internals of an exposed skeleton case could short out some contact points on the motherboard, or get caught in a fast-spinning fan and turn it into flying shrapnel. Dropping anything larger could easily cause substantial damage to delicate internal components that a 1 mm thick steel case wouldn’t blink at. And let’s not forget the problem of spilling food or drink. Spill something on top of a normal case and odds are good you can quickly wipe it up before it seeps in (and the case itself will deflect most of it). Spill something into a skeleton case, and you’re almost guaranteed some kind of catastrophic failure.

But even if you’re never clumsy, and you set up your skeleton case in such a way that there is zero probability of anything ever falling onto or into it, there is another, less obvious problem lurking: radio frequency interference (RFI). One of the reasons computers and most other electronics are sold enclosed in metal cages is to prevent RFI (even when the exterior is plastic, there is an internal metallic Faraday cage enclosing the electronic components). Electronics are sold this way because of a sensible FCC requirement that household electronic devices not interfere with other devices. Since the skeleton case doesn’t ship with any electronics in it, it can get past the FCC, but no computer retailer would be able to sell a pre-built computer inside a skeleton case. Computers, with all sorts of components running at various clock speeds, emit radio waves across quite a range of frequencies.

The RFI produced by a computer can potentially interfere with nearby electronic devices. It might cause a hum on a speaker system, for instance, or produce static on a radio (ham radio operators on HF frequencies especially should steer far clear of skeleton cases). Depending on how severe the RFI produced by the computer is, and on which wavelengths, it could interfere with wireless mice and keyboards, or even a monitor. There’s no way to be sure, really — the specifics of RFI are really finicky, and depend as much on the characteristics of the receiving device as on those of the computer in the skeleton case. The interference also works both ways, so your computer could suffer some rather catastrophic crashes if parts of its circuitry happen to be resonant with a nearby source of radio waves. Considering that I pick up low-power AM radio through my bass guitar’s unshielded instrument cable when I turn the gain all the way up, it’s not far-fetched to imagine interference affecting an unshielded computer as well.

But I’m just making educated guesses. What we really need is cold hard data on how much RFI an unshielded computer puts out, and what sources of radio waves one might expect to interfere with the computer. Unfortunately, ExtremeTech didn’t examine this angle at all in their review, and my lack of a test bed (let alone the willingness to pony up $190 for the case) precludes me from finding out myself. So I really wish someone would do the requisite experimentation, because the skeleton case concept could be completely DOA for reasons less obvious than “you might drop stuff into it”.

Tab overload

Thursday, October 16th, 2008

It’s not uncommon for me to find Firefox using a gigabyte of RAM at any given time. Sure, that may seem like a lot, but I have a full 4 GB of RAM to work with, and Firefox is the most intensive thing I regularly use this computer for, so it works out just fine.

What’s that? You’re wondering how Firefox is the most resource-intensive program on my computer? Well, I have 98 tabs open at the moment. I just counted them. That says it all, really. Each tab is something I’ve come across in my web browsing that I’ve been meaning to read but haven’t gotten to yet. Yes, several dozen of the tabs are Wikipedia articles on a large variety of topics. Thanks to Firefox’s feature of saving all of the open tabs when you exit — or even when it crashes — some of these tabs are pages I’ve been meaning to read for literally weeks.

If you have fewer tabs open than I do at the moment, just be thankful that you haven’t dug yourself into such a deep web browsing hole. It would take days of nonstop reading to work this backlog off. Wikipedia is a fiend like that: each article generally links to several other articles that I also end up reading, and before long at that rate, you end up with a tab count in the triple digits. I once read most of the articles on World War II military technology over the course of some many-hour browsing sessions spread across several days — and that all started with looking up a single, completely unrelated article.

I also cannot remember how I ever possibly browsed the web before the era of tabbed browsing. Those must’ve been dark ages so painful my mind has completely blotted them from memory.

What C# does better than Java

Thursday, October 16th, 2008

I spend 90% of my development time at work using either Java or C# .NET, with a roughly equal split between the two. So I know a fair amount about both languages — enough to feel qualified to comment on the differences between them, anyway. Now, being a Free Software guy, my obvious preference is for Java, which is actually Free as in freedom (finally, anyway) and runs on a large variety of platforms. Given the choice of which to use on my personal projects, Java is a no-brainer. The best IDE for Java, Eclipse, is absolutely Free. The best IDE for C#, Visual Studio, is … well, it’s several hundred dollars and proprietary to boot. And it has the limitation of not running on or compiling for GNU/Linux; since I use Ubuntu as my home desktop operating system, that’s a deal breaker.

But just on a pure comparison between the languages, I have to say that C# is the better of the two. It’s not a fair comparison, because C# is many years younger and was able to learn from all of Java’s mistakes, but then again, that old adage about life not being fair still holds true. C# is the better language. It has lots of features that simply make it more pleasant to code in. One feature I would’ve killed for in Java while writing a recent project at work is properties. Here’s a sample of the code I wrote in Java:

writeOut(data.getAccount().getContract().getAddress().getAddress1());
writeOut(data.getAccount().getContract().getAddress().getAddress2());
writeOut(data.getAccount().getContract().getAddress().getCity());
writeOut(data.getAccount().getContract().getAddress().getZipCode());
writeOut(data.getAccount().getClient().getCoSigner().getFullName());

And it went on and on for dozens of lines; you get the drift. This is getter and parentheses overload. There’s no real reason the code has to be this messy. And with C#, it isn’t. Here’s how the same code would look in C#:

writeOut(data.Account.Contract.Address.Address1);
writeOut(data.Account.Contract.Address.Address2);
writeOut(data.Account.Contract.Address.City);
writeOut(data.Account.Contract.Address.ZipCode);
writeOut(data.Account.Client.CoSigner.FullName);

And yes, you could accomplish the latter in Java by making all member variables public, but that’s a bad idea. In C# you don’t have to make all of the member variables public to do this — you simply define them as properties, which allows fine-grained control over who can get and set each one, without all of the messiness of corralling dozens of separate getter and setter methods for each member variable.
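
For illustration, here’s roughly what declaring a couple of those properties looks like in C#. This is just a minimal sketch of the language feature using made-up names to mirror the hypothetical getters above; it isn’t code from my actual project:

public class Address
{
    // Auto-implemented property: anyone can read it, but only code inside
    // this class can set it. The compiler generates the backing field.
    public string Address1 { get; private set; }

    // A property can also wrap an explicit backing field when you want
    // some logic in the accessors.
    private string city;
    public string City
    {
        get { return city; }
        set { city = value.Trim(); }
    }

    public Address(string address1, string city)
    {
        Address1 = address1;
        City = city;
    }
}

Callers then use plain member syntax, like in the writeOut() lines above, and you can tighten or loosen access to each property later without touching any call sites.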

So if nothing else mattered, I would recommend and use C# to the exclusion of Java. But since other issues matter a lot more than programming conveniences, like software freedom, I do still recommend and use Java to the exclusion of C#. But Microsoft did put up a good effort.