Archive for March, 2009

Why I use Identica and you should too

Sunday, March 22nd, 2009

Those of you following me on Twitter may have noticed that all of my tweets come from Identica. I started off with Twitter but quickly switched over to Identica as soon as I learned about it. Identica, if you haven’t heard of it before, uses the same micro-blogging concept as Twitter (and in fact is compatible with it), but has several improvements. I recommend Identica, and if you aren’t using it yet, check out these reasons why you should.

There are several practical reasons you should use Identica:

  • All of your data is exportable from Identica, including your entire corpus of tweets. Twitter does not provide this functionality. Should you want to migrate away from Twitter down the road (for any of a variety of as-yet-unforeseen reasons), you won’t be able to, whereas you can migrate away from Identica easily at any point. And since Identica runs on the Free Software Laconica platform, you can even install Laconica on your own web host and import all of your data there, where you have complete control over it.
  • Identica has a powerful groups feature that allows people to collectively subscribe and see all tweets sent to a group (this is what the exclamation syntax you may have seen in tweets is about). Groups are a powerful way to build communities and have multi-party discussions, but Twitter does not have them.
  • You don’t have to quit Twitter. My Identica account is linked to my Twitter account, so every message that I send to Identica automatically appears on Twitter. Posting to Identica+Twitter takes the same amount of effort as posting to Twitter alone, except it is seen by more people.
  • Identica lets you see things from other people’s perspective. I’ll use myself as an example. You can see my entire tweet stream, which includes messages from all users and groups I’m following. This should give you a great idea of the kinds of things I’m interested in. And you can see all of the replies to me, which makes it a lot easier to track and understand conversations. Note that all of this is public information and is accessible on Twitter through trickier ways (in the first case, by looking at the list of a person’s followers and combining all their tweets in chronological order; in the second case, by searching for “@username” on the search subdomain), so you aren’t giving up any of your privacy. Identica simply makes these features a lot easier to use.
  • Some people you may end up finding and wanting to talk with don’t use Twitter at all; they’re only on Identica. Get on Identica and link it to Twitter and you can talk to everyone on both services. Just use Twitter, however, and you’re left out in the cold with regards to anyone who only uses Identica.

And there is one important ethical reason you should use Identica:

  • Identica is Free (as in freedom, not merely cost). Because it follows the Free software ethos, it respects your rights and maximizes your freedom to control your data as you see fit, including the ability to move all of your data elsewhere if necessary. Twitter does not respect these freedoms.

Australia blocks my page from their Internet

Wednesday, March 18th, 2009

A couple years ago, when I was more active on Wikipedia than I am now, I was trying to prove a point by compiling a list of all of the risque images on Wikipedia (link obviously NSFW). I don’t quite remember what that point was anymore, but the list remains. It has even survived a deletion attempt or two. I stopped maintaining it a long time ago, but for whatever reason, others picked it up and continued adding more pictures in my stead. I haven’t thought of it in a while.

So imagine my surprise when I learned that that silly page had made Australia’s secret national Internet censorship blacklist. I don’t understand the justification here — all of these images are hosted on Wikimedia servers, after all — but I have to laugh when I imagine some Australian apparatchik opening a report on this page, viewing it, making the determination that it’s not safe for Australian eyes, and adding it to the list without further thought, mate.

Australians, please take back control of your country.

A Python script to auto-follow all Twitter followers

Tuesday, March 10th, 2009

In my recent fiddling around with Twitter I came across the Twitter API, which is surprisingly feature-complete. Since programming is one of my hobbies (as well as my occupation), I inevitably started fooling around with it and have already come up with something useful. I’m posting it here, so if you need to do the same thing that I am, you won’t have to reinvent the wheel.

One common thing that people do on Twitter is they follow everyone that follows them. This is good for social networking (or just bald self-promotion), as inbound links to your Twitter page show in the followers list of everyone that you’re following. You’d think Twitter itself would have a way to do this, but alas, it does not. So what I wanted to do is use a program to automatically follow everyone following me instead of having to manually follow each person.

Other sites that interface with Twitter will do it for you (such as TweetLater), but I’m not interested in signing up for another service, and I’m especially not interested in giving out my Twitter login credentials to anyone else. So I needed software that ran locally. A Google search turned up an auto-follow script written in Perl, but the download link requires registration with yet another site. I didn’t want to do that so I decided to program it for myself, which ended up being surprisingly simple.

My Auto-Follow script is written in Python. I decided to use Python because of the excellent Python Twitter library. It provides an all-Python interface to the Twitter API. You’ll need to download and install Python-Twitter (and its dependency, python-simplejson, if you don’t have it already; sudo apt-get install python-simplejson does the trick on Ubuntu GNU/Linux). Just follow the instructions on the Python-Twitter page; it’s really simple.

Now, create a new Python script named auto_follow.py and copy the following code into it:

# -*- coding: utf-8 -*-
# (c) 2009 Ben McIlwain, released under the terms of the GNU GPL v3.
import twitter
from sets import Set

username = 'your_username'
password = 'your_password'
api = twitter.Api(username=username, password=password)

# Build a set of the screen names you are already following.
following = api.GetFriends()
friendNames = Set()
for friend in following:
    friendNames.add(friend.screen_name)

# Follow every follower who isn't already a friend.
followers = api.GetFollowers()
for follower in followers:
    if (not follower.screen_name in friendNames):
        api.CreateFriendship(follower.screen_name)
Yes, it really is that simple. Its operation can be summarized in one sentence: it gets all of your friends and all of your followers, then finds every follower who isn’t yet a friend and makes them one. Just make sure to edit the script to give it your actual username and password so that it can sign in.

Run the script and you will now be following all of your followers. Pretty simple, right? But you probably don’t want to have to keep running this program manually. Also, I’ve heard rumors that the Twitter API limits you to following 70 users per hour (as an anti-spam measure, I’m guessing), so if you have more than 70 followers you’re not following, you won’t be able to do it all at once. Luckily, there’s a solution for both problems: add the script as an hourly cronjob. This will keep who you follow synced with your followers over time, and if you have a large deficit in who you follow at the start (lucky bastard), it’ll slowly chip away at it each hour until they do get in sync. In Ubuntu GNU/Linux, adding the following line to a text file in /etc/cron.d/ (as root) should do it:

0 * * * * username python /path/to/auto_follow.py >/dev/null 2>&1

This will run the auto_follow script at the top of each hour. You’ll need to set the username to the user account you want the job to run under — your own user account is fine — and set the path to wherever you saved the auto_follow script. Depending on your GNU/Linux distribution and which cron scheduler you have installed, you may not need the username field, and this line might go in a different file (such as /etc/crontab). Refer to your distro’s documentation for more information.

So that’s it. That’s all it takes to automatically auto-follow everyone who’s following you — a dozen or so lines of Python, one crontab entry, and one excellent library and API. Enjoy.

Here’s a pretty bad Unicode WTF

Tuesday, March 3rd, 2009

I’m doing some research on Unicode and compression algorithms right now for a side-project I’m working on, and I came across a highly ranked Google search result for a UTF-8 munging code snippet that is so idiotic I couldn’t let it pass without comment. If this post helps even one person who would’ve otherwise followed the linked advice, it is worth it.

First, some background. UTF-8 is a character encoding that can handle pretty much any character under the Sun, from the English alphabet to Japanese kanji to obscure extinct languages. It even includes thousands of esoteric symbols used in smaller fields of study that you’ve probably never heard of before. But the nice thing about UTF-8 is that it is variable-length: standard ASCII characters (including everything on a standard English keyboard) take only one byte to represent, characters from most other widely used alphabets take two bytes, common CJK characters take three, and only the really obscure characters require four.
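The variable-length property is easy to see for yourself. Here’s a quick sketch (in modern Python syntax; the sample characters are just illustrative picks from each byte-length tier):

```python
# UTF-8 uses 1 byte for ASCII, 2 for most other alphabets,
# 3 for common CJK characters, and 4 for rare code points.
for ch in ['A', 'é', '世', '𝄞']:
    print(ch, len(ch.encode('utf-8')))
```

Running it prints the encoded byte count climbing from 1 to 4 as the characters get more exotic.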

So now you see why the linked “solution” is so stupid. This guy says he is “designing a little client/server binary message format” and wants “a simple and quick way to encode strings”. Well, duh — use UTF-8, no ifs, ands, or buts about it. It’s simple, quick, and already implemented in any programming language you can think of, so it requires no additional coding. There are all sorts of dumb ways to unnecessarily reinvent the wheel in software engineering, but trying to come up with your own character encoding is particularly idiotic. It’s really tricky to get right because there are so many corner cases you’ll never even know existed until they cause your application to break. The Unicode Consortium exists for a reason — what they do is hard.
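For comparison, encoding a string for a binary message format with stock UTF-8 really is a one-liner in nearly any language. A Python sketch (the message string is made up for illustration):

```python
# Stock UTF-8: no custom encoder, no mangled characters.
msg = 'naïve 世界 𝄞'            # mixes 1-, 2-, 3-, and 4-byte characters
payload = msg.encode('utf-8')    # bytes, ready for the wire
assert payload.decode('utf-8') == msg  # round-trips losslessly
```

Every character survives the round trip, including the ones longer than two bytes that the linked snippet throws away.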

This guy even confesses that his expected input will probably not contain Unicode characters that are longer than 2 bytes. So there is no justification at all for what he does next — he creates a mangled version of UTF-8 that turns all Unicode characters 3 bytes and longer into question marks, instead of just leaving them as is. So instead of allowing a rare character to take an additional byte or two, it gets mangled. And to accomplish this, he has to create his own custom encoding solution that is an intentionally broken version of UTF-8. That’s the worst part — he’s wasting time creating completely unnecessary code, that will need to be maintained, that will need to be debugged — and for what?

Of course, none of the people responding to his thread point out that what he is trying to do is stupid. They just smile and hand him some rope.

The joys of 2 meter simplex

Monday, March 2nd, 2009

I’m up in Parsippany, New Jersey at the moment on business travel. That in itself wouldn’t be anything special, except that the eastern seaboard was just rocked by a huge snowstorm. I had to leave a day early to ensure that I made it here for an important meeting Monday morning, only for that meeting to be canceled while I was en route and incommunicado. To add insult to injury, none of the client employees I work with even showed up for work today, and my car died at the hotel this morning so I walked to the client site in the falling snow. And just for some added excitement, I had to run to escape the torrent from an oncoming snowplow at one point.

The drive up here was no picnic either. About an hour in it started raining, then quickly turned to snow. Thankfully none of it started sticking to the road until I arrived at the hotel four hours and many wrong turns later (not the best time to try a new route). I saw a surprising number of other vehicles driving in the snowstorm without lights on, including one semi-trailer which kept disappearing and re-emerging from the mist of snow in a terrifying fashion. Even my high beams didn’t provide nearly enough illumination to see the road ahead of me. This was made worse by the constant glow of headlights shining over the Jersey barrier from vehicles in the opposite direction, like some dividing line across the horizon, which lit up the entire sky from about six feet above the road on up. The road was thus made darker and harder to see by contrast.

The only thing that hasn’t sucked about this trip so far is ham radio. Sunday night is an excellent time to work the ham bands, which is what I spent my whole commute doing. Repeater contacts have become passé for me these days because they are so easy; at any random point along the I-95 corridor, you can hear at least a couple simultaneous conversations on various nearby repeaters. As such, I focus mostly on making simplex (direct) contacts, which at least provides somewhat of a challenge, especially while operating mobile. I was mostly using the national calling frequency on 2 meters, which is 146.520 MHz, though I did talk to one man on another simplex frequency while idly scanning the band.

I made more simplex contacts during this trip than I ever have before. At one point I was talking with two to three people simultaneously, a feat I’ve never experienced outside of pre-arranged simplex nets while operating stationary. I had some pretty long conversations with stationary operators, as well as some shorter conversations with other mobile operators (as mobiles tend to be a lot more limited in terms of antenna size, elevation, and to a lesser extent, transmitting power).

But the neatest point in the trip was when I briefly became the best ham radio station in the whole area.

I had been talking with a stationary operator for around fifteen minutes. The signal went from bad to good to bad as I-95 took me closer and then farther from his position. His signal was never stronger than S-5 (S-meters give a measure of signal strength from 1 to 9, on a logarithmic scale). About ten minutes after we said our good-byes and he faded into the radio-frequency mist, I arrived at the foot of the Delaware Memorial Bridge.

All of a sudden, the stationary operator I had been talking to earlier came in again. And his signal strength just kept getting better and better. We excitedly traded signal reports in a rapid-fire series of transmissions, remarking on how much the signal quality of the other was improving by the second. My S-meter kept on climbing until it pegged at S-9, still 50 feet shy of the apex of the bridge. The other operator’s signal was full-quieting, meaning that his signal was so strong that not only could I hear him perfectly, even the lulls between the words of his transmission were perfectly silent (because his carrier was so strong that it overwhelmed the ambient radio-frequency noise).

Then as I reached the apex of the bridge, some 200 feet in elevation above the ground and quite the enviable radio location, something really cool happened.

I was able to reach my previous contact again, now much farther distant than even the current contact who had just come back into range. And in between the gaps in our conversation, I heard a multitude of other voices rising above the static, a chorus of conversations on the calling frequency many miles distant in all directions on the compass rose. So many things were being said at once that I couldn’t make sense of any individual transmission. I could only hear it all as a collective murmur. All of these people out there, each holding separate conversations — and unlike any of them, I could hear it all at once.

As I crested the apex of the bridge, the signal strength from my primary contact rapidly faded back down the S-meter, and with one last hurried transmission, we said good-bye. Then he, along with everyone else, was lost to the static, and I was alone again.