Archive for the 'GNU/Linux' Category

My once-tiny GNU/Linux desktop morphs beyond all recognition

Tuesday, April 14th, 2009

Enermax Chakra
Almost a year ago, I bought a cute little desktop from Dell with the intent of using it as a GNU/Linux desktop alongside my existing Windows desktop. Its name is Vertumnus. But things don’t always turn out as planned. I quickly started using Vertumnus as my exclusive desktop PC, booting the Windows machine only to play games. Eventually I reformatted the Windows computer, and the only applications I’ve reinstalled have been games, so it’s pretty much reduced to a gaming appliance at this point, like an Xbox 360, but better.

The only problem is that when I originally bought Vertumnus, I didn’t have all of this in mind, and so I bought it rather under spec. I would’ve been better off just buying a better computer from the get-go. As a result, I’ve had to do quite a few upgrades over the past year to get it to meet my needs. From the very beginning I added more RAM and another hard drive. Then it joined a Stand Alone Complex. Then I added another hard drive. From the outside it still looked the same, but a lot of the interior was upgraded. Now even that is no longer true.

Yesterday, I spent two hours (and another $160) redoing the computer even further. The case was too cramped and was preventing further upgrades. So I moved the computer into a new case, the Enermax Chakra. It’s appreciably bigger than the previous Dell case. It’s also a lot more flexible on the inside in terms of which parts will fit into it. Why the Chakra? I only had two criteria, but the Chakra was pretty much the only case that met both of them: 1) It had to have a 250mm fan, but 2) No LEDs. Both criteria come from my computer living in my bedroom: it has to be silent (hence a big, slow-spinning fan) and it has to be dark, so that I can sleep!

Since the case didn’t come with any fans besides the huge 250mm one, I purchased two of the quietest 120mm fans in existence, the Scythe Gentle Typhoon. Again, my criteria were the same: quiet and no LEDs. The Gentle Typhoons best met those. I also had to get a new power supply, because the 250-watt unit from Dell couldn’t handle the video card I was about to put in. So I went with the Corsair 550W PSU. It was the power supply that best met my criteria: high efficiency (85%!), quiet (a big 120mm fan), and no LEDs. And it’s more than enough to power the video card that I put in, a hand-me-down GeForce 8800 GTS. Yes, that’s right, I finally got tired of the inferior performance of Intel’s integrated graphics. Now I can actually play modern 3D games in GNU/Linux.

And as if all that wasn’t enough, while transitioning all of the parts from one case to another, the CPU fan developed a faulty bearing that makes it obnoxiously loud. So the first thing I hear upon starting up my supposed-to-be-silent computer is a loud whirring noise. Rather than give up my dreams of a silent computer, I ordered a replacement CPU fan/heatsink, the Arctic Cooling Freezer 7 Pro. Why that one? I already have one in my Windows computer and it cools really well. Plus it’s quiet. It hasn’t arrived yet, but it’s going into Vertumnus as soon as it does.

The new GeForce 8800 GTS is so large that it covers up one of the SATA ports on the Dell motherboard (and another one is rendered inaccessible to all but right-angle SATA connectors). Since I have three SATA hard drives and one SATA DVD-R drive, that’s a problem. The DVD drive is currently unplugged, but I’ll swap it out for an IDE DVD-R drive from my Windows desktop soon — thankfully, the video card doesn’t block the IDE port.

Once all of this is done, the only parts that will remain in Vertumnus from the original purchase will be the Intel Core 2 Duo E7200 processor, two 1 GB sticks of DDR2 RAM, the motherboard, and one 500 GB hard drive. And that’s after less than one year. Clearly, I tried saving too much money by buying a system far below my ultimate desired specifications, then wasted a bit more than those savings on upgrades. And I can’t even say the upgrades are done. At some point I’m going to need another hard drive, but since I’m all out of SATA ports, I’ll either have to get an add-in card or replace the motherboard. The original RAM that Dell shipped was pretty slow, and can easily (and cheaply) be replaced with something better. And the processor is looking slightly anemic. A nice quad-core processor would be fun to play around with …

Long story short, in another year, it’s quite possible that the only components remaining from my original purchase will be the 500 GB hard drive and a SATA cable or two. I guess I learned my lesson. Don’t try to save too much money on a computer if, at heart, you’re really just a techie who demands performance.

A Python script to auto-follow all Twitter followers

Tuesday, March 10th, 2009

In my recent fiddling around with Twitter I came across the Twitter API, which is surprisingly feature-complete. Since programming is one of my hobbies (as well as my occupation), I inevitably started fooling around with it and have already come up with something useful. I’m posting it here, so if you need to do the same thing that I am, you won’t have to reinvent the wheel.

One common thing people do on Twitter is follow everyone who follows them. This is good for social networking (or just bald self-promotion), as inbound links to your Twitter page show up in the followers list of everyone you follow. You’d think Twitter itself would have a way to do this, but alas, it does not. So I wanted a program to automatically follow everyone following me, instead of having to follow each person manually.

Other sites that interface with Twitter will do it for you (such as TweetLater), but I’m not interested in signing up for another service, and I’m especially not interested in giving out my Twitter login credentials to anyone else. So I needed software that ran locally. A Google search turned up an auto-follow script written in Perl, but the download link requires registration with yet another site. I didn’t want to do that so I decided to program it for myself, which ended up being surprisingly simple.

My Auto-Follow script is written in Python. I decided to use Python because of the excellent Python Twitter library. It provides an all-Python interface to the Twitter API. You’ll need to download and install Python-Twitter (and its dependency, python-simplejson, if you don’t have it already; sudo apt-get install python-simplejson does the trick on Ubuntu GNU/Linux). Just follow the instructions on the Python-Twitter page; it’s really simple.

Now, create a new Python script named auto_follow.py and copy the following code into it:

#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2009 Ben McIlwain, released under the terms of the GNU GPL v3.
import twitter

username = 'your_username'
password = 'your_password'
api = twitter.Api(username=username, password=password)

friend_names = set()
for friend in api.GetFriends():
    friend_names.add(friend.screen_name)

for follower in api.GetFollowers():
    if follower.screen_name not in friend_names:
        api.CreateFriendship(follower.screen_name)

Yes, it really is that simple. I’d comment it, but what’s the point? I can summarize its operation in one sentence: It gets all of your friends and all of your followers, and then finds every follower that isn’t a friend and makes them a friend. Just make sure to edit the script to give it your actual username and password so that it can sign in.
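To spell out that one sentence a bit more: the heart of the operation is just a set difference, followers minus friends. A minimal sketch of the same logic with plain built-in sets (the screen names here are hypothetical stand-ins for what the API returns):

```python
# Hypothetical screen names standing in for real API results.
following = {'alice', 'bob'}
followers = {'alice', 'carol', 'dave'}

# Everyone following you whom you don't yet follow back:
to_follow = followers - following

print(sorted(to_follow))  # → ['carol', 'dave']
```

The script just performs this difference lazily, by checking each follower against the set of friends as it goes.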

Run the script and you will now be following all of your followers. Pretty simple, right? But you probably don’t want to have to keep running this program manually. Also, I’ve heard rumors that the Twitter API limits you to following 70 users per hour (as an anti-spam measure, I’m guessing), so if you have more than 70 followers you’re not following, you won’t be able to do it all at once. Luckily, there’s a solution for both problems: add the script as an hourly cronjob. This will keep who you follow synced with your followers over time, and if you have a large deficit in who you follow at the start (lucky bastard), it’ll slowly chip away at it each hour until they do get in sync. In Ubuntu GNU/Linux, adding the following line to a text file in /etc/cron.d/ (as root) should do it:

0 * * * * username python /path/to/auto_follow.py >/dev/null 2>&1

This will run the auto_follow script at the top of each hour. You’ll need to set the username to the user account you want the job to run under — your own user account is fine — and set the path to wherever you saved the auto_follow script. Depending on your GNU/Linux distribution and which cron scheduler you have installed, you may not need the username field, and this line might go in a different file (such as /etc/crontab). Refer to your distro’s documentation for more information.

So that’s it. That’s all it takes to automatically auto-follow everyone who’s following you — a dozen or so lines of Python, one crontab entry, and one excellent library and API. Enjoy.

How to prevent Firefox from lagging badly when dragging selected text

Tuesday, October 28th, 2008

This past week I upgraded my system from Ubuntu 8.04 to Ubuntu 8.10. The upgrade was pretty smooth, with nothing much to report except that my system now boots without requiring the all_generic_ide kernel parameter, which is nice. One problem that I immediately started seeing, however, was that my system would freeze up terribly whenever I selected more than a few words in Mozilla Firefox and tried dragging them anywhere. Depending on how large the block of text was, my entire system could freeze up for minutes at a time as it spent several seconds drawing each frame of the text block moving.

Well, I’d had enough of it, and I went looking for a solution. Firefox didn’t always render the entire contents of the selection being dragged-and-dropped; it used to just display a little icon next to the cursor. Here’s how to restore that functionality and remove the lag from the fancy but ultimately unnecessary fully rendered dragging:

  1. Type about:config into Firefox’s location bar and hit Return.
  2. In the filter text edit box at the top of the window, type nglayout.
  3. Double-click on the nglayout.enable_drag_images row to change its value to false.
  4. That’s it! Firefox will no longer try to render the contents of the selection to the screen as you drag words around. For older systems or systems with poor graphical support (like mine, apparently), this is pretty much a mandatory change. Enjoy your new, faster Firefox!
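(If you prefer editing a file to clicking through about:config — or want to carry the setting across profiles — the same preference can, as I understand it, also be set in a user.js file in your Firefox profile directory. The preference name is the one from step 3; the profile path varies per installation:)

```js
// In <your profile directory>/user.js — re-applied each time Firefox starts.
// Disables fully rendered drag images for selected text:
user_pref("nglayout.enable_drag_images", false);
```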

Stephen Fry celebrates GNU’s 25th birthday

Wednesday, September 3rd, 2008

Now this is a slightly unexpected, yet nevertheless entirely awesome, bit of news. Stephen Fry, the famous British comedian of Fry & Laurie fame (the Laurie being Hugh Laurie, the actor who plays Dr. House on House), has released a celebratory message to GNU on its 25th anniversary. It contains a good bit of background on GNU and Linux, though nothing that should be new to you if you’ve been involved in the Free Software community for a while.

Still, it’s a nice video, and it’s cool to see someone so, well, famous extolling the virtues of Free Software. Check it out! Unfortunately, it’ll work a lot better in the United Kingdom than here in the United States, since they actually know who he is. We just need to get an American equivalent to tape something equally full of praise for GNU/Linux. How about … Scarlett Johansson?

KDE 4.1 is out and good enough for everyday use

Thursday, August 14th, 2008

Version 4.1 of the K Desktop Environment (the first release of KDE 4.x suitable for everyday use) for GNU/Linux came out recently, and last week I decided to give it a try. It’s pretty good, but it’s still a bit unpolished. Installing it in Ubuntu was simple, except for an error with one of the packages that forced me to remove them all, then install them again in a specific order using the dist-upgrade command. The process will become smoother when it hits the main Ubuntu branch, but for now, just be forewarned that it still has a bit of a “beta” feel to it.

KDE 4.1 also did something weird with my xorg.conf, so I have to restore it to a working version upon each boot or I get dumped back into the basic vesa driver which can only display a resolution of 640×480. Luckily I hardly ever reboot, or this would be more of an annoyance. Again, I expect this to be something that’s fixed in the final release. I don’t think these problems are even KDE 4.1’s fault, but rather, a problem in the way the Ubuntu packager configured things.

So, after debugging the problems (and you wouldn’t even be interested in checking out bleeding edge software releases if you seriously minded contributing to the edge being bleeding in the first place), KDE 4.1 is up and running, and it’s really nice. Whereas KDE 3.5 seemed to draw inspiration for its appearance from Windows 95, KDE 4.1 draws its inspiration from Windows Vista, and even gives it a run for its money. KDE 4.1 is a pleasure to look at, right from the boot-up screen all the way to the everyday tasks performed in the desktop environment. Even the applications menu, with its smoothly animated sliding pages, is well done. Appearance matters a lot more than most free software folks will admit to, so it’s good to see a free software project that really gets it.

KDE 4’s best new feature has to be the desktop plasmoids. A plasmoid is a view of a folder that goes on the desktop, so it is always beneath all other opened applications and does not show up on the taskbar. The simplest use of a plasmoid is to show the contents of the desktop folder (creating a desktop folder plasmoid the size of the entire screen emulates the behavior in other desktop environments). Plasmoids are nice because they corral the icons that normally overwhelm a desktop into a nice sortable box. And then the real power of the plasmoid is revealed when you create other plasmoids — one plasmoid for your Documents folder, another for the download directory for Mozilla Firefox, another for the download directory for BitTorrent, etc. All of the files you need are always at your fingertips, in a neat, orderly manner that doesn’t overwhelm your taskbar. Organizing your files is as easy as dragging icons from one plasmoid to another. It’s such an incredible user interface improvement that it makes you wonder why no one else has thought of it before. Oh wait, they sort of have — anyone remember Windows 3.1 and its persistent folder windows?

KDE 4.1 is also lacking some configuration options in comparison to KDE 3.5, but it’s apparently already a lot better than KDE 4.0 was, and most of the configuration options should be restored soon in future releases. All of the basic options are there, you just don’t have the kind of intricate configurability that long-time KDE users might expect.

I would love to write about all of the new desktop widgets, but alas, I couldn’t get any of them working, and this error is echoed by others out there trying out the Ubuntu build of KDE 4.1. This looks like another error by the packager. Oh well. KDE 4.1 on Ubuntu is still perfectly usable as is; it just doesn’t have all the bells and whistles. If the problems I’ve listed so far aren’t deal-breakers for you, go ahead and download KDE 4.1 and try it out. Otherwise, you might want to wait another few weeks or so until the official mainline release is out. Even if you’ve never used KDE before (which is quite common for Ubuntu users, seeing as how Gnome is the default desktop environment), you ought to give it a serious try. You might really like it.

This Mozilla/Ogg thing could end up being really important

Monday, August 4th, 2008

It’s just starting to sink in for me how important the recent inclusion of the Free Software Ogg codecs in Mozilla Firefox 3.1 will turn out to be, especially concerning the Ogg Theora video codec. This will be the first chance for a non-proprietary video codec to really break into the mainstream. Combine Firefox’s now-native support for it (with its >20% market share) and Wikipedia, which only accepts video uploads in Ogg Theora format, and we have a powerhouse for advancing the adoption of non-proprietary codecs. This is big news. Hell, I was interviewed by LinuxInsider on the topic and all I’m really responsible for is increasing public knowledge of this recent event.

As I said in that article, we’re close to reaching the point where video will be natively supported by all browsers on all platforms just as smoothly as images are today. This will have an amazing effect on the usability of the web, and by extension, what humanity is capable of doing with it. It will certainly give many companies (especially smaller start-ups with less funding) a better chance to establish a video foothold on the web, with no more licensing of finicky Flash players or H.264 codecs required. Naturally, it will do wonders for the ease of including video content on personal sites as well.

But don’t think the war is won just yet. There are many hard battles yet to fight in the war for adoption of non-proprietary multimedia codecs. We already lost one of the battles, when Apple and Nokia argued vociferously (and successfully) to remove the Ogg Vorbis and Ogg Theora wording from the HTML 5 draft spec. But the Mozilla Foundation has now successfully managed to ensure that Ogg codec compliance can no longer be ignored. And surprisingly, Microsoft isn’t even the enemy here. As I pointed out in the article, Microsoft isn’t averse to using non-proprietary codecs — they used Ogg Vorbis to handle music in the PC release of Halo, for instance. No, the real enemies here are Nokia and Apple, two members of the MPEG-LA patent pool who are currently making millions of undeserved dollars off of questionable cartel-held software patents that stifle innovation in the multimedia web space and hinder adoption of web video.

The big patent-holders like Apple and Nokia are arguing so tenaciously because they know that once non-proprietary codecs have gained a foothold in any niche, the proprietary codecs lose it permanently. Free (as in free speech) codecs have such clear advantages over non-free codecs, not least of which is that multimedia device manufacturers don’t have to pay licensing fees, that once a free codec becomes viable, no non-free codec will ever be able to reclaim that niche again. So the patent holders will fight tooth-and-nail against losing their cash cows, but inevitably that is what will happen. It’s only a matter of time. We’ve already seen it with the image and document formats — now audio and video are next.

Firefox gets Ogg

Wednesday, July 30th, 2008

Great news, Free Software fans! As of last night, out-of-the-box support for the Ogg Theora (video) and Ogg Vorbis (audio) open format codecs was enabled on the mainline Firefox development branch. Here’s the exact diff. These two codecs work in conjunction with the new <video> and <audio> tags, which will be supported in the next major release of Firefox, 3.1. If you’re feeling impatient, you can download the nightly 3.1 release which already includes the brand new Ogg codec support.

But what is the advantage of native browser support for the new tags, you may be wondering? The HTML 5 spec has lots of details, but what it boils down to is no longer having to rely on kludgy proprietary plugins like Flash or Quicktime (which often don’t work well cross-platform, I might add) to display multimedia content. The new tags work just like the current <img> tag does: feed them the URL to the appropriate media resource and they display it, just as simply as one might include a JPG image in a webpage. It’s such an obvious improvement over the previous state of affairs of dealing with online video that it really makes you wonder why it took so long. We’re several years into the online video revolution now (led by such giants as YouTube), so it’s only fair that we finally get native browser support for videos.
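To sketch just how close the parallel is (file names here are hypothetical), embedding a Theora video becomes a single tag, exactly like embedding an image:

```html
<!-- Hypothetical file names; the <video> tag works in Firefox 3.1 nightlies -->
<img src="screenshot.jpg" alt="A screenshot">

<video src="screencast.ogg" controls>
  Fallback text for browsers without native video support.
</video>
```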

It’s important to point out that not only are the Ogg codecs free (as in both speech and beer) and unencumbered by patents, but that Ogg Theora’s performance has recently been significantly improved. It’s not quite as good as H.264, but it is better than many of the previous generation’s proprietary codecs, and it’s currently the best video codec around that is compatible with the Free Software philosophy. That’s why the Mozilla Foundation chose it to provide out-of-the-box video support in Firefox — all of the alternatives currently widely used for web video, such as flv, H.264, or DivX, are copyright and patent-encumbered, and thus could not be included in Firefox. It’s worth pointing out that Ogg Theora is also the only video codec allowed on all Wikimedia Foundation projects, including Wikipedia.

Not too long from now, after Firefox 3.1 is released, a significant double-digit percentage of web users will have Ogg-enabled browsers. That will be a huge achievement for the Xiph.Org Foundation. Expect to see a lot more online video in the Free Software world, and hopefully a migration away from Flash video players, which I still can’t for the life of me get to work reliably in GNU/Linux. Once the <video> tag does start cropping up in a large number of places, will the competing browsers like Internet Explorer and Safari have any choice but to support it as well? Since all of the Ogg codecs are released under BSD-style — not GPL-style — licenses, there’s nothing stopping them!

How to learn Morse code in GNU/Linux

Monday, June 23rd, 2008

I know what you’re thinking — as if GNU/Linux and ham radio couldn’t possibly be nerdy enough when separate, let’s put them together! But let’s take a step back …

I started getting involved with ham radio just three months ago with VHF/UHF voice FM, and already I’m hungering for more. I don’t have an HF rig yet, and might actually not have one for a while, but since I know it’s something I’ll want to do eventually, I figure I should just start learning Morse code now. As for why I want to learn Morse code, I couldn’t exactly tell you — there’s just a certain romance to it, and pounding away on a key is such a delightfully different method of communicating than just speaking into a microphone. But ignoring why I want to learn it, here’s how I’m going about doing it, in GNU/Linux no less.

Learning Morse code on the computer is actually harder than it should be. I couldn’t find any Flash or Java applets that do something as simple as generate Morse code. Seriously. I found some really old Java applets that no longer function in current JDKs, but they don’t count. I found lots of DOS programs, many of which are pushing two decades old, but I wasn’t having much luck with them even under Windows. And since I’m running GNU/Linux as my primary desktop now, these programs weren’t helpful at all. Luckily, there’s a simple up-to-date command-line utility for GNU/Linux that does all the basics with a minimum of fuss.
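To be fair, the translation half of the problem really is trivial — if all you want is the dot-dash rendering of some text (leaving the audio side to a proper training program), a few lines of Python cover it. Here’s an illustrative sketch with a deliberately partial table, just the twelve letters of the classic ETAOIN SHRDLU frequency-order drill:

```python
# Partial Morse table: the letters of the ETAOIN SHRDLU drill only.
MORSE = {
    'E': '.',    'T': '-',    'A': '.-',   'O': '---',
    'I': '..',   'N': '-.',   'S': '...',  'H': '....',
    'R': '.-.',  'D': '-..',  'L': '.-..', 'U': '..-',
}

def encode(text):
    """Translate text to dot-dash groups, one space-separated group
    per letter; characters not in the table are silently skipped."""
    return ' '.join(MORSE[c] for c in text.upper() if c in MORSE)

print(encode('shr'))  # → ... .... .-.
```

It generates the symbols, but of course it doesn’t key them out as audio at a controlled speed — which is what you actually need for training, and what the utility below provides.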

First, you’ll want the morse program. In Ubuntu or Debian GNU/Linux, you can do the following:

sudo apt-get install morse

If you’re not using Ubuntu or Debian, you should be able to find it using the package manager in your distro of choice.

Now, learning Morse is as simple as passing in the right command-line parameters to morse. Here’s what I’ve started with:

morse -rC 'ETAOINSHRDLU' -w 5 -Ts


64-bit GNU/Linux is totally ready for mainstream use

Monday, June 16th, 2008

When I was installing Ubuntu GNU/Linux with KDE on my latest desktop purchase, I faced a seemingly agonizing decision between 32-bit and 64-bit. There are all sorts of peripheral arguments over performance, but the main arguments on each side are that 32-bit can only support 4 GB of RAM (not technically true) and that 64-bit has limited application support and is more buggy.

Well, I’m happy to report that all of the supposed caveats of 64-bit GNU/Linux completely failed to materialize. After over a week of heavy usage of 64-bit Ubuntu, and installation of a few hundred applications, I haven’t run across a single problem stemming from my decision to use 64-bit. So I would say the choice of 64-bit is a no-brainer. 64-bit has reached maturity, and all of the supposed problems with it are problems of the past. 64-bit is the future of computing (just like 32-bit was the future of computing back when 16-bit was still common). It’s better to make the switch now than to find yourself a year or two down the line facing a 64-bit reinstallation of a 32-bit system. This choice is pretty much set in stone when you install an operating system; there is no upgrade path. So make the correct choice now.

I should point out that not all processors support 64-bit OSes. The older the processor, the less likely it is to offer 64-bit support. So do your due diligence before you accidentally end up downloading the wrong version of a GNU/Linux distribution ISO.
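On a GNU/Linux system you can check both things from a terminal before downloading anything. A quick sketch, assuming a Linux /proc filesystem:

```shell
# What the running kernel is (x86_64 means you're already on 64-bit):
uname -m

# Whether the CPU itself is 64-bit capable: look for the "lm"
# (long mode) flag in /proc/cpuinfo.
grep -qw lm /proc/cpuinfo && echo "CPU is 64-bit capable" \
                          || echo "no lm flag found"
```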

Bringing a Windows mindset to a GNU/Linux world

Thursday, June 12th, 2008

I just ran across a level of stupid so off the charts I had to immediately comment on it here lest my inaction unwittingly foster an environment tolerant of such stupidity. Allow me to quote from a post on Linuxforums:

When I say cd’d I mean I used the command cd, to change directory.
So for example say I downloaded and extracted the drivers to the desktop I would open a Konsole window and type:
sudo cd /home/sebmaster/desktop/[folder extracted to]/
(You probably dont need sudo but I have got into the habit of adding it before pretty much everything)

Those of you who are familiar with GNU/Linux should see this heaping mound of stupidity for what it is immediately, and will likely find the following explanation superfluous. For the rest of you, here’s a detailed explanation.

There are two distinct nexuses (nexi?) of stupidity inherent in this quote. The first is the notion that sudo, a wrapper program that executes the program passed to it as an argument with root (administrator) privileges, will do anything with the change directory command. It won’t. cd is a shell builtin; it is not a program. sudo can’t even find it. The exact error message I get is “sudo: cd: command not found”. And even if cd were a program, using it in this way wouldn’t do anything, since the new working directory would be lost when the sudo subshell terminated. And even if that did work, it still wouldn’t be useful, because there’s no point in setting your working directory to a directory you don’t have access to anyway. You’re still going to need to use sudo with every subsequent command just to get access to those files, so the sudo cd is superfluous; just skip the cd altogether and use a qualified path to the files.
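You can verify the first point from any shell yourself — cd has to be a builtin, because a child process can’t change its parent’s working directory. A quick sketch (the paths in the comments are hypothetical):

```shell
# cd is a shell builtin, not an executable on disk that sudo could run:
command -v cd            # prints: cd
# which is why "sudo cd /some/protected/dir" dies with
# "sudo: cd: command not found". The useful pattern is to skip the cd
# and qualify the path on each privileged command instead, e.g.:
#   sudo cp /some/protected/dir/driver.tar.gz /tmp/
```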

But that’s not even touching on the second (and greater) nexus of stupidity, which is the very-Windows-like mindset that everything should be run as administrator. Saying “You probably dont need sudo but I have got into the habit of adding it before pretty much everything” is like saying “You probably don’t need a live hand grenade but I have got into the habit of carrying one around with me everywhere I go.” Like a live hand grenade, sudo is potentially very dangerous, as the root account has total access to the system (so simple mistakes or security compromises become far worse than they would with mere user account permissions). The mantra to live by is: Never run anything as root unless it is absolutely necessary. As soon as I read that this faithful deliverer-of-the-stupid executes pretty much everything as root out of force of habit, I stood up from my computer, placed my hand over my face, and let out a very long, exasperated sigh. Why doesn’t he just su at the beginning of every terminal session and get it over with?

Oh wait, I probably shouldn’t have said that. He’s probably going to read that last bit, miss all the rest of the content in this post, and think that’s a good idea. “Hey, now I don’t even have to type sudo anymore, because everything I do is always as root!” Yes, even changing directories.