Why I use Identi.ca and you should too

Sunday, March 22nd, 2009

Those of you following me on Twitter may have noticed that all of my tweets come from Identica. I started off with Twitter, but I quickly switched over to Identica as soon as I learned about it. Identica, if you haven't heard of it before, uses the same micro-blogging concept as Twitter (and in fact speaks a Twitter-compatible API; there's a short code sketch of that after the list of practical reasons below), but it has several improvements. I recommend Identica, and if you aren't using it yet, here are the reasons why you should.

There are several practical reasons you should use Identica:

  • All of your data is exportable on Identica, including your entire corpus of tweets. Twitter does not provide this functionality. Should you want to migrate away from Twitter down the road (for any of a variety of as-yet-unforeseen reasons), you would be unable to take your data with you, whereas you can easily migrate away from Identica at any point. And since Identica runs on the Free Software Laconica platform, you can even install Laconica on your own web host and import all of your data there, where you have complete control over it.
  • Identica has a powerful groups feature that lets people collectively subscribe to a group and see every tweet sent to it (this is what the exclamation syntax you may have seen in tweets is about: including !linux in a message, for example, sends it to the linux group). Groups are a powerful way to build communities and have multi-party discussions, and Twitter has nothing comparable.
  • You don’t have to quit Twitter. My Identica account is linked to my Twitter account, so every message that I send to Identica automatically appears on Twitter. Posting to Identica+Twitter takes the same amount of effort as posting to Twitter alone, except it is seen by more people.
  • Identica lets you see things from other people's perspective. I'll use myself as an example. You can see my entire tweet stream, which includes messages from all users and groups I'm following. This should give you a great idea of the kinds of things I'm interested in. And you can see all of the replies to me, which makes it a lot easier to track and understand conversations. Note that all of this is public information and is accessible on Twitter through trickier means (in the first case, by looking at the list of people someone follows and combining all their tweets in chronological order; in the second case, by searching for "@username" on the search subdomain), so you aren't giving up any of your privacy. Identica simply makes these features a lot easier to use.
  • Some people you may end up finding and wanting to talk with don't use Twitter at all; they're only on Identica. Get on Identica, link it to Twitter, and you can talk to everyone on both services. Stick with Twitter alone, however, and you're left out in the cold when it comes to anyone who only uses Identica.
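
A quick aside on the compatibility I mentioned above: because Identica speaks a Twitter-style API, scripts and clients written against Twitter can usually be pointed at Identica just by swapping the base URL. Below is a minimal sketch (in the Python 2 of the era) of posting a notice that way. The endpoint path, the example credentials, and the post_notice helper are my own illustration rather than anything official, so check them against the Laconica documentation before relying on them.

    import urllib
    import urllib2

    # Twitter-style endpoint exposed by Laconica; the exact path is my best recollection.
    API_URL = "http://identi.ca/api/statuses/update.json"

    def post_notice(username, password, text):
        """Post a single notice, exactly the way a Twitter client would."""
        password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
        password_mgr.add_password(None, API_URL, username, password)
        opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(password_mgr))
        data = urllib.urlencode({"status": text})  # same parameter name Twitter uses
        return opener.open(API_URL, data).read()   # sending data makes this a POST

    # Hypothetical credentials; substitute your own account.
    print post_notice("myuser", "mypassword", "Posting through the Twitter-compatible API!")

Point the same function at Twitter's own statuses/update URL and it should work there too, which is the whole point of the compatibility.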

And there is one important ethical reason you should use Identica:

  • Identica is Free (as in freedom, not merely cost). Because it follows the Free software ethos, it respects your rights and maximizes your freedom to control your data as you see fit, including the ability to move all of your data elsewhere if necessary. Twitter does not respect these freedoms.

KDE 4.1 is out and good enough for everyday use

Thursday, August 14th, 2008

Version 4.1 of the K Desktop Environment (the first release of the KDE 4.x series suitable for everyday use) for GNU/Linux came out recently, and last week I decided to give it a try. It's pretty good, but it's still a bit unpolished. Installing it in Ubuntu was simple, except for an error with one of the packages that forced me to remove them all and then reinstall them in a specific order using the dist-upgrade command. The process will become smoother when the packages hit the main Ubuntu repositories, but for now, just be forewarned that it still has a bit of a "beta" feel to it.

KDE 4.1 also did something weird with my xorg.conf, so I have to restore it to a working version after each boot or I get dumped back into the basic vesa driver, which can only display a resolution of 640×480. Luckily I hardly ever reboot, or this would be more of an annoyance. Again, I expect this to be fixed by the time the official Ubuntu packages are released. I don't think these problems are even KDE 4.1's fault; more likely it's a problem with the way the Ubuntu packager configured things.

So, after debugging the problems (and you wouldn't be interested in checking out bleeding-edge software releases if you seriously minded doing some of the bleeding yourself), KDE 4.1 is up and running, and it's really nice. Whereas KDE 3.5 seemed to draw inspiration for its appearance from Windows 95, KDE 4.1 draws its inspiration from Windows Vista, and even gives it a run for its money. KDE 4.1 is a pleasure to look at, right from the boot-up screen all the way through the everyday tasks performed in the desktop environment. Even the applications menu, with its smoothly animated sliding pages, is well done. Appearance matters a lot more than most free software folks will admit, so it's good to see a free software project that really gets it.

KDE 4's best new feature has to be the desktop plasmoids. A plasmoid is a view of a folder that goes on the desktop, so it is always beneath all other opened applications and does not show up on the taskbar. The simplest use of a plasmoid is to show the contents of the desktop folder (creating a desktop folder plasmoid the size of the entire screen emulates the behavior of other desktop environments). Plasmoids are nice because they corral the icons that normally overwhelm a desktop into a nice sortable box. The real power of plasmoids is revealed when you create more of them: one plasmoid for your Documents folder, another for the download directory for Mozilla Firefox, another for the download directory for BitTorrent, and so on. All of the files you need are always at your fingertips, in a neat, orderly arrangement that doesn't overwhelm your taskbar. Organizing your files is as easy as dragging icons from one plasmoid to another. It's such an incredible user interface improvement that it makes you wonder why no one else has thought of it before. Oh wait, they sort of have: anyone remember Windows 3.1 and its persistent folder windows?

KDE 4.1 is also lacking some configuration options in comparison to KDE 3.5, but it's apparently already a lot better than KDE 4.0 was in that regard, and most of the missing options should be restored in future releases. All of the basic options are there; you just don't have the kind of intricate configurability that long-time KDE users might expect.

I would love to write about all of the new desktop widgets, but alas, I couldn't get any of them working, and this error is echoed by others out there trying the Ubuntu build of KDE 4.1. This looks like another error by the packager. Oh well. KDE 4.1 on Ubuntu is still perfectly usable as is; it just doesn't have all the bells and whistles. If the problems I've listed so far aren't deal-breakers for you, go ahead and download KDE 4.1 and try it out. Otherwise, you might want to wait another few weeks or so until the official mainline release is out. Even if you've never used KDE before (which is quite common for Ubuntu users, seeing as Gnome is the default desktop environment), you ought to give it a serious try. You might really like it.

Firefox gets Ogg

Wednesday, July 30th, 2008

Great news, Free Software fans! As of last night, out-of-the-box support for the Ogg Theora (video) and Ogg Vorbis (audio) open-format codecs was enabled on the mainline Firefox development branch. Here's the exact diff. These two codecs work in conjunction with the new <video> and <audio> tags, which will be supported in the next major release of Firefox, 3.1. If you're feeling impatient, you can download a nightly 3.1 build, which already includes the brand-new Ogg codec support.

You may be wondering: what is the advantage of native browser support for the new tags? The HTML 5 spec has lots of details, but what it boils down to is no longer having to rely on kludgy proprietary plugins like Flash or QuickTime (which often don't work well cross-platform, I might add) to display multimedia content. The new tags work just like the current <img> tag does: feed them the URL to the appropriate media resource and they display it (e.g., <video src="interview.ogv" controls></video>), just as simply as one might include a JPG image in a webpage. It's such an obvious improvement over the previous state of affairs for online video that it really makes you wonder why it took so long. We're several years into the online video revolution now (led by such giants as YouTube), so it's about time we finally got native browser support for videos.

It's important to point out not only that the Ogg codecs are free (as in both speech and beer) and unencumbered by patents, but also that Ogg Theora's performance has recently been significantly improved. It's not quite as good as H.264, but it is better than many of the previous generation's proprietary codecs, and it's currently the best video codec around that is compatible with the Free Software philosophy. That's why the Mozilla Foundation chose it to provide out-of-the-box video support in Firefox: all of the alternatives currently widely used for web video, such as FLV, H.264, or DivX, are patent- and license-encumbered, and thus could not be included in Firefox. It's worth pointing out that Ogg Theora is also the only video codec allowed on all Wikimedia Foundation projects, including Wikipedia.

Not too long from now, after Firefox 3.1 is released, a significant double-digit percentage of web users will have Ogg-enabled browsers. That will be a huge achievement for the Xiph.Org Foundation. Expect to see a lot more online video in the Free Software world, and hopefully a migration away from Flash video players, which I still can't for the life of me get to work reliably in GNU/Linux. Once the <video> tag starts cropping up in a large number of places, will competing browsers like Internet Explorer and Safari have any choice but to support it as well? Since all of the Ogg codecs are released under BSD-style (not GPL-style) licenses, there's nothing stopping them!

Meet Vertumnus, my new GNU/Linux desktop (running on a Dell Inspiron 530)

Wednesday, June 11th, 2008

If this post seems a little glowing, don't be alarmed; it's because I'm still basking in the brilliant sheen of my new GNU/Linux desktop (which I am composing this blog post on as I type these very words, and these words, too). That's right, I went through with my plans for setting up a GNU/Linux desktop, though I didn't actually use the parts list I threw together two weeks ago. I ran across an amazing deal through Dell's small business site (instant savings of nearly half off!) on an Inspiron 530 and I jumped on it. For $360 ($407 after shipping and state taxes), I got a nice little Dell mini-tower with an Intel Core 2 Duo E8200 processor, 2 GB of DDR2 PC2-6400 RAM, a 500 GB SATA hard drive with 16 MB cache, a SATA DVD burner, a keyboard, and an optical scroll mouse. It ended up being about the same price as the parts list I put together, but the performance is marginally better, with the added possibility of upgrading to 4 GB of RAM. It also came with Windows Vista Home Premium, which I suppose would be a value-add for some, but which just made me wince at how much cheaper I could have gotten this system without paying the Microsoft tax. Anyway, Vista's in the trash now, where it belongs, and the price was good enough that I'm not worrying about it.

Installing the OS

I was going to install Kubuntu on my new system, but I opted for Ubuntu instead on a recommendation from Drinian, who says that Kubuntu isn't quite as well put together. The only reason I wanted Kubuntu was that I wanted to run KDE instead of Gnome, but it turns out that's incredibly easy to accomplish in Ubuntu (just install the kubuntu-desktop meta-package in aptitude, then set your login session to KDE). So choosing Ubuntu over Kubuntu hasn't left me disappointed in any way.

Unfortunately, installing Ubuntu GNU/Linux still wasn't as easy as it should have been. I blame the problem on hardware incompatibilities, most likely with the SATA controller on the motherboard. The installation CD wouldn't boot without passing the kernel parameter "all_generic_ide", which is something I can handle but which is likely to turn off the average computer user. Then, after the installation completed, my system wouldn't boot from the hard drive for the same reason, so I had to boot back into the LiveCD environment, mount my boot partition, and edit the menu.lst of grub (the bootloader) to pass that same kernel parameter. So, yeah, GNU/Linux isn't exactly friendly for the masses, at least not on this hardware. Curiously enough, I had this exact same problem when dual-booting Fedora Core (another distribution of GNU/Linux) on my previous desktop. There's definitely some room for improvement in this area by either the Linux kernel developers or the Ubuntu packagers. There's no real reason this can't be one of those things that "Just Works".

Naming the system

But after the minor hitch with "all_generic_ide", everything else worked just fine. It was the smoothest GNU/Linux installation I believe I've ever done. The GNU/Linux graphical installers have become quite advanced, putting anything Microsoft offers to shame. Actually, the part of the installation process that took the longest was picking a name for my new computer. I have a long history of naming computers after various mythologies, deities, or nerdy things (Ixion, Dark Anima, Fyre, Quezacoatl, Geminoid, Phoenix, etc.), so I wanted to continue the theme. I figured that since this is the first time I've ever used a dedicated GNU/Linux system as my primary desktop (as opposed to Microsoft Windows), I wanted to emphasize the change this brings to my computing life. So I got into a lively discussion on IRC with someone who apparently knows a good deal about ancient Greek and Roman mythology, and his best suggestion was the Roman god Vertumnus, who is "the god of seasons, change and plant growth, as well as gardens and fruit trees". I liked both the change aspect and the environmental aspect, so Vertumnus it was.


DRM: how things you’ve bought aren’t actually yours

Friday, May 30th, 2008

We free software folk have been trying to warn people about the dangers of Digital Restrictions Management for a while, we really have. Yet you just aren’t listening to us! Well, here are two recent all-too-obvious-in-hindsight DRM travesties by Microsoft that might have you reconsidering. If Microsoft can’t even be trusted to do DRM correctly, then who can?

First, Microsoft decided to close down their MSN Music service, presumably because it was unprofitable. Unfortunately for any customer who ever bought anything from the store, they won't be able to play their purchased music files on any additional devices come June, because Microsoft is shutting down the authorization servers. Each audio file is encrypted with DRM, and once the servers go away, so too goes any means of authorizing the files for playback on new devices. Ain't it great that "pirates" will be able to play their downloaded mp3s indefinitely, while people who legitimately purchased the music will be stuck with worthless files and no refund? But that's what you get when you willingly buy something infected with DRM.

Microsoft also uses Digital Restrictions Management on all of its Downloadable Content for the XBOX 360. All downloaded files are linked both to the user account and to the hardware. Want to change accounts? You can’t take your downloads with you. Buying another XBOX 360? Can’t take ’em with you. Buying another XBOX 360 because your old one broke? You’re still screwed! That’s right, this poor sap’s XBOX 360 broke, taking all of the downloaded content that he bought along with it, and Microsoft’s only response was “buy all your content a second time.” It makes you wonder why they even use the word “buy”, because when you actually buy something it implies that you actually own it. If this is really the future of gaming consoles, we gamers are in big trouble. Microsoft is trying to supplant a decent product (games on DVD that can be played in any console) with an inferior one, simply because they can make a lot more money with it, what with the duplicate downloads, lower distribution costs, no need to print manuals, etc.

And why shouldn’t they? By buying all of this content that’s infected with DRM, we customers are bringing it all down upon ourselves. Unfortunately, many people will only realize too late how evil DRM is — after they’ve spent thousands of dollars on music only to have the authorization servers shut down, or after they’ve spent hundreds of dollars on downloadable content only to have their XBOX 360 crap out on them. And Microsoft doesn’t care about fixing any of this. They already have your money, and they’re big enough they can just tell you to go screw yourself. Actually, I wish they were that kind, because tauntingly suggesting you pay again for everything you’ve already purchased once is worse.

So join with me and refuse to buy anything that’s infected with DRM. Support the EFF’s anti-DRM campaign. Support the Defective by Design campaign. Spread the word. Don’t be the poor sod who abruptly finds himself “owning” hundreds of dollars of worthless DRM-infected files that cannot ever be used again.

“You have no girlfriend to make out with”

Sunday, October 28th, 2007

So there you have it, a mouthful of personal opinions. I bet you wanted to spend your time doing something else, like making out with your girlfriend (haha, just kidding, if you actually reading my opinion on OOXML you have no girlfriend to make out with).

That quote is from a comment on Slashdot by Miguel de Icaza. Miguel de Icaza founded Gnome, one of the two main desktop environments available for GNU/Linux systems (the other is KDE). He also founded Mono, a "free software" project that is a rehashing of the patent-encumbered Microsoft .NET framework. Basically, using Mono sacrifices the free software ideology and makes one more vulnerable to legal attacks by Microsoft in the future. Miguel de Icaza is also known as something of a Microsoft shill.

In case you haven't followed the news, there is a war brewing between two competing next-generation document formats: OpenDocument Format (ODF) and Microsoft's Office Open XML (OOXML). For context, the current document format you are most likely familiar with is Microsoft Word's proprietary .doc format. OOXML, proposed by Microsoft, is touted as an open standard, but all it really does is wrap a layer of XML around the old proprietary formats. The spec doesn't go into detail on how a lot of things are supposed to be implemented, so the only ones who'd actually be able to implement a proper OOXML reader/writer would be Microsoft themselves. Obviously, that's not a real open standard. ODF was proposed by a large consortium of people and companies, is a true open standard, is already implemented in the big four office suites, and was accepted by the International Organization for Standardization (ISO) in 2006. This should be a no-brainer.

But the main thrust of this post is Miguel de Icaza's sheer ineptitude as a debater and proponent of OOXML. He's absolutely terrible at it, and if Microsoft knows what's good for them, they'll rein him in. I've read many back-and-forth arguments with Miguel de Icaza on one side and ODF proponents and OOXML detractors on the other, and Miguel comes off as sorely lacking in debating skill. He intersperses hand-waving technical discussion with the kind of crude insults one would find more at home in middle school, yet Miguel is 35 years old! How the hell does he expect to be taken seriously when he makes crude ad hominem attacks on his opponents' supposed lack of girlfriends?

He's an embarrassment, especially to Novell, where he is a Vice President. Nobody's going to knock his programming ability, but he clearly can't hold his own in debates without resorting to schoolyard tactics that make him a laughingstock. Whoever is in charge of him should keep him indoors in front of a computer, coding. He's not cut out for the job of opening his mouth and trying to convince anyone of anything. And don't think I'm taking this one comment out of context. He makes these kinds of childish insults repeatedly, both in his Slashdot postings and on his personal blog. I'm not going to go any further and speculate about why he argues like this, as that would be insulting. It's enough merely to point it out.

AMD announces open source GNU/Linux drivers for its video cards

Sunday, May 13th, 2007

Color me excited. AMD, the microprocessor company that is Intel's chief competition and that recently bought ATI, one of the two major players in the graphics card market, has announced that it will release open source drivers for its line of video cards. This is excellent, excellent news. Let me try to explain what this means to the non-techie audience.

The main thrust behind the GNU/Linux movement is free, open source, libre software. This means you can see the source code, you can redistribute the source code, you can modify the source code, and you can redistribute those modifications. Needless to say, the ramifications of these freedoms are extensive, and they are the major cause of GNU/Linux's current success. By 1992, Richard Stallman and the GNU project had put together all of the major components of a totally free operating system except for the kernel. The addition of the Linux kernel to GNU that year, forming GNU/Linux, gave the world its first completely free modern operating system.

Unfortunately, there's been a bit of backsliding as of late. You can run your completely free operating system, but you won't get very good performance out of your video card. This is because, up until now, ATI and nVidia, the only real players in the high-performance graphics card market, have not released free versions of their graphics card drivers, nor have they released the specifications needed to create our own. Reverse-engineered free drivers are out there, but they are poor, and they don't take advantage of the added power of the last few generations of graphics cards. So if you want to play a recent commercial 3D videogame under GNU/Linux, you really do need to use the proprietary drivers.

But the proprietary drivers have their own disadvantages. They aren't as high quality as the Windows or Mac OS X drivers, and without the source code we cannot fix their flaws. They also force us to do certain things we do not wish done: for instance, the nVidia proprietary driver forces the video-out to enable Macrovision DRM, which degrades video quality. Those of us accustomed to free software are driven crazy by this kind of nonsense, because with free software you have the freedom and the ability to modify the source code exactly as you see fit, so the software does only what you want it to do, and certainly not what a corporation is trying to force on you.

Thus, I am overjoyed by AMD's announcement of upcoming open source drivers for its graphics cards. This will be a huge boon to free software everywhere. 3D applications (especially games) will run with much better performance. The only thing we need to watch out for is AMD's clever use of the phrase "open source" rather than "free". Open source does not always mean free, as Richard Stallman has pointed out. Microsoft has released some of its code under its own "open source" licenses, which don't actually grant the essential free software freedoms, like being able to redistribute one's modifications. If AMD releases its drivers in a truly free way, that will be excellent. If it releases them as "open source" but with non-free restrictions, it will be rubbish. I'm hoping they go the free route, and once they do, nVidia will really have no choice but to follow suit.

My experiences as an open source developer

Monday, January 8th, 2007

In the second part of a series on experiences of mine that are not widely shared and are hopefully interesting, I will talk about my time as an open source software developer (the first part of this series was about being a newspaper columnist).

I am a developer on the Python Wikipedia Bot Framework, which is a collection of programs that perform automated tasks on wikis based on the MediaWiki software. Wikipedia and other Wikimedia projects are by far the largest users of MediaWiki, but there are lots of other MediaWiki wikis out there too, and pyWiki is used by lots of people for all sorts of tasks.

Before I go over my experiences in-depth, I’ll start with an overview of everything I’ve done on pyWiki. Skip ahead to after the list if these details are too technical.

  • delete.py – A bot that deletes a list of pages. I wrote it from scratch. (A condensed sketch of roughly what a bot like this boils down to appears just after this list.)
  • templatecount.py – A bot that counts how many times any given number of templates are used. I wrote it from scratch.
  • category.py – A bot that handles category renaming and deletion. I made some changes to it and some libraries, catlib.py and wikipedia.py, to make it more flexible, more automated, and to handle English Wikipedia-specific “Categories for discussion” (CFD) tagging.
  • template.py – A bot that handles template substitution, renaming, and deletion. I made some changes to it (and the library pagegenerators.py) to handle operations on multiple templates simultaneously, as well as increasing flexibility. I will admit, I added the capability to delete any number of templates in one run with the hope that I would some day be able to use it on userboxes.
  • replace.py – A bot that uses regular expressions to modify text on pages. I modified it to handle case insensitive matching, amongst other things.
  • wow.py – An unreleased bot that I used to anonymize thousands of vandal userpages to prevent glorification of vandalism. I wrote it from scratch.
  • catmove.pl – A metabot* written in Perl that parses a list of category changes and does them all in one run. I wrote it from scratch.
  • cfd.pl – An unreleased automatic version of catmove.pl that pulls down the list of category changes directly from the wiki, parses them, and executes them, in one single command. I wrote it from scratch. Hopefully I will be able to release it soon (it may have some security issues that I want to make sure are entirely resolved first).
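
To give a sense of how little code the framework demands, here is a condensed sketch of roughly what a bot like delete.py boils down to. I'm reconstructing the calls from memory, so the exact names and signatures (getSite, Page, delete, output, stopme) may not match the framework's current wikipedia.py; treat it as an illustration of the structure rather than a drop-in script.

    import wikipedia  # the framework's core module, not the encyclopedia itself

    def delete_pages(titles, reason):
        """Delete every title in the list, using one reason for the deletion logs."""
        site = wikipedia.getSite()             # the wiki configured in user-config.py
        for title in titles:
            page = wikipedia.Page(site, title)
            if page.exists():
                page.delete(reason, prompt=False)  # requires admin rights on the wiki
            else:
                wikipedia.output(u"Skipping %s: page does not exist" % title)

    if __name__ == "__main__":
        try:
            delete_pages([u"User:Example/Sandbox"], u"Cleaning up test pages")
        finally:
            wikipedia.stopme()                 # release the framework's edit throttle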

Cfd.pl is the "secret sauce" that lets Cydebot, my bot account, do its magic. To date, Cydebot has over 160,000 edits, most of them category-related. I attribute this to cfd.pl, which allows me, with a single command, to handle dozens of CFDs simultaneously, whereas people using other bots have to input each one manually. It's no surprise that everyone else pretty much gave up on CFD, leaving my highly efficient bot to handle it all on its own.
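
To make that a little more concrete, here is a small Python sketch of the parsing half of the idea (the real cfd.pl is written in Perl, and I'm guessing at the exact wikitext layout of the CFD working page, so take the line format assumed by the regular expression as illustrative only):

    import re

    # Assumed line format: "* [[:Category:Old name]] to [[:Category:New name]]"
    MOVE_LINE = re.compile(
        r"\[\[:Category:(?P<old>[^\]]+)\]\]\s+to\s+\[\[:Category:(?P<new>[^\]]+)\]\]")

    def parse_moves(wikitext):
        """Extract (old, new) category renames from a CFD working page."""
        moves = []
        for line in wikitext.splitlines():
            match = MOVE_LINE.search(line)
            if match:
                moves.append((match.group("old").strip(), match.group("new").strip()))
        return moves

    sample = """
    * [[:Category:Actors from Ohio]] to [[:Category:Actors from Ohio, United States]]
    * [[:Category:Rivers in France]] to [[:Category:Rivers of France]]
    """
    for old, new in parse_moves(sample):
        print "Rename [[Category:%s]] -> [[Category:%s]]" % (old, new)

Each (old, new) pair would then be handed off to the category-moving machinery that category.py and catlib.py already provide.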

I also had some involvement with Vandalbot, which is a Python anti-vandal bot that uses pyWiki. I ran a Vandalbot clone called AntiVandalBot off of my own server for many months, until somewhat recently, when AntiVandalBot was switched over to being hosted on a Wikimedia Foundation server. If you add up all of the edits committed by both Cydebot and AntiVandalBot, then I have the highest number of bot edits on the English Wikipedia; of course, it's not just my work. I merely came up with the account name and hosted it for a while; Joshbuddy is the one who actually wrote the vast majority of Vandalbot, and Tawker is the one who hosted the original Tawkerbot2 for a while (and who now hosts it on the Wikimedia Foundation server).

Working on an open source project is very fun, and rather unlike a programming job for pay in the "real world". For one, it's entirely volunteer work. I work at my leisure, when I feel like it, or when I have some functionality that I or someone else needs. Programming can actually be relaxing and cathartic when there are no deadlines and I am undertaking a coding project simply for the sake of writing something.

All of the developers on pyWiki are very relaxed and they definitely "get" the open source movement. There's no expectation of anyone having to get anything done. This can have its downsides, in that it might take a while for something to be taken care of, but it also doesn't scare off anyone who is worried about a large time commitment. To become a developer on pyWiki, all I had to do was ask the project head, and I was given write access to the CVS repository within a few days, even though I had never used Python before. The amount of trust is very refreshing, and I definitely feel an impetus not to let the other guys down by uploading code with bugs in it (so my testing is always rigorous).

There gets to be a point with computer languages where learning another one is simply no big deal. I wouldn't want to estimate how many languages I've used by now, but it's probably somewhere around a dozen. After the first few, though, one simply knows how to program, and learning another language is a simple matter of looking through the documentation to find the specific code to do exactly what one has in mind. That was the situation I was in with pyWiki: although I had never used Python before, I knew exactly what I wanted to accomplish and how to accomplish it; I merely needed to know how to do it in Python. Within a week I was hacking away at the code, adding significant new functionality. It should be noted that working on an existing project in a new language is much, much easier than trying to build something from scratch in a new language.

I would say that pyWiki is a medium-size open source project, which is probably exactly the right size for a first-time developer. It's not so small that it ever goes stagnant; there are code changes submitted every day, and the mailing list is very active. Any reasonable message posted to it will get a response within a day, if not hours. On the other hand, pyWiki is not too large. It has no barriers to entry; anyone can get started hacking on it right away and submitting code changes. Larger projects necessarily have large bureaucracies (large projects need to be managed, there's no way around it), which means there's an approval process for code changes, and it's unlikely that anything written by a novice will actually end up making it into a release. Trying to work on a large project right off the bat can be disheartening because there's very little one can actually do that doesn't require an expert level of knowledge. Compare this to pyWiki, which lacks a lot of functionality that even a novice would be able to code up (delete.py wasn't hard at all; it's simply that no one had done it yet).

I would encourage anyone who is interested in programming to find an open source project they enjoy that they can contribute to. It’s great experience, and it much more closely resembles what actually happens in industry than hacking away on a solo project. I’m sure it’s a great resume line item. The key is to find a project you want to work on because it benefits you. In my case, I was writing new functionality that I needed to get stuff done on Wikipedia.

And there's just something very compelling about contributing to the worldwide corpus of free software. It's a way to leave your mark on the world, a way to say, "I was here."
