January 8, 2009

Another brief roundup: cheap ebooks, cool indie games, and a neat graphics library

This is another post mostly for myself so I don't lose track of some interesting looking things.

First up, cheap $1 ebooks from Orbit. It looks like this publisher is selling one ebook per month at $1, which is a deal you cannot pass up, even if the books are DRM-encumbered. I'm seriously considering buying at least the first two books, The Way of Shadows by Brent Weeks and Iain M. Banks' Use of Weapons, even though I can't read DRMd books on my OLPC with FBReader.

I highly recommend Use of Weapons by Iain M. Banks, but you should probably wait until next month to pick it up for $1. I have the paperback sitting right in front of me and I'm still going to buy the ebook.

Next, an interesting looking programming language for visualization and graphics. I wish I had more time to look into stuff like that.

Finally, Game Tunnel's list of 2008 best indie games - I want to check these out when I have more time.

January 3, 2009

Getting Tomoe to recognize Japanese characters in English on Fedora 10

I recently set up Fedora 10 on a ThinkPad X60 laptop, which has worked very well. I'll write about that a bit later, I think. There are still some issues with the wireless connecting to a WPA2 network, and the Intel 945GM video drivers are apparently pretty crappy right now due to changes in the underlying architecture, but overall things are running nicely on this small laptop.

One of the things I am interested in using this laptop for is as a Japanese-English dictionary. To that end I installed GWaEi, a Japanese-English dictionary using the Edict files. I had been using GJiten for a long time, but that project hasn't been updated in a while, so I thought I would try something new. (Warning: I had to compile from source and make a minor change in main.c to have it default to calling "/usr/local/bin/gwaei" when re-setting the language variables. It's a simple change and shouldn't be tough to figure out, but feel free to drop a comment if you want more info.)

GWaEi seems to work well.

The other thing I like to do is use Tomoe, a Linux-based handwriting recognition engine for Japanese and Chinese characters. It is conveniently available for install via yum. The problem is that after installation, no matter how many strokes I entered, no candidates would show up. That is odd. I vaguely remembered that when I installed Tomoe on Ubuntu recently I had to copy some file.

So, for those facing a similar problem at home: if you want Tomoe to make suggestions when you are running in an English environment, you will have to do something like this:

$ sudo cp /usr/share/tomoe/recognizer/handwriting-ja.xml /usr/share/tomoe/recognizer/handwriting-en.xml

That took care of the problem for me. Yay! Now I can fingerpaint my way to successful Japanese reading.

My next project: see if I can upgrade to the newest version of Ubuntu on the OLPC that I have, and get Tomoe working on that. It is a bit smaller than the X60 and might make a good machine to take to coffee shops. (That isn't really true though: the keyboard on the X60 is vastly superior to the one on the OLPC, but the OLPC has a much better screen for doing lots of ebook reading.)

Anyway, hope that helps someone out there. Jeez, it seems like my entire vacation has been spent on sundry computer things at home.

January 2, 2009

MAME Frontends in Ubuntu

I have been interested in getting MAME running on my desktop again. I never got SDLMAME working well on Fedora 8; performance was terrible. For some reason, under Ubuntu with my Intel GMA3100 onboard video, things work well enough to play pacman at least (and probably others; SFA2 seemed to work well, but as always SSF2T was too fast), so I wanted to see what the state of MAME frontends for Linux was. (Oh, also I have to run in software render mode to get the tab menus to show up, otherwise they are garbage. Once things are set up though I don't need the menus, so it's back to hardware mode, which is fast.)

kxmame doesn't believe that my sdlmame executable is a MAME executable: it complains that it can't find any MAME instance and errors out. So that is enough time spent on that one.

kamefu refuses to find any of my roms. So that one is out.

wah!cade seems to work, but it uses bitmaps for the default UI and on my 24" monitor I can't read anything. So I stopped playing with that for the time being.

Two more I am looking at are Lemon Launcher (which I probably won't install unless I can't get AdvanceMENU to work) and AdvanceMENU, which is my current best hope. There is a description of the install process here. So far installing AdvanceMENU has been a real pain. This thread helped me fix a few SDL errors, and of course I had to install a lot of stuff to get it to compile.

Once compiled, I couldn't get it to run well: I added sdlmame as an xmame emulator and had to make some init files for that, but it would freeze when trying to launch a program. I am now checking on the configuration used by Piapara, a bootable ISO that runs advancemenu and sdlmame, to see what they use. To do that I had to create a boot CD, mount the ISO in a loopback filesystem, then mount the app.img in a loopback filesystem (the mount commands are sketched after the config below). Finally, the relevant setup info they use is:

emulator "MAME" mame "/usr/bin/mame" "-inipath /mnt/pendrive/sdlmame/ini"
emulator_roms "MAME" "/mnt/cdrom/sdlmame/roms"
emulator_flyers "MAME" "/mnt/cdrom/sdlmame/flyers"
emulator_altss "MAME" "/mnt/cdrom/sdlmame/snap"
So they set up the emulator as a mame emulator. Ok. I'll try that.
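
For reference, digging that config out of the ISO was just a couple of loopback mounts, something like this (the image name and mount points here are examples, not necessarily the exact paths I used):
$ sudo mount -o loop piapara.iso /mnt/cdrom
$ sudo mount -o loop /mnt/cdrom/app.img /mnt/app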

Also, a new discovery: using the sdlmame option -gl_forcepow2texture fixes the menu corruption bug that I was seeing. So yay for that! Actually, on further investigation I also needed to set the filter to 0 (which removes bilinear filtering from the output and makes things look more pixelated, and better anyway) and enable gl_glsl in the sdlmame config. It seems to be working well now.
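
For the record, those three settings can also live together in the ini file (the exact ini file name and location depend on how your sdlmame was built; this is just the idea):
gl_forcepow2texture  1
filter               0
gl_glsl              1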

I haven't got advancemenu to work though. I have all these great artwork cabinet and screenshot files, but I can't get advancemenu to launch sdlmame. Ah well. That is enough time spent on that today.

Annoyingly Super Street Fighter II Turbo still runs too fast. Tapper runs well though, so that is good enough.

January 1, 2009

Spinning down external USB hard drives on Ubuntu 8.10 Intrepid Ibex

So I recently switched to Ubuntu Intrepid Ibex from Fedora 8 for no real good reason. Anyway, I have two external 500GB drives that I use for backup, accessing them about once a day or so. I would like those external USB drives to spin down when I am not using them. How can I get that to happen?

Generally, it looks like you can use the sg3-utils package to do that. First, install the package:

$ sudo apt-get install sg3-utils
To do this right you should probably make sure that your hard drives show up in predictable places. The best way I know to do that is to set a label on the partition, and then it should mount in /media/LABEL. Here is a good article on how to rename external USB hard drives. I saved the script from the URL above as /usr/local/bin/scsi-idle and, following along:
$ mount
(note that sdc1 has mostly TV shows, sdd1 has my other data)
$ sudo umount /dev/sdc1
$ sudo umount /dev/sdd1
(name both partitions appropriately - one partition per drive)
$ sudo e2label /dev/sdc1 BackupTV
$ sudo e2label /dev/sdd1 BackupData
Then power cycle the drives. Check that they show up as expected:
$ mount
...
/dev/sdc1 on /media/BackupTV type ext3 (rw,nosuid,nodev,uhelper=hal)
/dev/sdd1 on /media/BackupData type ext3 (rw,nosuid,nodev,uhelper=hal)
That looks good to me. I also added the following to /etc/rc.local:
# added by devans to spin down external disks.                                                            
# See http://ubuntuforums.org/showthread.php?t=560958&page=3                                              
# and the related wiki entry https://help.ubuntu.com/community/ExternalDriveStandby                       
# Spin down any external SCSI drives after "X" seconds:                                                   
/usr/local/bin/scsi-idle 900 &
which should take care of that. Just for this first time, I ran it myself:
$ sudo /usr/local/bin/scsi-idle 900 &
While that did work OK, it only spun down the hard disks. It did not go the extra step of shutting down the fans on the hard drives, so they are about as noisy as they were before. Still, at least they are spun down now. I set the drives to their "auto" setting, but it looks like that will only kick in and shut them down when they are unmounted, which I do not want to do.
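
I won't reproduce the script here, but the basic idea is simple enough that a rough sketch (this is not the actual script from the article; the device matching and the sg_start call are my guesses at how it works) looks like this:
#!/bin/bash
# Rough sketch of an scsi-idle-style loop: every IDLE seconds, spin down any
# /dev/sd? disk whose I/O counters in /proc/diskstats have not changed since
# the previous check. Uses sg_start from the sg3-utils package.
IDLE=${1:-900}
PREV=/tmp/scsi-idle.prev
touch "$PREV"
while true; do
    CURR=$(mktemp)
    grep ' sd[a-z] ' /proc/diskstats > "$CURR"
    for dev in /dev/sd[a-z]; do
        name=$(basename "$dev")
        old=$(grep " $name " "$PREV")
        new=$(grep " $name " "$CURR")
        if [ -n "$old" ] && [ "$old" = "$new" ]; then
            sg_start --stop "$dev"    # no I/O since last check: stop the spindle
        fi
    done
    mv "$CURR" "$PREV"
    sleep "$IDLE"
done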

July 19, 2008

Electric Sheep Beta playing nice with mplayer and xscreensaver on Fedora 8 Linux

I've been a fan of the Electric Sheep screensaver for a long time, but I haven't been running it lately.

It turns out that there is now a new Linux version, so I thought I would try to install it on my home machine running Fedora 8. The source install went great: I already had all the prerequisites installed, so a simple configure; make; make install went fine.
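
In other words, the usual dance (the sudo on the install step assumes a system-wide prefix):
$ ./configure
$ make
$ sudo make install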

The problem was a strange interaction between xscreensaver and mplayer. I did a system update recently, and the "stop-xscreensaver=1" setting in my ~/.mplayer/config stopped working. That means every ten minutes while I'm watching videos, the screensaver kicks in. So I switched to the alternative method of preventing the screensaver from starting: using mplayer's heartbeat command, which pings the screensaver every 30 seconds to tell it not to start.

That worked great. When I got around to installing Electric Sheep, though, I found a problem: Electric Sheep uses mplayer to play the videos it creates, and that mplayer instance also tells the screensaver not to start. So while Electric Sheep worked fine from the command line, when run as a screensaver it would just quit immediately.

The solution: a simple bash script that checks whether electricsheep is running. If it is, it does nothing; otherwise it calls the "do not invoke the screensaver" command. If anyone else is interested, here is the script:

#!/bin/bash
#
# devans 2008-07-19
# This script checks to see if electricsheep is running, and if so, it does nothing.
# If it is not running, it invokes xscreensaver-command -deactivate to prevent xscreensaver
# from running.  Mostly this is useful if set as the command to call via heartbeat-cmd in
# ~/.mplayer/config

if [[ -z $(ps -ef | grep electricsheep | grep -v grep) ]] ; then
#   echo "suppressing";
    xscreensaver-command -deactivate
fi

The whole thing works well if you put heartbeat-cmd="~devans/suppressXscreensaver.sh &" in your ~/.mplayer/config

So far the screensaver seems a bit unreliable. It has trouble starting up sometimes and the screen just blanks. I haven't tracked down what the problem is, but when it works it is really beautiful.

July 12, 2008

Playing around with the OLPC / XO

My family recently came to Japan to meet my wife and her family, and my dad gave me a great toy: an OLPC / XO laptop that he got through the Give One Get One program.

I'm excited about the XO project because I think it is a good project: essentially bringing laptops (or ebooks, primarily) to children around the world to help improve education. I think that is a nice goal to concentrate on. There are lots of interesting very small laptops available now, most notably the Asus Eee PC, but I really like the OLPC because it has some very interesting hardware. I actually think the Eee PC has better hardware for more traditional laptop use - particularly the keyboard, which is easier for me to type on, and the machines have better specs - but the OLPC has an amazingly interesting display, and I really like the sturdy build of the hardware.

The most interesting thing about the display is that if you turn the backlight down, you get a really great 200dpi black and white reflective display that is readable in sunlight. You just can't get that on a normal laptop. I also like how the color works: just turn on the backlight and you get color, albeit at a lower apparent resolution. This reminds me of the Apple //e that I grew up using. I want to play with some of the drawing programs to see if, by placing individual dots, you can change the color like in the old double hi-res Apple //e display.

The next two sections talk about getting Japanese support working under the Sugar interface that ships on the OLPC by default, and installing XFCE as an alternative to Sugar that makes the laptop seem like a more traditional linux desktop.

The way I have mostly been using the OLPC, though, is with an install of Xubuntu on an SD card, which is the second main section of this post. The remainder of this post is mostly raw notes about the install process, and probably very boring unless you are into this kind of stuff.

You might be more interested in reading about using FBReader to read ebooks on the OLPC, or using Anki to study Japanese. If this looks interesting, click the "read more" link.

read more (2790 words)

April 16, 2008

First impressions of OSX 10.5 Leopard and Time Capsule

A while ago I bought a 500GB Time Capsule and Leopard at work to use as a backup solution. It took me a while to find the time, but I had a lot of papers to read recently so I installed Leopard and set up the Time Capsule while reading the papers.

First off, I'm really impressed with the Time Capsule. It costs only a bit more than an external hard drive, but has a Gigabit Ethernet 3-port switch and 802.11n wireless. It feels very solid, is small, and is very, very quiet. I have an external IO Data 500GB hard drive right next to the Time Capsule, and it completely drowns out the Time Capsule. Even after turning off that drive, I had a hard time hearing the Time Capsule. I'm really shocked at how quiet it is.

Setting the Time Capsule up was really simple; Zeroconf is just great for getting things on a network and making them easy to find. Since we've got wireless at work I turned off the wireless interface, and just used it to extend the wired connection I already had. Once I set up Leopard on my machine, I started the Time Machine backup, and I have to say again that I am really impressed with how quiet the drive is: I had to listen pretty hard to hear the write noise. I was using ethernet plugged into the Time Capsule for the backup, so I was surprised that I was only getting about 10 MB/sec (sometimes up to 12) to the drive. The Gig-E connection should be able to support 125 MB/sec. Well, not really of course, but 10 MB/sec is an order of magnitude less than I expected!

Interestingly, the 802.11n interface should typically get (according to Wikipedia, so who knows if this is true) about 9.25 MB/sec, or about what I was seeing with the ethernet connection! Wow. Now I want to get one of those Time Capsules at home...

I'm very, very impressed with Time Machine. I haven't played with going back in time for the recovery stuff yet, but it looks like it will be great. I had been using an rsync-based backup solution that uses hard links to avoid duplicating files that haven't changed, but there are some problems with that solution. It works very nicely, but every four hours I run the backup script, and for about ten minutes the hard disk thrashes madly as rsync runs down the file tree looking for new files. I had it in a cron job and nice'd the process, so the machine is still totally usable, but there is a noticeable drop in performance, and sometimes you get the dreaded beach-ball while it thinks about file operations.
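
For the curious, that approach is roughly this (a sketch, not my actual backup script; the paths and snapshot naming are made up):
#!/bin/bash
# Snapshot backup: copy everything, but hard-link files that haven't changed
# against the previous snapshot so they take no extra space.
SNAP=/backup/$(date +%Y%m%d-%H%M)
rsync -a --delete --link-dest=/backup/latest /Users/ "$SNAP"/
ln -sfn "$SNAP" /backup/latest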

Time Machine uses a very similar approach, but Apple does some other magic that lets them link directories (not possible with standard Unix tricks as far as I know) and, more importantly, uses a DBUS-style notification system that tracks file operations and keeps a list of things that have changed since the last backup. It doesn't have to check the whole file system for changes; it just has a list of things that have changed. The backup is impressively fast. I'm really impressed with how easily it all comes together: this is a consumer solution. There are no real drawbacks: the backup is so fast you wouldn't notice it happening (after the initial full backup), and the drive is so quiet that you can just forget about it.

That's all I have to say about that. There are some other things about Leopard that I've noticed that I like:

- The "Spotlight" window is now a real finder window and you can use exposé to find it. In Tiger if you are in finder and do a "show application windows" the Spotlight window does not exist!

- Spotlight is a bit faster now for launching applications, which I use all the time.

- I like the Cover Flow view for windows much more than I expected. Being able to see large views of PDFs makes it really nice to look for a paper. It also helps me recognize Word and Excel documents quickly, much more than I expected it would.

- They fixed the annoying "yellow cursor bug" in the X11 server. Yay!

Here are some annoyances:

- Mail.app again defaults to sending Japanese email in UTF-8. Generally, I think this is a good idea, but for some reason if email isn't in ISO-2022-JP for Japanese, a lot of mail clients turn it into gibberish (mojibake). What really surprised me is that this happened with someone on Windows Vista - I don't know what client he uses, but if you are on Vista shouldn't your email client be able to read the headers and use UTF-8? Most mobile phones can only accept ISO-2022-JP, but I would think big computers could deal with it fine. By the way, to set the default encoding for Mail.app, you can run defaults write com.apple.mail NSPreferredMailCharset "ISO-2022-JP" in a Terminal window. I vaguely remember having to do something like this on Tiger as well.

- I don't know if they fixed this, but at one point after a recent Security update in Tiger, all text pasted into Mail.app lost carriage returns. It was awful. There are ways to work around it (paste into Text Edit first, make rich text, paste into Mail.app, then make it plain text again) and so on, but that is annoying. I'm sure I'll notice this pretty quickly because I'm always pasting text into email. But I haven't noticed it yet. (I hope it is fixed.)

February 18, 2008

Added Gravatar support for comments

This person wrote a Gravatar plugin for bBlog and I set it up here for the comments. I like the idea of Gravatars a lot, so I'll try them out and see how it goes.

If you are interested in the procedure, download the function.gravatar.php file from the website above, throw it in your bBlog_functions directory, go to the admin panel, and rescan to pick up the plugin.

Then you have to edit the bBlog/inc/bBlog.class.php file, search for the format_comment function and insert the line $commentr['posteremail'] = $comment['data']->posteremail; before the return statement.

Then you have to edit your template file to put the gravatar image where you would like. The code should go into the comments section of the post.html file. I added a <img style="margin-right: 10px;" src="{gravatar email=$comment.posteremail}" align="left"/> line to the template and things are working great.

I would actually like to add Wavatar support on top of the Gravatar stuff, but that would take more than five minutes, so it will have to wait until I get more time.

Also, if you want to change any of the default values, you can set them in the call to the gravatar plugin in the template file (e.g., add {gravatar email=$comment.posteremail size=80 default=http://example.com/image.png}).

February 17, 2008

Dave, your forms are EVIL!

A while ago, I got this email from a friend of mine:
i left a long comment on your starbucks entry, but i got a character wrong in your CAPTCHA, and it told me to click the "back" button and try again. however, when i did that, all the entries in the form were BLANK! I LOST MY COMMENT! AIEEEE!!!!!
This is a problem, because I don't like evil in any form. Particularly in my forms. I've been really busy, but I spent about thirty minutes poking around in the bBlog internals (looks like I chose a bad horse: the bBlog project seems to have died!) and made the field values sticky when there is an error with the captcha submission.

If you are interested in the changes, here is a diff file that you can apply via patch: patch -b bBlog.class.php bBlog.captcha.diff against an unmodified version 0.7.6 bBlog.class.php from the bBlog install.

Once that is done, you have to modify your template to add value="{$commentFieldPosterXXX}" where XXX is some value. The only exception is {$commentreplytitle} which remains the same.

Here is the relevant portion from my template:

<div class="formleft">Comment Title</div>
<div class="formright"><input name="title" size="80" type="text" id="title" value="{$commentreplytitle}"/></div>
<div class="clear"> </div>
<div class="formleft">Your Name: </div>
<div class="formright"><input name="name" size="80" type="text" id="author" value="{$commentFieldPosterName}"/></div>
<div class="clear"> </div>
<div class="formleft">Email Address: </div>
<div class="formright"><input name="email" size="80" type="text" id="email" value="{$commentFieldPosterEmail}"/>
          Make Public? <input class="checkbox" name="public_email" type="checkbox" id="public_email" value="1" checked="checked"/></div>
<div class="clear"> </div>
<div class="formleft">Website: </div>
<div class="formright"><input name="website" size="80" type="text" id="url" value="{$commentFieldPosterWebsite}" />
          Make Public? <input class="checkbox" name="public_website" type="checkbox" id="public_website" value="1" checked="checked" /></div>
<div class="clear"> </div>
<div class="formleft"><img src="/randomImage.php" alt="verification image"><br>Image verification:</div>
<div class="formright"><font color="red">{$commentFieldError}</font></div>
<div class="formright"><input name="verification" type="text" id="verification" /></div>
<div class="clear"> </div>
<div class="formleft">Comment:</div>

<div class="formright"><textarea name="comment" cols="80" rows="10" wrap="VIRTUAL" id="text">{$commentFieldPosterComment}</textarea></div>
Of course, you also need to follow the relevant directions in my original post on adding a captcha for comment protection in bblog. But it looks like things are working well for me here.

The diff patch is here.

November 30, 2007

Fedora Core / RHEL don't seem to come with up-to-date versions of Scalar::Util

I ran into this problem once before on a Red Hat Enterprise Linux machine, and now I am having the same problem on my Fedora Core (7 I think) machine. The problem is that CPAN is unable to install various things, in particular Bundle::CPAN, because:
You don't have the XS version of Scalar::Util
I have also run into problems where I get an error message like:
Weak references are not implemented in the version of perl
Both of these can be fixed by using CPAN to install an up-to-date version of Scalar::Util:
perl -MCPAN -e shell
force install Scalar::Util
You might have to force the install if you already have an up-to-date version of Scalar::Util (as I did) that was compiled without the proper options to support XS.
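
A quick way to check whether the XS functions are actually there afterwards is to ask for one of the XS-only functions, for example:
$ perl -MScalar::Util=weaken -e 'print "weaken is available\n"'
If you still have the pure-Perl version, that will die with the "Weak references are not implemented" error instead.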

If you are interested in finding out more, a simple search should turn up lots more information. I find it shocking that modern Perl distributions ship without such a commonly-used feature. Hopefully Red Hat / Fedora will get around to fixing it in their next release (and if not, at least I've left myself a note here).

SELinux Problems, solutions

In general, I really like the idea of SELinux.  It conceptually allows you to specify users, roles, and types for files and then checks against those conditions when something tries to access the files.  It will only allow users that match the user condition, roles that match the role condition, and types that match the type condition to actually proceed and work with the file.

So, for example, if you have a role of "web object" which the webserver account fills, and it tries to write data into some directory that is not fit for that role, say /bin/, the operation will fail and the would-be hacker can't put their trojaned ls program or whatever in there.  That is a good thing.

The problem is that the SELinux system is really kind of complicated.  If you don't know that it is there, you will just have things failing mysteriously, especially if you add directories in places that aren't set up with the system already.  Running a webserver on one of my linux machines, I ran into this problem.  It is particularly an issue when you are trying to use executable scripts on your web server.

Here is an instance of a problem that I had:  I copied some scripts over from a production machine to a dev machine so I could build some more functionality based on the existing scripts.  They went into /var/www/cgi-bin/, a normal place for scripts on my system.  To find out the attributes they should have:
$ ls -ldZ /var/www/cgi-bin
drwxr-xr-x  root root system_u:object_r:httpd_sys_script_exec_t:s0 /var/www/cgi-bin
$ ls -ldZ /var/www/cgi-bin/my.cgi
-rwxr-xr-x  root root                                  /var/www/cgi-bin/my.cgi
So that isn't good.  When I try to access the file, I get a 404, which isn't really true: actually the file is there, but SELinux is preventing it from being used.  So, what should I do?  I need to make the files have the correct SELinux settings.  First, I try setting the type of the file:
sudo chcon -t httpd_sys_script_exec_t /var/www/cgi-bin/*
chcon: can't apply partial context to unlabeled file /var/www/cgi-bin/
Oh, that's no good.  What's a partial context?  Looks like you need to specify all the attributes of the file.  Usually the files already have some default attributes, so it is ok, but for some reason these guys have nothing.  I don't know why.  But if we apply all of the attributes that we need:
sudo chcon system_u:object_r:httpd_sys_script_exec_t /var/www/cgi-bin/*

And that fixed the problem.  Also, if you need to know where SELinux error messages are: they are sometimes in /var/log/messages, sometimes in dmesg output, and sometimes in /var/log/audit/audit.log or possibly /var/log/avc.log, depending on how your system is set up.
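
For example, to pull recent denials out of the audit log on a system that logs there:
$ sudo grep avc /var/log/audit/audit.log | tail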

November 22, 2007

Using the Gimp to automate "cleaning" of scanned B&W images

Sometimes I play around with translating manga for fun. It doesn't really seem to get me much of anything except for lots of requests from punks to translate their favorite naruto doujinshi, but it is fun and relaxing with the bonus that I learn some cool casual Japanese.

I like to translate with a program I wrote for manga translation that puts all the text in the bubbles and whatnot, so I need to have scans of the manga to work from. So sometimes I'll scan manga.

The problem is that the scanner I have now doesn't do a great job of getting nice clean images. I often use (when I'm not feeling too lazy) pnyxtr's scan cropping program and his great scan rotation program, but if you have nasty scans that doesn't help much.

Since I'm both poor and lazy, and have a small amount of moral fiber, I use The Gimp for my image-editing needs. The Gimp has a nice scripting interface using a kind of baby Scheme, so I wrote a script that takes a grayscale image, runs the despeckle filter on it, re-levels the image so that whites are whiter and blacks are blacker, then resizes the image to a reasonable size and converts it to a fixed number of colors. That makes the straight scanner output look just dandy, if you choose proper values for the high and low thresholds, and it also automates a lot of clicking that I would have to do otherwise.

So in the off chance that anyone is interested, here is a Gimp script that will do all of that. Place it in your Gimp script location - on Linux or Mac OSX that would be ~/.gimp-2.2/scripts/ (or possibly ~/.gimp-2.0/scripts/); on Windows, if you use the version of Gimp that I am using, look in its application folder for something like a share/scripts folder and drop it there. On your next start-up of the Gimp you should see a new Script entry "Manga->DarkenResize" that will pop up a dialog and ask you for some values, with reasonable defaults specified.

You can get the script here: mangaDarkenResize.scm

One thing that annoys me is that I still have to manually set each picture to "Greyscale" mode. I should be able to do this automatically, but I don't know much about the Gimp, so I'm punting on that for now. It is also possible to use the Gimp in a batch processing mode, which would be totally awesome, but I haven't had time to make that work with this script. If someone makes any headway in that area, please let me know.
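
If I ever do get to it, batch mode would presumably look something like the following; the function name and argument list here are hypothetical and would have to match whatever the script actually registers:
# hypothetical invocation: the registered function would need to load the file,
# do the despeckle/level/resize work, and save the result itself
$ gimp -i -b '(script-fu-manga-darken-resize "page-01.png" 210 30 1200 16)' -b '(gimp-quit 0)'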

June 25, 2007

Installing the Perl Technorati API implementation WebService::Technorati on OSX via CPAN

This will be yet another entertaining dive into installing software on OSX. For today's task, I want to install the Perl WebService::Technorati API interface to the Technorati blog search / aggregation site. Usually, I do something like $ perl -MCPAN -e shell to get a CPAN shell, and then install WebService::Technorati and hit "yes" when asked about following references. This time, things failed because one of the requirements, XML::Parser, needs to have the XML parser Expat installed. I do have Expat installed - twice even, once from the Apple X11 extra install stuff, and once via the OSX packaging project fink - but CPAN couldn't pick either of those up since they aren't in the most obvious of places.

So it looks like I'll need to install XML::Parser myself. Since CPAN went to all the trouble to download the files that I need to do the install, I cd into the proper directory (I have to spawn a root shell first since I'm installing into the system directories), cd ~/.cpan/build/XML-Parser-2.34-uwBcpV, then create a Makefile that actually points to the correct install: perl Makefile.PL EXPATLIBPATH=/sw/lib EXPATINCPATH=/sw/include, and then the magic incantation: make; make install. Since all that looked like it went well, I drop back into user mode, sudo perl -MCPAN -e shell, and re-try install WebService::Technorati.
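
Pulled out of the prose, that manual XML::Parser build boils down to this (the build directory hash will differ on other machines):
$ sudo -s
# cd ~/.cpan/build/XML-Parser-2.34-uwBcpV
# perl Makefile.PL EXPATLIBPATH=/sw/lib EXPATINCPATH=/sw/include
# make && make install
# exit
$ sudo perl -MCPAN -e shell
cpan> install WebService::Technorati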

That installed some XPATH tools, and then failed spectacularly with a missing LWP/UserAgent.pm, which is something I should probably have installed anyway. Installing LWP::UserAgent failed with a missing HTML::Tagset, which installed easily (isn't CPAN supposed to chase down these dependencies for me? Usually it does, but today CPAN is really having trouble. It must be because of the rain.) The subsequent install of LWP::UserAgent went well. A final install WebService::Technorati completed fine as well.

So, a quick post on what I had to do to get that installed. Mainly, I needed to run the XML::Parser install process by hand so I could create a Makefile that pointed to the existing Expat install that I had put in via fink. Then I had to chase down some other CPAN modules that were necessary. Not too bad, all told.

Just to be cautious, I tried a few things to test the install. Things were working just great. Of course, after about an hour of hacking away at some code, it looks like there are some problems with the WebService::Technorati Perl API: the SearchApiQuery does a cosmos query instead of a blog search query, but since I've got the .pm files, we can fix that easily enough...

June 18, 2007

Added SPF support to fugutabetai.com mailserver

Using the Postfix and SPF howto over on howtoforge.com, I added SPF support to the Fugutabetai.com mail servers. It looks like that is working well, so after a few weeks go by, I'll try to remember to look at the logs and see if SPF has been useful at all in rejecting spam from unauthorized senders. Since SPF (Sender Policy Framework) only works if domains actually refuse mail from machines that are not authorized to send it, I've switched the Fugutabetai.com DNS SPF records to hard fail on mail that doesn't originate from the proper places.
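
An easy way to eyeball the published record is dig; the mechanisms shown here are just an illustration, not necessarily the actual fugutabetai.com record:
$ dig +short TXT fugutabetai.com
"v=spf1 a mx -all"
The "-all" at the end is the hard-fail part.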

June 11, 2007

Emulating Wizardry I: Proving Grounds of the Mad Overlord on the GP2X

Not too long ago, I wrote about my old-school party-based CRPG gaming obsession. Randomly coming across a version of Wizardry I-III for cell phones in Japan rekindled my interest, but sadly my cell phone "terminal" (端末) is not compatible, so I can't play it. That was very aggravating, because I knew there was a chance to play Wizardry while I'm in the subway - which is usually about two hours a day.

Instead, I decided to look around and found another great old game, Dragon Wars, but playing that at home on a laptop is a bit too difficult to do frequently. When I get home I am tired, and usually just plop down for dinner and some TV before going to bed. Something portable would be very nice...

Since I started thinking about playing older CRPGs, I thought that the most likely approach would be to get a Sony PSP or Nintendo DS and look at the state of emulation on those consoles. I'm not really a Sony fan, since they have had problems with excessive Digital Restrictions Management / Digitally Restricted Media / DRM in the past, and I know that they discourage people from running homebrew software on the PSP by releasing firmware that makes it difficult to run unsigned code. Things seemed a bit better on the Nintendo DS, but it still requires some hardware solutions for using SD cards, and certain versions won't run homebrew software.

Then I found the GP2X, an amazing little portable linux device that runs off of a regular SD card, has a very nice 320x240 screen, and two ARM processors at 200 MHz that can be clocked faster and slower. The system is open, supports homebrew out of the box, uses open source software as a base, and has a plethora of emulators available for it.

Read on for more information about my (successful!) quest to get Wizardry emulated on the GP2X.

read more (1098 words)

March 18, 2007

Installing Retexturizer plugin for Gimp on OSX

Resynthesizer is an amazing plugin for the Gimp, the open-source photo editing program. Since I usually run OSX, I like to use Gimp.app, but it does not include the Resynthesizer plugin. Gimpshop, a version of Gimp modified to be more like Photoshop, is supposed to include Resynthesizer, but the version that I downloaded did not seem to have it. So I've decided to try to build the Resynthesizer plugin from source to see if I can use it in Gimp from fink, Gimp.app, or Gimpshop.

First, to build the plugin:

read more (613 words)

March 1, 2007

Updated referrer tracking code

This is an update to a previous post on simple referrer tracking. The original code is from justinsomnia.org.

I was playing around with the referrer tracking code I use here over the past few days, and made some minor adjustments: I use a DHTML slider to set the range of days over which to track, and I allow you to choose to get a count of the unique referrers or requests. There is a max limit of 365 days, but that is totally arbitrary. Maybe I shouldn't have used a DHTML slider, but I thought it looked cool. I guess I could also change it so that "365" means "all days" too. But I won't do that right now.

Only the referrer php code has changed. You can see it in action at http://fugutabetai.com/referrer2.php and you can get the code from http://fugutabetai.com/referrer2.txt (putting the code up like that probably isn't a good idea, since I don't like how the database.php is included in the file so obviously, but whatever.) You can get the original code with the Javascript for the logging from the justinsomnia.org link above. I don't believe there are any possibilities for remote exploits since I make sure that the only user-settable parameter is a number. Anyway, thanks for the great JS code and the nice base for some fun referrer tracking. This does everything that I want for logging and is a hell of a lot easier than using a stats package for log parsing.

When I looked at some logging solutions, I was really surprised at how much referrer SPAM there was. This approach is much better from that point of view, except that users need to have JavaScript turned on for it to work.

January 24, 2007

Getting Japanese input in UTF8 to work in LaTeX on OSX

I have installed LaTeX via fink, and when writing a paper, came across the problem that Japanese just does not work in my environment. This is a bit of a problem as I'm writing about some analysis of English, Japanese, and Chinese data. So I'm going to try to get Japanese input working in my LaTeX distribution.

read more (372 words)

January 22, 2007

Chasen on OSX 10.4

I found myself needing to do some Japanese morphological analysis today, which usually means either ChaSen or Kabocha. Kabocha is supposed to be the new hotness and runs fast, but a quick search didn't turn up any precompiled packages for it on OSX. ChaSen, on the other hand, is available in DarwinPorts, but since I went with fink, and just want to get something running, not enter into some sort of strange package-management land-war, I skipped that. It also turns out that Apple is hosting a package for ChaSen. It installs with a nice installer into /usr/local/bin/chasen.

It seems to run fine, includes the necessary dictionaries, etc., but I had a strange problem. When I tried to process a file in shift-jis encoding using the -i s flag, I would get this strange error: chasen: /usr/local/lib/chasen/dic/ipadic/cforms.cha:9-21: no basic form

That wasn't really what I wanted: I wanted parsed output. Well, since things seem to work just fine in EUC-JP encoding, you can always use iconv to convert from shift-jis to EUC-JP and pipe the resulting output to chasen: iconv -f SHIFT-JIS -t EUC-JP file.txt | chasen

That works nicely.

January 16, 2007

Building libbow and rainbow on Mac OSX

For some research work I'm doing, I would like to do some Bayesian modeling for text categorization. Since I'm not interested in re-inventing wheels when there are plenty of very well constructed wheels available for the taking, I thought I would install Andrew McCallum's libbow and rainbow packages on my Mac. Of course, I had a little bit of trouble, and thought it would be a good idea to document how I went about installing since a quick google search didn't turn much up. (Not quite true: I turned up one or two references to the packages being available via fink, but I couldn't find them in my setup.)

Details follow the read more link.

read more (484 words)

