Thursday 23 April 2009

please go there, not here

My new home is a WordPress blog at www.wgtower.nl.
All of my readers should go there for my blogging from now on. This thing here will have no more posts.

Tuesday 10 March 2009

predictive text isotaps

So, ever since getting a girlfriend, I've been texting like crazy. I actually know my way around the numpad, and I've gotten pretty fast at texting. This has pulled my attention toward the input method most people use for writing text messages on their phone nowadays, called predictive text.

Every button on your phone is mapped to multiple characters. This is how you can resolve phone numbers like 1-800-pizza, but the mapping is also used for texting. 2 maps to {'a', 'b', 'c'}, 3 to {'d', 'e', 'f'}, et cetera. In the old, traditional texting method, you would press the 2 key once to get an 'a,' twice for a 'b,' and so on. In predictive text you tap the 2 key only once, no matter which letter you want. The software uses a dictionary of words to see which word your sequence matches. For example, to get 'you,' you type 968. The dictionary reveals that 'you' is the only word with this combination, and that word is printed to the screen. This is much more efficient than old-style texting, since you need fewer button presses to get your word.

Of course, there are inevitably some words that share the same number sequence. For example, 'home,' 'good' and 'gone' all have the sequence 4663. These words are referred to as isotaps. In this case you can select the word you want with the arrow keys. The list is, of course, ordered by frequency of use.

Isotaps are quite annoying, because they force you to keep looking at the screen in case you meet one. So, I was wondering: how many isotaps are there in the English language? Using some python magic I got the answer, along with a few other random statistics. This was run against the Ubuntu word list, which is usually more expansive than a cell phone's, but it gives an indication. It would be interesting to compare against other languages, to see which language is most amenable to text prediction (for a better indication, weight the isotaps by word frequency).
  • number of isotaps: 14152
  • longest isotap: length 14, sequence 78873322846617 ["putrefaction's", "stupefaction's"]
  • sequence with the most isotaps: 22737, shared by 12 words ['acres', 'bards', 'barer', 'bares', 'barfs', 'baser', 'bases', 'caper', 'capes', 'cards', 'cares', 'cases']
number of sequences with x isotaps:
x = 1: 0
x = 2: 4429
x = 3: 980
x = 4: 308
x = 5: 129
x = 6: 39
x = 7: 14
x = 8: 12
x = 9: 3
x = 10: 1
x = 11: 0
x = 12: 1

number of isotaps of length x:
x = 1: 24
x = 2: 151
x = 3: 645
x = 4: 2134
x = 5: 2937
x = 6: 3389
x = 7: 2603
x = 8: 1366
x = 9: 517
x = 10: 218
x = 11: 81
x = 12: 49
x = 13: 11
x = 14: 5

Note that there are more isotaps of length 14; the pair given is merely an example. The script is available here. I'm afraid I didn't make any fancy graphs, but the data is pretty interesting, I think.
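
For those curious, here is a minimal sketch of how such a script could work. This is my reconstruction, not the script linked above, and the keypad mapping is an assumption (including putting the apostrophe on the 1 key, which the 14-digit sequence for "putrefaction's" suggests):

from collections import defaultdict

# letter-to-digit mapping of a standard phone keypad
# (assumption: the apostrophe lives on the 1 key)
KEYPAD = {"'": '1',
          'a': '2', 'b': '2', 'c': '2',
          'd': '3', 'e': '3', 'f': '3',
          'g': '4', 'h': '4', 'i': '4',
          'j': '5', 'k': '5', 'l': '5',
          'm': '6', 'n': '6', 'o': '6',
          'p': '7', 'q': '7', 'r': '7', 's': '7',
          't': '8', 'u': '8', 'v': '8',
          'w': '9', 'x': '9', 'y': '9', 'z': '9'}

def taps(word):
    """Return the key sequence for a word, e.g. 'you' -> '968'."""
    return ''.join(KEYPAD[c] for c in word if c in KEYPAD)

# group the word list by key sequence
sequences = defaultdict(list)
for line in open('/usr/share/dict/words'):
    word = line.strip().lower()
    if word:
        sequences[taps(word)].append(word)

# words that share their sequence with at least one other word are isotaps
isotaps = dict((seq, words) for seq, words in sequences.iteritems()
               if len(words) > 1)
print 'number of isotaps:', sum(len(words) for words in isotaps.itervalues())
longest = max(isotaps, key=len)
print 'longest isotap:', longest, isotaps[longest]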

This is why I don't like Mines

[screenshot: a just-finished game of Mines, 99 mines, time 5:12]
I'll wait a second while everyone absorbs that screenshot. Just click it and look at it in full size. Still waiting... You got it? So yeah, that's a game of Mines (that's Minesweeper if you use Windows) that I just finished. 99 mines, time is 5:12. That's pretty awesome. Except that I messed up on the last mine. Those who are familiar with Mines will see, though, that the square I picked was just as valid as the other one. I did not lose due to a logic error, but merely because I was forced to guess.

And that just bothers me. I like Mines because it is a simple logic game that does not take much mind power to solve but does require some thinking. The problem with it is that most of the games I play end up requiring some guessing to finish. This is frustrating. One guess reduces your chances of winning to a mere 50 percent, through no fault of your own.

Which makes me wonder, is it possible to generate minefields which are ensured to be solvable without guessing? How expensive is that?
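
One approach that comes to mind is rejection sampling: generate random boards and keep only those a guess-free solver can clear from the first click. Below is a rough sketch. It uses only the simplest single-square deductions, so it under-counts the truly guess-free boards, but it shows the shape of the idea; the cost is the expected number of boards you throw away before one sticks:

import random

def neighbors(cell, rows, cols):
    r, c = cell
    return [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0) and 0 <= r + dr < rows and 0 <= c + dc < cols]

def solvable_without_guessing(mines, rows, cols, start):
    """Can the board be cleared from 'start' using single-square logic only?"""
    counts = dict(((r, c), len([n for n in neighbors((r, c), rows, cols) if n in mines]))
                  for r in range(rows) for c in range(cols))
    opened, flagged = set([start]), set()
    progress = True
    while progress:
        progress = False
        for cell in list(opened):
            nbrs = neighbors(cell, rows, cols)
            unknown = [n for n in nbrs if n not in opened and n not in flagged]
            if not unknown:
                continue
            flags = len([n for n in nbrs if n in flagged])
            if counts[cell] == flags:
                opened.update(unknown)   # remaining neighbors must be safe
                progress = True
            elif counts[cell] - flags == len(unknown):
                flagged.update(unknown)  # remaining neighbors must be mines
                progress = True
    return len(opened) == rows * cols - len(mines)

def generate_guess_free(rows, cols, num_mines, start):
    """Sample boards until one is guess-free; never places a mine on 'start'."""
    cells = [(r, c) for r in range(rows) for c in range(cols) if (r, c) != start]
    while True:
        mines = set(random.sample(cells, num_mines))
        if solvable_without_guessing(mines, rows, cols, start):
            return mines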

Thursday 19 February 2009

the GUI, CLI style

For the more tech-literate users, the Linux command-line interface is the best productivity boost one can get. Literally anything can be done with a few magical commands, and all without having to move that pesky mouse. Typing a command like 'locate' is much faster than launching whatever desktop search application you use, and you can go on to do something with the found file in the same step using a pipe.

But the GUI has its uses. For one, it looks a lot snazzier. Second, any kind of data visualization is much better done in graphical mode (graphs, any kind of WYSIWYG editors including office applications). Still, it would be nice to be able to use the keyboard, which is a much more efficient input mechanism than the mouse (albeit with a steeper learning curve).

A recent project has emerged which attempts to bring the CLI to the GUI: Ubiquity. Ubiquity is a Firefox plug-in which adds a natural-language-like command line to the browser. For example, one can select some text, press CTRL-K (the shortcut to bring up Ubiquity), type "translate this to English," and press enter. This runs the selected text through Google's translator and replaces your selection with the result. There is much more that can be done with this plug-in; I suggest you check out the video I linked above.

Another CLI-like interface, though much more limited, is Gnome Do (KDE users should check out the KDE equivalent, KRunner). It allows you to basically do away with the Gnome menu. You can open files with it, run applications, extract or create archives, tweet, etc. If you have the right plug-ins, there is almost nothing that cannot be done. Granted, the actual command line is still far more powerful. But Gnome Do can allow you to be much more productive, if you take the time to set it up and learn it.

Ubuntu has Gnome Do packages in its repositories, but they are unfortunately far out of date. Gnome Do has its own repos, but they, too, lag a release behind. The only feasible option to get a recent release is to compile from source. Luckily, Gnome Do's wiki provides good instructions, and compiling is fairly straightforward if you have gone through the process before. There is one caveat if you're running an older Ubuntu release: Mono, one of the dependencies, is out of date. You can compile the latest version of Mono yourself, or you could fall back to the latest official Gnome Do release:
bzr checkout -rtag:0.8.0 gnome-do-0.8.0 gnome-do
and install Mono 1.9.1 from the badgerports repo. This will allow you to use Gnome Do's basic functionality. Building the plug-ins from source was somewhat more difficult: dependency hell ensued, and I failed to build them. I was finally able to get the plug-ins by downloading the gnome-do-plugins-0.8.0 deb for Intrepid, manually extracting the plug-in files with the archive manager, and placing them in /usr/share/gnome-do/plugins. Not very amenable to upgrades, but working nonetheless. Once you do get the package working, the functionality is pretty sweet.

Friday 13 February 2009

pygame: decoupling rendering from game logic

The basic execution profile of a computer game almost always looks the same. The devil is in the details, of course, but the basics are like this:
  • listen to user input, change game data appropriately
  • update game objects (move objects, do collision detection, etc.)
  • render world to the screen
Rinse and repeat as fast as possible, and you have a game. However, there is a subtle problem with this approach: on faster computers, the game will run faster. If this seems desirable to you, think about networked games. Do we really want the fast computers to slow down just for the slow ones? And how exactly would we do that? And what about scenes that are harder to render? Do we have to slow the game down just for those?
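
For reference, that naive loop might look like this in pygame (the game object here is hypothetical, standing in for your own code):

import pygame

def naive_loop(game, screen):
    # everything runs in lockstep, as fast as the hardware allows
    while game.running:
        for event in pygame.event.get():
            game.handle(event)    # 1. user input
        game.update()             # 2. move objects, collision detection, etc.
        game.render(screen)       # 3. render the world to the screen
        pygame.display.flip()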

We can see that it is, therefore, desirable to make your game run at the same speed no matter what happens. With one caveat: we always want to render as many frames as we possibly can. More frames means a smoother experience, after all.

It is in our interest, then, to decouple the game logic updates from the rendering. The rendering should proceed as fast as possible, but the game logic should advance at a constant rate regardless of the frame rate.

The right way to do this, of course, is to look at the clock. First we must decide how many logic updates we want per second; for most games, 20 to 25 will suffice. The time between two updates should then be 1/25 seconds. Every pass through the rendering loop (which still runs as fast as possible) we check how much time has passed, and only if it is necessary do we update the game. Then we proceed with rendering. If the update is skipped, we have the renderer interpolate between the two most recent logic updates, so that it does not just render the same world multiple times. This results in a smoother experience on fast computers, but does not slow the game down on slower, older hardware.
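
A minimal pygame sketch of such a decoupled loop (again with hypothetical game and renderer objects):

import pygame

UPDATES_PER_SECOND = 25
UPDATE_LENGTH = 1.0 / UPDATES_PER_SECOND  # game time per update, in seconds

def main_loop(game, renderer):
    clock = pygame.time.Clock()
    latency = 0.0  # how far game time lags behind real time
    while game.running:
        # real time advances continuously
        latency += clock.tick() / 1000.0
        # game time catches up in fixed steps
        while latency > UPDATE_LENGTH:
            game.update()
            latency -= UPDATE_LENGTH
        # interpolation factor: how far we are between two logic updates
        renderer.render(game, latency / UPDATE_LENGTH)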

In my gunge framework, there is a new Clock class in the experimental branch time-exp that handles this transparently for you. It watches over two things: real time, which advances continuously, and game time, which can be advanced by 1/(updates_per_second) by updating the game. It is worth noting that game time advances in discrete units, since the game world is updated in steps every iteration of the main loop.

To decide whether the game should update, the Clock has the concept of game latency, or how far game time lags behind the real time. The continuously advancing real time increases the game latency, and it is decreased when the game updates, by 1/(updates_per_second). There are two strategies for deciding when to update the game:
  • game_latency > 1/(updates_per_second) -- this updates the game whenever the latency exceeds one update_length
  • |game_latency - update_length| < |game_latency| -- this updates if it reduces the absolute value of the latency, i.e. if the game time comes closer to real time because of the update.
The caveat of the second method is that it allows game time to run ahead of real time; the advantage is that game time can get closer to real time than it could if it always had to lag behind. A further disadvantage of the second method is that the interpolation factor for renders becomes harder to calculate, but it allows updates at times when the first method must helplessly fall further behind real time.
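
Expressed as code, the two strategies amount to something like this (my paraphrase, not the actual Clock implementation):

UPDATE_LENGTH = 1.0 / 25  # one update_length, at 25 updates per second

def should_update_lagging(game_latency):
    # strategy 1: update only while game time lags more than one
    # update_length behind real time
    return game_latency > UPDATE_LENGTH

def should_update_closest(game_latency):
    # strategy 2: update whenever doing so brings game time closer to
    # real time, even if that leaves game time ahead (negative latency)
    return abs(game_latency - UPDATE_LENGTH) < abs(game_latency)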

Wednesday 11 February 2009

python: everything is an object

Remember that python game framework I was creating? The one with the silly name? (I won't actually tell you what the silly name was, dig through the archives or something). Well, I don't have access to the original code, since it is all in a nice git repository on my pc at home. But this has not prevented me from working on the code.

I've started a fresh repository on my system here, and to make sure I won't make the same mistake, I've set up a github repository. The new name for the framework is gunge, but the only part that is currently available is the event handling. I have, however, come across an interesting design challenge that really made me appreciate the fact that in python, everything is an object.

You see, the event system provides a decorator called bind that allows you to statically bind events. It would be used something like so:

class SomeClass(gunge.event.Handler):
    @gunge.event.bind(pygame.KEYDOWN, {'unicode': 'a'})
    def on_keya(self, event):
        print "some code"

The function on_keya would then be called whenever a KEYDOWN event occurs, and furthermore, only if event.unicode is equal to 'a.' There are a few more powerful features to this second argument, called the attribute filter, but that is for another time. How would this be implemented? There is a difference between this function and an actual instance method, bound to an instance.

My first idea was to store the information in some class variable called handlers, and have each instance use this variable to bind its actual methods upon initialization. This works in the simple cases, but becomes problematic with inheritance. A class variable does not carry over nicely between inherited classes, and furthermore, there is the problem of overridden functions. If a function is bound to a certain event in the parent class, and that function is overridden in the child, what should happen? Should the parent binding still count, and how would this be implemented?

Implementation issues aside, this method also requires the user to create a new class variable in each class that is an event handler, and pass it into the bind decorator so that it can be used for annotation. The problem seemed unsolvable. But then a wise lesson came to me: if your design runs into implementation issues, do not try to solve the implementation issues. It is likely that you need a different design.

It came to me that functions, like pretty much everything in python, are just objects. With types, attributes, the whole shenanigans. So it seemed much simpler to store the binding information as an attribute of the function itself. Since introspection can be used to find all of an instance's methods, it is trivial to retrieve this information. This means that the boilerplate code of the class variable is gone, and that derived classes retain the bindings of their parent class, unless a particular function is overridden. In that case, the parent's function is not part of the child class, and the binding must be respecified. And this is actually desirable, for clarity's sake.
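
In sketch form, the idea looks like this (my reconstruction, not the actual gunge code):

import inspect

def bind(event_type, attr_filter=None):
    """Decorator that stores the binding info on the function object itself."""
    def decorator(func):
        func.binding = (event_type, attr_filter or {})
        return func
    return decorator

class Handler(object):
    def __init__(self):
        # introspection: find every method that carries binding information
        for name, method in inspect.getmembers(self, inspect.ismethod):
            if hasattr(method, 'binding'):
                event_type, attr_filter = method.binding
                # register (event_type, attr_filter, method) with the
                # event manager here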

Furthermore, we can allow a single event Binder object to hold multiple callbacks. This means that a class method can simply store its own Binder object, and each new class instance can add its own callback to it. This drastically reduces the number of Binder objects in the event manager, as a class with one method bound to an event will have one Binder object, no matter how many instances of that class exist. This can have huge benefits in both space and speed, since an event has to be tested against an attribute filter just once.
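
A hypothetical Binder could be as simple as:

class Binder(object):
    """One Binder per bound method; every instance adds its own callback."""
    def __init__(self, event_type, attr_filter):
        self.event_type = event_type
        self.attr_filter = attr_filter
        self.callbacks = []

    def handle(self, event):
        # test the attribute filter once, then notify every registered callback
        if all(getattr(event, attr, None) == value
               for attr, value in self.attr_filter.items()):
            for callback in self.callbacks:
                callback(event)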

The 'everything is an object' paradigm, together with powerful introspection capabilities, allows you to do a lot more with functions than would normally be possible. And the lesson for this week, of course: if your design has problems, don't try to work around them. Instead, change the design.

Thursday 5 February 2009

variable assignment in python

Hello everyone, welcome to the blog that rarely has posts! I should really stop making jokes about this. I think it's time for another programming blog post.

The following confusion often arises with programmers new to python, who come to understand that assignments in python are done by reference, not by value. This is correct, after a fashion, and not really that different from other languages. But a little knowledge is often more dangerous than no knowledge at all. While the following statement is not all that confusing for a C programmer who knows nothing about python, it is confusing if he has learned that all assignments are reference assignments:

>>> a = 1
>>> b = a
>>> a = 2
>>> b
1
>>>

The expected result, of course, is 2, not 1. Doesn't b reference the same data as a after the second statement? So shouldn't b reflect the change that was made to a?

No, it shouldn't, because integers are immutable, which means we can't change them. This might seem strange. We could change the value of a without problems, couldn't we? How can it be immutable then? Well, let's look at what happens when a simple assignment is made in a fresh python interpreter. What goes on inside if we do this:

Python 2.5.2 (r252:60911, Jul 31 2008, 17:28:52)
[GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 1

Two things are being created here. First, a name. Second, some data. These things exist separately, something that becomes relevant later. Let's visualize the names and data in the interpreter at this time (cheesy graphics will follow):

[diagram: the name "a" pointing at an integer object with value 1]
Right, so we have one name, "a," pointing to some data, in this case an integer object with the value of 1. Pretty straightforward so far. Let's continue along this line. We'll execute the second statement in the above little piece of code:

>>> b = a

The result of this is that a second name, b, is created. It is pointed at the same integer object as a, as we specified in the statement:

[diagram: the names "a" and "b" both pointing at the same integer object 1]
Now, let's shake things up: we're going to reassign a to something else:

>>> a = 2

Most people would think that the data would simply change its value to 2, and the picture would remain basically the same. But this is where python catches you: integers are immutable. That means their value can not change. Ever. So, what happens instead? A new integer object is created:

[diagram: "a" pointing at a new integer object 2; "b" still pointing at the old integer object 1]
a is now pointing at the new integer object. But what about b? Well, we never told it to point at anything else, so it is still pointing at the same old object that a used to point at.

Here, then, is the fundamental thing to grasp: you can make two names point to the same thing, but if that thing is immutable, it cannot be changed. Therefore, it makes no sense to expect changes to one name to be reflected in the other. You cannot make changes; you can only make a name point to something else, and that, indeed, breaks the synchronization.
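
For contrast, run the same experiment with a mutable object, such as a list, and the change is visible through both names, right up until one of the names is rebound:

>>> a = [1]
>>> b = a
>>> a.append(2)    # mutates the object in place; no name is rebound
>>> b
[1, 2]
>>> a = [3]        # rebinds the name a to a brand new list
>>> b
[1, 2]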

In a language like C, creating two integers a and b immediately results in the creation of two integer objects in memory. The above behaviour is expected there, since the two names refer to two different objects. Upon learning that in python, the assignment b = a results in what is essentially the behaviour of the C statement int *b = &a, confusion arises. The missing gem here is immutability, which makes the python behaviour sane again.

python: intuitive to the newbie, yet without being inconsistent.

Saturday 31 January 2009

“There is nothing wrong with change, if it is in the right direction”

So, the new semester's started, and there are some changes coming this way. First off, I'm moving out, away from the host family I've been staying with, and into the dorms. This is at once a sad and a happy tiding, but I believe it to be a necessary and good step in my program. Betty and I discovered that my social life was gravitating away from her house and into the dorms. In short, I figured I was spending so much time there I might as well get myself a bed. Beds are way more comfortable than the chairs I spent nights in, in rooms not my own.

I haven't been doing a whole lot of interesting stuff since California. Classes are progressing normally. Chinese is hard; my algebra class is ridiculously easy (come on people, linear equations? It's a flashback to year one of my Vossius career), but getting more interesting every class. On the free-time side of things, there are the regular multiplayer StarCraft sessions, like always (needless to say, I can kick the asses of a lot of people in the dorms, but not everyone. Ulukman is like a machine, unbelievable). Also, I've started to play the old Diablo games in anticipation of Diablo 3. While the original will get old in a matter of minutes, Diablo 2 is definitely a time-sucker. I recommend it to anyone who wants to play a good old hack-n-slasher.

The other thing soaking up my time (what I have left after subtracting my social life) is Stargate: Atlantis. I have come into possession of all five seasons of this sci-fi show. I watched the older Stargate: SG-1 a few years ago, and it was pretty good. Atlantis is a step up, however. I can recommend this series to anyone lamenting the absence of new, good Star Trek shows, and to fans of sci-fi in general. This is stuff worth watching.

So, tomorrow is move day. I do not know my room yet, but my roommate will most likely be Kersten, who is also moving out of Betty's house. We will still visit them of course, and call regularly. But it'll be a new and exciting semester.

Today's quote is from Winston Churchill. Couldn't find anything better, sorry.