I'm currently taking a course in Typography. For a class project, we were asked to investigate the relationship between typography and language. While individual letterforms are interesting from a design standpoint, they are traditionally used to convey language and emotion.

Because we could choose any medium for the project, I decided to write software. I have recently become very familiar with Core Text, and I wanted to try my hand at various natural language processing toolkits.

I had this idea, early on in the semester, of animating type in a fullscreen application. I have a strange passion for large text fields and unusually colored text carets. I ran with these ideas for the project.

I wanted to make something that visualizes the emotion conveyed in a particular sentence or phrase. Natural language processing sounded perfect for determining the emotion of a sentence; I could then use Core Text to typeset it and Core Animation to animate it.

First, though, how do we determine the emotion present in a given set of words?

I searched around and found a very interesting component of a project called NodeBox: a Python library named en. It can evaluate the emotion of a given noun, verb, adverb, or adjective. It includes a lightweight version of NLTK and has no external dependencies. Awesome.
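A rough sketch of the lookup, to give you the flavor. The `is_emotion` call and its `boolean=False` flag are my recollection of en's WordNet-style helpers, not verified API, so treat them as assumptions:

```python
# Rough sketch of looking up a word's emotion with en.
# Caveat: is_emotion and its boolean=False flag are assumptions
# about en's WordNet-style helpers, not verified API.
import en

def emotion_for(word):
    """Return an emotion label for `word`, or None if en has no opinion."""
    for part_of_speech in (en.noun, en.verb, en.adjective, en.adverb):
        try:
            # boolean=False asks for the emotion's name rather than True/False.
            label = part_of_speech.is_emotion(word, boolean=False)
            if label:
                return label
        except Exception:
            continue  # en raises on words outside its lexicon
    return None

print(emotion_for("grief"))  # something like "sadness"
```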

Now that I had emotions for given words, I could begin to write some Objective-C to do the typesetting and animation. I bugged Gaynor for help with using Python.framework to glue Python into an app.

I wrote a text attributer, which would use the emotion mapping given by en to assign a custom "emotion" attribute to each word in a provided phrase of text. Then, a font assignment object would parse these emotions and set the typeface accordingly. Finally, using some (very hacky, disgusting) Core Animation, I put each glyph into its own CALayer and animated them all.
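In spirit, the pipeline is simple. Here's a Python sketch of it; the real code is Objective-C setting a custom attribute on an NSAttributedString, and the font names below are placeholders, not the app's actual choices:

```python
# Python sketch of the attribution pipeline. The real app does this
# in Objective-C with a custom "emotion" attribute on an
# NSAttributedString; font names here are placeholders.
EMOTION_FONTS = {
    "joy":     "SomeExuberantDisplayFace",
    "sadness": "SomeSomberSerif",
}
DEFAULT_FONT = "SomeNeutralFace"

def attribute_phrase(phrase, emotion_for):
    """Tag each word with its emotion and the typeface assigned to it."""
    runs = []
    for word in phrase.split():
        emotion = emotion_for(word)  # None if the word isn't recognized
        font = EMOTION_FONTS.get(emotion, DEFAULT_FONT)
        runs.append({"word": word, "emotion": emotion, "font": font})
    return runs
```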

For setting fonts, an easy choice would have been to find typefaces that match the emotions expressed in the sentence. This seemed too obvious. Alex pointed out that it would be cool to assign fonts representing the opposite emotion. This sounded very interesting. Once I had implemented the font mapping, reading sad messages in "happy" typefaces (and vice versa) felt unusual and uncomfortable. This emotional connection surprised me, and it was exactly what I wanted for the project.
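The trick reduces to one tiny table. Something like this, with illustrative pairings rather than the app's actual table:

```python
# The opposite-emotion trick: typeset each word with the face
# assigned to the *opposite* of its detected emotion.
# Pairings here are illustrative, not the app's actual table.
OPPOSITES = {
    "joy": "sadness", "sadness": "joy",
    "anger": "calm",  "calm": "anger",
}

def display_emotion(detected):
    """Emotion whose typeface is actually used for a word."""
    return OPPOSITES.get(detected, detected)
```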

Once I glued all these components together, something still felt off about the project. Due to the inherent nature of Python (and the en library), processing the emotion of text was slow: two seconds or so. Not terrible, but not instant and visceral like I wanted.

How could I get instant emotion for a phrase, as the user types it? No natural language toolkit (that runs on commodity hardware) is that fast. But, if you can't do something fast enough at runtime, maybe you can precompute it!

It took very little time to write a program to test the emotional status of every word in /usr/share/dict/words. Fifty hours of Mac Pro time later, I pickled the results into a Python dict. Then, I used PyObjC (in the interpreter!) to translate this dict into an NSDictionary and write it out as a property list I could load into my app.
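The whole precomputation fits in a few lines. Roughly what it looked like; the file names are arbitrary, and `emotion_for` is a stub standing in for the en-backed lookup:

```python
# Precompute an emotion for every word in the system dictionary,
# then pickle the result. File names are arbitrary; emotion_for is
# a stand-in for the en-backed lookup.
import pickle

def emotion_for(word):
    ...  # the slow en lookup; this is the part that took ~50 hours

emotions = {}
with open("/usr/share/dict/words") as f:
    for line in f:
        word = line.strip().lower()
        label = emotion_for(word)
        if label:
            emotions[word] = label

with open("emotions.pickle", "wb") as f:
    pickle.dump(emotions, f)

# Later, in the interpreter: bridge the dict across PyObjC and write
# it out as a property list the Cocoa app can load directly.
from Foundation import NSDictionary

NSDictionary.dictionaryWithDictionary_(emotions).writeToFile_atomically_(
    "EmotionMapping.plist", True)
```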

I was initially worried that this emotional mapping file would take up too much disk space. After discovering it was less than 50K, I jumped at the opportunity to keep the whole thing in memory.

Combing through the emotional mapping, I noticed many emotional synonyms: "sadness" and "depression", "joy" and "elation", and so on. I wrote a quick "synonymizer" object to resolve these synonyms, broadening the range of words the app recognizes.
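The synonymizer is nothing fancy. Roughly this, where the groupings below are examples rather than the full table:

```python
# A minimal "synonymizer": collapse near-duplicate emotion labels
# onto one canonical name. Groupings below are examples; the real
# table came from combing through the mapping by hand.
CANONICAL = {}
for canonical, synonyms in {
    "sadness": ("depression", "sorrow", "grief"),
    "joy":     ("elation", "delight", "happiness"),
}.items():
    CANONICAL[canonical] = canonical
    for synonym in synonyms:
        CANONICAL[synonym] = canonical

def synonymize(label):
    """Resolve an emotion label to its canonical form."""
    return CANONICAL.get(label, label)
```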

[Screenshot: Emotive Text]

And now it's instant, and pretty much done. You can clone it and play with it at the GitHub Repo. It requires OS X 10.7 "Lion" or later.

(Please excuse the dirty, disgusting CA hacks and unoptimized NSDictionary code. This was written quickly, for one purpose, and is not something that I will need to maintain.)