My adventures with Python, functional programming, Korean, Test Driven Development and more

June 2012 update

2012-06-07, by Dan Bravender <dan.bravender@gmail.com>

Sooyeon and I have been very busy for the past 7 months with our brand new baby daughter Haereen. When I get a few spare seconds I still work on my personal projects but it's been pretty slow going as of late. I'm definitely not complaining since Haereen is totally worth it. Anyway, here is a summary of some of the things I've worked on in my free time.

dongsa.net 2.0 preview

dongsa.net is a Korean verb conjugator that explains the contractions and exceptional rules for many tenses and levels of politeness. The current version is written in Python, but over two years ago the engine was completely rewritten in JavaScript to make it easier to port to Android and iOS. I've had the rewrite sitting in a branch for around a year now, but it's only been a month or so since I pushed up preview.dongsa.net.

qc: a QuickCheck implementation in Python


>>> def simple_adder(a : int, b : int) -> int:
...    return a + b
...
>>> from qc import check_annotations
>>> check_annotations(simple_adder)
>>>
>>> def lying_adder(a : int, b : int) -> int:
...     return 'result: %d' % (a + b,)
...
>>> check_annotations(lying_adder)
Traceback (most recent call last):
  File "", line 1, in 
  File "qc/__init__.py", line 137, in check_annotations
    f.__name__, output, response_type, test_args))
AssertionError: Was expecting lying_adder to return <class 'int'> but got <class 'str'> with these arguments: {'a': 0, 'b': 0}
>>>
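
check_annotations generates random arguments based on a function's annotations and asserts that the value the function returns matches the annotated return type. Here is a minimal sketch of the idea (this is not qc's actual implementation, and it only handles int arguments):

import random

def check(f, runs=100):
    annotations = dict(f.__annotations__)
    return_type = annotations.pop('return')
    for _ in range(runs):
        # generate a random int for every argument (qc supports more types)
        args = dict((name, random.randint(-2 ** 32, 2 ** 32))
                    for name in annotations)
        result = f(**args)
        assert isinstance(result, return_type), (
            'Was expecting %s to return %s but got %s with these arguments: %s'
            % (f.__name__, return_type, type(result), args))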

dbmigrate

git-based fabric deploys are awesome

2012-05-11, by Dan Bravender <dan.bravender@gmail.com>

When I was pointed to Python Deployment Anti-Patterns by a colleague, I was a little shocked to see that the way we had been deploying applications with fabric and git over the past two years (over 1500 deployments) with no problems was being called an Anti-Pattern. There are definitely many ways to deploy software applications, and they all have their pros and cons. Our process is by no means perfect, but the way that we use git within fabric is definitely one of the best parts of our deployment process.

In his follow-up article Hynek made the case that deploying with native packages is better. On my team developers do the deploys, and we actually started out deploying packages, but we got sick of waiting for the packages to build and upload, so we switched to git-based deploys. Packages are, of course, a valid way to deploy software, but I think the criticisms leveled against git-based fabric deploys are really criticisms of doing those deploys in a specific way. I'm writing this article to show you how we have been successful with git-based fabric deployments.

I agree with many of his points:

Upstart is my personal favorite because it is very stable and the configuration is succinct. Here's an example of a daemon that I've had running on one of my personal projects for several years with no issues:

start on runlevel [12345]
stop on runlevel [0]

respawn

exec sudo -u www-data PATH=path/to/app VIRTUAL_ENV=path/to/virtual_env path/to/python_server_script

Why anyone would want to write a billion-line init script now that upstart exists is beyond me. Perhaps they don't know about upstart. It could also be that they are stuck on CentOS or RedHat. My heart goes out to you if that's the case. I know how that feels.

Here are some of the points I disagree with:

I've seen others make this same claim and, on the face of it, it makes sense up to a point. On my team developers deploy, so we keep templates of configurations and the differences are kept in context variables that are passed into the templates. If there is sensitive information we keep it outside of version control.

Really, if you want to test changes from dev through staging and on to production, why not keep the configuration as similar as possible? On projects where teams are creating very generic apps that are deployed with many different configurations I understand the need for this, but most web application developers are deploying to a very specific target (production). It makes sense to keep your development settings as close to that target as possible. For example, if staging and production have the ENCRYPT_STUFF setting set to TRUE then your development environment should have it set too. But they should all have different keys, and the production key should be kept out of version control.
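
Here is a sketch of what I mean (the file layout and names are invented for illustration): one template shared by every environment, and the keys themselves read from files that are kept out of version control.

from string import Template

CONFIG_TEMPLATE = Template('ENCRYPT_STUFF = $encrypt_stuff\nSECRET_KEY = "$secret_key"\n')

def render_config(environment):
    # dev, staging and production all share the same settings...
    context = {'encrypt_stuff': 'TRUE'}
    # ...but each environment's key lives outside version control
    context['secret_key'] = open('/etc/myapp/%s.key' % environment).read().strip()
    return CONFIG_TEMPLATE.substitute(context)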

"It doesn't scale. As soon as you have more than a single deployment target, it quickly becomes a hassle to pull changes, check dependencies and restart the daemon on every single server. A new version of Django is out? Great, fetch it on every single server. A new version of psycopg2? Awesome, compile it on each of n servers."

Fabric will roll through all commands on all servers in a predictable manner, one after the other. That way each server can be taken out of the load-balanced pool before the service is HUP'd and put back in after it comes back. If this were done automatically with unattended package upgrades (as proposed later in the article), isn't there the possibility that all your servers become unavailable at the same time?
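
A sketch of that rolling pattern with fabric (the pool functions are hypothetical stand-ins for whatever API your load balancer exposes):

from fabric.api import env, run

env.hosts = ['app1.example.com', 'app2.example.com']  # example hosts

def remove_from_pool(host):
    pass  # placeholder: tell the load balancer to drain this host

def add_to_pool(host):
    pass  # placeholder: put the host back once the service is up

def rolling_restart():
    # fabric runs this task once per host, in order, so only one
    # server is ever out of the pool at a time
    remove_from_pool(env.host_string)
    run('kill -HUP $(cat /var/run/myapp.pid)')
    add_to_pool(env.host_string)

Running fab rolling_restart then walks the pool one host at a time.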

You should always run pip; if there is nothing to upgrade it will simply do nothing. There's no need to download all of the packages - you can have them seeded on each server before starting the upgrade.

"It's hard to integrate with Puppet/Chef. It's easy to tell Puppet 'on server X, keep package foo-bar up-to-date or keep it at a special version!' That's a one-liner. Try that while baby sitting git and pip."

I can't speak to integrating fabric with Puppet and Chef, but it's basically a one-liner to update a remote target with fabric:

cd path/to/git/repo && git reset --hard [deployment-sha1] && pip install -r path/to/requirements.txt

"It can leave your app in an inconsistent state. Sometimes git pull fails halfway through because of network problems, or pip times out while installing dependencies because PyPI went away (I heard that happens occasionally cough). Your app at this point is – put simply – broken."

A git pull will not leave your app in an inconsistent state. If the network fails it won't change your working copy, and fabric will stop the script because git will return an error. That said, I don't think you should use git pull anyway, since it is one more moving part that can fail during deployment and it requires that your private repository be open to the world. Since git is distributed, a developer can push their repo's immutable object store to the target using git push during deployment. Running git reset --hard [deployment-sha1] after the push is finished will update the working copy. Since there is a repo on the other end, you'll only be sending the objects that are new since the last push to the target. This is why git-based deploys beat packages speed-wise. Most of our code deploys take a fraction of a second.
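
A minimal sketch of that push-then-reset flow (the paths, the deploy ref and the pre-seeded target repo are assumptions; gitric, mentioned below, does this properly):

from fabric.api import cd, env, local, run

REPO = '/srv/myapp/repo'  # a repo seeded on the target once, up front

def deploy(sha1):
    # send only the new objects since the last deploy, under a ref
    # that is not the checked-out branch
    local('git push -f ssh://%s%s %s:refs/heads/deploy'
          % (env.host_string, REPO, sha1))
    with cd(REPO):
        # point the working copy at exactly the pushed commit
        run('git reset --hard %s' % sha1)
        run('pip install -r requirements.txt')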

Even a private PyPI mirror can fail. Why not upload the packages to the target and run pip like this?

pip install --no-index --find-links file:///[local-path-to-packages] -r requirements.txt

You could even store your packages in a git submodule and sync your submodules at the same time. (We sync submodules as well; it's only a little extra work.)

"Weird race conditions can happen. Imagine you're pulling from git and at the same time, the app decides to import a module that changed profoundly since its last deployment. A plain crash is the best case scenario here."

When you install with a package you have to stop and restart the app. You need to do the same thing if you use git and fabric. With git, though, the update takes much less time because only the modified files are swapped out. Packages copy whole trees of files, many of which are most likely not modified between releases, so the app will be down longer while this disk IO takes place.

Check out the gitric fabric module I wrote that performs git deployments in the way I've described above.

One other valid problem I've heard raised about git-based deploys is that cruft can stick around in your working copy - for example, a stale .pyc file whose original .py was deleted can still be imported. Since cloning a local git repository uses hard links, you can seed your remote repository and then clone it locally on the same machine (even for slightly large projects this only takes a little extra time). Stop your server, move the old repository out of the way, move the new clone to where the old one was (or use a current symlink) and then restart the server.
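
Scripted with fabric, that swap might look something like this sketch (the paths and the upstart job name are illustrative, and it assumes the commit was already pushed to the seed repo):

from fabric.api import run, sudo

def swap_release(sha1):
    # a local clone uses hard links, so this is fast and cheap
    run('git clone /srv/myapp/repo /srv/myapp/releases/%s' % sha1)
    run('cd /srv/myapp/releases/%s && git reset --hard %s' % (sha1, sha1))
    sudo('stop myapp')  # the upstart job from earlier
    # atomically repoint 'current' at the fresh, cruft-free clone
    run('ln -sfn /srv/myapp/releases/%s /srv/myapp/current' % sha1)
    sudo('start myapp')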

Git-based deployments make sense for scripting languages where there isn't a compile step, so the repo can be sent as-is to production (they wouldn't make sense for a Java application). It's worth harnessing git to make deployments faster. If we only had to deploy once a month we might've settled for package-based deployments, but we push often and got sick of waiting for packages to build and upload.

Why I don't use git's staging area as part of my normal workflow

2012-04-01, by Dan Bravender <dan.bravender@gmail.com>

Git has a lot of bells and whistles, and there are a lot of different ways to achieve any given task. I've seen several workflow documents explaining how to use the staging area and git add --patch to commit only some of the changes in your working copy so you can keep nice, clean, logical commits. I love the idea of having a clean history and logical commits, but I think there are some drawbacks to using the index as part of a normal workflow.

The problem with the staging area

I always want to commit working code (if possible) because I could switch to another task and I don't want to come back to broken code (or, even worse, pass along broken code to a colleague). That's why I always try to commit all changes in my working copy. When you start getting fancy and use the index to commit partial changes, your working copy and your index get out of sync: your code can appear to work while you are using it or running your tests, but the code that you commit might not. One thought-experiment example: it's possible to commit a new test but not the new function or method that the test is checking, even though the test is passing. Whenever your tests or tools run, they run against your working copy. Whenever you run your code interactively, you are exercising the code in the working copy. Your file system does not understand that you only have some chunks of your changes staged for a commit.

When I want to remove in-progress or half-baked code I use (and recommend) git stash --patch. It is the opposite of git add --patch: it removes changes interactively and creates a stash of the unfinished chunks of code. Like any other stash, the changes can be popped or applied later. Once you have removed the in-progress code you can run your tests and know that you are committing working code. Another benefit of patch-stashing unfinished changes is that they become part of the immutable history, which can serve as a backup for in-progress code.

There you have it. That's why I avoid the index and frequently use git stash --patch in my git workflow.

When Failure is the Best Option

2011-11-22, by Dan Bravender <dan.bravender@gmail.com>

In Python (and most sane scripting languages) when something unexpected happens an exception is raised and execution stops. Damien Katz calls this the "Get the Hell out of Dodge" error handling method in his seminal Error codes or Exceptions? Why is Reliable Software so Hard?. In his article Damien explains several different ways of handling errors. None of the options is to ignore that something went wrong. That's because ignoring problems only makes them worse. But that's exactly what PHP and MySQL do for certain classes of errors.

Here's how Python handles failure:

% python
>>> print a
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'a' is not defined
>>>

PHP's default behavior is to just keep chugging along ignoring problems that could cause huge issues:

 % php 2> >(while read line; do echo -e "stderr> $line"; done)
<?
printf("%d\n", $a);
?>
0
stderr> PHP Notice:  Undefined variable: a in - on line 2
stderr> PHP Stack trace:
stderr> PHP   1. {main}() -:0

A "notice", eh? Really? If you try to delete a record from a database using sprintf to ensure it is a decimal and accidentally pass in an undefined variable as the id PHP will happily tell the database to delete the record with the id of "0". In my opinion this deserves more than a "notice" in the logs. PHP's default error-handling behavior is a recipe for disaster.

Fortunately, if you must use PHP, there is a way to make PHP behave in a more sane manner and force every unexpected event to raise an exception (exception_error_handler from http://www.php.net/manual/en/class.errorexception.php):

 % php 2> >(while read line; do echo -e "stderr> $line"; done)
<?
function exception_error_handler($errno, $errstr, $errfile, $errline ) {
    throw new ErrorException($errstr, 0, $errno, $errfile, $errline);
}
set_error_handler("exception_error_handler");

printf("%d\n", $a);
?>
stderr> PHP Fatal error:  Uncaught exception 'ErrorException' with message 'Undefined variable: a' in -:7
stderr> Stack trace:
stderr> #0 -(7): exception_error_handler(8, 'Undefined varia...', '-', 7, Array)
stderr> #1 {main}
stderr> thrown in - on line 7

There is one huge problem with this. If you are building on an existing PHP project or have a ton of PHP code it's likely that you will see frequent breaks once you make failure the default. That's a direct result of the language designers choosing such lenient default behavior. If you are starting a new project using PHP you should get your head checked (see phpsadness.com). If you pass a psychological evaluation and you still for some reason want to build a new project using PHP you should turn on immediate failure by using the error handler mentioned above and write tests to exercise your code. You'll thank me later.

Now, let's look at default behaviors of some popular databases:

 % psql
# create table simple_table (col varchar(10));
CREATE TABLE
# insert into simple_table (col) values ('1234567890a');
ERROR:  value too long for type character varying(10)

 % mysql
mysql> create table simple_table (col varchar(10));
Query OK, 0 rows affected (0.23 sec)

mysql> insert into simple_table (col) values ('1234567890a');
Query OK, 1 row affected, 1 warning (0.08 sec)

mysql> select * from simple_table;
+------------+
| col        |
+------------+
| 1234567890 |
+------------+
1 row in set (0.00 sec)

Yup, by default MySQL just silently truncates your data. Ronald Bradford, a self-proclaimed MySQL Expert, sums it up nicely: "By default, MySQL does not enforce data integrity." That should set off alarm bells in your head if you are using or considering MySQL. The whole point of a database is to store valid data. The simple solution is to use a database that cares about your data, like Postgres, but if you must use MySQL you should set

SQL_MODE=STRICT_ALL_TABLES

For more on why this is necessary see Ronald Bradford's Why SQL_MODE is Important blog post.
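
With strict mode on, the INSERT that silently truncated above fails loudly instead. A quick illustration from Python (the connection details are placeholders, and this assumes the MySQLdb driver):

import MySQLdb

conn = MySQLdb.connect(user='user', passwd='secret', db='test')
cursor = conn.cursor()
cursor.execute("SET SESSION sql_mode = 'STRICT_ALL_TABLES'")
# now raises a DataError ("Data too long for column 'col'")
# instead of quietly storing '1234567890'
cursor.execute("insert into simple_table (col) values ('1234567890a')")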

PHP and MySQL are widely used. Maybe that is because their default settings are so lenient that they are easy for beginners to pick up. No one really cares if there was an error saving a hit on your personal homepage to your database. The problem is that these settings are not conducive to writing quality software. When starting from scratch it's better to choose technologies that have smarter defaults, like Python and PostgreSQL, because the libraries and software written using these technologies will properly fail instead of doing unexpected things and filling your database with garbage.

PS

You can (and should in most cases) also force hard failure for bash scripts by running set -e at the top of the script. See David Pashley's Writing Robust Shell Scripts for more.

Why cherry-picking should not be part of a normal git workflow

2011-10-20, by Dan Bravender <dan.bravender@gmail.com>

Cherry-picking Workflow

Changes are made in a maintenance branch off of the release where the bug was found. The commitid from this change is then cherry-picked into the current integration branch.

% git checkout -b maintenance-branch <release tag or commitid> # (if the maintenance branch doesn't yet exist)
% git checkout -t origin/maintenance-branch # (if the branch already exists)
% git commit -am "Made a bug fix" # note the commitid
% git push origin maintenance-branch
% git checkout integration-branch # (e.g. master)
% git cherry-pick <commitid>
# resolve conflicts
% git push origin integration-branch

Problems with the cherry-picking workflow

For example, even though a bugfix was cherry-picked into the integration branch, git cherry -v reports that the integration branch is missing that commit from the maintenance branch:

% git cherry -v maintenance-branch integration-branch
- 33de19776f4446d92b45e1fdfb2d9c37b3a867a7 Made a bug fix

Merge Workflow

Changes are made in a maintenance branch off of the release where the bug was found (same as in the cherry-picking workflow). The maintenance branch is then merged into the current integration branch.

% git checkout -b maintenance-branch <commitid> # (if the branch doesn't yet exist)
% git checkout -t origin/maintenance-branch # (if the branch already exists)
% git commit -am "Made a bug fix"
% git push origin maintenance-branch
% git checkout integration-branch # (e.g. master)
% git merge origin/maintenance-branch
# resolve conflicts
% git push origin integration-branch

Benefits

Confusingly enough, one of the most useful tools for checking that the state is correct in the merge workflow is named "cherry". It shows the commits that were made in one branch and not the other. It should show no missing changes when following the merge workflow, because all the changes from the maintenance branch should make their way into the integration branch:

% git cherry -v maintenance-branch integration-branch
[nothing]

Draws

dongsa.net Korean Verb Conjugation Android App 2.0

2011-07-07, by Dan Bravender <dan.bravender@gmail.com>

The iPhone port of dongsa.net has had a native interface for several months now thanks to the work of Max Christian. The native interface for Android has been ready for a while, but as I built it I added way too many new features. Instead of waiting until everything was fully polished I decided to strip back the new features and release an update to get the native UI out there. For those of you who were using the built-in Korean keyboard, you will need to download a new input method from the Market.

If you have an Android phone you can download the app directly or get it on the Android Market.

The same JavaScript conjugation engine is used for both the iPhone and Android; only the UI code has to be maintained separately. If you are curious about how this is done you can look through the source on GitHub.

One Way to Build a Federated Social Network Part 2

2011-06-03, by Dan Bravender <dan.bravender@gmail.com>

In my last post I wrote up a scheme to share structured information with friends that doesn't require a central service like Facebook or Twitter. If you didn't read that post this post will make very little sense to you. In this post I will explain how losing central control might not mean losing everything you are used to and I will revise a couple of the implementation details.

Some Questions

So, you might be thinking: if there is no central control, how can we find each other? Can we have a feature similar to Twitter's hashtags? Great questions. The answer is "definitely!" Websites today are completely federated, and there are already services that let you quickly find publicly shared information: search engines. The protocol will let you mark whatever information you want as public. If you don't want to be discovered you don't have to be. Search companies can write bots that crawl the nodes to gather up public information, just like they do with websites today. Private will be the default setting, so you will have to explicitly mark content as public. I think this is a huge step forward from centralized services because no middle man ever has to see your private information, but we still get the benefits of the centralized services.

What about backups? What if my hard disk fails? Will I lose all the photos and status updates I've posted over the years? Another great question. Let's add a feature so every post is digitally signed with your private key. In the event that something does happen, you can authenticate to your friends' nodes with your private key, request all the personal information that you have shared with them, and then verify the integrity of the information that you received. You will definitely want to back up your non-public assets; we could add another feature that lets you export encrypted copies of your content so you can back it up however you please.

Implementation Changes

Requiring a VPS or a host that is directly accessible via the internet is probably going to limit who can use a system like this. I'm starting to think that the client should run on your machine and connect with other clients via NAT hole punching. NAT punching is used to share information among peers on P2P networks, so it is perfectly suited for this project. There would need to be a service that connects clients in this scenario; perhaps the lookup could be based on some user UUID or public key signature. This is the point where, if you are paranoid, outsiders could potentially see who is connecting with whom, so there would also have to be a way for people to connect directly to one another and avoid the matching services entirely. I looked at a Google project called libjingle which implements a TCP-like protocol on top of UDP for telephony. Also, Skype's protocol was partially reverse engineered very recently. Some existing library will make this functionality possible.

Git is starting to look like a pretty bad choice for synchronizing data. Since you have to synchronize all or none of the data in a repository, it is impossible to share only some of the data with certain peers. I'm going to replace Git with a much simpler protocol that offers more flexibility. Since the system knows which friend is making a particular request, it can limit what data is shared with them based on your settings. The synchronization protocol will be very similar to the protocol that CouchDB uses to show what updates have been made to a database. This is what the CouchDB update feed spits out:

{"seq":12,"id":"couchid","changes":[{"rev":"1-beef2479643c2b380f99507a7767f3d5"}]}

Similarly, in the new synchronization protocol, after a client authenticates to another client with their key (all clients will run SSH servers), the requesting client makes an HTTP request for changes since its last successful synchronization. The response is a list of all the ids that have changed or been added and are visible to the peer making the request:

f572d396fae9206628714fb2ce00f72e94f2258f
7269918432597df3ec42b62acd81643d79134cf8
...
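
Client-side, fetching that list could be as simple as this sketch (the /changes endpoint and its since parameter are inventions for illustration; authentication is assumed to have already happened over SSH):

import urllib2

def fetch_changed_ids(peer_url, last_sync):
    # every id visible to us that changed since our last successful sync
    response = urllib2.urlopen('%s/changes?since=%s' % (peer_url, last_sync))
    return [line.strip() for line in response if line.strip()]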

I don't want to make too grand a statement about the importance of having a decentralized replacement for services like Facebook and Twitter. I will say that I think email would have failed spectacularly if it had been centralized instead of federated and my guess is that it will be better for everyone except the investors and owners of the centralized social networks if we move to more secure distributed systems.

One Way to Build a Federated Social Network

2011-05-31, by Dan Bravender <dan.bravender@gmail.com>

There are companies making millions of dollars off of your personal information in exchange for giving you a way to easily share data with your friends. Facebook, Twitter and all the rest of these networks are centralized services: you give them your data, they keep a copy, and hopefully they share it with only the people you told them to share it with. The funny thing is that for decades we have had email, a federated service that gives us a less structured way to share data with our friends. With email we could send pictures to our friends. With Facebook we get the power of crowd-sourcing: our friends can tag and comment on our pictures. Surely there must be a way to do this in a federated way without requiring that we hand our data over to a middle-man.

There have been attempts at building a Federated Social Network. Diaspora is one such attempt that drew a lot of early buzz and funding. When I saw it I thought "thank goodness someone is solving that problem". I must say that one year on it appears to me as though they are not addressing the real problem, and I was thoroughly disappointed with the result of their work: a Rails-based clone of Facebook. In my opinion what is needed here is a new federated protocol that can be easily extended with new content types and that protects access to data with private keys. On top of that, new clients (web, desktop, mobile, whatever) can be built.

The following is a brain dump of one way of doing this.

Every user would have their own node or share a node with a group of people that they trust on a server of their choice. A working title for this project could be "A League of Nodes" but hopefully we'll come up with something better than that.

Basic infrastructure

Very few systems are as efficient as Git when it comes to synchronizing data, so it will be employed for sending and receiving updates.

Data will be stored in files with UUID filenames, similar to the way that git stores its data in .git/objects, but we will store these objects in the working tree. The files will contain either JSON strings or binary data. The one required JSON field will be type. Creation date and author can be extracted from the Git logs.
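
A sketch of that storage layout (all names are illustrative):

import json
import uuid

def store(obj, content_dir='content'):
    assert 'type' in obj  # the one required field
    path = '%s/%s' % (content_dir, uuid.uuid4())
    with open(path, 'w') as f:
        json.dump(obj, f)
    return path  # commit the new file to git and it syncs out to friends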

A NoSQL document store such as CouchDB or MongoDB would be used to store the files and the JSON documents. At this point, if you are familiar with CouchDB and its awesome built-in synchronization capabilities, you might be questioning my sanity for implementing a new synchronization protocol. The problem with CouchDB's synchronization is that if we wanted to share with another user they would automatically get all of our friends' data as well. (There might be a way around this; please leave me a comment if you know of one.)

When an update is received from another user, it is merged into your database and the UUIDs in your database are updated with the latest content. To prevent tomfoolery, each UUID is prefixed with the unique UUID of the user who made the update, so people cannot clobber or update UUIDs in your database that don't belong to them.
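
The merge step might look like this sketch (pymongo-flavored; the prefixing is the scheme described above, everything else is illustrative):

def merge_update(db, sender_uuid, update):
    # namespace the incoming id by the sender's UUID so a peer can only
    # ever create or update documents that belong to them
    update['_id'] = '%s:%s' % (sender_uuid, update.pop('id'))
    db.content.save(update)  # upsert by _id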

A Twitter timeline or Facebook status listing is a single query:

> db.content.find({'type': 'update'}).sort({'date': -1})
{ "_id" : ObjectId("4de3d4a4475e87b4e7ce60d1"), "type" : "update", "user" : "Dan", "body" : "Dan welcomes everyone else", "date" : "Tue May 31 2011 02:32:20 GMT+0900 (KST)" }
{ "_id" : ObjectId("4de3d3f9668d1f97b29312ad"), "type" : "update", "user" : "jane", "body" : "Jane says: here I am", "date" : "Tue May 31 2011 02:29:29 GMT+0900 (KST)" }
{ "_id" : ObjectId("4de3d3db668d1f97b29312ac"), "type" : "update", "user" : "fred", "body" : "First post from Fred", "date" : "Tue May 31 2011 02:28:59 GMT+0900 (KST)" }

Your Facebook photo albums are a little more work on the client (styling and such) but not too much:

> db.content.find({'type': {'$in': ['photo', 'photo-tag', 'photo-comment']}}).sort({'date': -1})
{ "_id" : ObjectId("4de3d746475e87b4e7ce60d4"), "type" : "photo-tag", "user" : "Dan", "photo" : ObjectId("4de3d6f1475e87b4e7ce60d2"), "date" : "Tue May 31 2011 02:43:34 GMT+0900 (KST)", "x" : 20, "y" : 20, "body" : "There I am!" }
{ "_id" : ObjectId("4de3d721475e87b4e7ce60d3"), "type" : "photo-comment", "user" : "Dan", "photo" : ObjectId("4de3d6f1475e87b4e7ce60d2"), "date" : "Tue May 31 2011 02:42:57 GMT+0900 (KST)", "body" : "Nice photo if I do say so myself" }
{ "_id" : ObjectId("4de3d6f1475e87b4e7ce60d2"), "type" : "photo", "user" : "Dan", "photo" : "pointer to file in GridFS", "date" : "Tue May 31 2011 02:42:09 GMT+0900 (KST)" }

Another thing that is great about this system is that it can handle new content types that didn't need to be imagined when the system was created. In the same way that web browsers handled unknown tags during their Cambrian Explosion, unknown content types can either be ignored or a little blurb can be shown explaining that the client doesn't know how to handle them. Clients could even give users the option to view the raw JSON of an entry to see if there is any useful information therein.
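
In code, handling unknown types gracefully is just a dispatch with a fallback (a sketch; the renderer registry is invented):

def render(entry, renderers):
    # fall back to a blurb (or ignore) when the type is unknown
    return renderers.get(entry['type'], render_unknown)(entry)

def render_unknown(entry):
    return "This client doesn't know how to display '%s' content yet." % entry['type']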

Some problems that need addressing:

This is, of course, an explanation of the technical implementation of a truly federated social network. The actual implementation would need to be much more user friendly and hide these technical details from the user.

See part 2.

Korean Romanization

2011-02-15, by Dan Bravender <dan.bravender@gmail.com>

The state of Korean Romanization is a total disaster. After I learned Hahn.geul (the Korean script) I got rid of all of my textbooks that used Romanization because they were more confusing than they were helpful. Still, Korean needs a better Romanization system for foreigners visiting the country, and there is a way to Romanize that is much closer to the way words are actually pronounced in Korean.

Every book you pick up that has Romanized Korean in it seems to use a different system, and they are all terrible. One reason is that in many cases there is no direct mapping between Korean sounds and English sounds. Another is that many languages use the Latin alphabet, and they don't all pronounce every letter or diphthong in the same way. In 2000 the Korean government came up with yet another system (Revised Romanization) which, in my opinion, didn't do enough to fix the problems in the existing systems.

Here's one example for the Romanization of wall (벽):

Revised Romanization | McCune-Reischauer | Yale
byeok (byeog)        | pyŏk              | pyek

As you can see, in some systems the initial "ㅂ" is transliterated as a "b" and in some it is a "p". I'm not entirely sure that this is something that can be addressed in a Romanization system because the sound in Korean is between the "p" and "b" in English. One of the biggest problems with the older systems is the use of accents to denote different vowels. Surely there must be a way to write out the vowels so they can be read without having to look up how the accent transforms the vowel. That is one good thing about the new system: no accents.

I believe the "eo" comes from the French who gave us "Seoul". This always trips up my non-Korean-speaking friends. In my system I have taken the sound of the Korean vowels and changed them so that the sound of the vowel is unambiguous. In this case "eo" is more like "uh" and then "oo" smashed together. In my system "서울" is "Suh.ool". Periods are placed between consonants.

My wife attended "Soongsil" University. I believe it was Romanized this way because it was transliterated letter by letter. If you take each component of "숭실" and turn it into a list you get "ㅅㅜㅇㅅㅣㄹ". Transliterate that without context and you end up with something like "Soongsil". However, in Korean a "ㅅ" followed by certain vowels actually becomes "sh".

Another confusing bit of the existing transliteration is "ㅣ" to "i". In English an "i" without an "e" is usually short, like in "sit", but the actual sound is usually more like the "ee" in "eel". When you meet a Korean whose last name is Lee, their name is actually just "ee". There is no "l" sound at all at the beginning of their name (unless they are North Korean... but you probably won't have too many chances to meet many North Koreans). In the system I have created "숭실" becomes "Soong.sheel" because that's how it's actually pronounced.

The system I have come up with is not a direct transliteration. It first runs the Korean string through pronunciation rules and then transliterates the output of the pronunciation engine.
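
As a toy illustration of that two-pass shape (the single "rule" below exists only to make the 숭실 example work and is nothing like the real rule tables):

def apply_pronunciation_rules(hangul):
    # toy stand-in: the real engine applies many rules (e.g. 'ㅅ' before
    # certain vowels sounds like 'sh') to produce a phonetic form
    return {u'숭실': ['s', 'oo', 'ng', '.', 'sh', 'ee', 'l']}[hangul]

def romanize(hangul):
    # transliterate the *pronounced* form, not the original letters
    return ''.join(apply_pronunciation_rules(hangul)).capitalize()

print(romanize(u'숭실'))  # Soong.sheel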

You can try it out on dongsa.net, but I doubt that it will work in older browsers or Internet Explorer.


dongsa.net iOS App

2010-11-12, by Dan Bravender <dan.bravender@gmail.com>

Thanks to the work of Max Christian dongsa.net now has an iOS port which should work on your iPhone, iPad or iPod. You can download it from the iTunes App Store.

Thanks Max!

Here are some screenshots: