I’ve been mostly working on two things lately – finishing my next game, and learning the Chinese language. While the former might sound more computation-related, there are some interesting things to note about the latter.
The main challenge in learning Chinese is, of course, mastering the huge number of characters. Having a long-standing interest in memory improvement techniques, I decided to try using those to help. Basically, I’m using a home-made variant of the peg system, where each symbol that can appear in a Chinese character gets a peg. So far it seems to be working fairly well – I’m pretty sure my vocabulary has been increasing quite rapidly since I started.
And this makes me wonder about its possibilities. How long should it take to memorize, say, 2,000 characters and their meanings? Using ordinary methods, the answer seems to be a long, long time. But if we look at it from a pure, information-theory point of view, is that justified? It took me about a week of somewhat-intensive reading to finish “Moby Dick”. How much information did I get from it, compared to the amount of information in a Chinese dictionary? And what about a 90 minute movie? Does the fact “Indiana Jones is an archaeologist who wears a hat” contain less information than “白 represents the color white”? My guess is no, and that means that watching such a movie gives me the equivalent of a huge boost to my vocabulary – but spent on less useful information. If this is the case, all that remains is to find a proper translation between the information I want to get (i.e. the meaning of, say, 2,000 Chinese characters) and information I can easily remember. This is exactly what the peg system does, but maybe it’s not enough. Could there be a stronger system? Maybe the science fiction idea that we could “upload” a large body of information immediately to our brains is not that far from being possible – it might not take a few seconds like some cybernetic implant from a movie, but if we could maintain our maximum rate of information absorption and limit it to the information we want, I’m not sure learning an entire language’s vocabulary in a month is that far-fetched.
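To get a feel for the numbers, here is a back-of-envelope sketch in Python. Every figure in it is a rough assumption of mine (the sizes of the symbol and meaning spaces, the bits-per-word rate for prose, the reading speed), not a measurement – but even under generous assumptions, the raw information content of a 2,000-character vocabulary turns out to be tiny compared to a week of reading:

```python
import math

# Assumption: a learner distinguishes among ~10,000 plausible symbols and
# ~10,000 plausible one-line meanings, so one character-meaning pair
# carries roughly log2(10,000) + log2(10,000) bits.
bits_per_pair = math.log2(10_000) + math.log2(10_000)
vocab_bits = 2_000 * bits_per_pair  # about 53,000 bits in total

# Assumption: prose conveys on the order of 10 bits per word (a common
# rough figure for text entropy), read at 250 words per minute.
bits_per_minute = 10 * 250
hours_equivalent = vocab_bits / bits_per_minute / 60

print(f"{vocab_bits:.0f} bits, equivalent to about "
      f"{hours_equivalent:.1f} hours of ordinary reading")
```

Under these (debatable) assumptions, the whole vocabulary amounts to well under an hour of reading-rate information intake – which is exactly why the bottleneck looks like encoding, not capacity.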
As far as I know, there’s not much research into this. There are world memory competitions, but all I could find about them were repeated discussions of the same few known methods. What I haven’t seen so far is an examination from a computer science perspective – which is odd, because I think memory technique is obviously a branch of computer science: it has the same need for data structures and algorithms, and the only difference is that instead of running the algorithms on electrical microchips, we run them in our brains. The link system and the peg system are the same linked list and array we know from our computers. What we need is to write algorithms better suited to our brain’s hardware – dealing less with performance speed and more with reliability: a C array isn’t likely to suddenly forget what was in cell 2, but for our brain’s array this is the main concern.
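The peg-system-as-data-structure view can be sketched in a few lines of Python. Everything here is my own illustration, not the author’s actual system: the peg images are invented, and the character decomposition is purely illustrative rather than a proper etymological one. The structure is what matters – a map from component symbols to fixed vivid images, and a function that composes a character’s mnemonic from its components:

```python
# Map: component symbol -> a fixed, vivid peg image (invented examples).
pegs = {
    "白": "a white flag",
    "勹": "a wrapping cloth",
    "日": "the sun",
}

def mnemonic(components, meaning):
    """Compose a story linking the components' peg images to the meaning."""
    images = [pegs[c] for c in components]
    return f"Picture {' and '.join(images)} to recall: {meaning}"

# Illustrative only -- not a claim about the real decomposition of 的:
print(mnemonic(["白", "勹"], "possessive particle 的"))
```

Note how the “reliability” concern shows up here: the `pegs` map is the part we trust (a small, heavily rehearsed set), and each new character costs only one composed story over already-reliable entries, rather than one brand-new arbitrary fact.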
Hopefully I’ll find some interesting results.
 As far as I know, that is – my search still being in its early stages.
 My new favourite word: 伥 (chāng), which my dictionary defines as “Ghost of somebody devoured by a tiger who helps the tiger devour others”.
 Of course, there are several ways to define “learning” a vocabulary. Being fluent in a language includes a wide array of vaguely defined associations for each word, which such learning will not get you. But a “robotic” dictionary translation available in your brain for each word is a huge step forward.
 Interestingly, while computers generally use arrays and linked lists as the basic, “native” data structures, and implement other data structures with them, the peg system uses a map to implement the array, implying that our basic data structures in brain algorithms are the linked list and the map.