Category Archives: computational linguistics

I want ONE MEELYUN sentences

I’ve been planning to do some work on my term extractor to make it a bit smarter. It’s currently a rule-based system on top of various machine learning tools. This is perfectly legitimate, but it’s starting to hit the limitations of that approach. I’d like to experiment with a more intelligent approach, using machine learning more directly.

To do this, though, I need a training set. My plan is to build a first pass using the existing version on some sentence corpus and then edit that to taste.

Of course, to do this I need a decent sentence corpus. So today I set out to generate one. It was a lot fiddlier than it should have been, but I think in the end I’ve got a decent one.

I’m presumably not the only person to need something like this, so I’m making a largish sample of it available. It’s not hard to generate yourself but it’s something of a pain, so maybe I can save you some effort.

So, here you go. A bzipped list of one million random sentences from Wikipedia.

The format is obvious: Plain text, one sentence per line.

I make no guarantees about the quality of the data (there’s definitely some noise), and I definitely don’t claim this to be a statistically fair sample of Wikipedia. But initial impressions are that it’s a reasonably good list. Certainly it should be good enough for my purposes.

Command line tools for NLP and Machine Learning

I’m a huge fan of command line tools.

I may be 40 years late to the party, but over the last couple of months I’ve been increasingly finding that The Unix Way (described by a friend of mine as “‘loosely coupled’, at least, in the sense that all IPC took the form of text files with ad-hoc formats piped through shell scripts”) is a marvelous way to work.

NLP, machine learning and related tasks map very well onto this, I find. They’re very often directly concerned with the manipulation of text and, where they’re not, the data can usually be expressed in quite simple formats (of course, you’ll often need a big binary blob for model files and the like, but that’s OK). So I’d like to see more tools like this available. Here are some I’m familiar with:

dbacl: dbacl is a command line text classifier. It uses bigrams for features and, as far as I can tell (I’ve only skimmed the source), builds a maximum entropy model for classification. I’ve only played with it a little bit, but my impressions so far are that it’s easy to use, fast and produces high quality results.
MCL: MCL is a fast unsupervised clustering algorithm for weighted graphs, and this is a command line tool produced by its originator. It appears to be a very solid tool, and the results are always interesting (the larger clusters it produces are often a bit strange, but there’s a lot of interesting information at the small to medium range). I’ve cheerfully fed several hundred MB of graph data through it and had it produce a good result (it took a few minutes to do so, but it worked).
Hunpos: Hunpos is a part of speech tagger. We’ve used it successfully in production (though the latest versions of SONAR no longer use it, having switched to OpenNLP’s tagger) and found it to be pretty fast and to produce decent results.
binary-pearsons: My only contribution to the list so far. It reads a sequence of labelled events (one per line) and outputs the Pearson correlation between the labels as a measure of their similarity (there’s a rough sketch of the idea just after this list). I’ve not yet got it to a point where I want to release a version, but I’ve already found it very useful (we’re using it in SONAR to dramatically speed up calculations from our previous version, which is where it comes from).
SRILM: The SRI Language Modelling toolkit seems to be primarily a library for language modelling, but it exposes a lot of its functionality through a collection of command line tools. I’ve not used it, but it seems to offer a bunch of potentially quite useful functionality. (Thanks to Aaron Harnly for the recommendation.)
OpenFST: OpenFST is a C++ class library for creating and using finite state transducers which also exposes all its functionality as a collection of shell tools. (Thanks to cypherx on reddit for the mention.)
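
Since I’ve not released binary-pearsons yet, here’s a rough sketch of the sort of thing it does. To be clear, this is a simplified reconstruction of the idea rather than the actual implementation, and the input format (“event label” pairs, one per line) is an assumption for the sake of the example:

# Sketch: for each pair of labels, print the Pearson correlation (the phi
# coefficient) of their indicator vectors over events.
# Input: lines of the form "event label".
occurs = Hash.new { |h, k| h[k] = {} }

STDIN.each_line do |line|
  event, label = line.split
  occurs[label][event] = true if label
end

# Total number of distinct events observed.
n = occurs.values.map { |h| h.keys }.flatten.uniq.length.to_f

occurs.keys.combination(2).each do |a, b|
  na, nb = occurs[a].length, occurs[b].length
  nab = (occurs[a].keys & occurs[b].keys).length
  denominator = Math.sqrt(na * (n - na) * nb * (n - nb))
  next if denominator == 0
  puts "#{a}\t#{b}\t#{(n * nab - na * nb) / denominator}"
end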

That’s all I can think of at the moment, though I swear I’ve encountered a couple more which I’ve found useful in the past. What do you use?

Cleaning up a set of tags, part 1

In order to demonstrate some stuff I wanted to have a set of tagged data to play with. Delicious, Flickr, that sort of thing. After some digging around on places like theinfo.org I found out that CiteULike (like Delicious, but targeted at academic papers) makes a dump of their data available. Unfortunately, it’s a bit messy. Not the data format itself, which is a simple pipe-separated text file, but the quality of the tags themselves. This is more or less to be expected for user-reported tagging. It would be nice to have something a bit cleaner though.

I thought it might be illustrative to clean it up in public. It’s a completely hacky process, and not a particularly smart one, but it might be interesting or helpful to someone.

So, first things first. Get the data: http://static.citeulike.org/data/current.bz2. It’s zipped, so not too large, but will be about 300M unzipped.

It looks roughly like this:

42|61baaeba8de136d9c1aa9c18ec3860e8|2004-11-04 02:25:05.373798+00|ecoli
42|61baaeba8de136d9c1aa9c18ec3860e8|2004-11-04 02:25:05.373798+00|metabolism
42|61baaeba8de136d9c1aa9c18ec3860e8|2004-11-04 02:25:05.373798+00|barabasi
42|61baaeba8de136d9c1aa9c18ec3860e8|2004-11-04 02:25:05.373798+00|networks
43|61baaeba8de136d9c1aa9c18ec3860e8|2004-11-04 02:25:51.839281+00|control
43|61baaeba8de136d9c1aa9c18ec3860e8|2004-11-04 02:25:51.839281+00|engineering
43|61baaeba8de136d9c1aa9c18ec3860e8|2004-11-04 02:25:51.839281+00|robustness
44|61baaeba8de136d9c1aa9c18ec3860e8|2004-11-04 02:26:33.156319+00|networks
44|61baaeba8de136d9c1aa9c18ec3860e8|2004-11-04 02:26:33.156319+00|strogatz
44|61baaeba8de136d9c1aa9c18ec3860e8|2004-11-04 02:26:33.156319+00|survey
44|61baaeba8de136d9c1aa9c18ec3860e8|2004-11-04 02:26:33.156319+00|review
45|61baaeba8de136d9c1aa9c18ec3860e8|2004-11-04 02:27:38.983179+00|pleiotropy
45|61baaeba8de136d9c1aa9c18ec3860e8|2004-11-04 02:27:38.983179+00|barabasi
45|61baaeba8de136d9c1aa9c18ec3860e8|2004-11-04 02:27:38.983179+00|notsmall

A couple of things to note:

As mentioned, it’s pipe-separated. We have a document id, an anonymized user id, a date and a tag. There can be multiple tags for the same (document, user) pair.

Another thing to note is that sometimes it contains concatenatedwords, e.g. “notsmall” (a tag which, for some reason, seems to appear only once, on “Wrestling with pleiotropy: genomic and topological analysis of the yeast gene expression network”. I think it’s a mistag and was meant to go on “The metabolic world of Escherichia coli is not small”).

Let’s get a sense of what the tags look like. We’ll build a frequency distribution like so:

 cat Desktop/current | ruby -ne 'puts $_.split("|")[-1]' | sort | uniq -c | sort -g -r > citeulike_tags

What’s this doing? Not much. We’re catting the data to standard out, feeding it through ruby to split off the last column, and sorting the results, giving us a big list of tags with repetitions. We then pipe this through uniq -c, which collapses consecutive identical lines into one, with a count prepended (that’s what the -c does). We then sort again in general numeric order, reversed, and save the output to a file.

Unix is fun.

Here’s what the results look like:

david@percy:~$ head citeulike_tags
 212611 bibtex-import
 156901 no-tag
  27926 elegans
  27886 celegans
  27825 c_elegans
  27795 nematode
  27738 wormbase
  27736 caenorhabditis_elegans
  18897 review
  15280 all-articles
david@percy:~$ tail citeulike_tags
      1 00301512
      1 0025
      1 00208
      1 001287275ep1114608epa00128727epb00128727
      1 0010521342
      1 0009811908
      1 0009390946
      1 000
      1 -------------
      1 ___

So there’s clearly a bunch of random noise there. The top two are unimportant – they’re some sort of autogenerated thing – and the bottom lot are garbage. So, we’ll throw away the top two and everything with only 1 occurrence.

david@percy:~$ vim citeulike_tags
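
(If you’d rather not hand-edit the file in vim, a one-liner along these lines would do the same thing. Note that it writes to a new file, and the two blacklisted tag names are just the ones we saw at the top above.)

 cat citeulike_tags | ruby -ne 'c, t = $_.split(" ", 2); puts $_ if c.to_i > 1 && !["bibtex-import", "no-tag"].include?(t.to_s.strip)' > citeulike_tags_cleaned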

Let’s look at the data again.

david@percy:~$ head citeulike_tags
27926 elegans
27886 celegans
27825 c_elegans
27795 nematode
27738 wormbase
27736 caenorhabditis_elegans
18897 review
15280 all-articles
14597 evolution
13694 meeting_abstract
david@percy:~$ tail citeulike_tags
2 035
2 0325
2 0315
2 021
2 01012
2 004692
2 003
2 0022
2 ----
2 __

The top is looking better. I’m a bit skeptical of “all-articles”. Looking at a few examples it seems to be something generated along with bibtex-import. But we’ll leave that for now.

We’ve got a bunch of duplication there. “celegans”, “c_elegans”, “caenorhabditis_elegans”. CiteULike seems to have a nematode obsession. Not much we can do about that right now though.

The bottom half is the more serious issue. It consists entirely of numbers, which is rubbish. So let’s filter out anything that doesn’t contain some text:

david@percy:~$ mv citeulike_tags citeulike_tags_old
david@percy:~$ cat citeulike_tags_old | ruby -ne 'puts $_ if !($_ =~ /^[^a-z]+$/)' > citeulike_tags
david@percy:~$ tail citeulike_tags
2 0graphs
2 0ds-flicker
2 0-doek
2 0compas
2 0cath
2 05-wise-00-01
2 05matheco30_theoretic
2 05matheco25_theoretic
2 04-sose-00
2 041500040u1

Hm. Those are still pretty shitty. At this point I’m tempted to filter out everything which appears only twice. But after a quick trawl through with less I shall resist the temptation, as one finds things like this:

2 accessibility-technology-for-the-deaf
2 accessibilitystandards
2 accesory
2 acceptorphysiology
2 acceptormetabolismphysiology
2 acceptorgeneticsmetabolism
2 acceptability-judgments
2 accents
2 accent_learning
2 accentedness
2 accelerator-physics
2 acceleration-measurement
2 accelerationadverse
2 accelerating-admixture
2 accelerated-combinatorial-synthesis

It would be a shame to lose those if we can avoid it.

At this point I noticed something annoying: There’s absolutely no consistency in how people space things in these tags. Even ignoring the people who concatenatewords, some people use _, some use -, some even have things like “academic-libraries----collection-development”, which is just aggravating.

Let’s try to get some consistency out of this.

At this point I break out of the console and move to irb for some more interactive hacking. Unfortunately trying to record this turned out to be a pain due to IRB’s tendency to print the whole giant hash, so here it is as a ruby script. It looks for all tags which differ only in the presence and type of _s and -s and conflates them. In each case it takes the most commonly occurring variant as canonical, gives it the sum of all occurrences of the equivalent tags, and then normalises the separator to an underscore.
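
In outline, it’s something like this (a simplified sketch rather than the exact code):

# group_duplicates.rb (sketch): conflate tags that differ only in their
# use of - and _. Input: "count tag" lines; output: the same, with the
# most frequent variant kept (separators normalised to _) and counts summed.
groups = Hash.new { |h, k| h[k] = [] }

STDIN.each_line do |line|
  count, tag = line.strip.split(" ", 2)
  next unless tag
  # The grouping key ignores separators entirely.
  groups[tag.gsub(/[-_]+/, "")] << [count.to_i, tag]
end

results = groups.values.map do |variants|
  total = variants.inject(0) { |sum, (c, _)| sum + c }
  canonical = variants.max_by { |c, _| c }[1]
  [total, canonical.gsub(/[-_]+/, "_")]
end

results.sort_by { |c, _| -c }.each { |c, t| puts "#{c} #{t}" }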

I ran this as

david@percy:~$ cat citeulike_tags | ruby group_duplicates.rb > normalized_tags

Now, we’ve still got quite a few tags left:

david@percy:~$ wc -l normalized_tags
118785 normalized_tags

Let’s see if we can figure out some good ways of reducing this (or at least cutting out noise).

I dug around in it for a bit and noticed that there were a lot of tags of the form “file-import-something”. Not that many (291), but it’s a start. We’ll probably continue blacklisting things as we find them.

Here’s an example of where we’ve got redundant tags: genomic and genomics, statistic and statistics, population and populations. In other words, plurals. Let’s fix that.

For this step we’ll use a stemmer. Snowball has a decent binding for ruby, so we’ll use that. We’ll consider two tags to be equivalent if their parts stem to the same words.

We’re going to repurpose the spacing script above. Software reuse by cut and paste, yay. :-) I’ll need to do this all properly later, so I’ll clean it up then, but for now we’re just experimenting with data. So here’s a rewrite to identify things by stemming.
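
In outline it looks something like this (again a simplified sketch; it assumes the ruby-stemmer gem, a Snowball binding, and its Lingua::Stemmer API):

# stem_duplicates.rb (sketch): conflate tags whose _-separated parts
# stem to the same words, keeping the most frequent variant.
require "rubygems"
require "lingua/stemmer"   # the ruby-stemmer gem (Snowball binding)

stemmer = Lingua::Stemmer.new(:language => "en")
groups = Hash.new { |h, k| h[k] = [] }

STDIN.each_line do |line|
  count, tag = line.strip.split(" ", 2)
  next unless tag
  # Separators were normalised to _ in the previous step.
  key = tag.split("_").map { |part| stemmer.stem(part) }.join("_")
  groups[key] << [count.to_i, tag]
end

groups.each_value do |variants|
  total = variants.inject(0) { |sum, (c, _)| sum + c }
  puts "#{total} #{variants.max_by { |c, _| c }[1]}"
end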

At this point we’re down to 106229 tags, from 118494 prior to stemming and 272919 originally. Not doing badly, given that most of what we’ve thrown away is junk or redundant information. If we threw out the tags which only appear twice we’d lose another 30352 tags (cat normalized_tags_2 | grep "    2 " | wc -l). I was resisting doing that because some of them are quite good quality, but really I don’t think we have enough information to clean the remainder up.

We’re nearly at the point where we’ve run out of what we can do with frequency and word-based information – there’s still plenty more we could do in principle, but we’ll hit the point of diminishing returns pretty rapidly from here on out. One thing I have noticed though is that there are a bunch of tags like “of” and “and”. At this point I shall pull out that favourite hacky linguistic hammer: the stopword list.

The one I tend to use was compiled for the SMART system by Chris Buckley and Gerard Salton at Cornell University. You can download a copy here. All we’ll use it for is to filter out any tags which happen to be stopwords; here’s an obvious script to do that.
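
Something like this (a sketch; I’m assuming the stopword list has been saved locally as stopwords.txt, one word per line):

# remove_stopwords.rb (sketch): drop any tag which is itself a stopword.
stopwords = {}
File.foreach("stopwords.txt") { |w| stopwords[w.strip] = true }

STDIN.each_line do |line|
  count, tag = line.strip.split(" ", 2)
  next unless tag
  puts line unless stopwords[tag]
end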

This loses us 46 tags. Doesn’t sound like many, but they were mostly fairly high frequency ones, so it’s a nice win.

At this point I declare this part of the work done. We’ve compiled a good list of tags. And although I’ve not actually written the code to do so, because each tag was arrived at by grouping a bunch of existing tags it’s easy to see how you could figure out which documents are tagged with which of the new tags (if you can’t see it, don’t worry: I’ll be tidying this up and using it to generate a tag list next time). To go further we’ll need to actually look at sets of tagged documents. And, to be honest, I don’t know how much further that will take us. It may be that there’s not much more to add. But I’m hopeful.

If you want to have a play with the data, here’s the end result.

Computational linguistics and Me

Apparently I’m a computational linguistics blogger. This is sort of news to me. The closest I’ve come to blogging about computational linguistics is in writing a borderline rant about academia.

That being said, I do work in computational linguistics: SONAR is basically a great big NLP system.

This fact, however, is almost totally unrepresented in my blogging.

Actually, that’s part of why I’ve been blogging so much less recently. Since moving onto SONAR my brain has been afire with newly acquired knowledge and trying to figure out how best to apply it to work problems. This has left relatively little time for most of the other stuff I think about that normally generates blogging.

Of course the obvious solution is that I should be blogging about computational linguistics. But that has some obstacles. Primarily:

Confidentiality

All the computational linguistics stuff I do is for work. I tinker around with it at home, but haven’t really done anything useful. This makes it difficult to know what I can blog about: I certainly can’t go “HEY GUYS. I FIGURED OUT THIS AWESOME ALGORITHM WHICH WE’RE USING IN SONAR” for everything. We rather rely on some of that magic to make us money. :-)

That being said, there’s definitely stuff I can blog about. e.g. there’s nothing particularly confidential in how we extract likely candidate phrases from a document, and it’s at least mildly interesting (probably more to non-linguists, but who knows?). In fact, we’re actually all encouraged to blog more about what we do but never find the time. So, really, work isn’t that much of an obstacle to blogging about this. It just requires a bit of careful thought.

Experience

I’m very new to computational linguistics. As such, I’ve a much less clear idea of what’s bloggable in it. If we look at my blogging history, I started blogging about programming in February 2007. That’s just shy of a year after I started working as a programmer (which, effectively, is just shy of a year after I started programming anything in earnest). And I think it took another six months of blogging before I actually wrote anything worth reading. In comparison, I’ve not even worked in computational linguistics for six months (I think I started work on SONAR in September, and had no exposure to it before that). So I’m very much still sort of fumbling along, trying to figure out the best way to do things.

From a work point of view that’s fine. Actually some of my best work is done when I don’t know what I’m doing: I’m more able to ask stupid questions and get useful answers and I come at things from a sufficiently different angle to normal that sometimes I produce unexpected results.

But from a blogging point of view it’s pretty likely that what I end up writing about will range from the trivial to the wrong, until I find my feet. Some of it might be of interest to non-linguists but too basic to be of interest to linguists. Some of it might be so esoteric that it would only be of interest to linguists, at least it would if they weren’t so easily able to point out why it’s wrong. Some of it might be of interest only to me.

But actually this is a really piss-poor excuse not to blog about it. Because, frankly, I do not write to amuse you. Writing for other people is, to me, a waste of time. I write about what is of interest to me. With any luck other people will find it interesting too, but that isn’t the primary point.

So…

In conclusion, my two main reasons for not blogging more about computational linguistics, natural language processing, etc. suck. So expect to see more about them here in the future. This probably means you’ll see more Ruby as well, as that’s what we use at work and I don’t expect I’ll bother translating into Scala except when I have a specific reason to do so.