This is just a quick announcement to let people know that we’ve open-sourced our JRuby library for term extraction. You can get the code from my GitHub page.
Unlike a lot of term extraction libraries, this one takes no stance on the “significance” of the terms it extracts. It looks purely at the syntax to determine where the good term boundaries are. There are a couple of reasons for this, but basically we’ve found that separating the two concerns is more effective and makes it easier to tinker with each independently. The criteria for the “interestingness” of a term seem to be largely distinct from those for a term that simply makes sense linguistically. So we have a two-stage pipeline: one stage extracts semantically meaningful terms, and the other determines which of those terms are actually interesting in the context of the document. The second stage is much more complicated, and we’re not open sourcing it (yet? Probably not any time soon, if ever. Even if we wanted to, it relies on much more global information across the document corpus, so it’s very tied in with how SONAR operates, which makes it much harder to isolate).
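To make the two-stage split concrete, here’s a minimal Ruby sketch. The method names and the trivial extraction/scoring rules are all hypothetical, purely for illustration — the real pipeline uses proper linguistic chunking for stage one, and corpus-wide statistics inside SONAR for stage two:

```ruby
# Hypothetical sketch of the two-stage pipeline: stage 1 extracts
# linguistically plausible terms; stage 2 decides which are interesting
# in the context of the document.

# Stage 1: purely syntactic. Stubbed here with a trivial word scanner;
# the real library uses POS tagging and phrase chunking instead.
def extract_terms(text)
  text.downcase.scan(/\b[a-z]{4,}\b/).uniq
end

# Stage 2: interestingness. Toy criterion: keep terms that occur more
# than once. The real version relies on global corpus information.
def interesting_terms(terms, document)
  text = document.downcase
  terms.select { |t| text.scan(t).length > 1 }
end
```

Keeping the stages behind separate interfaces like this is what lets each be tuned (or replaced) independently.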
So, how does it work? Black magic and voodoo!
Actually, no. It’s pretty straightforward. It builds on top of the excellent OpenNLP library, using its tools for part-of-speech tagging, sentence splitting (a much harder problem than you’d imagine) and phrase chunking. On top of that sits a rules-based system: while you’re still figuring things out, it makes much more sense to stick with something that’s easy to fine-tune. Our expectation is that we’ll gradually replace bits of it with machine-learning techniques as we start to hit the limitations of a rules-based approach, but for now it’s working pretty well.
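As a rough illustration of what a rule on top of POS tags looks like, here’s a self-contained Ruby sketch. The method name and the specific rule (a maximal run of adjectives and nouns, trimmed to end in a noun) are my own simplification, not the library’s actual rules, and the input is hand-tagged since in the real system OpenNLP supplies the tags:

```ruby
# Sketch of a rules-based extractor over POS-tagged tokens (hand-tagged
# here; OpenNLP would normally supply the tags). Rule: a candidate term
# is a maximal run of adjectives/nouns, trimmed so it ends in a noun.

TERM_TAGS = %w[JJ NN NNS NNP].freeze  # adjective + common/proper noun tags

def extract_candidate_terms(tagged)   # tagged: [[word, tag], ...]
  runs, current = [], []
  tagged.each do |word, tag|
    if TERM_TAGS.include?(tag)
      current << [word, tag]
    else
      runs << current unless current.empty?
      current = []
    end
  end
  runs << current unless current.empty?

  runs.map { |run|
    run = run.dup
    run.pop while run.any? && run.last[1] == "JJ"  # terms must end in a noun
    run.map(&:first).join(" ") unless run.empty?
  }.compact
end
```

For example, feeding it the tagged tokens of “the excellent OpenNLP library handles sentence splitting” yields the terms “excellent OpenNLP library” and “sentence splitting”. The appeal of this style is exactly what’s described above: each rule is a few lines you can read and tweak directly.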
Let’s have an example. If we feed the second paragraph of this post into the term extractor, we get the following terms back:
term extraction libraries
stance
terms
syntax
good boundaries
couple reasons
two steps
steps
criteria
interestingness
sense
two stage pipeline
stage pipeline
semantically meaningful terms
context
context of the document
document
second step
open sourcing
time
document corpus
SONAR
Hope you find this useful. Let us know if you build anything cool with it!