Implementation of a finite state automaton for high-performance large-scale natural language tokenization, based on a finite state transducer generated with Foma.
The repository currently contains precompiled tokenizer models for German and English.
The focus of development is on the tokenization of DeReKo, the German Reference Corpus.
Datok can be used as a standalone tool or as a library in Go.
[Figure: Speed comparison of different tokenizers and sentence splitters for German. "Effi" refers to tokenizing and/or sentence splitting of one issue of Effi Briest. Datok is optimized for large batch sizes, while other tools may perform better in other scenarios. For further benchmarks, especially regarding the quality of tokenization, see Diewald/Kupietz/Lüngen (2022).]
```
Usage: datok tokenize --tokenizer=STRING <input>

Arguments:
  <input>    Input file to tokenize (use - for STDIN)

Flags:
  -h, --help                  Show context-sensitive help.
  -t, --tokenizer=STRING      The Matrix or Double Array Tokenizer file
      --[no-]tokens           Print token surfaces (defaults to true)
      --[no-]sentences        Print sentence boundaries (defaults to true)
  -p, --token-positions       Print token offsets (defaults to false)
      --sentence-positions    Print sentence offsets (defaults to false)
      --newline-after-eot     Ignore newline after EOT (defaults to false)
```
The special END OF TRANSMISSION character (`\x04`) can be used to mark the end of a text.
Caution: When experimenting with STDIN and echo, you may need to disable history expansion.
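For example, to tokenize a short string from STDIN (assuming the precompiled German model `tokenizer_de.matok` is available in the working directory; adjust the path to your model):

```
$ echo 'Das ist interessant.' | datok tokenize -t tokenizer_de.matok -
```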
```
Usage: datok convert --foma=STRING --tokenizer=STRING

Flags:
  -h, --help                Show context-sensitive help.
  -i, --foma=STRING         The Foma FST file
  -o, --tokenizer=STRING    The Tokenizer file
  -d, --double-array        Convert to Double Array instead of Matrix representation
```
```go
package main

import (
	"os"
	"strings"

	"github.com/KorAP/datok"
)

func main() {
	// Load transducer binary
	dat := datok.LoadTokenizerFile("tokenizer_de.matok")
	if dat == nil {
		panic("Can't load tokenizer")
	}

	// Create a new TokenWriter object
	tw := datok.NewTokenWriter(os.Stdout, datok.TOKENS|datok.SENTENCES)
	defer tw.Flush()

	// Create an io.Reader object referring to the data to tokenize
	r := strings.NewReader("Das ist <em>interessant</em>!")

	// TransduceTokenWriter accepts an io.Reader and a
	// TokenWriter object to transduce the input
	dat.TransduceTokenWriter(r, tw)
}
```
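To post-process tokens programmatically, the writer can also target an in-memory buffer instead of `os.Stdout`. The following sketch collects token surfaces into a slice; it assumes the `TOKENS` flag can be used without `SENTENCES` and that the writer emits one token surface per line:

```go
package main

import (
	"bytes"
	"fmt"
	"strings"

	"github.com/KorAP/datok"
)

func main() {
	dat := datok.LoadTokenizerFile("tokenizer_de.matok")
	if dat == nil {
		panic("Can't load tokenizer")
	}

	// Write token surfaces only into an in-memory buffer
	var buf bytes.Buffer
	tw := datok.NewTokenWriter(&buf, datok.TOKENS)

	dat.TransduceTokenWriter(strings.NewReader("Das ist interessant!"), tw)
	tw.Flush()

	// Assuming one token surface per line, collect the
	// non-empty lines in a slice
	var tokens []string
	for _, line := range strings.Split(buf.String(), "\n") {
		if line != "" {
			tokens = append(tokens, line)
		}
	}
	fmt.Println(tokens)
}
```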
The FST generated by Foma must adhere to the following rules to be convertible by Datok:

- Multi-character symbols are not allowed, except for the `@_TOKEN_BOUND_@`, which denotes the end of a token.
- ε transitions (not consuming any character) need to be translated to the `@_TOKEN_BOUND_@`.
- Two consecutive `@_TOKEN_BOUND_@`s mark a sentence end.
- Final states are ignored; the `@_TOKEN_BOUND_@` marks the end of a token instead.

A minimal usable tokenizer written in XFST, following the guidelines for tokenizers in Beesley and Karttunen (2003) and Beesley (2004), would look like this:
```
define TB "@_TOKEN_BOUND_@";

define WS [" "|"\u000a"|"\u0009"];
define PUNCT ["."|"?"|"!"];
define Char \[WS|PUNCT];
define Word Char+;

! Compose token bounds
define Tokenizer [[Word|PUNCT] @-> ... TB] .o.
    ! Compose Whitespace ignorance
    [WS+ @-> 0] .o.
    ! Compose sentence ends
    [[PUNCT+] @-> ... TB \/ TB _ ];

read regex Tokenizer;
```
Hint: For development in Foma it's easier to replace `@_TOKEN_BOUND_@` with a newline symbol.
To build the tokenizer tool, run

```
$ make build
```
To create a foma file from the example sources, first install Foma, then run in the root directory of this repository

```
$ cd src && \
  foma -e "source de/tokenizer.xfst" \
       -e "save stack ../mytokenizer.fst" -q -s && \
  cd ..
```
This will load and compile the German `tokenizer.xfst` and save the compiled FST as `mytokenizer.fst` in the root directory.
To generate a Datok FSA (matrix representation) based on this FST, run

```
$ datok convert -i mytokenizer.fst -o mytokenizer.datok
```
To generate a Datok FSA (double array representation) based on this FST, run

```
$ datok convert -i mytokenizer.fst -o mytokenizer.datok -d
```
The final datok file can then be used as a model for the tokenizer.
Internally the FSA is represented either as a matrix or as a double array.
Both representations mark all non-word-character targets with a leading bit. All ε (i.e. token-end) transitions mark the end of a token; two subsequent ε transitions mark the end of a sentence. The transduction is greedy, with a single backtracking option to the last ε transition.
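The following toy sketch in Go illustrates this greedy strategy with single backtracking. It is a conceptual illustration only: the map-based automaton, the restart at the initial state, and all names are simplifications, not Datok's actual matrix or double-array code.

```go
package main

import "fmt"

// Toy automaton: delta holds character transitions, tokenEnd
// marks states that carry an ε (token-end) transition.
type fsa struct {
	delta    map[int]map[rune]int
	tokenEnd map[int]bool
}

func (a *fsa) tokenize(input []rune) []string {
	var tokens []string
	state, start, lastEnd := 0, 0, -1

	for pos := 0; pos <= len(input); {
		if a.tokenEnd[state] {
			lastEnd = pos // remember the last ε transition
		}
		if pos < len(input) {
			if next, ok := a.delta[state][input[pos]]; ok {
				state, pos = next, pos+1 // greedy: consume as much as possible
				continue
			}
		}
		// Dead end or end of input: backtrack to the last ε
		// transition and emit the token read up to that point.
		if lastEnd <= start {
			return tokens // no progress possible
		}
		tokens = append(tokens, string(input[start:lastEnd]))
		if lastEnd == len(input) {
			break
		}
		state, start, pos, lastEnd = 0, lastEnd, lastEnd, -1
	}
	return tokens
}

func main() {
	// Words over {a, b}; '.' is a token of its own.
	a := &fsa{
		delta: map[int]map[rune]int{
			0: {'a': 1, 'b': 1, '.': 2},
			1: {'a': 1, 'b': 1},
		},
		tokenEnd: map[int]bool{1: true, 2: true},
	}
	fmt.Println(a.tokenize([]rune("ab."))) // [ab .]
}
```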
The double array representation (Aoe 1989) of all transitions in the FST is implemented as an extended DFA following Mizobuchi et al. (2000) and implementation details following Kanda et al. (2018).
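The appeal of the double array is that a transition can be verified with two array lookups. The following sketch shows this lookup step in Go, assuming a plain Aoe-style `base`/`check` layout; the names and layout are illustrative and do not reflect Datok's internal encoding.

```go
package main

import "fmt"

// Illustrative Aoe-style double array; field names and layout
// are assumptions for illustration, not Datok's internals.
type doubleArray struct {
	base  []int // base[s] offsets the symbol codes of state s
	check []int // check[t] records which state owns slot t
}

// step returns the successor of state on symbol code c,
// or -1 if no such transition exists.
func (d *doubleArray) step(state, c int) int {
	next := d.base[state] + c
	if next >= 0 && next < len(d.check) && d.check[next] == state {
		return next
	}
	return -1
}

func main() {
	// State 0 with base 1: symbol 2 leads to slot 3, owned by state 0.
	d := &doubleArray{
		base:  []int{1, 0, 0, 5},
		check: []int{-1, -1, -1, 0},
	}
	fmt.Println(d.step(0, 2)) // 3
	fmt.Println(d.step(0, 1)) // -1 (check[2] != 0)
}
```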
Please cite this work as:
Diewald, Nils (2022): Matrix and Double-Array Representations for Efficient Finite State Tokenization. In: Proceedings of the 10th Workshop on Challenges in the Management of Large Corpora (CMLC-10) at LREC 2022. Marseille, France, pp. 20–26.
The library contains sources for a German tokenizer based on KorAP-Tokenizer.
For speed and quality analysis in comparison to other tokenizers for German, please refer to this article:
Diewald, Nils, Marc Kupietz & Harald Lüngen (2022): Tokenizing on scale – Preprocessing large text corpora on the lexical and sentence level. In: Proceedings of EURALEX 2022. Mannheim, Germany, pp. 208–221.
The benchmarks can be reproduced using this test suite.
Datok is published under the Apache 2.0 License.
The German and English tokenizers shipped are based on work done by the Lucene project (published under the Apache License), David Hall (published under the Apache License), Çağrı Çöltekin (published under the MIT License), and Marc Kupietz (published under the Apache License).
The English clitics list is based on Zwicky & Pullum (1983).
The Foma parser is based on foma2js, written by Mans Hulden (published under the Apache License).
Aoe, Jun-ichi (1989): An Efficient Digital Search Algorithm by Using a Double-Array Structure. IEEE Transactions on Software Engineering, 15 (9), pp. 1066–1077.
Beesley, Kenneth R. & Lauri Karttunen (2003): Finite State Morphology. Stanford, CA: CSLI Publications.
Beesley, Kenneth R. (2004): Tokenizing Transducers. https://web.stanford.edu/~laurik/fsmbook/clarifications/tokfst.html
Hulden, Mans (2009): Foma: a finite-state compiler and library. In: Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pp. 29–32.
Mizobuchi, Shoji, Toru Sumitomo, Masao Fuketa & Jun-ichi Aoe (2000): An efficient representation for implementing finite state machines based on the double-array. Information Sciences, 129, pp. 119–139.
Kanda, Shunsuke, Yuma Fujita, Kazuhiro Morita & Masao Fuketa (2018): Practical rearrangement methods for dynamic double-array dictionaries. Software: Practice and Experience (SPE), 48(1), pp. 65–83.
Zwicky, Arnold M. & Geoffrey K. Pullum (1983): Cliticization vs. Inflection: English N'T. Language, 59, pp. 502–513.