Merge "Add R scripts to plot performance charts"
diff --git a/Dockerfile b/Dockerfile
index 2d92c07..5901005 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -23,7 +23,7 @@
 RUN apt-get install -y python3-dev \
     python3 \
     python3-pip && \
-    pip3 install SoMaJo
+    pip3 install SoMaJo==2.2.0
 
 RUN echo "SOMAJO\n" && somajo-tokenizer --split_sentences ./example.txt
 
@@ -186,7 +186,7 @@
 # Install SpaCy #
 #################
 
-RUN pip3 install -U spacy
+RUN pip3 install -U spacy==3.2.3
 
 COPY spacy /euralex/spacy/
 
@@ -223,13 +223,24 @@
 # Install Cutter #
 ##################
 
-RUN pip3 install cutter-ng
+RUN pip3 install cutter-ng==2.5
 
 COPY cutter /euralex/cutter/
 
 RUN echo "Cutter\n" && python3 ./cutter/cutter.py nosent example.txt
 
 
+#####################
+# Install BlingFire #
+#####################
+
+RUN pip3 install -U blingfire==0.1.8
+
+COPY blingfire /euralex/blingfire/
+
+RUN echo "BlingFire\n" && python3 ./blingfire/blingfire_tok.py example.txt
+
+
 #################
 # Install Datok #
 #################
diff --git a/Readme.md b/Readme.md
index 8f75046..b9c289d 100644
--- a/Readme.md
+++ b/Readme.md
@@ -1,23 +1,18 @@
-# Creating the container
+# EURALEX 2022 - Tokenization Benchmark
+
+This repository contains benchmark scripts for comparing different tokenizers and sentence segmenters for German. For trouble-free testing, all tools are bundled in a Docker image built from the provided Dockerfile.
+
+## Creating the container
 
 To build the Docker image, run
 
 ```shell
 $ docker build -f Dockerfile -t korap/euralex22 .
 ```
-This will download and install an image of approximately 6GB.
-
-It will download and install the following
-tokenizers in an image to your system:
-
-...
-
-To run the evaluation suite ...
-
-...
+This will build and install a Docker image of approximately 12GB.
 
 
-# Running the evaluation suite
+## Running the evaluation suite
 
 To run the benchmark, call
 
@@ -30,7 +25,7 @@
 
 The supported benchmark scripts are:
 
-## `benchmark.pl`
+### `benchmark.pl`
 
 Performance measurements of the tools. See the tools section for some
 remarks to take into account. Accepts two numerical parameters:
@@ -38,8 +33,7 @@
 - The duplication count of the example file
 - The number of iterations
 
-
-## `empirist.pl`
+### `empirist.pl`
 
 To run the empirist evaluation suite, you first need to download
 the empirist gold standard corpus and tooling, and extract it into
@@ -53,8 +47,6 @@
 $ unzip empirist_gold_web.zip -d corpus
 ```
 
-Quality measurements based on EmpiriST 2015.
-
 To investigate the output, start the benchmark with mounted
 output folders
 
@@ -63,7 +55,7 @@
 -v ${PWD}/output_web:/euralex/empirist_web
 ```
 
-## `ud_tokens.pl`
+### `ud_tokens.pl`
 
 To run the token evaluation suite against the 
 [Universal Dependency](https://github.com/UniversalDependencies/UD_German-GSD)
@@ -75,54 +67,13 @@
   -O corpus/de_gsd-ud-train.conllu
 ```
 
-## `ud_sentences.pl`
+### `ud_sentences.pl`
 
 To run the sentence evaluation suite, first download the corpus
 as explained above.
 
 
-# Tools
-
-## Waste
-- Tokenization
-
-## OpenNLP
-- Tokenization
-
-## TreeTagger
-- Tokenization
-
-## JTok
-- Tokenization
-
-## SynTok
-- Tokenization
-
-## SoMaJo
-- Tokenization
-
-## Stanford CoreNLP
-- Tokenization
-
-All tools are run using [pipelining](https://stanfordnlp.github.io/CoreNLP/pipeline.html),
-which obviously introduces some overhead, that needs to be taken into account.
-
-## KorAP-Tokenizer
-- Tokenization + Sentence Splitting
-
-## Datok
-- Tokenization + Sentence Splitting
-
-
-# Licenses
-
-For Treetagger:
-Please read the [license terms](https://cis.uni-muenchen.de/~schmid/tools/TreeTagger/Tagger-Licence),
-before you download the software!
-By downloading the software, you agree to the terms stated there. 
-
-
-# Caveat
+## Caveat
 
 When running this benchmark using Docker you may need
 to run all processes privileged to get
@@ -132,4 +83,50 @@
 docker run --privileged -v
 ```
 
-# Literature
+## Tools
+
+### Our tools for token and sentence boundary detection:
+
+- [KorAP-Tokenizer](https://github.com/KorAP/KorAP-Tokenizer) is rule-based: using the lexical analysis generator [JFlex](https://jflex.de/), it compiles a list of regular expressions into a deterministic finite-state automaton that introduces segment boundaries at terminal nodes (a conceptual sketch of this approach follows the list). The rule set is based on [Apache Lucene](https://lucene.apache.org/)'s tokenizer and has been extensively modified. Rule sets are available for English, French, and German. KorAP-Tokenizer is used productively for the tokenization and (among other tools) the sentence segmentation of DeReKo.
+- [Datok](https://github.com/KorAP/Datok) is likewise rule-based: it derives an extended deterministic finite-state automaton from a finite-state transducer written in XFST (Beesley/Karttunen 2003), reduces it to a small set of transition rules, and interprets it for tokenization and sentence segmentation. The rule set of KorAP-Tokenizer was ported to XFST for this purpose; the automaton is generated with Foma (Hulden 2009). Rule sets are currently only available for German. Datok is still being evaluated experimentally.
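+
+As a conceptual illustration of this approach (a hand-written sketch, not code taken from KorAP-Tokenizer or Datok): a deterministic automaton over character classes scans the input and emits a segment boundary wherever the longest match ends in an accepting state.
+
+```python
+# Toy longest-match segmenter driven by a tiny transition table; the real
+# tools compile large rule sets (JFlex / XFST) into such automata.
+def char_class(c):
+    if c.isalpha():
+        return "alpha"
+    if c.isdigit():
+        return "digit"
+    if c.isspace():
+        return "space"
+    return "other"
+
+TRANSITIONS = {
+    "start":  {"alpha": "word", "digit": "number", "space": "skip", "other": "punct"},
+    "word":   {"alpha": "word"},
+    "number": {"digit": "number"},
+}
+ACCEPTING = {"word", "number", "punct"}
+
+def tokenize(text):
+    pos = 0
+    while pos < len(text):
+        state, i, last_accept = "start", pos, None
+        while i < len(text):
+            state = TRANSITIONS.get(state, {}).get(char_class(text[i]))
+            if state is None:
+                break
+            i += 1
+            if state in ACCEPTING:
+                last_accept = i      # remember the end of the longest match so far
+        if last_accept is None:      # whitespace or unmatched character
+            pos += 1
+        else:
+            yield text[pos:last_accept]
+            pos = last_accept
+
+print(list(tokenize("Das kostet 3 Euro.")))
+# ['Das', 'kostet', '3', 'Euro', '.']
+```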
+
+### Tools for token and sentence boundary detection: 
+
+- [SoMaJo](https://github.com/tsproisl/SoMaJo) (Proisl/Uhrig 2016) is rule-based and applies a list of regular expressions to segment a text. SoMaJo won first place in the EmpiriST 2015 shared task on tokenizing German web and CMC corpora (the basis of the `empirist.pl` benchmark above) and has been regularly improved since then. SoMaJo is available specifically for German.
+- [Cutter](https://pub.cl.uzh.ch/wiki/public/cutter/start) (Graën et al. 2018) is rule-based and recursively applies language-specific and language-independent rules to a text to segment it. Compared to other rule-based tools, Cutter uses a context-free rather than a regular grammar.
+- [OpenNLP](https://opennlp.apache.org/) is a framework that offers both a tokenizer and a sentence segmenter with different models; both are based on a maximum entropy approach. In addition, OpenNLP offers the SimpleTokenizer, which is based on simple character class decisions.
+- [JTok](https://github.com/DFKI-MLT/JTok) is based on cascading regular expressions that segment the text until each token can be assigned to a token class, which (as with SoMaJo) can also be returned. Rules exist for English, German, and Italian.
+- [Waste](https://kaskade.dwds.de/waste/) (Jurish/Würzner 2013) is based on a hidden Markov model in which a pre-segmented stream of (pseudo)tokens is re-evaluated at the detected boundaries and each token is classified as to whether it is word-initial or sentence-initial.
+- [Stanford Tokenizer](https://nlp.stanford.edu/software/tokenizer.shtml) is rule-based and relies on JFlex (cf. KorAP-Tokenizer) to compile a list of regular expressions into a deterministic finite-state automaton that introduces segment boundaries at terminal nodes.
+- [SpaCy](https://spacy.io/usage/linguistic-features) is a framework whose tokenization stage is rule-based and runs in several phases that split the text into increasingly fine segments. Rule sets are provided for numerous languages. For sentence segmentation, several components are offered: the Sentencizer is rule-based, the dependency parser segments on the basis of a full syntactic analysis, and a purely statistical component segments based on a simple statistical model (see the sketch after this list).
+- [Syntok](https://github.com/fnl/syntok) is rule-based and applies successive separation rules, primarily in the form of regular expressions, to an input string for segmentation. There is both a tokenizer and a sentence segmenter based on it. Rules exist for Spanish, English, and German.
+- [BlingFire](https://github.com/microsoft/BlingFire) is rule-based and compiles regular expressions into a deterministic finite-state automaton that introduces segment boundaries at terminal nodes. The model tested here is language-independent, with a focus on English.
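+
+As a minimal illustration of SpaCy's rule-based mode (a sketch assuming spaCy 3.x, as pinned in the Dockerfile, not the repository's own wrapper): in this repository the segmentation mode is passed as the first argument to `spacy/spacy_sent.py` (e.g. `sentencizer`); the dependency-based and statistical modes additionally require a trained German pipeline.
+
+```python
+# Rule-based sentence segmentation with spaCy's Sentencizer on a blank
+# German pipeline (no trained components required).
+import spacy
+
+nlp = spacy.blank("de")        # tokenizer only
+nlp.add_pipe("sentencizer")    # rule-based sentence boundary detection
+
+doc = nlp("Das ist ein Satz. Und hier folgt noch einer.")
+for sent in doc.sents:
+    print(sent.text)
+```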
+
+### Tools for token boundary detection only:
+
+- [TreeTagger](https://cis.uni-muenchen.de/~schmid/tools/TreeTagger/) (Schmid 1994) is a part-of-speech tagger that ships with a separate rule-based tokenization tool, which likewise uses a set of regular expressions to segment a text. TreeTagger does not itself introduce markers for sentence boundaries. Please note the [license terms](https://cis.uni-muenchen.de/~schmid/tools/TreeTagger/Tagger-Licence) before using the software.
+- [Elephant](https://gmb.let.rug.nl/elephant/about.php) (Evang et al. 2013) is a machine-trained system for segmentation based on Conditional Random Fields and Recurrent Neural Networks. We evaluate here a [wrapper implementation](https://github.com/erwanm/elephant-wrapper) (Moreau/Vogel, 2018) that considers only token segmentation and not sentence segmentation, although Elephant provides both.
+
+### Tools for sentence boundary detection only:
+
+- [Deep-EOS](https://github.com/dbmdz/deep-eos) (Schweter/Ahmed 2019) is based on different neural network implementations: long short-term memory (LSTM), bidirectional LSTM, and convolutional neural networks. It does not rely on pre-tokenization and operates directly on the character stream.
+- [NNSplit](https://bminixhofer.github.io/nnsplit/) is a machine-trained approach based on a byte-level LSTM neural network.
+
+
+## Results
+
+The speed measurements were taken on the tools' native output; for the accuracy measurements, that output had to be reshaped further to make it comparable to the gold standard.
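+
+The reshaping itself is handled by the Perl helpers in `benchmarks/cleanup/` and by small `sed` calls in the benchmark drivers; as a minimal illustration of the idea, bringing whitespace-separated tokenizer output into the one-token-per-line format used for comparison might look like this:
+
+```python
+# Turn whitespace-separated tokenizer output (stdin) into one token per
+# line (stdout), mirroring the `sed "s/\s/\n/g"` step used for BlingFire.
+import sys
+
+for line in sys.stdin:
+    for token in line.split():
+        print(token)
+```
+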
+## Literature
+
+- Beesley, K. R./Karttunen, L. (2003): Finite State Morphology. CSLI Publications.
+- Evang, K./Basile, V./Chrupała, G./Bos, J. (2013): Elephant: Sequence Labeling for Word and Sentence Segmentation. Proceedings of the EMNLP 2013: Conference on Empirical Methods in Natural Language Processing, Seattle, US.
+- Graën, J./Bertamini, M./Volk, M. (2018): [Cutter – a universal multilingual tokenizer](https://doi.org/10.5167/uzh-157243). In: Cieliebak, M./Tuggener, D./Benites, F. (eds.): Swiss text analytics conference, Nr. 2226, pp. 75–81. CEUR-WS.
+- Hulden, M. (2009): Foma: A finite-state toolkit and library. Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pp. 29–32.
+- Jurish, B./Würzner, K.-M. (2013): Word and Sentence Tokenization with Hidden Markov Models. JLCL, 28 (2), pp. 61–83.
+- Moreau, E./Vogel, C. (2018): [Multilingual Word Segmentation: Training Many Language-Specific Tokenizers Smoothly Thanks to the Universal Dependencies Corpus](https://aclanthology.org/L18-1180). Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). Miyazaki, Japan.
+- Proisl, T./Uhrig, P. (2016): SoMaJo: State-of-the-art tokenization for German web and social media texts. Proceedings of the 10th Web as Corpus Workshop, pp. 57–62.
+- Schmid, H. (1994): Probabilistic Part-of-Speech Tagging Using Decision Trees. Proceedings of International Conference on New Methods in Language Processing.
+- Schweter, S./Ahmed, S. (2019): Deep-EOS: General-Purpose Neural Networks for Sentence Boundary Detection. Proceedings of the 15th Conference on Natural Language Processing (KONVENS). KONVENS, Erlangen, Germany.
diff --git a/benchmarks/benchmark.pl b/benchmarks/benchmark.pl
index 6b74456..a273c9c 100644
--- a/benchmarks/benchmark.pl
+++ b/benchmarks/benchmark.pl
@@ -101,6 +101,12 @@
   cutter => sub {
     system 'python3 ./cutter/cutter.py nosent ./corpus/'.$FILE.' > /dev/null'
   },
+  blingfire_tok => sub {
+    system 'python3 ./blingfire/blingfire_tok.py ./corpus/'.$FILE.' > /dev/null'
+  },
+  blingfire_sent => sub {
+    system 'python3 ./blingfire/blingfire_sent.py ./corpus/'.$FILE.' > /dev/null'
+  },
   spacy_tok => sub {
     system 'python3 ./spacy/spacy_tok.py ./corpus/'.$FILE.' > /dev/null'
   },
@@ -135,36 +141,38 @@
   },
 };
 
-delete $models->{'SoMaJo'};
-delete $models->{'SoMaJo_p2'};
-delete $models->{'SoMaJo_p4'};
-delete $models->{'SoMaJo_p8'};
-delete $models->{'Datok_matok'};
-delete $models->{'Datok_datok'};
-delete $models->{'OpenNLP_Simple'};
-delete $models->{'OpenNLP_Tokenizer_de-ud-gsd'};
-delete $models->{'OpenNLP_Sentence_de-ud-gsd'};
-delete $models->{'TreeTagger'};
-delete $models->{'deep-eos_bi-lstm-de'};
-delete $models->{'deep-eos_cnn-de'};
-delete $models->{'deep-eos_lstm-de'};
-delete $models->{'JTok'};
-delete $models->{'KorAP-Tokenizer'};
-delete $models->{'Syntok_tokenizer'};
-delete $models->{'Syntok_segmenter'};
-delete $models->{'Waste'};
-delete $models->{'nnsplit'};
-delete $models->{'elephant'};
-delete $models->{'Stanford'};
-delete $models->{'Stanford_t2'};
-delete $models->{'Stanford_t4'};
-delete $models->{'Stanford_t8'};
+#delete $models->{'SoMaJo'};
+#delete $models->{'SoMaJo_p2'};
+#delete $models->{'SoMaJo_p4'};
+#delete $models->{'SoMaJo_p8'};
+#delete $models->{'Datok_matok'};
+#delete $models->{'Datok_datok'};
+#delete $models->{'OpenNLP_Simple'};
+#delete $models->{'OpenNLP_Tokenizer_de-ud-gsd'};
+#delete $models->{'OpenNLP_Sentence_de-ud-gsd'};
+#delete $models->{'TreeTagger'};
+#delete $models->{'deep-eos_bi-lstm-de'};
+#delete $models->{'deep-eos_cnn-de'};
+#delete $models->{'deep-eos_lstm-de'};
+#delete $models->{'JTok'};
+#delete $models->{'KorAP-Tokenizer'};
+#delete $models->{'Syntok_tokenizer'};
+#delete $models->{'Syntok_segmenter'};
+#delete $models->{'Waste'};
+#delete $models->{'nnsplit'};
+#delete $models->{'elephant'};
+#delete $models->{'Stanford'};
+#delete $models->{'Stanford_t2'};
+#delete $models->{'Stanford_t4'};
+#delete $models->{'Stanford_t8'};
 #delete $models->{'Stanford_tokonly'};
 #delete $models->{'cutter'};
 #delete $models->{'spacy_tok'};
 #delete $models->{'spacy_sentencizer'};
 #delete $models->{'spacy_dep'};
 #delete $models->{'spacy_stat'};
+#delete $models->{'blingfire_tok'};
+#delete $models->{'blingfire_sent'};
 
 
 
diff --git a/benchmarks/cleanup/split_conllu.pl b/benchmarks/cleanup/split_conllu.pl
index 441be36..e372d68 100644
--- a/benchmarks/cleanup/split_conllu.pl
+++ b/benchmarks/cleanup/split_conllu.pl
@@ -5,11 +5,19 @@
 our @ARGV;
 
 my $file = $ARGV[0];
+# Derive the plain file name (without directories) for the output files
+my $file_name = $file;
+$file_name =~ s!^.+?/([^/]+?)$!$1!;
+
+# Output directory (second argument)
+my $out = $ARGV[1];
 
 open(X, '<' . $file);
-open(RAW, '>' . $file . '.raw');
-open(SPLIT, '>' . $file . '.split');
-open(EOS, '>' . $file . '.eos');
+# Remove stale output files that earlier runs left next to the corpus file
+unlink $file . '.raw';
+open(RAW, '>' . $out . '/' . $file_name . '.raw') or die $!;
+unlink $file . '.split';
+open(SPLIT, '>' . $out . '/' . $file_name . '.split') or die $!;
+unlink $file . '.eos';
+open(EOS, '>' . $out . '/' . $file_name . '.eos') or die $!;
 
 my $init;
 
diff --git a/benchmarks/empirist.pl b/benchmarks/empirist.pl
index 86e21dd..b20af0d 100644
--- a/benchmarks/empirist.pl
+++ b/benchmarks/empirist.pl
@@ -59,6 +59,10 @@
     my $raw = $gold_path . $_[1] . '/raw/' . $_[0];
     system 'python3 ./spacy/spacy_tok.py ' . $raw . ' > ' . $empirist_path . $_[1] . '/spacy/' . $_[0];
   },
+  blingfire => sub {
+    my $raw = $gold_path . $_[1] . '/raw/' . $_[0];
+    system 'python3 ./blingfire/blingfire_tok.py ' . $raw . ' | sed "s/\s/\n/g" > ' . $empirist_path . $_[1] . '/blingfire/' . $_[0];
+  },
   cutter => sub {
     my $raw = $gold_path . $_[1] . '/raw/' . $_[0];
     system 'python3 ./cutter/cutter.py nosent ' . $raw . ' > ' . $empirist_path . $_[1] . '/cutter/' . $_[0];
@@ -72,19 +76,20 @@
   }
 );
 
-# delete $tools{waste};
-# delete $tools{datok};
-# delete $tools{korap_tokenizer};
-# delete $tools{opennlp_simple};
-# delete $tools{opennlp_tokenizer};
-# delete $tools{tree_tagger};
-# delete $tools{jtok};
-# delete $tools{syntok};
-# delete $tools{somajo};
-# delete $tools{stanford};
-# delete $tools{spacy};
-# delete $tools{elephant};
-# delete $tools{cutter};
+delete $tools{waste};
+delete $tools{datok};
+delete $tools{korap_tokenizer};
+delete $tools{opennlp_simple};
+delete $tools{opennlp_tokenizer};
+delete $tools{tree_tagger};
+delete $tools{jtok};
+delete $tools{syntok};
+delete $tools{somajo};
+delete $tools{stanford};
+delete $tools{spacy};
+delete $tools{elephant};
+delete $tools{cutter};
+delete $tools{blingfire};
 
 # Create project folders
 foreach (keys %tools) {
diff --git a/benchmarks/ud_sentences.pl b/benchmarks/ud_sentences.pl
index c5f80c6..ea2774b 100644
--- a/benchmarks/ud_sentences.pl
+++ b/benchmarks/ud_sentences.pl
@@ -4,11 +4,11 @@
 
 # Comparison path
 my $cmd = '/euralex/corpus/empirist_gold_cmc/tools/compare_tokenization.perl';
-# my $cmd = '/euralex/corpus/deep-eos/eval.py';
 
 my $cleanup = 'perl /euralex/benchmarks/cleanup/';
 my $tokenize_eos = $cleanup . 'tokenize_eos.pl';
 my $tokenize_nn = $cleanup . 'tokenize_nn.pl';
+my $tokenize_simple = $cleanup . 'tokenize_simple.pl';
 
 # Output path
 my $ud_path = '/euralex/ud_eos';
@@ -18,11 +18,11 @@
 
 # Split files
 chdir '/euralex/corpus/';
-system 'perl /euralex/benchmarks/cleanup/split_conllu.pl /euralex/corpus/' . $base;
+system 'perl /euralex/benchmarks/cleanup/split_conllu.pl /euralex/corpus/' . $base . ' ' . $ud_path;
 chdir '/euralex';
 
-my $gold = '/euralex/corpus/' . $base . '.eos';
-my $raw = '/euralex/corpus/' . $base . '.raw';
+my $gold = $ud_path . '/' . $base . '.eos';
+my $raw = $ud_path . '/' . $base . '.raw';
 
 my %tools = (
   waste => sub {
@@ -46,7 +46,7 @@
     chdir '/euralex';
   },
   syntok => sub {
-    system 'python3 -m syntok.segmenter ' . $raw . ' | ' . $cleanup . '/tokenize_simple.pl > ' . $ud_path . '/syntok/' . $base;
+    system 'python3 -m syntok.segmenter ' . $raw . ' | ' . $tokenize_simple . ' > ' . $ud_path . '/syntok/' . $base;
   },
   somajo => sub {
     system 'somajo-tokenizer --split_sentences ' . $raw . ' 2> /dev/null | ' . $tokenize_nn . ' > ' . $ud_path . '/somajo/' . $base;
@@ -69,6 +69,9 @@
   spacy_sentencizer => sub {
     system 'python3 ./spacy/spacy_sent.py sentencizer ' . $raw . ' | ' . $tokenize_eos . ' > ' . $ud_path . '/spacy_sentencizer/' . $base
   },
+  blingfire => sub {
+    system 'python3 ./blingfire/blingfire_sent.py ' . $raw . ' | ' . $tokenize_simple . ' > ' . $ud_path . '/blingfire/' . $base;
+  },
   'deep-eos_bi-lstm-de' => sub {
     system 'python3 ./deep-eos/main.py --input-file '.$raw.' --model-filename ./deep-eos/bi-lstm-de.model --vocab-filename ./deep-eos/bi-lstm-de.vocab --eos-marker "</eos>" tag | ' . $tokenize_eos . ' > ' . $ud_path . '/deep-eos_bi-lstm-de/' . $base;
   },
@@ -81,22 +84,23 @@
 );
 
 
-#delete $tools{waste};
-#delete $tools{datok};
-#delete $tools{korap_tokenizer};
-#delete $tools{'opennlp_sentence'};
-#delete $tools{jtok};
-#delete $tools{syntok};
-#delete $tools{somajo};
-#delete $tools{stanford};
-#delete $tools{nnsplit};
-#delete $tools{'deep-eos_bi-lstm-de'};
-#delete $tools{'deep-eos_cnn-de'};
-#delete $tools{'deep-eos_lstm-de'};
-#delete $tools{'spacy_dep'};
-#delete $tools{'spacy_stat'};
-#delete $tools{'spacy_sentencizer'};
-#delete $tools{'cutter'};
+# delete $tools{waste};
+# delete $tools{datok};
+# delete $tools{korap_tokenizer};
+# delete $tools{'opennlp_sentence'};
+# delete $tools{jtok};
+# delete $tools{syntok};
+# delete $tools{somajo};
+# delete $tools{stanford};
+# delete $tools{nnsplit};
+# delete $tools{'deep-eos_bi-lstm-de'};
+# delete $tools{'deep-eos_cnn-de'};
+# delete $tools{'deep-eos_lstm-de'};
+# delete $tools{'spacy_dep'};
+# delete $tools{'spacy_stat'};
+# delete $tools{'spacy_sentencizer'};
+# delete $tools{'blingfire'};
+# delete $tools{'cutter'};
 
 
 # Create project folders
diff --git a/benchmarks/ud_tokens.pl b/benchmarks/ud_tokens.pl
index 6e30ef1..685e6da 100644
--- a/benchmarks/ud_tokens.pl
+++ b/benchmarks/ud_tokens.pl
@@ -13,11 +13,11 @@
 
 # Split files
 chdir '/euralex/corpus/';
-system 'perl /euralex/benchmarks/cleanup/split_conllu.pl /euralex/corpus/' . $base;
+system 'perl /euralex/benchmarks/cleanup/split_conllu.pl /euralex/corpus/' . $base . ' ' . $ud_path;
 chdir '/euralex';
 
-my $gold = '/euralex/corpus/' . $base . '.split';
-my $raw = '/euralex/corpus/' . $base . '.raw';
+my $gold = $ud_path . '/' . $base . '.split';
+my $raw = $ud_path . '/' . $base . '.raw';
 
 my %tools = (
   waste => sub {
@@ -52,6 +52,9 @@
   spacy => sub {
     system 'python3 ./spacy/spacy_tok.py ' . $raw . ' > ' . $ud_path . '/spacy/' . $base;
   },
+  blingfire => sub {
+    system 'python3 ./blingfire/blingfire_tok.py ' . $raw . ' | sed "s/\s/\n/g" > ' . $ud_path . '/blingfire/' . $base;
+  },
   cutter => sub {
     system 'python3 ./cutter/cutter.py nosent ' . $raw . ' > ' . $ud_path . '/cutter/' . $base;
   },
@@ -79,6 +82,7 @@
 # delete $tools{elephant};
 # delete $tools{spacy};
 # delete $tools{cutter};
+# delete $tools{blingfire};
 
 # Create project folders
 foreach (keys %tools) {
diff --git a/blingfire/blingfire_sent.py b/blingfire/blingfire_sent.py
new file mode 100644
index 0000000..1f1bfed
--- /dev/null
+++ b/blingfire/blingfire_sent.py
@@ -0,0 +1,8 @@
+import sys
+from blingfire import text_to_sentences
+
+# Read the input file and print its sentence segmentation;
+# text_to_sentences returns one sentence per line.
+with open(sys.argv[1], 'r') as f:
+    contents = f.read()
+
+    print(text_to_sentences(contents))
+
diff --git a/blingfire/blingfire_tok.py b/blingfire/blingfire_tok.py
new file mode 100644
index 0000000..4929cdb
--- /dev/null
+++ b/blingfire/blingfire_tok.py
@@ -0,0 +1,8 @@
+import sys
+from blingfire import text_to_words
+
+# Read the input file and print its tokenization;
+# text_to_words returns the tokens separated by single spaces.
+with open(sys.argv[1], 'r') as f:
+    contents = f.read()
+
+    print(text_to_words(contents))
+
diff --git a/spacy/spacy_sent.py b/spacy/spacy_sent.py
index b132b07..f9658bb 100644
--- a/spacy/spacy_sent.py
+++ b/spacy/spacy_sent.py
@@ -20,8 +20,10 @@
 with open(sys.argv[2], 'r') as f:
     contents = f.read()
 
-    doc = nlp(contents)
+    # Raise spaCy's default max_length so the whole file fits into a single Doc
+    nlp.max_length = len(contents) + 100
 
+    doc = nlp(contents, disable=['ner'])
+
     for sent in doc.sents:
         print(sent.text)
         print(" </eos> ")