commit 54c5221b610a6cfb80733ce7ff805755b5d28971
Author:    Akron <nils@diewald-online.de>  Mon Mar 07 18:56:21 2022 +0100
Committer: Akron <nils@diewald-online.de>  Mon Mar 07 18:56:21 2022 +0100
Tree:      bf600066a200531f3f6fc59c33026dfe05f0599a
Parent:    ff35ed4db22a63aad74d4b1737b14658aef9f615

Add UD evaluation

Change-Id: I87b50f7b46c7f1d111e5e8ad3f925ca5280d74a2
To build the Docker image, run
$ docker build -f Dockerfile -t korap/euralex22 .
This will create an image of approximately 6GB, downloading and installing the following tokenizers into it:
...
To run the evaluation suite ...
...
To run the benchmark, call
$ docker run --rm -i \
  -v ${PWD}/benchmarks:/euralex/benchmarks \
  -v ${PWD}/corpus:/euralex/corpus \
  korap/euralex22 benchmarks/[BENCHMARK-SCRIPT]
The supported benchmark scripts are:
benchmark.pl
Performance measurements of the tools. See the tools section for remarks to take into account when interpreting the results.
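For example, the performance benchmark can be started by substituting the script name into the general invocation above:

$ docker run --rm -i \
  -v ${PWD}/benchmarks:/euralex/benchmarks \
  -v ${PWD}/corpus:/euralex/corpus \
  korap/euralex22 benchmarks/benchmark.pl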
empirist.pl
Quality measurements based on EmpiriST 2015.

To run the EmpiriST evaluation suite, you first need to download the EmpiriST gold standard corpus and tooling and extract them into the corpus directory:

$ wget https://sites.google.com/site/empirist2015/home/shared-task-data/empirist_gold_cmc.zip
$ unzip empirist_gold_cmc.zip -d corpus
$ wget https://sites.google.com/site/empirist2015/home/shared-task-data/empirist_gold_web.zip
$ unzip empirist_gold_web.zip -d corpus
To investigate the output, start the benchmark with mounted output folders:

-v ${PWD}/output_cmc:/euralex/empirist_cmc \
-v ${PWD}/output_web:/euralex/empirist_web
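A complete invocation with these mounts might look as follows (a sketch that merely combines the general command above with the two output mounts):

$ docker run --rm -i \
  -v ${PWD}/benchmarks:/euralex/benchmarks \
  -v ${PWD}/corpus:/euralex/corpus \
  -v ${PWD}/output_cmc:/euralex/empirist_cmc \
  -v ${PWD}/output_web:/euralex/empirist_web \
  korap/euralex22 benchmarks/empirist.pl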
ud-tokens.pl
To run the evaluation suite against the Universal Dependencies corpus, first install the EmpiriST tooling as explained above and download the corpus:
$ wget https://github.com/UniversalDependencies/UD_German-GSD/raw/master/de_gsd-ud-train.conllu \
  -O corpus/de_gsd-ud-train.conllu
$ docker run --rm -it \
  -v ${PWD}/benchmarks:/euralex/benchmarks \
  -v ${PWD}/corpus:/euralex/corpus \
  korap/euralex22 benchmarks/ud-tokens.pl
All tools are run using pipelining, which introduces some overhead that needs to be taken into account.
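As an illustration (a hypothetical sketch, not the harness's actual code; my-tokenizer is a placeholder name): each tool reads the raw text from standard input and writes tokens to standard output, so the measured time includes the cost of the pipe itself.

$ # hypothetical pipe-based invocation; my-tokenizer is a placeholder
$ time cat corpus/example.txt | my-tokenizer > /dev/null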
For TreeTagger: please read the license terms before you download the software! By downloading the software, you agree to the terms stated there.
When running this benchmark using Docker, you may need to run all processes privileged to get meaningful results:
docker run --privileged -v
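A full privileged invocation might look like this (a sketch that prefixes the general benchmark command shown earlier with --privileged; whether the flag is needed depends on your Docker setup):

$ docker run --privileged --rm -i \
  -v ${PWD}/benchmarks:/euralex/benchmarks \
  -v ${PWD}/corpus:/euralex/corpus \
  korap/euralex22 benchmarks/benchmark.pl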