# dereko2vec

Fork of wang2vec with extensions for re-training and count-based models, support for tokens with frequencies > 2³², and more accurate ETA estimates.

## Installation

### Dependencies
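
The build requires CMake, a C compiler, and make (plus CTest for the test step). On a Debian/Ubuntu system, for example, these can be installed roughly as follows (package names are an assumption, not taken from this README):

```shell
# Assumed package names for Debian/Ubuntu; adjust for your distribution
sudo apt-get install build-essential cmake
```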

### Build and install

```shell
cd dereko2vec
mkdir build
cd build
cmake ..
make && ctest3 --extra-verbose && sudo make install
```

## Run

The command for building word embeddings is exactly the same as in the original version, except that we added type 5 for setting up a purely count-based collocation database (see the examples below).

The `-type` argument is an integer that selects the architecture to use. The possible values are:

- 0: cbow
- 1: skipngram
- 2: cwindow (see below)
- 3: structured skipngram (see below)
- 4: Collobert's SENNA context window model (still experimental)
- 5: build a collocation count database instead of word embeddings

### Example

```shell
./dereko2vec -train input_file -output embedding_file -type 0 -size 50 -window 5 -negative 10 -nce 0 -hs 0 -sample 1e-4 -threads 1 -binary 1 -iter 5 -cap 0
```
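
A count-database run (`-type 5`) could look similar; the following is only a sketch in which the flags mirror the embedding example above (`collocation_db` is a placeholder name, and embedding-specific options such as `-size` or `-negative` may not be relevant in this mode):

```shell
./dereko2vec -train input_file -output collocation_db -type 5 -window 5 -sample 1e-4 -threads 1 -binary 1
```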

## Generate dereko2vec training input files from KorAP-XML ZIPs

The KorAP-XML-CoNLL-U tool can be used to generate input files for dereko2vec from KorAP-XML ZIPs, using their tokenization and sentence boundary information, for example:

```shell
korapxml2conllu --word2vec wpd19.zip > wpd19.w2vinput
```
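
The resulting file can then be passed to dereko2vec via `-train`, for example with the same flags as in the embedding example above (`wpd19.vecs` is a placeholder output name):

```shell
./dereko2vec -train wpd19.w2vinput -output wpd19.vecs -type 0 -size 50 -window 5 -negative 10 -nce 0 -hs 0 -sample 1e-4 -threads 1 -binary 1 -iter 5 -cap 0
```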

## References

```bibtex
@InProceedings{Ling:2015:naacl,
  author    = {Ling, Wang and Dyer, Chris and Black, Alan and Trancoso, Isabel},
  title     = {Two/Too Simple Adaptations of word2vec for Syntax Problems},
  booktitle = {Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
  year      = {2015},
  publisher = {Association for Computational Linguistics},
  location  = {Denver, Colorado},
}
```

```bibtex
@InProceedings{FankhauserKupietz2019,
  author    = {Peter Fankhauser and Marc Kupietz},
  title     = {Analyzing domain specific word embeddings for a large corpus of contemporary German},
  series    = {Proceedings of the 10th International Corpus Linguistics Conference},
  publisher = {University of Cardiff},
  address   = {Cardiff},
  year      = {2019},
  note      = {\url{https://doi.org/10.14618/ids-pub-9117}}
}
```