| commit | 6eab2f47380d335f9943058c18d2e3bdade0dcad | |
|---|---|---|
| author | Marc Kupietz <kupietz@ids-mannheim.de> | Thu Dec 21 11:15:14 2023 +0100 |
| committer | Marc Kupietz <kupietz@ids-mannheim.de> | Thu Dec 21 11:44:41 2023 +0100 |
| tree | 12a3c803a4ebf64847fd6083c370fdb4cd2c3acc | |
| parent | cdb91bae592879cdbafac7be0fe1e3fdbe26247b | |
Fix build with gcc 13 by including <cstdint>

Like other versions before, gcc 13 moved some includes around and, as a result, <cstdint> is no longer transitively included [1]. Explicitly include it for uint64_t.

[1] https://gcc.gnu.org/gcc-13/porting_to.html#header-dep-changes

Change-Id: I99c349fd2564fd6ee03f0fccc52c821c4e0a3235
Fork of wang2vec with extensions for re-training and count-based models, and a more accurate ETA estimate.
cd dereko2vec
mkdir build
cd build
cmake ..
make && ctest3 --extra-verbose && sudo make install
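As a quick check after building (a sketch, assuming the fork keeps the behaviour of the original word2vec), running the freshly built binary without arguments should print the list of available options:

./dereko2vec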
The command to build word embeddings is exactly the same as in the original version, except that we added type 5 for setting up a purely count-based collocation database (an example follows the command below).
The -type argument is an integer that defines the architecture to use. These are the possible values:
0 - cbow
1 - skipngram
2 - cwindow (see below)
3 - structured skipngram (see below)
4 - Collobert's SENNA context window model (still experimental)
5 - build a collocation count database instead of word embeddings
./dereko2vec -train input_file -output embedding_file -type 0 -size 50 -window 5 -negative 10 -nce 0 -hs 0 -sample 1e-4 -threads 1 -binary 1 -iter 5 -cap 0
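The count-based collocation database added in this fork (type 5) is built through the same command-line interface. A minimal sketch, with a hypothetical output name and only the flags that clearly apply to a pure count model:

./dereko2vec -train input_file -output collocation_db -type 5 -window 5 -threads 1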
The KorAP-XML-CoNLL-U tool can be used to generate input files for dereko2vec from KorAP-XML ZIPs using its tokenization and sentence boundary information, for example:
korapxml2conllu --word2vec wpd19.zip > wpd19.w2vinput
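The resulting file can then be used directly as training input; a sketch reusing the flags from the embedding example above (the output name wpd19.vecs is just an illustration):

./dereko2vec -train wpd19.w2vinput -output wpd19.vecs -type 0 -size 50 -window 5 -negative 10 -nce 0 -hs 0 -sample 1e-4 -threads 1 -binary 1 -iter 5 -cap 0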
@InProceedings{Ling:2015:naacl,
  author    = {Ling, Wang and Dyer, Chris and Black, Alan and Trancoso, Isabel},
  title     = {Two/Too Simple Adaptations of word2vec for Syntax Problems},
  booktitle = {Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
  year      = {2015},
  publisher = {Association for Computational Linguistics},
  location  = {Denver, Colorado},
}

@InProceedings{FankhauserKupietz2019,
  author    = {Peter Fankhauser and Marc Kupietz},
  title     = {Analyzing domain specific word embeddings for a large corpus of contemporary German},
  series    = {Proceedings of the 10th International Corpus Linguistics Conference},
  publisher = {University of Cardiff},
  address   = {Cardiff},
  year      = {2019},
  note      = {\url{https://doi.org/10.14618/ids-pub-9117}}
}