
Package of word embeddings of Czech from a large corpus

  • Additional Information
    • Publication Information:
      Charles University, Faculty of Mathematics and Physics, Institute of Formal and Applied Linguistics (UFAL)
    • Publication Date:
      2022
    • Collection:
      OLAC: Open Language Archives Community
    • Abstract:
      This package comprises eight models of Czech word embeddings trained by applying word2vec (Mikolov et al. 2013) to the currently most extensive corpus of Czech, SYN v9 (Křen et al. 2022). The minimum frequency threshold for including a word in a model was 10 occurrences in the corpus. The original lemmatisation and tagging included in the corpus were used for disambiguation. In the word-form models, each unit consists of a word form and its tag from a positional tagset (cf. https://wiki.korpus.cz/doku.php/en:pojmy:tag) separated by '>', e.g., kočka>NNFS1-----A----. The published package provides models trained on both tokens and lemmas. In addition, the models combine training algorithms (CBOW and Skip-gram) with dimensions of the resulting vectors (100 or 500), while the training window and negative sampling remained the same across all training runs. The package also includes files with frequencies of word forms (vocab-frequencies.forms) and lemmas (vocab-frequencies.lemmas).
    • Relation:
      http://hdl.handle.net/11234/1-4920
    • Online Access:
      http://hdl.handle.net/11234/1-4920
    • Rights:
      Creative Commons - Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) ; http://creativecommons.org/licenses/by-nc-sa/4.0/
    • Accession Number:
      edsbas.1B0AC5F8
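The abstract describes the unit format of the word-form models: a word form and its positional tag joined by '>'. A minimal sketch of composing and parsing such units is shown below; the helper functions are illustrative and not part of the published package.

```python
# Illustrative helpers for the "form>tag" unit format described in the
# abstract. These function names are hypothetical, not part of the package.

def make_unit(form: str, tag: str) -> str:
    """Join a word form and its positional tag with '>'."""
    return f"{form}>{tag}"

def split_unit(unit: str) -> tuple[str, str]:
    """Split a unit into (form, tag) on the last '>', since the
    positional tag itself never contains '>'."""
    form, tag = unit.rsplit(">", 1)
    return form, tag
```

Since the models are in standard word2vec format, they could presumably be loaded with a tool such as gensim's `KeyedVectors.load_word2vec_format` and queried with units built this way.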