Update tokenization note in collocationAnalysis function
Change-Id: If375acab2e022eb9e4e54b784154bb6b4195217b
diff --git a/R/collocationAnalysis.R b/R/collocationAnalysis.R
index aedadec..b844556 100644
--- a/R/collocationAnalysis.R
+++ b/R/collocationAnalysis.R
@@ -19,8 +19,9 @@
#' To increase speed at the cost of accuracy and possible false negatives,
#' you can decrease searchHitsSampleLimit and/or topCollocatesLimit and/or set exactFrequencies to FALSE.
#'
-#' Note that currently not the tokenization provided by the backend, i.e. the corpus itself, is used, but a tinkered one.
-#' This can also lead to false negatives and to frequencies that differ from corresponding ones acquired via the web
+#' Note that some outdated non-DeReKo backends might not yet support returning tokenized matches (a warning is issued in that case).
+#' The client library then falls back to client-side tokenization, which might be slightly less accurate.
+#' This can lead to false negatives and to frequencies that differ from corresponding ones acquired via the web
#' user interface.
#'
#' @family collocation analysis functions
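A minimal usage sketch (illustrative, not part of this change) showing the speed/accuracy trade-off parameters named in the note above (searchHitsSampleLimit, topCollocatesLimit, exactFrequencies); the KorAPConnection setup, context-size parameters, and query term are assumed from the RKorAPClient API and are placeholders here:

    library(RKorAPClient)

    # Connect to a KorAP backend; verbose = TRUE surfaces the warning
    # mentioned in the note if the backend cannot return tokenized matches.
    kco <- new("KorAPConnection", verbose = TRUE)

    # Trade accuracy for speed as described above: a smaller hit sample,
    # fewer top collocates, and approximate instead of exact frequencies.
    collocationAnalysis(kco, "Ansatz",
      leftContextSize = 1,
      rightContextSize = 1,
      searchHitsSampleLimit = 1000,
      topCollocatesLimit = 20,
      exactFrequencies = FALSE
    )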