Small README fix for the ChatGPT example
diff --git a/R/gpt3_completions.R b/R/gpt3_completions.R
index 0c86135..01e3f4e 100644
--- a/R/gpt3_completions.R
+++ b/R/gpt3_completions.R
@@ -30,11 +30,11 @@
 #' @param param_frequency_penalty numeric (default: 0) between -2.00  and +2.00 to determine the penalisation of repetitiveness based on the frequency of a token in the text already (from the official API documentation: _Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim._). See also: [https://beta.openai.com/docs/api-reference/parameter-details](https://beta.openai.com/docs/api-reference/parameter-details)
 #' @param param_best_of numeric (default: 1) that determines the space of possibilities from which to select the completion with the highest probability (from the official API documentation: _Generates `best_of` completions server-side and returns the "best" (the one with the highest log probability per token)_). See details.
 #'
-#' @return A list with two data tables (if `param_output_type` is the default "complete"): [[1]] contains the data table with the columns `n` (= the mo. of `n` responses requested), `prompt` (= the prompt that was sent), `gpt3` (= the completion as returned from the GPT-3 model) and `id` (= the provided `id_var` or its default alternative). [[2]] contains the meta information of the request, including the request id, the parameters of the request and the token usage of the prompt (`tok_usage_prompt`), the completion (`tok_usage_completion`), the total usage (`tok_usage_total`), and the `id` (= the provided `id_var` or its default alternative).
+#' @return A list with two data tables (if `param_output_type` is the default "complete"): `[[1]]` contains the data table with the columns `n` (= the no. of `n` responses requested), `prompt` (= the prompt that was sent), `gpt3` (= the completion as returned from the GPT-3 model) and `id` (= the provided `id_var` or its default alternative). `[[2]]` contains the meta information of the request, including the request id, the parameters of the request and the token usage of the prompt (`tok_usage_prompt`), the completion (`tok_usage_completion`), the total usage (`tok_usage_total`), and the `id` (= the provided `id_var` or its default alternative).
 #'
-#' If `output_type` is "text", only the data table in slot [[1]] is returned.
+#' If `output_type` is "text", only the data table in slot `[[1]]` is returned.
 #'
-#' If `output_type` is "meta", only the data table in slot [[2]] is returned.
+#' If `output_type` is "meta", only the data table in slot `[[2]]` is returned.
 #' @examples
 #' # First authenticate with your API key via `gpt3_authenticate('pathtokey')`
 #'
@@ -63,7 +63,7 @@
 #'    , param_model = 'text-babbage-001'
 #'    , param_max_tokens = 50
 #'    , param_temperature = 0.4)
 #' @export
 gpt3_completions = function(prompt_var
                               , id_var
                               , param_output_type = 'complete'
diff --git a/R/gpt3_embeddings.R b/R/gpt3_embeddings.R
index 2a7b7c7..e8c09f6 100644
--- a/R/gpt3_embeddings.R
+++ b/R/gpt3_embeddings.R
@@ -15,7 +15,7 @@
 #'
 #' These vectors can be used for downstream tasks such as (vector) similarity calculations.
 #' @param input_var character vector that contains the texts for which you want to obtain text embeddings from the GPT-3 model
-#' #' @param id_var (optional) character vector that contains the user-defined ids of the prompts. See details.
+#' @param id_var (optional) character vector that contains the user-defined ids of the prompts. See details.
 #' @param param_model a character vector that indicates the [embedding model](https://beta.openai.com/docs/guides/embeddings/embedding-models); one of "text-embedding-ada-002" (default), "text-similarity-ada-001", "text-similarity-curie-001", "text-similarity-babbage-001", "text-similarity-davinci-001"
 #' @return A data.table with the embeddings as separate columns; one row represents one input text. See details.
 #' @examples
@@ -33,7 +33,7 @@
 #' ## Obtain text embeddings for the completion texts:
 #' emb_travelblogs = gpt3_embeddings(input_var = travel_blog_data$gpt3)
 #' dim(emb_travelblogs)
 #' @export
 gpt3_embeddings = function(input_var
                                 , id_var
                                 , param_model = 'text-embedding-ada-002'){
diff --git a/R/gpt3_single_embedding.R b/R/gpt3_single_embedding.R
index ac0cee5..1e475c0 100644
--- a/R/gpt3_single_embedding.R
+++ b/R/gpt3_single_embedding.R
@@ -25,8 +25,8 @@
 #' gpt3_single_embedding(input = sample_string)
 #'
 #' ## Change the model:
-#' #' gpt3_single_embedding(input = sample_string
-#'     , model = 'text-similarity-curie-001')
+#' gpt3_single_embedding(input = sample_string
+#'   , model = 'text-similarity-curie-001')
 #' @export
 gpt3_single_embedding = function(input
                                , model = 'text-embedding-ada-002'
diff --git a/README.md b/README.md
index 78293ac..8500600 100644
--- a/README.md
+++ b/README.md
@@ -186,13 +186,14 @@
 
 **Example 1: making a single chat completion request**
 
-This request "tells" ChatGPT to write a cynical text about human nature (five times) with a sampling temperature of 1.5 and a maximium length of 100 tokens.
+This request "tells" ChatGPT to write a cynical text about human nature (five times) from the perspective of an old, male writer, with a sampling temperature of 1.5 and a maximum length of 100 tokens.
 
 ```{r}
-chatgpt_example_1 = gpt3_single_completion(prompt_input = 'Write a cynical text about human nature:'
-                    , temperature = 0.9
-                    , max_tokens = 100
-                    , n = 5)
+chatgpt_example_1 = chatgpt_single(prompt_role = 'user'
+                                   , prompt_content = 'You are a cynical, old male writer. Write a cynical text about human nature:'
+                                   , temperature = 1.5
+                                   , max_tokens = 100
+                                   , n = 5)
 ```
 
 The returned list contains the actual instruction + output in `chatgpt_example_1[[1]]` and meta information about your request in `chatgpt_example_1[[2]]`.
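As a usage sketch for the updated README example: since the call returns a two-slot list, the generated texts and the request metadata can be inspected separately. This assumes a valid API key has already been registered via `gpt3_authenticate('pathtokey')` (as noted in the package examples), so the request will only succeed with API access:

```{r}
# Run the chat completion request from the README example
chatgpt_example_1 = chatgpt_single(prompt_role = 'user'
                                   , prompt_content = 'You are a cynical, old male writer. Write a cynical text about human nature:'
                                   , temperature = 1.5
                                   , max_tokens = 100
                                   , n = 5)

# Slot [[1]]: the n = 5 completions alongside the instruction
chatgpt_example_1[[1]]

# Slot [[2]]: meta information, including token usage of the request
chatgpt_example_1[[2]]
```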