updated docs for v0.3
diff --git a/DESCRIPTION b/DESCRIPTION
index 42235d9..1ce09c2 100644
--- a/DESCRIPTION
+++ b/DESCRIPTION
@@ -1,6 +1,6 @@
 Package: rgpt3
 Title: Making requests from R to the GPT-3 API
-Version: 0.2.1
+Version: 0.3
 Authors@R: 
     person("Bennett", "Kleinberg",  email = "bennett.kleinberg@tilburguniversity.edu", role = c("aut", "cre"))
 Description: With this package you can interact with the powerful GPT-3 models in two ways: making requests for completions (e.g., ask GPT-3 to write a novel, classify text, answer questions, etc.) and retrieving text embedding representations (i.e., obtain a low-dimensional vector representation that allows for downstream analyses). You need to authenticate with your own OpenAI API key, and all requests you make count towards your token quota. For completion and embedding requests, two functions each allow you to send either single requests (`gpt3_single_completion()` and `gpt3_single_embedding()`) or bulk requests that use the vectorised structure (`gpt3_completions()` and `gpt3_embeddings()`).
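
A minimal usage sketch of the two request modes the Description refers to (single vs. vectorised requests). The authentication helper gpt3_authenticate() and the argument names prompt_input, prompt_var and id_var are assumptions for illustration and are not shown in this diff:

library(rgpt3)

# assumed helper: point it to a text file containing your OpenAI API key
gpt3_authenticate("access_key.txt")

# single completion request (argument name prompt_input is assumed)
single_out <- gpt3_single_completion(prompt_input = "Write a haiku about autumn:")

# vectorised ("bulk") completion requests over several prompts at once
prompts <- data.frame(id = 1:2,
                      text = c("Classify the sentiment: 'I love R.'",
                               "Classify the sentiment: 'This broke again.'"))
bulk_out <- gpt3_completions(prompt_var = prompts$text, id_var = prompts$id)

# embeddings work analogously via gpt3_single_embedding() and gpt3_embeddings()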
diff --git a/man/gpt3_completions.Rd b/man/gpt3_completions.Rd
index 4fa2416..f1dc978 100644
--- a/man/gpt3_completions.Rd
+++ b/man/gpt3_completions.Rd
@@ -40,7 +40,7 @@
 
 \item{param_n}{numeric (default: 1) specifying the number of completions per request (from the official API documentation: \emph{How many completions to generate for each prompt. \strong{Note: Because this parameter generates many completions, it can quickly consume your token quota.} Use carefully and ensure that you have reasonable settings for max_tokens and stop.})}
 
-\item{param_logprobs}{numeric (default: NULL) (from the official API documentation: \emph{Include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. The maximum value for logprobs is 5. If you need more than this, please contact support@openai.com and describe your use case.})}
+\item{param_logprobs}{numeric (default: NULL) (from the official API documentation: \emph{Include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. The maximum value for logprobs is 5. If you need more than this, please go to \url{https://help.openai.com/en/} and describe your use case.})}
 
 \item{param_stop}{character or character vector (default: NULL) that specifies after which character value(s) the completion should end (from the official API documentation: \emph{Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.})}
 
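
A hedged sketch of how the param_n, param_logprobs and param_stop arguments documented above might be combined in one gpt3_completions() call; prompt_var and id_var are assumed argument names, and all other parameters are left at their defaults:

# two completions per prompt, log probabilities for the 3 most likely tokens,
# and generation stops at the first newline (which is not returned)
my_prompts <- data.frame(id = 1:2,
                         text = c("Finish the proverb: A stitch in time",
                                  "Finish the proverb: Many hands make"))
out <- gpt3_completions(prompt_var     = my_prompts$text,
                        id_var         = my_prompts$id,
                        param_n        = 2,
                        param_logprobs = 3,    # the API allows at most 5
                        param_stop     = "\n")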
diff --git a/man/gpt3_single_completion.Rd b/man/gpt3_single_completion.Rd
index ae540ca..1b51c00 100644
--- a/man/gpt3_single_completion.Rd
+++ b/man/gpt3_single_completion.Rd
@@ -37,7 +37,7 @@
 
 \item{n}{numeric (default: 1) specifying the number of completions per request (from the official API documentation: \emph{How many completions to generate for each prompt. \strong{Note: Because this parameter generates many completions, it can quickly consume your token quota.} Use carefully and ensure that you have reasonable settings for max_tokens and stop.})}
 
-\item{logprobs}{numeric (default: NULL) (from the official API documentation: \emph{Include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. The maximum value for logprobs is 5. If you need more than this, please contact support@openai.com and describe your use case.})}
+\item{logprobs}{numeric (default: NULL) (from the official API documentation: \emph{Include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. The maximum value for logprobs is 5. If you need more than this, please go to \url{https://help.openai.com/en/} and describe your use case.})}
 
 \item{stop}{character or character vector (default: NULL) that specifies after which character value(s) the completion should end (from the official API documentation: \emph{Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.})}
 
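
The single-request variant takes the same three controls under the shorter names n, logprobs and stop; a hedged example, where the prompt argument name prompt_input is an assumption:

res <- gpt3_single_completion(prompt_input = "List three colours:",
                              n        = 1,
                              logprobs = 5,               # maximum the API permits
                              stop     = c("\n\n", "4.")) # up to 4 stop sequences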
diff --git a/rgpt3_0.2.1.pdf b/rgpt3_0.3.pdf
similarity index 71%
rename from rgpt3_0.2.1.pdf
rename to rgpt3_0.3.pdf
index 41b0303..3afadb6 100644
--- a/rgpt3_0.2.1.pdf
+++ b/rgpt3_0.3.pdf
Binary files differ