added ChatGPT support
diff --git a/R/base_urls.R b/R/base_urls.R
index 41cb50f..5b1b9ee 100644
--- a/R/base_urls.R
+++ b/R/base_urls.R
@@ -6,3 +6,4 @@
 url.completions = "https://api.openai.com/v1/completions"
 url.embeddings = "https://api.openai.com/v1/embeddings"
 url.fine_tune = "https://api.openai.com/v1/fine-tunes"
+url.chat_completions = "https://api.openai.com/v1/chat/completions"
diff --git a/R/chatgpt.R b/R/chatgpt.R
new file mode 100644
index 0000000..a8dbab9
--- /dev/null
+++ b/R/chatgpt.R
@@ -0,0 +1,130 @@
+#' Makes bunch chat completion requests to the ChatGPT API
+#'
+#' @description
+#' `chatgpt()` is the package's main function for ChatGPT requests and takes as input a vector of prompts and processes each prompt as per the defined parameters. It extends the `chatgpt_single()` function to allow for bunch processing of requests to the OpenAI GPT API.
+#' @details
+#' The easiest (and intended) use case for this function is to create a data.frame or data.table with variables that contain the prompts to be requested from ChatGPT and a prompt id (see examples below).
+#' For a general guide on chat completion requests, see [https://platform.openai.com/docs/guides/chat/chat-completions-beta](https://platform.openai.com/docs/guides/chat/chat-completions-beta). This function provides you with an R wrapper to send requests with the full range of request parameters as detailed on [https://platform.openai.com/docs/api-reference/chat/create](https://platform.openai.com/docs/api-reference/chat/create) and reproduced below.
+#'
+#'
+#' If `id_var` is not provided, the function will use `prompt_1` ... `prompt_n` as the id variable.
+#'
+#' Parameters not included/supported:
+#'   - `logit_bias`: [https://platform.openai.com/docs/api-reference/chat/create#chat/create-logit_bias](https://platform.openai.com/docs/api-reference/chat/create#chat/create-logit_bias)
+#'   - `stream`: [https://platform.openai.com/docs/api-reference/chat/create#chat/create-stream](https://platform.openai.com/docs/api-reference/chat/create#chat/create-stream)
+#'
+#' @param prompt_role_var character vector that contains the role prompts for the ChatGPT request. Must be one of 'system', 'assistant', 'user' (default), see [https://platform.openai.com/docs/guides/chat](https://platform.openai.com/docs/guides/chat)
+#' @param prompt_content_var character vector that contains the content prompts for the ChatGPT request. This is the key instruction that ChatGPT receives.
+#' @param id_var (optional) character vector that contains the user-defined ids of the prompts. See details.
+#' @param param_model a character vector that indicates the [ChatGPT model](https://platform.openai.com/docs/api-reference/chat/create#chat/create-model) to use; one of "gpt-3.5-turbo" (default), "gpt-3.5-turbo-0301"
+#' @param param_output_type character determining the output provided: "complete" (default), "text" or "meta"
+#' @param param_max_tokens numeric (default: 100) indicating the maximum number of tokens that the completion request should return (from the official API documentation: _The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens)._)
+#' @param param_temperature numeric (default: 1.0) specifying the sampling strategy of the possible completions (from the official API documentation: _What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both._)
+#' @param param_top_p numeric (default: 1) specifying sampling strategy as an alternative to the temperature sampling (from the official API documentation: _An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both._)
+#' @param param_n numeric (default: 1) specifying the number of completions per request (from the official API documentation: _How many chat completion choices to generate for each input message. **Note: Because this parameter generates many completions, it can quickly consume your token quota.** Use carefully and ensure that you have reasonable settings for max_tokens and stop._)
+#' @param param_stop character or character vector (default: NULL) that specifies the character value(s) at which the completion should stop (from the official API documentation: _Up to 4 sequences where the API will stop generating further tokens._)
+#' @param param_presence_penalty numeric (default: 0) between -2.00 and +2.00 that determines how strongly tokens are penalised if they already appear in the text (from the official API documentation: _Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics._). See also: [https://beta.openai.com/docs/api-reference/parameter-details](https://beta.openai.com/docs/api-reference/parameter-details)
+#' @param param_frequency_penalty numeric (default: 0) between -2.00 and +2.00 that determines how strongly tokens are penalised based on how frequently they already appear in the text (from the official API documentation: _Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim._). See also: [https://beta.openai.com/docs/api-reference/parameter-details](https://beta.openai.com/docs/api-reference/parameter-details)
+#'
+#' @return A list with two data tables (if `output_type` is the default "complete"): [[1]] contains the data table with the columns `n` (= the no. of the response out of the `n` responses requested), `prompt_role` (= the role that was set for the prompt), `prompt_content` (= the content that was set for the prompt), `chatgpt_role` (= the role that ChatGPT assumed in the chat completion) and `chatgpt_content` (= the content that ChatGPT provided with its assumed role in the chat completion). [[2]] contains the meta information of the request, including the request id, the parameters of the request and the token usage of the prompt (`tok_usage_prompt`), the completion (`tok_usage_completion`), the total usage (`tok_usage_total`) and the `id` (= the provided `id_var` or its default alternative).
+#'
+#' If `output_type` is "text", only the data table in slot [[1]] is returned.
+#'
+#' If `output_type` is "meta", only the data table in slot [[2]] is returned.
+#' @examples
+#' # First authenticate with your API key via `gpt3_authenticate('pathtokey')`
+#'
+#' # Once authenticated:
+#' # Assuming you have a data.table with 3 different prompts:
+#' dt_prompts = data.table::data.table('prompts_content' = c('What is the meaning of life?', 'Write a tweet about London:', 'Write a research proposal for using AI to fight fake news:')
+#'     , 'prompts_role' = rep('user', 3)
+#'     , 'prompt_id' = c(LETTERS[1:3]))
+#' chatgpt(prompt_role_var = dt_prompts$prompts_role
+#'    , prompt_content_var = dt_prompts$prompts_content
+#'    , id_var = dt_prompts$prompt_id)
+#'
+#' ## With more controls
+#' chatgpt(prompt_role_var = dt_prompts$prompts_role
+#'     , prompt_content_var = dt_prompts$prompts_content
+#'     , id_var = dt_prompts$prompt_id
+#'     , param_max_tokens = 50
+#'     , param_temperature = 0.5
+#'     , param_n = 5)
+#'
+#' ## Reproducible example (deterministic approach)
+#' chatgpt(prompt_role_var = dt_prompts$prompts_role
+#'     , prompt_content_var = dt_prompts$prompts_content
+#'     , id_var = dt_prompts$prompt_id
+#'     , param_max_tokens = 50
+#'     , param_temperature = 0
+#'     , param_n = 3)
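+#'
+#' ## Accessing the output: with the default param_output_type = 'complete',
+#' ## slot [[1]] holds the completions and slot [[2]] the meta information, e.g.:
+#' res = chatgpt(prompt_role_var = dt_prompts$prompts_role
+#'     , prompt_content_var = dt_prompts$prompts_content
+#'     , id_var = dt_prompts$prompt_id)
+#' res[[1]] # completion output
+#' res[[2]] # meta information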
+#'
+#' @export
+chatgpt = function(prompt_role_var
+                   , prompt_content_var
+                   , id_var
+                   , param_output_type = 'complete'
+                   , param_model = 'gpt-3.5-turbo'
+                   , param_max_tokens = 100
+                   , param_temperature = 1.0
+                   , param_top_p = 1
+                   , param_n = 1
+                   , param_stop = NULL
+                   , param_presence_penalty = 0
+                   , param_frequency_penalty = 0){
+
+  data_length = length(prompt_role_var)
+  if(missing(id_var)){
+    data_id = paste0('prompt_', 1:data_length)
+  } else {
+    data_id = id_var
+  }
+
+  empty_list = list()
+  meta_list = list()
+
+  for(i in 1:data_length){
+
+    print(paste0('Request: ', i, '/', data_length))
+
+    row_outcome = chatgpt_single(prompt_role = prompt_role_var[i]
+                                 , prompt_content = prompt_content_var[i]
+                                 , model = param_model
+                                 , output_type = param_output_type
+                                 , max_tokens = param_max_tokens
+                                 , temperature = param_temperature
+                                 , top_p = param_top_p
+                                 , n = param_n
+                                 , stop = param_stop
+                                 , presence_penalty = param_presence_penalty
+                                 , frequency_penalty = param_frequency_penalty)
+
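+    # attach the user-provided (or default) id to both the completion and the meta table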
+    row_outcome[[1]]$id = data_id[i]
+    row_outcome[[2]]$id = data_id[i]
+
+    empty_list[[i]] = row_outcome[[1]]
+    meta_list[[i]] = row_outcome[[2]]
+
+  }
+
+
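+  # if the per-request tables differ in their columns, retry the binding with fill = TRUE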
+  bunch_core_output = try(data.table::rbindlist(empty_list), silent = TRUE)
+  if(inherits(bunch_core_output, "try-error")){
+    bunch_core_output = data.table::rbindlist(empty_list, fill = TRUE)
+  }
+  bunch_meta_output = try(data.table::rbindlist(meta_list), silent = TRUE)
+  if(inherits(bunch_meta_output, "try-error")){
+    bunch_meta_output = data.table::rbindlist(meta_list, fill = TRUE)
+  }
+
+  if(param_output_type == 'complete'){
+    output = list(bunch_core_output
+                  , bunch_meta_output)
+  } else if(param_output_type == 'meta'){
+    output = bunch_meta_output
+  } else if(param_output_type == 'text'){
+    output = bunch_core_output
+  }
+
+  return(output)
+}
diff --git a/R/chatgpt_single.R b/R/chatgpt_single.R
new file mode 100644
index 0000000..c2497a6
--- /dev/null
+++ b/R/chatgpt_single.R
@@ -0,0 +1,141 @@
+#' Makes a single chat completion request to the ChatGPT API
+#'
+#' @description
+#' `chatgpt_single()` sends a single [chat completion request](https://platform.openai.com/docs/guides/chat) to the OpenAI GPT API. This makes it the chat equivalent of sending a single completion request with `gpt3_single_completion()`. You can see the notes on chat vs. completion requests here: [https://platform.openai.com/docs/guides/chat/chat-vs-completions](https://platform.openai.com/docs/guides/chat/chat-vs-completions). This function allows you to specify the role and content for your API call.
+#' @details For a general guide on chat completion requests, see [https://platform.openai.com/docs/api-reference/chat](https://platform.openai.com/docs/api-reference/chat). This function provides you with an R wrapper to send requests with the full range of request parameters as detailed on [https://platform.openai.com/docs/api-reference/chat/create](https://platform.openai.com/docs/api-reference/chat/create) and reproduced below.
+#'
+#' Parameters not included/supported:
+#'   - `logit_bias`: [https://platform.openai.com/docs/api-reference/chat/create#chat/create-logit_bias](https://platform.openai.com/docs/api-reference/chat/create#chat/create-logit_bias)
+#'   - `stream`: [https://platform.openai.com/docs/api-reference/chat/create#chat/create-stream](https://platform.openai.com/docs/api-reference/chat/create#chat/create-stream)
+#'
+#'
+#' @param prompt_role character (default: 'user') that contains the role for the prompt message in the ChatGPT message format. Must be one of 'system', 'assistant' or 'user', see [https://platform.openai.com/docs/guides/chat](https://platform.openai.com/docs/guides/chat)
+#' @param prompt_content character that contains the content for the prompt message in the ChatGPT message format, see [https://platform.openai.com/docs/guides/chat](https://platform.openai.com/docs/guides/chat). This is the key instruction that ChatGPT receives.
+#' @param model a character vector that indicates the [ChatGPT model](https://platform.openai.com/docs/api-reference/chat/create#chat/create-model) to use; one of "gpt-3.5-turbo" (default), "gpt-3.5-turbo-0301"
+#' @param output_type character determining the output provided: "complete" (default), "text" or "meta"
+#' @param max_tokens numeric (default: 100) indicating the maximum number of tokens that the completion request should return (from the official API documentation: _The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens)._)
+#' @param temperature numeric (default: 1.0) specifying the sampling strategy of the possible completions (from the official API documentation: _What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both._)
+#' @param top_p numeric (default: 1) specifying sampling strategy as an alternative to the temperature sampling (from the official API documentation: _An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both._)
+#' @param n numeric (default: 1) specifying the number of completions per request (from the official API documentation: _How many chat completion choices to generate for each input message. **Note: Because this parameter generates many completions, it can quickly consume your token quota.** Use carefully and ensure that you have reasonable settings for max_tokens and stop._)
+#' @param stop character or character vector (default: NULL) that specifies the character value(s) at which the completion should stop (from the official API documentation: _Up to 4 sequences where the API will stop generating further tokens._)
+#' @param presence_penalty numeric (default: 0) between -2.00 and +2.00 that determines how strongly tokens are penalised if they already appear in the text (from the official API documentation: _Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics._). See also: [https://beta.openai.com/docs/api-reference/parameter-details](https://beta.openai.com/docs/api-reference/parameter-details)
+#' @param frequency_penalty numeric (default: 0) between -2.00 and +2.00 that determines how strongly tokens are penalised based on how frequently they already appear in the text (from the official API documentation: _Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim._). See also: [https://beta.openai.com/docs/api-reference/parameter-details](https://beta.openai.com/docs/api-reference/parameter-details)
+#'
+#' @return A list with two data tables (if `output_type` is the default "complete"): [[1]] contains the data table with the columns `n` (= the no. of the response out of the `n` responses requested), `prompt_role` (= the role that was set for the prompt), `prompt_content` (= the content that was set for the prompt), `chatgpt_role` (= the role that ChatGPT assumed in the chat completion) and `chatgpt_content` (= the content that ChatGPT provided with its assumed role in the chat completion). [[2]] contains the meta information of the request, including the request id, the parameters of the request and the token usage of the prompt (`tok_usage_prompt`), the completion (`tok_usage_completion`) and the total usage (`tok_usage_total`).
+#'
+#' If `output_type` is "text", only the data table in slot [[1]] is returned.
+#'
+#' If `output_type` is "meta", only the data table in slot [[2]] is returned.
+#' @examples
+#' # First authenticate with your API key via `gpt3_authenticate('pathtokey')`
+#'
+#' # Once authenticated:
+#'
+#' ## Simple request with defaults:
+#' chatgpt_single(prompt_content = 'You are a teacher: explain to me what science is')
+#'
+#' ## Instruct ChatGPT to write ten research ideas of max. 150 tokens with some controls:
+#' chatgpt_single(prompt_role = 'user', prompt_content = 'Write a research idea about using text data to understand human behaviour:'
+#'    , temperature = 0.8
+#'    , n = 10
+#'    , max_tokens = 150)
+#'
+#' ## For fully reproducible results, we need `temperature = 0`, e.g.:
+#' chatgpt_single(prompt_content = 'Finish this sentence:\\n There is no easier way to learn R than'
+#'     , temperature = 0.0
+#'     , max_tokens = 50)
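+#'
+#' ## To retrieve only the completion table (slot [[1]]), set output_type = 'text', e.g.:
+#' chatgpt_single(prompt_content = 'What is the capital of France?'
+#'     , output_type = 'text')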
+#'
+#' @export
+chatgpt_single = function(prompt_role = 'user'
+                          , prompt_content
+                          , model = 'gpt-3.5-turbo'
+                          , output_type = 'complete'
+                          , max_tokens = 100
+                          , temperature = 1.0
+                          , top_p = 1
+                          , n = 1
+                          , stop = NULL
+                          , presence_penalty = 0
+                          , frequency_penalty = 0
+                          ){
+
+  # check for request issues with `n` when running the deterministic model
+
+  if(temperature == 0 && n > 1){
+    n = 1
+    message('You are running the deterministic model, so `n` was set to 1 to avoid unnecessary token quota usage.')
+  }
+
+  messages = data.frame(role = prompt_role
+                        , content = prompt_content)
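+  # with encode = "json", httr serialises this one-row data.frame into the
+  # array of message objects that the chat endpoint expects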
+
+  parameter_list = list(messages = messages
+                        , model = model
+                        , max_tokens = max_tokens
+                        , temperature = temperature
+                        , top_p = top_p
+                        , n = n
+                        , stop = stop
+                        , presence_penalty = presence_penalty
+                        , frequency_penalty = frequency_penalty
+                        )
+
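+  # send the request; `api_key` is expected to be set beforehand via gpt3_authenticate()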
+  request_base = httr::POST(url = url.chat_completions
+                            , body = parameter_list
+                            , httr::add_headers(Authorization = paste("Bearer", api_key))
+                            , encode = "json")
+
+  request_content = httr::content(request_base)
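+  # note: if the request failed, the parsed content usually carries an `error`
+  # element instead of `choices`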
+
+  if(n == 1){
+    core_output = data.table::data.table('n' = 1
+                                         , 'prompt_role' = prompt_role
+                                         , 'prompt_content' = prompt_content
+                                         , 'chatgpt_role' = request_content$choices[[1]]$message$role
+                                         , 'chatgpt_content' = request_content$choices[[1]]$message$content)
+  } else if(n > 1){
+
+    core_output = data.table::data.table('n' = 1:n
+                                         , 'prompt_role' = rep(prompt_role, n)
+                                         , 'prompt_content' = rep(prompt_content, n)
+                                         , 'chatgpt_role' = rep("", n)
+                                         , 'chatgpt_content' = rep("", n))
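+    # pre-allocate one row per requested choice, then fill in role and content below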
+
+    for(i in 1:n){
+      core_output$chatgpt_role[i] = request_content$choices[[i]]$message$role
+      core_output$chatgpt_content[i] = request_content$choices[[i]]$message$content
+    }
+
+  }
+
+
+  meta_output = data.table::data.table('request_id' = request_content$id
+                                       , 'object' = request_content$object
+                                       , 'model' = request_content$model
+                                       , 'param_prompt_role' = prompt_role
+                                       , 'param_prompt_content' = prompt_content
+                                       , 'param_model' = model
+                                       , 'param_max_tokens' = max_tokens
+                                       , 'param_temperature' = temperature
+                                       , 'param_top_p' = top_p
+                                       , 'param_n' = n
+                                       , 'param_stop' = if(is.null(stop)) NA_character_ else paste(stop, collapse = ', ') # NULL would silently drop the column; a vector would recycle rows
+                                       , 'param_presence_penalty' = presence_penalty
+                                       , 'param_frequency_penalty' = frequency_penalty
+                                       , 'tok_usage_prompt' = request_content$usage$prompt_tokens
+                                       , 'tok_usage_completion' = request_content$usage$completion_tokens
+                                       , 'tok_usage_total' = request_content$usage$total_tokens)
+
+
+  if(output_type == 'complete'){
+    output = list(core_output
+                  , meta_output)
+  } else if(output_type == 'meta'){
+    output = meta_output
+  } else if(output_type == 'text'){
+    output = core_output
+  }
+
+  return(output)
+
+}
diff --git a/R/gpt3_completions.R b/R/gpt3_completions.R
index 7414e35..0c86135 100644
--- a/R/gpt3_completions.R
+++ b/R/gpt3_completions.R
@@ -1,7 +1,7 @@
 #' Makes bunch completion requests to the GPT-3 API
 #'
 #' @description
-#' `gpt3_completions()` is the package's main function for rquests and takes as input a vector of prompts and processes each prompt as per the defined parameters. It extends the `gpt3_single_completion()` function to allow for bunch processing of requests to the Open AI GPT-3 API.
+#' `gpt3_completions()` is the package's main function for requests and takes as input a vector of prompts and processes each prompt as per the defined parameters. It extends the `gpt3_single_completion()` function to allow for bunch processing of requests to the OpenAI GPT-3 API.
 #' @details
 #' The easiest (and intended) use case for this function is to create a data.frame or data.table with variables that contain the prompts to be requested from GPT-3 and a prompt id (see examples below).
 #' For a general guide on the completion requests, see [https://beta.openai.com/docs/guides/completion](https://beta.openai.com/docs/guides/completion). This function provides you with an R wrapper to send requests with the full range of request parameters as detailed on [https://beta.openai.com/docs/api-reference/completions](https://beta.openai.com/docs/api-reference/completions) and reproduced below.
diff --git a/R/request_prices.R b/R/request_prices.R
deleted file mode 100644
index 9e1ec54..0000000
--- a/R/request_prices.R
+++ /dev/null
@@ -1,9 +0,0 @@
-#' Contains the pricing for completion requests (see: [https://openai.com/api/pricing/#faq-completions-pricing](https://openai.com/api/pricing/#faq-completions-pricing))
-#'
-#' @description
-#' These are the prices listed for 1k tokens of requests for the various models. These are needed for the `rgpt3_cost_estimate(...)` function.
-#' @export
-price_base_davinci = 0.02
-price_base_curie = 0.002
-price_base_babbage = 0.0005
-price_base_ada = 0.0004