added ChatGPT support
diff --git a/CITATION.cff b/CITATION.cff
index 9a7a921..854ece2 100644
--- a/CITATION.cff
+++ b/CITATION.cff
@@ -4,8 +4,8 @@
 - family-names: "Kleinberg"
   given-names: "Bennett"
   orcid: "https://orcid.org/0000-0003-1658-9086"
-title: "rgpt3: Making requests from R to the GPT-3 API"
-version: 0.3.1
-date-released: 2022-12-23
+title: "rgpt3: Making requests from R to the GPT-3 API and ChatGPT"
+version: 0.4
+date-released: 2023-03-05
 url: "https://github.com/ben-aaron188/rgpt3"
 doi: "10.5281/zenodo.7327667"
diff --git a/DESCRIPTION b/DESCRIPTION
index d613c67..7b75663 100644
--- a/DESCRIPTION
+++ b/DESCRIPTION
@@ -1,14 +1,14 @@
 Package: rgpt3
-Title: Making requests from R to the GPT-3 API
-Version: 0.3.1
+Title: Making requests from R to the GPT-3 API and ChatGPT
+Version: 0.4
 Authors@R: 
     person("Bennett", "Kleinberg",  email = "bennett.kleinberg@tilburguniversity.edu", role = c("aut", "cre"))
-Description: With this package you can interact with the powerful GPT-3 models in two ways: making requests for completions (e.g., ask GPT-3 to write a novel, classify text, answer questions, etc.) and retrieving text embeddings representations (i.e., obtain a low-dimensional vector representation that allows for downstream analyses). You need to authenticate with your own Open AI API key and all requests you make count towards you token quota. For completion requests and embeddings requests, two functions each allow you to send either sinlge requests (`gpt3_single_request()` and `gpt3_single_embedding()`) or send bunch requests where the vectorised structure is used (`gpt3_requests()` and `gpt3_embeddings()`).
+Description: With this package you can interact with the powerful GPT-3/GPT-3.5 models in two ways: making requests for completions (e.g., ask GPT-3 to write a novel, classify text, answer questions, etc.) and retrieving text embedding representations (i.e., obtain a low-dimensional vector representation that allows for downstream analyses). You can also interact with ChatGPT. You need to authenticate with your own Open AI API key and all requests you make count towards your token quota. For completion requests and embeddings requests, two functions each allow you to send either single requests (`gpt3_single_completion()` and `gpt3_single_embedding()`) or bunch requests where the vectorised structure is used (`gpt3_completions()` and `gpt3_embeddings()`).
 URL: https://github.com/ben-aaron188/rgpt3
 License: GPL (>= 3)
 Encoding: UTF-8
 Roxygen: list(markdown = TRUE)
-RoxygenNote: 7.2.1
+RoxygenNote: 7.2.3
 Imports: 
     data.table,
     httr
diff --git a/NAMESPACE b/NAMESPACE
index cc8834a..7705a53 100644
--- a/NAMESPACE
+++ b/NAMESPACE
@@ -1,11 +1,12 @@
 # Generated by roxygen2: do not edit by hand
 
+export(chatgpt)
+export(chatgpt_single)
 export(gpt3_authenticate)
 export(gpt3_completions)
 export(gpt3_embeddings)
 export(gpt3_single_completion)
 export(gpt3_single_embedding)
 export(gpt3_test_completion)
-export(price_base_davinci)
 export(to_numeric)
 export(url.completions)
diff --git a/R/base_urls.R b/R/base_urls.R
index 41cb50f..5b1b9ee 100644
--- a/R/base_urls.R
+++ b/R/base_urls.R
@@ -6,3 +6,4 @@
 url.completions = "https://api.openai.com/v1/completions"
 url.embeddings = "https://api.openai.com/v1/embeddings"
 url.fine_tune = "https://api.openai.com/v1/fine-tunes"
+url.chat_completions = "https://api.openai.com/v1/chat/completions"
diff --git a/R/chatgpt.R b/R/chatgpt.R
new file mode 100644
index 0000000..a8dbab9
--- /dev/null
+++ b/R/chatgpt.R
@@ -0,0 +1,130 @@
+#' Makes bunch chat completion requests to the ChatGPT API
+#'
+#' @description
+#' `chatgpt()` is the package's main function for the ChatGPT functionality and takes as input a vector of prompts and processes each prompt as per the defined parameters. It extends the `chatgpt_single()` function to allow for bunch processing of requests to the Open AI GPT API.
+#' @details
+#' The easiest (and intended) use case for this function is to create a data.frame or data.table with variables that contain the prompts to be requested from ChatGPT and a prompt id (see examples below).
+#' For a general guide on the chat completion requests, see [https://platform.openai.com/docs/guides/chat/chat-completions-beta](https://platform.openai.com/docs/guides/chat/chat-completions-beta). This function provides you with an R wrapper to send requests with the full range of request parameters as detailed on [https://platform.openai.com/docs/api-reference/chat/create](https://platform.openai.com/docs/api-reference/chat/create) and reproduced below.
+#'
+#'
+#' If `id_var` is not provided, the function will use `prompt_1` ... `prompt_n` as the id variable.
+#'
+#' Parameters not included/supported:
+#'   - `logit_bias`: [https://platform.openai.com/docs/api-reference/chat/create#chat/create-logit_bias](https://platform.openai.com/docs/api-reference/chat/create#chat/create-logit_bias)
+#'   - `stream`: [https://platform.openai.com/docs/api-reference/chat/create#chat/create-stream](https://platform.openai.com/docs/api-reference/chat/create#chat/create-stream)
+#'
+#' @param prompt_role_var character vector that contains the role prompts to the ChatGPT request. Must be one of 'system', 'assistant' or 'user', see [https://platform.openai.com/docs/guides/chat](https://platform.openai.com/docs/guides/chat)
+#' @param prompt_content_var character vector that contains the content prompts to the ChatGPT request. This is the key instruction that ChatGPT receives.
+#' @param id_var (optional) character vector that contains the user-defined ids of the prompts. See details.
+#' @param param_model a character vector that indicates the [ChatGPT model](https://platform.openai.com/docs/api-reference/chat/create#chat/create-model) to use; one of "gpt-3.5-turbo" (default), "gpt-3.5-turbo-0301"
+#' @param param_output_type character determining the output provided: "complete" (default), "text" or "meta"
+#' @param param_max_tokens numeric (default: 100) indicating the maximum number of tokens that the completion request should return (from the official API documentation: _The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens)._)
+#' @param param_temperature numeric (default: 1.0) specifying the sampling strategy of the possible completions (from the official API documentation: _What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both._)
+#' @param param_top_p numeric (default: 1) specifying sampling strategy as an alternative to the temperature sampling (from the official API documentation: _An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both._)
+#' @param param_n numeric (default: 1) specifying the number of completions per request (from the official API documentation: _How many chat completion choices to generate for each input message. **Note: Because this parameter generates many completions, it can quickly consume your token quota.** Use carefully and ensure that you have reasonable settings for max_tokens and stop._)
+#' @param param_stop character or character vector (default: NULL) that specifies after which character value(s) the completion should end (from the official API documentation: _Up to 4 sequences where the API will stop generating further tokens._)
+#' @param param_presence_penalty numeric (default: 0) between -2.00  and +2.00 to determine the penalisation of repetitiveness if a token already exists (from the official API documentation: _Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics._). See also: [https://beta.openai.com/docs/api-reference/parameter-details](https://beta.openai.com/docs/api-reference/parameter-details)
+#' @param param_frequency_penalty numeric (default: 0) between -2.00  and +2.00 to determine the penalisation of repetitiveness based on the frequency of a token in the text already (from the official API documentation: _Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim._). See also: [https://beta.openai.com/docs/api-reference/parameter-details](https://beta.openai.com/docs/api-reference/parameter-details)
+#'
+#' @return A list with two data tables (if `output_type` is the default "complete"): [[1]] contains the data table with the columns `n` (= the no. of `n` responses requested), `prompt_role` (= the role that was set for the prompt), `prompt_content` (= the content that was set for the prompt), `chatgpt_role` (= the role that ChatGPT assumed in the chat completion) and `chatgpt_content` (= the content that ChatGPT provided with its assumed role in the chat completion). [[2]] contains the meta information of the request, including the request id, the parameters of the request and the token usage of the prompt (`tok_usage_prompt`), the completion (`tok_usage_completion`), the total usage (`tok_usage_total`) and the `id` (= the provided `id_var` or its default alternative).
+#'
+#' If `output_type` is "text", only the data table in slot [[1]] is returned.
+#'
+#' If `output_type` is "meta", only the data table in slot [[2]] is returned.
+#' @examples
+#' # First authenticate with your API key via `gpt3_authenticate('pathtokey')`
+#'
+#' # Once authenticated:
+#' # Assuming you have a data.table with 3 different prompts:
+#' dt_prompts = data.table::data.table('prompts_content' = c('What is the meaning of life?', 'Write a tweet about London:', 'Write a research proposal for using AI to fight fake news:')
+#'     , 'prompts_role' = rep('user', 3)
+#'     , 'prompt_id' = c(LETTERS[1:3]))
+#' chatgpt(prompt_role_var = dt_prompts$prompts_role
+#'     , prompt_content_var = dt_prompts$prompts_content
+#'     , id_var = dt_prompts$prompt_id)
+#'
+#' ## With more controls
+#' chatgpt(prompt_role_var = dt_prompts$prompts_role
+#'     , prompt_content_var = dt_prompts$prompts_content
+#'     , id_var = dt_prompts$prompt_id
+#'     , param_max_tokens = 50
+#'     , param_temperature = 0.5
+#'     , param_n = 5)
+#'
+#' ## Reproducible example (deterministic approach)
+#' chatgpt(prompt_role_var = dt_prompts$prompts_role
+#'     , prompt_content_var = dt_prompts$prompts_content
+#'     , id_var = dt_prompts$prompt_id
+#'     , param_max_tokens = 50
+#'     , param_temperature = 0
+#'     , param_n = 3)
+#'
+#' @export
+chatgpt = function(prompt_role_var
+                   , prompt_content_var
+                   , id_var
+                   , param_output_type = 'complete'
+                   , param_model = 'gpt-3.5-turbo'
+                   , param_max_tokens = 100
+                   , param_temperature = 1.0
+                   , param_top_p = 1
+                   , param_n = 1
+                   , param_stop = NULL
+                   , param_presence_penalty = 0
+                   , param_frequency_penalty = 0){
+
+  data_length = length(prompt_role_var)
+  if(missing(id_var)){
+    data_id = paste0('prompt_', 1:data_length)
+  } else {
+    data_id = id_var
+  }
+
+  empty_list = list()
+  meta_list = list()
+
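+  # process each prompt sequentially and collect the core and meta outputs per request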
+  for(i in 1:data_length){
+
+    print(paste0('Request: ', i, '/', data_length))
+
+    row_outcome = chatgpt_single(prompt_role = prompt_role_var[i]
+                                 , prompt_content = prompt_content_var[i]
+                                 , model = param_model
+                                 , output_type = param_output_type
+                                 , max_tokens = param_max_tokens
+                                 , temperature = param_temperature
+                                 , top_p = param_top_p
+                                 , n = param_n
+                                 , stop = param_stop
+                                 , presence_penalty = param_presence_penalty
+                                 , frequency_penalty = param_frequency_penalty)
+
+    row_outcome[[1]]$id = data_id[i]
+    row_outcome[[2]]$id = data_id[i]
+
+    empty_list[[i]] = row_outcome[[1]]
+    meta_list[[i]] = row_outcome[[2]]
+
+  }
+
+
+  bunch_core_output = try(data.table::rbindlist(empty_list), silent = T)
+  if("try-error" %in% class(bunch_core_output)){
+    bunch_core_output = data.table::rbindlist(empty_list, fill = T)
+  }
+  bunch_meta_output = try(data.table::rbindlist(meta_list), silent = T)
+  if("try-error" %in% class(bunch_meta_output)){
+    bunch_meta_output = data.table::rbindlist(meta_list, fill = T)
+  }
+
+  if(param_output_type == 'complete'){
+    output = list(bunch_core_output
+                  , bunch_meta_output)
+  } else if(param_output_type == 'meta'){
+    output = bunch_meta_output
+  } else if(param_output_type == 'text'){
+    output = bunch_core_output
+  }
+
+  return(output)
+}
diff --git a/R/chatgpt_single.R b/R/chatgpt_single.R
new file mode 100644
index 0000000..c2497a6
--- /dev/null
+++ b/R/chatgpt_single.R
@@ -0,0 +1,141 @@
+#' Makes a single chat completion request to the ChatGPT API
+#'
+#' @description
+#' `chatgpt_single()` sends a single [chat completion request](https://platform.openai.com/docs/guides/chat) to the Open AI GPT API. This makes it the chat equivalent of sending single completion requests with `gpt3_single_completion()`. You can see the notes on chat vs. completion requests here: [https://platform.openai.com/docs/guides/chat/chat-vs-completions](https://platform.openai.com/docs/guides/chat/chat-vs-completions). This function allows you to specify the role and content for your API call.
+#' @details For a general guide on the chat completion requests, see [https://platform.openai.com/docs/api-reference/chat](https://platform.openai.com/docs/api-reference/chat). This function provides you with an R wrapper to send requests with the full range of request parameters as detailed on [https://platform.openai.com/docs/api-reference/chat/create](https://platform.openai.com/docs/api-reference/chat/create) and reproduced below.
+#'
+#' Parameters not included/supported:
+#'   - `logit_bias`: [https://platform.openai.com/docs/api-reference/chat/create#chat/create-logit_bias](https://platform.openai.com/docs/api-reference/chat/create#chat/create-logit_bias)
+#'   - `stream`: [https://platform.openai.com/docs/api-reference/chat/create#chat/create-stream](https://platform.openai.com/docs/api-reference/chat/create#chat/create-stream)
+#'
+#'
+#' @param prompt_role character (default: 'user') that contains the role for the prompt message in the ChatGPT message format. Must be one of 'system', 'assistant', 'user' (default), see [https://platform.openai.com/docs/guides/chat](https://platform.openai.com/docs/guides/chat)
+#' @param prompt_content character that contains the content for the prompt message in the ChatGPT message format, see [https://platform.openai.com/docs/guides/chat](https://platform.openai.com/docs/guides/chat). This is the key instruction that ChatGPT receives.
+#' @param model a character vector that indicates the [ChatGPT model](https://platform.openai.com/docs/api-reference/chat/create#chat/create-model) to use; one of "gpt-3.5-turbo" (default), "gpt-3.5-turbo-0301"
+#' @param output_type character determining the output provided: "complete" (default), "text" or "meta"
+#' @param max_tokens numeric (default: 100) indicating the maximum number of tokens that the completion request should return (from the official API documentation: _The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens)._)
+#' @param temperature numeric (default: 1.0) specifying the sampling strategy of the possible completions (from the official API documentation: _What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both._)
+#' @param top_p numeric (default: 1) specifying sampling strategy as an alternative to the temperature sampling (from the official API documentation: _An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both._)
+#' @param n numeric (default: 1) specifying the number of completions per request (from the official API documentation: _How many chat completion choices to generate for each input message. **Note: Because this parameter generates many completions, it can quickly consume your token quota.** Use carefully and ensure that you have reasonable settings for max_tokens and stop._)
+#' @param stop character or character vector (default: NULL) that specifies after which character value(s) the completion should end (from the official API documentation: _Up to 4 sequences where the API will stop generating further tokens._)
+#' @param presence_penalty numeric (default: 0) between -2.00  and +2.00 to determine the penalisation of repetitiveness if a token already exists (from the official API documentation: _Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics._). See also: [https://beta.openai.com/docs/api-reference/parameter-details](https://beta.openai.com/docs/api-reference/parameter-details)
+#' @param frequency_penalty numeric (default: 0) between -2.00  and +2.00 to determine the penalisation of repetitiveness based on the frequency of a token in the text already (from the official API documentation: _Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim._). See also: [https://beta.openai.com/docs/api-reference/parameter-details](https://beta.openai.com/docs/api-reference/parameter-details)
+#'
+#' @return A list with two data tables (if `output_type` is the default "complete"): [[1]] contains the data table with the columns `n` (= the no. of `n` responses requested), `prompt_role` (= the role that was set for the prompt), `prompt_content` (= the content that was set for the prompt), `chatgpt_role` (= the role that ChatGPT assumed in the chat completion) and `chatgpt_content` (= the content that ChatGPT provided with its assumed role in the chat completion). [[2]] contains the meta information of the request, including the request id, the parameters of the request and the token usage of the prompt (`tok_usage_prompt`), the completion (`tok_usage_completion`) and the total usage (`tok_usage_total`).
+#'
+#' If `output_type` is "text", only the data table in slot [[1]] is returned.
+#'
+#' If `output_type` is "meta", only the data table in slot [[2]] is returned.
+#' @examples
+#' # First authenticate with your API key via `gpt3_authenticate('pathtokey')`
+#'
+#' # Once authenticated:
+#'
+#' ## Simple request with defaults:
+#' chatgpt_single(prompt_content = 'You are a teacher: explain to me what science is')
+#'
+#' ## Instruct ChatGPT to write ten research ideas of max. 150 tokens with some controls:
+#' chatgpt_single(prompt_role = 'user', prompt_content = 'Write a research idea about using text data to understand human behaviour:'
+#'    , temperature = 0.8
+#'    , n = 10
+#'    , max_tokens = 150)
+#'
+#' ## For fully reproducible results, we need `temperature = 0`, e.g.:
+#' chatgpt_single(prompt_content = 'Finish this sentence:\n There is no easier way to learn R than'
+#'     , temperature = 0.0
+#'     , max_tokens = 50)
+#'
+#' @export
+chatgpt_single = function(prompt_role = 'user'
+                          , prompt_content
+                          , model = 'gpt-3.5-turbo'
+                          , output_type = 'complete'
+                          , max_tokens = 100
+                          , temperature = 1.0
+                          , top_p = 1
+                          , n = 1
+                          , stop = NULL
+                          , presence_penalty = 0
+                          , frequency_penalty = 0
+                          ){
+
+  # check for request issues with `n`: if temperature = 0, the output is deterministic, so n > 1 would waste tokens
+
+  if(temperature == 0 & n > 1){
+    n = 1
+    message('You are running the deterministic model, so `n` was set to 1 to avoid unnecessary token quota usage.')
+  }
+
+  messages = data.frame(role = prompt_role
+                        , content = prompt_content)
+
+  parameter_list = list(messages = messages
+                        , model = model
+                        , max_tokens = max_tokens
+                        , temperature = temperature
+                        , top_p = top_p
+                        , n = n
+                        , stop = stop
+                        , presence_penalty = presence_penalty
+                        , frequency_penalty = frequency_penalty
+                        )
+
+  request_base = httr::POST(url = url.chat_completions
+                            , body = parameter_list
+                            , httr::add_headers(Authorization = paste("Bearer", api_key))
+                            , encode = "json")
+
+  request_content = httr::content(request_base)
+
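+  # reshape the API response into one output row per requested completion (n)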
+  if(n == 1){
+    core_output = data.table::data.table('n' = 1
+                                         , 'prompt_role' = prompt_role
+                                         , 'prompt_content' = prompt_content
+                                         , 'chatgpt_role' = request_content$choices[[1]]$message$role
+                                         , 'chatgpt_content' = request_content$choices[[1]]$message$content)
+  } else if(n > 1){
+
+    core_output = data.table::data.table('n' = 1:n
+                                         , 'prompt_role' = rep(prompt_role, n)
+                                         , 'prompt_content' = rep(prompt_content, n)
+                                         , 'chatgpt_role' = rep("", n)
+                                         , 'chatgpt_content' = rep("", n))
+
+    for(i in 1:n){
+      core_output$chatgpt_role[i] = request_content$choices[[i]]$message$role
+      core_output$chatgpt_content[i] = request_content$choices[[i]]$message$content
+    }
+
+  }
+
+
+  meta_output = data.table::data.table('request_id' = request_content$id
+                                       , 'object' = request_content$object
+                                       , 'model' = request_content$model
+                                       , 'param_prompt_role' = prompt_role
+                                       , 'param_prompt_content' = prompt_content
+                                       , 'param_model' = model
+                                       , 'param_max_tokens' = max_tokens
+                                       , 'param_temperature' = temperature
+                                       , 'param_top_p' = top_p
+                                       , 'param_n' = n
+                                       , 'param_stop' = stop
+                                       , 'param_presence_penalty' = presence_penalty
+                                       , 'param_frequency_penalty' = frequency_penalty
+                                       , 'tok_usage_prompt' = request_content$usage$prompt_tokens
+                                       , 'tok_usage_completion' = request_content$usage$completion_tokens
+                                       , 'tok_usage_total' = request_content$usage$total_tokens)
+
+
+  if(output_type == 'complete'){
+    output = list(core_output
+                  , meta_output)
+  } else if(output_type == 'meta'){
+    output = meta_output
+  } else if(output_type == 'text'){
+    output = core_output
+  }
+
+  return(output)
+
+}
diff --git a/R/gpt3_completions.R b/R/gpt3_completions.R
index 7414e35..0c86135 100644
--- a/R/gpt3_completions.R
+++ b/R/gpt3_completions.R
@@ -1,7 +1,7 @@
 #' Makes bunch completion requests to the GPT-3 API
 #'
 #' @description
-#' `gpt3_completions()` is the package's main function for rquests and takes as input a vector of prompts and processes each prompt as per the defined parameters. It extends the `gpt3_single_completion()` function to allow for bunch processing of requests to the Open AI GPT-3 API.
+#' `gpt3_completions()` is the package's main function for requests and takes as input a vector of prompts and processes each prompt as per the defined parameters. It extends the `gpt3_single_completion()` function to allow for bunch processing of requests to the Open AI GPT-3 API.
 #' @details
 #' The easiest (and intended) use case for this function is to create a data.frame or data.table with variables that contain the prompts to be requested from GPT-3 and a prompt id (see examples below).
 #' For a general guide on the completion requests, see [https://beta.openai.com/docs/guides/completion](https://beta.openai.com/docs/guides/completion). This function provides you with an R wrapper to send requests with the full range of request parameters as detailed on [https://beta.openai.com/docs/api-reference/completions](https://beta.openai.com/docs/api-reference/completions) and reproduced below.
diff --git a/R/request_prices.R b/R/request_prices.R
deleted file mode 100644
index 9e1ec54..0000000
--- a/R/request_prices.R
+++ /dev/null
@@ -1,9 +0,0 @@
-#' Contains the pricing for completion requests (see: [https://openai.com/api/pricing/#faq-completions-pricing](https://openai.com/api/pricing/#faq-completions-pricing))
-#'
-#' @description
-#' These are the prices listed for 1k tokens of requests for the various models. These are needed for the `rgpt3_cost_estimate(...)` function.
-#' @export
-price_base_davinci = 0.02
-price_base_curie = 0.002
-price_base_babbage = 0.0005
-price_base_ada = 0.0004
diff --git a/README.md b/README.md
index 490aac8..78293ac 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
 
 # `rgpt3` 
 
-**Making requests from R to the GPT-3 API**
+**Making requests from R to the GPT-3 API and to ChatGPT**
 
 
 _Note: this is a "community-maintained" package (i.e., not the official one). For the official OpenAI libraries (python and node.js endpoints), go to [https://beta.openai.com/docs/libraries/python-bindings](https://beta.openai.com/docs/libraries/python-bindings)_
@@ -75,18 +75,21 @@
 
 `rgpt3` currently is structured into the following functions:
 
-- Making requests (i.e. prompting the model)
+- Making _standard_ requests (i.e. prompting the GPT models)
     - single requests: `gpt3_single_completion()`
     - make multiple prompt-based requests from a source data.frame or data.table: `gpt3_completions()`
+- Interacting with ChatGPT
+    - single chat requests: `chatgpt_single()`
+    - make multiple prompt-based chat requests from a source data.frame or data.table: `chatgpt()`
 - Obtain embeddings
     - obtain embeddings for a single text input: `gpt3_single_embedding`
     - obtain embeddings for multiple texts from a source data.frame or data.table: `gpt3_embeddings()`
 
-The basic principle is that you can (and should) best use the more extensible `gpt3_completions()` and `gpt3_embeddings()` functions as these allow you to make use of R's vectorisation. These do work even if you have only one prompt or text as input (see below). The difference between the extensible functions and their "single" counterparts is the input format.
+The basic principle is that you can (and should) use the more extensible `gpt3_completions()`, `chatgpt()` and `gpt3_embeddings()` functions as these allow you to make use of R's vectorisation. These work even if you have only one prompt or text as input (see below). The difference between the extensible functions and their "single" counterparts is the input format.
 
 This R package gives you full control over the parameters that the API contains. You can find these in detail in the package documentation and help files (e.g., `?gpt3_completions`) on the Open AI website for [completion requests](https://beta.openai.com/docs/api-reference/completions/create) and [embeddings](https://beta.openai.com/docs/api-reference/embeddings/create).
 
-Note: this package enables you to use the core functionality of GPT-3 (making completion requests) and provides a function to obtain embeddings. There are additional functionalities in the core API such as fine-tuning models (i.e., providing labelled data to update/retrain the existing model) and asking GPT-3 to make edits to text input. These are not part of this package since the focus of making GPT-3 accessible from R is on the completion requests.
+Note: this package enables you to use the core functionality of GPT-3 (making completion requests) and ChatGPT, and provides a function to obtain embeddings. There are additional functionalities in the core API such as fine-tuning models (i.e., providing labelled data to update/retrain the existing model) and asking GPT-3 to make edits to text input. These are not (yet) part of this package since the focus of making GPT-3 accessible from R is on the completion requests.
 
 
 ## Examples
@@ -95,7 +98,7 @@
 
 Note that due to the [sampling temperature parameter](https://beta.openai.com/docs/api-reference/completions/create#completions/create-temperature) of the requests - unless set to `0.0` - the results may vary (as the model is not deterministic then).
 
-### Making requests
+### Making requests (standard GPT models, i.e. before ChatGPT)
 
 The basic form of the GPT-3 API connector is via requests. These requests can be of various kinds including questions ("What is the meaning of life?"), text summarisation tasks, text generation tasks and many more. A whole list of examples is on the [Open AI examples page](https://beta.openai.com/examples).
 
@@ -103,7 +106,7 @@
 
 **Example 1: making a single completion request**
 
-This request "tells" GPT-3 to write a cynical text about human nature (five times) with a sampling temperature of 0.9, a maximium length of 100 tokens.
+This request "tells" GPT-3 to write a cynical text about human nature (five times) with a sampling temperature of 0.9 and a maximium length of 100 tokens.
 
 ```{r}
 example_1 = gpt3_single_completion(prompt_input = 'Write a cynical text about human nature:'
@@ -175,6 +178,52 @@
 # [1]    10 1025 # because we have ten rows of 1025 columns each (by default 1024 embeddings elements and 1 id variable)
 ```
 
+## ChatGPT
+
+In principle, ChatGPT can do all the things that the _standard_ GPT models (e.g., DaVinci-003) can do, but just a little better. An excellent brief summary is provided by OpenAI here: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt). In the examples below, we will reproduce the ones from the `gpt3_single_completion()` and `gpt3_completions()` as listed above. 
+
+The biggest change in how we interact with ChatGPT's API compared to the previous models is that requests are sent with a **role** and a **content** for each prompt. The role must be one of 'user', 'system' or 'assistant' and essentially tells ChatGPT in which role the content you send is to be interpreted. The content is analogous to the standard prompts. The role is necessary because it allows you to provide a full back-and-forth conversational flow (e.g., [https://platform.openai.com/docs/guides/chat/introduction](https://platform.openai.com/docs/guides/chat/introduction)).
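+
+Note that in this package each request sends exactly one role/content message pair (a minimal sketch below; the prompt text is invented for illustration):
+
+```{r}
+# assumes you have authenticated via gpt3_authenticate() first
+# a 'system' role frames how ChatGPT should behave; 'user' (the default) carries the actual instruction
+chatgpt_single(prompt_role = 'system'
+    , prompt_content = 'You are a sarcastic assistant. Greet the user.')
+```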
+
+**Example 1: making a single chat completion request**
+
+This request "tells" ChatGPT to write a cynical text about human nature (five times) with a sampling temperature of 1.5 and a maximium length of 100 tokens.
+
+```{r}
+chatgpt_example_1 = chatgpt_single(prompt_content = 'Write a cynical text about human nature:'
+                    , temperature = 0.9
+                    , max_tokens = 100
+                    , n = 5)
+```
+
+The returned list contains the actual instruction + output in `chatgpt_example_1[[1]]` and meta information about your request in `chatgpt_example_1[[2]]`.
+
+A verbatim excerpt of the produced output (from the `chatgpt_example_1[[1]]$chatgpt_content` column) here is: 
+
+> Settle in, young one. Let me impart to you some hard-earned wisdom about human nature. It is a wild creature, not easily tamed or discernible. Some claim they understand it fully, but they are as deluded as a chimpanzee wearing a top hat. I've seen people  at their worst: manipulating, deceiving, and diminishing others simply to assert their misguided superiority. [...]
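+
+To work with the output programmatically, the relevant columns can be accessed directly (a minimal sketch; the column names follow the return value documentation of `chatgpt_single()`):
+
+```{r}
+# the five completions (n = 5) are the rows of the core output data table
+chatgpt_example_1[[1]]$chatgpt_content
+
+# the token usage of the request is stored in the meta output
+chatgpt_example_1[[2]]$tok_usage_total
+```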
+
+
+**Example 2: multiple prompts**
+
+We can extend the example and make multiple requests by using a data.frame / data.table as input for the `chatgpt()` function:
+
+```{r}
+my_chatgpt_prompts = data.frame('prompts_roles' = rep('user', 3)
+                          , 'prompts_contents' =
+                            c('You are a bureaucrat. Complete this sentence: universities are'
+                              , 'You are an award-winning poet. Write a poem about music:'
+                              , 'Which colour is better and why? Red or blue?')
+                          , 'prompt_id' = c(LETTERS[1:3]))
+
+chatgpt_example_2 = chatgpt(prompt_role_var = my_chatgpt_prompts$prompts_roles
+                            , prompt_content_var = my_chatgpt_prompts$prompts_contents
+                            , id_var = my_chatgpt_prompts$prompt_id
+                            , param_max_tokens = 100
+                            , param_n = 5
+                            , param_temperature = 0.4)
+```
+
+Note that this completion request produced 5 (`param_n = 5`) completions for each of the three prompts, so a total of 15 completions.
+
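+You can verify this on the returned core output (a sketch; the `id` column holds the values passed via `id_var`):
+
+```{r}
+# each prompt id (A, B, C) should appear five times in the core output
+table(chatgpt_example_2[[1]]$id)
+```
+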
 
 
 ## Cautionary note
@@ -192,11 +241,11 @@
 
 ## Changelog/updates
 
-- [update] 29 Nov 2022: the just released [davinci-003 model](https://beta.openai.com/docs/models/gpt-3) for text completions is now the default model for the text completion functions.
-- [minor fix] 3 Dec 2022: included handling for encoding issues so that `rbindlist` uses `fill=T` (in `gpt3_completions(...)`)
-- [update] 23 Dec 2022: the embeddings functions now default to the second generation embeddings "text-embedding-ada-002".
+- [new release] 5 Mar 2023: the package now supports ChatGPT 
 - [update] 30 Jan 2023: added error shooting for API call errors
-
+- [update] 23 Dec 2022: the embeddings functions now default to the second generation embeddings "text-embedding-ada-002".
+- [minor fix] 3 Dec 2022: included handling for encoding issues so that `rbindlist` uses `fill=T` (in `gpt3_completions(...)`)
+- [update] 29 Nov 2022: the just released [davinci-003 model](https://beta.openai.com/docs/models/gpt-3) for text completions is now the default model for the text completion functions.
 
 ## Citation
 
@@ -204,11 +253,11 @@
 @software{Kleinberg_rgpt3_Making_requests_2022,
     author = {Kleinberg, Bennett},
     doi = {10.5281/zenodo.7327667},
-    month = {12},
-    title = {{rgpt3: Making requests from R to the GPT-3 API}},
+    month = {3},
+    title = {{rgpt3: Making requests from R to the GPT-3 API and ChatGPT}},
     url = {https://github.com/ben-aaron188/rgpt3},
-    version = {0.3.1},
-    year = {2022}
+    version = {0.4},
+    year = {2023}
 }
 ```
 
diff --git a/man/chatgpt.Rd b/man/chatgpt.Rd
new file mode 100644
index 0000000..6d399cf
--- /dev/null
+++ b/man/chatgpt.Rd
@@ -0,0 +1,97 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/chatgpt.R
+\name{chatgpt}
+\alias{chatgpt}
+\title{Makes bunch chat completion requests to the ChatGPT API}
+\usage{
+chatgpt(
+  prompt_role_var,
+  prompt_content_var,
+  id_var,
+  param_output_type = "complete",
+  param_model = "gpt-3.5-turbo",
+  param_max_tokens = 100,
+  param_temperature = 1,
+  param_top_p = 1,
+  param_n = 1,
+  param_stop = NULL,
+  param_presence_penalty = 0,
+  param_frequency_penalty = 0
+)
+}
+\arguments{
+\item{prompt_role_var}{character vector that contains the role prompts to the ChatGPT request. Must be one of 'system', 'assistant' or 'user', see \url{https://platform.openai.com/docs/guides/chat}}
+
+\item{prompt_content_var}{character vector that contains the content prompts to the ChatGPT request. This is the key instruction that ChatGPT receives.}
+
+\item{id_var}{(optional) character vector that contains the user-defined ids of the prompts. See details.}
+
+\item{param_output_type}{character determining the output provided: "complete" (default), "text" or "meta"}
+
+\item{param_model}{a character vector that indicates the \href{https://platform.openai.com/docs/api-reference/chat/create#chat/create-model}{ChatGPT model} to use; one of "gpt-3.5-turbo" (default), "gpt-3.5-turbo-0301"}
+
+\item{param_max_tokens}{numeric (default: 100) indicating the maximum number of tokens that the completion request should return (from the official API documentation: \emph{The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens).})}
+
+\item{param_temperature}{numeric (default: 1.0) specifying the sampling strategy of the possible completions (from the official API documentation: \emph{What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or \code{top_p} but not both.})}
+
+\item{param_top_p}{numeric (default: 1) specifying sampling strategy as an alternative to the temperature sampling (from the official API documentation: \emph{An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10\% probability mass are considered. We generally recommend altering this or \code{temperature} but not both.})}
+
+\item{param_n}{numeric (default: 1) specifying the number of completions per request (from the official API documentation: \emph{How many chat completion choices to generate for each input message. \strong{Note: Because this parameter generates many completions, it can quickly consume your token quota.} Use carefully and ensure that you have reasonable settings for max_tokens and stop.})}
+
+\item{param_stop}{character or character vector (default: NULL) that specifies after which character value(s) the completion should end (from the official API documentation: \emph{Up to 4 sequences where the API will stop generating further tokens.})}
+
+\item{param_presence_penalty}{numeric (default: 0) between -2.00  and +2.00 to determine the penalisation of repetitiveness if a token already exists (from the official API documentation: \emph{Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.}). See also: \url{https://beta.openai.com/docs/api-reference/parameter-details}}
+
+\item{param_frequency_penalty}{numeric (default: 0) between -2.00  and +2.00 to determine the penalisation of repetitiveness based on the frequency of a token in the text already (from the official API documentation: \emph{Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.}). See also: \url{https://beta.openai.com/docs/api-reference/parameter-details}}
+}
+\value{
+A list with two data tables (if \code{output_type} is the default "complete"): [[1]] contains the data table with the columns \code{n} (= the no. of \code{n} responses requested), \code{prompt_role} (= the role that was set for the prompt), \code{prompt_content} (= the content that was set for the prompt), \code{chatgpt_role} (= the role that ChatGPT assumed in the chat completion) and \code{chatgpt_content} (= the content that ChatGPT provided with its assumed role in the chat completion). [[2]] contains the meta information of the request, including the request id, the parameters of the request and the token usage of the prompt (\code{tok_usage_prompt}), the completion (\code{tok_usage_completion}), the total usage (\code{tok_usage_total}) and the \code{id} (= the provided \code{id_var} or its default alternative).
+
+If \code{output_type} is "text", only the data table in slot [[1]] is returned.
+
+If \code{output_type} is "meta", only the data table in slot [[2]] is returned.
+}
+\description{
+\code{chatgpt()} is the package's main function for the ChatGPT functionality and takes as input a vector of prompts and processes each prompt as per the defined parameters. It extends the \code{chatgpt_single()} function to allow for bunch processing of requests to the Open AI GPT API.
+}
+\details{
+The easiest (and intended) use case for this function is to create a data.frame or data.table with variables that contain the prompts to be requested from ChatGPT and a prompt id (see examples below).
+For a general guide on the chat completion requests, see \url{https://platform.openai.com/docs/guides/chat/chat-completions-beta}. This function provides you with an R wrapper to send requests with the full range of request parameters as detailed on \url{https://platform.openai.com/docs/api-reference/chat/create} and reproduced below.
+
+If \code{id_var} is not provided, the function will use \code{prompt_1} ... \code{prompt_n} as id variable.
+
+Parameters not included/supported:
+\itemize{
+\item \code{logit_bias}: \url{https://platform.openai.com/docs/api-reference/chat/create#chat/create-logit_bias}
+\item \code{stream}: \url{https://platform.openai.com/docs/api-reference/chat/create#chat/create-stream}
+}
+}
+\examples{
+# First authenticate with your API key via `gpt3_authenticate('pathtokey')`
+
+# Once authenticated:
+# Assuming you have a data.table with 3 different prompts:
+dt_prompts = data.table::data.table('prompts_content' = c('What is the meaning of life?', 'Write a tweet about London:', 'Write a research proposal for using AI to fight fake news:')
+    , 'prompts_role' = rep('user', 3)
+    , 'prompt_id' = c(LETTERS[1:3]))
+chatgpt(prompt_role_var = dt_prompts$prompts_role
+    , prompt_content_var = dt_prompts$prompts_content
+    , id_var = dt_prompts$prompt_id)
+
+## With more controls
+chatgpt(prompt_role_var = dt_prompts$prompts_role
+    , prompt_content_var = dt_prompts$prompts_content
+    , id_var = dt_prompts$prompt_id
+    , param_max_tokens = 50
+    , param_temperature = 0.5
+    , param_n = 5)
+
+## Reproducible example (deterministic approach)
+chatgpt(prompt_role_var = dt_prompts$prompts_role
+    , prompt_content_var = dt_prompts$prompts_content
+    , id_var = dt_prompts$prompt_id
+    , param_max_tokens = 50
+    , param_temperature = 0
+    , param_n = 3)
+
+}
diff --git a/man/chatgpt_single.Rd b/man/chatgpt_single.Rd
new file mode 100644
index 0000000..d7edaf9
--- /dev/null
+++ b/man/chatgpt_single.Rd
@@ -0,0 +1,82 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/chatgpt_single.R
+\name{chatgpt_single}
+\alias{chatgpt_single}
+\title{Makes a single chat completion request to the ChatGPT API}
+\usage{
+chatgpt_single(
+  prompt_role = "user",
+  prompt_content,
+  model = "gpt-3.5-turbo",
+  output_type = "complete",
+  max_tokens = 100,
+  temperature = 1,
+  top_p = 1,
+  n = 1,
+  stop = NULL,
+  presence_penalty = 0,
+  frequency_penalty = 0
+)
+}
+\arguments{
+\item{prompt_role}{character (default: 'user') that contains the role for the prompt message in the ChatGPT message format. Must be one of 'system', 'assistant', 'user' (default), see \url{https://platform.openai.com/docs/guides/chat}}
+
+\item{prompt_content}{character that contains the content for the prompt message in the ChatGPT message format, see \url{https://platform.openai.com/docs/guides/chat}. This is the key instruction that ChatGPT receives.}
+
+\item{model}{a character vector that indicates the \href{https://platform.openai.com/docs/api-reference/chat/create#chat/create-model}{ChatGPT model} to use; one of "gpt-3.5-turbo" (default), "gpt-3.5-turbo-0301"}
+
+\item{output_type}{character determining the output provided: "complete" (default), "text" or "meta"}
+
+\item{max_tokens}{numeric (default: 100) indicating the maximum number of tokens that the completion request should return (from the official API documentation: \emph{The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens).})}
+
+\item{temperature}{numeric (default: 1.0) specifying the sampling strategy of the possible completions (from the official API documentation: \emph{What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or \code{top_p} but not both.})}
+
+\item{top_p}{numeric (default: 1) specifying sampling strategy as an alternative to the temperature sampling (from the official API documentation: \emph{An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10\% probability mass are considered. We generally recommend altering this or \code{temperature} but not both.})}
+
+\item{n}{numeric (default: 1) specifying the number of completions per request (from the official API documentation: \emph{How many chat completion choices to generate for each input message. \strong{Note: Because this parameter generates many completions, it can quickly consume your token quota.} Use carefully and ensure that you have reasonable settings for max_tokens and stop.})}
+
+\item{stop}{character or character vector (default: NULL) that specifies after which character value(s) the completion should end (from the official API documentation: \emph{Up to 4 sequences where the API will stop generating further tokens.})}
+
+\item{presence_penalty}{numeric (default: 0) between -2.00  and +2.00 to determine the penalisation of repetitiveness if a token already exists (from the official API documentation: \emph{Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.}). See also: \url{https://beta.openai.com/docs/api-reference/parameter-details}}
+
+\item{frequency_penalty}{numeric (default: 0) between -2.00  and +2.00 to determine the penalisation of repetitiveness based on the frequency of a token in the text already (from the official API documentation: \emph{Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.}). See also: \url{https://beta.openai.com/docs/api-reference/parameter-details}}
+}
+\value{
+A list with two data tables (if \code{output_type} is the default "complete"): [[1]] contains the data table with the columns \code{n} (= the no. of \code{n} responses requested), \code{prompt_role} (= the role that was set for the prompt), \code{prompt_content} (= the content that was set for the prompt), \code{chatgpt_role} (= the role that ChatGPT assumed in the chat completion) and \code{chatgpt_content} (= the content that ChatGPT provided with its assumed role in the chat completion). [[2]] contains the meta information of the request, including the request id, the parameters of the request and the token usage of the prompt (\code{tok_usage_prompt}), the completion (\code{tok_usage_completion}) and the total usage (\code{tok_usage_total}).
+
+If \code{output_type} is "text", only the data table in slot [[1]] is returned.
+
+If \code{output_type} is "meta", only the data table in slot [[2]] is returned.
+}
+\description{
+\code{chatgpt_single()} sends a single \href{https://platform.openai.com/docs/guides/chat}{chat completion request} to the Open AI GPT API. This makes it the chat equivalent of sending single completion requests with \code{gpt3_single_completion()}. You can see the notes on chat vs. completion requests here: \url{https://platform.openai.com/docs/guides/chat/chat-vs-completions}. This function allows you to specify the role and content for your API call.
+}
+\details{
+For a general guide on the chat completion requests, see \url{https://platform.openai.com/docs/api-reference/chat}. This function provides you with an R wrapper to send requests with the full range of request parameters as detailed on \url{https://platform.openai.com/docs/api-reference/chat/create} and reproduced below.
+
+Parameters not included/supported:
+\itemize{
+\item \code{logit_bias}: \url{https://platform.openai.com/docs/api-reference/chat/create#chat/create-logit_bias}
+\item \code{stream}: \url{https://platform.openai.com/docs/api-reference/chat/create#chat/create-stream}
+}
+}
+\examples{
+# First authenticate with your API key via `gpt3_authenticate('pathtokey')`
+
+# Once authenticated:
+
+## Simple request with defaults:
+chatgpt_single(prompt_content = 'You are a teacher: explain to me what science is')
+
+## Instruct ChatGPT to write ten research ideas of max. 150 tokens with some controls:
+chatgpt_single(prompt_role = 'user', prompt_content = 'Write a research idea about using text data to understand human behaviour:'
+   , temperature = 0.8
+   , n = 10
+   , max_tokens = 150)
+
+## For fully reproducible results, we need `temperature = 0`, e.g.:
+chatgpt_single(prompt_content = 'Finish this sentence:\n There is no easier way to learn R than'
+    , temperature = 0.0
+    , max_tokens = 50)
+
+}
diff --git a/man/gpt3_completions.Rd b/man/gpt3_completions.Rd
index f1dc978..231d310 100644
--- a/man/gpt3_completions.Rd
+++ b/man/gpt3_completions.Rd
@@ -58,7 +58,7 @@
 If \code{output_type} is "meta", only the data table in slot [\link{2}] is returned.
 }
 \description{
-\code{gpt3_completions()} is the package's main function for rquests and takes as input a vector of prompts and processes each prompt as per the defined parameters. It extends the \code{gpt3_single_completion()} function to allow for bunch processing of requests to the Open AI GPT-3 API.
+\code{gpt3_completions()} is the package's main function for requests and takes as input a vector of prompts and processes each prompt as per the defined parameters. It extends the \code{gpt3_single_completion()} function to allow for bunch processing of requests to the Open AI GPT-3 API.
 }
 \details{
 The easiest (and intended) use case for this function is to create a data.frame or data.table with variables that contain the prompts to be requested from GPT-3 and a prompt id (see examples below).
diff --git a/man/price_base_davinci.Rd b/man/price_base_davinci.Rd
deleted file mode 100644
index b78e21e..0000000
--- a/man/price_base_davinci.Rd
+++ /dev/null
@@ -1,16 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/request_prices.R
-\docType{data}
-\name{price_base_davinci}
-\alias{price_base_davinci}
-\title{Contains the pricing for completion requests (see: \url{https://openai.com/api/pricing/#faq-completions-pricing})}
-\format{
-An object of class \code{numeric} of length 1.
-}
-\usage{
-price_base_davinci
-}
-\description{
-These are the prices listed for 1k tokens of requests for the various models. These are needed for the \code{rgpt3_cost_estimate(...)} function.
-}
-\keyword{datasets}
diff --git a/rgpt3_0.3.1.pdf b/rgpt3_0.3.1.pdf
deleted file mode 100644
index 0976ede..0000000
--- a/rgpt3_0.3.1.pdf
+++ /dev/null
Binary files differ
diff --git a/rgpt3_0.4.pdf b/rgpt3_0.4.pdf
new file mode 100644
index 0000000..b87d231
--- /dev/null
+++ b/rgpt3_0.4.pdf
Binary files differ