fixed function names: renamed gpt3_requests() to gpt3_completions(), gpt3_single_request() to gpt3_single_completion(), and gpt3_test_request() to gpt3_test_completion(); documentation regenerated accordingly
diff --git a/man/gpt3_requests.Rd b/man/gpt3_completions.Rd
similarity index 90%
rename from man/gpt3_requests.Rd
rename to man/gpt3_completions.Rd
index b85d374..59eb046 100644
--- a/man/gpt3_requests.Rd
+++ b/man/gpt3_completions.Rd
@@ -1,10 +1,10 @@
% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/gpt3_requests.R
-\name{gpt3_requests}
-\alias{gpt3_requests}
+% Please edit documentation in R/gpt3_completions.R
+\name{gpt3_completions}
+\alias{gpt3_completions}
\title{Makes bunch completion requests to the GPT-3 API}
\usage{
-gpt3_requests(
+gpt3_completions(
prompt_var,
id_var,
param_output_type = "complete",
@@ -58,13 +58,13 @@
If \code{output_type} is "meta", only the data table in slot [[2]] is returned.
}
\description{
-\code{gpt3_requests()} is the package's main function for rquests and takes as input a vector of prompts and processes each prompt as per the defined parameters. It extends the \code{gpt3_single_request()} function to allow for bunch processing of requests to the Open AI GPT-3 API.
+\code{gpt3_completions()} is the package's main function for requests. It takes a vector of prompts as input and processes each prompt according to the defined parameters, extending the \code{gpt3_single_completion()} function to allow for bunch processing of requests to the OpenAI GPT-3 API.
}
\details{
The easiest (and intended) use case for this function is to create a data.frame or data.table with variables that contain the prompts to be requested from GPT-3 and a prompt id (see examples below).
For a general guide on the completion requests, see \url{https://beta.openai.com/docs/guides/completion}. This function provides you with an R wrapper to send requests with the full range of request parameters as detailed on \url{https://beta.openai.com/docs/api-reference/completions} and reproduced below.
-For the \code{best_of} parameter: The \code{gpt3_single_request()} (which is used here in a vectorised manner) handles the issue that best_of must be greater than n by setting \code{if(best_of <= n){ best_of = n}}.
+For the \code{best_of} parameter: \code{gpt3_single_completion()} (which is used here in a vectorised manner) handles the requirement that \code{best_of} must be greater than or equal to \code{n} by setting \code{if(best_of <= n){ best_of = n}}.
If \code{id_var} is not provided, the function will use \code{prompt_1} ... \code{prompt_n} as id variable.
@@ -81,24 +81,24 @@
# Once authenticated:
# Assuming you have a data.table with 3 different prompts:
dt_prompts = data.table::data.table('prompts' = c('What is the meaning of life?', 'Write a tweet about London:', 'Write a research proposal for using AI to fight fake news:'), 'prompt_id' = c(LETTERS[1:3]))
-gpt3_requests(prompt_var = dt_prompts$prompts
+gpt3_completions(prompt_var = dt_prompts$prompts
, id_var = dt_prompts$prompt_id)
## With more controls
-gpt3_requests(prompt_var = dt_prompts$prompts
+gpt3_completions(prompt_var = dt_prompts$prompts
, id_var = dt_prompts$prompt_id
, param_max_tokens = 50
, param_temperature = 0.5
, param_n = 5)
## Reproducible example (deterministic approach)
-gpt3_requests(prompt_var = dt_prompts$prompts
+gpt3_completions(prompt_var = dt_prompts$prompts
, id_var = dt_prompts$prompt_id
, param_max_tokens = 50
, param_temperature = 0.0)
## Changing the GPT-3 model
-gpt3_requests(prompt_var = dt_prompts$prompts
+gpt3_completions(prompt_var = dt_prompts$prompts
, id_var = dt_prompts$prompt_id
, param_model = 'text-babbage-001'
, param_max_tokens = 50
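A short, illustrative usage sketch of the renamed function (not part of the generated .Rd files). It only relies on what the hunks above document: prompts and ids go in as vectors, and with \code{param_output_type = "meta"} only the meta data table in slot [[2]] is returned.

# Illustrative sketch only -- not generated documentation.
dt_prompts = data.table::data.table(
    'prompts' = c('What is the meaning of life?', 'Write a tweet about London:')
  , 'prompt_id' = c('A', 'B'))
res = gpt3_completions(prompt_var = dt_prompts$prompts
                       , id_var = dt_prompts$prompt_id
                       , param_max_tokens = 50
                       , param_temperature = 0.0)
# With param_output_type = 'meta', only the meta data table (slot [[2]] of the
# result list) is returned, as stated in the \value section above.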
diff --git a/man/gpt3_single_request.Rd b/man/gpt3_single_completion.Rd
similarity index 90%
rename from man/gpt3_single_request.Rd
rename to man/gpt3_single_completion.Rd
index ae3f39a..78eab5e 100644
--- a/man/gpt3_single_request.Rd
+++ b/man/gpt3_single_completion.Rd
@@ -1,10 +1,10 @@
% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/gpt3_single_request.R
-\name{gpt3_single_request}
-\alias{gpt3_single_request}
+% Please edit documentation in R/gpt3_single_completion.R
+\name{gpt3_single_completion}
+\alias{gpt3_single_completion}
\title{Makes a single completion request to the GPT-3 API}
\usage{
-gpt3_single_request(
+gpt3_single_completion(
prompt_input,
model = "text-davinci-002",
output_type = "complete",
@@ -55,7 +55,7 @@
If \code{output_type} is "meta", only the data table in slot [[2]] is returned.
}
\description{
-\code{gpt3_single_request()} sends a single \href{https://beta.openai.com/docs/api-reference/completions}{completion request} to the Open AI GPT-3 API.
+\code{gpt3_single_completion()} sends a single \href{https://beta.openai.com/docs/api-reference/completions}{completion request} to the OpenAI GPT-3 API.
}
\details{
For a general guide on the completion requests, see \url{https://beta.openai.com/docs/guides/completion}. This function provides you with an R wrapper to send requests with the full range of request parameters as detailed on \url{https://beta.openai.com/docs/api-reference/completions} and reproduced below.
@@ -75,21 +75,21 @@
# Once authenticated:
## Simple request with defaults:
-gpt3_single_request(prompt_input = 'How old are you?')
+gpt3_single_completion(prompt_input = 'How old are you?')
## Instruct GPT-3 to write ten research ideas of max. 150 tokens with some controls:
-gpt3_single_request(prompt_input = 'Write a research idea about using text data to understand human behaviour:'
+gpt3_single_completion(prompt_input = 'Write a research idea about using text data to understand human behaviour:'
, temperature = 0.8
, n = 10
, max_tokens = 150)
## For fully reproducible results, we need `temperature = 0`, e.g.:
-gpt3_single_request(prompt_input = 'Finish this sentence:/n There is no easier way to learn R than'
+gpt3_single_completion(prompt_input = 'Finish this sentence:\\n There is no easier way to learn R than'
, temperature = 0.0
, max_tokens = 50)
## The same example with a different GPT-3 model:
-gpt3_single_request(prompt_input = 'Finish this sentence:/n There is no easier way to learn R than'
+gpt3_single_completion(prompt_input = 'Finish this sentence:\\n There is no easier way to learn R than'
, model = 'text-babbage-001'
, temperature = 0.0
, max_tokens = 50)
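The \code{best_of} guard mentioned in the \details hunk above reduces to a simple check before the request is sent; a sketch of that documented logic (not the package source):

# Sketch of the documented guard: best_of is raised to n whenever it is smaller,
# so the effective request always satisfies best_of >= n.
n = 5
best_of = 1
if (best_of <= n) { best_of = n }
best_of  # 5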
diff --git a/man/gpt3_test_completion.Rd b/man/gpt3_test_completion.Rd
new file mode 100644
index 0000000..60100da
--- /dev/null
+++ b/man/gpt3_test_completion.Rd
@@ -0,0 +1,20 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/test_completion.R
+\name{gpt3_test_completion}
+\alias{gpt3_test_completion}
+\title{Make a test request to the GPT-3 API}
+\usage{
+gpt3_test_completion(verbose = T)
+}
+\arguments{
+\item{verbose}{(boolean) if TRUE, prints the actual prompt and the GPT-3 completion of the test request (default: TRUE).}
+}
+\value{
+A message of success or failure of the connection.
+}
+\description{
+\code{gpt3_test_completion()} sends a basic \href{https://beta.openai.com/docs/api-reference/completions}{completion request} to the OpenAI GPT-3 API.
+}
+\examples{
+gpt3_test_completion()
+}
diff --git a/man/gpt3_test_request.Rd b/man/gpt3_test_request.Rd
deleted file mode 100644
index d757118..0000000
--- a/man/gpt3_test_request.Rd
+++ /dev/null
@@ -1,20 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/test_request.R
-\name{gpt3_test_request}
-\alias{gpt3_test_request}
-\title{Make a test request to the GPT-3 API}
-\usage{
-gpt3_test_request(verbose = T)
-}
-\arguments{
-\item{verbose}{(boolean) if TRUE prints the actual prompt and GPT-3 completion of the test request (default: TRUE).}
-}
-\value{
-A message of success or failure of the connection.
-}
-\description{
-\code{gpt3_test_request()} sends a basic \href{https://beta.openai.com/docs/api-reference/completions}{completion request} to the Open AI GPT-3 API.
-}
-\examples{
-gpt3_test_request()
-}
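For downstream code, this change is a pure rename; the shown hunks leave argument names untouched. An illustrative migration sketch:

# Old name                 ->  new name (arguments unchanged in this diff)
# gpt3_requests()          ->  gpt3_completions()
# gpt3_single_request()    ->  gpt3_single_completion()
# gpt3_test_request()      ->  gpt3_test_completion()
gpt3_test_completion(verbose = TRUE)                        # was gpt3_test_request(verbose = T)
gpt3_single_completion(prompt_input = 'How old are you?')   # was gpt3_single_request(...)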