Title: | 'Open Scoring' API Client |
Version: | 1.0.4 |
Description: | Creativity research requires scoring open-ended problems. This is usually done by humans, but automatic scoring with AI is becoming more and more accurate. This package provides a simple interface to the 'Open Scoring' API https://openscoring.du.edu/docs, a leading creativity scoring technology by Organisciak et al. (2023) <doi:10.1016/j.tsc.2023.101356>. With it, you can score your own data directly from an R script. |
License: | MIT + file LICENSE |
Encoding: | UTF-8 |
RoxygenNote: | 7.3.2 |
URL: | https://github.com/jakub-jedrusiak/openscoring |
BugReports: | https://github.com/jakub-jedrusiak/openscoring/issues |
Imports: | cli, dplyr, glue, httr, jsonlite, lifecycle, purrr, rlang, stringr |
Suggests: | testthat (≥ 3.0.0) |
Config/testthat/edition: | 3 |
NeedsCompilation: | no |
Packaged: | 2024-08-23 22:08:49 UTC; jakub |
Author: | Jakub Jędrusiak |
Maintainer: | Jakub Jędrusiak <kuba23031999@gmail.com> |
Repository: | CRAN |
Date/Publication: | 2024-08-24 07:30:02 UTC |
openscoring: 'Open Scoring' API Client
Description
Creativity research requires scoring open-ended problems. This is usually done by humans, but automatic scoring with AI is becoming more and more accurate. This package provides a simple interface to the 'Open Scoring' API https://openscoring.du.edu/docs, a leading creativity scoring technology by Organisciak et al. (2023) doi:10.1016/j.tsc.2023.101356. With it, you can score your own data directly from an R script.
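A minimal sketch of the basic workflow (the data here are made up; it assumes the package is installed and the Open Scoring API at https://openscoring.du.edu is reachable):

library(openscoring)

# A small, made-up set of divergent-thinking responses
aut <- data.frame(
  stimulus = c("paperclip", "paperclip"),
  response = c("reset button for a router", "bend into a tiny sculpture")
)

# Send the responses to the Open Scoring API and append originality scores
aut <- oscai(aut, stimulus, response, model = "1.6")
aut$.originality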
Author(s)
Maintainer: Jakub Jędrusiak kuba23031999@gmail.com (ORCID) (University of Wroclaw) [copyright holder]
Other contributors:
Peter Organisciak peter.organisciak@du.edu (ORCID) (University of Denver) [contributor]
Selcuk Acar (ORCID) (University of North Texas) [contributor]
Denis Dumas (ORCID) (University of Georgia) [contributor]
Pier-Luc de Chantal (ORCID) (Université du Québec à Montréal) [contributor]
Kelly Berthiaume (ORCID) (University of North Texas) [contributor]
See Also
Useful links:
https://github.com/jakub-jedrusiak/openscoring
Report bugs at https://github.com/jakub-jedrusiak/openscoring/issues
oscai: Score with an AI
Description
A basic function to score creativity with an AI. See the OpenScoring site (https://openscoring.du.edu/docs) for more information. Requires an internet connection.
Usage
oscai(
df,
item,
answer,
model = c("1.6", "1-4o", "davinci3", "chatgpt2", "1.5", "chatgpt", "babbage2",
"davinci2"),
language = "English",
scores_col = ".originality",
quiet = FALSE
)
Arguments
df | A data frame. |
item | The column name of the items or other kind of prompt. |
answer | The column name of the responses. Commas will be replaced with spaces for scoring. |
model | The model to use. Should be one of "1.6", "1-4o", "davinci3", or "chatgpt2"; the deprecated models ("1.5", "chatgpt", "babbage2", "davinci2") are kept for compatibility. |
language | The language of the input. Only used by models 1.5 and newer. Should be one of "Arabic", "Chinese", "Dutch", "English", "French", "German", "Hebrew", "Italian", "Polish", "Russian", "Spanish". |
scores_col | The name of the column to store the scores in. Defaults to ".originality". |
quiet | Whether to suppress the citation reminder. Defaults to FALSE, so the reminder is printed. |
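For illustration, a sketch combining scores_col and quiet (the data and the column name are made up; the scores themselves come from the live API):

df <- data.frame(
  stimulus = "brick",
  response = "doorstop for a dollhouse"
)

# Store the scores under a custom column name and skip the citation reminder
df <- oscai(df, stimulus, response,
  model = "1.6",
  scores_col = "originality_16",
  quiet = TRUE
)
df$originality_16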
Details
Available models:
ocsai-1.6: Update to the multi-lingual, multi-task 1.5 model, trained on GPT-4o instead of GPT-3.5.
ocsai1-4o: GPT-4o-based model, trained with more data and supporting multiple tasks. Last update to the Ocsai 1 models (i.e. the original ones).
ocsai-chatgpt2: GPT-3.5-size chat-based model, trained with more data and supporting multiple tasks. Scoring is slower, with slightly better performance than ocsai-davinci.
ocsai-davinci3: GPT-3 Davinci-size model. Trained with the method from Organisciak et al. (2023), but with the additional tasks (uses, consequences, instances, complete the sentence) from Acar et al. (2023), and trained with more data.
ocsai-1.5: Beta version of the new multi-lingual, multi-task model, trained on GPT-3.5.
ocsai-chatgpt: GPT-3.5-size chat-based model, trained with the same format and data as the original models. Scoring is slower, with slightly better performance than ocsai-davinci2. For more tasks and more training data, use davinci-ocsai2.
ocsai-babbage2: GPT-3 Babbage-size model from the paper, retrained with the new model API. Deprecated, mainly because other models work better.
ocsai-davinci2: GPT-3 Davinci-size model from the paper, retrained with the new model API.
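As a sketch, the same responses can be scored with two of the models above by writing each run to its own column via scores_col (column names are illustrative; results depend on the live API):

df <- data.frame(
  stimulus = c("brick", "hammer"),
  response = c("butter for trolls", "make Thor jealous")
)

# Score once with the current 1.6 model and once with the older davinci3 model
df <- oscai(df, stimulus, response, model = "1.6", scores_col = "orig_16")
df <- oscai(df, stimulus, response, model = "davinci3", scores_col = "orig_davinci3")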
Value
The input data frame with the scores added.
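Because the scored data frame is just the input with a score column appended, it can be passed straight to further wrangling. A sketch using dplyr (already a dependency of this package), assuming the default ".originality" column:

scored <- data.frame(
  stimulus = c("brick", "hammer", "sponge"),
  response = c("butter for trolls", "make Thor jealous", "make it play in a kids show")
)
scored <- oscai(scored, stimulus, response, model = "1.6")

# Sort the responses from most to least original
dplyr::arrange(scored, dplyr::desc(.originality))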
Examples
df <- data.frame(
stimulus = c("brick", "hammer", "sponge"),
response = c("butter for trolls", "make Thor jealous", "make it play in a kids show")
)
df <- oscai(df, stimulus, response, model = "davinci3")
# Models 1.5 and newer support multiple languages
df_polish <- data.frame(
stimulus = c("cegła", "młotek", "gąbka"),
response = c("masło dla trolli", "wywoływanie zazdrości u Thora", "postać w programie dla dzieci")
)
df_polish <- oscai(df_polish, stimulus, response, model = "1.5", language = "Polish")