Type: Package
Title: Multidimensional Top Scoring for Creativity Research
Version: 2.0.0
Description: Implementation of Multidimensional Top Scoring method for creativity assessment proposed in Boris Forthmann, Maciej Karwowski, Roger E. Beaty (2023) <doi:10.1037/aca0000571>.
License: MIT + file LICENSE
Encoding: UTF-8
LazyData: true
URL: https://github.com/jakub-jedrusiak/mtscr
BugReports: https://github.com/jakub-jedrusiak/mtscr/issues
RoxygenNote: 7.3.2
Depends: R (>= 4.2.0)
Imports: broom.mixed, cli, dplyr (>= 1.1.0), glmmTMB, glue, lifecycle, methods, purrr, readr, rlang, stats, stringr, tibble, tidyr
Suggests: shiny, covr, datamods, DT, roxygen2, shinyWidgets, testthat (>= 3.0.0), withr, writexl
Config/testthat/edition: 3
NeedsCompilation: no
Packaged: 2025-07-09 13:11:47 UTC; jakub
Author: Jakub Jędrusiak
Maintainer: Jakub Jędrusiak <jakub.jedrusiak2@uwr.edu.pl>
Repository: CRAN
Date/Publication: 2025-07-09 13:30:02 UTC
mtscr: Multidimensional Top Scoring for Creativity Research
Description
Implementation of Multidimensional Top Scoring method for creativity assessment proposed in Boris Forthmann, Maciej Karwowski, Roger E. Beaty (2023) doi:10.1037/aca0000571.
Author(s)
Maintainer: Jakub Jędrusiak jakub.jedrusiak2@uwr.edu.pl (ORCID) (University of Wrocław) [copyright holder]
Authors:
Boris Forthmann boris.forthmann@uni-muenster.de (ORCID) (University of Münster) [reviewer]
Roger E. Beaty rebeaty@psu.edu (ORCID) (Pennsylvania State University)
Maciej Karwowski maciej.karwowski@uwr.edu.pl (ORCID) (University of Wrocław)
See Also
Useful links:
Report bugs at https://github.com/jakub-jedrusiak/mtscr/issues
Create MTS model
Description
Create MTS model for creativity analysis. Use with summary.mtscr() and predict.mtscr().
Usage
mtscr(
df,
id_column,
score_column,
item_column = NULL,
top = 1,
ties_method = c("random", "average"),
normalise = TRUE,
self_ranking = NULL
)
Arguments
df
Data frame in long format.
id_column
Name of the column containing participants' id.
score_column
Name of the column containing divergent thinking scores (e.g. semantic distance).
item_column
Optional, name of the column containing distinct trials (e.g. names of items in AUT).
top
Integer or vector of integers (see examples), number of top answers to prepare indicators for. Default is 1, i.e. only the top answer.
ties_method
Character string specifying how ties are treated when ordering. Can be "random" (default) or "average".
normalise
Logical, should the creativity score be normalised? Default is TRUE.
self_ranking
Name of the column containing answers' self-ranking. Provide if the model should be based on top answers self-chosen by the participant. Every item should have its own ranks. The top answers should have a value of 1, and the other answers should have a value of 0.
Value
The return value depends on the length of the top argument. If top is a single integer, a mtscr model is returned. If top is a vector of integers, a mtscr_list object is returned, with names corresponding to the top values, e.g. top1, top2, etc.
See Also
- summary.mtscr() for the fit measures of the model.
- predict.mtscr() for getting the scores.
Examples
data("mtscr_creativity", package = "mtscr")
mtscr_creativity <- mtscr_creativity |>
dplyr::slice_sample(n = 500) # for performance, ignore
# single model for top 1 answer
mtscr(mtscr_creativity, id, SemDis_MEAN, item) |>
summary()
# three models for top 1, 2, and 3 answers
fit3 <- mtscr(
mtscr_creativity,
id,
SemDis_MEAN,
item,
top = 1:3,
ties_method = "average"
)
# add the scores to the database
predict(fit3)
# get the scores only
predict(fit3, minimal = TRUE)
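The self_ranking argument can be exercised the same way; a sketch, assuming the bundled mtscr_self_rank dataset (columns subject, task, avr, top_two; see its help page):

```r
# model based on the participants' self-chosen top answers;
# note the argument order for mtscr(): id, score, item
data("mtscr_self_rank", package = "mtscr")
fit_self <- mtscr(mtscr_self_rank, subject, avr, task, self_ranking = top_two)
predict(fit_self, minimal = TRUE) # person-level creativity scores
```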
Shiny GUI for mtscr
Description
Shiny app used as a graphical interface for mtscr. Simply invoke mtscr_app() to run.
Usage
mtscr_app()
Details
To use the GUI you need to have the following packages installed: DT, broom.mixed, datamods, writexl.
The first thing you see after running the app is the datamods window for importing your data. You can use the data already loaded in your environment or any other option. Then you'll see four dropdown lists used to choose arguments for the functions. Consult the documentation for more details (execute ?mtscr in the console).
When the parameters are chosen, click the "Generate model" button. After a while (up to a dozen or so seconds) the models' parameters are shown along with a scored dataframe.
You can download your data as a .csv or an .xlsx file using the buttons in the sidebar. You can either download the scores only (i.e. the dataframe you see displayed) or your whole data with the scores columns added.
For testing purposes, you may use the mtscr_creativity dataframe. In the importing window change "Global Environment" to "mtscr" and our dataframe should appear in the upper dropdown list. Use id for the ID column, item for the item column and SemDis_MEAN for the score column.
Value
Runs the app. No explicit return value.
See Also
mtscr() for more information on the arguments.
mtscr_creativity for more information about the example dataset.
Forthmann, B., Karwowski, M., & Beaty, R. E. (2023). Don’t throw the “bad” ideas away! Multidimensional top scoring increases reliability of divergent thinking tasks. Psychology of Aesthetics, Creativity, and the Arts. doi:10.1037/aca0000571
Examples
if(interactive()){
mtscr_app()
}
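If the app fails to start because one of the suggested packages is missing, they can be installed in one go; a minimal sketch (package list taken from the Details section above):

```r
# the GUI depends on these suggested packages
install.packages(c("DT", "broom.mixed", "datamods", "writexl"))
mtscr::mtscr_app()
```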
Creativity assessment through semantic distance dataset
Description
A dataset from Forthmann, Karwowski & Beaty (2023) paper. It contains a set of responses in Alternative Uses Task for different items with their semantic distance assessment.
Usage
mtscr_creativity
Format
mtscr_creativity
A tibble with 4585 rows and 10 columns:
- id: participant's unique identification number
- response: response in AUT
- item: item for which alternative uses were searched for
- SemDis_MEAN: mean semantic distance
Value
a tibble
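To get a quick feel for the dataset before modelling, a sketch (dplyr is already an mtscr dependency):

```r
data("mtscr_creativity", package = "mtscr")
dplyr::glimpse(mtscr_creativity)     # column types and a preview of the 4585 rows
dplyr::count(mtscr_creativity, item) # number of responses per AUT item
```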
Create MTS model
Description
This function was deprecated in favour of mtscr().
Create MTS model for creativity analysis.
Usage
mtscr_model(
df,
id_column,
item_column = NULL,
score_column,
top = 1,
prepared = FALSE,
ties_method = c("random", "average"),
normalise = TRUE,
self_ranking = NULL
)
Arguments
df
Data frame in long format.
id_column
Name of the column containing participants' id.
item_column
Optional, name of the column containing distinct trials (e.g. names of items in AUT).
score_column
Name of the column containing divergent thinking scores (e.g. semantic distance).
top
Integer or vector of integers (see examples), number of top answers to include in the model. Default is 1, i.e. only the top answer.
prepared
Logical, is the data already prepared with mtscr_prepare()? Default is FALSE.
ties_method
Character string specifying how ties are treated when ordering. Can be "random" (default) or "average".
normalise
Logical, should the creativity score be normalised? Default is TRUE.
self_ranking
Name of the column containing answers' self-ranking. Provide if the model should be based on top answers self-chosen by the participant. Every item should have its own ranks. The top answers should have a value of 1, and the other answers should have a value of 0.
Value
The return value depends on the length of the top argument. If top is a single integer, a glmmTMB model is returned. If top is a vector of integers, a list of glmmTMB models is returned, with names corresponding to the top values, e.g. top1, top2, etc.
Examples
## Not run:
data("mtscr_creativity", package = "mtscr")
mtscr_creativity <- mtscr_creativity |>
dplyr::slice_sample(n = 300) # for performance, ignore
mtscr_model(mtscr_creativity, id, item, SemDis_MEAN) |>
summary()
# three models for top 1, 2, and 3 answers
mtscr_model(mtscr_creativity, id, item, SemDis_MEAN, top = 1:3) |>
mtscr_model_summary()
# you can prepare data first
data <- mtscr_prepare(mtscr_creativity, id, item, SemDis_MEAN)
mtscr_model(data, id, item, SemDis_MEAN, prepared = TRUE)
# extract effects for creativity score by hand
model <- mtscr_model(mtscr_creativity, id, item, SemDis_MEAN, top = 1)
creativity_score <- glmmTMB::ranef(model)$cond$id[, 1]
## End(Not run)
Summarise a model
Description
This function was deprecated in favour of summary.mtscr(), which you can use with models fitted with mtscr().
Summarise a model generated with mtscr_model() with some basic statistics; calculate the empirical reliability and the first difference of the empirical reliability.
Usage
mtscr_model_summary(model)
Arguments
model
A model generated with mtscr_model().
Value
A data frame with the following columns:
- model: The model number
- nobs: Number of observations
- sigma: The square root of the estimated residual variance
- logLik: The log-likelihood of the model
- AIC: The Akaike information criterion
- BIC: The Bayesian information criterion
- df.residual: The residual degrees of freedom
- emp_rel: The empirical reliability
- FDI: The first difference of the empirical reliability
Examples
## Not run:
data("mtscr_creativity", package = "mtscr")
mtscr_model(mtscr_creativity, id, item, SemDis_MEAN, top = 1:3) |>
mtscr_model_summary()
## End(Not run)
Prepare database for MTS
Description
Starting with mtscr 2.0.0 you should not use this function by hand but rely on the mtscr() function. It is exported for backwards compatibility.
Prepare database for MTS analysis.
Usage
mtscr_prepare(
df,
id_column,
item_column = NULL,
score_column,
top = 1,
minimal = FALSE,
ties_method = c("random", "average"),
normalise = TRUE,
self_ranking = NULL
)
Arguments
df
Data frame in long format.
id_column
Name of the column containing participants' id.
item_column
Optional, name of the column containing distinct trials (e.g. names of items in AUT).
score_column
Name of the column containing divergent thinking scores (e.g. semantic distance).
top
Integer or vector of integers (see examples), number of top answers to prepare indicators for. Default is 1, i.e. only the top answer.
ties_method
Character string specifying how ties are treated when ordering. Can be "random" (default) or "average".
normalise
Logical, should the creativity score be normalised? Default is TRUE.
self_ranking
Name of the column containing answers' self-ranking. Provide if the model should be based on top answers self-chosen by the participant. Every item should have its own ranks. The top answers should have a value of 1, and the other answers should have a value of 0.
Value
The input data frame with additional columns:
- .z_score: Numerical, z-score of the creativity score
- .ordering: Numerical, ranking of the answer relative to participant and item
- .ordering_topX: Numerical, 0 for the X top answers, otherwise the value of .ordering
The number of .ordering_topX columns depends on the top argument. If minimal = TRUE, only the new columns and the item and id columns are returned. The values are relative to the participant AND item, so the values for different participants scored for different tasks (e.g. uses for "brick" and "can") are distinct.
Examples
## Not run:
data("mtscr_creativity", package = "mtscr")
# Indicators for top 1 and top 2 answers
mtscr_prepare(mtscr_creativity, id, item, SemDis_MEAN, top = 1:2, minimal = TRUE)
## End(Not run)
Score creativity with MTS
Description
This function was deprecated in favour of mtscr(). Also see predict.mtscr() for extracting the scores. Note that the item column now comes after the score column.
Usage
mtscr_score(
df,
id_column,
item_column = NULL,
score_column,
top = 1,
format = c("minimal", "full"),
ties_method = c("random", "average"),
normalise = TRUE,
self_ranking = NULL
)
Arguments
df
Data frame in long format.
id_column
Name of the column containing participants' id.
item_column
Optional, name of the column containing distinct trials (e.g. names of items in AUT).
score_column
Name of the column containing divergent thinking scores (e.g. semantic distance).
top
Integer or vector of integers (see examples), number of top answers to prepare indicators for. Default is 1, i.e. only the top answer.
format
Character, controls the format of the output data frame. Accepts "minimal" (default) or "full".
ties_method
Character string specifying how ties are treated when ordering. Can be "random" (default) or "average".
normalise
Logical, should the creativity score be normalised? Default is TRUE.
self_ranking
Name of the column containing answers' self-ranking. Provide if the model should be based on top answers self-chosen by the participant. Every item should have its own ranks. The top answers should have a value of 1, and the other answers should have a value of 0.
Value
A tibble with creativity scores. If format = "full", the original data frame is returned with the scores columns added. Otherwise, only the scores and id columns are returned. The number of creativity score columns (e.g. creativity_score_top2) depends on the top argument.
See Also
tidyr::pivot_wider for converting the output to wide format by yourself.
Examples
## Not run:
data("mtscr_creativity", package = "mtscr")
mtscr_score(mtscr_creativity, id, item, SemDis_MEAN, top = 1:2)
# add scores to the original data frame
mtscr_score(mtscr_creativity, id, item, SemDis_MEAN, format = "full")
# use self-chosen best answers
data("mtscr_self_rank", package = "mtscr")
mtscr_score(mtscr_self_rank, subject, task, avr, self_ranking = top_two)
## End(Not run)
Self-chosen best answers
Description
An example dataset with best answers self-chosen by the participant. Use with the self_ranking argument in mtscr().
Usage
mtscr_self_rank
Format
mtscr_self_rank
A tibble with 3225 rows and 4 columns:
- subject: participant's unique identification number
- task: divergent thinking task number
- avr: average judges' rating
- top_two: indicator of the self-chosen two best answers; 1 if chosen, 0 if not
Prepare database for MTS
Description
Prepare database for MTS analysis.
Usage
mtscr_wrangle(
df,
id_column,
item_column = NULL,
score_column,
top = 1,
minimal = FALSE,
ties_method = c("random", "average"),
normalise = TRUE,
self_ranking = NULL
)
Arguments
df
Data frame in long format.
id_column
Name of the column containing participants' id.
item_column
Optional, name of the column containing distinct trials (e.g. names of items in AUT).
score_column
Name of the column containing divergent thinking scores (e.g. semantic distance).
top
Integer or vector of integers (see examples), number of top answers to prepare indicators for. Default is 1, i.e. only the top answer.
ties_method
Character string specifying how ties are treated when ordering. Can be "random" (default) or "average".
normalise
Logical, should the creativity score be normalised? Default is TRUE.
self_ranking
Name of the column containing answers' self-ranking. Provide if the model should be based on top answers self-chosen by the participant. Every item should have its own ranks. The top answers should have a value of 1, and the other answers should have a value of 0.
Value
The input data frame with additional columns:
- .z_score: Numerical, z-score of the creativity score
- .ordering: Numerical, ranking of the answer relative to participant and item
- .ordering_topX: Numerical, 0 for the X top answers, otherwise the value of .ordering
The number of .ordering_topX columns depends on the top argument. If minimal = TRUE, only the new columns and the item and id columns are returned. The values are relative to the participant AND item, so the values for different participants scored for different tasks (e.g. uses for "brick" and "can") are distinct.
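mtscr_wrangle() has no Examples section of its own; a sketch mirroring the mtscr_prepare() example, assuming the two functions share the interface shown in Usage:

```r
# indicators for the top 1 and top 2 answers, new columns only
data("mtscr_creativity", package = "mtscr")
mtscr_wrangle(mtscr_creativity, id, item, SemDis_MEAN, top = 1:2, minimal = TRUE)
```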
Construct mtscr objects
Description
These constructors are for internal use.
Usage
new_mtscr(x, df, ...)
Extract scores from mtscr model
Description
Extract the scores from a model fitted with mtscr().
Usage
## S3 method for class 'mtscr'
predict(object, ..., minimal = FALSE, id_col = TRUE)
## S3 method for class 'mtscr_list'
predict(object, ..., minimal = FALSE, id_col = TRUE)
Arguments
object
A model or a model list fitted with mtscr().
...
Additional arguments. Currently not used.
minimal
If TRUE, only the creativity scores are returned (one row per person). Default is FALSE.
id_col
If TRUE (default), the id column is included in the output.
Value
The return value is always a tibble, but its content depends mainly on the minimal argument:
- If minimal = FALSE (default), the original data frame is returned with the creativity scores columns added.
- If minimal = TRUE, only the creativity scores are returned (i.e., one row per person).
Functions
- predict(mtscr_list): Extract scores from a model list fitted with mtscr().
Examples
data("mtscr_creativity", package = "mtscr")
mtscr_creativity <- mtscr_creativity |>
dplyr::slice_sample(n = 500) # for performance, ignore
fit <- mtscr(mtscr_creativity, id, SemDis_MEAN, item, top = 1:3)
# for a single model from a list
predict(fit$top1)
# for a whole list of models
predict(fit)
# person-level scores only
predict(fit, minimal = TRUE)
# you can also achieve more classic predict() behaviour
mtscr_creativity$score <- predict(fit, id_col = FALSE)
mtscr_creativity |>
tidyr::unnest_wider(score, names_sep = "_") # Use to expand list-col
Fit measures for mtscr model
Description
Summarise the overall fit of a single model fitted with mtscr().
Usage
## S3 method for class 'mtscr'
summary(object, ...)
## S3 method for class 'mtscr_list'
summary(object, ...)
Arguments
object
A mtscr model or a mtscr_list object.
...
Additional arguments. Currently not used.
Value
A tibble with the following columns:
- model: The model number (only if a list of models is provided)
- nobs: Number of observations
- sigma: The square root of the estimated residual variance
- logLik: The log-likelihood of the model
- AIC: The Akaike information criterion
- BIC: The Bayesian information criterion
- df.residual: The residual degrees of freedom
- emp_rel: The empirical reliability
- FDI: The first difference of the empirical reliability
Functions
- summary(mtscr_list): Get fit measures for a list of models fitted with mtscr().
Examples
data("mtscr_creativity", package = "mtscr")
mtscr_creativity <- mtscr_creativity |>
dplyr::slice_sample(n = 500) # for performance, ignore
fit1 <- mtscr(mtscr_creativity, id, SemDis_MEAN, item, ties_method = "average")
fit3 <- mtscr(mtscr_creativity, id, SemDis_MEAN, item, top = 1:3, ties_method = "average")
summary(fit1)
summary(fit3)