Forced choice and multiple choice task

  • 27th Sep 2024
  • Updated on 14th May 2025
Under construction

This page and the experiments are under construction. The experiments may not work and may differ from the description.

Experiment element only

This documentation describes only one part of an experiment. For other tasks, see the related pages.

Forced-choice task

A forced-choice task is a research method in which participants are presented with multiple options and must choose exactly one. This task is widely used to study language processing, comprehension, and preferences.

A forced-choice task can be used in tandem with a self-paced reading task, either as an attention check or as a way to record how participants interpreted a stimulus. In contrast to acceptability ratings, the forced-choice task yields discrete responses rather than graded judgments.

Overview

I present two templates for forced-choice tasks. The first one displays a single question and requires an answer; as soon as an answer is given, the experiment continues automatically. There are no predetermined correct answers, and no feedback is provided.

The second template displays several questions per trial. One of the questions has a predetermined correct answer. When a wrong answer is chosen or a question is left unanswered, participants receive feedback. They must answer all questions, and answer the scored question correctly, before continuing to the next trial.

If you wish to use the forced-choice task as an attention check in your experiment, you can adapt these templates or use this old self-paced reading template; a minimal sketch of combining the two tasks follows below.
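As a rough, hedged sketch of that combination (assuming the same items.csv columns as described below, and a hypothetical trial label prefix spr-), a trial could pair the built-in DashedSentence controller for word-by-word self-paced reading with a forced-choice question:

// Sketch only: self-paced reading followed by a forced-choice attention check.
// Assumes the SENTENCE and QUESTION columns of the items.csv described below.
Template("items.csv", row =>
    newTrial("spr-" + row.TYPE,
        // Word-by-word presentation of the sentence (built-in Ibex controller)
        newController("DashedSentence", {s: row.SENTENCE})
            .print()
            .log()
            .wait()
            .remove()
        ,
        // Forced-choice question shown after reading
        newScale("answer", "&#x2714", "&#10006")
            .before(newText("question", row.QUESTION))
            .radio()
            .log()
            .labelsPosition("right")
            .print()
            .wait()
    )
    .log("ITEM", row.ITEM)
    .log("CONDITION", row.CONDITION)
);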

The structure of the code is as follows:

  1. Setup and Configuration
  2. Experiment Structure (order of the experiment stages)
    1. Set Latin square group/select list
    2. Exercise
    3. Start of main experiment notice: "start_experiment"
    4. Main experiment: randomize("items-experiment")
    5. Sending results: SendResults()
  3. List counter
  4. Start of experiment notification
  5. Experiment trial structure

Dependencies

  1. Resources
    • items.csv: the stimuli for the exercise and the main experiment
  2. Scripts
    • main.js
  3. Aesthetics
    • global_main.css
    • PennController.css
  4. Modules
    • PennController.js

Other files, modules and aesthetics may be necessary if your experiment uses other trials, e.g. a form for recording participant information.
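As an illustration only, a minimal sketch of such a form trial might look like this (the trial label intro, the element names, and the German prompt are placeholders, not part of these templates; the label would also need to be added to the Sequence):

// Sketch only: a participant-information form trial
newTrial("intro",
    newText("instructions", "Bitte geben Sie Ihre Teilnehmer-ID ein.")  // "Please enter your participant ID."
        .print()
    ,
    newTextInput("participantID", "")
        .log()
        .print()
    ,
    newButton("continue", "Weiter")
        .print()
        // Only proceed once something has been typed into the field
        .wait( getTextInput("participantID").test.text(/\S+/) )
);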

Stimuli file

Names

The column names and their capitalization matter. If you change the column names or the file name, adjust your code accordingly.

The stimuli are contained in the file items.csv, which has the structure shown in the table below. Experiments with a predetermined correct answer require the column CORRECT_ANSWER; experiments without a correct answer, or where no feedback is provided, can safely omit this column.

  • ITEM: the item ID
  • CONDITION: the condition ID
  • TYPE: the stimulus type (exercise, experiment, or filler)
    • exercise: shown only during the exercise trials, in random order
    • experiment and filler: shown only during the main experiment trials, in randomized order
  • SENTENCE: the phrase or sentence displayed at the top of the trial
  • QUESTION: the question displayed next to the response options
  • CORRECT_ANSWER: the predetermined correct answer; its value must match one of the answer labels (only required when feedback is given)
  • LIST: the list ID or Latin square group, used to present different versions of the experiment (not included in the example table below)
| ITEM | CONDITION | SENTENCE | TYPE | QUESTION | CORRECT_ANSWER |
|------|-----------|----------|------|----------|----------------|
| 1 | 1 | kirchlicher Amtsinhaber | experiment | Kann sich kirchlich auf Amt beziehen? | |
| 2 | 2 | heimtückisches Anschlagsopfer | experiment | Kann sich heimtückisch auf Anschlag beziehen? | |
| 100 | 10 | bunter Hummelgarten | exercise | Kann sich bunt auf Hummel beziehen? | |
| 100 | 20 | wertvolles Teamevent | exercise | Kann sich wertvoll auf Team beziehen? | |
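For reference, the corresponding raw items.csv could look roughly like this (the values are illustrative; fill in CORRECT_ANSWER only if your experiment provides feedback):

ITEM,CONDITION,SENTENCE,TYPE,QUESTION,CORRECT_ANSWER
1,1,kirchlicher Amtsinhaber,experiment,Kann sich kirchlich auf Amt beziehen?,
2,2,heimtückisches Anschlagsopfer,experiment,Kann sich heimtückisch auf Anschlag beziehen?,
100,10,bunter Hummelgarten,exercise,Kann sich bunt auf Hummel beziehen?,
100,20,wertvolles Teamevent,exercise,Kann sich wertvoll auf Team beziehen?,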

Both the exercise sentences and the main-experiment sentences are presented in random order. If your stimuli distinguish items and fillers, the two sets can also be randomized separately and then interleaved; a sketch follows the documentation links below.

Read more on randomization in the PCIbex and Ibex documentation.
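For example, if your items.csv used the TYPE values experiment-item and experiment-filler (which the templates on this page do not), the main experiment stage of the Sequence could interleave the two sets with rshuffle. This is a sketch, not part of the templates:

// Sketch only: randomize items and fillers separately, then interleave them
Sequence("setcounter",
    randomize("items-exercise"),
    "start_experiment",
    rshuffle("items-experiment-item", "items-experiment-filler"),
    SendResults()
)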

One question with automatic continuation

A single forced-choice question

In this demo, the participants see a phrase, a question relating to the phrase, and three answer alternatives (a checkmark, a cross, and a question mark). As soon as the participant chooses one of the answers, the experiment continues. The participant can neither change their answer nor stop the experiment.

  1. Sentence Display: The sentence for each trial is displayed in bold and centered at the top of the screen.
  2. Question Display:
    • A question from the items.csv file is displayed on the left.
    • On the right, three possible responses are displayed in fixed order: a checkmark (✔), a cross (✖), and a question mark (?).
    • The experiment does not continue until a response has been given. Then it continues automatically.
  3. Data Logging: For each trial, the item number, condition, and sentence are logged.
// ###################################################################
// Trial
// ###################################################################
Template("items.csv", row =>
    newTrial("items-" + row.TYPE,
        // Display the phrase in bold, centered at the top of the screen
        newText("sentence", row.SENTENCE)
            .cssContainer({"margin-bottom":"0em", "font-weight":"bold"})
            .center()
            .print()
        ,
        // Forced choice: checkmark, cross, or question mark, with the question to the left
        newScale("answer1", "&#x2714","&#10006","<b>?</b>")
            .center()
            .settings.before(newText("question", row.QUESTION).cssContainer({"padding":"1em", "height": "fit-content", "width":"15em"}))
            .radio()
            .log()
            .labelsPosition("right")
            .print()
            // Continue automatically as soon as an answer is selected
            .wait()
    )
    // Record trial data
    .log("ITEM", row.ITEM)
    .log("CONDITION", row.CONDITION)
    .log("SENTENCE", row.SENTENCE)
);

Multiple forced choice questions

Multiple forced-choice questions

This demo shows two questions, one of which has a correct answer that must be chosen before the trial continues.

  1. Sentence Display: The sentence for each trial is displayed in bold and centered at the top of the screen.
  2. First Question:
    • A scale is created with two options: checkmark ✔ (&#x2714) and cross ✖ (&#10006).
    • A callback is added so that once an option is selected, any error messages related to the first question will be removed.
  3. Second Question:
    • A scale is created with three options.
    • A callback is added to remove error messages once an answer is selected.
  4. Continue Button:
    • The button checks that both questions have been answered. If not, error messages are shown in red.
    • The button also checks if the response to the first question is correct. If it's incorrect, an error message is displayed indicating the wrong answer.
  5. Logging: For each trial, the item number, condition, and sentence are logged.
// ###################################################################
// Trial
// ###################################################################
Template("items.csv", row =>
    newTrial("items-" + row.TYPE,
        newText("sentence", row.SENTENCE)
            .cssContainer({"margin-bottom":"0em", "font-weight":"bold"})
            .center()
            .print()
      ,
        // First question and scale (correct/incorrect)
        newScale("answer1", "&#x2714", "&#10006")
            .settings.before(newText("question", row.QUESTION)
                .cssContainer({"padding":"1em", "height": "fit-content", "width":"15em"}))
            .radio()
            .log()
            .labelsPosition("right")
            .print()
            // Callback to remove the first error message on selection
            .callback(getText("errorSelect1").remove(), getText("errorWrong1").remove())
        ,
        // Second question and scale (e.g., personal response)
        newScale("answer2", "Ja", "Nein", "<b>?</b>")
            .settings.before(newText("question2", "Würden Sie so einen Ausdruck äußern?")
                .cssContainer({"padding":"1em", "height": "fit-content", "width":"15em"}))
            .radio()
            .log()
            .labelsPosition("right")
            .print()
            // Callback to remove the second error message on selection
            .callback(getText("errorSelect2").remove())
        ,

        // Button to check answers
        newButton("next_item", "Weiter")
            .cssContainer({"margin":"1em"})
            .center()
            .print()
            .wait(
                // Ensure both questions are answered
                getScale("answer1").test.selected()
                    .failure(
                        newText('errorSelect1', "Bitte beantworten Sie die erste Frage.")
                            .color("Crimson")
                            .print()
                    )
                .and(
                    getScale("answer2").test.selected()
                        .failure(
                            newText('errorSelect2', "Bitte beantworten Sie die zweite Frage.")
                                .color("Crimson")
                                .print()
                        )
                )
                // Check if the first question is answered correctly
                .and(
                    getScale("answer1").test.selected(row.CORRECT_ANSWER)
                        .failure(
                            newText('errorWrong1', "Die Antwort auf die erste Frage ist leider falsch.")
                                .color("Crimson")
                                .print()
                        )
                )
            )
    )
    // Record trial data
    .log("ITEM", row.ITEM)
    .log("CONDITION", row.CONDITION)
    .log("SENTENCE", row.SENTENCE)
);

Other code

The following code customizes the progress bar text (progressBarText), the message displayed while the results are being sent (sendingResultsMessage), and the message displayed after successful data transmission (completionMessage). In the absence of a dedicated ending screen, the completion message is the last thing participants see in the experiment; a sketch of such an ending screen follows the code below. The code also sets up the counter, which ensures that the next participant is assigned a new list.

// ###################################################################
// Customized messages and counter
// ###################################################################

var sendingResultsMessage = "Die Daten werden übertragen.";
var progressBarText = "Fortschritt";
var completionMessage = "Die Ergebnisse wurden erfolgreich übertragen. Vielen Dank!";

SetCounter("setcounter")

// ###################################################################
// Event sequence
// ###################################################################

Sequence("setcounter", randomize("items-exercise"), "start_experiment", randomize("items-experiment"), SendResults())

The following code sets up the beginning of the main experiment: it initializes a new trial and displays a message telling participants that the experiment is about to start. Participants must click the Continue button to proceed to the next part of the experiment. This trial is identical in both templates.

// ###################################################################
// Start experiment screen
// ###################################################################

newTrial( "start_experiment" ,
    newText("The main experiment begins now.")
        .print()
    ,
    newButton("go_to_experiment", "Continue")
        .print()
        .wait()
);

Running the experiments

To run the experiment, clone or download the GitHub repository containing the experiment files. Alternatively, use the demo links provided in the repository to test the experiment online before deploying it:

  • Multiple questions with revision possibility demo link
  • One question without revision demo link

Once uploaded, launch the experiment through PCIbex to start collecting data.