Acceptability rating study

  • 28th Sep 2024
  • Updated on 14th May 2025
Experiment element only

This documentation describes only one part of an experiment. For other tasks, see the related pages.

Acceptability ratings

An acceptability rating or acceptability judgment study is a research method used to understand how people judge the quality or "naturalness" of sentences. Participants are presented with sentences and asked to rate how acceptable or "normal" each sentence sounds, usually on a numerical scale from 1 to 5 or from 1 to 7. Through these ratings, researchers can learn more about how the sentences are processed and interpreted.

While this demo has a linguistic focus, the template can be used for recording ratings in response to other questions.

Overview

This documentation presents two experiments. One uses the native Ibex AcceptabilityJudgment.js script to show a single prompt and scale (see Single question below). The other combines the PCIbex text and scale elements to present one or more questions at once (see Multiple questions below). Both experiments contain an exercise section and a main experiment section. The item, condition, list, and item type variables are logged, in addition to the responses and response times.

The structure of the code for both experiments is as follows:

  1. Setup and Configuration
  2. Experiment Structure (order of the experiment stages)
    1. Set Latin square group/select list
    2. Exercise: randomize("items-exercise")
    3. Start of main experiment notice: "start_experiment"
    4. Main experiment: randomize("items-experiment")
    5. Sending results: SendResults()
  3. Stimuli Presentation
  4. Start of experiment notification

Unlike the self-paced reading study, this study uses simple randomization; a sketch of the resulting sequence of events is shown below. For a fixed order of presentation, remove randomize() from the sequence and keep only "items-exercise" and "items-experiment".
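For orientation, here is a minimal sketch of what the sequence of events could look like in PCIbex; the trial labels are taken from the structure list above, and the labels in your own template may differ:

// Sketch only: the order of the experiment stages listed above
Sequence(
    "setcounter",                    // set Latin square group/select list
    randomize("items-exercise"),     // exercise trials in random order
    "start_experiment",              // start of main experiment notice
    randomize("items-experiment"),   // main trials in random order
    SendResults()                    // send the results
)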

Read more on randomization in the PCIbex and Ibex documentation.

Dependencies

  1. Resources
    • items.csv list of exercise stimuli
  2. Scripts
    • main.js
  3. Aesthetics
    • global_main.css
    • PennController.css
    • Scale.css
    • Question.css (for multiple questions)
  4. Modules
    • AcceptabilityJudgment.js
    • FlashSentence.js (for single question)
    • PennController.js
    • Question.js (for single question)
    • Scale.js
    • VBox.js (for single question)

Other files, modules and aesthetics may be necessary if your experiment uses other trials, e.g. a form for recording participant information.

Stimulus file

Names

The column names and their capitalization matter. If you change the column names or the file name, adjust your code accordingly.

The stimuli are contained in the file items.csv, whose structure is shown in the table below. Both experiments use a file with this structure; only the contents differ, so you can swap the files between the experiments if you like.

  • ITEM: the item ID
  • CONDITION: the condition ID
  • TYPE: the stimulus type (exercise, item, or filler)
    • exercise: shown only during the exercise trials in random order
    • item and filler: shown only during the main experiment trials in pseudorandomized order
  • SENTENCE: the sentence to be displayed during rating
  • LIST: the list ID or Latin square group, used to determine which version of the experiment a participant sees
ITEM   CONDITION   TYPE       SENTENCE                                LIST
1      C           item       Niklas war freiwillig zufrieden.        1
1      A           item       Niklas war absichtlich zufrieden.       2
1      B           item       Niklas war bewusst zufrieden.           3
1001   filler      filler     Henrike fuhr nach Brasilien.            1
1001   filler      filler     Henrike fuhr nach Brasilien.            2
1001   filler      filler     Henrike fuhr nach Brasilien.            3
2001   exercise    exercise   In Internetforen diskutierten Eltern.   1
2001   exercise    exercise   In Internetforen diskutierten Eltern.   2
2001   exercise    exercise   In Internetforen diskutierten Eltern.   3
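Note that PCIbex automatically recognizes a table column named list or group (case-insensitive): together with the internal counter it determines which subset of rows a given participant sees, so the Latin square design requires no additional filtering code.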

Single question

Acceptability judgment task with one question

Participants are presented with sentences and are simultaneously asked to rate their naturalness on a Likert scale (a 7-point scale in version 1 below, a 5-point scale in version 2). This implementation ensures randomized presentation of stimuli and logs the results for accurate data collection.

Answers can be given by clicking on a number or pressing a predetermined key, e.g. 1-7. When clicking or hovering, the corresponding option is highlighted; when a key is pressed, the experiment continues immediately.

Acceptability rating with automatic continuation (version 1)

This acceptability rating code is governed by a combination of AcceptabilityJudgment.js, Question.js, Scale.js, and VBox.js, as well as their corresponding aesthetics files.

The AcceptabilityJudgment controller must include the following parameters:

  • s: the sentence
  • q: the question
  • as: the answers

There are other optional parameters, e.g. whether there is a correct answer; they are described in detail in the Ibex documentation. The code below already uses three of them: presentAsScale, leftComment, and rightComment.

// Trial
Template("items.csv", row =>
    newTrial("items-" + row.TYPE,
        newController("AcceptabilityJudgment", {
            s: row.SENTENCE,  // The sentence to present
            q: "Wie natürlich klingt dieser Satz?",  // The question: "How natural does this sentence sound?"
            as: ["1", "2", "3", "4", "5", "6", "7"],  // 7-point scale answers
            presentAsScale: true,  // Display answers as a scale
            leftComment: "sehr unnatürlich",  // Left label: "very unnatural"
            rightComment: "sehr natürlich"  // Right label: "very natural"
        })
        .center()
        .print()
        .log()
        .wait()
        .remove()
    )
    .log("LIST", row.LIST)
    .log("ITEM", row.ITEM)
    .log("CONDITION", row.CONDITION)
    .log("TYPE", row.TYPE)
);

Acceptability rating with automatic continuation (version 2)

If you prefer to display only one question, without the possibility of changing the answer and without a Continue button to click, modify the Template() like so:

// Acceptability rating trial
Template("items.csv", row =>
    newTrial( "items-"+row.TYPE,
        // Display the sentence and the scale
        newText("sentence", row.SENTENCE)
            .cssContainer({"margin-top":"2em", "margin-bottom":"1em"})
            .center()
            .print()
            ,
        newText("question", "Wie natürlich klingt dieser Satz?")
            .cssContainer({"margin-top":"2em", "margin-bottom":"1em"})
            .center()
            .print()
            ,        
        newScale("natural", 5)
          .cssContainer({"width": "60vw"})
          .before( newText("left", "sehr<br>unnatürlich") )
          .after( newText("right", "sehr<br>natürlich") )
          .keys("1","2","3","4","5")
          .log()
          .once()
          .labelsPosition("top")
          .center()
          .print()
        ,
        // Wait briefly to display which option was selected
        newTimer("wait", 300)
            .start()
            .wait()
    )

    // Record trial data
    .log("ITEM"     , row.ITEM)
    .log("CONDITION", row.CONDITION)
    .log("TYPE"     , row.TYPE)
    .log("LIST"     , row.LIST)
);

The key differences from multiple questions are:

  • Adding once() to the newScale() so that only the first answer is recorded and it cannot be changed.
  • Adding newTimer() to briefly show which answer was selected.
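To use a different scale length, change the number of points in newScale() (e.g. newScale("natural", 7) for a 7-point scale) and extend the .keys() list to match.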

Multiple questions

Acceptability judgment task with multiple questions

Participants are presented with sentences and are simultaneously asked to rate their naturalness on a 1–5 Likert scale. This implementation ensures randomized presentation of stimuli, response validation, and result logging for accurate data collection.

Answers can be given by clicking on a number or pressing a predetermined key. Each scale needs its own unique keys, e.g. 1-5 for scale 1, qwert for scale 2, and asdfg for scale 3. The selected option is highlighted, and the experiment continues only once all required questions have been answered.

Multiple questions with manual continuation

This is the default option for this template. The participant must answer three questions before continuing to the next display. They can change the answers.

// Trial
Template("items.csv", row =>
    newTrial( "items-"+row.TYPE,
        newText("sentence", row.SENTENCE)
            .cssContainer({"margin-top":"2em", "margin-bottom":"1em"})
            .center()
            .print()
            ,
        newText("question_1", "Wie natürlich klingt dieser Satz?")
            .cssContainer({"margin-top":"2em", "margin-bottom":"1em"})
            .center()
            .print()
            ,        
        newScale("natural", 5)
          .cssContainer({"width": "60vw"})
          .before( newText("left", "sehr<br>unnatürlich") )
          .after( newText("right", "sehr<br>natürlich") )
          .keys("1","2","3","4","5")
          .log()
          .labelsPosition("top")
          .center()
          .print()
            ,
        newText("question_2", "Wie verständlich ist dieser Satz?")
            .cssContainer({"margin-top":"2em", "margin-bottom":"1em"})
            .center()
            .print()
            ,   
        newScale("comprehensible", 5)
          .cssContainer({"width": "60vw"})
          .before( newText("left", "sehr<br>unverständlich") )
          .after( newText("right", "sehr<br>verständlich") )
          .keys("q","w","e","r","t")
          .log()
          .labelsPosition("top")
          .center()
          .print()
        ,
        newText("question_3", "Wie stilistisch wohlgeformt ist dieser Satz?")
            .cssContainer({"margin-top":"2em", "margin-bottom":"1em"})
            .center()
            .print()
            ,   
       newScale("style", 5)
        .cssContainer({"width": "60vw"})
        .before( newText("left", "stilistisch<br>sehr schlecht") )
        .after( newText("right", "stilistisch<br>sehr gut") )
        .keys("a","s","d","f","g")
        .log()
        .labelsPosition("top")
        .center()
        .print()
    ,
    // Clear error messages if the participant changes the input
    newKey("check answers", "") 
        .callback( getText("errorSelect").remove() )
   ,
   // Continue only if all the scales have an input
    newButton("next_trial", "Weiter")
        .cssContainer({"margin-top":"1em"})
        .center()
        .print()
        .wait(
             newFunction('dummy', ()=>true).test.is(true)
            .and( getScale("natural").test.selected())
            .and( getScale("comprehensible").test.selected())
            .and( getScale("style").test.selected())
                .failure( 
                    newText('errorSelect', "Bitte beantworten Sie alle Fragen.")
                        .color("Crimson").center().print()
                         )
            )
        )
    // Record trial data
    .log("ITEM"     , row.ITEM)
    .log("CONDITION", row.CONDITION)
    .log("TYPE"     , row.TYPE)
    .log("LIST"     , row.LIST)
);
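In the .wait() command, newFunction("dummy", ()=>true) is a test that always succeeds; it merely serves as the head of the chained .and() tests, so that a single .failure() message covers all three scales. The newKey element listens for any keypress and removes the error message as soon as the participant changes an answer via the keyboard.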

Other code

The following code customizes the progress bar text (progressBarText), the message displayed while the data are being sent (sendingResultsMessage), and the completion message displayed upon successful transmission (completionMessage). In the absence of an ending display, the completion message is the last thing participants see in the experiment. The code also sets up the counter, which ensures that the next participant is assigned a new list; the "setcounter" trial must be included in the sequence of events, as shown in the structure overview above.

// Customized messages and counter
var sendingResultsMessage = "Die Daten werden übertragen.";  // "The data are being transferred."
var progressBarText = "Fortschritt";  // "Progress"
var completionMessage = "Die Ergebnisse wurden erfolgreich übertragen. Vielen Dank!";  // "The results were transmitted successfully. Thank you!"

SetCounter("setcounter");
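If you would rather show a dedicated ending display, a minimal sketch could look as follows; the label "end", the German text, and the trial itself are assumptions, and the trial would have to be added to the sequence of events after SendResults():

// Sketch only: an optional ending display shown after SendResults()
newTrial("end",
    newText("Vielen Dank für Ihre Teilnahme!")  // "Thank you for your participation!"
        .center()
        .print()
    ,
    // Waiting on a button that is never printed keeps this display on screen
    newButton().wait()
)
.setOption("countsForProgressBar", false)  // do not count this trial in the progress bar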

The following code sets up the beginning of the experiment by initializing a new trial and displaying a message that tells participants the main part is starting. Participants must click the button to proceed to the next part of the experiment. The code is identical in both templates.

// Experiment start display
newTrial( "start_experiment" ,
    newText("<h2>Jetzt beginnt der Hauptteil der Studie.</h2>")
        .print()
    ,
    newButton("go_to_experiment", "Experiment starten")
        .print()
        .wait()
)

Running the code

To run the experiment, clone the GitHub repository containing the experiment files or download them directly. Alternatively, use the demo links provided in the repository to test the experiment online before deploying it.

Once the files are uploaded, launch the experiment through PCIbex to start collecting data.

Old acceptability rating template

An online questionnaire experiment in which participants read a sentence and are simultaneously presented with a 7-point Likert scale. They must rate how natural the sentence sounds to them before proceeding to the next sentence. This template is no longer maintained.

Download for PCIbex 1.9, Demonstration link.