(1) Overview

Context

Collection Date(s)

October 2019

Background

People across the political spectrum differ on a number of attitudes. This includes attitudes tightly connected to politics (e.g., support for gay marriage or presidential candidates; [1]), group-related attitudes (e.g., attitudes about Black people or religious people; [2]), household products (e.g., energy efficient lightbulbs; [3]), and abstract concepts (e.g., social change; [4]). Less research has assessed which types of attitudes are more or less likely to exhibit ideological differences. For example, should researchers expect similarly sized ideological differences across all types of attitudes, or only for specific sub-types of attitudes?

To answer these questions, and to start mapping the space of ideological similarities and differences, we [5] analysed data from the Attitudes, Identities, and Individual Differences (AIID) Study [6]. Using the AIID data, we estimated the size of ideological differences on both self-report and reaction-time/implicit measures of attitudes for 95 attitude pairs (e.g., Gay People – Straight People, Security – Freedom) and on self-report measures of attitudes for 190 individual attitudes (e.g., Gay People, Straight People, Security, Freedom).

To better understand why ideological differences are larger for some attitudes than for others, we collected new data from two additional samples. One sample rated each attitude pair and the other rated each individual attitude; specifically, participants rated the reasons people may disagree about the attitudes. For example, some theories suggest that ideological differences arise from differences in dispositions toward complexity and threat [4]. Other theories suggest that ideological differences primarily arise from differences in political, moral, or religious values [2, 7] (for further justification for the use of these constructs, see [5]). Therefore, we asked these new samples of participants to rate the extent to which disagreement over the attitude pairs or the individual attitudes was due to reasons related to threat, complexity, morality, politics, religion, or harm.

We found that attitude pairs associated with disagreements due to threat, complexity, morality, politics, religion, or harm had larger ideological differences than attitude pairs scoring low on these measures. When all of the ratings were entered simultaneously as predictors of the size of ideological differences in a regression analysis, only disagreement attributed to politics remained a significant correlate of ideological differences. Here we describe the attitude-rating data, which may be valuable for other scholars.
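As a rough illustration of that analytic step, the sketch below regresses an effect-size measure of ideological differences on the six mean disagreement ratings. The data frame pair_data and its column names are hypothetical placeholders introduced for illustration only; they are not the variable names used in the released files or in [5].

# Minimal sketch (assumed names): pair_data is a hypothetical data frame with
# one row per attitude pair, an effect-size column (ideological_difference),
# and the six mean disagreement ratings.
fit <- lm(ideological_difference ~ threat + complexity + moral +
            political + religious + harm,
          data = pair_data)

summary(fit)  # in our analysis [5], only the political rating remained significant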

(2) Methods

Data were collected via a questionnaire distributed on Prolific [8, 9], a web service that links potential participants with researchers and facilitates participant payment. Researchers post short descriptions of studies for participants to complete, and potential participants can choose to complete the studies for which they are qualified. We collected data from two samples, which we call Rating Sample 1 and Rating Sample 2. Rating Sample 1 participants rated attitude pairs. Rating Sample 2 participants rated individual attitudes. The individual attitudes are the same attitudes as in the attitude pairs, just presented individually rather than jointly.

Sample

For Rating Sample 1, we collected data from 383 participants (152 men, 221 women, 5 reporting other gender identities, and 5 with missing values; Mage = 34.7, SDage = 12.4). A study entitled "Attitudes and Opinions" was opened on Prolific and participants could select into the study. We restricted the study to American citizens who are also American residents using Prolific's built-in screening tool. Participants from a prior pilot study (reported in [5]) and from Rating Sample 2 were not allowed to participate. Participants were paid £0.90.

For Rating Sample 2, we collected data from 778 participants (386 men, 371 women, 3 reporting other gender identities, and 18 with missing values; Mage = 34.9, SDage = 12.8). A study entitled "Attitudes and Opinions" was opened on Prolific and participants could select into the study. We restricted the study to American citizens who are also American residents using Prolific's built-in screening tool. Participants from a prior pilot study (reported in [5]) and from Rating Sample 1 were not allowed to participate. Participants were paid £0.90.

Materials

We used all of the available attitude pairs (Rating Sample 1) and individual attitudes (Rating Sample 2) from the AIID.

For Rating Sample 1, participants were given the following instructions: "Imagine two people talking about two options (Option A and Option B). These two people may disagree about whether Option A or Option B is better for a variety of reasons. In this task, we are interested in the extent to which you think different reasons could affect whether people disagree with one another (i.e., why one person prefers option A and the other prefers option B)." Then, on subsequent pages, participants were given the following instructions for each attitude pair: "People could disagree on whether they prefer [Attitude X] versus [Attitude Y] based on:" where Attitude X and Attitude Y were replaced with the attitude objects. They then rated each of the different possible reasons for disagreement on a scale ranging from 1 to 7 with the following labels: Strongly disagree, Disagree, Somewhat disagree, Neither agree nor disagree, Somewhat agree, Agree, and Strongly agree. The different possible reasons were Perceived threat, The complexity of the topic, Moral reasons, Political reasons, Religious reasons, and Perceived harm. See Figure 1 for an example. Each participant completed this process for a random selection of 10 different attitude pairs (M n/item = 39.8, SD = 5.5, Range [27, 55]). Following the rating task, participants completed demographic measures including age, gender, and self-reported political ideology.

Figure 1 

Example rating task completed by Sample 1.
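The per-item coverage statistics reported above (M n/item, SD, and range) can be reproduced from the long-format ratings with a short summary such as the sketch below. The data frame ratings_long and the columns participant_id and attitude_pair are assumed placeholder names, not necessarily the names in the released files; see the codebooks for the actual variables.

# Minimal sketch (assumed column names): how many participants rated each attitude pair.
library(dplyr)

counts <- ratings_long %>%
  distinct(participant_id, attitude_pair) %>%  # one row per participant-by-pair rating
  count(attitude_pair, name = "n_raters")

mean(counts$n_raters)   # M n/item
sd(counts$n_raters)     # SD
range(counts$n_raters)  # [min, max]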

For Rating Sample 2, participants were given the following instructions: "Imagine two people talking about an option (Option A). These two people may disagree about whether Option A is good for a variety of reasons. In this task, we are interested in the extent to which you think specific reasons could affect whether people disagree with one another (i.e., why one person likes Option A and the other dislikes it)." Then, on subsequent pages, participants were given the following instructions for each attitude: "People could disagree on the extent to which they prefer Attitude X based on:" where Attitude X was replaced with the attitude object. They then rated each of the different possible reasons for disagreement on a scale ranging from 1 to 7 with the following labels: Strongly disagree, Disagree, Somewhat disagree, Neither agree nor disagree, Somewhat agree, Agree, and Strongly agree. The different possible reasons were Perceived threat, The complexity of the topic, Moral reasons, Political reasons, Religious reasons, and Perceived harm. See Figure 2 for an example. Each participant completed this process for a random selection of 10 different attitudes (M n/item = 40.3, SD = 6.0, Range [23, 60]). Due to a randomization error, the item "Jews" was completed by all participants as the last (11th) item. Following the rating task, participants completed demographic measures including age, gender, and self-reported political ideology.

Figure 2 

Example rating task completed by Sample 2.

The distributions of the average ratings for each attitude in Rating Samples 1 and 2, as well as their associations with one another, are shown in Figures 3 and 4. The correlations between the ratings are all relatively high. This may be because the ratings are aggregated for each attitude, which removes error, or because participants see these different constructs as related to one another.

Figure 3 

Rating Sample 1 summary of mean ratings for the attitude pairs. Below the diagonal: Scatterplot of the ratings for each attitude pair. On the diagonal: Density plots of the ratings for each attitude pair. Above the diagonal: Correlation between the ratings for each attitude pair.

Figure 4 

Rating Sample 2 summary of mean ratings for the individual attitudes. Below the diagonal: Scatterplot of the ratings for each individual attitude. On the diagonal: Density plots of the ratings for each individual attitude. Above the diagonal: Correlation between the ratings for each individual attitude.
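A minimal sketch of the aggregation step behind Figures 3 and 4 is given below: it averages the raw ratings within each attitude and then correlates the six resulting mean ratings. The object ratings_long and the rating column names are assumptions for illustration; the released files use the variable names documented in the codebooks.

# Minimal sketch (assumed column names): aggregate raw ratings per attitude and
# inspect how the six mean ratings relate to one another.
library(dplyr)

attitude_means <- ratings_long %>%
  group_by(attitude) %>%
  summarise(across(c(threat, complexity, moral, political, religious, harm),
                   ~ mean(.x, na.rm = TRUE)))

# Correlations among the aggregated ratings (cf. Figures 3 and 4)
cor(attitude_means[, c("threat", "complexity", "moral",
                       "political", "religious", "harm")])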

Procedures

The Rating Sample 1 study was opened on Prolific for 380 participants. After selecting into our study, participants were directed to a Qualtrics survey where they first completed a consent form. Then, participants were randomly assigned to complete 10 of the 95 attitude pairs. Following the rating task, participants completed the demographic measures. The entire study took a median time of 4.7 minutes.

The Rating Sample 2 study was opened on Prolific for 760 participants. After selecting into our study, participants were directed to a Qualtrics survey where they first completed a consent form. Then, participants were randomly assigned to complete 10 of the 190 individual attitudes (with the exception of the item "Jews" described above). Following the rating task, participants completed the demographic measures. The entire study took a median time of 4.7 minutes.

Quality Control

Prolific uses a number of procedures to ensure participant quality and prevent bots [10]. Additionally, we restricted our study to participants with an approval rate greater than 90% and to naïve participants who had not completed our pilot study or either of the two rating studies described here. Approval rate refers to the percentage of a participant's prior submissions that have been approved by other researchers. A lower approval rate is typically indicative of a consistently careless responder.

Ethical issues

Any potentially identifying information (i.e., IP addresses) was removed from the data. The project was approved by the ethics committee at Tilburg University's School of Social and Behavioral Sciences. All participants provided informed consent prior to engaging in the study.

(3) Dataset description

Object name

Repository contains

codebook-sample1.html

codebook-sample2.html

datapaper.sample1.R

datapaper.sample2.R

Sample_1_AIID_Study.docx

Sample_2_AIID_Study.docx

Sample_1_AIID_Study Simplified.docx

Sample_2_AIID_Study Simplified.docx

sample1anon.csv

sample1demo.xlsx

sample1means.csv

sample1scatter.png

sample2anon.csv

sample2demo.xlsx

sample2means.csv

sample2scatter.png

Data type

These are primary data, scripts to produce numbers and figures in the manuscript, and files describing the data.

Format names and versions

sample1anon.csv and sample2anon.csv are the raw, anonymized data files. The variables can be interpreted with the help of the study materials (Sample_1_AIID_Study.docx and Sample_2_AIID_Study.docx; simplified versions: Sample_1_AIID_Study Simplified.docx and Sample_2_AIID_Study Simplified.docx) and the codebooks (codebook-sample1.html and codebook-sample2.html).

The scripts datapaper.sample1.R and datapaper.sample2.R produce the numbers and figures in this manuscript. They create Excel files containing the demographic information reported in the text (sample1demo.xlsx and sample2demo.xlsx), produce the two scatterplots reported in the text (sample1scatter.png and sample2scatter.png), and compute the means for each attitude and rating (sample1means.csv and sample2means.csv). These latter files are useful for anyone simply interested in which attitudes score highest on which domains.
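For reuse, the released files can be loaded directly; the sketch below shows one way to read the anonymized raw data and the per-attitude means. Only the file names are taken from the repository; the variable names inside each file should be checked against codebook-sample1.html and codebook-sample2.html.

# Minimal sketch: load the released files (file names as in the repository;
# column names should be checked against the codebooks).
library(readr)

sample1_raw   <- read_csv("sample1anon.csv")   # raw, anonymized ratings (Sample 1)
sample1_means <- read_csv("sample1means.csv")  # mean rating per attitude pair and domain
sample2_raw   <- read_csv("sample2anon.csv")   # raw, anonymized ratings (Sample 2)
sample2_means <- read_csv("sample2means.csv")  # mean rating per individual attitude and domain

str(sample1_means)  # inspect which attitudes score highest on which domains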

Data Collectors

The questionnaire was designed by Emily Kubin (Tilburg University) and Mark Brandt (Tilburg University). It was programmed by Emily Kubin. Mark Brandt was responsible for data collection.

Language

All materials are in English.

License

CC-BY

Repository location

https://osf.io/p253v/

DOI: https://doi.org/10.17605/OSF.IO/P253V

Publication date

28/01/2020

(4) Reuse potential

The data collected here can be used for multiple purposes. First, they can be combined with the AIID data, as we have done [5], to test additional hypotheses about attitudes. For example, the AIID includes additional measures of attitude strength, personality, and demographic information that may have different correlates for different types of attitudes. One hypothesis that could be tested is whether stronger attitudes (as determined by responses in the AIID data) are more likely to be attitudes characterized by political disagreements (as rated by the participants in our study). Second, our data can be used to plan studies on attitudinal disagreement. For example, when choosing stimuli, it might be beneficial to choose a diverse range of attitudes to enhance generalizability. One consideration is the extent to which disagreement over an attitude pair is believed to stem from the different reasons assessed in this project. This might mean including attitudes high and low in morality-related disagreement, high and low in political disagreement, and so on.
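As an illustration of the first reuse case, the sketch below merges the per-attitude rating means with a per-attitude summary of the AIID data and correlates attitude strength with the political-disagreement ratings. The object aiid_summary and all column names here are hypothetical placeholders: the AIID data must be obtained from its own repository [6], and the actual variable names in both datasets should be taken from the respective codebooks.

# Minimal sketch (assumed names): relate AIID attitude strength to the
# political-disagreement ratings collected here.
library(dplyr)
library(readr)

ratings <- read_csv("sample2means.csv")  # mean ratings per individual attitude (this project)

# aiid_summary: hypothetical data frame with one row per attitude,
# e.g., columns `attitude` and `attitude_strength`, built from the AIID data [6].
merged <- inner_join(ratings, aiid_summary, by = "attitude")

cor.test(merged$attitude_strength, merged$political)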