1. Participants – who took part?
  2. Materials – what variables were included and how were they measured?
  3. Procedure – what instructions were given and what did participants actually do?

NOTES: For the methods section, the main differences from an experimental report lie in the Materials subsection. In this section, you must thoroughly explain how the items on the questionnaire were chosen, generated, and refined. That is, how did you go about creating the items on this questionnaire? What response scale did you choose and why? How did you decide which items to keep and which to discard, to arrive at the final list of items that the participants responded to? Remember to add clear headings for each of the three subsections (NO DESIGN SECTION REQUIRED FOR THIS ASSIGNMENT).



Questionnaire details: The survey included the statistics self-efficacy items that you developed during the practicals (and on Padlet). It also included a few other scales, some of which have been included so that you can use them to test the convergent and discriminant validity of your scale. These are described in more detail below. It is up to you to decide which of these to include in your report: which you expect to correlate with your scale (convergent validity) and which you expect not to correlate with it (discriminant validity). However, you must include validity tests in your report.

A complete PDF printout of the Qualtrics study (pages 1 to 27) is below. (You should not include this in your report; it is for your reference only.):

An Excel file with only the variables that we are using for this lab report can be seen below (variables 1 to 115):

The Dataset


Because this survey is part of an actual research project, we cannot provide detailed demographic information for the participants (you!) in this study; otherwise, you might be able to identify each other's responses based on age/gender profiles. Instead, I've provided some summary demographics below, which you can include when describing your participant profile:

226 total cases (212 of which are complete)

Gender:

  • 178 females (78.8%), 41 males (18.1%), 7 non-responses (3.1%)

Age (in years):

  • Mean = 21.18, Standard Deviation = 3.61
  • Mode = 20
  • Median = 20
  • Min = 19, Max = 49

Other scales (for testing convergent and discriminant validity)

The references provided below are the original references for these scales. Use them as a starting point to read about other research using these scales: search Google Scholar or Web of Science (or any of your preferred online databases and library resources) for studies citing these scales to learn more about them.

When reading about these scales, you may notice that they are scored either as a total score derived from all their items or as separate scores reflecting various facets. Here, we have used the total scores in the Output provided below.
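If you want to check how a total score is computed by hand, it is simply the row-wise sum of a participant's item responses. A minimal Python sketch (NumPy assumed; the response data are made up for illustration):

```python
import numpy as np

def total_score(item_matrix):
    """Total scale score per participant: row-wise sum across item columns."""
    return np.asarray(item_matrix).sum(axis=1)

# Three participants answering a hypothetical 4-item scale
responses = np.array([
    [3, 4, 2, 5],
    [1, 1, 2, 1],
    [5, 5, 4, 5],
])
totals = total_score(responses)  # totals.tolist() → [14, 5, 19]
```

Some scales average rather than sum the items; either way, the score is a simple aggregate of the item columns.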

Statistics Anxiety Rating Scale (STARS; Cruise, Cash, & Bolton, 1985)

  • Items named stats_anx_[number], 23 items in total

Revised Maths Anxiety Rating Scale (R-MARS; Baloğlu & Zelhart, 2007)

  • Items named maths_anx_[number], 19 items in total

State-Trait Inventory for Cognitive and Somatic Anxiety (STICSA; Ree, French, MacLeod, & Locke, 2008)

  • The State and Trait items should be computed as different scales
  • Trait Anxiety items are named trait_[number], 21 items in total
  • State Anxiety items are named post_state_[number], 21 items in total

The state anxiety items were given to participants after they completed a series of maths and statistics exercises. These multiple-choice tests were designed specifically for this study to be challenging enough to elicit anxiety; they are not intended as a measure of statistics or maths knowledge.

Self-efficacy related to learning statistics

This is the scale that you developed; testing how well it performs is the central part of your assessment.

  • Items are named self_efficacy_[number], 28 items in total

The Output

This is the output that you will have to write about in your lab report. 

Your task is to report these results and comment on each analysis decision made: whether you would have done anything differently, and why or why not. The analysis has been done for you and no exclusions were made; however, you may decide that exclusions should have been made based on deviations from normality, intercorrelations, etc. You cannot make these exclusions and rerun the analysis, but you can (and should!) comment on them in your lab report. The same applies to the decisions about sampling adequacy, sphericity, extraction method, rotation method, and the reliability analysis. Your report will be judged on how well you can comment on these decisions.

  1. Frequencies are all below:
  2. Factor analysis is all below:
  • Correlation table in Excel (for easier inspection, as discussed in Week 8 practicals) can be found below
  • Extraction was done based on Kaiser’s criterion
  • Rotation used was Direct Oblimin rotation
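Kaiser's criterion retains every factor whose eigenvalue of the item correlation matrix exceeds 1 (i.e. the factor explains more variance than a single item). If you wanted to verify this by hand, a minimal Python sketch (NumPy assumed; the correlation matrix is illustrative, not your data):

```python
import numpy as np

def kaiser_count(corr_matrix):
    """Number of factors to retain under Kaiser's criterion:
    eigenvalues of the correlation matrix greater than 1."""
    eigenvalues = np.linalg.eigvalsh(np.asarray(corr_matrix))
    return int(np.sum(eigenvalues > 1.0))

# Three items correlating .8 with each other: eigenvalues 2.6, 0.2, 0.2,
# so one factor is retained
R = np.array([
    [1.0, 0.8, 0.8],
    [0.8, 1.0, 0.8],
    [0.8, 0.8, 1.0],
])
print(kaiser_count(R))  # → 1
```

When commenting on the extraction decision, remember that Kaiser's criterion is known to over-extract with large item sets, which is one reason to inspect the scree plot as well.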


This output suggests extracting four factors. Items were assigned to factors based on their primary loadings when there was a cross-loading. As you would in a normal analysis, inspect this assignment and comment on whether it makes conceptual sense. You must also name your factors; in subsequent analyses, they are simply labelled Factor 1, Factor 2, Factor 3, and Factor 4.
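Assigning each item to the factor on which it has its largest absolute loading can be sketched as follows (the loading matrix here is made up purely for illustration; NumPy assumed):

```python
import numpy as np

def assign_by_primary_loading(loadings):
    """For each item (row), return the index of the factor (column)
    carrying its largest absolute loading."""
    return np.abs(np.asarray(loadings)).argmax(axis=1)

# Four hypothetical items, two factors; item 3 cross-loads,
# but its primary loading (.60) is on the second factor
loadings = np.array([
    [0.72,  0.10],
    [0.65, -0.05],
    [0.45,  0.60],
    [0.08,  0.81],
])
print(assign_by_primary_loading(loadings))  # → [0 0 1 1]
```

This is the mechanical rule; your job in the report is to judge whether the resulting item groupings also make conceptual sense.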

  3. Reliability analyses

I have reverse-coded the necessary items before performing the reliability analysis. Given the previous outputs and the fact that some of the items in our scale were reverse-worded, comment on which items had to be reverse-coded (hint: look at the rotated factor loadings).
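Reverse-coding on a Likert scale is a simple transformation: the recoded response is (scale minimum + scale maximum) − original response. A minimal sketch, assuming a 1–5 response scale (NumPy assumed):

```python
import numpy as np

def reverse_code(responses, scale_min=1, scale_max=5):
    """Reverse-code Likert responses: on a 1-5 scale, 1 becomes 5,
    2 becomes 4, and so on."""
    return (scale_min + scale_max) - np.asarray(responses)

print(reverse_code(np.array([1, 2, 3, 4, 5])))  # → [5 4 3 2 1]
```

In the rotated factor matrix, items needing reverse-coding typically show loadings with the opposite sign to the other items on their factor.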

Look at the output and decide whether your scales show good reliability. Comment on what items you may consider excluding to improve the scales’ reliability.
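The "alpha if item deleted" statistic in the reliability output flags items whose removal would raise the scale's Cronbach's alpha. For reference, alpha and the item-deleted values can be sketched in a few lines of Python (NumPy assumed; this mirrors the standard formula, not your specific output):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_participants, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def alpha_if_item_deleted(items):
    """Alpha recomputed with each item dropped in turn; a value higher
    than the full-scale alpha suggests that item hurts reliability."""
    items = np.asarray(items, dtype=float)
    return [cronbach_alpha(np.delete(items, j, axis=1))
            for j in range(items.shape[1])]
```

An item whose item-deleted alpha exceeds the full-scale alpha is a candidate for exclusion, though you should also weigh its conceptual contribution to the construct.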

4. Testing convergent and discriminant validity

Before looking at the output in this section, think about the other variables in your dataset and think about how you expect each of your new factors to relate to these. Were these expectations confirmed by the findings in this output? Keep this in mind when evaluating the convergent and discriminant validity of your scales.
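Convergent validity predicts a sizeable correlation between your factor scores and conceptually related scales (e.g. statistics anxiety), while discriminant validity predicts a weak correlation with unrelated scales. The check itself is just a Pearson correlation; a sketch with entirely made-up scores (NumPy assumed):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two score vectors."""
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical factor scores and a hypothetical anxiety total
# for five participants
factor1 = np.array([10.0, 14.0, 9.0, 20.0, 16.0])
stats_anx = np.array([30.0, 22.0, 33.0, 12.0, 18.0])

r = pearson_r(factor1, stats_anx)
print(round(r, 2))  # a strong negative correlation in this toy data
```

Whether a given correlation counts as evidence for convergent or discriminant validity depends on your predictions, so state those predictions before reporting the coefficients.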


At a minimum, you should include a factor table presenting every item on your questionnaire along with its loading on the relevant factor, the alpha/r, and any other relevant information. How you present the other required information is up to you, but you must be careful to justify any decisions and interpret all results in full.

5. Discussion

  1. Summarise how successful the factor analysis was overall in capturing the target construct.
  2. Strengths and applications – Explain the strengths of the scale in comparison to other existing scales measuring the same construct, including possible uses/applications in other research. You could also think about whether the findings granted you a new understanding of defining the target construct; this is particularly relevant when discussing more hard-to-define constructs (depending on what you want to say, this can also be part of limitations/future directions).
  3. Limitations and future directions – Discuss any problems that arose during the scale development and/or difficulties with the scale (such as items that failed to load, subscales with low reliability, problems with validity) and explain how this impacts the usability, reliability, and/or validity of the scale, and how your findings may inform future attempts to develop another scale for the same or a related construct.
  4. Overall takeaway and conclusion

6. References

7. Appendices

Unless you have run additional analyses (optional and not expected), you do not need any appendices beyond those listed below. These appendices help your marker understand the decisions you've made: you should include any table from which you have drawn information that is presented in the text. This will likely include:

  1. Descriptives and summary statistics
  2. Correlation matrix
  3. Suitability checks
  4. Rotated factor matrix
  5. Scree plot
  6. Reliability results for each subscale
  7. Validity results for each subscale

This output does not need to be in APA style, but it should be legible.
