Unit 0: Research Design and Methods

Synopsis

In this unit we provide an overview of the tools and methods used in the study of music perception and cognition. These include qualitative and quantitative approaches drawn from cognitive science, in particular computational modeling, experimental psychology, and cognitive neuroscience. Statistics, the science of drawing conclusions from data, is the backbone of understanding across all of these fields. We conclude with some practical information on getting started with your own music science research, including ethical principles and considerations for interdisciplinary work.

Research Methods in Music Perception and Cognition

Music perception and cognition is an interdisciplinary field concerned with applying the methods of cognitive science (experimental, computational, and neuroscientific) to issues and problems in the study of music. Cognitive scientists use a host of methods to study perception and cognition. Every method has limitations, but by combining different methods cognitive scientists can provide powerful convergent evidence for the questions they investigate. This chapter provides an overview of the terminology, research methods, and experimental techniques through which cognitive scientists investigate questions about the perception and cognition of music.

Experimental Psychology

Psychology is a branch of science concerned with studying the mind and behavior. While there are many subfields of psychology, three of them – cognitive psychology, animal psychology, and neuropsychology – have contributed a great deal to the study of perception through experimentation. Unlike other areas of psychology, such as social psychology, that attempt to draw conclusions about human behavior by simply observing the world, these three areas often employ some form of active manipulation in their research. Hence, each of these subfields falls under the general umbrella of experimental psychology.

As the name implies, experimental psychology is a field of psychology in which a researcher seeks to understand the world using the scientific method. Typically, this consists of observing the world in order to come up with some sort of theory or scientific proposition about it (future cite: Guest). A theory will often make causal claims about how the world works. For example, the theory of gravity claims that things fall toward the floor when you let go of them because of a force of gravity. Gravity is proposed as a property of the world that has regular and predictable patterns that can be observed and studied.

Unlike those working with physical objects, experimental psychologists attempt to make causal claims about the mind and behavior. They form hypotheses, test those hypotheses by having subjects perform a task, and then observe the subjects' behavior. After seeing how people's behavior changes in relation to some manipulation, they think critically about how what they observed in the experiment aligns with their initial ideas. Outcomes of experiments are evaluated in light of whether the data collected support or contradict those ideas. Unsurprisingly, there are numerous methods, instruments, and techniques used by experimental psychologists to test hypotheses about perception.

Some of the methods used by psychologists require presenting participants with a stimulus and then having them report on what they experienced. This type of design is referred to as self-report, since it requires the participant to explicitly tell the experimenter what they experienced. While self-report is widely used in experimental designs, it is far from perfect. Participants are not always good at describing what they experienced, and when asked about a sensitive topic they sometimes choose not to answer entirely accurately.

In contrast to self-report, other methods use instruments to record a subject's behavior as they perform a perceptual task. Examples include studies that track measurements that are typically beyond the participant's conscious control: it is quite difficult to deliberately control eye movements, heart rate, electrodermal activity, or any measure of brain activity. Experimental psychologists use these implicit measures for many reasons. For example, in studies seeking to understand aspects of musical emotion, it would be distracting to ask a participant every ten seconds how they feel while listening to music. Instead, it might be better to play them music and use an implicit measure such as functional magnetic resonance imaging (fMRI) to record their brain activity as they listen.

These types of implicit, physiological measures are quite common in the study of music perception and cognition. For that reason, let us next turn to an overview of neuroscientific research methods.

Neuroscientific Methods

Given the complexity of the human brain and behavior, it should come as no surprise that there are numerous fields that attempt to understand how the brain works. Just as there are subfields of psychology, there are subfields of neuroscience. Cognitive neuroscience, computational neuroscience, neurology, and neurobiology are among the main subfields that have contributed to our understanding of perception.

Human perception occurs as a result of information processing in several kinds of systems: sensory systems (visual, auditory, somatosensory, olfactory, and gustatory), attentional systems, memory systems (for both storage and retrieval), and motor systems, to name a few. Not only do neuroscientists study all of these systems, but they do so at every structural level of organization. Because no single research method can answer every question at every level of processing for every system, neuroscientists use a variety of methods, instruments, and techniques to study perception. If we set aside the role of the computer as a research tool for modeling how the brain works (a topic we deal with below), neuroscientific methods fall into two classes: invasive and noninvasive.

Invasive methods are research techniques that require the introduction of an “instrument” into a subject’s brain. These methods are invasive because the researchers use tools such as scalpels, probes, or electrodes that come into direct contact with the brain. There are several methods of this sort, and fortunately for human subjects they have become safer over time.

Surgery is the oldest of these methods, and modern neurosurgery on conscious patients has produced an enormous amount of knowledge about the functional organization of the brain. Since brain tissue itself has no pain receptors, it is possible to manipulate someone’s brain while they are awake with their skull opened. Here, the researcher can stimulate the brain with something like an electrode to evoke a response in the patient. This type of approach is not common in neuroscientific studies, but it can lead to direct understanding of which manipulations of the brain affect particular aspects of perception.

Lesion studies are another classic invasive neuroscientific method used to study the brain. A lesion is a “damaged” area of the brain resulting from trauma (“insult”) or disease. For example, an individual might experience damage to their brain after having a stroke. If the stroke damages an area of the brain that is responsible for something specific, such as speech production, then deficits in that ability can be linked to the damaged area.

Over a century ago, in order to do this type of study, researchers had to link behavior to brain lesions after the patient had died, using an autopsy. Strokes are not the only source of such lesions. One of the classic cases that led to our early understanding of the brain is that of Phineas Gage, who survived a traumatic work accident and then went on to experience severe changes in his behavior. You can read more about him in neuroscientist Antonio Damasio’s book Descartes’ Error.

Today, thanks to advances in neuroimaging, we often do not have to wait until someone dies in order to look at their brain. Techniques such as electroencephalography (EEG), fMRI, and transcranial magnetic stimulation (TMS) all allow observation of the brain while the subject is not only alive but awake and conscious. Since these methods do not require the patient to undergo any surgery, they are called noninvasive methods.

Almost all neuroimaging methods used today are noninvasive. The few notable exceptions, invasive techniques used with humans, depend on the goodwill of people who are candidates for surgery and volunteer to take part in psychological experiments during their stay in the hospital. Other neuroscientific methods used with animals, which require directly recording brain activity in a living animal, are not performed on humans for ethical reasons.

In addition to the noninvasive techniques mentioned above, other methods used today include conventional radiographs (X-rays), computerized tomography (CT), and magnetic resonance imaging (MRI). These methods capture very high quality pictures of the brain and its structure. Other methods used to identify functional areas of the brain include functional autoradiography, positron emission tomography (PET), and functional magnetic resonance imaging (fMRI). The advantage of these methods is that you can capture brain activity rather than just a static image. As we explore the neuroscience portion of the curriculum, we shall examine these methods and their underlying assumptions in greater detail.

Computer Modeling

The cognitive sciences depend on the computational sciences to help further our understanding of both the mind and behavior. The main reason for this is the belief among cognitive scientists that cognition requires computation. A lot of ink has been spilled over the metaphor between brains and computers. Generally speaking, the human brain can be considered a computer in the sense that it computes things. It is not a computer in the way that a laptop or PC is, with a motherboard, graphics card, and random access memory.^[Link here]

Since the brain does compute things, the computational sciences play a large role in cognitive science research. We discuss a few of these roles here. First, computers themselves are the subject of intense (sometimes controversial) research into the nature of computation. Second, in every field of cognitive science, computers are used as a research tool to model how information processing occurs in complex systems. The next section focuses on how computers are used to model complex systems.

What is a computer model? A computer model is a simulation of something. Typically a computer model is a simplified representation of an idea. For example, a computer might model the yearly temperature of Berlin, Germany. The number of factors that affect the weather in real-life Berlin is vast and complex, but a computer simulation might only take into account the mean annual temperature of the last few years. From this model, a computer scientist would be able to make an educated guess about the temperature on any given day in Berlin.
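
To make this concrete, here is a minimal sketch in Python of such a toy model. The years and temperatures are hypothetical values invented for illustration; the "model" is nothing more than a straight-line fit that extrapolates one year ahead.

```python
# A toy "computer model": fit a linear trend to a few years of
# hypothetical mean annual temperatures for Berlin and extrapolate.
import numpy as np

years = np.array([2019, 2020, 2021, 2022, 2023])        # hypothetical years
mean_temp_c = np.array([10.5, 10.9, 10.2, 11.0, 11.3])  # hypothetical mean temps (deg C)

# The "model" reduces Berlin's weather to a straight line: temp = slope * year + intercept.
slope, intercept = np.polyfit(years, mean_temp_c, deg=1)

prediction_2024 = slope * 2024 + intercept
print(f"Predicted mean temperature for 2024: {prediction_2024:.1f} deg C")
```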

While this is a simplistic example, computers can be used to simulate almost any process – tidal waves, weather patterns, economies, traffic jams, . . ., you name it. While it is correct to say that what you see on the computer is an imitation of some natural, social, economic, or other process, it would NOT be correct to say that what you see is a duplication of that natural, social, economic, or other process. To simulate is to imitate, not to duplicate. Such is the reason why you would not run for high ground upon seeing a simulation of a tidal wave.

With duplication, things are different. After all, a simulation of a tidal wave on your computer does not duplicate a tidal wave. To duplicate a wave is to create a “real” wave – even if it is only a “mini” or scale one created under controlled conditions. While a “mini” tidal wave may not make you run for high ground either, it can at least get you wet. Similarly, to clone a frog is to duplicate it – to create a “real” frog. To duplicate something is to reproduce its essential properties. Whenever we do this, an emulation of something has been created, not a simulation. To emulate is to duplicate, not to imitate.

One of the major controversies in cognitive science today is whether any existing computer models of human cognition go beyond mere simulation to something closer to emulation. Regardless of the correct answer to this question, computer models, whether simulations or emulations, are valuable research tools.

But what about perception (cognition, intelligence, or some other aspect of intelligent systems under study by cognitive scientists)? Does a computer model of a perceptual process (say vision) qualify as a simulation or an emulation? For several reasons, it is not obvious what the “correct” answer to this question is. First, if the computer metaphor is literally true, then a computer program that produces the relevant perceptual output would be an emulation of the perceptual process. Second, if the computer metaphor is merely a useful research strategy, then a computer program that produces the relevant perceptual output would only simulate the perceptual process, not emulate it. Third, to complicate matters further, whether the computer metaphor is true depends on the answer to these and other questions about the nature of computation: What does it mean to be a computer? Is your brain literally a computer or something fundamentally different? If your brain is a computer, is it a digital computer (like a Mac or a PC), or is it some other (nonclassical) type of computer? These questions highlight some of the controversies surrounding the widespread use of computers to model and to explain perception.

Designing a Research Study

When creating an empirical research study, it is important for scientists to spend a lot of time considering the design of the study so that the results are meaningful. Everything from stating the research question as precisely as possible, to choosing the variables to measure, the number of observations needed, and how to handle variables that might confound the results, needs to be taken seriously, or the results of the entire experiment could be compromised. Next, we introduce some terms that you will see throughout this course when learning about experimental design.

Sample Size and Statistical Power

In psychological research, sample size refers to the number of observations (cases, individuals, units) included in a selection of items to be studied. This is usually denoted N (for the study as a whole) or n (for subgroups within the study). Generally speaking, the more observations you have, the more confident you can be in your results. How you define that confidence depends on the statistical methods you choose when you analyze your data.

By far the most common way to do statistical analyses comes from the frequentist school of thinking. For frequentists, a larger sample size increases the probability that an experimental study will detect a real effect. The higher your sample size, the greater your statistical power; that said, you will need a very large sample to detect a small effect. Statistical power ranges from 0 to 1 (Cohen, 1992). A power of 1 would mean that if an effect exists, your experimental design is certain to detect it. Many psychological experiments are conducted with a power of .80, meaning that if a true effect exists they will detect it 80% of the time.
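
As an illustration, here is a minimal sketch in Python of a simulation-based power estimate, assuming a two-group design with a medium effect size (Cohen's d = 0.5), 64 participants per group, and an alpha level of .05. These numbers are chosen for illustration, not taken from the text.

```python
# Simulation-based estimate of statistical power for a two-group comparison,
# assuming a medium effect (Cohen's d = 0.5) and 64 participants per group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_per_group, effect_size, alpha = 64, 0.5, 0.05
n_simulations = 5000

significant = 0
for _ in range(n_simulations):
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    treatment = rng.normal(loc=effect_size, scale=1.0, size=n_per_group)
    _, p_value = stats.ttest_ind(treatment, control)
    significant += p_value < alpha

# With these settings the estimate should come out near the conventional .80.
print(f"Estimated power: {significant / n_simulations:.2f}")
```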

The other way to do statistical analyses comes from the Bayesian school of thinking. As with frequentist analyses, more data also helps a Bayesian data analysis. Unlike frequentists, Bayesians tend to be more interested in quantifying the uncertainty associated with a measurement than in deciding whether or not an effect exists. Bayesians are interested in the probability of your hypothesis given your data, rather than the other way around. To learn more about Bayesian data analysis, there are textbooks devoted to the topic.
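
As a minimal sketch of the Bayesian approach, the Python example below updates a flat prior on a proportion (say, the hypothetical proportion of listeners who prefer a consonant chord over a dissonant one) with some made-up data and reports the posterior mean and a credible interval. The scenario and numbers are purely illustrative.

```python
# Bayesian updating of a proportion: flat Beta(1, 1) prior, hypothetical data
# (14 of 20 listeners prefer the consonant chord), Beta posterior.
from scipy import stats

prior_a, prior_b = 1, 1
successes, trials = 14, 20  # hypothetical data

posterior = stats.beta(prior_a + successes, prior_b + (trials - successes))

print(f"Posterior mean: {posterior.mean():.2f}")
print(f"95% credible interval: {posterior.interval(0.95)}")
```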

Variables

The different aspects that can change within an experiment are called variables. Variables are what we measure in the world. Some variables, such as someone’s height or eye color, we can measure directly. Other variables that we are interested in, such as personality, are assumed to exist but have to be measured indirectly.

The extent to which a variable measures what we think it should measure is referred to as its validity. A good example of a measure with high validity is using a digital scale to measure the weight of your cat. An example of a measure with low validity might be trying to measure your cat’s intelligence by counting the number of mice it catches in a week. In the first example, the measurement maps very clearly onto the construct of interest: weight. In the second example, the construct of interest, intelligence, might well exist, but the number of mice caught could be confounded by other variables. For example, if your cat doesn’t live near a lot of mice, you might conclude that it’s not that bright.

The extent to which you are able to get the same measurement of your variable is referred to as its reliability. Returning to the example above, a digital scale is also a reliable measure of weight, since every time you weigh your cat it will give you the same number. Measuring your cat’s intelligence based on its mouse catching might also have low reliability: if one week your cat has eaten a lot of food or the weather is bad, it will not always catch a similar number of mice.
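
One common way to quantify reliability is a test-retest correlation: measure the same variable twice and correlate the two sets of scores. The sketch below uses hypothetical mouse-catching counts for five cats; the data are invented for illustration.

```python
# Test-retest reliability as a correlation between two measurement sessions.
import numpy as np
from scipy import stats

# Hypothetical "mice caught per week" for five cats, measured in two different weeks.
week_1 = np.array([3, 0, 5, 2, 1])
week_2 = np.array([1, 2, 4, 0, 3])

r, p_value = stats.pearsonr(week_1, week_2)
print(f"Test-retest correlation: r = {r:.2f}")  # a low r suggests low reliability
```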

Validity and reliability are highly related, but it is important to note how they differ. They become all the more important when considered in an experimental context.

Experimental Terminology

In experiments, we often have a variable that we are interested in studying. In the examples above, these were variables like weight and intelligence in cats. In music psychology, these might be questions about performance ability, musical memory capacity, or how surprising a chord is. The variable that we want to measure and study is called the dependent variable.

Within the context of an experiment, we often want to investigate possible causal relationships between variables. If we think a change in one variable will affect another, we might set up an experiment in which we can manipulate it. In this context, the variable that we manipulate is the independent variable.

Before setting up an experiment, it is important to make a prediction about what will happen. If we don't, we risk looking at the results afterward and constructing a story about how they came to be; people are very good at this. In order to keep track of what they think is going to happen in an experiment, scientists formulate hypotheses.

A hypothesis is a predicted outcome of an experiment. A hypothesis is often derived from a theory, which is an attempt to describe how something works.

In experimental designs, it is often important to have some sort of control. Within the context of an experiment, a control is an aspect of the experiment that is not being manipulated. Typically, when two ideas are being tested against each other, the control group acts as a baseline measure where no manipulation occurs. If changes occur in the group where the independent variable was manipulated but not in the control group where it was not, this is evidence to suggest you have found a causal effect!

The word control also has a meaning within statistical models. A control in a statistical model is a variable thought to also affect the dependent variable, but which is not of direct interest. For example, if you wanted to investigate the effect of a certain way of practicing piano on the errors children make in performance, you might want to control for age. Kids differ from one another in systematic ways, and older kids tend to perform better on the piano than younger kids. There are ways to account for this in a statistical model, and this is what is meant when the word control is used in a statistical context. If you do not do this, the results of your experiment might be better explained by something else; in other words, they might be confounded.
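
A minimal sketch of what this looks like in practice: the hypothetical regression below includes age as a covariate alongside the practice-method variable, so the coefficient for practice method reflects its association with errors after accounting for age. The variable names and data are invented for illustration.

```python
# "Controlling" for age: include it as a covariate in a regression predicting
# performance errors from practice method (hypothetical data and variable names).
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "practice_method": [0, 0, 0, 1, 1, 1, 0, 1],   # 0 = usual practice, 1 = new method
    "age":             [7, 9, 11, 7, 9, 11, 8, 10],
    "errors":          [12, 9, 6, 10, 7, 4, 10, 6],
})

# Because age is in the model, the practice_method coefficient reflects its
# association with errors after accounting for age.
model = smf.ols("errors ~ practice_method + age", data=data).fit()
print(model.summary())
```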

It is also important when designing experiments to randomize. Randomization is the best way to account for biases and confounds in your study. Random sampling ensures that each member of your population is equally likely to be chosen, so that when you get the results of your experiment you can be more confident that they generalize; random assignment to conditions helps ensure that the groups you compare do not differ systematically before the manipulation.
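
As a small illustration, the sketch below randomly assigns twenty hypothetical participants to a treatment or control condition; the participant IDs and group sizes are assumptions made for the example.

```python
# Random assignment of hypothetical participants to treatment and control groups.
import numpy as np

rng = np.random.default_rng(seed=42)
participants = [f"P{i:02d}" for i in range(1, 21)]  # hypothetical participant IDs

shuffled = rng.permutation(participants)
treatment_group = sorted(shuffled[:10])
control_group = sorted(shuffled[10:])

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```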

Theories and Logical Reasoning

Lastly, we will discuss some common terms related to scientific reasoning. Specifically, we want to discuss inductive and deductive reasoning.

Inductive reasoning occurs when a scientist observes a pattern in the world and then builds a theory based on those observations. From the theory, they are able to make certain predictions in the form of hypotheses. Most scientific reasoning starts from a point of induction, with some notable exceptions. With inductive reasoning, you start from specific examples and move to a larger theory. An example of this would be Charles Darwin seeing the similarities among many species of animals and then formulating his theory of evolution.

Unlike inductive reasoning, deductive reasoning starts with a theory and then tests hypotheses derived from it. By starting with a theory, you already have an idea of how the world works. Because you are able to state this ahead of time, there is a possibility that you might be proven wrong. This is important: if you have a theory about the world and there is no way to prove it wrong, some scientists would not even consider it a scientific idea!

Questions related to inductive and deductive reasoning are of the highest importance to how scientists know what they know. For more reading on this important topic, check out Explaining Psychology as Science.

Lastly, it is important to note that because the world is complex, scientists should be as exact as possible when describing their theories. One way scientists can do this is to formulate computational models of the world. The idea here, to borrow from Guest and Martin, is to ensure that “your ideas are able to run on someone else’s computer”. As noted by Farrell and Lewandowsky, without formulating some sort of computational model of a theory, a verbal description of the world can have multiple versions, all compatible with the verbal description.

A good theory accounts for most of the data, is testable, is not too restrictive, has parsimony, and is able to predict the outcome of future experiments; the best theories help answer ultimate questions (why questions) rather than just proximate questions (what questions). (David W. Martin, Doing Psychology Experiments)

Quiz

  1. Suppose you hypothesize that musical harmony is a culturally determined construct. How would you design a study to test this research question?

  2. What are some possible confounds with such a research design, and how could you eliminate as many confounds as possible given the design of your experiment?

References

Authors

Topics

  • experimental psychology
  • neuroscience
  • computer modeling

Contents