BEES, BEETS AND BRETHREN:
THE DEVELOPMENT OF A RESEARCH MIND-SET
Lee R. Bartel
University of Toronto
Helen took the required research methods course. She understood the procedures for an observational study. She completed the small research assignment successfully. But will Helen now apply this methodology in her teaching? Will research questions emerge from what she sees and does? Will she seek answers to these questions through systematic inquiry? In short, will Helen become a researcher?
While planning my courses, I often reflect on the factors that may cause people to pursue research. Will a particular set of readings, lectures, or assignments transform my students into researchers? I reflect on my education and realize that research courses gave me indispensable knowledge about specific methods and techniques. But, I did not become a researcher because of the courses. I became a researcher because I acquired a researcher's mind-set. I developed a particular set of beliefs about the nature of phenomena. I acquired functional habits of systematic procedure. And, I found a foundation of values that influences my choice of research questions and procedures.
This mind-set did not develop suddenly; it was the product of years of experience. Three facets of experience contributed to this development: participating in the family beekeeping business, working in the laboratory of a beet-sugar refinery, and growing up in a Mennonite community.
In this chapter I describe typical and memorable situations in my experience that contributed to the beliefs, habits, and values that influence my research. I enunciate a research principle growing out of each experience and, in most cases, demonstrate how the principle has found application in my work. The list is not exhaustive but representative of the type and range of principles I apply in my research.
PRINCIPLE 1. COMPLEX PHENOMENA CAN BE ANALYZED AND UNDERSTOOD.
Bees everywhere! The air was full of buzzing insects. They drifted like a cloud toward the old barn. Who was leading the swarm? How had the destination been chosen? Who gave the signal to move? What motivated the decision? Various questions arose in my mind. The answers to some questions were known; others seemed unknowable. Some needed to be determined. My father headed for the barn to wait for the scouts leading the swarm. He watched intently, with eyes trained by 30 years' experience, as they arrived, settled on the wall, and began the march into a crack. Meanwhile, I studied the behaviour of the bees at the hives to determine the origin of the swarm. I identified three suspects and looked for the expected signs. The evidence seemed clear. I took the needed action. At the barn, my father spotted the queen bee and made an efficient capture. The swarm would now return to their hive and resume production.
Researchers believe that observable phenomena can be analyzed and understood. This is a central assumption of science. Without it, we would still view disease as punishment of displeased gods.
The first level of understanding is that which allows researchers to ask meaningful questions. Researchers may ask many questions, but only provide satisfactory answers to a few. Still, answering even a few basic questions allows the researcher to phrase more questions. This in turn may allow a practitioner to begin influencing or coping with the phenomenon.
Scientists do not understand every aspect of bee swarming. Many questions remain, and may never be answered. But, as a beekeeper myself, I understood enough about swarming to identify contributing factors and to alter the behaviour of the bees. Through my work with bees, I understood in a practical way how phenomena that seem unfathomable at first can be analyzed, understood, and controlled.
Human response to music is a complex phenomenon. The cognitive-affective response to music is an aspect of this response that most theorists acknowledge but few have found a way to study. Hargreaves and Colman (1981) analyzed verbal comments of listeners and found clear evidence of two primary dimensions in the response.
I questioned whether individual differences in response on these two dimensions could be measured and to what these differences might be related. I first developed a semantic differential instrument measuring these two dimensions of response (1988, 1992a). I was able to demonstrate that listeners do differ in their responses to music (1989, 1991, 1992b). Analysis and understanding of a phenomenon involves not only basic description. It involves determining the extent of relationship between changes in the phenomenon and the possible causes of those changes. In other words, it involves theory development. For these reasons, I selected potential contributors to changes in cognitive and affective response and looked for a relationship between these factors and listeners' response. I found differences in individual cognitive-affective response related to personality, experience with music, and style of music.
Just as I never reached a full understanding of all the factors contributing to bee swarming behaviours, I do not understand all aspects of a person's response to music. However, without acceptance of the principle that observable phenomena can be analyzed and understood to some extent, I would not have considered asking questions about musical response, let alone tried to answer these questions. My acceptance of this principle, basic to a research mind-set, was strengthened every time I experienced the mysteries of a swarm of bees returning to the hive as a result of something I had done.
PRINCIPLE 2. CONTEXT VARIABLES INFLUENCE BEHAVIOUR.
I had visited this group of hives nestled among the birch trees just a week ago to remove some surplus honey. Nothing seemed different at first. The bears hadn't knocked over any hives. The sun shone as brightly. The crickets chirped. But the behaviour of the bees was different. No purposeful flying over the tree-tops to the nearby field of clover. No constant contented hum. And as I put the first box of honeycombs on the truck, curious scout bees took honey samples. Within 30 minutes it seemed every bee on the site was trying to retrieve the honey I was removing. An important change had occurred in one ecological variable - the amount of available nectar had changed. The flowers had run out of nectar.
On the surface this is such a basic assumption it is hardly worth enunciating as a principle. But when faced with an educational problem, many novice researchers fail to identify context variables influencing the problem. To allow for control, or the elimination of confounding variables, researchers need to identify context variables and examine them as potential aspects of the problem under study.
In the early stages of a study of orchestral musicians' stress and performance anxiety, I met with a group of players from the Toronto Symphony Orchestra. We discussed sources of stress and coping strategies. The musicians quickly acknowledged that, although stress and anxiety are related to the music played or the act of playing, the performance context is a tremendous influence. The simple act of placing microphones to record the concert for public broadcast increases stress levels. Then again, performing a work in public concert may be less stressful than rehearsing the work for the critical ears of orchestra managers, principal players, and the conductor. Consequently, an important question in the survey of orchestral musicians asked about the effectiveness of coping strategies in public concerts, recordings, or competitions.
In the business of honey production, success is measured by the value of honey collected above the cost of producing it. Success is largely a matter of understanding, controlling, and reacting to a host of context variables such as temperature, humidity, sunshine, water, type of flowers, health of bees, space in the hive, and the survival motivation of bees. Many context variables are interactive but, nonetheless, identifiable. Because the combination of variables seems ever-changing, no cookbook approach works. Consequently, beekeeping is as much an art as a science. One effective method we used to track variables and their effects involved recording a daily journal of observations. When an unusual situation arose, we examined past journals for similar situations.
Success in education is not defined as simply as in honey production. Still, to the extent that educational success is definable, educators can identify contributing variables. As in beekeeping, a journal is invaluable in recording changes in context. Noted changes can include simple matters of class and seating arrangements or more complex matters such as type of tasks, prior student experience, teacher behaviour, order of tasks, and student responses. The act of journaling raises the teacher's awareness of the context of student learning. It is a very important step in developing a researcher's mind-set. By keeping a journal of the student learning context, teachers begin the process of research. This process, in essence, involves identifying problems, thinking about the problems, planning changes in the teaching-learning process, acting on plans, and reflecting on the results.
PRINCIPLE 3. TRUSTWORTHY RESULTS REQUIRE ACCURATE DATA AND CAREFUL INFERENCE.
It was the news we were hoping for! North Star Honey wanted to buy 40,000 lbs of honey. But, it had to be pure water-white clover honey. Would we have enough of the right honey? Out came the journal and calculator. We had located 275 hives of bees on clover fields. Last year we extracted just over 100,000 lbs of honey from 500 hives. The journal showed we had placed an average of 5.5 boxes of combs on each hive. But the 37 lbs per box had been exceptional. What did the boxes hold this year? We went to a typical location, randomly selected 50 boxes, extracted the honey, weighed it, and found an average weight of 31 lbs per box. The journal showed we now averaged 4.5 boxes on the 275 hives. But, if the weather held, we should get to at least 5.5 again. Therefore, accounting for a margin of error, we estimated that with an average product weight of 31 lbs per box, we would produce the honey required to meet the contract.
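The back-of-the-envelope arithmetic in the story can be sketched in a few lines of code. The numbers come from the anecdote; the function name is just for illustration:

```python
def estimate_crop(hives, boxes_per_hive, lbs_per_box):
    """Point estimate of total honey production in pounds."""
    return hives * boxes_per_hive * lbs_per_box

required = 40_000                            # lbs needed for the contract
optimistic = estimate_crop(275, 5.5, 31)     # if the season reaches 5.5 boxes
current = estimate_crop(275, 4.5, 31)        # at the 4.5 boxes already on hives

print(optimistic)  # 46887.5 -> comfortably above the contract
print(current)     # 38362.5 -> just short; the estimate hinges on the weather
```

The two figures make the reasoning explicit: at the current 4.5 boxes per hive the contract could not be met, so the decision rested on the weather holding long enough to reach last year's 5.5 boxes.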
One of the challenges in any research with a quantitative dimension is to reach trustworthy conclusions about the entire population of interest. Suppose you want to know what proportion of students in your choir have access to a piano at home. To find out, you simply ask students at a rehearsal. But, if you want to know how that proportion compares with the population in your school, you will need to ask all the students in your school. You might accomplish this quite easily with the cooperation of all the teachers. However, if you want to compare the proportion of pianos in the homes of music students versus nonmusic students in your province, the task becomes prohibitive. One solution is to take a random sample of schools to study and draw inferences about the whole province from the sample statistics. If these inferences are to be trustworthy, they must meet certain criteria. This is the challenge of quantification in research.
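The piano-access scenario can be made concrete with a small sketch, assuming a simple random sample and the usual normal approximation for a proportion. The counts here are hypothetical:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Sample proportion with a 95% normal-approximation confidence interval."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

# Suppose 120 of 300 randomly sampled students report a piano at home.
p, lo, hi = proportion_ci(120, 300)
print(f"{p:.2f} ({lo:.2f}, {hi:.2f})")  # 0.40 (0.34, 0.46)
```

The interval is the quantitative form of the "certain criteria" mentioned above: the inference about the province is trustworthy only to the stated margin, and only if the sample was genuinely random.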
One of the most recent attempts to describe aspects of music education in Canada was made by Cooper (1989). Cooper drew a substantial sample from the membership of the Canadian Music Educators Association (CMEA) and sent each teacher a questionnaire. My first observation about Cooper's study is that the CMEA includes only a portion of all music teachers in Canada. The response rate of about 35% raises further doubts about the validity of drawing inferences from the results about the whole population of music educators in Canada. One of the known differences between volunteers (those who voluntarily returned the questionnaire) and nonvolunteers (nonrespondents to the questionnaires) is education level. Cooper's results indicate that his respondents included a higher proportion of teachers with graduate degrees than most people familiar with music education in Canada would expect. Statistical procedures designed to make inferences from a random sample of the population assume that the whole sample has been tested or has responded. If a significant portion of the sample does not respond, the results lack validity. Cooper wisely asked readers to be cautious about generalization.
In a study I conducted with Patricia Shand (Bartel & Shand, forthcoming), we were interested in the proportion of Canadian to non-Canadian pieces of music recommended for study or performance in curriculum guides produced by ministries of education in Canada since 1980. We obtained all possible guides and then decided to classify all references to specific pieces of music. No inferential statistics were necessary since we classified the entire population of music references. The challenge in this study was not validity but reliability - accurate classification of all references. Our analysts occasionally missed references and, at times, could find no information on specific pieces. More than one person was needed to assure reliable classification. After analysis by one person, and checking by two more, we had confidence in the reliability of our results.
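The reliability problem described above, agreement between analysts classifying the same references, is commonly quantified with a chance-corrected agreement statistic such as Cohen's kappa. A minimal sketch, with hypothetical category labels and ratings:

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two analysts' classifications."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two analysts classify the same six music references (invented data):
a = ["Canadian", "Canadian", "Other", "Canadian", "Other", "Other"]
b = ["Canadian", "Other", "Other", "Canadian", "Other", "Other"]
print(round(cohen_kappa(a, b), 2))  # 0.67: good, but not perfect, agreement
```

A value well below 1.0 is exactly the signal that a second (or third) analyst should check the classifications, as was done in the study.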
In another study Patricia Shand and I conducted (Shand & Bartel, forthcoming), we decided that we would probably not be able to obtain information from all 807 school boards in Canada and, therefore, decided to select a proportional stratified sample of 25% of school boards. After several follow-up contacts, we obtained a response rate close to 94%. Such a response rate to a mailed questionnaire is exceptionally high. It is close enough to the whole sample that the results of statistical procedures very confidently indicate the population parameters on the questions of interest.
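Proportional stratified sampling of the kind described can be sketched as follows. The provinces and board counts here are hypothetical, not the actual 807 boards:

```python
import random

def stratified_sample(boards_by_province, fraction, seed=0):
    """Draw a proportional stratified sample: the same fraction from each stratum."""
    rng = random.Random(seed)
    sample = []
    for province, boards in boards_by_province.items():
        k = round(len(boards) * fraction)
        sample.extend(rng.sample(boards, k))
    return sample

# Hypothetical strata, each province a list of board identifiers.
boards = {"ON": [f"ON-{i}" for i in range(80)],
          "QC": [f"QC-{i}" for i in range(60)],
          "MB": [f"MB-{i}" for i in range(20)]}
picked = stratified_sample(boards, 0.25)
print(len(picked))  # 40 -> 20 from ON, 15 from QC, 5 from MB
```

Because each stratum contributes in proportion to its size, regional representativeness is built into the sample rather than left to chance.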
To have confidence in quantitative results, the individual data must be accurate, and inferential statistics must be used appropriately. The weight of responsibility felt by educational researchers should be no less than when financial loss or gain is at stake for the honey producer estimating his potential to meet a contract. In both cases, accurate data and careful inferences lead to trustworthy results.
PRINCIPLE 4. INDIVIDUAL DIFFERENCES EXIST, BUT GENERALIZABLE CHARACTERISTICS CAN BE IDENTIFIED.
It was a typical spring evening when we started to release the bees just arrived from Alabama. The bees from Texas were already in their hives. Releasing them had resulted in the expected total of a dozen stings. So, we were prepared for stings, but not for what hit us that evening. The wire-mesh packages were opened one at a time. With one: 3 stings. With another: 6 stings. Another: 1 sting. I was averaging 3 stings per package. After 30 packages, I was sore indeed. There was no doubt in my mind that Alabama bees differed from Texas bees in their aggression level. Although many variables could have contributed to my experience that evening, differences continued to exist all summer. Some hives in the Texas group were also aggressive. But, when I worked with Texas bees on the same pleasant summer day I never got the number of stings I received from Alabama bees. And believe me, the difference of a few well-placed stings is highly significant.
A current debate in educational research focuses on the type of questions considered worthy of effort and the manner in which these questions are answered. Labels applied to the two sides in the debate, although an over-simplification, include qualitative vs. quantitative, positivistic vs. post-positivistic, and context-free vs. context-specific. Quantitative researchers are frequently accused of ignoring individual differences in the quest for common characteristics. Qualitative researchers argue that, because individual differences exist, generalizations are useless, if not harmful. The quantitative researchers maintain that qualitative research may contribute useful insights on an instance of a phenomenon, but knowledge about a specific instance is useless unless some insight can inform our understandings or actions in another instance (generalization, in other words).
My experience with Alabama bees made the need for a generalization about that strain of bees painfully obvious. The next time I worked with Alabama bees I made sure I wore elbow-length gloves, although gloves were not needed with Texas bees. Did my generalization negate an acceptance of individual differences among Alabama bees? No. It was merely an inference from reliable (certainly empirical) data. However, most of my relation to those bees continued to be premised on the understanding that differences would exist from hive to hive in feed requirement, reproduction rates, disease susceptibility, honey gathering potential, swarming tendencies, and so on. Understanding the generalization in this case allowed me to address individual differences more thoroughly.

I recently conducted a study (Bartel, forthcoming) of cultural differences in cognitive-affective response to music and self-perception of musical ability. I found statistically significant differences among cultural groups in self-perception. Knowledge about such antecedent cultural differences contributes to the creation of music experiences that allow all students to reach their potential. Such knowledge results from generalization based on representative samples. Does a conclusion of this type negate acknowledgement of individual differences? Absolutely not! It simply adds additional information on which teachers can draw to construct educational experiences. To be an effective researcher in music education, I believe both realities must be acknowledged: individual differences exist, but generalizable characteristics can be identified.
PRINCIPLE 5. RESULTS FROM A REPRESENTATIVE SAMPLE CAN BE GENERALIZED.
My first responsibility at the sugar refinery was sample gathering. Each hour I made a trip around the refinery and picked up samples representing stages in the process from the initial cooking of beet pulp to the spin drying of sugar crystals. At some stages such as the cooking of pulp, the substances were relatively homogeneous because of constant mechanical mixing. There I would only need to take a single substantial sample and test it. At other points the products of several dehydration vats would be combined and, because of fairly high viscosity, the syrups would not homogenize completely. Consequently, the operator would take frequent small samples and combine them in a container. I would take this compound sample, mix it thoroughly, and compute one sugar purity test. The results of this test were then generalized to represent the contents of the whole vat.
Emphasis here is on the representativeness of the sample. The ideal sample is a microcosm accurately reflecting the whole population. Random sampling from all units under study is the typical method employed. Practically, in educational situations we rarely use random sampling because it is almost impossible to gain access to the whole population of interest. If nothing is known of the variation in the population, simple random sampling is indispensable. But if patterns of variation are known, which is usually the case in education, purposive sampling can provide representativeness: the sample adequately represents the population. Researchers must, however, demonstrate this representativeness by supplying data to establish the population validity of the sample. As the characteristics of the sample are described, readers of the research report can decide whether these characteristics are like those of the group to which the results are to be applied.
In our study of the administration of music programs (Shand & Bartel, forthcoming), we were able to use random sampling. We had the names of all school boards in Canada. All were accessible by mail; and all were equally likely to respond. But, to assure regional representativeness, we stratified the sample by province. The results of our study were generalizable with the confidence intervals specified because of this representativeness.
In my current study of self-perception of musical ability, I could not obtain a random sample. I selected science, mathematics, and English classes in an attempt to obtain subjects typical of all students in the schools selected. I chose schools in various regions of metropolitan Toronto to obtain representation from a typical range of suburban students. Each school can be viewed as a sample. Comparison among these samples sheds light on population validity questions. Specific questions on the instrument were designed to provide data needed to establish population validity. To the extent that groups of students fit the characteristics of my sample, the results will be generalizable.
PRINCIPLE 6. VARIABLES NOT DIRECTLY OBSERVABLE MAY BE OPERATIONALIZED WITH CLOSELY RELATED INDICATORS.
The first stop on my sample gathering route was the 100 ft. vat in which the beet pulp was cooked. It was possibly also the most important stop on my route because micro-organisms in the pulp multiplied rapidly at certain temperatures and quickly altered the sugar yield of the beets. Although the micro-organisms could have been observed directly, this would have been difficult and time consuming. An easier way to track micro-organism action in the beet pulp was to measure changes in the pH level. Consequently, every hour I walked down the incline of the catwalk over the vat, scooped up a bucket of cooking pulp and took it to the lab to test the pH level. Careful tracking of pH levels allowed me to prescribe formaldehyde treatments at appropriate times to assure that the factory would produce sugar.
Basic to all research is the analysis of concepts related to the questions motivating the research. The process of conceptual analysis involves precise definition of what we mean by particular terms. For example, most people have a concept of musical talent but it is usually not clearly defined. A researcher might define talent with examples of talented people, specific abilities of such people, examples of untalented behaviour, and so on. The researcher may also rename the defined form of the concept (now a construct) from talent to aptitude. Level of musical aptitude varies from person to person and so the researcher may want to measure this variable. To operationalize the variable, the researcher decides what specific questions or tasks (indicators) will be used to measure musical aptitude. Because musical aptitude is not an observable quality like weight or height, an indicator (or multiple indicators) must be selected by the researcher. To be valid, these indicators must be closely related to the variable as defined for the research.
I am currently on a team of researchers supervising a study evaluating an adjunctive musical attention training program with head injured adolescents at Hugh MacMillan Rehabilitation Centre in Toronto. We all have a concept of attention since we were instructed to pay attention as children. But, what do we really mean by paying attention? In this study, we adopted a cognitive-psychological model of attention that identifies and defines specific levels of attention. Since attention is something that happens inside our head, we needed to select indicators of attention. For this research we selected measurable musical tasks involving attention. One of the musical tasks involves the subject hearing a piece of music that features a recurring musical motif. Each time the motif occurs, the subject must play a specified note on an electronic keyboard. Success at this task is taken as an indicator of simple attention. Is playing the key attention? No. But doing so each time is an indication that the subject is attending to the music. Playing the key each time is an indicator closely related to the unobservable variable.
Understanding this fundamental principle of research is essential to the mind-set of a researcher. Even though a variable is not directly observable, a researcher does not dismiss the phenomenon from serious study. A researcher must, of course, be able to define concepts and select the best indicators of variables.
PRINCIPLE 7. THE RISK OF ERROR IN RESULTS AND INTERPRETATION MUST BE RECOGNIZED AND MINIMIZED BY THE RESEARCHER.
SHUT DOWN ALL SYSTEMS! The message urgently sounded through the factory while the fire alarm signalled trouble at the pulp dryer. My first reaction was fear and a sharp pang of guilt. Had I failed? Had I provided an incorrect reading? The pulp dryer took the wet beet pulp after the sugar was removed and processed it into dry cattle feed. My task was to take pulp samples provided by the heat controller, test for moisture content, and supply the readings so that flame controls could be set. But, the problem was that it took 30 minutes for the pulp to pass through the gas flame-heated dryer on a conveyor belt. The moisture reading of the finished pulp was used to change the dryer temperature. If I made an error indicating that the pulp was wetter than it actually was, the controller would increase the heat beyond the required point and risk setting the pulp on fire. The controllers would try to assess the moisture content with their hands. Some were very good at it, while others relied heavily on the periodic measurements by the laboratory. This was the one test where I could not accept being right only nineteen times out of twenty!
In an educational setting, scientific assessments or research results are, fortunately, not required for hour-by-hour or day-to-day decision making. Fires do not break out if a researcher's finding is not accurate. Yet, at a fundamental level, research findings can have deep and far-reaching effects on educational practice. The danger exists that practitioners may attempt to apply reported research results in settings for which they are not valid if such a limitation is not made explicit by the researcher. Administrators or politicians may make sweeping policy changes with lasting effect based on inadequate, inaccurate, or misinterpreted research findings.
A common error is to infer cause from correlation and then to address the assumed cause with policy change. This may be partly responsible for the current changes in Ontario schools to eliminate academic level streaming in grade nine and ten. A government report several years ago identified a correlation between drop-out rate and enrolment in the vocational track in high school. Flawed reasoning led to the conclusion that enrolment in the vocational level caused students to drop out. But perhaps students who tend to drop out enrol in the vocational track. These students would tend to drop out in higher numbers than other students regardless of the track in which they enrolled. There may be other good reasons to eliminate streaming, but the correlation between drop-out rate and vocational stream is not a legitimate basis for the policy. Researchers responsible for such findings must make the implication and meaning explicitly clear to all concerned.
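The flaw in inferring cause from this correlation can be demonstrated with a toy simulation: a single latent trait drives both vocational enrolment and dropping out, so the two correlate even though neither causes the other. All numbers here are invented for illustration:

```python
import random

rng = random.Random(1)
students = []
for _ in range(10_000):
    propensity = rng.random()                 # latent, unobserved trait
    vocational = rng.random() < propensity    # high-propensity students enrol more
    dropout = rng.random() < propensity       # ...and drop out more, in any track
    students.append((vocational, dropout))

def dropout_rate(in_vocational):
    group = [d for v, d in students if v == in_vocational]
    return sum(group) / len(group)

# Vocational students drop out at roughly double the rate of the others,
# even though the track itself has no causal effect in this model.
print(f"dropout rate, vocational: {dropout_rate(True):.2f}")
print(f"dropout rate, other:      {dropout_rate(False):.2f}")
```

Policy that eliminates the vocational track in this model would change nothing, because the track was never the cause; that is precisely the error the government report invited.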
I recently conducted research into differences in music response and self-perception of musical ability on the basis of culture (Bartel, forthcoming). I found differences between cultural groups that might be simplistically interpreted as inherent racial differences. Since the differences between cultural groups include such factors as attitudes toward music, socioeconomic level, and type of music listened to, any conclusion that the observed difference is inherently racial is unwarranted. To explore these more specific factors involved in differences among cultural groups, I am currently conducting a study that examines the relationship of various social factors to musical aptitude and self-perception of ability. I must assure that potential risks for misinterpretation and misapplication are minimized by identifying alternative explanations for conclusions and then exploring those alternatives. That is the essence of an on-going research agenda. That is the ethical responsibility of a researcher.
PRINCIPLE 8. RELIABILITY AND VALIDITY OF RESULTS MUST ALWAYS BE QUESTIONED.
The production area supervisor stared intently at the lab results posted in the window, raised his eyebrows in surprise, and quickly returned to his post to adjust the numerous important-looking dials and knobs. He believed the results accurately represented the state of the sugar refining process. After all, were the results not valid because they were based on real samples gathered from the very vats he controlled? Was the equipment not scientifically accurate and consequently reliable? Little did he know that the night-shift chemist responsible for his area had become a master at extrapolating from earlier results. He would analyze one sample at the beginning of the shift to make sure the previous shift had tested the samples, prepare a set of numbers to be posted hourly by an assistant, go to sleep, and do one sample before the end of the shift so that the next chemist would see no inexplicable change in results. The unsuspecting production supervisor continued to adjust controls. The results looked valid but were not based on the phenomenon they were supposed to describe. And even when the tests were done, any carelessness in cleaning equipment could have resulted in unreliable sugar purity results. It was no surprise when I occasionally saw sugar a shade of grey. I knew the supervisor had been responding to invalid or unreliable data.
Rarely, if ever, will educational researchers actually fudge results like the unethical lab technician. However, educational research findings do not always have construct validity: what a researcher claims to be describing is not always what the data actually describe. A researcher may create an instrument intended to measure musical aptitude when, in fact, it may be assessing culture-bound learning in music. Assumptions about aptitude in such cases are as invalid as the extrapolated numbers of the lab technician.
Ability to criticize research findings should be developed in all members of the education profession. Interpretation and application of research is often primarily dependent on common sense. Hence, novice critics of educational research tend to focus on the interpretation of results. Much more crucial, however, is careful criticism of the conceptual framework and the design of a study. Flaws in these two aspects invalidate results. A teacher may read a study, accept an interpretation of the results, and apply it in a specific educational context. But, if these results emerge from a study with serious design errors, faith in these results is as unfounded as that shown by the sugar refinery supervisor. A basic principle of research criticism is to question the validity and reliability of all reported research conclusions.
PRINCIPLE 9. IMPORTANT PROCEDURAL DECISIONS ARE BEST MADE THROUGH CONSULTATION AND CONSENSUS.
Mr. Friesen, our neighbour on the farm down the road and the elected representative to the inter-community committee, presented the plans in detail to the meeting. The evening gathering in our Mennonite fellowship had been called to discuss the proposed plans for a nursing home expansion. The cost was to be shared by the 8 groups in the area and the work was to be done by volunteers next summer. Brother Dueck, one of the poorer farmers, raised a concern about the amount of money each would be expected to contribute and was assured that each could contribute as he was able. Mainly the discussion focused on details of procedure - who would draw plans, who would raise money, who would coordinate the building, how would admission decisions be made, and so on. When agreement seemed to be reached and the vote practically a foregone conclusion, Mr. Friesen called for a vote of support. The ones who would pay for the cost, do the work, and eventually use the facility agreed by consensus to give the project their support.
Most researchers take their first research steps as graduate students under the watchful eyes of a committee of advisors. Ideas are discussed; proposals are submitted, criticized, and revised; results are discussed; and the final report is edited. However, when the student graduates and returns to teaching, the support network for research often disappears. Because the music education faculties of most Canadian universities are small, and because universities are often separated by considerable distances, teachers often find themselves isolated from sources and systems of support. Researchers need colleagues who can offer criticism, advice, and encouragement.
My cultural background led me to an appreciation and acceptance of the role of consultation and consensus. My first research effort (1984) was not only supervised by a committee, it drew heavily on consultation and consensus for substance. I determined to identify criteria for the evaluation of junior high school guitar programs in Manitoba. First, I interviewed eleven experts to determine their views on what ought to characterize a good guitar program. I then analyzed these interview comments, created a list of possible criteria for a good guitar program, and had 80% of guitar teachers in junior high schools in Manitoba rate the importance of each. The result was a set of criteria rated as important by both experts and practitioners.
One of my current studies demands consultation and has as its central purpose the development of consensus: a Delphi study of faculty and student views on the evaluation of teaching. In my opinion, the Delphi method is an under-used research technique in education.
One of my motivations for the establishment of the Canadian Music Education Research Centre (CMERC) in 1989 was the conviction that there needed to be greater opportunity for collegial consultation, criticism of research proposals, and collaboration in research efforts. An example of this procedure was the Shand & Bartel study (forthcoming) of school board music administration policy. The research questions and proposed procedures were circulated to the fourteen research associates of CMERC for criticism. Following this stage, the proposed questionnaire was circulated for comment. Some associates offered extensive comments, others only brief encouragement, but a sense of consensus and joint purpose was achieved. The conduct of the study was further facilitated by colleagues, especially Raymond Ringuette at Laval University.
Valuable insight and criticism of research questions and procedures can also be obtained at conference presentations. I recently presented an overview of my research in self-perception of musical ability to a conference of the Ontario Music Educators' Association. In the same session I explained my plans for the next stage of research. I received important criticism and challenges that proved useful in the development of the study.
In my experience, participation in research teams has been a productive means of learning from others and contributing to the development of studies that stretch the boundaries of present knowledge. In the study of an attention training program for head-injured adolescents at the Hugh MacMillan Rehabilitation Centre, I am a partner on a team consisting of two neuropsychologists, two music therapists, a composer, and a computer programmer. The acknowledgement of each other's special insight and contribution enriches the study and our professional lives.
Although the process of consultation and consensus development does not guarantee flawless research, it does contribute to excellence. In addition, it adds a human dimension to the research process that is worthwhile in itself.
PRINCIPLE 10: SHARED EFFORT MULTIPLIES EFFECT.
The space around our Mennonite meeting place had become too small. The cemetery was crowded, more and more vehicles needed to be parked, and space was needed for children to play. So, Saturday was a community workday. When my father, brother, and I arrived, the borders of the space to be cleared were being decided. Soon the sound of axes and saws filled the air. The thirty men and children worked in coordinated effort. Trees came down and were cut up, logs were stacked, brush was cleared, and a bonfire added a festive air. By early afternoon, the space was ready.
This principle hardly needs elaboration. When responsibilities for a job are shared and each person contributes in the area of greatest skill, the job gets done efficiently and with relative ease. Most research projects require a variety of tasks, e.g., conceptualization, planning, organization, data gathering, data management, analysis, writing, and editing. Most people are more skilled at some of these tasks than others. A research team that contributes strengths in each area should progress effectively and efficiently.
PRINCIPLE 11: IMPORTANT PROJECTS DESERVE RIGOR AND HARD WORK.
The Red River was rising fast. It was already over the banks and within two days would flood the town of Morris. An urgent call for volunteers came from the Mennonite Disaster Service. Within hours I stood holding bags while a high school friend filled them with sand. Half the day we filled bags as fast as possible. My back hurt. My feet were cold. My hands were sore. We started stacking the bags around the hospital we were defending. A double row lengthwise, one row across, another row lengthwise... The need for careful placement was evident to all volunteers. One weak spot in the dam would negate the day's work.
Research criticisms appear regularly in The Bulletin of the Council for Research in Music Education. Most negative criticism does not focus on fundamental errors of design or analysis due to lack of knowledge. Instead, a lack of rigor and of simple hard work accounts for such common research flaws as weak literature reviews, inadequate sample sizes, the omission of pilot tests and of second pilot tests after major revision, and the failure to calculate inter-judge reliabilities. Why do researchers fail to be rigorous? Probably because of pressures to meet degree deadlines, too little time due to a full-time teaching assignment, or an inadequate understanding of excellence in research. The researcher may fail to realize how crucial each step in a study can be to its ultimate value and, like a careless volunteer, sloppily places a sandbag so that the dike erodes and the hospital floods.
Two types of research rigor can be identified. Careful attention to the design and conduct of research is called macro-rigor. It is at this level that the big errors occur - the errors mentioned, for example, in the Bulletin of the CRME critiques. There is also micro-rigor: attention to the smallest detail. Errors at this level can change results significantly but cannot be detected by the reader. The researcher must take the time and make the effort to assure that each number is correct in the data set, that each instance has been classified correctly, that each comment has been taken into account. One can become obsessive in one's search for errors, but every reasonable effort must be made to assure data integrity.
With each study during the past nine years, I have had experiences that reinforce the need to examine data sets carefully. Errors detected have included misclassification of items, incorrect entry of numbers into databases, mistakes in calculations on test papers, computer skips of data in transformation, incorrect transcription of notes, omissions in analysis, and copying errors. In most cases, I can give full credit for the mistakes to my paid assistants. But, unfortunately, detection of errors has usually been the responsibility of my co-researchers and myself. I have been particularly thankful for the meticulous care my colleague Patricia Shand has demonstrated on our joint projects. She has served as a superb example to me of micro-rigor and has reinforced the truth of Principle 10.
PRINCIPLE 12: NO DECISIONS ARE VALUE FREE.
The provincial entomologist and university scientists had established the cause of death: grasshopper poison injected into every hive. The police had checked the area for evidence and found footprints. They could identify a suspect. The honey was found to be contaminated and was condemned. All further crop potential was gone. Now a sombre group of 18 men and boys cleaned out the 88 hives, buried the bees, and stacked the hundreds of boxes on the 7 trucks. The men were friends of my father. All were Mennonite brethren. All were beekeepers who understood the tremendous loss, obviously caused maliciously. All suspected a jealous non-Mennonite beekeeper whose bees, only 300 meters away, were alive and well. But not one urged my father to agree to the police request to press charges. Retribution and legal justice were not the Mennonite way in situations like this. "Love your enemies." "Do not repay evil with evil but repay evil with good." Values and principles are important and must be followed practically in life. Decisions must be based consciously on an examined value system.
I might not have made the decision my father made at the time, but he made it on the basis of a thorough commitment to a particular set of values. It was an outstanding example to me of faithfulness to a conscious value system. Each person operates within a value system, although it may not always be conscious. Postpositivist theorists emphasize that no decisions are value free and, consequently, that a researcher should acknowledge the value assumptions that influence the research decisions that are made. These decisions are as basic as what research questions are asked and how answers are sought.
As I examine my research efforts over the past few years, I see my basic value system reflected in the choice of research topics. A more challenging exercise is to examine the methods used in those studies for adherence to values I would publicly espouse. In enunciating principles of research as I have done here, I have at least begun the process.
I have described incidents from three aspects of my experience that contribute to a research mind-set. My work with bees was not overtly scientific. I learned a beekeeping practice through day-to-day observation and work under the direction of my father - an expert beekeeper. His approach to beekeeping featured attitudes, habits, and systems that contributed to an understanding of research method in practice. What I learned simply as practical method related to my understanding of research theory.
My work in the control laboratory of the sugar refinery was overtly scientific. What I learned about scientific method in science classes found practical application in functional science. What had earlier appeared to be academic assertions and teacherly injunctions about legitimate procedure had immediate and important effect on efficiency and production. What I learned as theoretical scientific method related to the practice of research.
Practical knowledge and scientific theory are essential aspects of a researcher's mind-set. Both find application in a value context. That base of values should be examined and clarified by the researcher.
Helen may learn to understand research methods theoretically. She may supply all the right answers on a final examination. She may even gain practical experience with one research method. But if Helen does not develop an integrated understanding of theoretical and practical research principles, she may continue to be a one-method researcher. To utilize effectively the method most appropriate to the question of interest, Helen must develop a research mind-set. She must integrate methodological theory, practical ability, and value-consciousness.
REFERENCES
Bartel, L. R., & Shand, P. (forthcoming). Canadian music in the school curriculum: Illusion or reality. In T. J. McGee (Ed.), Taking a Stand: Essays in Honour of John Beckwith. Toronto: University of Toronto Press.
Bartel, L. R. (forthcoming). Cognitive-affective response to music: An exploration of cultural differences. Canadian Music Educator.
Bartel, L. R. (1992a). The development of the Cognitive-Affective Response Test - Music. Psychomusicology, 11/1.
Bartel, L. R. (1992b). The effect of preparatory set on musical response in college students. Journal of Research in Music Education, 40/1, Spring.
Bartel, L. R. (1991). A study of relationships among listener characteristics and the cognitive-affective response to music. Canadian Music Educator: Research Edition, Special Supplement, Thirteenth International Research Seminar in Music Education, 33.
Bartel, L. R. (1989). A study of differences between musicians and nonmusicians in Manitoba colleges. Canadian Journal of Research in Music Education, 31/1.
Bartel, L. R. (1988). A study of the cognitive-affective response to music. Unpublished doctoral dissertation, University of Illinois at Urbana-Champaign.
Bartel, L. R. (1984). Identification of criteria for the evaluation of junior high guitar programs in Manitoba. Unpublished master's thesis, University of Manitoba.
Cooper, T. (1989). School music teaching in Canada. Canadian Journal of Research in Music Education, 31/1.
Hargreaves, D. J., & Colman, A. M. (1981). The dimensions of aesthetic reactions to music. Psychology of Music, 9/1.
Shand, P., & Bartel, L. R. (forthcoming). The administration of music programs in Canadian schools. Canadian Music Educator.