Learning Sciences Institute

Postdoctoral Training Fellowship Program on Rigorous Methods in the Learning Sciences

ExpERT Postdoctoral Training Program

 

Current Postdocs

Chris Hulleman | Joy Lesnick | Jason Luellen

 

 


Program Overview

The interdisciplinary educational research training program at Vanderbilt University will use multiple educational and research activities to train a sizable cadre of education scientists who are experts in conducting randomized field experiments of theory-based interventions and approaches aimed at enhancing student learning in educational settings. These activities include newly crafted graduate courses, extensive research experience with faculty who conduct randomized field trials, four summer workshops, monthly interdisciplinary lectures and colloquia, teaching experiences, internships, and conference attendance. Over the next five years, 35 predoctoral trainees will acquire expertise in planning, executing, and analyzing high quality randomized field trials of educational programs and other strategies that are firmly grounded in theoretical frameworks and supported by prior empirical evidence on the viability of the proposed intervention. Coupled with skills in the use of meta-analytic procedures, the accumulation of evidence from such studies will provide an additional basis for answering questions of what works for whom and under what circumstances. To enhance the caliber of theories guiding practice, the development of interventions, based on theories and research about how people learn in educational settings, is a particular focus of the training program. The training program’s ultimate aim is to develop a new breed of education scientists who are both committed and well-equipped to articulate models of effective educational practice that are rooted in principles of learning and high quality empirical evidence.

Organizational Placement and Structure
Recognizing the need for an interdisciplinary perspective to address the important educational problems facing our nation’s schools, Vanderbilt University recently created the Learning Sciences Institute (LSI). Vanderbilt’s explicit rationale for creating the LSI was to dissolve intellectual barriers among its Schools and departments, thus affording researchers from relevant disciplines (e.g., education, psychology, neuroscience, anthropology, engineering, and computer science) the opportunity to collaborate effectively on common problems of learning, achievement, and education. Given its University-wide organizational placement and its mission, the LSI provides the organizational home for the proposed IES-sponsored training program. Placed within the LSI, the proposed training program brings together over two dozen faculty, many with substantial national reputations, from across four departments within the Vanderbilt community. These faculty represent three program specialties in the Department of Psychology and Human Development (Cognitive Studies, Developmental Psychology, and Quantitative Methods and Evaluation) within the College of Education and Human Development, as well as three other core departments within the College (Teaching and Learning; Special Education; and Leadership, Policy and Organizations). Additional expertise in statistics, economics, advanced research methods, cognition, and neuroscience is available from departments across the University. The LSI serves as a liaison to these other disciplines.

Themes and Goals
Figure 1 presents a stylized depiction of the targeted interface of the three major themes embodied in the proposed training model: training in randomized field trials, training that occurs in educational settings, and training that is grounded in strong theories and principles about how optimal learning occurs. The emphasis for all trainees, regardless of their department of origin, is a shared knowledge base that is represented by the intersection of these three areas. Concerning the first theme, recent federal legislation has substantially raised the bar for all educational researchers interested in the effects of educational interventions and strategies in several notable ways. There is now a stated preference for the use of randomized field trials (RFTs) to estimate the effects of educational programs and strategies designed to improve student learning. Consequently, providing training in the planning, execution, and analysis of RFTs constitutes a dominant focus of the training program.

However, the skills associated with conducting randomized trials are not sufficient by themselves to solve educational problems. An essential feature of an RFT is the specification of an intervention in an educational setting that is well grounded in relevant theories and supported by prior research evidence. Within the perspective of evidence-based practice, there are numerous sources of testable hypotheses (interventions) about how to enhance learning or remove barriers to learning. Knowledge of the educational setting derived from contemporary educational theory and research reveals at least three generic approaches to improving learning: (1) systemic reforms whereby broad changes are introduced (e.g., school takeovers and the introduction of learning standards); (2) efforts to enhance the quality of teaching (e.g., professional development and preservice training); and (3) the development of new materials directed at learners. Educational investigators must also understand the context within which these efforts can be initiated in order to successfully design and implement high quality research. Finally, repeated assessments by the National Academy of Sciences (e.g., National Research Council, 1999, 2000) and other researchers (see Carver & Klahr, 2001) have concluded that cutting-edge theory and research from such fields as cognitive psychology and neuroscience hold substantial promise for understanding the mechanisms of how people learn. At the same time, studies of basic and higher order cognitive processes often are undertaken within laboratories, using materials and topics that are unlike those needed in educational settings. Consequently, in addition to enhancing the technical quality of research methods, some members of the next generation of researchers must “extend laboratory-derived knowledge to teaching and learning in complex, real world environments” (IES, 2004, p. 3). Furthermore, the complexity of pressing educational problems requires consideration of theories, evidence, and methodologies from multiple disciplines. By grounding predoctoral training in these three themes, our expectation is that graduates will enhance the pool of educational scientists who are well-equipped to meet this challenge, function effectively within interdisciplinary teams, and conduct research that is responsive to the major problems confronting education.

Interdisciplinarity
Figure 1 and its interlocking puzzle pieces not only highlight the overlap among the four academic departments within the College of Education and Human Development but also signal their individual perspectives on, and experiences with, educational problems. Research programs in the Department of Special Education are directed by pioneers of evidence-based practice (notably Professors Doug and Lynn Fuchs). Their collective record of accomplishments in conducting randomized field trials in educational settings provides a valuable fund of experience that can be drawn upon by other faculty and trainees. Professor Elliott, the newly appointed Dunn Professor and Director of the Center for Assessment and Intervention Research, also brings substantial expertise in the area of assessment and testing.

Programs of study and research in the Department of Psychology and Human Development (PHD) make two distinctive contributions to the interdisciplinary focus of the training grant. First, several faculty in the Cognitive Studies and Developmental Psychology programs are adopting and adapting theories developed from laboratory-based studies of basic and higher-order cognitive processes and testing them in actual educational settings (e.g., Professors Carr, Hoover-Dempsey, and Rittle-Johnson). Second, PHD faculty in the Program on Quantitative Methods and Evaluation (QME) provide expertise in fundamental and advanced statistical methods (e.g., Professors Cordray, Lipsey, and Steiger) and in field experimentation, quasi-experimental design, and program evaluation (Professors Bickman, Cordray, and Lipsey). Faculty in the Department of Teaching and Learning investigate models of learning and instruction (Professor Lehrer), as well as evidence-based practices in early childhood development (Professor Farran).

The Department of Leadership, Policy and Organizations (LPO) makes three distinct contributions to the interdisciplinary nature of the proposed training grant. First, LPO faculty examine the effects of broad-scale educational reforms (e.g., Professors Porter and Wong) and specific systemic reforms that involve changes in professional development programs and teachers’ pay (e.g., Professors Ballou, Desimone, and Porter). Second, to complement the statistical and methodological expertise in the QME program, faculty in LPO have specialized expertise in Hierarchical Linear Modeling (Professor Smith), multiple regression analysis and econometric modeling (Professor Ballou), sampling and survey design (Professor Berends), and program evaluation (Professor Desimone). Third, faculty in LPO add substantially to the collective interdisciplinary representation within the proposed IES training program. Additional disciplines that are represented include: Economics (Ballou), Sociology (Berends), Political Science (Wong), Educational Administration/Policy (Guthrie and Smrekar), Policy Analysis (Desimone), and Educational Theory and Policy (Smith).

Organizational Change and Institutionalization: Incrementalism
The proposed IES training program has deliberately focused on crossing the departmental boundaries of the four major departments within the Peabody College of Education and Human Development that have the clearest relevance to improving learning, education, and educational reform efforts. As the program becomes institutionalized through its University-wide placement within the LSI, it is expected that other departments across the University will participate as full partners. One major goal for this training program is to reinforce the mission of the LSI by contributing to the institutionalization of interdisciplinary research on educational problems at Vanderbilt. Adding a training function within the LSI creates another important mechanism by which faculty and graduate students from different disciplines and perspectives can work collaboratively. We anticipate that this program will serve as a catalyst for bringing together an ever-increasing number of scientists for the purpose of enhancing education through evidence-based practices, based on the best available educational, cognitive, neuroscience, and organizational theories and research about how to improve learning in educational settings.

Need for the Proposed IES Training Program
The proposed training program was developed based on: (1) the track record of students using RFTs in their dissertations; (2) an analysis of the full range of conceptual, statistical, and methodological skills needed to provide compelling answers to questions of what works for whom under what circumstances; and (3) an analysis of the strengths and weaknesses of current predoctoral training methods. Each of these perspectives is briefly described in the next few paragraphs, before the proposed training program itself is presented.

RFTs and Dissertations
Whitehurst (2003) reported that only 6% of research reported in AERA’s two premier journals utilized a randomized trial. Within the past three years, 48 dissertations have been completed in the four departments represented in this proposal; based on their abstracts, 8% used a randomized field trial. Counting dissertations that employed either an RFT or a quasi-experimental design (in a field setting), the rate jumps to 25%. The majority of the rest were based on qualitative methods (27%) and correlational methods (35%). Implications for training. Although it is unreasonable to expect that all educational research would entail an interest in answering causal questions, it appears that there is room for more emphasis on using randomized field trials within the Vanderbilt/Peabody context.

What Works for Whom Under What Circumstances? Knowledge and Skills
As stated in the quote from the RFA that was cited at the beginning of this proposal, answers to questions of what works for whom under what circumstances are causal questions. RFTs represent the most trustworthy vehicle for testing the causal effects of interventions. Underlying the question of what works is the need to develop a trustworthy knowledge base in order to achieve evidence-based practices in education (Whitehurst, 2002). The spirit of IES’s statement in the RFA has the backing of a number of prominent education researchers (e.g., Boruch, deMoya, & Snyder, 2001; Burkhardt & Schoenfeld, 2002; Coalition for Evidence-Based Policy, 2004; Cook, 2001; Slavin, 2002, 2004). On the other hand, these ideas have not been embraced by all educational researchers. Some have offered cautions (e.g., Berliner, 2002; Pellegrino & Goldman, 2002) but seem predisposed to give the ideas a chance to mature. Still others (e.g., Olson, 2002; St. Pierre, 2002) appear to reject the evidence-based perspective altogether. Taking into account the recommendations of those who are cautious, and being mindful of the damage that could be inflicted by basing all educational research on a single method, the proposed training program attempts to contextualize the scientific process (Berliner, 2002). So, what needs to be known? What skills beyond training in RFTs are required?

We agree that successful implementation and maintenance of randomization provides an internally valid basis for concluding that the cause (intervention) is uniquely responsible for the observed effect. Assuming sufficient statistical power (an aspect of statistical conclusion validity), the resulting unbiased estimate of effect is taken as evidence that the intervention “works.” More precisely, given the counterfactual model of causality underlying the use of high quality RFTs, the result is an unbiased estimate of the relative effects of an intervention on an outcome. From a strictly technical point of view, proper interpretation of this relative effect requires consideration of factors that are not directly controlled by randomization. The interrelated set of threats to validity (statistical, internal, construct, and external) presented by Shadish, Cook, and Campbell (2002) provides a useful framework for unpacking the statistical and methodological issues that require attention. By extension, their scheme illuminates the array of skills and knowledge needed to construct a body of knowledge for evidence-based practice in education.
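
To make the counterfactual logic concrete, the following minimal simulation sketch (ours, not part of the proposal; all numbers are hypothetical) treats each student as having a potential outcome under both conditions. Any single trial observes only one of the two, yet across repeated randomizations the difference in means centers on the true average effect, which is the sense in which the estimator is unbiased:

```python
# A minimal simulation sketch (ours, not part of the proposal) of the
# counterfactual model behind an RFT. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 200                        # students in one trial
true_effect = 5.0              # hypothetical average gain from the intervention

# Potential outcomes: every student has a score under both conditions,
# but any single trial observes only one of the two.
y_control = rng.normal(70, 10, n)
y_treated = y_control + true_effect

estimates = []
for _ in range(2000):                       # repeat the randomization many times
    assign = rng.permutation(n) < n // 2    # random assignment to condition
    diff = y_treated[assign].mean() - y_control[~assign].mean()
    estimates.append(diff)

# Across repeated randomizations, the difference in means centers on the
# true average effect even though each single estimate varies.
print(f"mean estimate: {np.mean(estimates):.2f}  (true effect: {true_effect})")
```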

Issues of construct validity are particularly important in assessing what works. In addition to the technical skills associated with assessing construct validity, researchers need in-depth knowledge of the theories and research underlying these constructs, of the educational context within which they are assessed (effects) or installed (causes), and practical knowledge about educational settings. In particular, evidence-based educational practices that deal with constructs associated with causes (e.g., feedback) and effects (e.g., learning) are of interest, rather than particular operationalizations of constructs (e.g., a standardized test score). Theoretical constructs are rooted in substantive areas (e.g., cognition and learning), requiring expertise beyond the specific mechanics of conducting a randomized field trial. Because cause or effect constructs can be represented by a multitude of operations or methods, some of which are better than others, substantive training is needed to make wise design choices.

Conceptually, educational interventions can vary in their causal strength and complexity, involving a single construct (e.g., class size) or a package of constructs (e.g., professional development). In practice, the fidelity with which interventions are implemented can vary across settings and time. The counterfactual model of causality embodied in the RFT paradigm adds to the complexity because the causal agent is really the difference between the treatment and control conditions (i.e., the relative strength of the intervention). This difference defines the what of what works. Not only does the intervention condition need to be fully described, but so does the counterfactual condition. The understanding and measurement of conventional and innovative educational processes, contexts, and practices are essential if researchers are to provide meaningful answers about what works and identify the implications of their research for educational practice.
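
As a hypothetical numeric illustration (ours; the gains below are invented), an RFT recovers the treatment-control contrast rather than the absolute effect of the intervention, so a strong “business as usual” counterfactual attenuates the estimate:

```python
# Hypothetical illustration: what an RFT estimates is the relative strength
# of the intervention over the counterfactual condition. All values invented.
import numpy as np

rng = np.random.default_rng(1)
n = 10000
baseline = rng.normal(70, 10, n)   # score with no instruction at all

gain_program = 8.0                 # invented gain from the new program
gain_usual = 5.0                   # invented gain from business-as-usual teaching

assign = rng.permutation(n) < n // 2
treated = baseline[assign] + gain_program
control = baseline[~assign] + gain_usual

# The trial recovers roughly 8 - 5 = 3, so fully describing the
# counterfactual condition is as important as describing the treatment.
print(f"estimated relative effect: {treated.mean() - control.mean():.2f}")
```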

Optimizing the likely statistical conclusion validity of an RFT can be undertaken only after the intervention and counterfactual conditions are articulated. The nature of the innovation will determine the units of assignment (students, students within classes, classes/teachers within schools, schools within districts, and so on). Judgments or evidence about the relative strength of the intervention set the stage for establishing expectations about the likely relative effects. For example, given the variances and covariances associated with clusters, subjects, and assessment intervals, sufficient and efficient sample sizes can be determined to assure that the RFT has adequate statistical power. It is critical that training provide the skills and resources for making these determinations. Adding “for whom and under what circumstances” to the question also moves the discussion to issues of generalizability, or external validity. An RFT provides an unbiased estimate only of the average relative effect of an intervention (Holland, 1986), unless random sampling and a factorial RFT are planned (thereby greatly expanding the size of the trial); recognizing this limitation, and determining whether the average effect is generalizable or applicable to subgroups, require the use of more sophisticated statistical models. Identifying the circumstances under which an intervention works requires some kind of non-statistical, conceptual framework for enumerating the range of applications that are possible.
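
To show how these pieces (unit of assignment, clustering, and expected relative effect) feed into a power calculation, here is a minimal sketch (ours, not the proposal’s; the effect size and intraclass correlation are hypothetical placeholders a planner would replace with prior evidence) using the standard design-effect approximation for a two-arm trial at 80% power and a two-sided alpha of .05:

```python
# A minimal planning sketch (ours, not the proposal's). The effect size and
# intraclass correlation (ICC) are hypothetical placeholders that a planner
# would replace with prior evidence from similar settings.
import math

def clusters_per_arm(delta, icc, cluster_size, z_alpha=1.96, z_power=0.84):
    """Approximate clusters per arm for a two-arm cluster-randomized trial.

    delta:        standardized relative effect (treatment vs. counterfactual)
    icc:          intraclass correlation among students within a cluster
    cluster_size: students per class or school
    Uses the textbook approximation n = 2 * (z_alpha + z_power)**2 / delta**2
    per arm for individual randomization, inflated by the design effect
    1 + (cluster_size - 1) * icc, then converted to whole clusters.
    """
    n_individual = 2 * (z_alpha + z_power) ** 2 / delta ** 2
    design_effect = 1 + (cluster_size - 1) * icc
    return math.ceil(n_individual * design_effect / cluster_size)

# Assigning whole schools demands many more students than assigning
# individual students for the same power and effect size:
print(clusters_per_arm(delta=0.25, icc=0.15, cluster_size=25))  # ~47 schools per arm
print(clusters_per_arm(delta=0.25, icc=0.0, cluster_size=1))    # ~251 students per arm
```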

Implications for training. This brief assessment suggests that trainees require substantial familiarity with educational theories, research, processes, and contexts if they are to contribute to answering questions of what works for whom under what circumstances. Thus, in addition to broad methodological and statistical training, trainees need grounding in both the context of education and the principles of learning.

Strengths and Weaknesses of Current Pre-doctoral Training
The many strengths of graduate training within the Peabody College of Education and Human Development are reflected in the high national ranking of the College as a whole and the high rankings of specific Departments (notably Special Education and Leadership, Policy and Organizations). Paradoxically, this success is partly due to tight disciplinary boundaries and the uniqueness of the theories, populations, and interventions that are studied by faculty in each department. These conditions make interdisciplinary efforts difficult, albeit not impossible. Disciplinary boundaries also affect the type of methodological and statistical training used to satisfy degree requirements. When students attempt to take courses in other disciplines or interdisciplinary courses, their prior training may be insufficient or rooted in a paradigm that makes it difficult for them to appreciate the value of the material. For advanced courses (e.g., quasi-experimental analysis and design), students often enter without shared prerequisite background skills and knowledge. On the other hand, courses on the structure, content, and context of teaching are often underappreciated (or avoided) by quantitatively oriented students because of their lack of precise theories and formulations.

Implications for training. In crafting the IES Training Program, a core set of statistical, methodological, and interdisciplinary education courses has been delineated. As described in the next section, the technical courses have been sequenced so that new skills and knowledge build upon prior courses. The technical and interdisciplinary education courses are linked so that examples and problems are mutually reinforcing. In addition to formal training, there also appears to be a need for a change in the scientific culture (Feuer, Towne, & Shavelson, 2002).


Current Students


 


Student Publications

Presentations:


ExpERT Lecture Series

Date: Tuesday, January 8, 2008
Time: 2:00 p.m.
Speaker: Geoffrey Borman, Professor, Educational Leadership and Policy Analysis, University of Wisconsin, Madison
Location: 105 Payne

Date: Tuesday, January 22, 2008
Time: 11:00 a.m.
Speaker: Bethany Rittle-Johnson, Assistant Professor, Psychology and Human Development, Peabody College, Vanderbilt University
Location: 223 Wyatt

Previous Lectures

Research Symposium | Howard Bloom | Grover J. Whitehurst | Margaret Burchinal
Jessaca Spybrook | Robert C. Granger, Ed.D. | Joy K. Lesnick | Jason K. Luellen



©2006 Vanderbilt University Learning Sciences Institute/ExpERT Program. All rights reserved.
 