15 Role Expansion in the 21st Century

Key takeaways for this chapter…

  • School psychologists might expand their roles to enhance students’ mental health
  • Special education students are entitled to a free appropriate public education (FAPE) that sometimes concerns social-emotional needs, providing one point of potential role expansion
  • Regarding IEPs, school psychologists might encourage routine use of social-emotional instruments for progress monitoring and tap Reliable Change Indexes (RCIs) to verify progress numerically
  • School psychologists might also promote mental health via universal screening coupled with Multi-tiered Systems of Support (MTSS)
  • Various published screening instruments, easily understood by school psychologists, are available to help identify at-risk students
  • Further role expansion might involve either direct provision of evidence-based counseling and psychotherapy services or sharing assessment results with others who provide these services

Cases/vignettes in this chapter include…

  • Emily, role expansion embracing progress monitoring
  • Kimani, role expansion embracing mental health screening
  • Flora, role expansion embracing matching assessment results to treatment
  • Kayla, a student with special needs and her IEP
  • Lex, proper planning based on proper understanding

 

Emily Ekberg completed her school psychology training 10 years ago. In the ensuing years, she has become comfortable in her practice, which involves services delivered in one elementary school and one middle school. Visiting her hometown on a school break, Emily spoke with her aunt, someone she had not seen for several years. The conversation turned to work. Emily’s aunt, aware of her niece’s profession, said, “It must be so fulfilling to work in a job where you help children.” Emily’s automatic response was, “Yes, it really is.” But her answer left Emily feeling disquieted. Later that evening, when alone, Emily reflected on her aunt’s comment and on her own response. Did her job with children actually involve helping? That certainly was her intention when selecting school psychology as her life’s work. Similarly, she implicitly thought of herself as a “helping” professional, someone with knowledge and compassion situated in a setting where helping others is possible. In light of this line of thinking, Emily committed to a course of professional self-analysis.

Back at work, Emily pulled her calendar from the preceding months. Fortunately, she is a detail-oriented individual who consistently tracks her activities on an electronic calendar. Happily, Emily discovered two cases of behavioral consultation. She proudly recalled both. Much like the procedures she learned in her behaviorally oriented graduate program, her consultation in these cases involved a structured approach. She helped teachers to define problems, consider environmental influences, modify suspected environmental contingencies, and examine progress. Each case appeared to turn out well. Less well tracked, but nonetheless still memorable, were instances of informal consultation. Adept at behavioral conceptualizations, Emily also helped two other teachers brainstorm potential reasons for classroom behavior problems. On reflection, perhaps she really was a helping professional—an authentic mental health provider.

But review of the rest of her calendar left her less positive. During those three months, she had conducted 27 special education evaluations, about one-third of which were obligatory triennial reevaluations. The reevaluations all involved mandatory IEPs, but Emily had no hand in their formulation or in judgments about whether annual goals were reasonable, consistent with the students’ needs, or even validly measured. Likewise, most of the many initial evaluations resulted in documentation of special education eligibility, again compelling formulation of an IEP. She discovered only one assessment case that did not concern special education per se. It involved a parent-initiated referral about the prospect of ADHD (Emily interpreted this as a non-special education referral, although other school psychologists might have decided eligibility was at issue; see Chapter 12). Emily was displeased with her emerging professional portrait and what it might imply. In truth, much of her professional life involved special education concerns—paperwork to enable and document evaluations, test administration, report preparation, meeting attendance. Drilling down for details, Emily discovered that she had administered 61 broadband rating scales and 8 narrowband scales, conducted 28 child interviews, and observed students in classrooms 39 times. Much of this flurry of activity seemed unconcerned with truly helping children achieve success, happiness, or personal adjustment.

Disheartened, Emily shared her self-study and its apparent findings with two trusted school psychology colleagues, Kimani Johnson and Flora Rivera. Kimani has 25 years of experience; Flora is in her first year. Both found Emily’s discovery interesting and potentially important. Kimani, who was assigned to two elementary campuses, said that he suspected his time was spent in largely the same way. He doubted, however, that his schedule would include the happy occurrence of four cases of formal or informal behavioral consultation. He could recall, perhaps, a couple of instances of chatting with teachers about non-referred students, but that was it. He, like Emily, had very little to do with IEP formulation or tracking student progress. Although many of Kimani’s evaluations concerned potential SLDs, several centered on perceived social-emotional problems. Teachers initiated all of the social-emotional referrals, and nearly all of those referrals involved boys with classroom and playground behavioral excesses (i.e., disruptive behavior). Kimani put it this way: “The traditional teacher nomination process might work out okay for learning problems. But for emotional issues it seems to result in just a few teachers launching most of the referrals. There may be another flaw in the system. Some referrals seem to have more to do with a teacher’s personal intolerance for certain behaviors than with the objective severity of the behavior itself. There seems to be a big bias toward referring conduct problems and a bias against referring internalizing problems. It seems like we could do a better job of addressing the emotional aspects of some of our students.”

A colleague of Emily and Kimani, Flora Rivera found herself in a different situation. She was assigned to two middle schools. As such, she confronted a workload heavy on reevaluations, relatively light on initial evaluations, plus a noteworthy set of counseling duties. New to her school (and indeed to school psychology practice), Flora believed she had inherited a role created largely by her predecessor—a proponent of school mental health services. For example, Flora was told by her principal that she was to continue two ongoing social skills groups that had been the darling of her recently retired colleague. Many group members were carryovers when Flora arrived, and many teachers (and even a few parents) had come to request that students enter these groups. Similarly, Flora was expected to carry an individual counseling caseload. Disappointed to find counseling so central to her role, and otherwise buried in special education tasks like her two colleagues, Flora resorted to the only counseling skills she possessed (non-directive therapy).

When they put their heads together, the three school psychologists shared an epiphany—they may not actually be the skilled helpers of children that they had set out to be when entering practice. Their roles were either too constrained by special education obligations (Emily and Kimani) or precast by those who came before (Flora). The group decided to work in concert to do better. The three aimed to expand their roles to offer services that truly help students. They chose to do so using elements of some of the procedures you have already read about in this book. Somewhat sheepishly, Emily proposed the following title for the group: DM3 (the “do more three”). Equally sheepishly, Kimani and Flora agreed. The DM3 split up responsibilities. Here’s how things broke out. Hoping to add more to IEPs, especially regarding emotional and behavioral concerns, Emily addressed special-education-eligible students—those with whom she was already entirely familiar. She specifically aimed to improve their mental health services by formulating better IEPs (and Section 504 plans). Wishing to promote mental health for all students and investigate the social-emotional referral process, Kimani in turn selected screening. Intent on making better use of her current counseling services, Flora went after information about matching mental health services to assessment-derived diagnoses or other assessment findings. Let’s see how these three endeavors turned out. As you no doubt suspect, the information that follows from the DM3 may be relevant to your own practice. It may reveal ways to extend your role into the mental health arena and thereby benefit the students that all school psychologists have committed to help.

Monitoring, and Enhancing, Social-Emotional Goals in IEPs

Emily envisioned a role in which she could help oversee the vital topic of students’ social-emotional progress. Broadly, she wished to advocate for inclusion, and especially the careful monitoring, of mental health goals in IEPs. Although she might not herself implement IEP-related interventions, she could still help judge the presence or absence of progress simply by using psychometrically sound techniques. The ability to contribute to students’ monitoring struck Emily as a happy approximation of her own intention to “truly help” students. In this pursuit, she began to collect information on the nexus of social-emotional assessment tools and intervention-related score changes. She first spoke with a former professor who helped her compile a reading list. She then started looking with a new pair of eyes at cases as they came across her desk. She scrutinized students’ existing IEPs, especially those with ED designations. She wanted to see evidence that services were actually helping students. Emily indeed found IEPs in the file of each special education student, as required by law. Carefully reading the files of students with ED, however, she was struck by several realities. The first was that many ED students’ IEPs read just like those of their SLD counterparts. That is, annual progress goals concerned solely academics, leaving out any legitimate behavioral or emotional goals. Consider the case of Kayla Novotny, an ongoing special education student whose file Emily accessed at her home campus. Kayla’s report revealed that she had always been a reticent, self-effacing girl who sometimes gave up too easily on assigned work. A fourth grader at the time of her assessment, she was falling farther and farther behind in math. Details from her full psychoeducational assessment confirmed several other facts: average general cognitive ability; extremely low scores on math reasoning (but only somewhat low scores on math computation); confirmation of poor self-esteem and negative expectations for school success during a clinical interview; elevations on anxiety dimensions on teacher- and parent-completed rating scales. Once Kayla was deemed eligible for special education and related services (in the category of ED), her special education teacher used psychoeducational evaluation results to create an IEP. Consider the four annual goals listed below.

  • Kayla will show positive self-esteem on 3 out of 5 occasions.
  • Kayla will improve her rate of making positive choices when feeling anxious.
  • Kayla will demonstrate a significant one-year performance gain on the Kaufman Test of Educational Achievement-3 Math Concepts & Applications subtest.
  • Kayla will demonstrate a significant one-year performance gain on the Kaufman Test of Educational Achievement-3 Math Computations subtest.

On the one hand, Emily was impressed that this ED student’s IEP included social-emotional goals. On the other hand, she recognized Kayla’s social-emotional goals were poorly conceptualized and inadequately written. Even a casual examination reveals they are also drastically inferior to her academic goals. It’s easy to see that Kayla’s beginning-of-the-year math skills could (at the end of the school year) be measured with the KTEA-3 (Kaufman & Kaufman, 2014) and a straightforward comparison made. What’s more, the conceptual and reasoning aspects of math are well sampled by this published achievement instrument, as are the computational aspects of math (i.e., there is good construct representation). Scores are objective and quantified. The KTEA-3 has known indicators of reliability, and students’ growth trajectories are known from group data. Consequently, educators wishing to use the KTEA-3 as an index of student progress embedded in an IEP merely enter two sets of scores into the available (online) software to learn if significant progress (i.e., beyond chance at an acceptable probability level) was achieved. Reviewing a handful of other IEPs devised for ED students proved similarly disappointing. When Emily talked with her school’s two resource special education teachers, she realized something else. The social-emotional goals were soon forgotten (and there was often no concrete plan for their implementation). This prompted Emily to envision three specific tasks to promote mental health among service-eligible students:

  • Assure that social-emotional goals are routinely present
  • Guarantee that prescribed social-emotional activities are actually implemented
  • Lend technical competence on measuring goal attainment (monitoring)

Emily’s thinking seems compatible with the published literature. Viewing students with emotional problems from a legal/administrative perspective, Yell (2019) argued that court decisions (i.e., the U.S. Supreme Court case of Endrew F. v. Douglas County School District) established important obligations concerning IDEA’s free appropriate public education (FAPE) dictate. “Students with emotional and behavioral disorders (EBD) present a unique challenge to the collaborative teams that develop their IEPs. This is because these students usually present a combination of academic and behavioral (functional) needs that must be addressed in their IEPs to ensure that their special education programs confer a FAPE” (Yell, 2019, p. 53). Crucially, some IEPs are deficient. Although Kayla’s IEP (happily) included both academic and social-emotional goals, some IEPs do not. Moreover, concerning students with anxiety (like Kayla), plans may over-emphasize academics at the expense of social-emotional considerations (Green et al., 2017). What’s more, plans may fail to fit the best practices for students with anxiety. One study (Green et al., 2017) looked at IEPs/accommodation plans of students coming to clinic settings for treatment of anxiety. The study revealed that one-half of such students were permitted to get up and leave during a school exam if they became anxious. Although seemingly compassionate, the practice might inadvertently invoke a counterproductive escape-response paradigm. Similarly, students too anxious to talk with their teachers often had unsatisfactory plans, ones falling short of experts’ advice about treating childhood anxiety. Specifically, plans were typically devoid of recognized procedures like prompting reticent students to progressively learn to speak up (e.g., planned rehearsal). Parallel weaknesses seem to exist for students with ADHD. Specifically, Spiel, Evans and Langberg (2014) examined IEPs of a set of middle school students with ADHD. In light of the nature of the students’ problems, they hoped to find both social-emotional and academic dimensions among students’ “Annual Measurable Goals and Objectives.” The rate of social-emotional goals proved disappointing. “Even when these areas of need [non-academic/behavior problems] are recognized, close to half of IEPs contained no goals for improving these behaviors” (Spiel, Evans & Langberg, 2014, p. 461).

Given facts like these, Emily’s advocacy for school psychologists’ IEP input seems especially apt. But there’s more. The literature further suggests the importance of not just formulating plans but assuring that they are faithfully implemented (i.e., the notions of treatment integrity [Gresham, Dart & Collins, 2017] and treatment fidelity [Fallon, Cathcart & Feinberg, 2019]). Stated simply, if a plan is never put into place to begin with, or if behavioral goals are soon forgotten by special educators and classroom teachers, then the prospects of improvement remain dim. The topic is often overlooked, such as in research concerning both academic and social development of students with autism (Gould, Collier-Meek, DeFouw, Silva & Kleinert, 2019). Emily certainly intended faithful implementation of mental health services, not merely well-intentioned goals languishing in students’ administrative folders.

It’s pretty simple to visualize how Emily and the DM3 could introduce mental health concerns into IEPs and Section 504 plans. Their professional schedules would need to be juggled to create time, but school psychologists are already versed in consultation with teachers. They could easily provide expert advice about mental health as a complement to academics. Consider the diverse consultation topics found in even a quick scan of the school psychology literature: promoting social and academic development among students with autism (Garbacz & McIntyre, 2016), helping students with ADHD (Volpe, DuPaul, Jitendra & Tresco, 2009), promoting positive behavior among those expressing behavioral excesses (Sheridan, Witte, Wheeler, Eastberg, Dizona & Gormley, 2019), affording educators guidance on equitable disciplinary practices for students of color (Garro, Giordano, Gubi & Shortway, 2019) as well as engendering academic improvement among gifted students (Knotek, Kovac & Bostwick, 2011). School psychologists could easily redeploy the same set of consultation skills to improve treatment integrity. In other words, at the time of IEP meetings, school psychologists might prompt team members to formalize a method of documenting treatment fidelity. But it is the issue of tracking progress (i.e., social-emotional progress monitoring) that requires special knowledge and expertise. Let’s look in depth at that topic before we turn later in the chapter to actions of Emily’s colleagues, Kimani and Flora.

Progress Monitoring with Psychometric Tools and the Reliable Change Index

In educational terms, progress monitoring concerns ongoing evaluation of student responsiveness as an intervention is applied (Chafouleas, Johnson, Riley-Tillman & Iovino, 2021). In clinical settings, this is akin to the long-recognized requirement to evaluate individuals’ progress during psychotherapy (Howard, Moras, Brill, Martinovich & Lutz, 1996). Emily understood that the technical strengths associated with Kayla’s math goals are absent in her social-emotional goals. Now that the school year was ending, Emily wondered what could be determined about Kayla’s progress. Let’s start with the self-esteem goal for Kayla. Emily’s foundational readings in progress monitoring and her own reflections revealed several problems. To address this goal’s attainment, who will actually judge the presence or absence of “positive” self-esteem? Will it be the special education teacher? (She might have a risk of bias because she also provides instruction.) Alternatively, might it be an unbiased independent observer? What definition of self-esteem might be held by prospective observers? What occasions (as mentioned in the goal) will be used to observe self-esteem? What is the threshold for “positive” self-esteem (as mentioned in the goal)? How reliable is the judgment that “positive” self-esteem was witnessed during each observation? Clearly, this goal suffers many problems that do not plague Kayla’s psychometrically based math goals. Equally problematic, and largely parallel, considerations arise concerning the second social-emotional goal. Who knows what is meant by the goal’s phrase “positive choice”? Who knows what is meant by “when feeling anxious”? Would possible observers use the same criteria in the same way when making repeated judgments about Kayla’s actions? A valid and reliable determination of this goal’s mastery, just like that of the first goal, seems dubious.

Emily’s reading helped her realize that already-familiar measurement concepts might be applied to monitoring student social-emotional progress. After all, if a student’s academic skill improvements could be evaluated by psychometric tests (e.g., KTEA-3), might it not be possible to do something equivalent using social-emotional psychometric tools repurposed to monitor progress? But how might she proceed? Let’s go back to a single strand from Kayla’s array of objectives and use that one strand to concretely consider measuring social-emotional progress. The very first goal (the one concerning “positive self-esteem”) will serve the purpose. You will see that for the upcoming year’s annual progress determination, Emily proposed that Kayla’s IEP include goals that could be measured psychometrically via changes in objective scores. This method would be used rather than the more subjective method used previously (see Table 15.1).

Table 15.1 Two Contrasting Methods for Measuring Kayla’s Self-esteem Annual Goal

| When goal was used | Formulator of goal | Goal wording | Method of determining goal’s attainment |
|---|---|---|---|
| Last year | Teacher | “Kayla will show positive self-esteem on 3 out of 5 occasions.” | Unclear |
| Next year | Teacher + school psychologist | “Kayla will show improved self-esteem as measured by pre-post Piers-Harris-2 Total Score.” | RCI based on Piers-Harris-2 Total Score |

Emily already knew that there are commercially available measures of “self-concept.” One is entitled the Piers-Harris Children’s Self-Concept Scale, 2nd edition (Piers-Harris-2; Piers & Herzberg, 2002). It is made up of 60 yes-no, self-report items suited to children 7 years and older. To assure that the construct of self-esteem is fully represented, the Piers-Harris-2 contains items that concern the following: behavioral adjustment, intellectual and school status, physical appearance and attributes, freedom from anxiety, popularity, as well as happiness and satisfaction. For many purposes, including the present one, a total score (reported as a T-score) can be used. Presumably, part of Kayla’s problem is low self-esteem; improved self-esteem accomplished through participation in special education and related services thus represents a reasonable goal. Part of the IEP’s purpose is to determine if the self-esteem-related goal is attained.

Now let’s begin to consider technical issues that confront Emily. It is self-evident that a Piers-Harris-2 Total Score cannot possibly be perfectly reliable. By chance alone, a first and a second Piers-Harris-2 Total Score will differ somewhat (hence the necessity to apply confidence intervals when interpreting scores under any circumstance, such as a one-time measurement of self-esteem during a static, garden-variety evaluation). Logically then, when an end-of-the-year observed score seems better than a beginning-of-the-year score, it is necessary to judge whether the observed difference reflects real improvement or is just an artifact of measurement error (i.e., score differences arising from imperfect reliability). By this line of reasoning, an instrument with good reliability would be especially advantageous when pre-post scores are compared.

To help the team expertly determine if genuine progress has occurred, school psychologists like Emily just need standardized social-emotional instruments whose manuals report reliability statistics. To remind you, a key reliability indicator is the standard error of measurement (SEM). SEM is a hypothetical notion. It concerns the standard deviation that would hypothetically arise if the same individual took the same test an infinite number of times. If scores clustered closely around a central value on repeated administrations, the standard deviation of that distribution (the SEM) would be small. Conversely, if scores fanned out widely around a central value on repeated administrations, the standard deviation of that distribution (the SEM) would be large. Thus, SEM is just a variant of commonly seen reliability statistics, but one configured for applied use. For example, SEM values can be used to create confidence intervals, as commonly seen for ability and achievement test scores in school psychologists’ reports.
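To make these ideas concrete, here is a minimal sketch (in Python) of how an SEM and a resulting confidence interval can be computed from values a manual reports. The T-score metric (mean 50, SD 10) is standard, but the reliability of .90 is an assumed value chosen purely for illustration, not a figure drawn from any particular manual.

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SD multiplied by sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

def confidence_interval(score: float, sd: float, reliability: float,
                        z: float = 1.96) -> tuple[float, float]:
    """Symmetric confidence interval around an observed score (95% when z = 1.96)."""
    margin = z * sem(sd, reliability)
    return (score - margin, score + margin)

# Illustrative values: a T-score of 24 on a scale with SD = 10 and an
# assumed (not manual-derived) reliability of .90.
low, high = confidence_interval(score=24, sd=10, reliability=0.90)
print(f"SEM = {sem(10, 0.90):.2f}; 95% CI = {low:.1f} to {high:.1f}")
```

Run as written, the sketch reports an SEM near 3.2 T-score points, so even a one-time score of 24 is best read as a band stretching from roughly 18 to 30.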

For conscientious school psychologists like Emily to make confident judgments about whether a test score reflects true improvement, they need a statistic entitled the Reliable Change Index (RCI). Put succinctly, “RC [Reliable Change] tells us whether change reflects more than the random fluctuations of an imprecise measuring instrument” (Jacobson & Truax, 1991, p. 14). Emily learned everything she needed to know about RCI from a concise article by Jacobson and Truax (1991). Happily, an online RCI tutorial is also available (https://ir.canterbury.ac.nz/bitstream/handle/10092/13399/12664317_Reliable%20Change%5ETutorial%5ENZPsS%5E2016.pdf?sequence=1), and Appendix E instructs you on preparing RCIs.

Emily, in fact, discovered a non-chance level of improvement in Kayla’s scores. RCI calculations indicated that the difference between Kayla’s beginning-of-the-year Piers-Harris-2 T-score of 24 (<1st percentile) and her end-of-year T-score of 35 (7th percentile) represented a reliable (non-chance) improvement in global self-esteem. Given our shared human predispositions to misinterpret (e.g., confirmation bias), the objectivity inherent in using RCI seems to represent a huge advantage. As you have heard repeatedly in this text, swapping intuition and subjectivity (the Automatic System) for rationality and objective thinking (the Reflective System) should be welcomed anywhere in our professional lives where it is possible.
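A minimal sketch of the Jacobson and Truax (1991) computation, applied to Kayla’s reported T-scores, appears below. The test-retest reliability of .90 is an assumed value for illustration only; in practice, that figure would come from the instrument’s manual.

```python
import math

def reliable_change_index(pre: float, post: float, sd: float,
                          reliability: float) -> float:
    """Jacobson & Truax (1991) RCI: observed change divided by the
    standard error of the difference between two administrations."""
    sem = sd * math.sqrt(1.0 - reliability)   # standard error of measurement
    s_diff = math.sqrt(2.0 * sem ** 2)        # standard error of the difference
    return (post - pre) / s_diff

# Kayla's pre and post Piers-Harris-2 Total T-scores (from the text);
# the reliability of .90 is assumed, not taken from the manual.
rci = reliable_change_index(pre=24, post=35, sd=10, reliability=0.90)
print(f"RCI = {rci:.2f}; reliable change at 95% confidence: {abs(rci) > 1.96}")
```

Under these assumptions the RCI works out to about 2.46, exceeding the conventional 1.96 threshold and thus supporting the conclusion that Kayla’s improvement is unlikely to reflect measurement error alone.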

A Few Tools Suited to RCI Calculations

Many school psychologists have never heard of RCI (this included Emily before this undertaking). What’s more, the computation needed to use RCI might engender resistance even among those who gain familiarity. Happily, some manuals now include pre-calculated RCI values (or at least most of the statistics needed to quickly generate RCI values). Emily’s former professor directed her toward some such tools. Among broadband scales, for instance, the Conners CBRS includes a section on “statistically significant change” on pages 156-157 of its manual. The various tables found on these pages indicate how many raw-score and T-score points of positive change are needed to reach statistical significance at the 90% confidence level. The following Conners CBRS dimensions have values available: Disruptive Behavior Disorder Indicator, Learning and Language Disorder Indicator, Mood Disorder Indicator, Anxiety Disorder Indicator, and ADHD Indicator. No such coverage is found in the BASC-3 manual, however (instead, the BASC-3 has its own tool entitled the BASC-3 Flex Monitor; Reynolds & Kamphaus, 2016).

Further good news is that several narrowband scales now embrace use of the RCI (see Table 15.2). Regarding ADHD, for example, the ADHD-5 Rating Scale (DuPaul, Power, Anastopoulos & Reid, 2016) is set up for RCI use. The authors address RCI in their manual, and the manual reports the standard error of the difference (the prime value needed to calculate RCIs) for scores related to Inattention and Hyperactivity-Impulsivity as well as a Total score (see the manual’s pp. 88-89). In parallel, when organizational skills are a target of intervention, the Children’s Organizational Skills Scales (Abikoff & Gallagher, 2009) might be used. Threshold raw-score and T-score changes at the 95% confidence level are reported in the manual (see the manual’s pp. 71-73). For monitoring improvement of depressive symptoms, the Children’s Depression Inventory-2nd ed. (CDI-2; Kovacs, 2011) might be considered. The CDI-2 manual explicitly addresses score changes and guides the user in making judgments about whether a reliable change was achieved. The CDI-2 manual, however, reports necessary T-score changes without discussing RCI specifically or indicating the confidence threshold (e.g., 90%, 95%) associated with reported values (see the manual’s pp. 49-51). School psychologists have other options that might be tapped for use in students’ IEPs. For instance, regarding anger management, the Anger Regulation and Expression Scale (ARES; DiGiuseppe & Tafrate, 2011) is a candidate. The ARES manual reports threshold T-score differences large enough to allow school psychologists to infer that score changes have reached a non-chance level. This is true even though the ARES manual fails to indicate which level of confidence is used (see pp. 40-41 in the manual for an explanation of RCI and p. 97 for a table of values). Regarding anxiety, the Multidimensional Anxiety Scale for Children-2nd edition (MASC-2; March, 2013) reports threshold T-score changes at the 90% confidence level (see manual p. 30).

Emily then began to consider the practicalities of RCI use. She realized that many students already have scores available from the very tools cited above because those tools were administered during the assessment phase. Thus, accessing assessment-derived scores from popular tests like the Conners CBRS or the MASC-2 might serve as a springboard for psychometrically measured annual goals in a student’s IEP. Fortunately, end-of-the-year repeated administration of the Conners CBRS or MASC-2 would require relatively little effort. Similarly, the number crunching for pre-post score comparisons is easily accomplished. With these facts in front of her, Emily continued to warm to the idea of expanded participation in progress monitoring.

Emily discovered other realities. The tools listed above might sometimes be used for progress monitoring, but they were not created with that purpose in mind. In other words, most of these tools are marketed to assist in standard assessments (e.g., to assist in a diagnosis), but when deployed for progress monitoring, the scales are being used dimensionally (see Chapter 1 for a discussion of categorical and dimensional considerations). In fact, each manual’s coverage of RCI (and how to document treatment-related improvement) invariably takes a backseat to more conventional uses, such as ruling in or ruling out problems and helping to establish diagnostic conclusions. But Emily found that this was not true for one particular tool: the Behavior Intervention Monitoring Assessment System, 2nd edition (BIMAS-2; McDougal, Bardos & Meier, 2016). In contrast to its assessment-focused counterparts, the BIMAS-2 was originally constructed with sensitivity to change as an explicit mission. Thus, its manual is filled with information about progress monitoring (and screening). Prospective users will need to spend time with the manual to learn all of its capabilities. Simply put, the BIMAS-2 standard form concerns two sets of scales. The first set comprises “Behavioral Concern Scales”:

  • Conduct (e.g., problems managing anger, bullying others)
  • Negative affect (e.g., anxiety, depression)
  • Cognitive/attention (e.g., focus, planning)

The second set comprises “Adaptive Scales:”

  • Social (e.g., maintaining friendships, communication)
  • Academic Functioning (e.g., academic performance, direction following)

The informant options for the BIMAS-2 include teachers, parents, professionals, and self. As one might suspect, the BIMAS-2 is designed to support RCI determinations. For school psychologists interested in routine progress monitoring, like Emily and the DM3, the BIMAS-2 might ultimately work better than the popular tools discussed above. Emily made note of the BIMAS-2 for future consideration. Readers might wish to do the same.

Table 15.2 Examples of Tools for Quantitative Measurement of Progress (including RCI)

Broadband

| Name of Tool | Some dimensions measured |
|---|---|
| BASC-3 Flex Monitor | Inattention/hyperactivity, internalizing problems, disruptive behaviors, developmental social disorders, school problems |
| Conners Comprehensive Behavior Rating Scale | Disruptive behaviors, learning and language, mood, anxiety, ADHD |
| BIMAS-2 | Conduct, negative affect, cognitive/attention |
| ASEBA Progress & Output App | Internalizing, externalizing, total problems |

Narrowband

| Name of Tool | Dimension measured |
|---|---|
| ADHD-5 Rating Scale | Inattention, hyperactivity/impulsivity, total ADHD symptoms |
| Piers-Harris-2 | Self-concept |
| Children’s Depression Inventory-2 | Depression |
| Anger Regulation and Expression Scale | Management of anger |
| Children’s Organizational Skills Scales | Organization |

Points to Remember for Psychometric Progress Monitoring

Several other notions are important when considering RCI. School psychologists might already be familiar with the underlying logic of each, but they warrant mention here.

Statistical vs. Clinical Significance

Emily has come to recognize that RCI, as valuable as it might be, merely detects non-chance pre-post differences. It concerns statistical significance. Thinking back to her statistics courses, she remembers that “statistical significance” and “clinical significance” convey two distinct meanings. These two notions are quite pertinent regarding RCI (Zahra & Hedge, 2010). Specifically, statistical significance concerns probability; it addresses whether differences observed between scores (such as pre vs. post scores on the Piers-Harris-2) are best attributed to chance. In contrast, clinical significance is about practical, real-world relevance. When used for progress monitoring, clinical significance concerns the magnitude of change, not whether it is statistically probable or statistically improbable. For practicing school psychologists, clinical significance helps resolve the question of whether score changes really matter. That is, has improvement been substantial enough to help the student with considerations like personal comfort, quality of life, or adaptive functioning? For example, Kayla might evidence a non-chance (RCI-documented) improvement in her end-of-the-year Piers-Harris-2 score compared to her score at the outset of the year. But do these score changes denote a meaningful self-esteem boost? When calculating inferential statistics, researchers often use statistical significance and clinical significance in concert (Warner, 2013). Probability values (e.g., p < .05) and effect sizes (e.g., Cohen’s d = 1.3) tell researchers two things. The former indicates that experimental and control group differences this large would be expected by chance in fewer than 5 of 100 instances. The latter indicates that the experimental and control distributions have means that are far apart (about 1.3 standard deviations separate the distributions’ means). Favorably, the BIMAS-2, for example, provides pre-post indications of both probability (i.e., RCI) and effect size—they appear on the student’s computer printout.
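For single-student monitoring, one simple convention (an assumption here, not a method prescribed by any publisher) is to scale the observed pre-post change by the instrument’s normative standard deviation, producing an effect-size-like companion to the RCI:

```python
def change_effect_size(pre: float, post: float, sd: float) -> float:
    """Pre-post change expressed in normative standard deviation units,
    analogous to Cohen's d."""
    return (post - pre) / sd

# Kayla's T-scores; the T-score metric has SD = 10.
d = change_effect_size(pre=24, post=35, sd=10)
print(f"Change of {d:.1f} SDs")  # about 1.1 SDs of improvement, yet the
# post score (T = 35, 7th percentile) still sits well below average.
```

The pairing mirrors the research practice described above: the RCI speaks to probability, while the standardized change speaks to magnitude.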

It gets even more nuanced than that. What if there is improvement (both statistical and clinical) but a student still has unfavorable scores on post-treatment testing? In other words, if both pre and post scores are poor despite improvement, that improvement might be judged insufficient (or even trivial). This might be the case regarding Kayla’s self-esteem. Her end-of-year Piers-Harris-2 Total score is indeed reliably improved over her beginning-of-the-year score, but that end-of-year score (at the 7th percentile) remains unsatisfactory. Thus, Emily decides she will not trust significant RCI values alone to guide conclusions about whether an intervention has garnered success. She also wants to see large score changes and scores that confirm a student has moved into the average range. Beyond these change-score-related measurement issues, there are a couple of other important notions that Emily considered.

Regression to the Mean

Regression to the mean is an important concept (Glass & Hopkins, 1996) that can sometimes prove difficult to grasp (Kahneman, 2011). Nonetheless, it holds important implications whenever progress is systematically monitored. Consider this: “Regression to the mean (RTM) is a statistical phenomenon that can make natural variation in repeated data look like real change” (Barnett, van der Pols & Dobson, 2005, p. 215). Emily remembers the topic of regression to the mean from her statistics courses (and from her practice involving ability/achievement discrepancies in SLD determination, including classic articles she read about SLD practices during the 1990s; Evans, 1992). In the simplest terms, regression to the mean indicates that unusually good or unusually bad test scores are generally not repeated. Instead, a second testing tends to result in scores somewhat closer to the mean than the first score. How much closer is determined by the principle of regression (a variation of correlation). When group-level data indicate that a first and second administration of the same test (e.g., WISC-5 Full Scale IQ) are strongly correlated, there will be relatively little regression toward the mean. Conversely, when scores on two administrations of the same test are only modestly correlated (e.g., BASC-3’s SRP Anxiety score), there will be more regression to the mean. (Because the correlation values spoken about immediately above merely reflect repeated administration of the same test, these correlations simply represent “test-retest reliability” coefficients.) In some ways, a second score’s tendency to regress toward the mean makes intuitive sense. An outlier first score may be so far from the mean in part because of luck (non-systematic variation). But the very luck that contributed to a first unusual score is unlikely to recur with a second score. Lucky influences (e.g., effort, time of day, concentration) are often ephemeral. A second score (e.g., when bad luck evens out for the better, or good luck evens out for the worse) typically appears more average. According to these considerations, blind use of RCI might be flawed because it ignores regression to the mean. Specifically, an improved second score might have little to do with an intervention. Instead, it might simply represent a not-unexpected occurrence in light of the phenomenon of regression to the mean.
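The expected pull toward the mean can itself be estimated from the test-retest correlation. The sketch below assumes a retest reliability of .80, a value chosen only for illustration:

```python
def predicted_retest(first: float, mean: float, retest_r: float) -> float:
    """Expected second score if nothing but regression to the mean operates:
    the first score's distance from the mean shrinks by the retest correlation."""
    return mean + retest_r * (first - mean)

# A very low first T-score of 24 on a scale with mean 50, assuming r = .80.
expected = predicted_retest(first=24, mean=50, retest_r=0.80)
print(f"Expected retest score with no intervention: {expected:.1f}")
```

Under these assumptions, a first score of 24 is expected to drift to about 29 on retest by regression alone, which is why an improved second score cannot automatically be credited to the intervention.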

What should Emily do about regression to the mean? She and the DM3 decide to be vigilant but not let this consideration stop them from using RCI. There is some evidence to buttress their decision. For example, when this very topic was studied systematically among children outside of schools (i.e., youth treated for a medical condition), researchers concluded that straightforward use of an RCI was just about as trustworthy as a methodology that accounted for regression to the mean (Busch, Lineweaver, Ferguson & Haut, 2015). What’s more, these researchers concluded that because RCIs are easy to compute and turn out to be nearly as accurate as an alternative approach that uses regression, RCI might reasonably be adopted. Nonetheless, for purists wishing to consider regression in examining pre-post score changes, the corresponding author of this research article (Robyn Busch at the Cleveland Clinic) provided an email address (buschr@ccf.org) that can be used to request her Microsoft Excel calculator for regression-based analyses. Interestingly, and despite concern among critical thinkers like Emily, the choice between RCI and regression-based approaches for school-based change analysis does not appear to have been addressed in the school psychology literature.

Apparent Score Improvement, but What Is the Cause?

Beyond the topic of regression to the mean, there is more to the issue of whether any change is actually due to a specific intervention. This is a logical conundrum arising because our school-based interventions necessarily occur without a control group. Emily, always a clear thinker, also recalls learning about experimental design in graduate school and how some designs inherently limit the ability to infer causation. Consider Kayla’s IEP and its self-esteem targets—Kayla’s special education and related services are an intervention aimed to boost self-esteem. Kayla’s intervention roughly approximates what Campbell and Stanley (1963) in their classic text called a “one-shot case study.” This involves “a design in which a single group is studied only once, subsequent to some agent or treatment presumed to cause change” (Campbell & Stanley, 1963, p. 7). In this instance, it is not a single group but just one individual (Kayla) being scrutinized for potential change. Note that in applied behavior analysis terminology, this is akin to an AB design (A = pre-intervention [baseline] condition; B = intervention condition). The problem, of course, is that any apparent improvement might not really be treatment related. Kayla might have simply matured, or she might have come to enjoy a self-esteem boost because the soccer season has begun (and Kayla is a star), or her home life might have improved, which in turn engendered a flood of positive emotions, including better feelings about herself. There is always a risk that special services (or counseling, or medication, or a change in classroom setting) are afforded more credit than they deserve. To be clear, no one is proposing use of a control group for student-level interventions (how could this even be done?). Similarly, it is not feasible to employ reversal designs (e.g., ABAB) to help infer causation in the midst of trying to help a single student (Kazdin, 2013). Nonetheless, school psychologists and the teams with which they work should be modest, and employ hardheaded logic, about the cause of all apparent student improvements.

Screening and Multi-tiered Mental Health Services

Emily’s work on behalf of the DM3 seems to hold promise for role expansion (plus the prospect of truly helping students). But even if adopted, beefed-up IEPs and better monitoring of their mental health elements still reflect duties tethered to special education. Kimani’s information gathering took a somewhat different course, and his efforts hold vastly greater possibilities. Because he worked part-time as an adjunct faculty member at a local junior college, Kimani could conduct online literature searches that were impossible for Emily. Consequently, Kimani commenced a search using topic descriptions like “social-emotional,” “screening,” and “school” to discover what the published literature had to say. He also spoke with his sister, a pediatrician, someone who held her own perspective on screening regarding timely detection of social-emotional problems. Favorably, Kimani quickly realized that he was already familiar with the logic behind screening, a central dogma of which is that early problem detection allows maximum time to intervene. As you might suspect, this idea’s intuitive appeal resonates both inside and outside of schools. For example, the American Academy of Pediatrics, as Kimani learned from his sister, proposes routine screening for both developmental and social-emotional concerns beginning with toddlers (Weitzman & Wegner, 2015). Relevant to the upcoming discussion on screening instruments, however, research (Sheldrick, Merchant & Perrin, 2011) suggests that pediatricians’ summary judgments alone are ill-advised: relying on them to detect problems results in unacceptable levels of sensitivity (as low as 14%) and specificity (as low as 69%). This reminded Kimani of his own concern that holistic teacher impressions used to trigger referrals for ED evaluations often seemed inaccurate. Thus, pediatricians are urged in the medical literature to use objective screening tools to enhance these diagnostic utility rates. Doing so would be consistent with the notion that global clinical judgments are often inferior to structured and instrument-associated verdicts (recall the classic work by Meehl & Rosen, 1955 found in Chapter 2). Early problem detection in pediatricians’ offices, however, offers opportunities but also poses challenges. For example, pediatric practitioners recognize they are unable to manage social-emotional problems alone; accordingly, the pediatric literature urges them to link with community resources (Weitzman & Wegner, 2015). Nonetheless, longstanding calls for school psychologists and pediatricians to join efforts for the benefit of children (Sheridan, Warnes, Woods, Blevins, Magee & Ellis, 2009; Wodrich & Landau, 1999) seem to have gone largely unanswered. Notwithstanding imperfect pediatrician-school links, early identification efforts of IDEA necessarily depend on health providers (e.g., pediatricians) to detect problems and then direct preschoolers toward school-based (not just health care) services. This is covered in detail by the Birth to Five: Watch Me Thrive! module found on the U.S. Department of Health and Human Services website https://www.acf.hhs.gov/ecd/child-health-development/watch-me-thrive (retrieved November 10, 2020).

Our focus, of course, is the school-based practice of psychology concerning primarily school-age children, not pediatricians’ screening practices. One key educational rationale is that screening can help school districts with their affirmative obligation to identify students with any type of disability (the Child Find proviso of IDEA; see Chapter 10). Screening, thus, can serve as a logical prelude to in-depth, school-based evaluations for one or more of the disability categories described in IDEA. In fact, it is argued that effective screening may redress flaws entrenched in the special education identification process. This is because the special education referral process often hinges on teacher-initiated referrals and, critically, those referrals may reflect teachers’ implicit biases about their students. What’s more, a process dependent on teacher nominations may prioritize overt behavior problems over students’ internalizing problems, such as inconspicuous anxiety or covert depression. Holistic conceptualization of students is vulnerable to bias and a host of heuristics addressed numerous times in this book (e.g., halo effect). In contrast, data-driven approaches (e.g., standardized rating scales) tend to dissociate from teacher referrals, which inherently include subjective aspects (Dowdy, Doane, Eklund & Dever, 2013; Eklund et al., 2009). Moreover, use of objective instruments may help to diminish the widespread special education over-representation of ELL and minority students (see Sullivan, 2011; Sullivan & Bal, 2013). To this very point, Dever, Raines, Dowdy and Hostutler (2016) found that using a self-report screening tool (i.e., the Behavioral and Emotional Screening System, BESS; Kamphaus & Reynolds, 2015) might partially ameliorate some current problems with special education identification. The authors of this large study (a nationally representative sample of nearly 5,000 children) concluded, “These findings suggest that a data-driven approach to inform referral for special education may contribute to efforts to reduce the disproportionate placement of students of color and males in special education” (Dever et al., 2016, p. 59).

Figure 14.5. In MTSS, abstract notions of support can become concrete practices to help students.

Several different school-centered screening tools now exist (see the review by Jenkins and colleagues, 2014). One commercially available option that receives high ratings for its psychometric properties and clinical utility is the BESS (Kamphaus & Reynolds, 2015). The BESS, available in English and Spanish, screens children aged 3-0 to 18-11 years via parent and teacher report and 8-0 to 18-11 years via self-report. Each form contains 25 to 30 items. The following dimensions are assessed:

  • Behavioral and Emotional Risk Index
  • Externalizing Risk Index
  • Internalizing Risk Index
  • Adaptive Skills Risk Index
  • Self-regulation Risk Index
  • Personal Adjustment Risk Index

The BESS reports T-scores, percentiles, and classification ranges (e.g., “elevated risk”) for its “Behavioral and Emotional Risk Index.” Sometimes school psychologists will also use the classification range information available for each subscale. There are also validity scores and critical item indications. Some may question whether this is a tool for a clinic (or perhaps for addressing emotional or behavioral problems during SLD evaluations) rather than a universally applied screener. You will see more on this topic when the notion of tiered levels of services appears later in this chapter.

The BESS also seems to enjoy popularity because it is a member of the wide-ranging BASC-3 family. In addition, some researchers have chosen the BESS as a gold standard against which to compare the characteristics and success of other screening tools, further confirming its dominance. For example, a 19-item, teacher-completed instrument entitled the Social, Academic, and Emotional Behavior Risk Screener (SAEBRS; Kilgus & von der Embse, 2015) was compared to the BESS with 704 elementary students. The SAEBRS produced high rates of concordance with the BESS (92% classification agreement), plus strong evidence of sensitivity (.93) and specificity (.93; Kilgus, Eklund, von der Embse, Taylor & Sims, 2016). Interestingly, the same study confirmed the value of a multi-gate procedure (MGP). That is, combining a screening “gate” based on teacher nomination with a screening “gate” based on SAEBRS scores was supported. Critically, teacher nominations alone fared less well. Furthermore, the SAEBRS has also been studied empirically to locate optimum cut-scores, including use of a rigorous cross-validation procedure (Kilgus, Taylor & von der Embse, 2018). The SAEBRS can be accessed at http://www.fastbridge.org/assessments/behavior/behavior/.

Others have also examined the BESS. For example, Kettler, Feeney-Kettler and Dembitzer (2017) compared the BESS (a single-gate procedure) with the multi-gate Preschool Behavior Screening System (PBSS; Feeney-Kettler, Kratochwill & Kettler, 2009) and found the BESS to be a better predictor of parent and teacher ASEBA scores.

Kimani already knew about the BASC-3 TRS and PRS from his routine practice; thus, the BESS also seemed familiar. Other school psychologists routinely use the Conners CBRS in lieu of the BASC-3. For them, Conners CBRS-related screening might be more appealing. To this end, the Conners CBRS Clinical Index, comprising 24 items, is available as a screener. It provides T-scores in the following areas:

  • Disruptive Behavior Disorders
  • Learning and Language Disorders
  • Mood Disorders
  • Anxiety Disorders
  • ADHD

The manual, however, does not appear to address screening-related issues per se. For example, there are no indications of preferred screening-related cut-scores. Moreover, if entire student populations were screened using a liberal cut-score (e.g., T-score = 60), there is no way to predict roughly how many students might be flagged on at least one of the areas listed above. In other words, if there were a 16% chance that a randomly selected student would express an elevated T-score on “Disruptive Behavior Disorders” (this might be expected with a cut-score of T = 60) plus a 16% chance of an elevated T-score on “ADHD,” and so on, then the screening process might eventuate in too many students needing additional assessment.
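The arithmetic behind this worry is easy to sketch. Under the simplifying (and admittedly unrealistic) assumption that the scales are independent, the share of students flagged on at least one scale grows quickly with the number of scales checked:

```python
def flag_rate(per_scale_rate: float, n_scales: int) -> float:
    """Chance of exceeding the cut-score on at least one of n independent
    scales: 1 - (1 - p)^n."""
    return 1.0 - (1.0 - per_scale_rate) ** n_scales

# Roughly 16% of students exceed a T-score of 60 on any one scale;
# independence across scales is a simplifying assumption (correlated
# scales would flag fewer students overall).
for n in (1, 3, 5):
    print(f"{n} scale(s): {flag_rate(0.16, n):.0%} of students flagged")
```

With five scales, more than half of all students would be flagged on at least one dimension, illustrating why a manual’s silence on multi-scale screening rates is a practical concern.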

Kimani discovered another candidate for universal screening, the BIMAS-2, which you already read about regarding progress monitoring. The BIMAS-2 manual tackles screening head-on (see McDougal, Bardos & Meier, 2016). It shows that when T-scores of 60 are used for clinical dimensions (and T-scores of 40 for adaptive dimensions), satisfactory effect sizes (Cohen’s d) as well as indicators of discrimination (discriminant function analysis) are documented. As mentioned above, however, the rate at which students might actually be flagged by a universally applied process remains unknown when multiple dimensions (e.g., Conduct, Negative Affect, Cognitive/Attention) are checked.

In light of this information, Kimani can foresee social-emotional screening as an antidote to the possible capriciousness of teachers’ ED referrals. But there are potential limitations (and thorny issues) even when an apparently reliable and valid tool (e.g., BESS) is used. These include: (1) the question of parental permission, (2) where to place cut-scores, and (3) access to appropriate post-screening services. These topics are covered below.

The Question of Parental Permission to Screen

Is it necessary to secure parental permission to screen students for mental health problems? If you, like the DM3, are considering mental health screening, this is a crucial question. The answer seems to be, “it depends.” The NASP ethics seem to say one thing. This is spelled out quite clearly by Eklund and Rossen (2016) in an article on trauma screening that also seems to speak to social-emotional screening generally. First, NASP Principles for Professional Ethics indicate that there is no necessity to secure parental consent to “participate in educational screenings conducted as part of a regular program of instruction” (NASP 2010, p. 4; NASP 2020, p. 42). Second, however, focused screening (e.g., concerning social-emotional problems) seems to be a different matter. Specifically, the federal Protection of Pupil Rights Amendment (PPRA) indicates that LEAs need to “obtain prior written consent from parents before students are required to submit to a survey that concerns” several areas, one of which is “mental or psychological problems.” These facts would seem to mandate active written consent for social-emotional screenings. In fact, Eklund and Rossen opine that counting on passive consent (i.e., permission is assumed unless parents “opt out”) probably violates the PPRA. It is also noteworthy that passive consent may be even less ethically acceptable among parents with low English language proficiency and/or limited literacy. Sometimes, however, state statutes indicate what must be done. For example, the state of Arizona makes explicit the necessity for active parental permission. This is spelled out in Illustration 15.1. Thus, professionals working in Arizona schools, including school psychologists, are left with no doubt that it will not suffice to simply count on opt-out procedures if they hope to conduct universal mental health screenings. In light of these considerations, it is probably the case that local customs determine how this topic is managed, perhaps with input from each school district’s legal counsel.

Illustration 15.1 Arizona Department of Education Guidelines for Screening, Exact Wording.

A. Before it conducts a mental health screening on any pupil, defined as a survey, analysis or evaluation created by a governmental or private third party pursuant to the protection of pupil rights amendment (20 United States Code section 1232h; 34 Code of Federal Regulations part 98), a school district or charter school must have obtained the written consent of the pupil’s parent or legal guardian. The written consent must satisfy all of the following requirements:

1. Contain language that clearly explains the nature of the screening program and when and where the screening will take place.

2. Be signed by the pupil’s parent or legal guardian.

3. Provide notice that a copy of the actual survey, analysis or evaluation questions to be asked of the student is available for inspection upon request by the parent or legal guardian.

B. The chemical abuse and related gang activity survey conducted by the Arizona criminal justice commission pursuant to section 41-2416 is exempt from the provisions of this section if the survey does not include questions related to depression or religiosity.

Selecting Cut-scores for Screening

Kimani also had to consider the practical question of cut scores. He recognized that when screening it might be better to over-identify than to under-identify. That is because screening will prompt a thorough evaluation (e.g., concerning the possible presence of ED). If a student’s detailed (post-screening) evaluation concludes there is no problem, then there was only wasted time and effort but no real damage. On the other hand, if a true case is missed, then that student is deprived of needed, important and legally mandated services. Similarly, as seen below, screening sometimes prompts delivery of low-intensity services. These services may help but they are unlikely to harm. Cumulatively, these considerations argue for a lenient cut-score, one that could tolerate many false positives but relatively few false negatives (misses). For example, a T-score of 70 (denoting clinical range) may make sense for helping to verify a problem’s presence during detailed evaluation. But a T-score of 60 (denoting at risk) may make more sense for screening purposes. Unfortunately, as you saw above, some screening tools’ manuals are silent on the topic of cut-scores. You will see more below about making decisions based on students’ score levels.
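The trade-off Kimani weighed can be illustrated with hypothetical score distributions. The means and standard deviation below are assumptions for illustration only, not values from any screening manual:

```python
from statistics import NormalDist

def sens_spec(cut: float, case_mean: float = 70.0,
              noncase_mean: float = 50.0, sd: float = 10.0) -> tuple[float, float]:
    """Sensitivity and specificity at a given T-score cut, assuming normally
    distributed scores for true cases and non-cases."""
    sensitivity = 1.0 - NormalDist(case_mean, sd).cdf(cut)  # true cases caught
    specificity = NormalDist(noncase_mean, sd).cdf(cut)     # non-cases passed over
    return sensitivity, specificity

for cut in (60, 70):
    se, sp = sens_spec(cut)
    print(f"cut T = {cut}: sensitivity {se:.0%}, specificity {sp:.0%}")
```

Under these assumed distributions, dropping the cut from 70 to 60 raises sensitivity from about 50% to about 84% while specificity falls from about 98% to about 84%, which is exactly the lenient-screening logic described above.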

Screening Linked to Multi-tiered Systems of Supports

Is the entire point of screening simply to detect previously missed cases of ED (thus triggering detailed eligibility evaluations)? As you might have surmised, stopping there would prove unacceptable for Kimani and the DM3. After all, the group's purpose is to do more to help (read "truly help") students. Thinking big, Kimani and the DM3 contemplated systems-level changes in mental health delivery. That is, rather than addressing students one by one, schools might gear up with an array of social-emotional interventions. Later, these services could be deployed to students who fail a screening or otherwise indicate a need. Seen this way, any effort at screening is a portal to a much larger process. Consider the following perspective on screening: "Schools planning to engage in such work should also ensure that attempts to participate in screening should include considerations for … responding to student needs once identified" (Eklund et al. 2018, p. 41). How might this be done? A logical method would be to offer several levels of social-emotional supports and interventions. This is analogous to the Response to Intervention (RTI) approach to academic skills, wherein needed services are calibrated to several intensity levels (e.g., see the edited volume by Jimerson, Burns & VanDerHeyden, 2015). When this notion concerns social-emotional needs, it is sometimes referred to as Positive Behavior Interventions and Supports, PBIS (or alternatively School Wide Positive Behavior Interventions and Supports, SWPBIS; Kaurudar, 2018). PBIS often comprises three tiers. Tier 1 involves prevention-oriented procedures; in the PBIS world, supports at this level are made available to all students regardless of their screening results. Tier 2 PBIS services are envisioned to be more targeted, used only for students who failed a screening or otherwise evidenced risk (e.g., students with frequent discipline referrals). At Tier 3, PBIS efforts concern ameliorating more obvious social-emotional problems, and Tier 3 interventions are especially tailored to individual students (Sugai & Horner, 2008). But PBIS is not just about more intensity in the face of more needs. A hallmark of PBIS is the provision of evidence-based interventions (Weist, Stevens et al., 2018).

Let's get a bit more specific. Weist, Eber and colleagues (2018) advocate for tiered services for students with internalizing problems (including exposure to trauma), as just one example. Internalizing problems are especially relevant for a couple of reasons. One reason is that teachers may overlook such students, directing their concern instead toward classmates whose disruptiveness or noncompliance proves harder to miss (McIntosh, Ty & Miller, 2014). Another is that children with internalizing problems have treatment needs distinct from those of counterparts with externalizing problems (Weist, Eber et al., 2018). Universal, Tier 1, supports may benefit all students, including those with internalizing problems. But when problems persist, grow worse, or come with additional indicators of risk, Tier 2 services might be needed next. Tier 2 services might include evidence-based interventions like the Check-In/Check-Out procedure (Maggin, Zurheide, Pickett & Baillie, 2015) or the use of Daily Progress Reports (DPRs; Crone, Hawken & Horner, 2010). These omnibus Tier 2 interventions, however, would need adaptation for this type of student. For instance, "a student struggling with anxiety issues could have expectations on the DPR of 'stay calm and use deep breathing skills when there are classroom disruptions'" (Weist, Eber et al., 2018, p. 178). Also at Tier 2, a student with a history of trauma might receive a program like Support for Students Exposed to Trauma (Jaycox, Langley & Dean, 2009). At Tier 3, manualized programs like Coping Cat (Kendall & Hedtke, 2006) or Cognitive Behavioral Intervention for Trauma in Schools (Jaycox et al., 2009) might be used. Lower-tier services are sometimes provided by teachers, whereas those at higher levels might warrant direct provision by counselors, social workers, therapists, or school psychologists.

Realistically, however, in some (many) schools staffing limitations mean that there is neither the time nor the expertise to deliver all needed mental health services. Consequently, over the years more extensive and better organized school mental health services have been advocated. Especially noteworthy are interconnected systems in which employees from both schools and clinics work together on the same campus. Emblematic of an interconnected mental health system is the Southeastern School Behavioral Health Community in South Carolina (Weist, Stevens et al., 2018). A website associated with this program (www.schoolbehavioralhealth.org) indicates the possibility of implementing a tiered system. Such a system involves staff members ranging from teachers to counselors to school psychologists employed by schools working hand in glove with mental health professionals employed by outside clinics and agencies. When such systems work well, they assure that all students receive some support, while students with greater needs receive more services, including evidence-based interventions delivered by skilled providers. Again, their foundation is PBIS, and PBIS itself has been shown both to be effective on multiple indicators (e.g., reduced discipline referrals, enhanced academics) and implementable (see Weist, Stevens et al., 2018). Site of delivery is also a factor, as schools are often the prime venue for delivering children's mental health services today (Bruhn, Woods-Groves & Huddle, 2014). What's more, when mental health services are delivered on a school campus, rather than at an off-campus clinic, transportation problems are circumvented and missed appointments may be minimized or eliminated. Similarly, communication problems can be reduced when therapists and behavioral consultants work on the same campus as classroom teachers.

Concerning Tier 2, the Conners CBRS (when used as a screen) or the BIMAS-2 might facilitate student-to-service matches. An elevation on the Disruptive Behavior Disorders Indicator of the Conners CBRS, for example, might direct a student toward one array of Tier 2 services that would be ill-suited to a counterpart with an elevation on the Anxiety Disorders Indicator. In fact, it can also be argued that the tier selected could be guided by the very same screening instruments that flag students. Recall that on many instruments T-scores between 60 and 69 suggest a student "at risk," whereas T-scores of 70 and above often suggest a "clinically significant" problem. Accordingly, provided all other assessment information points to the same severity level, the former would seem to indicate Tier 2 services and the latter Tier 3 services. Weist, Eber and colleagues (2018) offer a somewhat similar rationale for matching students to tiers of supports. This would represent the simplest, least expensive way to locate students for Tier 2 services and marry them to the proper type of service.
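Expressed as a minimal sketch (hypothetical code, and subject to the caveat that all other assessment information must corroborate the severity level), the tier-matching logic just described reduces to a pair of threshold checks:

    def suggest_tier(t_score):
        """Provisional support tier from a screening T-score, using the
        conventional bands described above. A real decision would also
        weigh the full set of assessment findings, not the score alone."""
        if t_score >= 70:    # "clinically significant" range
            return 3
        if t_score >= 60:    # "at risk" range
            return 2
        return 1             # universal (Tier 1) supports only

    # Three hypothetical screening results
    for t in (55, 64, 73):
        print(f"T-score {t} -> suggested Tier {suggest_tier(t)}")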

But what about Tier 3? It seems logical that for Tier 2 vs. Tier 3 decisions all sources of information (i.e., findings from comprehensive social-emotional assessments) should be used. As was argued in Chapter 1, the more important the assessment-related decision, the more comprehensive the assessment should be (recall Angela and the consideration of a social skills group in Chapter 1 as exemplifying a relatively low-stakes assessment). Tier 1 and Tier 2 screening questions, in the aggregate, might be considered relatively low stakes. Tier 3 decisions, where labels and formal special service designations sometimes come into play, are generally no longer low stakes. In the hope of expanding their roles and truly helping students, Kimani and the DM3 decided to place themselves in the loop so as to conduct in-depth social-emotional assessment at the cusp of any Tier 3 service. They sought to assess all Tier 3 candidates, helping match them to needed services, and potentially to deliver some counseling or psychotherapeutic interventions themselves (you will see more about this possibility in the next section as Flora's efforts are reviewed).

Some of the appeal of screening (besides enhanced ED referrals) concerns role expansion. By offering intervention services, starting with those of low intensity, provided free of potentially stigmatizing labels, and accentuating supports, schools might head off students' mental health problems before they become severe or chronic. Severe or chronic problems might also be addressed at suitable levels of intensity. This would position school psychologists as mental health providers, rather than mere supporters of special education (see Kilgus, Reinke & Jimerson, 2015). Procedures like those described above may also help assure that the estimated one-quarter of students who express a social-emotional problem each year (Perou et al., 2013) do not go untreated. It's easy to envision why the DM3 might consider screening coupled with PBIS to be a bona fide realization of their hope to do more.

For those aspiring to learn more about mental health screening as a prelude to a multi-tiered system of school-based supports, like members of the DM3, there is a helpful implementation guide at https://smhcollaborative.org/universalscreening/ (retrieved November 8, 2020). Its title proves to be quite descriptive of its contents: Best Practices in Universal Screening for Social, Emotional, and Behavior Outcomes (Romer et al. 2020). The downloadable document is replete with checklists and forms able to assist in practice.

Counseling and Psychotherapy Informed by Assessment

Flora is every bit as intent as her DM3 colleagues on genuinely helping students. But she sensed that her already sizeable counseling practice suffered from two Achilles' heels: (1.) the singular use of client-centered techniques in her individual counseling cases, and (2.) an abundance of students enrolled in her social skills training groups. On reflection, she concluded that both her individual and group work were uncoupled from any rational assessment process. She wondered whether counseling was indicated for all of her cases and whether her technique matched the needs of the students actually requiring services. The same was true of the social skills group she inherited. Furthermore, even when she was not the direct treatment provider herself, she wanted to know what treatments she ought to recommend for others to implement. In other words, parents sometimes request suggestions for outside therapists, or mention various mental health treatment options under consideration, often at the time of IEP meetings. Flora's mission for the DM3 concerns assessment-treatment linkage. In essence, it encompasses the following question: "How do my assessment findings intelligently guide recommendations for mental health treatment?"

To help answer this critical question, Flora was directed by a senior colleague toward a helpful website provided by the American Psychological Association's Division 53 (Society of Clinical Child and Adolescent Psychology). It is entitled "Evidence-based Mental Health Treatment for Children and Adolescents." Here is the link: https://effectivechildtherapy.org (retrieved November 10, 2020). The site indicates, "…EBTs [evidence-based treatments] are treatments that are based on scientific evidence. Research studies have shown that some treatments work better than others for specific problems…" There are similar books prepared by researchers (notably Weisz & Kazdin's 2017 comparably entitled volume Evidence-based Psychotherapies for Children and Adolescents). The meaning is as transparent as it is essential: scientific knowledge, not intuition, determines what is likely to work. And effectiveness, at least in part, depends on the nature of the circumscribed problem treated. When it comes to psychological interventions, one size does not fit all. Parenthetically, the same is true for psychotropic medications; methylphenidate (e.g., Ritalin®) is routinely used for ADHD, not for bipolar disorder; SSRIs (e.g., Prozac®) are routinely used for depression, not for conduct disorders. Regarding psychological interventions, Table 15.3 provides a glimpse of various treatments and their associated levels of efficacy. For example, straightforward behavioral interventions are well suited to ADHD, cognitive behavior therapy (CBT) to anxiety, and family-oriented approaches to anorexia. Client-centered therapy, the workhorse of Flora's counseling, appears only in the column associated with anxiety and, disappointingly, at the level "tested and does not work." The contents of Table 15.3, and by inference similar information, hold obvious implications for school psychologists like Flora who want to help students with mental health concerns. Mechanically enrolling students in generic interventions (such as those that simply happen to be familiar to the provider) may prove futile. School psychologists are too busy to waste time on things that do not work.

Table 15.3 Empirically Supported Psychological Treatments

Level One, "works well"
  • ADHD: Behavioral parent training; Behavioral classroom training; Behavioral peer intervention
  • Anxiety: Cognitive behavioral therapy (CBT)*; Exposure; CBT with parent education; CBT with medication
  • Anorexia nervosa: Family therapy (behavior)

Level Two, "works"
  • ADHD: Combined training interventions
  • Anxiety: Family psychoeducation*; Cultural storytelling; Stress inoculation
  • Anorexia nervosa: Family therapy (systemic); Insight-oriented psychotherapy

Level Three, "might work"
  • ADHD: Neurofeedback training
  • Anxiety: Contingency management; Group therapy
  • Anorexia nervosa: none

Level Four, "unknown/untested"
  • ADHD: Cognitive training; Biofeedback*
  • Anxiety: Play therapy; Psychodynamic; Social skills
  • Anorexia nervosa: CBT; Cognitive training

Level Five, "tested and does not work"
  • ADHD: Social skills training
  • Anxiety: Attachment therapy*; Client-centered therapy; Eye movement desensitization & reprocessing
  • Anorexia nervosa: none

*Partial list only. Source: American Psychological Association, Evidence-based Mental Health Treatment for Children and Adolescents

But what method is used to match treatment and student characteristics? Although the "Evidence-based Mental Health Treatment for Children and Adolescents" website is organized several ways, one way hinges on diagnoses, like those found in DSM-5. This is evident in Table 15.4. The message is that when it comes to treatment planning, diagnoses often matter. As reflected in DSM-5, diagnoses denote fairly homogeneous conditions: children with common symptoms, underlying causes, natural histories, and (to a degree) proven treatments. This is the essence of the nomothetic approach. Accordingly, at least in part, psychological practices should be based on what has been learned about the nature of childhood psychopathology, including how to diagnose and treat it, via application of the scientific method. When knowledge is acquired from the systematic study of many, many children, the resulting knowledge can logically be applied as a starting place to help an individual child. For these reasons, one cornerstone of assessment-treatment matching is finding a DSM-5 diagnosis (or denoting a simpler mental health condition, like one from the list you found in Chapter 1). Of course, idiographic considerations (e.g., students' strengths, unique classroom circumstances) will also warrant attention, especially at Tier 3. You will see more about this in the upcoming case of Lex. The nomothetic starting point can be sketched in a few lines of code, as shown below.
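Here is a minimal sketch (hypothetical code, not a clinical tool) of diagnosis-driven matching: given a diagnosis, retrieve the Level One ("works well") treatments transcribed from Table 15.3 above. Idiographic considerations would then refine the choice.

    # Level One ("works well") entries transcribed from Table 15.3
    LEVEL_ONE_TREATMENTS = {
        "ADHD": ["Behavioral parent training",
                 "Behavioral classroom training",
                 "Behavioral peer intervention"],
        "Anxiety": ["Cognitive behavioral therapy (CBT)",
                    "Exposure",
                    "CBT with parent education",
                    "CBT with medication"],
        "Anorexia nervosa": ["Family therapy (behavior)"],
    }

    def first_line_options(diagnosis):
        """Return empirically supported starting points for a diagnosis,
        or an empty list when none is catalogued at Level One."""
        return LEVEL_ONE_TREATMENTS.get(diagnosis, [])

    print(first_line_options("Anxiety"))
    # ['Cognitive behavioral therapy (CBT)', 'Exposure',
    #  'CBT with parent education', 'CBT with medication']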

Considering the Medical Model Criticism

A common criticism at this point (i.e., as soon as DSM-5 is mentioned) is that school psychologists should avoid use of "the medical model." Sadly, accusing behavioral scientists of embracing the medical model often brings critical thinking to a halt. But you are encouraged to engage the Reflective System and consider the following (rather than relying on immediate emotional reactions, intuition, or simply what others might have said about DSM in schools; i.e., counting on the Automatic System). It is true that we behavioral scientists possess our own valuable procedures. We might study antecedents and consequences to improve overt behavior without resorting to clinical diagnoses (reflecting an idiographic approach). Alternatively, we might understand the unique feelings and perceptions of a student and use that information to capitalize on his strengths to formulate a one-of-a-kind intervention (also reflecting an idiographic approach). These are very often good things to do. But they are hardly the only things to do. An alternative, or complementary, approach is to: (a.) understand (nomothetically) a condition, (b.) determine whether that problem is likely to self-correct, and (c.) if it is not, investigate clearly specified treatment options and determine whether they should be implemented based on potential payoff, risk of complications, stigma, and cost. It's true that many of us as mental health care providers use logic like this; we do so because it is the rational, scientific, empirical, nomothetic method. But because medicine also happens to use this approach (and this is the key point), the method itself cannot logically be said to be synonymous with practicing like a physician. The approach is merely synonymous with something scientific. After all, psychologists are not accused of practicing via an "astronomical model" just because they use a normal distribution to characterize uncertainty (like Carl Friedrich Gauss, who used it to understand his telescopic observations of celestial bodies). Likewise, psychologists are not accused of practicing with a "meteorological model" just because they deal with distributions of data points and characterize those distributions with measures of central tendency (e.g., mean) and dispersion (e.g., standard deviation), much as meteorologists do when keeping track of rainfall and temperatures.

Let’s return to the practical. Diagnoses can do more than just indicate fixed intervention programs. They can help reveal why classroom problems appear and what might be devised specifically to help ameliorate them (even if that process is imperfect). Consider Lex, a third grader who was referred with poor work completion, tumbling grades and mounting tension with his teacher. His teacher had the following to say. “Lex can be completely exasperating when he becomes stubborn and non-compliant. I have to tell him the same thing over and over. It looks like he is listening but later he goes back and does the very same thing. His style is one that I would call passive resistance. Just more or less ignore but never really confront me. He hides his schoolwork and refuses to start when instructed to do so. When I ask for an explanation, that’s when I get open defiance. He won’t tell me. It’s not for lack of language. I know he can be very verbal and even articulate. I’ve heard him explain things to other boys on the playground. I’ve tried to improve his behavior. I’ve turned his light from green to yellow to red several times. No effect.”

The school psychologist first hypothesized that this teacher lacked classroom management skills, or perhaps that Lex had an enduring problem with compliance and cooperation, something akin to oppositional defiant disorder (ODD). However, once she worked through the various stages of the HR process, she reached a much different conclusion. Background information contraindicated a personal history of non-compliance. Instead, she discovered a personal and family history of anxiety, worry, and perfectionism. Rating scales similarly pointed toward internalizing rather than externalizing characteristics. Classroom observations depicted a quiet, somewhat withdrawn youngster. Most telling, however, was a clinical interview. During it, Lex revealed obsessive worry about making mistakes, a strong desire to repeatedly erase and correct his seatwork, plus hoarding of completed assignments for fear they might be found imperfect if turned in. Critically, the repeated corrections and reluctance to turn in papers related to what he described as a need to have his work products "just right." Lex described the need to have things executed to his own exact liking as so powerful that his teacher's requests could simply not be honored: "I know she sometimes gets mad at me, but I can't help myself." Consulting her DSM-5, Lex's school psychologist concluded that he warranted no diagnosis of ODD, but instead either satisfied or nearly satisfied criteria for obsessive-compulsive disorder (OCD).

Understanding associated with that diagnosis, in turn, placed Lex's classroom behavior in context. Deliberate slowness, indecisiveness, and hoarding schoolwork are likely manifestations of OCD, not ODD. Although Lex's behavior appeared to reflect intentional defiance, the root of this behavior was something else. In light of the insight derived from Lex's diagnosis, intelligent (assessment-informed) planning for him could then occur. First, straightforward classroom management strategies aimed at encouraging compliance and improving rule following would have been misguided. Second, his classroom teacher would seem to benefit from understanding the distinction between willful refusal and internally driven obsessions and compulsions. It would seem crucial to process assessment findings with her, perhaps even using a simplified variant of the bottom-up approach seen in Chapter 13. Third, classroom behavior strategies suitable for students with OCD-spectrum problems would be suggested. These might include, for example, use of a timer (shortening a ritual), picking up work regardless of its apparent completion (preventing a ritual), and repeatedly reminding Lex that perfectionism is neither expected nor desired (cognitive restructuring). Fourth, counseling/psychotherapy using CBT, an evidence-based treatment, would be advised (see Joyce-Beaulieu & Sulkowski, 2015; Weisz & Kazdin, 2017). If CBT were implemented, an on-campus provider might be involved. Alternatively, and depending on the wishes of Lex's family, an off-campus specialized provider might be used. Fifth, sharing a copy of Lex's report with his primary care physician would make sense. The physician may wish to discuss with parents the possibility of psychopharmacology, a treatment option documented to be popular in clinic settings and for which there is evidence of efficacy (Smith, McBride & Storch, 2017). Sixth, providing parents with literature on OCD seems both humane and potentially therapeutic. Although unlikely to solve the problem alone, it can help (Cowan & Swearer, 2004).

Again, it is critical to recognize a role for both nomothetic and idiographic perspectives as assessment and treatment begin to coalesce. Regarding the former, group-level knowledge about OCD informs the understanding of Lex individually (i.e., Lex acts like many other students with OCD). Regarding the latter, an interview with Lex reveals that he harbors personal feelings about being wrong and losing face if he submits imperfect seatwork. Although these considerations are shaded by his diagnosis of OCD, some aspects of his perceptions, feelings, and reactions belong to Lex uniquely. A thorough evaluation (one combining both nomothetic and idiographic approaches) sets the stage for intelligent intervention that is arguably impossible without a proper assessment.

Don’t Forget Cultural Considerations in Planning Treatment

It would take someone skilled in implementing mental health treatments to truly help Lex. It would also require someone with circumspection and knowledge of the systems that operate in schools. But there is even more to consider. Lex is a child of European-American background. This may be fortunate for Lex's school psychologist because many EBTs were validated on just such youngsters. But what if he were of Hispanic or African-American background? Extra caution, or additional digging for details about interventions, would then be needed (Huey & Polo, 2018). Unfortunately, relatively little treatment efficacy research has used predominantly ethnic minority youth as participants, verified favorable outcomes for them as subgroups, or provided disaggregated statistical analyses that can readily inform practitioners (Pina, Polo & Huey, 2019). In addition, even when school professionals are committed to providing mental health services, the diverse skills and extensive staffing profiles sometimes found in clinics can be hard to replicate on a school campus. Thus, the manualized nature of EBTs may need revision to fit school usage (Coleman, 2018). In other words, it might be ill advised to simply find and then apply wholesale an EBT from Weisz and Kazdin (2017), or to do the same with one from the (Division 53) Evidence-based Mental Health Treatment for Children and Adolescents website. Instead, school psychologists might be better off selecting delimited elements of EBTs (i.e., modules). Modules could then be used as circumscribed elements, depending on what local resources and students' characteristics warrant. This is a complex topic that is often subsumed under Tier 2 or Tier 3 interventions in multi-tiered systems of school mental health services (Weist, Eber et al., 2018).

What Flora learned about treatment options and matching them to assessment findings left her feeling empowered. Not surprisingly, she concluded that any single approach to individual counseling, especially one boasting limited treatment effectiveness for most of her caseload (i.e., client-centered therapy), would no longer suffice. Still, disquieting feelings persisted about a professional schedule that involved so much individual counseling and group-level social skills training. Happily, however, a senior colleague directed her toward a comprehensive source whose target audience is school psychologists. Plotts and Lasser's (2020) School Psychologist as Counselor argues for role expansion that envisions school psychologists as direct mental health providers. Equally important, the resource, which is published by NASP, covers topics such as informed consent, ethics associated with providing counseling in schools, development of counseling goals and objectives, how to organize counseling sessions, and tracking progress, as well as information on terminating counseling services. Although she is a recent graduate, Flora was never fully trained on these topics, and lack of formal preparation might similarly prove a barrier to DM3 members (and colleagues) poised to expand their roles. Flora concluded that counseling is a legitimate school psychology practice but one that should be blended with PBIS. In speaking with Emily, Flora realized that some of her mental health services could be written into students' IEPs as a related service (see Chapter 10). Included might be oversight as well as direct treatment. She could also see that if many students needed intervention in groups, then group-level interventions might become part of Tier 1 or Tier 2 services, as indicated by Kimani.

Summary

There are several things that school psychologists can do to expand their roles into the mental health sphere. One is to become active any time a student expresses a discernible social-emotional problem that ought to be addressed in an IEP (or Section 504 accommodation plan). School psychologists could not only help advise about plans but also monitor their success. In this regard, several broadband and narrowband scales can be used recurrently and coupled with a statistical technique, the Reliable Change Index (RCI); a worked example appears below. Another thing that school psychologists can do is use psychometric tools to screen all students for social-emotional problems. Universal screening might aid in the proper referral (and ultimate detection) of students with ED. Perhaps even more important, screening results can sometimes be harnessed to a much broader system of social-emotional services; screening can help indicate which students might benefit from which MTSS level. Finally, students can be helped when their assessment results are envisioned as more than a tool for special education gatekeeping: when findings help prescribe interventions. This is true because not all types of emotional problems require the same type of intervention. School psychologists can benefit students by accessing off-the-shelf scientific information about disorder-intervention links (i.e., a nomothetic approach). Alternatively, they can sometimes match interventions to particular emotional, contextual, and behavioral considerations to formulate a unique plan (i.e., an idiographic approach).
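For readers who want the arithmetic behind the Reliable Change Index, here is a minimal sketch using the Jacobson and Truax (1991) formulation; the scores, standard deviation, and reliability below are hypothetical, chosen only for illustration.

    import math

    def reliable_change_index(x1, x2, sd, rxx):
        """Jacobson & Truax (1991) Reliable Change Index.

        x1, x2 : scores at time 1 and time 2 (e.g., rating-scale T-scores)
        sd     : the scale's normative standard deviation
        rxx    : the scale's reliability coefficient
        """
        sem = sd * math.sqrt(1 - rxx)        # standard error of measurement
        s_diff = math.sqrt(2 * sem ** 2)     # SE of the difference score
        return (x2 - x1) / s_diff

    # Hypothetical progress monitoring: T-scores (SD = 10), reliability .90
    rci = reliable_change_index(x1=72, x2=63, sd=10, rxx=0.90)
    print(f"RCI = {rci:.2f}")  # -2.01; |RCI| > 1.96 suggests reliable change

In this made-up case, a drop from a T-score of 72 to 63 yields an RCI of about -2.01, exceeding the conventional 1.96 threshold, so the improvement would be considered statistically reliable rather than measurement noise.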

 
