Shoulda, coulda, woulda: What listening to Joe Durlak might have done

by Jean Rhodes

In 1979, a young psychologist named Joe Durlak published a controversial study in Psychological Bulletin that sent ripples through the helping professions. Durlak sought to combine all published studies that had compared the outcomes of experienced psychologists, psychiatrists, and social workers with those of paraprofessionals (i.e., nonexpert, minimally trained community volunteers and helpers). His analysis of 42 evaluations led to a provocative conclusion: almost across the board, paraprofessionals were at least as effective as trained professionals. In most studies, paraprofessionals achieved outcomes comparable to those of trained mental health professionals, and in 12 studies they were actually superior. In only one study were professionals significantly more effective than paraprofessionals in promoting positive mental health outcomes. As Durlak concluded, “professionals do not possess demonstrably superior therapeutic skills, compared with paraprofessionals. Moreover, professional mental health education, training, and experience are not necessary prerequisites for an effective helping person” (Durlak, 1979, p. 6). Such data challenged mental health professionals to look more closely at the nature and efficacy of mental health practices.

Over the next five years, researchers using more sophisticated meta-analytic procedures replicated these promising trends, even after controlling for the difficulty of the patients with whom professionals were working: “The average person who received help from a paraprofessional was better off at the end of therapy than 63% of persons who received help from professionals” (1984, p. 536). Similar studies have continued to demonstrate paraprofessionals’ effectiveness in delivering preventive interventions (Conley, 2016). These studies suggest that, under the right circumstances, mentors and other caring adults can effectively support youth who lack access to trained professionals.

But there is a critical caveat: paraprofessionals with more experience showed the strongest effects relative to professionals. Moreover, the most effective paraprofessionals in Durlak’s study were those whose efforts focused on specific target problems (e.g., depression, healthy behaviors) rather than on more general, broad outcomes. For instance, Durlak cites a study by Karlsruher (1976), who found that unsupervised college students were ineffective in helping maladapting elementary school children, whereas carefully supervised students achieved results equal to those of trained professionals. Many of the paraprofessionals in Durlak’s study had received 15 or more hours of training. As Durlak concludes, “Judicious selection, training, and supervision might well account for paraprofessional effectiveness in comparative studies.”

Durlak also made a prescient observation: “Paraprofessional effectiveness in some studies may be due to the development of carefully standardized and systematic treatment programs…In these programs, treatment has consisted of a programmed series of activities. Presumably, the more intervention procedures that can be clearly described and sequentially ordered in a helping program, the easier it will be for less trained personnel to administer them successfully. Paraprofessionals may feel more comfortable and hold higher expectations than professionals when using standardized clinical procedures, and these factors could contribute to paraprofessionals’ clinical effectiveness.” Paraprofessionals’ commonsense “real-world” solutions may have been particularly appealing (Baker & Neimeyer, 2003), but their clinical success may be most closely related to professionals’ abilities to define, order, and structure effective sequences of helping activities when training or supervising paraprofessionals. In other words, in Durlak’s study, the paraprofessionals may have been outshining the professionals not because they were inherently more empathic, but because they were more clearly defining and structuring their helping activities, at least relative to many of the emerging treatments of that time.

Nevertheless, the trope of the healing power of a close mentor relationship, guided mostly by intuition and kindness, continues to shape the views of most youth mentoring researchers and practitioners. Most scholars succumb to the story of “enduring emotional attachment” as the key “active ingredient” in mentoring (Li & Julian, 2012). They argue that interventions often produce weak outcomes because they focus on “inactive” ingredients, such as mentor incentives and training curricula, that do not promote developmental relationships.

Despite data to the contrary, this misplaced emphasis on the friendship model alone is reinforced in most mentoring organizations. There’s no shortage of rigorous meta-analyses of youth mentoring programs showing small overall effects, but these studies are no match for the emotional appeal of a compelling anecdote or a well-argued piece that confirms our biases. These messengers mean well; the individuals, programs, and organizations that share overly encouraging verdicts about mentoring on their websites and promotional materials believe deeply in the power of their youth programs and rarely have the statistical expertise to fully scrutinize or qualify available data. Presenting this idealized representation of mentoring relationships is also likely driven by the need to attract donors. Indeed, programs often demonstrate their success to funders not by providing decks of slides with mixed evaluation results but by showcasing successful matches and the heartwarming stories they represent. Even when claims are later qualified, encouraging numbers have incredible staying power, and the urge to cherry-pick them is almost irresistible in a competitive funding landscape.

To compound this problem, as the field holds fast to its preconceptions, we easily find “evidence” that supports the viewpoint that “mentoring works” while ignoring counterfactuals. To illustrate this point, Tavris and Aronson have described the “Problem of the Benevolent Dolphin.” As they note, every once in a while a news story appears about a shipwrecked sailor who, on the verge of drowning, is nudged to safety by a dolphin (most recently, a 19-year-old man described how a dolphin or sea lion kept him afloat long enough to be rescued by the Coast Guard after a suicide attempt off the Golden Gate Bridge in San Francisco). As Tavris and Aronson explain: “It is tempting to conclude that dolphins must really like human beings, enough to save us from drowning. But wait – are dolphins aware that humans don’t swim as well as they do? Are they actually intending to be helpful? To answer that question, we would need to know how many shipwrecked sailors have been gently nudged further out to sea by dolphins, there to drown and never be heard from again. We don’t know about those cases because the swimmers don’t live to tell us about their evil-dolphin experiences. If we had that information, we might conclude that dolphins are neither benevolent nor evil; they are just being playful.” The authors then turn to psychotherapists, who, in the absence of rigorous experimental studies, can easily summon up “evidence” that their clients are improving and that their approaches are working.

It is tempting to consider where the field of mentoring would be now had it aligned with targeted preventive interventions and taken a deliberate approach to training and supervising paraprofessional mentors. Alas, ideological and professional drivers pushed the pendulum of mentoring away from targeted approaches that deploy well-trained paraprofessionals who follow evidence-based protocols with fidelity (Durlak’s recommendation) toward the unspecified, often perfunctory, and only modestly effective formal mentoring relationships we have today. In the meantime, prevention science and the helping professions have become increasingly disciplined and effective. Where would mentoring be today had its allies demanded the rigor and discipline suggested by Joe Durlak more than 40 years ago?

Posted in: Editors Blog

Comments on "Shoulda, coulda, woulda: What listening to Joe Durlak might have done"


  1. Tim Cavell says:

    A manifesto for our field.

  2. Liz Lennon says:

    Hi Jean
    Thanks for an excellent article. Very thought provoking. I’ve just started working within a team that’s piloting a mentor 2 work program that matches young unemployed adults with older employed and networked adults.

    It’s a 6-month program and I’ve been developing a Learning and Support Pathways process in the last few weeks. The L&S pathways process will involve what I’m calling IAAAM [Interact: Assess: Aspire: Act: Muse] that has a range of tools I’m creating to aid in the development of a career map plan.

    I’m creating a range of reflection/musing points for all key stakeholders and as we’re federally funded there’s a whole rake of external evaluators.

    The young adults are at the centre of this whole process and I’ve used the MENTOR standards framework to help inform our good practice and development of policies and procedures. The L&S Pathways Framework is underpinned with adult learning principles and a strengths/appreciative enquiry value base.

    We’re in high start-up mode for a pilot to test all these tools and processes. I’m enjoying creating a values-based and good-practice framework and all the tools that will go with it to aid the young adults, mentors and the M2W team.

    Would you be interested in keeping in touch with me about our project? It’s funded for 2 years and as I mentioned, we’re only a few months in but making some interesting progress.

    Best regards
    Liz
