Fidelity scales ensure that evidence-based services deliver high-quality programs and outcomes, and that they avoid drift from the model. They also provide funders with evidence that they are getting the program they paid for. Experts at IEPA11 presented different approaches to the implementation of first episode psychosis (FEP) fidelity scales in four countries — Denmark, the United States, Canada, and Australia — together with their results, the challenges encountered, and proposed next steps.
Fidelity scales demonstrate and ensure the quality of first-episode psychosis programs
Fidelity scales measure access to and quality of health services.1 They:
- assess structure (human capital, buildings and other resources) and processes (treatments)
- provide a list of performance measures by which a program is judged2
The Danish SEI fidelity scale
Denmark's nationwide, evidence-based Specialized Early Intervention (SEI) is a service for young adults experiencing FEP. The SEI fidelity scale that has been developed and tested comprises 18 critical components, explained Merete Nordentoft, Professor of Psychiatry, Copenhagen, Denmark.
The following five components are mandatory:
- independent management
- multidisciplinary team
- case-manager ratio no more than 12 to 1 on average
- assertive outreach including home visits
- systematic engagement with the family and relatives
Assessment of the 18 components is based on interviews with team leaders, two staff members, and two patients, and on observation of a team meeting.
The fidelity scale has highlighted an urgent need for better training and supervision
Overall, 96% (n=22) of the SEI teams participated, and 59% (n=13) fulfilled the criteria for satisfactory program fidelity.
Mapping of the fidelity scale has highlighted:
- difficulties in maintaining a low caseload
- waiting lists of generally 1 month
- an urgent need for better training and supervision at a national level
The next steps are to find out how the fidelity scale relates to outcomes and to carry out yearly assessments, said Professor Nordentoft. She added that she would also like the treatment package to cover psychosis broadly, not just incident schizophrenia.
The US OnTrackNY fidelity scale
OnTrackNY, New York’s coordinated specialty care (CSC) program (https://progress.im/en/content/pioneering-coordinated-specialty-care-programs-first-episode-psychosis), has expanded rapidly to 22 sites, said Ilana Nossel, Assistant Professor of Clinical Psychiatry, Columbia University, NY.
Fidelity assessment is expected to play an increasing role in funding decisions
The OnTrackNY fidelity scale comprises 25 domains with 91 items, and each domain contains one critical item that must be satisfied for the domain to meet fidelity:
- 47 items evaluate client- and program-level data, address team functioning and outcomes over the past year, and are collected quarterly
- 44 items are site visit evaluations comprising interviews with clients and family members, observation of team meetings, and review of client charts and programs
Collection of the client and program-level data has proved useful for assessing fidelity and supporting ongoing quality improvement, said Dr Nossel. Site visits are a useful adjunct, particularly for domains related to care processes such as shared decision making.
Next steps include determining which domains are most closely linked to outcomes
Fidelity assessment is expected to play an increasing role in funding decisions, she added. The next steps are to:
- improve efficiency
- include client self-report data
- focus technical assistance on weaknesses
- use data to inform expectations
- determine which domains are most closely linked to outcomes
The Australian EMIT
The 2010 Australian Federal Budget committed funding to an FEP service system based on the Early Psychosis Prevention and Intervention Centres (EPPIC) model, said Eóin Killackey, Professor of Psychiatry, Melbourne, Australia.
The EPPIC Model Integrity Tool (EMIT) assesses the fidelity of hYEPP (headspace Youth Early Psychosis Prevention) services to the EPPIC model.
Challenges include definitions of terms
Eighty items within the 16 core components of EPPIC are assessed, and scoring is based on a consensus rating of two independent raters for each EMIT item. Evaluation involves on-site interviews with staff and young people and review of documents, policies and data around client flow and service.
Results for six sites have revealed a number of challenges including:
- defining minimum standards
- agreeing on interpretation of standards and definitions of terms
- coordinating assessments across the country
- ensuring feedback is useful to services and aids service development
- ensuring the process is seen as constructive and collaborative rather than punitive
- over-reliance on fidelity data in system evaluation
Some areas of the fidelity model need further development, said Professor Killackey, but the services have found the feedback useful and have become deeply invested in improving, especially in terms of youth involvement.
The Canadian FEPS-FS
The Early Psychosis Intervention Ontario Network (EPION) comprises 50 EPI programs. New program standards were released in 2011, but adherence to them is unknown, explained Chiachen Cheng, Psychiatrist and Physician Researcher, Ontario, Canada. The EPI programs are developed locally and have different models — no two are the same.
In 2016, EPION piloted the 31-item First Episode Psychosis Fidelity Scale (FEPS-FS).3 The fidelity of nine programs was assessed during 2-day site visits by three-person assessor teams — two volunteer EPI staff and an implementation specialist. The assessors were supported by training, tailored data collection tools, and post-visit rating consensus meetings with an expert. The reports included quality improvement suggestions, fidelity ratings, and related explanations.
Now need to understand how to use the results to implement improvements
The value, feasibility, and quality of the fidelity assessment process were evaluated from qualitative data collected from the assessors and sites.
Overall feedback was positive, but the assessors experienced a steep learning curve, and the time commitment was greater than expected. One-third of the volunteer assessors left the process, which, together with variability in the organization of client charts, led to rating challenges.
Dr Cheng concluded that there is now a need to understand how to use the results to implement improvements, to develop benchmarks, and to discuss the feedback with funders.