How to Measure Outcomes with Apricot Software: A Case Study [Webinar]
In this recorded webinar, Plummer Youth Promise, a Massachusetts-based nonprofit supporting youth with residential, foster care, and independent living programs, shares their journey with Social Solutions Apricot software.
Plummer Youth Promise started with Apricot on a positive track, but soon faced a number of limitations in system usability and reporting. Although Apricot was the right platform for their needs, it just wasn’t configured to work for their programs. Plummer Youth Promise was faced with a difficult decision: should we re-implement our Apricot system for better data, clearer outcomes, and improved workflow for case managers, or stick it out with the status quo by relying heavily on external reporting and manual data analysis?
In order to effectively track youth outcomes across multiple programs and within specific outcome and domain areas, they decided to re-implement Apricot. With a new Apricot system in place, Plummer Youth Promise has been able to reinforce their program’s intervention and outcome model through Apricot’s design while also supporting case managers in their day-to-day interactions with youth.
In the webinar video and slides below, you will learn:
- Why the design of your program model precedes the design of your Apricot database
- How to design a program model and get buy-in for it across the organization
- When and how to evaluate if it is time to re-implement Apricot
- How to translate an evaluation model into the requirements of your database
- The impacts of form and link design on reporting requirements
- Why a universal, cross-program form design can provide greater reporting insight
- How to reinforce program fidelity through Apricot design (e.g., workflow, data quality, and form design)
Webinar transcript (not including Q&A):
Welcome to our webinar, “Program Outcome Models in Apricot Software,” a case study on Plummer Youth Promise. We appreciate your interest in today’s topic. Thank you for joining us.
My name is Jeff Haguewood. I am an Apricot software consultant at Sidekick Solutions. Sidekick Solutions is an independent consulting firm and Apricot Certified Implementation Partner. We help new and existing users integrate Apricot with the way they work.
I am joined today by Sarah Morrill. Sarah is the Director of Outcomes and Evaluation at Plummer Youth Promise in Salem, MA. Sarah is a licensed social worker with 20 years in human-service related data analytics, outcomes measurement, and program design. Sarah joined the Plummer team after working as a consultant for Plummer leadership on the Plummer Intervention and Outcome Model, which is the focus of today’s webinar.
We put together this webinar to answer a single question.
How do you design a program and outcome model and then set up a data system that enables it?
Sarah and I are both excited for today’s webinar because Plummer’s recent program growth and adoption of Social Solutions Apricot software will highlight Plummer’s journey toward answering this question.
Our agenda today will cover…
- Part 1: Designing the program model
- Part 2: Assessing gaps in the initial data system’s design
- Part 3: System redesign for program model integration
Today’s presentation will be a mix of strategy and technical detail, and we will cover a lot of ground. Following this webinar, you will receive an email with a recording of today’s presentation and a copy of the slides. We will also open for questions at the end of the presentation and plan to stay beyond the hour mark to answer questions.
Let’s dive in. To start, I’ll hand it off to Sarah to give us some background on Plummer Youth Promise and explore the Plummer Intervention and Outcomes Model.
Thank you Jeff.
Plummer started as a “farm school of reform for boys” from a bequest in 1854 from philanthropist Caroline Plummer. Originally envisioned as a juvenile diversion program, Plummer was also an orphanage, serving many dozens of children in one building.
By 2005, over 150 years after our founding, we were down to 10 teenage boys at a time, all referred through child welfare.
And while we were considered to be a “successful” program, our leadership, staff, and board felt we could be doing more to support youth. We wanted better! We began a period of reflection that culminated with a new vision, mission, theory of change, and intervention and outcome model. We wanted to answer these key questions.
Today, we serve girls and boys, from infants to young adults, in residential, foster, and community-based programming and our name reflects our promise of “family for everyone.”
At Plummer we emphasize family through our focus on permanency. The best practice definition of permanency:
- ensures all those we serve an enduring family relationship that is safe and lifelong;
- offers the legal rights and social status of full family membership;
- provides for physical, emotional, social, cognitive, and spiritual well-being; and
- assures lifelong connections to birth and extended family, siblings and other significant adults, family history and traditions, race and ethnic heritage, culture, religion, and language.
Our definition of family does not mean biological family only, which for us is an important distinction of our model.
While our journey has been long, this webinar will focus on our more recent history, specifically the development of our program and outcomes model and the alignment of our technology with our clinical practice.
The program and outcomes model came first. We believed then, and our experience reinforces now, that a strong model precedes a data system for tracking the model.
We started the design of our program model the way most organizations do. We started with our experience.
Based on our experience, we knew that families were the key to permanency for at-risk youth, but we had little data to support our anecdotes. We wanted to test our theory of change against the day-to-day practices followed by case workers. We needed an intentional and purposeful program model.
We applied a two-fold approach to designing our model.
First, we learned as much as we could about the existing outcomes for those we serve. This included sourcing information from the child welfare field and conducting a deep dive into all data sets we had from participants discharged in the previous three years. We parsed through external studies and internal data to build a complete view of our work.
This took time!
Research tasks were shared by active and volunteer board members and the Executive Director. All insights were brought to staff for review and discussion.
Second, we talked to nearly every staff member, board member, and key stakeholder (including youth and family).
The goal was the same. We wanted to learn. There were lots of interviews, conversations, and outreach.
Again, time consuming!
We asked these stakeholder groups to comment on the current assumptions of the program and what they thought was possible for those we serve. While feedback from external sources offered insight into the factors at play outside our organization, internal stakeholders identified where our team and culture deviated from where we planned to go.
Taken together, the feedback painted a picture that would align all groups on next steps. This process also encouraged participation from all levels.
We found that exploring assumptions and getting to the truth is not always easy, but necessary.
Following these two steps, we created our own template of what it would take to get the results we believed all youth deserve:
- to be prepared by having the skills and support to meet his/her/their physical, emotional, educational, and economic needs;
- to be connected to his/her/their community by having a safe place to live, a sense of belonging and a chance to positively contribute to the community; and
- to have permanency through a safe, emotionally secure parenting relationship with a lifelong, preferably legal, family.
These global goals of Preparedness, Community, and Permanency became our outcome areas.
Within these outcome areas we defined eight life domains. The eight domains would become essential indicators for youth progress in each of the three outcome areas.
The visualization of our model illustrates these domains, and you can see from the blue line holding it together that Permanency is the linchpin.
The design of our model evolved as we learned more about the importance of permanency. The current vision for Plummer focuses on permanency practice and is reflected in our tagline: “family for everyone”.
We now have a Permanency Practice Leadership division, provide consulting and training to others in this practice area, and have expanded our permanency core with programs that ensure youth in foster and group care have families they can count on, forever.
So if designing a program model is the start, how do you actually “get started?” There are four takeaways from our process.
#1 – Prioritize both external research and internal discovery
Research and discovery starts with a question.
What do we know about the problem we are trying to solve?
Our process was marked by extensive field research, which helped us crystallize concepts and ground our direction.
What does current best practice in the field tell you?
We value the results-based work done by others and actively sought to incorporate those lessons wherever it felt like a good fit.
In addition, we conducted internal discovery, both qualitative and quantitative. We combed through the data we did have, as scattered as it may have been, and engaged our front-line staff in active conversations to determine what we did and did not know about our clients and programs.
What do we know about what we already do, or what we aren’t doing?
The combination of external and internal research creates a “body of knowledge” as the foundation for designing a program and outcomes model.
#2 – Develop strong support
The organizational momentum required to get an initiative like this off the ground cannot be overstated. The process of designing a program model touches your organizational identity, its DNA. A firm and long-standing “way” of going about things can be difficult to unwind and then even more difficult to redesign and build back up.
Full leadership buy-in and deep stakeholder involvement, both internal and external, are required to launch this work. Organizational leadership must run this process in a fully inclusive way that includes front-line staff, as they are the ones who implement on the ground, day to day. The ecosystem of tools and supports must be in place to see this process through.
#3 – Merge evidence-based and performance-based principles
While our model was informed by evidence-based practices through research, the measurement of it is now performance-based as we test our theory of change.
Being performance-based means having the ability:
- to collect data to measure program outcomes and the indicators that lead to positive outcomes, and
- to demonstrate, over time, continuously improving outcomes via positive trends in performance.
We developed home-grown tools and processes that model what we experience day to day. They aren’t evidence-based yet, but we believe they address our work and we are eager to test them in the field. In the here and now, we focus heavily on measuring performance, learning, refining our tools and processes, and then measuring again. Whether our assumptions or model are right or wrong, we believe it is worth testing.
Performance-based principles can lead to innovation and to that end we believe we are on a bit of an entrepreneurial endeavor, with a goal to demonstrate our model. Whether formal evaluation is in your future plans or not, performance-based is an approach you can start right away.
#4 – Design a universal model and document heavily
This webinar will highlight a consistent theme around “universal.” We designed our model to be universal across the programs and services we offer.
Although the services we offer may vary, our approach toward Permanency, Preparedness, and Community, with permanency as the linchpin, stays the same. So in day-to-day practice, this means we streamline forms as much as possible. We now use one universal intake and discharge, shared treatment planning and service coordination, and a home-grown assessment we call the Matrix, which includes a 4-point scale to measure progress along a continuum for our eight domain areas. We also reimagined a universal daily progress note that engages all staff on all shifts to participate in youth outcomes.
While the universal model simplified standard operating procedures and focused our team around a core of principles, it didn’t decrease the need to document our processes. We heavily document all of our procedures and it has been a key to deploying the model across all of our programs.
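To make the Matrix idea concrete, here is a minimal sketch of how a 4-point rating across eight domains might roll up into the three outcome areas. The domain names, ratings, and the outcome-area mapping below are hypothetical illustrations, not Plummer’s actual Matrix.

```python
# Hypothetical sketch of a Matrix-style assessment: a 4-point rating for
# each of eight domains, rolled up by outcome area. All domain names and
# the outcome-area mapping are invented for illustration.
RATING_SCALE = (1, 2, 3, 4)  # progress along a continuum, low to high

assessment = {
    "housing": 3, "education": 2, "employment": 2, "health": 4,
    "relationships": 3, "community": 2, "identity": 3, "family": 4,
}
outcome_areas = {
    "Preparedness": ["education", "employment", "health"],
    "Community": ["housing", "community", "identity"],
    "Permanency": ["relationships", "family"],
}

def area_scores(assessment, outcome_areas):
    """Average the domain ratings within each outcome area."""
    return {
        area: sum(assessment[d] for d in domains) / len(domains)
        for area, domains in outcome_areas.items()
    }

print(area_scores(assessment, outcome_areas))
```

Repeating an assessment like this at each interval is what lets progress be charted along a continuum rather than as a single point-in-time snapshot.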
With our program model in place, it was time to implement the data system.
We selected Social Solutions Apricot software.
Our initial adoption of Apricot was prompted by the need to have a product that was easy to use, would move us from paper to electronic records and would not blow our budget.
We found it difficult to become more data-informed without a data system; paper forms and spreadsheets weren’t cutting it. The promise of Apricot was to manage data for both clinical practice and outcomes. Looking at both sides of this coin, it was a given that we wanted Apricot to organize our data and report on outcomes, but we also hoped the system would reinforce our clinical practice too. We wanted Apricot to help users “stay in the lane,” if you will, toward the methodology we developed for supporting youth toward permanency.
These were principles we held going in, but we did not have concrete examples on how this translated to Apricot or any data system for that matter. It was difficult to see the full picture of what the data system should be, given that we were transitioning from various forms and spreadsheets. We had a direction for the program, but when it came to the data system, we were learning as we went.
At the start, we were able to build forms, employ dynamic links to improve reporting, and generate basic census, demographic, and trend reports in Apricot. Figuratively speaking, we quickly moved through Apricot 101, leveraging the basics and standards that Apricot offers right out of the box.
As we began refining our model and continued to grow the organization, we started to stretch the limits of our configuration in Apricot. We were evolving past the 101 basics and we were ready to graduate to 201 or higher.
With new programs being added, some focusing on one of the three outcome areas, and new funders seeking specific data sets, Apricot started to feel like a constraining technology instead of an enabling one. It just wasn’t configured properly for our growth. Friction ensued.
We had a laundry list of challenges. We were:
- Finding it difficult to track multiple enrollments when a youth enrolled in consecutive programs or a youth re-enrolled after a gap following discharge;
- Losing historical data when youth would change status in a program, because our data collection standards tracked status as of today instead of maintaining the historical context of each change;
- Unable to correlate intake, ratings, progress notes, and discharge data without relying heavily on external reporting systems or manual work to make our data meaningful; and
- Noticing that forms were getting larger and staff were overwhelmed by the data entry tasks required of them in Apricot.
We tried to uncover the cause of these symptoms and developed a priority list of items we hoped to resolve in Apricot. We even made several attempts to draw out the architecture of Apricot to better identify what we were missing, but we needed a fresh set of eyes.
We contracted with Sidekick Solutions to support.
Jeff identified right off the bat a potential gap in our system: we needed a “program enrollment record.” As we worked through our priority list, the “enrollment” issue became a bigger and bigger problem.
Redesigning our Apricot to fix the program enrollment issue was always an option, but to be honest, we hesitated multiple times to fully consider redesign because of the potential cost, especially the cost of migrating the data we’d already collected in Apricot to date.
The cost was too high and there were still too many unknowns to make a smart decision in that direction. We needed to take a step back and look at the system holistically, but with a process that respected our budget constraints.
Jeff suggested an Apricot Database Specification and Discovery, or “Spec” project as a way to get some answers and to explore options.
We put a pause on the priorities and followed Jeff’s lead to complete the spec project.
Apricot is a builder. It has a set of rules and constraints on how you can build, but allows flexibility to create custom data entry, workflow, and reporting systems for nearly any program or service.
Most Apricot builds are unique, and even when they are similar, there are slight variations that make them custom. In order to assess Apricot, one needs to lay all of its pieces out on the floor, stand over the top, and figure out what’s what.
There is no way to do that in the Apricot platform itself, so we develop an Entity Relationship Diagram or ERD to “map out” an Apricot platform end-to-end.
While a finished ERD looks a bit like a subway map and can be overwhelming to look at, the end product gives us a complete look at the forms and linking relationships between forms, no matter how unique or custom an Apricot database may be.
So, why is this important?
Forms and links determine how you can report on your data, specifically:
- what kinds of reporting are possible or not possible,
- how those reports are configured, and
- how efficient it is to build a report with that data.
The right form/link structure expands Apricot’s potential for outcomes reporting, dashboard analytics, and user workflow. Those features are often the primary reasons organizations select Apricot in the first place but we can’t get those features without the “right” database structure. That’s why the structure of an Apricot database is often the cause when users feel they use only some or part of Apricot’s full potential. Forms and links are the whole ballgame in Apricot.
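To make the “forms and links” point concrete, here is a minimal sketch that models a database as a graph of forms (nodes) and links (edges) and checks whether a report joining two forms is even possible. The form names and links below are illustrative assumptions, not an actual Apricot configuration or API.

```python
from collections import deque

# Hypothetical sketch: an ERD reduced to a graph of forms and links.
# Form names and linking relationships are invented for illustration.
links = {
    "Facesheet": ["Enrollment"],
    "Enrollment": ["Treatment Plan", "Discharge"],
    "Treatment Plan": ["Progress Note", "Matrix Assessment"],
}

def report_path(start, target):
    """Return the chain of links a report would traverse from one form
    to another, or None if the forms aren't connected (in which case a
    report correlating them isn't possible without restructuring)."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in path:
                queue.append(path + [nxt])
    return None

# A report joining demographics to progress notes works only because
# every intermediate link in this chain exists.
print(report_path("Facesheet", "Progress Note"))
```

If `report_path` returns None for a pair of forms, that is the graph-level version of the symptom described above: the data exists, but the structure won’t let a report correlate it.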
The spec for Plummer was a three-part project that took about 30 days to complete.
First, we developed an ERD of the system as it exists now. We call this the “Existing ERD” or status quo ERD. That process alone often identifies immediate concerns.
There were four major takeaways from Plummer’s existing system:
- The system wouldn’t scale long-term because each program had its own set of forms and own set of reports. Each program addition or modification was being done in silos. The reporting Sarah wanted to do long-term just wouldn’t be possible without very heavy lifting.
- With no universal program enrollment form and only a single discharge form shared by all programs, we were limited in our ability both to implement new programs and to report by specific program, especially for youth who crossed programs. Discharge reporting was a key for Plummer.
- No workflow systems to govern the treatment planning and assessment process. We were able to track goals, but the system didn’t alert users when the next interval was due. Users had to remember what to do next. Data quality was a concern.
- When assessing a list of 30 top reports needed to measure Plummer’s model in Apricot, we found that a majority of them needed to be exported to Excel and then run through additional analysis before we could even answer the model’s questions. Apricot was essentially a data collection portal, not an analytics platform. This wasn’t ideal.
Next, we developed a “Vision ERD” of what the ideal Apricot configuration would look like. This required a deeper dive into the Plummer program model. With Sarah’s help, we also gathered feedback from users on what was and was not working.
With two ERDs in hand, we concluded the Spec project by comparing the ERDs and noting differences.
Generally, the Spec process supports one of two directions: either the differences between the two ERDs highlight a need for redesign and a potential transition to a new configuration, or they highlight how we can make incremental changes to the existing system to align it more closely with the vision. In Plummer’s case, we identified a clear need to redesign and, unsurprisingly, it centered around the concept of a “program enrollment.”
The decision to proceed with redesign was now in play.
The first investment for the Spec was easy…and that’s really the point. We needed a low cost, low risk project to get started. But the next investment for redesign wasn’t as easy.
We knew redesign could be a substantial investment (remember, price point was one of the driving factors when we started with Apricot), but didn’t know where that redesign would land.
We cannot make light of the decision to redesign. For us, it was an “Oh [blank] moment” because the investment was unexpected, not budgeted, and significant.
As we saw it, there were two options:
- Even though we had gaps in our data system, staff morale was up, youth were more engaged in their goals, and everyone understood our mission and how they related to it. Our system did track the core of what we wanted, but it was inefficient and wouldn’t scale long term. It was likely that if we stayed with our existing platform we would sacrifice expansion of our model in order to administer, manage, and implement our data system. For me personally, it was a decision of whether I should be managing a database or analyzing data and facilitating evaluation. For the organization, it was whether we wanted to staff up around database administration or keep resources focused on programs. There was a cost to the inefficiencies we faced with the current system and we had to be honest about that cost.
- While a redesign carried a significant cost that was neither expected nor budgeted, redesign provided the opportunity of having a “world class” data system, one that could grow with our programs, support staff with the nuance and structure of our outcomes model, and efficiently report on the questions we wanted to answer.
It was a pivot point.
Leadership saw the benefit of a one-time expense to redesign the system on the front end in order to free up time on database administration on the back end, allowing us to continue to develop outcomes management rather than be mired in administration and reporting.
This is where we did something unexpected, especially when working with a consultant. We shared our project constraints with Jeff: budget, timeline, the whole story. Glad we did too because that opened the conversation to new options that we hadn’t thought of before, nor did we think would be possible.
There were two options to consider as part of the redesign.
- One option was to redesign all programs at once and simultaneously migrate the historical data to the new format. This was going to be a massive project and ended up being too expensive and too intense on timeline to consider. In my opinion, this type of project is rarely a good fit and generally more aggressive than prudent.
- The second option was a staggered implementation approach that bought time for Plummer to get closer to their Fiscal Year break. We would implement two programs first and then continue with two additional programs in staggered timelines, all happening before the new Fiscal Year. Then the kicker…we wouldn’t migrate a single record. No data migration. Instead we would start fresh on the new Fiscal Year with a new system and archive everything historical in its current location.
My recommendation was to stagger implementation and archive historical data in place. We had about seven months before the Fiscal Year break and to me the timeline aligned nicely to launch Plummer’s 2.0 version of Apricot on the Fiscal Year start.
Of course, Sarah had questions about the no-data-migration and “archive in place” strategy, especially around reporting and user access. Understandable, especially since no data migration sounds like a halfway strategy when considering a redesign. With assurances that we could set up permission sets to allow users view-only access to historical data, and after explaining the short-term pain that would be felt in reporting across the old and new systems, we prepared for the next steps.
This kind of flexibility makes it possible to move forward within the real-life constraints of organizational priorities. It meant that for a period of time, for programs that already existed in Apricot, we would have to pull from two data sets to get a complete picture for all youth.
While not ideal, the short-term pain of split data sets was significantly less than the potential data migration cost. And today…over twelve months later, it worked and is still working for us.
There is no making light of the decision we made to invest in redesign. Our team held multiple meetings, there was lots of back and forth, and we had to be honest about what we wanted long-term for the organization. In the end, we prioritized our data system and outcomes measurement objectives. There were three takeaways.
#1 – Define what’s what
I am sure Sarah would agree, and hindsight is 20/20, but we should have started our engagement with a spec. Sarah was incredibly prepared, bringing a priority list and having clear expectations for our first project, but looking back, the priority list identified symptoms, not causes. The takeaway here is that it is best to get a clear picture of what’s what in order to make informed decisions on next steps.
It is understandable to be skeptical of assessments. Most assessments sit on the shelf, aren’t acted upon, and waste valuable resources. That’s why we simplify the overall assessment into three steps.
- Define the current state or status quo
- Define a vision of the future opportunity
- Compare the two to develop the roadmap
If the comparison validates your challenges with opportunities for resolution, then the assessment has done its job. Learning what’s what is a low cost first step that grounds decision making and lays all the pieces on the table.
#2 – Run a complete comparison of value
All investments are evaluations of value. There is a cost to redesign, but there is also a cost to not redesign. There are benefits on both sides of that equation too. Building the case is essential.
Generally speaking, when we are building a case for software, in either new or redesign modes, we look at costs and benefits in four domains:
- efficiency (or inefficiency) of time,
- pure dollar costs,
- accuracy of decision making (like having clearer information), and
- increase or decrease in emotional energy (like stress or focus).
You can find more information about building a business case for software and these four domains on our website at the link provided.
#3 – Total data migration isn’t the only option
When considering redesign or transition to new features, the most common response we receive on data migration is “we want total” … “migrate all of it.” However, with some exploration we learn that “total” data migration isn’t needed.
Consider a transition on a Calendar Year or Fiscal Year break and coping with split reporting for a short-term period (1-2 years). This lowers the cost of redesign as well as the timeline for redesign. We advise most of our clients to be realistic with data migration and think about what data they use, how they use it, and to consider alternatives like an “archive in place.” You’ll often find, like Plummer did, that migrating the data isn’t as important six or twelve months down the line. And, in the chance it becomes a priority in the future, the data is sitting right there in its archived location to be migrated at that time…when urgency is highest.
Once the decision to re-implement was made and our project kicked off, Sarah was clear that the system should facilitate the analytics we needed for the program model, but equally important was configuring the system to reinforce the clinical practice.
This, like so many other parts of Plummer’s journey, took time. Sarah and I met weekly. Although the Spec project identified a general architecture for the “Vision Apricot,” we continued refining assumptions and clarifying vocabulary.
We didn’t get the design “right” until the fourth iteration.
During this process, we identified a handful of design choices that were essential to Plummer’s goals.
While we can’t cover Plummer’s entire system or every step we went through to get where we are today, there are some highlights worth noting.
#1 – Universal design
This was a key assumption going in and became the foundation for saving time on Apricot database administration, reporting, and future program implementation.
Plummer’s core design supports cross-program data tracking with the ability to use program filtering in reports. The system included:
- Shared referral form for incoming referrals and program eligibility reporting
- Shared Universal Facesheet with standardized demographics for all programs
- Shared Intake and Discharge form so we could report on the totality of participation at Plummer in a single deck of reports
- Shared Treatment Plan, Progress Notes, and Outcomes Assessment so all programs operate from the same intervention and outcome model
The most technical component of the universal design was the Intake and Discharge, also known as a program enrollment form.
We’ve touched on the program enrollment form during this webinar and why it was a challenge in Plummer’s initial design. In total, the new design would have a single enrollment form to track all program intake and exit, with the ability to:
- track and report on multiple, simultaneous, and repeat enrollments;
- track continuity of service;
- report globally across all programs;
- maintain operational consistency; and
- set up the system for long-term growth into more programs.
To start, Sarah and I worked closely to parse data requirements for each program to find shared and common fields. This was a big task. We were able to consolidate intake and discharge around the outcomes areas of permanency, preparedness, and community. However, we still had unique data sets in two programs: foster care and residential. Some of those requirements were compliance related, so we couldn’t overlook them. Our intake form was getting too big and short of more conditional logic, we needed another way to capture program-specific intake data.
We adopted an “enrollment supplement” approach. We hard-coded the data system to allow one and only one supplement per enrollment to ensure accurate reporting with no duplicates. This gave us the best of both worlds, a universal design that honored program specificity while still maintaining a streamlined workflow experience in Apricot. With data quality reporting we know which intakes should have a supplement but none exists and are able to prompt users to complete them.
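The “one and only one supplement per enrollment” rule boils down to counting supplements per enrollment and flagging anything other than exactly one. Here is a minimal sketch of that data quality check; the record IDs and program names are invented for illustration, not real Plummer data.

```python
from collections import Counter

# Hypothetical enrollment and supplement records (IDs and program names
# are illustrative only).
enrollments = [
    {"id": 101, "program": "Residential"},
    {"id": 102, "program": "Foster Care"},
    {"id": 103, "program": "Residential"},
]
supplements = [
    {"enrollment_id": 101},
    {"enrollment_id": 101},  # duplicate supplement -- should be flagged
]

def supplement_exceptions(enrollments, supplements):
    """Flag enrollments violating the one-supplement-per-enrollment rule."""
    counts = Counter(s["enrollment_id"] for s in supplements)
    missing = [e["id"] for e in enrollments if counts[e["id"]] == 0]
    duplicated = [e["id"] for e in enrollments if counts[e["id"]] > 1]
    return missing, duplicated

missing, duplicated = supplement_exceptions(enrollments, supplements)
print(missing)      # enrollments with no supplement -- prompt users to complete
print(duplicated)   # enrollments with more than one -- duplicates to resolve
```

A report built on this kind of exception logic is what lets staff be prompted to complete a missing supplement before it ever distorts aggregate reporting.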
The integration of the Plummer workflow with Apricot was the breakthrough of the design, but it took the fourth iteration to get there. Each program has slight variations on the intervals used to trigger treatment planning events and reviews. The hardest part of the design was keeping universal structure while also aligning those different timelines.
We came up with a workflow diagram, shown here, that identified the steps required at each program interval. We used program-conditional logic to identify the interval, which we’ll show in a bit, and Apricot was ready to mirror Plummer’s day-to-day workflow.
The workflow identified the form structure. It also presented an opportunity. Can we notify staff on what to do next? With that question, the Active Status Dashboard was born.
The Active Status Dashboard is a trigger-based to-do list and caseload report showing:
- Caseload by staff and by program
- Progress review/updates to do
- Treatment plans to do
- Matrix assessments to do
The Active Status Dashboard is conditional, which means that a task remains due until it is complete. There are no workarounds. The report provides a clear action list for the entire Plummer team.
Here are some screenshots of the Active Status Dashboard highlighting its key components.
First, a simple caseload report grouped by program. We have another section grouped by assigned staff as well.
Second, progress reviews/updates to do shows when each task is due, how long it has been overdue, and a quick link to complete the step.
Third, treatment plans to do shows when the task is due and only triggers when the progress review/update for the previous interval is complete, which matches the workflow illustrated earlier.
Last, outcomes assessments to do shows when the task is due, sorted in priority order by most overdue, and with quick links to complete the task.
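The gating rule behind the treatment-plan section can be sketched as follows. This is an illustrative model only, with invented record shapes; Apricot implements this as trigger-based report logic, not Python. The rule: a treatment plan for interval N becomes due only once the progress review/update for interval N-1 is complete (interval 1 needs no prior review), and the task stays on the list until the plan itself is completed.

```python
def treatment_plans_due(youth_interval, completed_reviews, completed_plans):
    """Return (youth, interval) pairs with a treatment plan currently due.

    youth_interval   -- each youth's current interval number (hypothetical)
    completed_reviews -- set of (youth, interval) with a finished review
    completed_plans   -- set of (youth, interval) with a finished plan
    """
    due = []
    for youth, interval in youth_interval.items():
        # Interval 1 has no prior review; otherwise the previous
        # interval's progress review/update must be complete.
        review_ok = interval == 1 or (youth, interval - 1) in completed_reviews
        plan_done = (youth, interval) in completed_plans
        if review_ok and not plan_done:
            due.append((youth, interval))   # remains due until completed
    return sorted(due)
```

Because the list is recomputed from the underlying records each time, there are no workarounds: the only way to clear a task is to complete it.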
In addition to the Active Status Dashboard, we also built assessment to-do lists and conditional data quality reports. These systems help us adhere to compliance standards, funding requirements, and clinical treatment planning. They keep everyone on track and ensure we have a complete data set when it comes time to run reports.
Another feature of Plummer’s Apricot design is correlated data sets.
Plummer tracks and reports on the following indicators to create a complete picture of a youth’s progress.
- Behaviors, services, and milestones
- Domain area ratings
- Goal status
This is a dense list of indicators and our challenge during redesign was correlating them for reporting by youth, program, and interval.
The correlation at interval was key, which is shown on the base Tx Plan record here. Each treatment plan identifies an interval which then correlates with each indicator for aggregate reporting.
- Behaviors, services, and milestones are part of the Progress Note. We needed a way to identify multiple per Progress Note because youth may present more than one behavior and staff may provide more than one service. We used a many-to-many dynamic checkbox option that lets us count the number of times an “event” happens along with its relation to overall treatment planning by interval.
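Conceptually, the counting that the many-to-many structure enables looks like the sketch below. Field names and sample values are invented for illustration; the point is that each progress note can carry multiple checked items, and every checked item is tallied per treatment interval.

```python
from collections import Counter

def event_counts_by_interval(progress_notes):
    """Tally each checked behavior and service per treatment interval.

    progress_notes -- list of dicts with hypothetical fields:
                      "interval", "behaviors", "services"
    """
    counts = {}
    for note in progress_notes:
        tally = counts.setdefault(note["interval"], Counter())
        tally.update(note["behaviors"])   # one note may carry many behaviors
        tally.update(note["services"])    # ...and many services
    return counts
```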
- Domain area ratings measure progress and need to be correlated to presenting behaviors, services, and milestones. In addition, we needed to chart domain area progress over time to see trends. We used a pre/post structure that allows for multiples of the Outcomes Assessment, one per interval. From there, Sarah was able to overlay the home-grown Matrix tool to round out the form design.
- Goal status was the final piece of the puzzle. As we discussed earlier, the workflow of treatment planning meant that we would complete a progress review/update at the conclusion of each interval before starting a new treatment plan. That gave us the opportunity to revisit goals assigned at the start of the treatment plan interval to determine whether they were complete, incomplete, or needed attention. Because treatment planning is tracked per interval, we got the same ability with goals and used the same form to capture goal status.
The correlated data set is the aggregate of these components by interval. Sarah wants to know when, how long, and based on what factors participants turned corners in the model. This structure was designed to support that.
It is worth noting that we knew Apricot’s standard reporting tools could correlate each single indicator by youth, program, and interval, but that we needed alternative reporting tools to correlate all of these indicators into a single aggregate data set. We’ve since developed these tools, some of which require external reporting systems, but our redesign priority was always to get the right data structure into Apricot so we could correlate the data on the back end for exploration and analysis.
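The back-end correlation amounts to joining each indicator on a shared (youth, program, interval) key. Here is a minimal sketch with invented field names; the real work happens in reporting tools, but the data structure is what makes the join possible:

```python
def correlate_by_interval(domain_ratings, goal_statuses, event_counts):
    """Merge indicator records into one row per (youth, program, interval).

    All field names are hypothetical; the key idea is that every record
    type carries the same (youth, program, interval) identifiers.
    """
    rows = {}

    def row(key):
        return rows.setdefault(key, {"domain_rating": None,
                                     "goal_status": None,
                                     "event_count": 0})

    for r in domain_ratings:
        row((r["youth"], r["program"], r["interval"]))["domain_rating"] = r["rating"]
    for g in goal_statuses:
        row((g["youth"], g["program"], g["interval"]))["goal_status"] = g["status"]
    for e in event_counts:
        row((e["youth"], e["program"], e["interval"]))["event_count"] = e["count"]
    return rows
```

With rows keyed this way, questions like “when, and under what conditions, did a youth turn a corner” become aggregate queries over intervals rather than manual cross-referencing.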
As part of the redesign, we identified that some resources tracked per youth were global and not program-specific. We don’t have time to cover the full scope of this feature during this webinar, but the Team form was a highlight because of its application to the permanency practice without specific relationship to a program enrollment.
Circling back to the definition of Family mentioned earlier, we wanted to keep logs of the people who touch a youth’s life. We profile each individual without creating a “profile” for them in Apricot, and Team members apply globally to a youth instead of to a specific program, because a youth’s relationships aren’t program-specific. Generally speaking, profiles of entities like people are built as T1 forms, not T2 forms. However, in this case, supplemental forms like the Team form add context to treatment planning without complicating the Plummer system with non-essential T1 profiles.
I have to credit Plummer for their continued investment in Apricot as an enabling technology, even as the redesign project was wrapping up. During implementation, we identified what could be considered “nice-to-have” functionality to support end users with treatment planning and data visibility. By data visibility, I mean the ability to surface data that is relevant in time, like a particular event or interval, or even to surface data on demand. Plummer didn’t see these suggestions as supplementary; they saw them as “need-to-have” to achieve the “ultimate” of what their Apricot could be. Redesign was prompted by the idea of having a “world class” data system, and Plummer stuck with that principle after the redesign project to build additional custom features. Three of them stood out.
First, the Youth Summary Report.
The Youth Summary Report has become a fan favorite at Plummer, so much so that it has earned the acronym YSR, and demand for more data expanded the report into five parts. Yes, the report is so big that we had to split it into five parts.
This report searches for a single youth and then builds a profile based on that youth’s total participation in all Plummer programming. The YSR has summaries for profiles and treatment planning, plus one part for each of the three outcome areas.
Next, we developed leadership reports.
This was originally envisioned as a single report but has since expanded into a category of reports called Leadership Team. Sarah and I just discussed this report last week and expect it will expand with a Part 4 shortly.
The concept of a leadership scorecard is not unique, but this batch of reports is evidence of a “practice what we preach” mentality at Plummer, especially around data and outcomes. These reports help leadership monitor key indicators for active youth, discharged youth, and incoming referrals. What took days and weeks to prepare before leadership meetings now takes minutes.
Lastly, we developed printable reports.
Plummer was running a number of processes via Word merge and manual aggregation, like typing data from Apricot into a Word template. These recurring tasks largely produced compliance templates and cheat sheets for staff. To simplify this, we moved to Apricot reports with custom filters to create the templates. The Active Census is a perfect example of this type of “printable” report. Again, we don’t have time to dig into all of the details of this style of report, but it illustrates how the redesign is saving time.
These technical features addressed the foundation for Plummer’s model, but the integration with Plummer’s day-to-day clinical practice is where we’ve seen the transformational wins we hoped for.
By the numbers, our new Apricot includes:
- 42 active users and expectations for future growth
- 6 programs operating within a shared “universal” design
- 100% trained staff in 6 programs, all documenting toward outcomes
- 100% trained staff on all shifts aware of and engaged in client goals
- Case notes that reflect clinical interventions and behavioral observations aligned with the Plummer program model, plus the ability to add new programs
Having been part of this process from the beginning, I have witnessed a dramatic culture shift in both clinical competency as well as respect for data. What started with eye rolls and disinterest in data has transformed into positive feedback from staff at multiple levels. It is awesome!
This webinar highlights our “wins” and likely implies that we have Apricot, our model, and the deployment of both all figured out. The reality is that our process has not been a straight, unencumbered path. We want to share where we are today, looking at both the positives and the areas we need to improve, to show that the integration of Apricot, our model, and our practice is an ongoing effort. We are far from complete.
The positive results from our journey include:
- We have an Apricot structure for data entry and data visualization that encourages and reinforces staff adherence to our intervention and outcome model…and staff are increasingly able to see the connection between interventions and outcomes.
- We have fewer reports because our new configuration allows us to use one stack of reports and filter by program. We are also reporting in Apricot, which is much more time efficient. We are moving from anecdote to “why” questions about program outcomes.
- We have way more time to do the fun stuff! We have gone from staff complaining about data to them asking for specific reports or suggesting new ones. And I am able to begin to focus on real evaluation studies.
- New program implementations overlay onto our existing Apricot structure. In the past, we needed to implement new forms and reports for each program. Now we are able to incrementally grow our data system with less effort.
We still need to improve in a few areas.
- Improving data quality procedures. There are two parts: 1) cleaning trouble areas and 2) maintaining regular data quality checks. We keep a list of cleanup tasks, but it is difficult to find time to address all of them. For example, we are currently exploring the need for re-training on intervention definitions because we identified that differences in interpretation skew the data. Additionally, we would like to be more systematic in our approach to data quality checks. Although we have the tools in place to identify gaps, we aren’t as regimented as we would like to be in reviewing those tools. More time would help, which isn’t always easy to find. Accountability to reviews is on our radar.
- Adjusting to changes in the vocabulary of data fields and categories. Occasionally the state or a funder will change what they call something, for example, how race or gender are categorized. These changes can affect existing data and often require mini data migrations to realign forms for both historical and future reporting. In practice, it is sometimes difficult to slow down and intentionally evaluate a change and its impact on the overall system. We are working on better change-management systems because we recognize that hasty updates can lead to messier forms long term.
- Seamless implementation of new capabilities. We do not always have the resources – time or consistent focus – to give Apricot constant attention. This can create disjointed implementations. We continue to work on being patient in deployment when it feels like we are always sprinting.
- Report development and analytics aren’t so much a weakness to address as a constant topic for capacity building. We are always looking for better report designs that allow for improved interpretation of our data. For example, we recently started a project to develop reports that correlate youth indicators to analyze trends across interventions, behaviors, and milestones.
Given all of the above, where are we going next?
- Getting at that nagging issue of accountability. We are now focused on developing the supervisory tools necessary to link clinical practice to HR expectations. The use of these tools will be codified in policy and procedures and will be reported in Apricot. We want to get more day-to-day practice tools into the hands of supervisors and frontline staff.
- We are in the midst of applying for accreditation and will need to overlay a case study system in our Apricot to comply with standards.
- We are also working on reporting that itemizes the financial benefit of our intervention, in order to answer questions posed by stakeholders ranging from investors to board members to finance.
- We also want to pursue targeted evaluation studies to document the impact of specific evidence-based interventions. Part of this development depends on the scope of our data set, which we expect will be substantial enough for the analysis within this Fiscal Year.
- We are also excited about the prospect of joining peer organizations to design data collection standards for advocacy in our permanency practice. We think peer partnerships will give us more data to test the assumptions behind our intervention and outcomes model.
This process – designing our model, implementing and then re-implementing Apricot, and now administering and managing both the model and Apricot – has been and remains time-intensive.
We knew it would be an investment, but it was difficult to understand the full scope of that investment until we got into it. We invested more in our data, analytics, and people systems than we originally planned, but have found it necessary to progress in our work. The steps we’ve taken have been worth the investment because we remained committed to a belief in our program and rigorous in testing our assumptions.
That said, we’ve found this journey both exciting and rewarding. When innovating program design, you have to go all in because you are learning all the time. That’s why we don’t expect our learning to stop. In turn, our Apricot is equally dynamic, evolving with new conditions and explorations of our outcomes. The integration between the two and the fluidity of learning in our model inherently mean “change.”
Although this “change” produces unknowns in next steps, our team is growing closer and our culture more engaged as we work together to answer the difficult and interesting questions about our work.
Thank you for attending today’s webinar and exploring this case study of Plummer Youth Promise. Sarah and I are thrilled that you could join us.
Shortly, we will open up for questions. Before that, a few housekeeping items.
As a reminder, a recording of today’s webinar and the slides will be made available to you following today’s session. Please be on the lookout for an email with those resources.
If you enjoyed today’s presentation and our approach to Apricot software, let’s find some time to connect. Sarah and my doors are open. We’d love to learn more about your organization, your program model, and your Apricot. Feel free to send us an email.
Thank you for attending today.
Let’s open our time up for questions.