How to Measure Outcomes with Apricot Software: A Case Study [Webinar]

In this recorded webinar, Plummer Youth Promise, a Massachusetts-based nonprofit supporting youth with residential, foster care, and independent living programs, shares their journey with Social Solutions Apricot 360 software.

Plummer Youth Promise started with Apricot on a positive track, but soon faced a number of limitations in system usability and reporting. Although Apricot was the right platform for their needs, it just wasn’t configured to work for their programs. Plummer Youth Promise faced a difficult decision: re-implement Apricot for better data, clearer outcomes, and an improved workflow for case managers, or stick with the status quo and rely heavily on external reporting and manual data analysis?

In order to effectively track youth outcomes across multiple programs and within specific outcome and domain areas, they decided to re-implement Apricot. With a new Apricot system in place and an upgrade to the Apricot 360 platform, Plummer Youth Promise has been able to reinforce their program’s intervention and outcome model through Apricot’s design while also supporting case managers in their day-to-day interactions with youth.

In the webinar video and slides below, you will learn:

  • Why the design of your program model precedes the design of your Apricot database
  • How to design a program model and get buy-in for it across the organization
  • When and how to evaluate if it is time to re-implement Apricot
  • How to translate an evaluation model into the requirements of your database
  • The impacts of form and link design on reporting requirements
  • Why using universal, cross-program form design can provide greater insight in reporting
  • How to reinforce program fidelity through Apricot design (e.g., workflow, data quality, and form design)


Webinar transcript (not including Q&A):

Welcome to our webinar, “How to Measure Outcomes in Apricot Software,” a case study on Plummer Youth Promise. We appreciate your interest in today’s topic. Thank you for joining us.

[SLIDE – 2]

My name is Jeff Haguewood. I am an Apricot software consultant at Sidekick Solutions. Sidekick Solutions is an Apricot Certified Implementation Partner and consulting firm that specializes in Social Solutions Apricot software. We help new and existing users integrate Apricot with the way they work.

I am joined today by Sarah Morrill. Sarah is the Director of Outcomes and Evaluation at Plummer Youth Promise in Salem, MA. Sarah is a licensed social worker with 25 years of experience in human-services data analytics, outcomes measurement, and program design. Sarah joined the Plummer team after working as a consultant for Plummer leadership on the Plummer Intervention and Outcome Model, which is the focus of today’s webinar.

[SLIDE – 3]

We put together this webinar to answer a single question.

How do we design a program and outcomes model and then set up a data system that enables the model?

Sarah and I are both excited for today’s webinar because Plummer’s recent program growth and adoption of Social Solutions Apricot software will highlight Plummer’s journey toward answering this question.

[SLIDE – 4]

Our agenda today will cover…

Part 1: Designing the program and outcomes model
Part 2: Assessing gaps in the initial data system’s design
Part 3: System redesign for integration with the outcomes model

Today’s presentation will be a mix of strategy and technical detail, and we will cover a lot of ground. Following this webinar, you will receive an email with a recording of today’s presentation and a copy of the slides. We will also open for questions at the end of the presentation and plan to stay online to answer them.

Let’s dive in. To start, I’ll hand it off to Sarah to give us some background on Plummer Youth Promise and explore the Plummer Intervention and Outcomes Model.

[SLIDE – 5]

Thank you Jeff.

Plummer started as a “farm school of reform for boys,” founded through an 1854 bequest from philanthropist Caroline Plummer. Originally envisioned as a juvenile diversion program, Plummer was also an orphanage, serving dozens of children in one building.

By 2005, over 150 years after our founding, “Plummer Home for Boys” (as we were then known) was down to 10 teenage boys at a time, all referred through child welfare.

[SLIDE – 6]

And while we were considered to be a “successful” program, our leadership, staff, and board felt we could be doing more to support youth. We wanted better! We began a period of reflection that culminated in a new vision and mission, and we added a theory of change with a corresponding intervention and outcome model. We wanted to answer these key questions.

Today, we serve girls and boys, from infants to young adults, in residential, foster, and community-based programming, and our name, Plummer Youth Promise, reflects our promise of “family for everyone.”

[SLIDE – 7]

At Plummer we emphasize family through our focus on permanency. The best practice definition of permanency ensures that all those we serve have an enduring family relationship that is safe and lifelong; offers the legal rights and social status of full family membership; provides for physical, emotional, social, cognitive, and spiritual well-being; and assures lifelong connections to birth and extended family, siblings and other significant adults, family history and traditions, race and ethnic heritage, culture, religion, and language.

Our definition of family is not limited to biological family, and for us this is an important distinction of our model.

While our journey has been long, this webinar will focus on our more recent history, specifically the development of our intervention and outcome model and the alignment of our technology with our clinical practice. By intervention, we mean the clinical activities we provide within our programs.

[SLIDE – 8]

The intervention and outcome model came first. It is a visual representation of our theory of change. We believed then, and our experience reinforces that belief now: a strong model precedes the data system that tracks it.

[SLIDE – 9]

We started the design of our intervention model the way most organizations do. We started with our experience.

Based on our experience, we knew that families were the key to long-term success for at-risk youth, but we had little data to support our anecdotes, and we were not paid to make these connections; we were paid to teach skills. We wanted to test our theory of change against the day-to-day practices followed by case workers. We needed an intentional and purposeful model.

[SLIDE – 10]

We applied a two-fold approach to designing our model.

First, research.

We learned as much as we could about the existing outcomes for those we serve. This included sourcing information in the child welfare field and conducting a deep dive of all data sets we had from discharged participants in the previous three years. We parsed through external studies and internal data to build a complete view of our work.

This took time!

Research tasks were shared by volunteer board members and the Executive Director. All insights were brought to staff for review and discussion.

Second, we talked to nearly every staff member, board member, and key stakeholder (including youth and family).

The goal was the same. We wanted to learn. There were lots of interviews, conversations, and outreach.

Again, time consuming!

[SLIDE – 11]

We asked these stakeholder groups to comment on the current assumptions of the program and what they thought was possible for those we serve. While feedback from external sources offered insight into the factors at play outside of our organization, internal stakeholders identified where our team and culture deviated from where we planned to go.

This process demanded participation from all levels.

We found that exploring assumptions and getting to the truth is not always easy, but it is necessary. The last three questions about success set the stage for the next steps.

[SLIDE – 12]

Following these two steps, we created our own template of what it would take to get the results we believed all youth deserve:

to have permanency through a safe emotionally secure parenting relationship with a life-long, preferably, legal family;
to be prepared by having the skills and support to meet his/her/their physical, emotional, educational, and economic needs; and
to be connected to his/her/their community by having a safe place to live, a sense of belonging and a chance to positively contribute to the community.

[SLIDE – 13]

These global goals: Permanency, Preparedness, and Community became our outcome areas.

Within each outcome area we defined eight life domains. The eight domains would become essential indicators for youth progress in each of the three outcome areas.

The visualization of our model illustrates these domains, and you can see from the blue line holding it together that Permanency is the linchpin.

The design of our model evolved as we learned more about the importance of permanency. The current vision for Plummer focuses on permanency practice and is inclusive of our tagline: “family for everyone.”

We now have a Permanency Practice Leadership division, provide consulting and training to others in this practice area, and have expanded our permanency core with programs that ensure youth in foster and group care have families they can count on, forever.

So if designing a program model is the start, how do you actually “get started?” There are four takeaways from our process.

[SLIDE – 14]

#1 – Prioritize both external research and internal discovery

Research and discovery starts with a question.

What do we know about the problem we are trying to solve?

Our process was marked by extensive field research, which helped us crystallize concepts and provided grounding for our direction.

What does current best practice in the field tell us?

We value the results-based work done by others and actively sought to incorporate those lessons wherever it felt like a good fit.

In addition, we conducted internal discovery, both qualitative and quantitative. We combed through the data we did have, as scattered as it may have been, and engaged the front-line of our staff in active conversations to determine what we did and did not know about our clients and programs.

What do we know about what we already do, or what we aren’t doing?

The combination of external and internal research creates a “body of knowledge” as the foundation for designing a program and outcomes model.

[SLIDE – 15]

#2 – Develop strong support

The organizational inertia that must be overcome to get an initiative like this off the ground cannot be overstated. The process of designing a program model touches your organizational identity, its DNA. A firm and long-standing “way” of going about things can be difficult to unwind and then even more difficult to redesign and build back up.

Full leadership buy-in and deep stakeholder involvement, both internal and external, are required to launch this work. Organizational leadership must run this process in a fully inclusive way that involves front-line staff, since they are the ones implementing on the ground, day to day. The ecosystem of tools and supports must be in place to see this process through.

[SLIDE – 16]

#3 – Merge evidence-based and performance-based principles

While our model was informed by evidence-based practices through research, the measurement of it is now performance-based as we test our theory of change.

Being performance-based means having the ability:

to collect data to measure program outcomes and the indicators that lead to positive outcomes, and
to demonstrate, over time, continuously improving outcomes via positive trends in performance.

We developed home-grown tools and processes that model what we experience day to day. They aren’t evidence-based yet, but we believe they address our work and we are eager to test them in the field. In the here and now, we focus heavily on measuring performance, learning, refining our tools and processes, and then measuring again. Whether our assumptions or our model are right or wrong, we believe it is worth testing.

A goal for 2020 is to strengthen our performance-based work with the inclusion of outcome-specific evidence-based tools. For example, we seek evidence-based tools to measure loneliness and connection for use alongside our homegrown scaling tool.

We’ve found that performance-based principles can lead to innovation and to that end we believe we are on a bit of an entrepreneurial endeavor, with a goal to demonstrate our model. Whether formal evaluation is in your future plans or not, performance-based is an approach you can start right away.

[SLIDE – 17]

#4 – Design a universal model and document heavily

This webinar will highlight a consistent theme around “universal.” We designed our intervention and outcome model to be universal across the programs and services we offer.

Although the services we offer may vary by program, our approach toward Permanency, Preparedness, and Community, with permanency as the linchpin, is universal. So in day-to-day practice, this means we streamline forms as much as possible.

We now use one universal intake and discharge, shared treatment planning and service coordination, and a home-grown assessment we call the Outcomes Assessment, which includes a 4-point scale to measure progress along a continuum for our eight domain areas. We also reimagined a universal daily progress note that engages all staff on all shifts to participate in documenting their interventions and youth behaviors.
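
To make the universal assessment a bit more concrete, here is a minimal sketch of what a single Outcomes Assessment record could look like as data, using the 4-point scale described above. The domain names, scale labels, and scores are placeholders, not Plummer’s actual eight domains or Apricot’s data format.

```python
# Illustrative sketch only: domain names and scale labels are placeholders.
OUTCOME_AREAS = {
    "Permanency": ["domain_1", "domain_2", "domain_3"],
    "Preparedness": ["domain_4", "domain_5", "domain_6"],
    "Community": ["domain_7", "domain_8"],
}

def summarize_assessment(ratings):
    """Average the 4-point domain ratings within each outcome area."""
    summary = {}
    for area, domains in OUTCOME_AREAS.items():
        scores = [ratings[d] for d in domains if d in ratings]
        summary[area] = round(sum(scores) / len(scores), 2) if scores else None
    return summary

# Example: one youth's assessment at a single interval.
example = {"domain_1": 2, "domain_2": 3, "domain_3": 2, "domain_4": 4,
           "domain_5": 3, "domain_6": 3, "domain_7": 1, "domain_8": 2}
print(summarize_assessment(example))
```

Scoring one record like this per interval is what later allows progress along the continuum to be charted over time.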

While the universal model simplified standard operating procedures and focused our team around a core of principles, it didn’t decrease the need to document our processes. We heavily document all of our procedures and it has been a key to deploying the model across all of our programs.

[SLIDE – 18]

With our program model in place, it was time to implement the data system.

We selected Social Solutions Apricot software.

Our initial adoption of Apricot was prompted by the need to have a product that was easy to use, would move us from paper to electronic records, and would not blow our budget.

[SLIDE – 19]

We found it difficult to become more data-informed without a data system; paper forms and spreadsheets weren’t cutting it. The promise of Apricot was to manage data for compliance, clinical practice, and outcomes. Looking at both sides of this coin, it was a given that we wanted Apricot to organize our data and report on outcomes, but we also hoped the system would reinforce our clinical practice. We wanted Apricot to help users “stay in the lane,” if you will, toward the methodology we developed for supporting youth toward permanency.

These were principles we held going in, but we did not have concrete examples on how this translated to Apricot or any data system for that matter. It was difficult to see the full picture of what the data system should be, given that we were transitioning from various forms and spreadsheets. We had a direction for the program, but when it came to the data system, we were learning as we went.

At the start, we were able to build forms, employ dynamic links to improve reporting and generate basic census, demographic and trend reports in Apricot. Figuratively we quickly moved through Apricot 101, leveraging the basics and standards that Apricot offers right out of the box.

As we began refining our model and continued to grow the organization, we started to stretch the limits of our configuration in Apricot. We were evolving past the 101 basics and we were ready to graduate to 201 or higher.

[SLIDE – 20]

With new programs being added, some focusing on one of the three outcome areas, and new funders seeking specific data sets, Apricot started to feel like a constraining technology instead of an enabling one. It just wasn’t configured properly for our growth. Friction ensued.

We had a laundry list of challenges. We were:

Finding it difficult to track multiple enrollments when a youth enrolled in consecutive programs or a youth re-enrolled after a gap following discharge;
Losing historical data when a youth changed status in a program, because our data collection standards tracked status as of today instead of maintaining the historical context of each change;
Unable to correlate intake, ratings, progress notes, and discharge data without relying heavily on external reporting systems or manual work to make our data meaningful; and
Noticing that forms were getting larger and staff were overwhelmed by the data entry tasks required of them in Apricot.

We tried to uncover the cause of these symptoms and developed a priority list of items we hoped to resolve in Apricot. We even made several attempts to draw out the architecture of Apricot to better identify what we were missing, but we needed a fresh set of eyes.

We contracted with Sidekick Solutions for support.

Jeff identified a potential gap in our system right off the bat: we needed a “program enrollment record.” The enrollment record was our key to a universal design that allowed data segmentation by program. As we worked through our priority list, the “enrollment” issue became a bigger and bigger problem.

Redesigning our Apricot to fix the program enrollment issue was always an option, but to be honest, we hesitated multiple times to fully consider redesign because of the potential cost, especially the cost of migrating the data we’d already collected in Apricot to date.

There was too much budget at stake and still too many unknowns to make a smart decision in that direction. We needed to take a step back and look at the system holistically, but with a process that respected our budget constraints.

Jeff suggested an Apricot Database Specification and Discovery, or “Spec” project as a way to get some answers and to explore options.

We put a pause on the priorities and followed Jeff’s lead to complete the spec project.

[SLIDE – 21]

Apricot is a builder. It has a set of rules and constraints on how you can build, but allows flexibility to create custom data entry, workflow, and reporting systems for nearly any program, service, or outcome model.

Most Apricot builds are unique, and even when they are similar, there are slight variations that make them custom. In order to assess Apricot, one needs to lay all of its pieces out on the floor, stand over the top, and figure out what’s what.

We develop an Entity Relationship Diagram, or ERD, to “map out” an Apricot platform end-to-end.

While a finished ERD looks a bit like a subway map and can be overwhelming to look at, the end product gives us a complete look at the forms and linking relationships between forms, no matter how unique or custom an Apricot database may be.
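
As a rough illustration of what an ERD captures, the sketch below models forms as nodes and links as directed edges, then walks the links to show which records can be pulled together. The form names and link structure are hypothetical and greatly simplified; a real Apricot configuration is far larger and specific to each organization.

```python
# Hypothetical, simplified form-and-link map; not a real Apricot build.
# Each key is a form; each value lists the forms it links down to.
LINKS = {
    "Participant Profile": ["Program Enrollment", "Progress Note"],
    "Program Enrollment": ["Treatment Plan", "Discharge"],
    "Treatment Plan": ["Outcomes Assessment"],
}

def reachable(form, links=LINKS):
    """Return every form that can be reached from `form` by following links.
    In reporting terms, these are the records that can be joined in a report
    without exporting and stitching data together by hand."""
    seen, stack = set(), [form]
    while stack:
        current = stack.pop()
        for child in links.get(current, []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

print(reachable("Participant Profile"))
# Enrollment, Progress Note, Treatment Plan, Discharge, Outcomes Assessment (order may vary)
```

Drawing the existing configuration this way is what exposes structural gaps, such as a missing enrollment record, which is exactly what the Spec project later surfaced for Plummer.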

So, why is this important?

[SLIDE – 22]

Forms and links determine how you can report on your data, specifically:

what kinds of reporting are possible or not possible,
how those reports are configured, and
how efficient it is to build a report with that data.

The right data model expands Apricot’s potential for outcomes reporting, dashboard analytics, and user workflow. Forms and links are the whole ballgame in Apricot.

[SLIDE – 23]

The spec for Plummer was a three-part project that took about 30 days to complete.

First, we developed an ERD of the system as it exists now. We call this the “Existing ERD” or status quo ERD. That process alone often identifies immediate concerns.

There were four major takeaways from Plummer’s existing system:

The system wouldn’t scale long-term because each program had its own set of forms and its own set of reports. Each program addition or modification was being done in silos. The reporting Sarah wanted to do long-term just wouldn’t be possible without very heavy lifting.
With no universal program enrollment form and only a single discharge form shared by all programs, we were limited in our ability to both implement new programs and report by specific program, especially for youth that crossed programs. Discharge reporting was key for Plummer; we wanted to know outcomes at discharge with respect to relational, legal, and physical permanency.
No workflow systems to govern the treatment planning and assessment process. We were able to track goals, but the system didn’t alert users when the next interval was due. Users had to remember what to do next. Data quality was a concern.
When assessing a list of 30 top reports needed to measure Plummer’s model in Apricot, we found that a majority of them needed to be exported to Excel and then run through additional analysis before we could even answer the model’s questions. Apricot was essentially a data collection portal, not an analytics platform. This wasn’t ideal.

[SLIDE – 24]

Next, we developed a “Vision ERD” of what the ideal Apricot configuration would look like. This required a deeper dive into the Plummer intervention and outcome model. With Sarah’s help, we also gathered feedback from users on what was working and not working.

With two ERDs in hand, we concluded the Spec project by comparing the ERDs and noting differences.

Generally, the Spec process supports one of two directions. Either the differences between the two ERDs highlight a need for redesign and the potential of a transition to a new configuration, or they highlight how we can make incremental changes to an existing system to align it more closely with the vision. In Plummer’s case, we identified a clear need to redesign, and unsurprisingly, it centered around the concept of a “program enrollment.”

The decision to proceed with redesign was now in play.

[SLIDE – 25]

The first investment for the Spec was easy…and that’s really the point. We needed a low cost, low risk project to get started. But the next investment for redesign wasn’t as easy.

We knew redesign could be a substantial investment (remember, price point was one of the driving factors when we started with Apricot), but didn’t know where that redesign would land.

We cannot make light of the decision to redesign. For us, it was an “Oh [blank] moment” because the investment was unexpected, not budgeted, and significant.

[SLIDE – 26]

As we saw it, there were two options:

Even though we had gaps in our data system, staff morale was up, youth were more engaged in their goals and everyone understood our mission and how they related to it. Our culture had shifted. Our system did track the core of what we wanted, but it was inefficient and wouldn’t scale long term. It was likely that if we stayed with our existing platform we would sacrifice expansion of our model in order to administer, manage, and implement our data system. For me personally, it was a decision of whether I should be managing a database or analyzing data and facilitating evaluation. For the organization, it was whether we wanted to staff up around database administration or if we wanted to keep resources focused on programs. There was a cost to the inefficiencies we faced with the current system and we had to be honest about that cost.
While a redesign carried a significant cost that was neither expected nor budgeted, redesign provided the opportunity of having a “world class” data system, one that could grow with our programs, support staff with the nuance and structure of our outcomes model, and efficiently report on the questions we wanted to answer.

It was a pivot point.

Leadership saw the benefit of a one time expense to redesign the system on the front end in order to free up time on database administration on the back end, thus allowing us to continue to develop outcomes management and not be mired in administration and reporting.

This is where we did something unexpected, especially when working with a consultant. We shared our project constraints with Jeff: budget, timeline, the whole story. We’re glad we did, too, because that opened the conversation to new options that we hadn’t thought of before and didn’t think would be possible.

[SLIDE – 27]

There were two options to consider as part of the redesign.

One option was to redesign all programs at once and simultaneously migrate the historical data to the new format. This was going to be a massive project and ended up being too expensive, with too aggressive a timeline, to consider. In my opinion, this type of project is rarely a good fit and is generally more aggressive than prudent.
The second option was a staggered implementation approach that bought time for Plummer to get closer to their Fiscal Year break. We would implement two programs first and then continue with two additional programs in staggered timelines, all happening before the new Fiscal Year. Then the kicker…we wouldn’t migrate a single record. No data migration. Instead we would start fresh on the new Fiscal Year with a new system and archive everything historical in its current location.

My recommendation was to stagger implementation and archive historical data in place. We had about seven months before the Fiscal Year break and to me the timeline aligned nicely to launch Plummer’s 2.0 version of Apricot on the Fiscal Year start.

Of course, Sarah had questions about the no-data-migration and “archive in place” strategy, especially around reporting and user access. Understandable, especially since no data migration sounds like a halfway strategy when considering a redesign. With assurances that we could set up permission sets to give users view-only access to historical data, and with an explanation of the short-term pain that would be felt in reporting across the old and new systems, we prepared for the next steps.

This kind of flexibility makes it possible to move forward within the real life constraints of organizational priorities. It meant that for a period of time, for programs that already existed on Apricot, we would have to pull from two data sets to get a complete picture for all youth.

While not ideal, the short-term pain of split data sets was significantly less than the potential data migration cost. And today…over eighteen months later, it worked and is still working for us.

There is no making light of the decision we made to invest in redesign. Our team held multiple meetings, there was lots of back and forth, and we had to be honest about what we wanted long-term for the organization. In the end, we prioritized our outcomes measurement objectives and the data system. There were three takeaways.

[SLIDE – 28]

#1 – Define what’s what

It is best to get a clear picture of what’s what in order to make informed decisions on next steps.

It is understandable to be skeptical of assessments. Most assessments sit on the shelf, aren’t acted upon, and waste valuable resources. That’s why we simplify the overall assessment into three steps.

Define the current state or status quo
Define a vision of the future opportunity
Compare the two to develop the roadmap

If the comparison validates your challenges with opportunities for resolution, then the assessment has done its job. Learning what’s what is a low cost first step that grounds decision making and lays all the pieces on the table.

[SLIDE – 29]

#2 – Run a complete comparison of value

All investments are evaluations of value. There is a cost to redesign, but there is also a cost to not redesign. There are benefits on both sides of that equation too. Building the case is essential.

Generally speaking, when we are building a case for software, in either new or redesign mode, we look at costs and benefits in four domains: efficiency (or inefficiency) of time, pure dollar costs, accuracy of decision making (like having clearer information), and increases or decreases in emotional energy, like stress or focus.
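
As a purely hypothetical illustration of that comparison, the sketch below totals annualized costs and benefits across the four domains for a “redesign” versus a “status quo” scenario. The numbers are invented placeholders for the structure of the exercise, not figures from Plummer’s actual decision.

```python
# Hypothetical dollar-equivalent values per year; negative = cost, positive = benefit.
scenarios = {
    "redesign":   {"time": 12000, "dollars": -20000, "decision_accuracy": 5000, "emotional_energy": 3000},
    "status_quo": {"time": -12000, "dollars": 0, "decision_accuracy": -5000, "emotional_energy": -3000},
}

# Compare the net value of each option across all four domains.
for name, domains in scenarios.items():
    print(name, "net:", sum(domains.values()))
```

The point of the exercise is not precision; it is forcing both sides of the equation, including the cost of not redesigning, onto the same page.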

You can find more information about building a business case for software and these four domains on our website at the link provided.

[SLIDE – 30]

#3 – Total data migration isn’t the only option

When considering redesign or transition to new features, the most common response we receive on data migration is “we want total” … “migrate all of it.” However, with some exploration we learn that “total” data migration isn’t needed.

Consider a transition on a Calendar Year or Fiscal Year break and coping with split reporting for a short-term period (1-2 years). This lowers the cost of redesign as well as the timeline for redesign. We advise most of our clients to be realistic with data migration, to think about what data they use and how they use it, and to consider alternatives like an “archive in place.” You’ll often find, like Plummer did, that migrating the data isn’t as important six or twelve months down the line. And, on the chance it becomes a priority in the future, the data is sitting right there in its archived location to be migrated at that time, when urgency is highest.

[SLIDE – 31]

Once the decision to re-implement was made and our project kicked off, Sarah was clear that the system should facilitate the analytics we needed for the program model, but equally important was configuring the system to reinforce the clinical practice.

This, like so many other parts of Plummer’s journey, took time. Sarah and I met weekly. Although the Spec project identified a general architecture for the “Vision Apricot,” we continued refining assumptions and clarifying vocabulary.

We didn’t get the design “right” until the fourth iteration.

During this process, we identified a handful of design choices that were essential to Plummer’s goals.

While we can’t cover Plummer’s entire system or every step we went through to get where we are today, there are some highlights worth noting.

[SLIDE – 32]

#1 – Universal design

This was a key assumption going in and became the foundation for saving time on Apricot database administration, reporting, and future program implementation.

Plummer’s core design supports cross-program data tracking with the ability to use program filtering in reports.

We currently operate six programs across residential, independent living, and foster care with a…

Shared referral form for incoming referrals and program eligibility reporting
Shared Universal Facesheet with standardized demographics for all programs
Shared Intake and Discharge form so we could report on the totality of participation at Plummer in a single deck of reports
Shared Treatment Plan, Progress Notes, and Outcomes Assessment so all programs operate from the same intervention and outcome model

The most technical component of the universal design was the Intake and Discharge, also known as a program enrollment form.

[SLIDE – 33]

We’ve touched on the program enrollment form during this webinar and why it was a challenge in Plummer’s initial design. In total, the new design would have a single enrollment form to track all program intakes and exits, with the ability to track and report on multiple, simultaneous, and repeat enrollments; track continuity of service; report globally across all programs; maintain operational consistency; and set the system up for growth into more programs long term.
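
A minimal sketch of what a universal enrollment record enables is shown below, using a simple in-memory list of episodes. The field names, programs, and helper functions are illustrative assumptions, not Apricot’s actual schema or Plummer’s configuration.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Illustrative only: one enrollment record per program episode, per youth.
@dataclass
class Enrollment:
    youth_id: str
    program: str
    intake_date: date
    discharge_date: Optional[date] = None  # None means still enrolled

enrollments = [
    Enrollment("Y-001", "Residential", date(2018, 7, 1), date(2019, 2, 15)),
    Enrollment("Y-001", "Independent Living", date(2019, 2, 16)),  # consecutive program
    Enrollment("Y-002", "Foster Care", date(2019, 5, 3)),
]

def active_by_program(records):
    """Count open enrollments per program, the kind of cross-program census
    that a single universal enrollment form turns into one report."""
    counts = {}
    for e in records:
        if e.discharge_date is None:
            counts[e.program] = counts.get(e.program, 0) + 1
    return counts

def history(records, youth_id):
    """All episodes for one youth, including repeat or consecutive enrollments."""
    return [e for e in records if e.youth_id == youth_id]

print(active_by_program(enrollments))
print(history(enrollments, "Y-001"))
```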

To start, Sarah and I worked closely to parse data requirements for each program to find shared and common fields. This crosswalk was a big task. We were able to consolidate intake and discharge around the outcomes areas of permanency, preparedness, and community. However, we still had unique data sets in two programs: foster care and residential. Some of those requirements were compliance related, so we couldn’t overlook them. Our intake form was getting too big and short of more conditional logic, we needed another way to capture program-specific intake data.

[SLIDE – 34]

We adopted an “enrollment supplement” approach. We hard-coded the data system to allow one and only one supplement per enrollment to ensure accurate reporting with no duplicates. This gave us the best of both worlds, a universal design that honored program specificity while still maintaining a streamlined workflow experience in Apricot. With data quality reporting we know which intakes should have a supplement but none exists and are able to prompt users to complete them.
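
The “one and only one supplement per enrollment” rule can be pictured as a small data quality check like the sketch below. It is a conceptual illustration of the exception report described above, not how Apricot itself enforces the constraint, and the program names and IDs are made up.

```python
# Conceptual sketch of the supplement data quality check.
# Suppose certain programs require a program-specific intake supplement.
REQUIRES_SUPPLEMENT = {"Foster Care", "Residential"}

# (enrollment_id, program) pairs and the supplements filed against them.
enrollments = [("E-100", "Foster Care"), ("E-101", "Residential"), ("E-102", "Independent Living")]
supplements = {"E-100": 1, "E-102": 0}  # enrollment_id -> number of supplements on file

def supplement_exceptions(enrollments, supplements):
    """Flag enrollments that should have exactly one supplement but do not,
    so users can be prompted to complete (or correct) them."""
    flagged = []
    for enrollment_id, program in enrollments:
        count = supplements.get(enrollment_id, 0)
        if program in REQUIRES_SUPPLEMENT and count != 1:
            flagged.append((enrollment_id, program, count))
    return flagged

print(supplement_exceptions(enrollments, supplements))
# [('E-101', 'Residential', 0)]
```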

[SLIDE – 35]

The integration of the Plummer workflow with Apricot was the breakthrough of the design, but it took the fourth iteration to get there. Each program has slight variations on the intervals used to trigger treatment planning events and reviews. The hardest part of the design was keeping a universal structure while aligning those different intervals.

We came up with a workflow diagram, shown here, that identified the steps required at each program interval. We used program-conditional logic to identify the interval, which we’ll show in a bit, and with that Apricot was ready to mirror Plummer’s day-to-day workflow.

This workflow resembles case management practice with a recurring interval for treatment planning, goal setting, and assessment followed by regular evaluation of progress toward the treatment plan goals.
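
To show how program-conditional intervals can drive a schedule like this, here is a small sketch that computes the next treatment planning date from a program-specific interval. The interval lengths and program names are made-up examples, not Plummer’s actual settings.

```python
from datetime import date, timedelta

# Hypothetical interval lengths in days; each program can differ.
PROGRAM_INTERVALS = {"Residential": 30, "Foster Care": 90, "Independent Living": 60}

def next_due(program, last_plan_date):
    """Next treatment planning event, based on the program's interval."""
    return last_plan_date + timedelta(days=PROGRAM_INTERVALS[program])

def is_overdue(program, last_plan_date, today):
    """True once today is past the program-specific due date."""
    return today > next_due(program, last_plan_date)

print(next_due("Foster Care", date(2019, 1, 10)))                      # 2019-04-10
print(is_overdue("Residential", date(2019, 1, 10), date(2019, 3, 1)))  # True
```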

[SLIDE – 36]

The workflow identified the form structure. It also presented an opportunity. Using the workflow model, could we notify staff of what to do next? With that question, the Active Status Dashboard was born.

The Active Status Dashboard is a trigger-based, to-do list and caseload report showing:

Caseload by staff and by program
Progress review/updates to do
Treatment plans to do
Matrix assessments to do

The Active Status Dashboard is conditional, which means that a task remains due until it is complete. There are no workarounds. The report provides a clear action list for the entire Plummer team.
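
The conditional logic behind the dashboard can be sketched as a simple dependency check, assuming hypothetical task records: a treatment plan only appears as “to do” once the prior interval’s progress review is complete, and a task stays on the list until it is done. This is an illustration of the idea, not Apricot’s report configuration.

```python
# Hypothetical sketch of the dashboard's conditional to-do logic.
# Each youth has, for the current interval, a progress review and a treatment plan flag.
tasks = {
    "Y-001": {"interval": 3, "review_done": True,  "plan_done": False},
    "Y-002": {"interval": 2, "review_done": False, "plan_done": False},
}

def to_do(tasks):
    """Build the action list: reviews first, plans only after the review is complete."""
    items = []
    for youth, t in tasks.items():
        if not t["review_done"]:
            items.append((youth, f"Progress review/update for interval {t['interval']}"))
        elif not t["plan_done"]:
            items.append((youth, f"Treatment plan for interval {t['interval'] + 1}"))
    return items

for youth, action in to_do(tasks):
    print(youth, "->", action)
# Y-001 -> Treatment plan for interval 4
# Y-002 -> Progress review/update for interval 2
```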

Here are some screenshots of the Active Status Dashboard highlighting its key components.

[SLIDE – 37]

First, a simple caseload report grouped by program. We have another section grouped by staff assigned too.

[SLIDE – 38]

Second, progress review/updates to do shows when the task is due, how long it has been overdue, and a quick link to complete the step.

[SLIDE – 39]

Third, treatment plans to do shows when the task is due and only triggers when the progress review/update for the previous interval is complete, which matches the workflow illustrated earlier.

[SLIDE – 40]

Last, outcomes assessments to do shows when the task is due, sorted in priority order by most overdue, and with quick links to complete the task.

[SLIDE – 41]

In addition to the active status dashboard, we also built assessment to-do lists and conditional data quality reports. These systems help the Plummer team adhere to compliance standards, funding requirements, and clinical treatment planning. They keep everyone on track and ensure we have a complete and total data set when it comes time for running reports.

[SLIDE – 42]

Another feature of Plummer’s Apricot design is correlated data sets.

Plummer tracks and reports on the following indicators to create a complete picture of a youth’s progress.

Behaviors
Services
Milestones
Domain area ratings
Goal status

[SLIDE – 43]

This is a dense list of indicators and our challenge during redesign was correlating them for reporting by youth, program, and interval.

The correlation at interval was key, which is shown on the base Tx Plan record here. Each treatment plan identifies an interval which then correlates with each indicator for aggregate reporting.

[SLIDE – 44]

Behaviors, services, and milestones are part of the Progress Note. We needed a way to identify multiples per Progress Note because youth may present more than one behavior and staff may provide more than one service. We used a many-to-many dynamic checkbox option that allows us to count the number of times an “event” happens along with its relation to overall treatment planning by interval.
Domain area ratings measure progress and need to be correlated to presenting behaviors, services, and milestones. In addition, we needed to chart domain area progress over time to see trends. We used a pre/post structure that allows for multiples of the Outcomes Assessment, one per interval. From there, Sarah was able to overlay the home-grown Matrix tool to round out the form design.
Goal status was the final piece of the puzzle. As we discussed earlier, the workflow of treatment planning meant that we would complete a progress review/update at the conclusion of each interval before starting a new treatment plan. That gave us the opportunity to revisit goals assigned at the start of a treatment plan interval to determine if they were complete, incomplete, or needed attention.

[SLIDE – 45]

The correlated data set is the aggregate of these components by interval. Sarah wants to know when, how long, and based on what factors participants turned corners in progress. This structure was designed to support that.

To analyze these correlations, we developed an aggregate parser that leverages Apricot standard reporting (a simplified sketch follows the list below). The parsed data supports:

Evaluation of progress at interval
Correlation of behavior, milestone, and intervention/service concentration
Intensity and level of service
Goals achievement per domain
…and we have only begun to scratch the surface
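
As a simplified sketch of this kind of parsing, the example below groups hypothetical indicator records by youth and interval and tallies them side by side. It illustrates the idea of correlation at interval, not the actual parsing tools or report templates.

```python
from collections import defaultdict, Counter

# Hypothetical indicator records: (youth, interval, indicator_type, value)
records = [
    ("Y-001", 1, "behavior", "conflict"),
    ("Y-001", 1, "service", "family outreach"),
    ("Y-001", 1, "rating", 2),
    ("Y-001", 2, "behavior", "conflict"),
    ("Y-001", 2, "service", "family outreach"),
    ("Y-001", 2, "rating", 3),
]

def correlate_by_interval(records):
    """Group indicators by (youth, interval) so behaviors, services, and
    domain ratings can be read together for the same period."""
    grouped = defaultdict(lambda: {"counts": Counter(), "rating": None})
    for youth, interval, kind, value in records:
        key = (youth, interval)
        if kind == "rating":
            grouped[key]["rating"] = value
        else:
            grouped[key]["counts"][(kind, value)] += 1
    return grouped

for (youth, interval), data in sorted(correlate_by_interval(records).items()):
    print(youth, "interval", interval, dict(data["counts"]), "rating:", data["rating"])
```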

Plummer recently upgraded to Apricot 360. We are currently working on migrating the formats of our parsing tools into templated reporting in Apricot Insights.

It is important to note that we are employing a gradual approach to reporting that reinforces understanding of the data before proceeding to more report development. This has allowed us to steadily increase capacity in this data model over time. That said, we are far from finished in this area.

[SLIDE – 46]

As part of the redesign, we identified that some resources tracked per youth were global and not program-specific. We don’t have time to cover the full scope of this feature during this webinar, but the Team form was a highlight because of its application to the permanency practice without specific relationship to a program enrollment.

Circling back to the definition of family mentioned earlier, we wanted to keep logs of the people that touch a youth’s life. We profile each individual without creating a “profile” for them in Apricot and allow Team members to apply globally to a youth instead of specifically to a program. We recognize that a youth’s relationships aren’t program-specific. Generally speaking, profiles of entities, like people, are built as Tier 1 (T1) forms, not Tier 2 (T2) forms. However, in this case, supplemental forms like the Team form add context to treatment planning without complicating the Plummer system with non-essential T1 profiles.

[SLIDE – 47]

I have to credit Plummer for their continued investment in Apricot as an enabling technology, even as the redesign project was wrapping up. During implementation, we identified what could be considered “nice-to-have” functionality to support end users with treatment planning and data visibility. I define data visibility as the ability to surface data that is relevant in time, like a particular event or interval, or even the ability to surface data on demand. Plummer didn’t see these suggestions as supplementary; they saw them as “need-to-have” to achieve the “ultimate” of what their Apricot could be. Redesign was prompted by the idea of having a “world class” data system, and Plummer stuck with that principle after the redesign project to build additional custom features. Three of them stood out.

First, the Youth Summary Report.

The Youth Summary Report has become a fan favorite at Plummer, so much so that it goes by the acronym YSR, and demand for more data has expanded the report into six parts. Yes, the report is so big that we had to split it into six parts.

This report searches for a single youth and then creates a profile for that youth based on their total participation in all Plummer programming. The YSR has summaries for profiles and treatment planning, and one part for each of the three outcome areas.

[SLIDE – 48]

Next, we developed leadership reports.

This was originally envisioned as a single report but has since expanded into a category of reports called Leadership Team.

The Leadership Team report stack is now four parts, each with an A-D categorization, looking at aggregate indicators for active youth, discharged youth, and youth that participated within a given range.

The concept of a leadership scorecard is not unique, but this batch of reports is evidence of a “practice what we preach” mentality at Plummer, especially around data and outcomes. These reports help leadership monitor key indicators for active youth, discharged youth, incoming referrals, and historical participation in a cohort style format. What took days and weeks to prepare before leadership meetings, now takes significantly less time.

[SLIDE – 49]

We developed printable reports.

Plummer was running a number of processes via Word merge and manual aggregation, like typing data from Apricot into a Word template. These recurring tasks were largely to create cheat sheets for staff. To simplify this, we moved to Apricot reports with custom filters to create the template. All users can run these reports.

The Active Census is a perfect example of this type of “printable” report. Again, there isn’t enough time to dig into all of the details on this style of report, but it illustrates how the redesign is saving time.

[SLIDE – 50]

Lastly, we are using Apricot to support accreditation.

This includes the following tools:

A report that supports randomized case sampling each interval
A guide for how to select random cases based on population size (a sampling sketch follows this list)
A form that’s used to track case reviews in process and completed
Summary reports that aggregate data to efficiently complete the case review
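
One way to picture the randomized sampling piece is the sketch below, which draws a random review sample whose size scales with the active population. The size thresholds and case IDs are placeholders, not the actual guide Plummer uses.

```python
import random

def sample_size(population):
    """Placeholder sampling guide; a real guide would set these thresholds."""
    if population <= 10:
        return population              # review every case in very small programs
    if population <= 50:
        return 10
    return max(10, population // 10)   # roughly 10% for larger populations

def draw_sample(case_ids, seed=None):
    """Randomly select cases for review this interval."""
    rng = random.Random(seed)
    return rng.sample(case_ids, sample_size(len(case_ids)))

cases = [f"CASE-{i:03d}" for i in range(1, 43)]  # 42 hypothetical active cases
print(draw_sample(cases, seed=7))
```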

Instead of paper forms, we are able to efficiently complete a case review using pre-defined procedures and tools in Apricot.

We’ve completed over 20 reviews since getting started late last year and so far we are finding great success with Apricot for this task.

[SLIDE – 51]

By the numbers, our new Apricot includes:

49 active users and expectations for future growth
6 programs operating within a shared “universal” design
100% trained staff in 6 programs, all documenting toward outcomes
100% trained staff on all shifts aware of and engaged in client goals

While numbers are important, there are other advances worth noting:
Case notes that reflect clinical interventions and behavioral observations aligned with the Plummer program model, and the ability to add new programs
A new program that allows twenty subcontractors with partial guest access to Apricot forms for tracking permanency for court-related cases.
We also have several data quality and management reports. The data quality reports allow us to maintain high standards for both timeliness and quality of documentation. The management reports provide real time data on client and program progress.

[SLIDE – 52]

Having been part of this process from the beginning, I have witnessed a dramatic culture shift in both clinical competency as well as respect for data. What started with eye rolls and disinterest in data has transformed into positive feedback from staff at multiple levels. It is awesome!

This webinar highlights our “wins” and likely implies that we have Apricot, our model, and the deployment of both all figured out. The reality is that our process has not been a straight, unencumbered path. We want to share where we are today looking at the positives and areas we need to improve to show that our integration of Apricot, our model, and practice is an ongoing effort. We are far from complete.

[SLIDE – 53]

The positive results from our journey include:

We have an Apricot structure for data entry and data visualization that encourages and reinforces staff adherence to our intervention and outcome model…and staff are increasingly able to see the connection between interventions and outcomes.
We have fewer reports, because they can be filtered by program. We are moving from anecdote to “why” questions about program outcomes.
We have way more time to do the fun stuff! We have gone from staff complaining about data to them asking for specific reports or suggesting new ones. And I am able to begin to focus on real evaluation studies.
New program implementations overlay onto our existing Apricot structure. In the past, we needed to implement new forms and reports for each program. Now we are able to incrementally grow our data system with less effort.
Our data quality procedures are much improved. We keep a list of cleanup tasks and have recently engaged program directors in the process of regular checks. We are using these checks to reinforce supervision.

[SLIDE – 54]

We still need to improve in a few areas.

Adjusting to changes in the vocabulary of data fields and categories. Occasionally the state or a funder will change what they call something, for example, how race or gender are categorized. These changes can affect existing data and often require mini data migrations to realign forms for both historical and future reporting (see the sketch after this list).
Seamless implementation of new capabilities. We do not always have the resources, in time or consistent attention, to give Apricot the constant focus it needs. This can create disjointed implementation. We continue to work on being patient in deployment when it feels like we are always sprinting.
With new programs and staff, or changes to processes, we need to find better ways to ensure all staff understand how to use Apricot. We are in the process of designing videotaped training modules to share as needed. These will go hand in hand with written user guides.
Additionally, our data policies and procedures were not as inclusive as necessary to reinforce the system. As part of the application for accreditation, we have codified much of the data documentation and quality requirements and are now in the process of rolling them out across programs.
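
As an illustration of what one of those mini data migrations involves, the sketch below remaps old category values to a new vocabulary and flags anything without a mapping so it can be reviewed by hand. The category names are abstract placeholders; the real field names and values would come from the state or funder.

```python
# Abstract example of a vocabulary remap; real categories (for example, how
# race or gender are coded) would be defined by the state or funder.
OLD_TO_NEW = {
    "Old Category A": "New Category A",
    "Old Category B": "New Category B1",
}

def remap(values, mapping):
    """Translate old values to the new vocabulary and collect anything
    unmapped so it can be reviewed instead of silently dropped."""
    migrated, unmapped = [], []
    for v in values:
        (migrated if v in mapping else unmapped).append(mapping.get(v, v))
    return migrated, unmapped

print(remap(["Old Category A", "Old Category C"], OLD_TO_NEW))
# (['New Category A'], ['Old Category C'])
```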

[SLIDE – 55]

Given all of the above, where are we going next?

Getting at that nagging issue of accountability. We are now focused on developing supervisory tools necessary to link clinical practice to HR expectations. We want to get more tools for the day to day practice in the hands of supervisors and frontline staff.
Report development and analytics aren’t necessarily a weakness we need to address, but rather a constant topic for capacity building. We’ve begun the process of moving reports into more advanced reporting tools, which are allowing us to better correlate youth indicators to analyze trends for interventions, behaviors, and milestones.
We are in the midst of applying for accreditation and are refining Apricot to support our procedures. We hope to continue improving the total time it takes to complete a review.
We are working on reporting that itemizes the financial benefit of our intervention, in order to answer questions posed by various stakeholders, from investors to board members to finance.
We want to pursue targeted evaluation studies to document the impact of specific evidence-based interventions. We are also excited about joining peer organizations in a joint data collection and analysis project to study the impact of our permanency efforts. We think peer partnerships will give us more data to test the assumptions behind our intervention and outcomes model and advocate for a better way to serve young people in child welfare.

[SLIDE – 56]

The goal for this webinar was to show how we’ve approached tracking and reporting on outcomes using Apricot software.

Our experience reinforced our belief that design of our intervention and outcome model comes first, design of the system comes second.

This ten-step journey highlights that principle.
Steps 1-4 lay a foundation
Steps 5-9 support the structure of the model
Step 10 designs the system, in our case Apricot

[SLIDE – 57]

This process, designing our model, implementing and then re-implementing Apricot, and now administering and managing both the model and Apricot, has been and continues to be time-intensive.

We knew it would be an investment, but it was difficult to understand the full scope of that investment until we got into it. We invested more in our data, analytics, and people systems than we originally planned, but we have found it necessary to progress in our work.

The steps we’ve taken have been worth the investment because we remained committed to a belief in our program and rigorous in testing our assumptions.

[SLIDE – 58]

That said, we’ve found this journey both exciting and rewarding. When innovating program design, you have to go all in because you are learning all the time. That’s why we don’t expect our learning to stop. In turn, our Apricot is equally dynamic, evolving with new conditions and explorations of our outcomes. The integration between these two and the fluidity of learning in our model inherently means “change.”

Although this “change” produces unknowns in next steps, our team is getting closer and our culture is more engaged as we work together to answer the difficult and interesting questions about our work.

[SLIDE – 59]

Thank you for attending today’s webinar and exploring this case study of Plummer Youth Promise. Sarah and I are thrilled that you could join us.

Shortly, we will open up for questions. Before that, a few housekeeping items.

As a reminder, a recording of today’s webinar and the slides will be made available to you following today’s session. Please be on the lookout for an email with those resources.

[SLIDE – 60]

If you enjoyed today’s presentation and our approach to Apricot software, let’s find some time to connect. Sarah’s and my doors are open. We’d love to learn more about your organization, your program model, and your Apricot. Feel free to send us an email.

Thank you for attending today.

Let’s open our time up for questions.
