Does It Work for Me? Supporting Self-Experimentation of Simple Health Behavior Interventions
Description
Many individual-level behavioral interventions improve health and well-being. However, most interventions exhibit considerable heterogeneity in response: what is effective on average may not be effective for a specific individual. From an individual's perspective, many healthy behaviors appear worth trying, yet few existing tools help people identify the interventions that work for them personally.
One approach to supporting such personalization is self-experimentation using single-case designs. 'Hack Your Health' is a tool that guides individuals through an 18-day self-experiment to test whether an intervention they choose (e.g., meditation, gratitude journaling) improves their own psychological well-being (e.g., stress, happiness), whether it fits into their routine, and whether they enjoy it.
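For intuition, the sketch below shows how daily self-reports from a single-case experiment of this kind might be compared across phases. The 9-day baseline / 9-day intervention split, the stress-rating scale, and the simple difference-of-means and nonoverlap summaries are illustrative assumptions only; they are not the actual Hack Your Health design or analysis.

```python
# Minimal sketch of a single-case (A/B) comparison of daily well-being ratings.
# Phase lengths, rating scale, and summary statistics are assumptions for illustration.
from statistics import mean

# Hypothetical daily stress ratings (lower = better), one value per day.
baseline = [6, 7, 5, 6, 7, 6, 5, 6, 7]        # days 1-9, no intervention
intervention = [5, 4, 5, 3, 4, 4, 3, 4, 3]    # days 10-18, e.g., daily meditation

# Mean improvement from baseline to intervention phase.
effect = mean(baseline) - mean(intervention)

# Simple nonoverlap index: share of intervention days better than every baseline day.
nonoverlap = sum(x < min(baseline) for x in intervention) / len(intervention)

print(f"Mean improvement: {effect:.2f} points")
print(f"Intervention days better than all baseline days: {nonoverlap:.0%}")
```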
The purpose of this work was to conduct a formative evaluation of Hack Your Health, examining user burden and adherence and evaluating its usefulness in supporting decision-making about a health intervention. A mixed-methods approach was used, and two versions of the tool were tested with two waves of participants (Wave 1, N=20; Wave 2, N=8). Participants completed their self-experiments and provided feedback via follow-up surveys (n=26) and interviews (n=20).
Findings indicated that the tool had high usability and low burden overall. The average survey completion rate was 91%, and compliance with the protocol was 72%. Overall, participants found the experience useful for testing whether their chosen intervention helped them. However, there were discrepancies between participants' intuitions about an intervention's effect and the results of the analyses, and participants often relied on intuition and lived experience over the results when making decisions. This suggests that the usefulness of Hack Your Health in its current form may lie in the structure, accountability, and means for self-reflection it provides rather than in the specific experimental design or results. The evaluation also uncovered situations where performing interventions within a rigorous, restrictive experimental set-up may not be appropriate (e.g., when the goal is to assess how enjoyable an intervention is). Plausible design implications include longer experiment and phase durations; accounting for non-compliance, missingness, and proximal/acute effects; and strategies that complement quantitative data with participants' lived experiences of interventions to support decision-making effectively. Future work should explore ways to balance scientific rigor with participants' needs for such decision-making.
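As a rough illustration of the "account for non-compliance and missingness" implication, the sketch below drops days without a rating and excludes non-compliant intervention days before comparing phases. The data layout (records with hypothetical 'rating' and 'did_intervention' fields) is an assumption for illustration and does not reflect how Hack Your Health stores or analyzes data.

```python
# Sketch: exclude missing ratings and non-compliant intervention days before
# computing per-phase means. Field names and values are hypothetical.
from statistics import mean

days = [
    {"phase": "baseline", "rating": 6, "did_intervention": False},
    {"phase": "baseline", "rating": None, "did_intervention": False},   # missing rating
    {"phase": "intervention", "rating": 4, "did_intervention": True},
    {"phase": "intervention", "rating": 5, "did_intervention": False},  # non-compliant day
]

def phase_mean(phase):
    # Keep only days with a rating; for intervention days, also require compliance.
    usable = [d["rating"] for d in days
              if d["phase"] == phase
              and d["rating"] is not None
              and (phase != "intervention" or d["did_intervention"])]
    return mean(usable) if usable else None

print("Baseline mean:", phase_mean("baseline"))
print("Intervention mean (compliant days only):", phase_mean("intervention"))
```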