Meditation has been deeply impactful in my life, and I want others to experience the positive shifts that come with it. Reflecting on my nine years of practice, I realized that current tools fail to track progress objectively or adapt to a person’s real-time needs. As a regular user of the Headspace app, I’ve often wondered if I’m performing the technique correctly or if my experience is actually deepening over time. A major limitation with standard guided meditations is their reliance on fixed-interval reminders. If a user loses focus ten seconds after an awareness reminder, they might spend the next few minutes lost in thought rather than training their mind. This inspired me to create a system where awareness reminders could adapt to the user by detecting when focus is lost and gently nudging them back immediately. This approach helps sustain awareness longer, naturally decreasing the need for reminders as the user’s skill improves.
The U-M Library mini grant provided the support needed to make this happen. With the funding to purchase a Muse Athena EEG (electroencephalography) headset, I brought the idea to my graduate Human Computer Interaction (HCI) course and teamed up with Alexander Bartolozzi, Donald Lin, and Annus Zulfiqar. Together, we built Reflect, an EEG-powered app that uses machine learning to help people learn meditation faster. It tracks meditation states in real time, plays a gentle audio cue to restore awareness when the mind wanders, and dynamically adapts to the user's ability to maintain focus.
The Challenge of Internal Practice
Whenever I talk to people who haven't meditated, they tell me they feel like they don't do it right, that they lose focus too often, or that they feel completely lost and don't know where to start. Getting over that initial learning curve would make meditation far more accessible, and Reflect aims to do exactly that through adaptive monitoring and feedback. Because the practice is entirely internal, there is no feedback loop to validate the experience, and without real-time guidance it's easy to spend an entire session lost in thought without realizing it. Addressing this uncertainty presents a unique opportunity to make a tangible impact.
The University of Michigan acknowledges the mental health crisis its students face, which reinforces the need for effective, accessible mental health tools on campus. We know mindfulness is a powerful tool for wellbeing, but it's incredibly hard to stick with a practice when a student feels like they are failing or stuck on a plateau.
Existing options were either prohibitively expensive clinical equipment or consumer apps whose black-box scores offered no transparency. I saw a clear gap in the middle: a tool accessible enough for a student to use in their dorm room, yet transparent enough to be trusted. Our goal was to build a system that lowered the barrier to entry with affordable hardware while providing the kind of measurable, honest feedback that actually facilitates learning.
Building Reflect
To solve this, we built a system that combines consumer hardware with advanced machine learning. We use the Muse Athena headset, which has four electrodes that sit against the forehead and behind the ears to detect brainwave activity. This data is fed into a machine learning model that we trained to recognize three distinct states: focused attention, open monitoring, and a neutral baseline.
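For readers curious about the modeling side, here is a minimal sketch of how a three-state classifier along these lines can be set up. The label names and hyperparameters are illustrative placeholders rather than our exact configuration, and random data stands in for real EEG features so the snippet runs on its own.

```python
# Minimal sketch of a three-state EEG classifier (illustrative labels and
# hyperparameters, not our exact configuration).
import numpy as np
from xgboost import XGBClassifier

STATES = {0: "neutral", 1: "focused_attention", 2: "open_monitoring"}

# X: one feature vector per one-second EEG window; y: the labeled state.
# Random data stands in here so the sketch runs end to end.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 64))     # 300 windows, 64 placeholder features
y_train = rng.integers(0, 3, size=300)

model = XGBClassifier(objective="multi:softprob", n_estimators=200, max_depth=4)
model.fit(X_train, y_train)

new_window = rng.normal(size=(1, 64))    # features from one fresh one-second window
print(STATES[int(model.predict(new_window)[0])])
```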
The most critical part of the workflow is the calibration step. Because early tests showed that generic models struggled with the variability of different head shapes and physiology, we require a one-minute calibration to fine-tune the model to the individual user.
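To make the calibration idea concrete, one common way to personalize a gradient-boosted model is to continue boosting from the pretrained trees using only the user's one-minute calibration windows. The sketch below shows that pattern with XGBoost's xgb_model argument; treat it as an assumption about how such a step could be wired up, not a verbatim copy of our pipeline.

```python
# Sketch of per-user calibration: add a few boosting rounds trained only on
# the current user's ~60 seconds of labeled windows.
from xgboost import XGBClassifier

def calibrate(pretrained: XGBClassifier, X_calib, y_calib) -> XGBClassifier:
    """Fine-tune the generic model to one user's baseline."""
    personalized = XGBClassifier(
        n_estimators=50,       # number of *additional* boosting rounds
        max_depth=4,
        learning_rate=0.05,    # gentle updates so the base model isn't washed out
    )
    # xgb_model makes training continue from the existing trees instead of scratch.
    personalized.fit(X_calib, y_calib, xgb_model=pretrained.get_booster())
    return personalized
```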
Once the session begins, Reflect monitors brain activity in real-time. If focus drifts from the goal state, the system plays a gentle bell to nudge the user back. After the session, a dashboard displays detailed results, showing exactly how much time was spent correctly in each meditative state.
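Sketched out, the session loop is essentially a once-per-second check with a little debouncing so a single noisy prediction doesn't ring the bell. The helpers get_latest_features and play_bell below are hypothetical stand-ins for our streaming and audio code.

```python
import time

def run_session(model, goal_state: int, duration_s: int = 600, patience: int = 3) -> None:
    """Play a gentle cue when the user drifts from the goal state."""
    off_target = 0
    start = time.time()
    while time.time() - start < duration_s:
        features = get_latest_features()   # hypothetical: newest one-second window
        predicted = int(model.predict(features)[0])
        if predicted == goal_state:
            off_target = 0
        else:
            off_target += 1
            if off_target >= patience:     # several off-target seconds in a row
                play_bell()                # hypothetical: gentle audio cue
                off_target = 0
        time.sleep(1.0)                    # one prediction per second
```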
Under the hood, we use the feature extraction method developed by Jordan Bird [1] to process the EEG data every second and feed it into an XGBoost classifier. We prioritized processing speed because the model needs to generate predictions instantly to ensure the feedback is truly real-time. Check out the attached demo video for a full walkthrough and explanation of the app!
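As a simplified illustration of that per-second step, the snippet below turns one raw EEG window into a fixed-length feature vector and times the computation. The real system uses Jordan Bird's much richer feature set [1], so these toy statistics only show the shape of the step (window in, vector out), not its actual contents.

```python
import time
import numpy as np

def window_features(window: np.ndarray) -> np.ndarray:
    """Toy features for one EEG window of shape (n_samples, n_channels).

    The real pipeline uses Bird's eeg-feature-generation features [1]; these
    simple per-channel statistics just illustrate window-in, vector-out.
    """
    stats = [window.mean(axis=0), window.std(axis=0),
             window.min(axis=0), window.max(axis=0)]
    return np.concatenate(stats).reshape(1, -1)

# A one-second, 256 Hz window from the four Muse channels.
window = np.random.default_rng(1).normal(size=(256, 4))

t0 = time.perf_counter()
features = window_features(window)       # classification adds only a few ms more
print(f"features computed in {(time.perf_counter() - t0) * 1000:.2f} ms")
```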
Results from Live Testing
When we first trained our machine learning models, the results were promising. After some model tuning, we saw strong predictive power on our training data. However, live testing revealed that a static model couldn't easily handle the messy reality of every user having different physiology and wearing the headband slightly differently. We shifted our focus to personalization, ensuring the system learned the user's specific baseline via the calibration step before trying to guide them. The difference was substantial. With fine-tuning in place, the feedback felt much more aligned with reality, making the tool helpful rather than distracting.
The most exciting part of the project was the demo day at the end of the semester, where we got to test the application on people who had never seen it before as they explored the poster session. The results were even better than we had hoped. Those who identified as regular meditators had much higher accuracy in matching the goal state, while those who had never meditated before took significantly longer to reach the goal state and struggled to keep it. We also observed fascinating individual differences where some users were naturally better at maintaining the open monitoring state, while others excelled at maintaining the focused attention state. This pattern suggests that some people are naturally more skilled at thought awareness, while others find it easier to maintain directed concentration.
On the user experience side, the feedback was encouraging. We designed Reflect to extend a meditation practice rather than replace it, and users found the interface clean and intuitive. It demonstrated that we don't need expensive clinical equipment to get meaningful insights into the mind. We just need smart software that acknowledges how unique every brain is.
Future Directions
I see multiple directions where this work could expand. First, the application could be extended to include long-term tracking, allowing users to visualize how their practice improves over weeks or months. To make the tool even more accessible, I envision developing a fully mobile version that is more user-friendly than the current prototype.
I also see potential in partnering with existing meditation platforms to offer optional EEG integration, allowing users to track progress without changing their preferred meditation content.
On the research side, I am interested in collecting more high-quality data to refine the prediction model, making the feedback feel even more natural. I would also love to explore integrating different meditation practices, such as visualization or loving-kindness, to see if the model can adapt to a broader range of techniques, allowing for even more accessibility to meditation for beginners and experienced users alike.
Research Support
The support from the U-M Library was helpful in navigating the research landscape, particularly as we moved outside our engineering comfort zone. An initial consultation directed us toward key resources like the U-M Research Guides for biomedical engineering and psychology. These guides helped us quickly identify relevant research areas and locate the most significant academic papers on EEG classification. This access to the university's research infrastructure was instrumental in finding suitable feature extraction methods, exploring classification models specifically for meditation states, searching for relevant EEG datasets, and deepening our understanding of meditation theory to better shape the project's goals.
Final Thoughts
Working on Reflect has been an incredibly rewarding journey. It allowed me to merge a decade-long personal passion for mindfulness with the technical skills I've honed in my graduate studies. Beyond the code and the hardware, there is a deep sense of fulfillment in building something that could genuinely help students navigate the mental health crisis.
This project would not have been possible without the U-M Library mini grant, which removed the financial barrier to accessing the EEG hardware we needed. I am also grateful to my teammates for their equal partnership in bringing this vision to life, and to Professor Ke Sun for his guidance throughout the EECS 596 course.
I am always looking for new ways to explore the intersection of technology and wellbeing. I invite anyone interested in meditation technology, student mental health tools, or potential collaborations to reach out. If you are interested in exploring the codebase, please feel free to email me at hicsea@umich.edu. I hope this project serves as a step toward making mindfulness practice more accessible and effective for everyone.
References
[1] Bird, Jordan. "eeg-feature-generation: Sliding-window feature generation for EEG (Muse-LSL compatible)." https://github.com/jordan-bird/eeg-feature-generation (Accessed 2025-12-12).
Team members
Sean Hickey
Alexander Bartolozzi
Annus Zulfiqar
Donald Lin