The Wishing Well - Final Project Documentation
- Alexandros Barbayianis

- Dec 12, 2025
- 8 min read
Live Project: The Wishing Well
GitHub Repo: barbajohnz/ConnectionsLab-Final-Wishing-Well
The Concept
I wanted to create an interactive sound installation where people can anonymously record wishes that blend together into this collective soundscape. The idea is that your individual voice gradually dissolves into a chorus of shared hopes through reverb effects and timed playback - so you start clear and personal, but slowly become part of something bigger.
Inspiration
The moment I thought of "wishing well," I immediately pictured the 1937 Snow White wishing well scene. I didn't do formal research on the history of wishing or anything, but that childlike, dreamy, fairy-tale vibe was exactly what I wanted to capture. That scene has this intimate, confessional quality where Snow White's singing for herself but also kind of... putting it out into the universe? That's the feeling I was going for.
The goal for this class was to prototype a version that lives fully online, but eventually I'd love to take this into a physical installation space, like a gallery or contemplative environment where people interact one at a time through a large screen with headphones. Real ritual-like experience.
Production Process

The Messy Beginning
I started super plain, bare-bones HTML and barely any CSS. I wanted to jump straight into making the interactive audio stuff work first, so I focused entirely on JavaScript and getting my server running. Once I could actually record audio and play it back on a button click, then I went back and styled the hell out of it.
This meant I kept bouncing between files - sometimes I'd realize I needed to remove a button from HTML and recreate it as a CSS/HTML asset instead, or I'd need to go back to JavaScript to add new functionality. The server/database setup was definitely the part I wanted to do least, so I did it first and ripped off the band-aid. Luckily I found a website that guided me through it. Looking back at all my code now, I'm still overwhelmed, but during the process it was just adding piece by piece - and one day I looked back and it was a fully developed website.
The Audio Journey
Everything audio-wise is handled through Tone.js - I decided to use one library for everything rather than mixing multiple dependencies. This included:
Recording with Tone.UserMedia and Tone.Recorder
Playback with Tone.Player
Effects with Tone.Reverb
Sound effects with synthesizers (MetalSynth for coin clinks, MembraneSynth for water splash)
The key breakthrough was figuring out how to make wishes gradually fade into the collective chorus. I used Tone.js's rampTo() method to slowly increase the reverb on personal wishes while staggering playback timing - so you hear your wish clearly at first, then it starts blending with others.
The Visual Development
After getting the audio working, I went hard on the visuals. I wanted that dreamy Snow White aesthetic, so I found assets on adobe stock (png cutouts) that matched and created three distinct screens:
Landing screen - Cloud curtains that animate apart, revealing the forest where the wishing well lives.
Permission screen - Trees and a wooden sign asking for audio permission, which also pull back to reveal the wish screen
Wish screen - Interactive coin that drops into the well to trigger recording
Design Evolution: In the process of making this, I realized it would tell a more cohesive story if the permission and wish screens stayed on the same background. So after you click the wooden sign and allow mic permissions, the permission screen assets (trees and sign) peel back like curtains - just like the clouds did on the landing screen. This reveals the same forest background that was obscured by the permission assets, and then the wish screen elements (well, coin, mushrooms, fairies) appear right there in that same space. The actual screen change only happens after you've made your wish and dropped the coin; that's when you transition to the listening/ripple screen at the bottom of the well.
I used CSS animations heavily: exit animations for clouds and trees, entrance animations for wish assets, coin-drop physics. Everything needed to feel whimsical but also maintain that contemplative, vulnerable space.
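As a rough idea of the approach, a curtain-style exit animation can be done with a single keyframe rule (the class names, distances, and timings below are illustrative, not my actual stylesheet):

```css
/* Illustrative curtain exit: the left cloud slides offscreen and fades out. */
.cloud-left.exit {
  animation: curtain-left 1.8s ease-in-out forwards;
}

@keyframes curtain-left {
  from { transform: translateX(0);     opacity: 1; }
  to   { transform: translateX(-110%); opacity: 0; }
}
```

A mirrored `curtain-right` keyframe handles the other side, and `forwards` keeps the clouds parked offscreen once the animation finishes.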
Major Challenges
1. Responsive Design (The Big One)
I originally built everything on my 1920x1080 desktop screen without thinking about other devices. When we user-tested in class, I saw everyone's screen showing completely different layouts: assets cut off, things in the wrong positions, button hovers not responding correctly. Total chaos.
The Solution: I went into DevTools responsive mode and manually adjusted EVERY asset for three screen sizes:
Desktop (1920x1080)
Tablet (768x1024)
Mobile (430x932)
I tested each screen type, adjusted positions/sizes in the browser inspector until they looked right, then documented all the changes (and later used AI to create a base style for each). This took forever because I had like 10+ elements per screen that needed individual positioning.
2. Audio Overlap Without Chaos
Getting multiple audio recordings to play simultaneously without sounding like a mess was tricky. Too much overlap was cacophony; not enough was boring.
The Solution: Staggered timing and gradual reverb increases. Each new wish starts clear, then the reverb slowly ramps up while previous wishes are already reverbed. The timing offsets (2-3 seconds between recordings) give just enough space to hear individual voices before they blend.
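The scheduling idea can be sketched as a pure function - everything here (names, numbers, the 0.2 wet step) is illustrative, not my actual project code, which does this through Tone.js objects:

```javascript
// Sketch of the staggered-playback plan: each wish gets a start offset
// and a reverb "wet" level that grows with its age, so the newest wish
// starts clear while older ones are already blended into the chorus.
function planPlayback(wishCount, staggerSeconds = 2.5, maxWet = 0.9) {
  const plan = [];
  for (let i = 0; i < wishCount; i++) {
    // Index 0 is the newest wish: it plays first and starts dry.
    const startWet = Math.min(maxWet, i * 0.2);
    plan.push({
      startAt: i * staggerSeconds, // seconds between recording starts
      startWet,                    // initial reverb mix for this wish
      targetWet: maxWet,           // rampTo() would glide toward this
    });
  }
  return plan;
}
```

In the real thing, each entry would drive a `Tone.Player` routed through a `Tone.Reverb`, with the wet signal ramped toward `targetWet` over the playback.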
3. Browser Autoplay Restrictions
Browsers don't let you autoplay audio without user interaction, which broke one of my initial ideas: an FMSynth ambient drone that gets louder the closer you get to the "Wishing Well Button" on the landing screen.
The Solution: Everything requires a click or interaction first; the "Start" button on the permission screen initializes the audio context. As for the drone, I scrapped it for now. I was going to bring it back later on the actual "wishing screen" but focused too much on styling.
Lessons Learned
Testing early matters. Building for one screen size then fixing it later was way harder than testing responsively from the start.
Single library is less headache. Using Tone.js for everything (recording, playback, effects, synthesis) kept things simpler than mixing Web Audio API, different audio libraries, etc.
DevTools is powerful. I learned to test and tweak parameters directly in browser DevTools before writing them into VS Code. Way faster than code, save, refresh, adjust, repeat.
Comment-out rather than delete. Instead of deleting code I wasn't using, I commented it out. This let me quickly tweak and revert without losing work and gave me the option to come back to it and use it differently.
Next Steps
If I keep working on this project, here's what I'd love to explore:
Physical Installation Enhancements
Arduino integration - Instead of clicking a coin on screen, I'd use a physical touch sensor. You'd hold an actual coin to record your wish, and when you let go, recording stops. Way more tactile and ritualistic.
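The hold-to-record interaction could be modeled as a tiny state machine, sketched here in plain JavaScript (the callback names are hypothetical; the same logic would run against either a touch sensor event or Tone.Recorder in the current web version):

```javascript
// Hold-to-record state machine: pressing the coin starts recording,
// letting go stops it. start/stop are injected so the audio backend
// (or hardware sensor) can be swapped without touching this logic.
function makeCoinRecorder(startRecording, stopRecording) {
  let recording = false;
  return {
    onHold() {
      if (!recording) {      // ignore repeated hold events
        recording = true;
        startRecording();
      }
    },
    onRelease() {
      if (recording) {       // ignore release without a prior hold
        recording = false;
        stopRecording();
      }
    },
    isRecording: () => recording,
  };
}
```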
Take it fully offline - Eventually, I might abandon the web version entirely and make this exist only as a physical installation piece in galleries or other spaces.
Audio Improvements
Ambient forest sounds - Add background atmosphere (crickets, wind, rustling leaves) that plays before recording starts, then fades back in when the wishes start playing back. Would really enhance that "deep in the woods" feeling.
Advanced layering techniques - Experiment with different reverb styles or echo patterns. Maybe longer decay times, or even spatial audio if I go physical.
User Experience Fixes
Fix the listening screen timing - Right now the alert pops up too quickly and cuts off the wishes before they finish playing. Need to calculate the actual audio duration and delay the prompt until everything completes.
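The fix could be as simple as computing when the last clip actually ends and waiting that long before showing the prompt. A sketch (the clip durations would come from the decoded audio, and the stagger value matches whatever offset the playback uses):

```javascript
// Given each clip's duration and the stagger between start times,
// return when the whole chorus finishes playing (in seconds).
// Hypothetical helper, not the project's existing code.
function totalPlaybackSeconds(clipDurations, staggerSeconds = 2.5) {
  let end = 0;
  clipDurations.forEach((duration, i) => {
    end = Math.max(end, i * staggerSeconds + duration);
  });
  return end;
}

// e.g. delay the prompt until everything completes:
// setTimeout(showPrompt, totalPlaybackSeconds(durations) * 1000);
```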
Listen-only mode - Let people hear existing wishes without having to record their own. Some people might just want to experience the collective soundscape without contributing.
AI Usage & Process
I used Claude throughout the project, primarily for three purposes: debugging, understanding existing code, and responsive design implementation.
How I Used AI
Debugging & Development
When building features, I'd ask Claude to add console.log() messages throughout my code so I could track exactly what was happening. For example, when testing audio recording, I'd request logs at each step: mic access > recording start > blob creation > upload. This helped me isolate exactly where things broke without spending hours guessing.
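The kind of staged logging I'd ask for looks roughly like this (the function names are made up for illustration; in the real project these steps map onto Tone.UserMedia, Tone.Recorder, and the upload fetch):

```javascript
// Staged logging sketch: one log per pipeline step, so when something
// breaks the last printed stage tells you exactly where to look.
async function recordWish(getMic, record, upload) {
  console.log("[1/4] requesting mic access...");
  const mic = await getMic();
  console.log("[2/4] recording started");
  const blob = await record(mic);
  console.log("[3/4] blob created:", blob.size, "bytes");
  await upload(blob);
  console.log("[4/4] upload complete");
  return blob;
}
```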
Understanding Code Examples
When I found code snippets on GitHub or documentation sites that I wanted to use, I'd paste them into Claude and ask "why is this working this way?" Most reference sites had code examples but only brief or shallow explanations. AI helped me actually understand what the code was doing and why it worked, rather than just copy-pasting blindly. This was especially useful for Tone.js features where the official docs were technical but not always intuitive.
Responsive CSS (The Big One)
This is where I used AI most heavily. After user testing revealed my site only worked on my screen size, I:
Manually tested in DevTools - Went through three screen sizes (desktop 1920x1080, tablet 768x1024, mobile 430x932) and adjusted every single asset position and size by hand in the browser inspector
Documented my adjustments - Copied all the updated CSS values for each screen size
Gave Claude the data - Sent Claude my manual adjustments for all three screen sizes along with my existing CSS file
Claude organized it - Claude wrote the complete responsive CSS using media queries, organizing my adjustments into:
Base styles (desktop)
@media (max-width: 1024px) (tablet)
@media (max-width: 768px) (mobile)
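The resulting structure follows the standard media-query pattern; the selector and values below are placeholders, not my actual stylesheet:

```css
/* Base styles (desktop, built at 1920x1080) */
.well { left: 42%; width: 320px; }

/* Tablet overrides */
@media (max-width: 1024px) {
  .well { left: 38%; width: 260px; }
}

/* Mobile overrides */
@media (max-width: 768px) {
  .well { left: 30%; width: 180px; }
}
```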
Why this worked: I did all the creative and design work myself, testing what looked good visually. Claude handled the tedious structural organization. The AI didn't design anything; it just turned my 60+ manual adjustments into proper, maintainable code structure.
Reflection on AI Usage
What helped:
Tedious organization work (turning my 60+ manual adjustments into structured CSS)
Explaining concepts step-by-step so I actually learned
Debugging without having to dig through documentation for hours
Decoding code examples into plain explanations
What didn't help:
Sometimes AI would suggest overly complex solutions when simple ones worked fine
I still had to understand the code to make tweaks and fixes
Demonstrating my understanding: The key was that I used AI as a tool, not a crutch. I manually tested everything, made all design decisions, and understood the final code enough to continue modifying it independently. The AI was like having a really fast CSS formatter, not a replacement for learning.
References & Resources
Learning Resources
Codecademy - Free HTML, CSS, and JS lessons for layout and positioning fundamentals
Core Audio Library
Tone.js (Web Audio Framework)
Specific Tone.js Features:
Tone.UserMedia - microphone access
Tone.Recorder - audio recording
Tone.Player - audio playback
Tone.Reverb - reverb effects
Signal Ramping (rampTo) - gradual parameter changes
Synthesizers:
Tone.MetalSynth - coin clink sound
Tone.MembraneSynth - water splash
Tone.FMSynth - ambient drone
Audio Processing:
Tone.Filter - lowpass filtering
Backend & Server
Express.js (Node.js Framework)
Multer (File Upload Middleware)
Deployment
Railway (Cloud Platform)
Web APIs & Browser Features
FormData API - file upload handling
Fetch API - HTTP requests
URL.createObjectURL - blob handling
getBoundingClientRect - element positioning
Web Audio API - underlying browser audio
CSS & Animation
JavaScript Techniques
async/await - asynchronous programming
setTimeout - timing control
Typography
Rubik Bubbles - whimsical cloud title font
Fredericka the Great - wooden sign aesthetic







