MultiHub Forum

Full Version: Spatial audio design for a museum installation in a narrow L-shaped corridor
I’ve been tasked with creating a small, immersive audio installation for a historical maritime museum’s new exhibit on 19th-century whaling voyages, but I’m hitting a wall with the spatial audio design. My budget is only about $800, and I’m confined to a narrow, L-shaped corridor with uneven brick walls that wreak havoc on sound reflection. I have a Raspberry Pi 4, four small but decent mono speakers, and a basic USB audio interface, but I need to simulate the layered sounds of a ship at sea—creaking wood, wind, distant whale calls—without it becoming a muddy, echoing mess in such a problematic space. I’ve tried some simple delay and reverb effects in Pure Data, but it either feels flat or overwhelmingly chaotic by the time visitors walk the ten-meter path. I’m on a three-week deadline and I’m starting to think my approach is fundamentally wrong for the architecture.
Treat the corridor as an instrument rather than fighting it. Run a first-order Ambisonic (WXYZ) decode to your four speakers, ideally ceiling-mounted along the path, and build a few long, sparse drones that pan slowly across the field. There is no fixed sweet spot on a ten-meter walk, so use a PIR motion sensor to shift textures as visitors move instead of chasing one. Capture (or synthesize) a room impulse response so the corridor's own reflections become part of the texture, then run your stems through a short convolution reverb to stop the brick from smearing everything. A Raspberry Pi 4 with your USB interface, four mono boxes, and some DIY absorbers should stay under $800; test early with a measured IR and staged cues. Rough Python sketches below for each step: the decode, the motion trigger, the IR measurement, and the convolution.
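If you want to prototype the panning math outside Pd first, here's a minimal sketch of first-order Ambisonic encode/decode with numpy. The square speaker layout and the 55 Hz drone are placeholders, not a tuned decode for your actual L-shape.

[code]
# Sketch: encode a mono drone into first-order Ambisonics (B-format WXYZ),
# then do a basic decode to four speakers. Assumes numpy; the square
# speaker azimuths and the placeholder drone are assumptions to adapt on site.
import numpy as np

def encode_b_format(mono, azimuth_rad, elevation_rad=0.0):
    """Encode a mono signal at a (possibly time-varying) direction."""
    w = mono / np.sqrt(2.0)                                # omni component
    x = mono * np.cos(azimuth_rad) * np.cos(elevation_rad)
    y = mono * np.sin(azimuth_rad) * np.cos(elevation_rad)
    z = mono * np.sin(elevation_rad)
    return np.stack([w, x, y, z])

def decode_basic(b_format, speaker_azimuths_rad):
    """Basic projection decode of horizontal B-format to N speakers."""
    w, x, y, _ = b_format
    feeds = [0.5 * (np.sqrt(2.0) * w + np.cos(az) * x + np.sin(az) * y)
             for az in speaker_azimuths_rad]
    return np.stack(feeds)

sr = 48000
t = np.arange(30 * sr) / sr                       # a 30 s test drone
drone = 0.2 * np.sin(2 * np.pi * 55.0 * t)        # placeholder low drone
azimuth = np.linspace(-np.pi / 2, np.pi / 2, t.size)  # one slow sweep
feeds = decode_basic(encode_b_format(drone, azimuth),
                     np.radians([45.0, 135.0, -135.0, -45.0]))
# feeds has shape (4, samples): one row per speaker channel.
[/code]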
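For the motion triggers, gpiozero plus python-osc keeps it simple: the PIR flips texture cues that your Pd patch listens for (recent Pd vanilla can receive them with [netreceive -u -b] into [oscparse]). The GPIO pin, port, and /texture address here are assumptions.

[code]
# Sketch: a PIR on a GPIO pin shifts textures by sending OSC cues to the
# Pd patch, instead of relying on a fixed sweet spot. Assumes gpiozero and
# python-osc are installed; pin, port, and OSC address are placeholders.
from signal import pause
from gpiozero import MotionSensor
from pythonosc.udp_client import SimpleUDPClient

pd = SimpleUDPClient("127.0.0.1", 9000)   # Pd listens on this UDP port
pir = MotionSensor(4)                     # PIR data line wired to GPIO4

def visitor_entered():
    # Cue the denser layer (creaks, rigging, close wind) on entry.
    pd.send_message("/texture", 1)

def corridor_empty():
    # Fall back to the sparse ambient bed once the corridor clears.
    pd.send_message("/texture", 0)

pir.when_motion = visitor_entered
pir.when_no_motion = corridor_empty
pause()                                   # keep reacting until killed
[/code]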
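To measure the corridor, the usual trick is an exponential sine sweep (the Farina method): play the sweep through one speaker, record the room, then convolve the recording with an inverse filter to recover the impulse response. A sketch, assuming scipy and soundfile; the filenames, sweep length, and band edges are placeholders.

[code]
# Sketch: measure the corridor's impulse response with an exponential
# sine sweep. Play sweep.wav through one speaker, record the room, then
# run the deconvolution below to get corridor_ir.wav.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

sr = 48000
T = 10.0                                  # sweep duration in seconds
f1, f2 = 40.0, 18000.0                    # sweep band
t = np.arange(int(T * sr)) / sr
R = np.log(f2 / f1)
sweep = np.sin(2 * np.pi * f1 * T / R * (np.exp(t * R / T) - 1.0))

# Inverse filter: time-reversed sweep with an exponential amplitude
# envelope that compensates the sweep's 6 dB/octave energy tilt.
inverse = sweep[::-1] * np.exp(-t * R / T)

sf.write("sweep.wav", 0.5 * sweep, sr)    # play this through one speaker

# After recording the room's response to the sweep:
rec, rec_sr = sf.read("room_recording.wav")   # placeholder filename
if rec.ndim > 1:
    rec = rec.mean(axis=1)                    # fold to mono for the sketch
raw = fftconvolve(rec, inverse)

# The direct-sound peak lands roughly len(sweep) samples in; keep a short
# window around it so the brick reflections read as color, not wash.
peak = int(np.argmax(np.abs(raw)))
ir = raw[max(peak - 100, 0) : peak + int(0.5 * sr)]
ir /= np.max(np.abs(ir)) + 1e-9
sf.write("corridor_ir.wav", ir, sr)
[/code]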
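Then the convolution step itself: keep the IR tail short so the brick reads as color rather than wash, and stay mostly dry in the mix because the live room adds the rest. Again a sketch with placeholder filenames; for the installation you'd run this offline per stem (or use a partitioned-convolution external in Pd) rather than baking files by hand.

[code]
# Sketch: run a dry stem through a short convolution reverb built from the
# measured IR, so the corridor's own reflections become the texture.
# Assumes scipy + soundfile; filenames and the 0.4 s tail are placeholders.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("creaking_wood.wav")
ir, ir_sr = sf.read("corridor_ir.wav")
assert sr == ir_sr, "resample one file so the rates match"
if dry.ndim > 1:
    dry = dry.mean(axis=1)                # fold to mono for the sketch
if ir.ndim > 1:
    ir = ir.mean(axis=1)

ir = ir[: int(0.4 * sr)]                  # trim the tail to ~400 ms to avoid mud
wet = fftconvolve(dry, ir)[: len(dry)]    # FFT convolution, cropped to input length
wet /= np.max(np.abs(wet)) + 1e-9         # normalize so it can't clip

mix = 0.7 * dry + 0.3 * wet               # stay mostly dry in a live room
sf.write("creaking_wood_wet.wav", mix, sr)
[/code]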