Early in my career, I worked on a system designed to locate the source of a gunshot in real time. On paper, it looked straightforward: detect the sound and calculate its angle of arrival using time differences across multiple sensors. In practice, it was not.
Gunfire produces two sound signatures: the muzzle blast and the shockwave. Both travel through the environment and often arrive at the sensors almost together, mixed with reflections and background noise. We focused on localisation at first, assuming detection was already solved. The results were inconsistent.
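The localisation step, computing an angle of arrival from time differences, can be sketched for the simplest case: two sensors and a far-field source. The spacing and speed-of-sound values below are illustrative assumptions, not figures from the project.

```python
import math

def angle_of_arrival(delta_t, sensor_spacing, speed_of_sound=343.0):
    """Far-field angle of arrival (radians, measured from broadside)
    given the time difference of arrival between two sensors."""
    ratio = speed_of_sound * delta_t / sensor_spacing
    # Noise can push the ratio slightly outside [-1, 1]; clamp before asin.
    ratio = max(-1.0, min(1.0, ratio))
    return math.asin(ratio)

# A source 30 degrees off broadside, sensors 0.5 m apart:
dt = 0.5 * math.sin(math.radians(30)) / 343.0
print(math.degrees(angle_of_arrival(dt, 0.5)))  # ~30.0
```

With multiple sensor pairs, several such angle estimates are intersected to get a position, which is exactly where corrupted time differences do their damage.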
The issue was not the localisation algorithm. It was the assumptions leading into it.
When the Problem Isn’t Fully Defined
Textbook problem solving assumes clear definitions. You state the problem, apply known methods, and work toward a solution. That approach works in controlled environments where variables are understood and behaviour is predictable. Embedded systems rarely offer that.
At the intersection of hardware and software, systems are often complex. Signals are noisy. Timing shifts. Edge cases are common. Many constraints are implicit, based on assumptions about component behaviour. Over time, I found that experimental thinking works better in these situations.
Experimental thinking treats uncertainty as the starting point. Instead of assuming the problem is fully understood, I design small, focused experiments to uncover what is actually happening. The goal is not to solve everything at once. It is to reduce uncertainty step by step.
The Experimental Loop
One way I work through this uncertainty is with an iterative loop:
- Clarify reality: Start by observing what the system is actually doing in practice, not what it is supposed to do.
- Surface assumptions: Every implementation has hidden assumptions, such as timing, data, protocol, or algorithm behaviour. Many failures happen when these assumptions break.
- Design the simplest test: Instead of building a full solution, create the smallest experiment that can confirm or reject a single idea.
- Isolate variables: Change one variable at a time. In complex systems, multiple simultaneous changes make results hard to interpret.
- Update the model: Each result refines the understanding of the system.
This loop is lightweight but powerful. It works for debugging a single issue and for designing entire systems.
Rethinking the Gunshot System
In the gunshot detection project, the breakthrough came when we stopped treating it as a localisation problem. Further analysis showed that accurate positioning depended on isolating the muzzle blast. The shockwave, although part of the same event, introduced ambiguity that skewed timing calculations.
What we thought was a localisation problem was actually signal separation. That changed the approach. Instead of working directly with raw signals, we treated them as mixtures of independent sources. The question became whether we could separate these signals before interpretation.
This led us to ideas related to the cocktail party effect. After reviewing academic work and building quick prototypes in MATLAB, we implemented the DUET algorithm. With two input channels, we separated the signals and isolated the muzzle blast from the shockwave and background noise.
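The original MATLAB prototypes are not shown here, but the core observation DUET relies on can be sketched. With two channels, the relative delay measured in each time-frequency bin clusters around the delay of whichever source dominates that bin; full DUET also estimates a per-bin amplitude ratio, builds a 2-D histogram, and masks around its peaks. The two-tone mixture below is an illustrative stand-in for real gunshot audio.

```python
import numpy as np

def duet_delay_map(x1, x2, fs, n_fft=512, hop=256):
    """Per time-frequency-bin relative delay between two mixture channels,
    the quantity DUET clusters on (amplitude ratio and histogram omitted)."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x1) - n_fft) // hop
    f1 = np.stack([x1[i*hop:i*hop+n_fft] * win for i in range(n_frames)])
    f2 = np.stack([x2[i*hop:i*hop+n_fft] * win for i in range(n_frames)])
    X1 = np.fft.rfft(f1, axis=1)
    X2 = np.fft.rfft(f2, axis=1)
    freqs = np.fft.rfftfreq(n_fft, 1 / fs)
    ratio = X2[:, 1:] / (X1[:, 1:] + 1e-12)     # skip DC to avoid f = 0
    delay = -np.angle(ratio) / (2 * np.pi * freqs[1:])
    return delay, freqs[1:]

# Synthetic check: two tones; the second arrives 2 samples later on channel 2.
fs = 8000
t = np.arange(0, 1.0, 1 / fs)
s1 = np.sin(2 * np.pi * 300 * t)
s2 = np.sin(2 * np.pi * 1500 * t)
x1 = s1 + s2
x2 = s1 + np.roll(s2, 2)                        # s2 delayed by 2 samples
delay, freqs = duet_delay_map(x1, x2, fs)
bins = np.abs(freqs - 1500) < 50                # bins dominated by s2
est = np.median(delay[:, bins]) * fs            # close to 2 samples
```

Once bins are assigned to a cluster, a binary mask per source recovers each signal, which is how the muzzle blast was pulled clear of the shockwave.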
Once that separation worked, localisation became much more reliable. The key insight was not the algorithm. It was identifying the correct problem.
Working With Fixed Constraints
In another project, I worked on a system upgrade that introduced new hardware and new communication requirements while still demanding full backward compatibility with existing installations. At first glance, it seemed simple: build the new system and connect it to the old one. In practice, the constraints made it fragile.
We had fixed requirements:
- Existing systems had to continue operating without changes
- Legacy communication behaviour had to remain unchanged
- New hardware introduced different timing and interface characteristics
- The new protocol had to support added functionality without breaking compatibility
None of these constraints could be altered.
That is when it became clear this was not a “build a better system” problem. It was a constraint management problem. We approached it experimentally. First, we identified what had to remain the same for legacy systems to function correctly. Then we mapped where change was possible.
Progress came from testing combinations of constraints rather than a single design decision. We kept exploring what could coexist without breaking the system. In complex systems, the challenge is often less about building something new and more about evolving what already exists without breaking it.
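The project's actual protocol is not described here, so the following is an illustrative pattern only: one common way to add functionality without breaking legacy parsers is a type-length-value encoding in which receivers skip record types they do not recognise. The field names and type codes are hypothetical.

```python
def parse_tlv(payload: bytes) -> dict:
    """Parse a type-length-value stream, silently skipping unknown types,
    so old parsers keep working when new record types are appended."""
    KNOWN = {0x01: "status", 0x02: "position"}  # hypothetical legacy fields
    fields, i = {}, 0
    while i + 2 <= len(payload):
        t, ln = payload[i], payload[i + 1]
        value = payload[i + 2:i + 2 + ln]
        if t in KNOWN:
            fields[KNOWN[t]] = value
        i += 2 + ln                             # unknown types skipped, not rejected
    return fields

# A newer firmware appends a record of type 0x7F; a legacy parser ignores it:
msg = bytes([0x01, 1, 0xAA]) + bytes([0x7F, 2, 0x00, 0x01])
print(parse_tlv(msg))  # {'status': b'\xaa'}
```

The design choice is the point: compatibility is preserved by agreeing in advance on what receivers do with the parts of a message they do not understand.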
What Experience Keeps Reinforcing
Across different projects, one pattern holds. Strong engineering work does not start with full understanding. It starts with testing assumptions. You observe. You test. You adjust. Then you repeat. After more than twenty years in embedded systems, this approach has proven more reliable than any single tool or method.