I have been following Chris’ videos on the assassination attempt for a while and as always Chris has done a great job.
However, I believe the echoes are not as straightforward a proof as they might seem at first glance - and rightly so, Chris only says that the echoes’ incongruences “need to be explained”.
I am attaching a quick sketch I put together that, in my opinion, could easily explain the incongruences in the echo delay differences.
I didn’t take any elements of the scene, terrain, or actual buildings into consideration; the sketch is only meant to illustrate how such incongruences could arise.
Sketch explanation:
- Let’s consider that the camera (and mic) moved from point A to point B between shots 1-3 and shots 5-9.
- Let’s also assume there are two reflecting objects (surface A and surface B).
- The audio of the shot would reflect off these surfaces and most likely propagate as shown in the sketch.
- This means not only can different echo signatures be heard depending on where the mic is, but the shot-to-echo delay would also vary with the distances of surfaces A and B from the shot location.
- So by moving from point A to point B, the mic could easily have captured two different echoes from two different objects, even if the shots originated from the same spot/gun.
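To put rough numbers on this (the distances below are purely illustrative, not measurements of the actual scene), the extra path length of an echo converts into a delay via the speed of sound (~343 m/s at room temperature):

```python
# Illustrative only: hypothetical distances, not taken from the real scene.
SPEED_OF_SOUND = 343.0  # m/s at ~20 °C

def echo_delay(shot_to_surface, surface_to_mic, shot_to_mic):
    """Delay between the direct sound and its echo, in seconds.

    The echo travels shot -> surface -> mic; the direct sound travels
    shot -> mic. The delay is the extra path length over the speed of sound.
    """
    extra_path = shot_to_surface + surface_to_mic - shot_to_mic
    return extra_path / SPEED_OF_SOUND

# Mic at point A, echo off surface A (hypothetical geometry):
delay_a = echo_delay(shot_to_surface=120.0, surface_to_mic=90.0, shot_to_mic=130.0)
# Mic at point B, echo off surface B (different hypothetical geometry):
delay_b = echo_delay(shot_to_surface=60.0, surface_to_mic=140.0, shot_to_mic=150.0)

print(f"delay at A: {delay_a * 1000:.0f} ms")  # ~233 ms
print(f"delay at B: {delay_b * 1000:.0f} ms")  # ~146 ms
```

Same gun, same shot location - yet the two recording positions hear very different shot-to-echo delays, simply because different surfaces dominate at each spot.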
–
In terms of the audio sounding different, unfortunately I think that is highly inconclusive considering it was captured by moving smartphones. Regarding smartphone recordings, we need to consider a few things (and I know, I designed a few of those):
- Most smartphones have three or more mics
- Normally one is omnidirectional and the others are directional mics
The sounds captured by these microphones are modified via software to optimise for different functions of the phone (video calls, voice recognition, video and audio recording, etc.).
So when recording video, the smartphone software is constantly manipulating the audio to optimise it for whatever it “thinks” will give you the best output.
For video, clear audio of someone speaking is normally the most important feature, so there is a LOT of NOISE REDUCTION going on, which could definitely affect the sound signatures of background noises, especially if the device is on the move and pointing in different directions.
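As a crude sketch of why this matters (this is NOT the actual phone DSP, which is proprietary; it is a toy noise gate on a synthetic signal, with made-up levels and thresholds), a typical noise-reduction stage attenuates quiet sounds far more than loud ones - so a faint echo can have its level and signature altered while the loud direct shot passes through untouched:

```python
import numpy as np

# Synthetic signal: a loud direct "shot" followed by a much quieter echo.
fs = 8000
sig = np.zeros(fs // 2)
sig[400:440] = 1.0    # direct shot (loud transient)
sig[2000:2040] = 0.03  # echo (quiet transient)

# Envelope follower: rectify, then smooth with a short moving average.
kernel = np.ones(64) / 64
env = np.convolve(np.abs(sig), kernel, mode="same")

# Downward expander / noise gate: cut anything below the threshold by 20 dB.
gain = np.where(env > 0.03, 1.0, 0.1)
out = sig * gain

direct_ratio = out[400:440].max() / sig[400:440].max()
echo_ratio = out[2000:2040].max() / sig[2000:2040].max()
print(direct_ratio, echo_ratio)  # the quiet echo is attenuated, the shot is not
```

Real phone pipelines are far more sophisticated (multi-mic beamforming, spectral noise suppression, AGC), but the principle is the same: low-level background content like echoes is exactly what such processing targets, which is why comparing echo "sounds" across moving phone recordings is unreliable.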
I believe all the smartphone audio captures are worth looking at, but ONLY the stage microphone recording can provide reliable data for audio analysis.
Just wanted to share my 2 cents on this.