Is there any software that analyzes the sounds in an archive and lets me search for related sound FX purely by spectral patterns and other complex acoustic parameters?
In step one, I would like to find more whooshes of similar character just by choosing one initial whoosh.
In step two, I wish this “neural fingerprint comparison” were available as a kind of Soundminer plugin/option 😉
In step three, the spectral analysis learns to intelligently link the “whooshiness” of my initial sound FX with the most common tag, “WHOOSH”, drawn from the existing metadata of 100 similar-sounding whooshes already in my archive.
In step four, Soundminer would offer automatic batch-tagging of sound files in a new “spectral tags” field. After learning from 100 well-tagged “Footsteps, wood” files, the spectral analysis would then suggest “Footsteps, wood” on its own for sound FX with an acoustically analogous structure.
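For what it's worth, the four steps can be sketched in a few lines of code. This is only a toy illustration under my own assumptions (a fingerprint built from an averaged FFT magnitude spectrum, cosine similarity, and majority vote among the nearest tagged neighbours), not how Soundminer or any shipping tool actually works:

```python
# Toy sketch: spectral fingerprint + similarity search + tag suggestion.
# Assumptions: fingerprint = time-averaged FFT magnitude spectrum,
# similarity = cosine, tagging = majority vote among k nearest neighbours.
import numpy as np

def fingerprint(signal, frame=1024):
    """Average the magnitude spectra of all frames -> length-independent vector."""
    n = len(signal) // frame * frame
    frames = signal[:n].reshape(-1, frame)
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    fp = spectra.mean(axis=0)
    return fp / (np.linalg.norm(fp) + 1e-12)

def similarity(a, b):
    """Cosine similarity between two unit-norm fingerprints (1.0 = same shape)."""
    return float(np.dot(a, b))

def suggest_tag(query_fp, tagged_library, k=3):
    """Steps 3/4: most common tag among the k most similar tagged sounds."""
    ranked = sorted(tagged_library, key=lambda item: -similarity(query_fp, item[1]))
    top_tags = [tag for tag, _ in ranked[:k]]
    return max(set(top_tags), key=top_tags.count)

# Toy "archive": noise bursts (whoosh-ish, flat spectrum) vs. click trains
# (footstep-ish, comb spectrum). Real audio would be loaded from files.
rng = np.random.default_rng(0)
def noisy():
    return rng.normal(size=8192)
def clicky():
    s = np.zeros(8192); s[::512] = 1.0; return s

library = [("WHOOSH", fingerprint(noisy())) for _ in range(5)] + \
          [("FOOTSTEP", fingerprint(clicky())) for _ in range(5)]

print(suggest_tag(fingerprint(noisy()), library))   # -> WHOOSH
```

The untagged query sound never sees any labels; it just inherits the tag that dominates among its acoustically nearest neighbours, which is essentially what step four asks for.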
Where do you see the opportunities and/or limits?
I think something like this would require technology that is still only at the prototype stage, such as the MIT experiment that appeared earlier this year.
Perhaps deep-learning AI will soon be able to perceive and distinguish timbre from spectral analysis for the kind of comparative searches you posit.
I could be wrong, but that sounds like it would be incredibly resource-hungry and possibly time-consuming.
What would the resolution of the spectral analysis be? How similar would two files have to be to be deemed similar enough? How would the files’ relative lengths come into play? Also, depending on how it worked, you might get quite a few false positives when processing multi-channel libraries and IRs.
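On the resolution and length questions: in the simplest schemes those two knobs are partly decoupled. A quick sketch, under my own assumption that the fingerprint is a time-averaged spectrum: the FFT frame size fixes the frequency resolution (sample rate divided by frame length), while averaging over frames makes a short and a long take of the same texture land close together despite very different file lengths:

```python
# Hedged illustration: length-independent fingerprints via time-averaging.
# Frame size 2048 at 48 kHz -> ~23 Hz per bin of frequency resolution.
import numpy as np

rng = np.random.default_rng(1)

def fingerprint(signal, frame=2048):
    n = len(signal) // frame * frame
    spectra = np.abs(np.fft.rfft(signal[:n].reshape(-1, frame), axis=1))
    fp = spectra.mean(axis=0)            # average over time -> length drops out
    return fp / np.linalg.norm(fp)

def bandlimited_noise(length, lo, hi, sr=48000):
    """White noise filtered to one band -- a crude stand-in for a 'whoosh'."""
    spectrum = np.fft.rfft(rng.normal(size=length))
    freqs = np.fft.rfftfreq(length, d=1 / sr)
    spectrum[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(spectrum, n=length)

short = fingerprint(bandlimited_noise(24000, 200, 2000))    # 0.5 s take
long_ = fingerprint(bandlimited_noise(240000, 200, 2000))   # 5 s take
other = fingerprint(bandlimited_noise(240000, 4000, 12000)) # different band

same_band = float(np.dot(short, long_))
diff_band = float(np.dot(short, other))
print(same_band > diff_band)  # True despite the 10x length difference
```

Of course this also shows the trade-off you mention: time-averaging throws away the temporal envelope entirely, so a slow swell and a sharp burst in the same band would match, which is exactly where false positives (multi-channel stems, IRs) would creep in.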
Don’t get me wrong, I think it’s a cool idea; I just reckon you’d need to do some fairly intensive number-crunching and be aware of some trade-offs to make it happen.