
Topic: Questions for sample developers (making dry samples using inverse convolution?)

  1. #1
    Senior Member · Stockholm, Sweden · Joined Mar 2005 · 116 posts

    Questions for sample developers (making dry samples using inverse convolution?)

    I'm curious about the way you create your "dry" samples. Does "dry" in this case simply mean that the recording has been made in a small room with few reflections, or do you actually remove the acoustic response from samples using INVERSE convolution?

    They did a similar (and interesting) thing with the Hubble telescope. In the beginning, the images from the telescope weren't perfect. The main mirror had a flaw, and the images were somewhat blurred/distorted. I mean, the images were great, but they could be even better. Now, if you blur/distort an image, this can be seen as convolution of the image with a specific blurring/distorting function. You're "smearing" the image in the same way a reverb "smears" an audio signal. Someone came up with the idea of calculating exactly how the optics "smeared" the image (similar to the acoustic response of a room, but here it's the optical response of a mirror system). They then tried to remove the distortion using inverse convolution, and the results were incredible. I don't remember the numbers, but I think the telescope could see something like 100 times farther through space. A huge difference.

    I'm not sure to what extent this can be used, but it certainly works in principle. What I'm trying to say is that it should be possible to record the acoustic response of the studio room at the microphone position (using the same microphone/recording system), and remove EVERYTHING but the original pure instrument signal: the acoustic response, noise from the microphone/studio system, EQ, constant background noise, electronic buzz, and so on. It would be like playing the instrument from inside your computer.
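    The idea can be sketched with a toy Wiener-style deconvolution in numpy. Everything here is a made-up illustration (a random "dry" signal, a three-tap "room" impulse response, an arbitrary regularization constant), not a real recording chain:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: a "dry" source and a three-tap "room" impulse response.
    dry = rng.standard_normal(512)
    ir = np.zeros(64)
    ir[0] = 1.0    # direct sound
    ir[20] = 0.5   # an early reflection
    ir[45] = 0.25  # a later reflection

    # Forward process: the room "smears" the dry signal (convolution).
    wet = np.convolve(dry, ir)

    # Inverse process: divide in the frequency domain, with a small
    # regularization term so near-zero bins don't blow up (Wiener-style).
    n = len(wet)
    WET = np.fft.rfft(wet, n)
    IR = np.fft.rfft(ir, n)
    eps = 1e-6
    recovered = np.fft.irfft(WET * np.conj(IR) / (np.abs(IR) ** 2 + eps), n)[:len(dry)]

    print(np.max(np.abs(recovered - dry)))  # tiny residual: near-perfect recovery
    ```

    It only works this cleanly because the toy IR is noiseless and has no near-zero frequency bins; with a real room and real recording noise, the division amplifies noise wherever the IR is weak, which is exactly why clean de-reverberation is hard in practice.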

    Now, today's "dry" samples probably also work great for sample-based libraries. But with this technique you might be able to REALLY dissect the sound and learn exactly how the inherent instrument signal works. Imagine the sound of a violin with NO distortions at all. Now, if you want to create libraries with generated sounds instead of samples...

    These "dry" samples will probably sound a little corny on their own, but if you run them through a convolution reverb and play them, they should have the quality of never having been recorded by any man-made system. I don't want to judge anything before I hear it, but in theory these samples would really be the real thing.

    You have to remember that a measured acoustic response isn't just the acoustic response of the room; it's the response of the room combined with the distortions of the recording system used. If your recording system EQs the recorded acoustic response in any way, then any signal run through the same convolution reverb will be EQ'd the same way. This is one of the main reasons convolution sounds so great: it actually emulates a RECORDING made in the room! (unless I've got just about EVERYTHING wrong?)
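    That last point follows from convolution being associative. A minimal numpy sketch, with made-up "room" and "mic coloration" filters (all numbers are arbitrary illustrations):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    instrument = rng.standard_normal(256)                          # toy dry signal
    room = rng.standard_normal(32) * np.exp(-np.arange(32) / 8.0)  # toy decaying room
    mic_eq = np.array([0.6, 0.3, 0.1])                             # toy mic coloration

    # Measuring the room's impulse response *through the mic* captures
    # the room AND the mic coloration in one filter:
    measured_ir = np.convolve(room, mic_eq)

    # Convolution is associative, so a truly dry signal run through that
    # measured IR equals recording the instrument in the room with that mic.
    via_reverb = np.convolve(instrument, measured_ir)
    via_recording = np.convolve(np.convolve(instrument, room), mic_eq)

    print(np.allclose(via_reverb, via_recording))  # True
    ```

    In other words, whatever the measurement chain did to the impulse response, the convolution reverb automatically does the same thing to every signal fed through it.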

    Maybe you've already done this stuff? I'd love to hear about it.

    /Arne

  2. #2

    Re: Questions for sample developers (making dry samples using inverse convolution?)

    Hello Arne,

    At least as far as our library is concerned, I can assure you that we do our utmost to capture the sound of an instrument as purely as possible, taking into account the influence of the microphones, the room, cables, converters, and much more. But each instrument's sound (apart from electronic ones) lives as much from the surrounding air as from the corpus/body itself. We built a dedicated soundstage to achieve this goal: to have enough "air" for an instrument to breathe, without adding "reverb" in the actual sense of the word.

    What you are actually asking for is really more a kind of resynthesis than sampling...

    While I'm quite involved in new implementations of convolution as a creative tool for aural virtualisation, I have not yet encountered a convincing way to use this technique to get _rid_ of reverb trails. There have been other attempts over the years to achieve this goal, though, some of them relying on artificial neural networks rather than convolution, AFAIK.

  3. #3

    Re: Questions for sample developers (making dry samples using inverse convolution?)

    I record everything as dry as possible.

    The recording room in my studio has heavy curtains on every wall.
    A finger snap in this 20 m² room has no reverb and almost no early reflections.

    Recording a nice room into the samples is good when you use just one library for your mix.
    But I'm using a setup of many different sample libraries for my productions;
    that's why I like to use my fine selection of reverbs to place them all in the same room.
    That's very important in surround productions.

    I don't like sampled release trails with big reverbs.
    I often use Kompakt player instruments with release trails in Kontakt 2 and delete the trails.
    It also saves a lot of disk space and sample memory if I use dry samples.
    Even if you use direct-from-disk streaming, the use of release trails doubles the number of samples.

    No doubt the rooms in the sample libraries I own are great,
    but since I also own some good room simulators, I prefer to use them.

    Chris Hein

  4. #4

    Re: Questions for sample developers (making dry samples using inverse convolution?)

    at least as far as our Library is concerned I can assure you that we do our outmost to cap...
    What you are actually asking for is more some kind of resynthesis than sampling ....

    While I'm involved quite a bit into new implementations of convolution as a creative tool for aural virtualisation, I have not yet encountered a convincing way to use this technique to get _rid_ of reverb trails. - There have been some other attempts during the years to achieve this goal, though, some of them relying on virtual neural networks, for example, but not convolution, AFAIK.
    I've heard your library, so I know it sounds great. My question was more of theoretical interest. Real room acoustics just makes such a difference if you're analyzing spectral components, for example. It would probably be easier to create convincing additive synthesis if there were a simple way to make accurate studies of the overtone series of different techniques and tones on different instruments.

    For example (you probably already know this), the physics of the piano gives it a "stretched" harmonic series: the stiffness of the strings pushes the overtones slightly sharp. So the A at 440 Hz doesn't have 440, 880, 1320, ... Hz as overtones, but more like 440, 881, 1323, ..., depending on the size of the instrument. This doesn't apply to the overtones of most other instruments, even though people prefer stretched octaves and seem to tune the fundamentals of those instruments that way too, regardless of the overtone series (to my knowledge).
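    A common model for this stretching is the stiff-string formula f_n = n·f_1·sqrt((1 + B·n²)/(1 + B)); the inharmonicity coefficient B below is a made-up value chosen only to land near the numbers quoted above, not a measured one:

    ```python
    import math

    def partial(f1, n, B):
        # Stiff-string partials, normalized so the first partial is exactly f1:
        #   f_n = n * f1 * sqrt((1 + B*n^2) / (1 + B))
        return n * f1 * math.sqrt((1 + B * n * n) / (1 + B))

    B = 0.0005  # hypothetical inharmonicity coefficient for a mid-range string
    for n in range(1, 5):
        print(n, round(partial(440.0, n, B), 1))
    # Each partial lands slightly ABOVE the exact harmonics 440, 880, 1320, 1760.
    ```

    With this B, the second and third partials come out near 880.7 Hz and 1322.6 Hz, close to the 881 and 1323 figures above; larger B (shorter, stiffer strings) stretches the series further.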

    The variation of the overtone amplitudes is also extremely interesting from a theoretical perspective, if you're into additive synthesis.

    Sound-wise, it seems like there are as many opinions as composers. It's all a matter of taste, and almost everything is possible today with these extensive libraries. I'm really just more interested in the synthesis and the physics behind it. Musically, a 50 GB+ library is by far the best way to go today, but it's still a brute-force approach.

    I wouldn't be surprised if the deconvolution approach is useless, when it all comes down to it.

    I record everything as dry as possible.

    ...
    Yes, you can probably get rid of almost all the reverb, but that is also due to the small size of the room. All audible reflections occur immediately, and it has more or less been shown that reflections arriving within about 10 ms fuse with the direct sound and only make it fuller and clearer, with no blurring at all. Still, the frequency spectrum of the sound will be altered by these reflections. But I'm just digging into theory here; I have no real experience recording samples.

    It's not the release trails that are of interest. What's interesting is the alteration of the sound during the sustained note and during the attack, since that is what you want to analyze for spectral components if you want to reproduce the sound. Still, if this worked easily, it would be a great way to make great-sounding dry recordings even with only a decent recording system and a decent room to record in. The more I think about it, the more I doubt it's a workable approach. If it were possible, someone would probably have done it by now.

    But as I said, I doubt this would make a huge difference for users of the samples. It's more of a theoretical gimmick.

    /Arne

  5. #5

    Re: Questions for sample developers (making dry samples using inverse convolution?)

    One huge problem is the complexity of musical instruments' radiation patterns.
    Frequencies coming from a musical instrument go in all directions with different strengths and patterns. The mid-low frequency band of a cello, for example, has a figure-of-eight radiation pattern, while the higher-mid band is cardioid and the highest bands are more radial. These properties are also different for each note. This makes it virtually impossible to completely capture an instrument with its complex radiation behavior using microphones. Even a 5.1 recording would not give an accurate image of the instrument.
    De-convolution works with simple signals but fails to capture the complexity of multi-angle radiation behavior. De-convolution is only as accurate as the impulse response is. To de-convolve the ambience from a sound, the impulse response would have to cover at least 8 frequency bands and multiple microphone positions; that means a full three-dimensional model. Current convolution programs are basically 2- or 4-channel programs with no frequency-band selection; they remain two-dimensional. Deconvolution of focus distortion in images (as with the Hubble space telescope) is actually a two-dimensional process too.
    There is a lot of academic research going on in this field, as (surprise, surprise) realistic emulation of musical instruments through computer-based re-synthesis is a scientific goal, a kind of holy grail for many researchers. This effort is already resulting in a better understanding of what we hear and bringing us closer to that goal. As an example, Synful comes close with re-synthesis.
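    The "deconvolution is only as accurate as the impulse response" point can be illustrated with a toy numpy experiment: deconvolving with an IR that differs from the one the instrument actually excited (say, measured from a different direction) leaves a large residual. All signals and IRs here are synthetic stand-ins:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    dry = rng.standard_normal(256)

    # Two hypothetical IRs: the path the instrument actually excited (its
    # direction-dependent radiation) vs. the one measured with a test source.
    ir_played = np.zeros(48)
    ir_played[0], ir_played[30] = 1.0, 0.6
    ir_measured = np.zeros(48)
    ir_measured[0], ir_measured[12] = 1.0, 0.6

    wet = np.convolve(dry, ir_played)

    def deconv(sig, ir, out_len, eps=1e-6):
        # Regularized frequency-domain division (Wiener-style).
        n = len(sig)
        S, IR = np.fft.rfft(sig, n), np.fft.rfft(ir, n)
        return np.fft.irfft(S * np.conj(IR) / (np.abs(IR) ** 2 + eps), n)[:out_len]

    good = deconv(wet, ir_played, len(dry))    # correct IR
    bad = deconv(wet, ir_measured, len(dry))   # wrong IR

    print(np.max(np.abs(good - dry)))  # tiny
    print(np.max(np.abs(bad - dry)))   # large residual left in the signal
    ```

    Since a real instrument effectively excites a different IR per direction, frequency band, and note, a single measured IR is always the "wrong" one to some degree, and the residual never cancels.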

    Your idea to de-convolve the ambience from recorded 3D samples may well become daily routine within the next 10 years or so. But my guess is that re-synthesis will reach maturity earlier and present us with working musical-instrument emulations.
    Best regards,
    Michiel Post


  6. #6

    Re: Questions for sample developers (making dry samples using inverse convolution?)

    One huge problem is the complexity of musical instrument radiation patterns.
    Frequencies coming from musical instruments tend to go in all directions with different force and radiation patterns. The mid-low frequency band of a cello for example has a figure-of-eight...
    ...
    Wow, I love your insight into the subject. This was really rewarding. I knew instruments were very directional, but I had no idea it affected the complexity of the signal that way. I guess it won't ever be a practical solution on average recording systems, no matter what.

    /Arne
