
Topic: OT: Offline bouncing/rendering

  1. #1

    OT: Offline bouncing/rendering

    sorry chris...

    recently in an EWQLSO thread there was a lot of talk about offline "bouncing" and such, and I was just wondering if anyone could give me the lowdown on the process. There seemed to be a lot of different info bouncing about in that thread, so I didn't really get to soak it all in.

    How exactly does it work? Not from a technical jargon side, of course, but does the program actually calculate an output file, as opposed to recording from a song being played in realtime?

    thanks!

    Spencer

  2. #2

    Re: OT: Offline bouncing/rendering

    Hey Spenik,

    Yeah, hipper sequencers will do an internal bounce without you even hitting play - you just highlight the track(s) to be bounced and select a destination. The common use for this is to turn a CPU-intensive VST or VSTi track into a simple audio track once you're happy with how it sounds. You don't lose anything, because you can just mute the original track once you have the audio version.

    Gigastudio does a simple capture to disk which some might call a bounce. It will record its own output as a wav file - as many or as few channels as you decide to keep open.

    The 'old-fashioned' way was to route the DAW's output to its own input and do a real bounce of whatever tracks were open to the new track. This happened in real time.

    Remember that the output of MIDI devices which are, by definition, external to the DAW (e.g. your Prophet, RD500, Virus etc.) won't get recorded in this setup - although you can do it with the 'old-fashioned' setup by simply routing the external MIDI gear's audio output to the DAW's input as well.

    There's also another thread running on this forum at the moment which mentions an EXTREMELY elegant and intuitive implementation of the first bounce system - where the DAW renders a bunch of VST stuff to an audio track faster than real time. Logic 6 will have a SINGLE simple button called 'Ice'. Push it on a VST track and, unseen by you, the track will be converted into standard audio, freeing up CPU. If you need to make detailed changes other than level or panning, you just hit the Ice button again and do your edit. Once you're happy - freeze the track into audio and get a bit more CPU overhead back again.

    This is brilliant, as it is completely invisible to the user - just the kind of approach all coders should take with PC apps designed for those of us who are right-brain disadvantaged.
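    The core idea of an offline bounce can be sketched in a few lines. The `render_block` function below is a hypothetical stand-in for a VSTi's DSP (not any real plugin API); the point is that the loop simply computes blocks and writes them to disk, with no playback clock involved, so it runs as fast as the CPU allows:

```python
import struct
import wave

SAMPLE_RATE = 44100
BLOCK_SIZE = 512

def render_block(block_index):
    """Hypothetical stand-in for an instrument/effect: returns
    BLOCK_SIZE 16-bit samples. A real bounce would run the
    plugin's DSP here instead."""
    return [0] * BLOCK_SIZE  # silence, for illustration only

def offline_bounce(path, num_blocks):
    """Compute every block and write it straight to a WAV file.
    No realtime constraint: nothing waits on an audio clock."""
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)          # mono
        wav.setsampwidth(2)          # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        for i in range(num_blocks):
            samples = render_block(i)
            wav.writeframes(struct.pack("<%dh" % len(samples), *samples))

offline_bounce("bounce.wav", num_blocks=100)
```

    A realtime capture, by contrast, records whatever the engine manages to play back; the offline version above can never glitch, because there is no deadline to miss.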

  3. #3

    Re: OT: Offline bouncing/rendering

    Am I understanding this correctly? I could render my MIDI files without listening to them?

    Does this help lower-budget audio cards achieve better quality?

    Is this true? If so, I MUST HAVE IT!

    BTW, what sequencers can do this?

  4. #4
    Senior Member Bruce A. Richardson's Avatar
    Join Date
    Sep 1999
    Location
    Dallas, Texas
    Posts
    5,755

    Re: OT: Offline bouncing/rendering

    Originally posted by falcon1:
    Am I understanding this correctly? I could render my MIDI files without listening to them?

    Does this help lower-budget audio cards achieve better quality?

    Is this true? If so, I MUST HAVE IT!

    BTW, what sequencers can do this?
    Any of the modern sequencers will do this. Logic 6 will automate it a bit, but FWIW, other sequencer companies have been discussing this UI methodology, too. Video apps have been doing this for a while, because their rendering times are painfully slow, and their processes don't run very well in realtime without LOTS of supporting hardware.

    With GigaStudio, things are a little different, because Giga's core design spec has always been about realtime performance as well as "hardware-like" playability. Meaning, in short, you can take GigaStudio out on a job and, once you've set up your machine for its maximum dependable polyphony, you can't make GigaStudio cough or choke up in realtime.

    So with GigaStudio, you must render the tracks in realtime.

    Frankly, I have always marveled that people leave their work in the MIDI domain for so long as a matter of convenience. I always try to render tracks to audio the moment the part is right, rather than "waiting" and letting the workload build up.

    I use GigaStudio on a separate machine, which is networked to my other DAW machines. I mix in Vegas, but sequence in SONAR. So essentially, I'll sequence up my parts and render them to disk via Giga's rendering function - then drag them into Vegas across the network (without actually moving the rendered file from the Giga machine). A cheap 100 Mbps network will easily stream 20 or so tracks across Ethernet. I keep both SONAR and Vegas running on the DAW machine, and just blip back and forth between them. Sometimes I mix in SONAR, but I prefer to keep the mixing tasks in Vegas because it's so simple and elegant.

    If I make a change, I just render out the change and paste it into the track. No biggie. Takes seconds, and I'm done.

    Really, all that the new Logic UI is doing is making this task a bit more automated. As far as "new" technology goes, it's not. Not that it isn't good - it's a nice UI shortcut. Just that there's no need to get burning Logic envy if you have any of the other leading sequencers, because they will all do this fairly elegantly (and will probably follow the lead of video apps and Logic in implementing a similar "one push" approach).
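    Bruce's 20-track figure holds up to simple arithmetic. Assuming 16-bit, 44.1 kHz mono tracks (an assumption, since the post doesn't state a format), the required bandwidth is:

```python
SAMPLE_RATE = 44100    # Hz
BYTES_PER_SAMPLE = 2   # 16-bit audio
CHANNELS = 1           # mono track

# One track's data rate in bits per second: ~0.7 Mbps
bits_per_track = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS * 8

tracks = 20
total_mbps = tracks * bits_per_track / 1_000_000

print(round(total_mbps, 1))  # prints 14.1
```

    About 14 Mbps for 20 tracks - well under 100 Mbps even after real-world Ethernet overhead, which is why streaming rendered files across a cheap network works comfortably.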

  5. #5

    Re: OT: Offline bouncing/rendering

    Bruce,

    When you render the track in Giga, and then import it to Sonar, how do you line it up?

    When I do it that way, I have to use trial and error (zoomed in) until it is exactly right. That is why I have started recording the output of giga directly into Sonar. Then it lines up perfectly.

    -- Martin

  6. #6

    Re: OT: Offline bouncing/rendering

    thanks so much guys!

    I've used Gigastudio's capture-to-wave function for a while, but depending on how my computer's feeling on a given day (moody little bastard), sometimes I can't even get a DDBE line (w/ release trails and such) to play back without some sort of problems.

    But just to make sure I processed all the info right - essentially, with these "ice" features and such w/ VSTi's, it's like working with a 3D modelling/rendering app. So essentially, if your computer's not the greatest, you could hammer out a low-quality sketch of a track (w/out release triggers and such) and then render out the same track with the more intense samples, without any real regard to hard drive efficiency, sound card latency, etc.?

    Sorry about all the questions - but I want to make sure I understand. I'm pretty excited about EWQLSO and have been saving for it pretty hard, but when the time comes I won't have money for some GOD of a computer. This new (to me) info is making the future look brighter.

    Spencer

  7. #7
    Senior Member
    Join Date
    Jan 2002
    Location
    Dorset, UK
    Posts
    470

    Re: OT: Offline bouncing/rendering

    Originally posted by mschiff:
    Bruce,

    When you render the track in Giga, and then import it to Sonar, how do you line it up?

    When I do it that way, I have to use trial and error (zoomed in) until it is exactly right. That is why I have started recording the output of giga directly into Sonar. Then it lines up perfectly.

    -- Martin
    I'm not Bruce, but I always record the same initial cue along with every MIDI track I capture to wave, using percussion or harpsichord, which makes it quick and easy to line up the tracks, deleting the cues after mixing.
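    That percussion-cue trick can even be automated: find the first sample that crosses a threshold in each captured file and trim everything before it. A minimal sketch, assuming the audio has already been loaded as lists of float samples (the threshold value is an illustrative guess, not a recommendation):

```python
def cue_offset(samples, threshold=0.1):
    """Index of the first sample whose magnitude exceeds the threshold,
    i.e. the onset of the percussion/harpsichord cue."""
    for i, s in enumerate(samples):
        if abs(s) > threshold:
            return i
    raise ValueError("no cue found above threshold")

def align_to_cue(samples, threshold=0.1):
    """Drop everything before the cue so captures line up
    sample-accurately, no zoomed-in trial and error needed."""
    return samples[cue_offset(samples, threshold):]

# Two captures of the same cue, started at different times:
track_a = [0.0, 0.0, 0.9, 0.5, 0.1]
track_b = [0.0, 0.0, 0.0, 0.0, 0.9, 0.5, 0.1]
print(align_to_cue(track_a) == align_to_cue(track_b)[:5])  # prints True
```

    Recording the same sharp transient on every capture is what makes this reliable: a percussive attack gives an unambiguous onset to detect.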

  8. #8
    Senior Member
    Join Date
    Jan 2002
    Location
    Dorset, UK
    Posts
    470

    Re: OT: Offline bouncing/rendering

    Originally posted by Spenik:
    thanks so much guys!

    I've used Gigastudio's capture-to-wave function for a while, but depending on how my computer's feeling on a given day (moody little bastard), sometimes I can't even get a DDBE line (w/ release trails and such) to play back without some sort of problems.

    But just to make sure I processed all the info right - essentially, with these "ice" features and such w/ VSTi's, it's like working with a 3D modelling/rendering app. So essentially, if your computer's not the greatest, you could hammer out a low-quality sketch of a track (w/out release triggers and such) and then render out the same track with the more intense samples, without any real regard to hard drive efficiency, sound card latency, etc.?

    Spenik

    Wave capturing from Gigastudio is internal and does not use the soundcard.

    If you're having playback problems you may be having polyphony trouble. Which version of GSt are you using? What hardware? This is not going to be helped by changing your way of recording/capturing.

  9. #9

    Re: OT: Offline bouncing/rendering

    Hey Spenik,

    \"But just to make sure I processed all the info right - essentially with these \"ice\" features and such w/ VSTi\'s, it\'s like working with a 3D modelling/rendering app. So essentially if your computer\'s not the greatest, you could hammer out a low quality sketch of a track (w/out release triggers and such) and then render out the same track with the more intense samples without any real regard to hard drive efficiency, sound card latency, etc? \"

    As far as Gigastudio is concerned, the PC still has to be able to play at least the part you want to render in realtime. If it can, then you render your track, mute the midi parts and listen to it as audio in the sequencer.

    If your PC can't handle the Giga part in realtime without blips or undue latency, I don't think a capture to disk will necessarily solve things (although in some cases taking the soundcard out of the equation may help).

    The main point is that if your PC can handle 12 audio tracks plus 12 Giga parts better than it handles 24 live Giga parts, then you're better off turning some of those live Giga parts into audio tracks.

  10. #10

    Re: OT: Offline bouncing/rendering

    Lee's right, the freeze track thing is phenomenal. It may seem a simple thing, but it'll be incredibly useful. If I want to turn a single track to audio the 'old' way, I have to solo it, export, and have it come back into the sequencer as a new audio track. Then I'd mute the MIDI part, enable the audio, and copy over to the new audio track whatever settings/plugs/panning I might have had (probably nothing, but still). If I wanted to change several or all of the tracks to audio, I'd probably want to create a new project entirely that contained just the audio tracks, otherwise things would get too cluttered with all the MIDI parts there as well.

    Add to that the fact that I wouldn't want to export to audio until the part was right. With the freeze track I could freeze it even if it was just a rough sketch of a part and I wanted to hear how it would basically sound.

    Let's not forget the most obvious advantage: effectively unlimited processing. I've raved on about the advantages of limiting polyphony per instrument, quality settings etc. in the past, but this just takes the biscuit! Orchestrations could literally be as large or small as you like - on just one machine. If you're running out of CPU/HD streaming horsepower, then simply 'ice' the track. One minute it's 30 notes running through the CPU and streaming, and the next minute it's taking virtually no CPU and streaming just 2 notes at once...
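    The streaming saving is easy to put a number on. Assuming each live voice streams one mono 16-bit, 44.1 kHz stream from disk (a simplification of how disk streaming samplers actually buffer), a frozen track only streams its rendered channels:

```python
SAMPLE_RATE = 44100    # Hz
BYTES_PER_SAMPLE = 2   # 16-bit audio

def stream_bytes_per_sec(streams):
    """Disk bandwidth needed to stream this many mono 16-bit streams."""
    return streams * SAMPLE_RATE * BYTES_PER_SAMPLE

live = stream_bytes_per_sec(30)   # 30 sounding voices, all streaming
frozen = stream_bytes_per_sec(2)  # one rendered stereo audio track

print(live // frozen)  # prints 15
```

    Under those assumptions, freezing a 30-voice passage cuts its disk streaming load by a factor of 15, on top of eliminating the per-voice DSP cost.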

    It's an incredible new feature and it's making me consider jumping ship more than ever.

    Originally posted by Chadwick:
    Hey Spenik,

    \"But just to make sure I processed all the info right - essentially with these \"ice\" features and such w/ VSTi\'s, it\'s like working with a 3D modelling/rendering app. So essentially if your computer\'s not the greatest, you could hammer out a low quality sketch of a track (w/out release triggers and such) and then render out the same track with the more intense samples without any real regard to hard drive efficiency, sound card latency, etc? \"

    As far as Gigastudio is concerned, the PC still has to be able to play at least the part you want to render in realtime. If it can, then you render your track, mute the midi parts and listen to it as audio in the sequencer.
    Well, I think with regard to the 'ice' feature we're talking about using EXS24, are we not? I don't know that this can be done with Giga 'cos it's such a separate entity. No need for realtime rendering with EXS24.
