I posted this on the Gigasampler email wish-list around 3 months ago, but since I've not heard anything back, I've assumed they have busily beavered off to patent the idea.
It goes like this (bear with me here):

If GS could integrate a decent sequencer into the Gigastudio, there is no reason why they could not give us an environment with virtually infinite (well...) polyphony and tracks.

In the current world, a sequencer tends to contain two types of data: MIDI and audio. The MIDI data may drive a synth or, in the case of GS, a sampler. Now, if the sequencer sat within the Gigastudio environment, and therefore had intimate knowledge of and control over the GS engine, there is no reason why it couldn't determine which MIDI tracks were driving the sampler and 'calculate' what those tracks should play as a background task, rather than playing them in real time and using up polyphony.
Think about it: the only part of a sequenced performance that is ever 'indeterminate' is the live playing, which is where you need a lot of polyphony.
Once the performance has been laid down, it becomes completely determinate, albeit freshly determinate each time the MIDI notes are tweaked manually.
Why, therefore, is it not possible for the sequencing package to be configured to automatically 'capture to wave' as a background task whilst the musician is away making a cup of tea, or deciding what to do next?
Knowing that a MIDI track wants to play a certain cello Giga instrument should be enough to generate an audio image of that entire track, surely?
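To make the idea concrete, here is a minimal sketch of that background 'capture to wave' step. Everything in it is hypothetical: `render_track`, the event format, and the sine oscillator (standing in for the Giga sampler's disk-streamed voices) are my illustration, not anything GS actually exposes. The point is only that once the notes are fixed, the whole track can be computed offline:

```python
import math

SAMPLE_RATE = 22050  # reduced rate, just for the sketch


def render_track(events, length_seconds):
    """Render a determinate MIDI track to an audio buffer offline.

    Each event is (start_sec, dur_sec, freq_hz, amplitude). A sine
    oscillator stands in for the sampler's voices; because the notes
    are known in advance, this can run as a background task instead
    of consuming real-time polyphony.
    """
    buf = [0.0] * int(length_seconds * SAMPLE_RATE)
    for start, dur, freq, amp in events:
        first = int(start * SAMPLE_RATE)
        last = min(len(buf), first + int(dur * SAMPLE_RATE))
        for i in range(first, last):
            t = (i - first) / SAMPLE_RATE
            buf[i] += amp * math.sin(2 * math.pi * freq * t)
    return buf


# 'Capture to wave' while the musician makes a cup of tea:
cello_track = [(0.0, 0.5, 220.0, 0.4), (0.5, 0.5, 246.9, 0.4)]
frozen = render_track(cello_track, 1.0)
```

When a note is later tweaked, the track becomes determinate again and simply gets re-rendered in the background.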
You could extend this further by then saying:
\"well - now that I\'ve generated an audio image for this track, why not now combine tracks that have not changed for a while, therefore saving CPU time?\"
It is all about making the sequencer a bit intelligent, using easily available disk space to store buffers of combined audio tracks.
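The combining step is even simpler: tracks that haven't changed for a while collapse into one buffer on disk, so the engine plays a single stream instead of many. Again a hypothetical sketch (`combine_tracks` is my name, not a GS function), assuming the frozen tracks are plain sample buffers:

```python
def combine_tracks(buffers):
    """Mix several already-frozen audio buffers into one.

    Buffers may have different lengths; shorter ones simply
    contribute silence beyond their end. Edit one track later and
    only that track needs re-rendering and re-mixing.
    """
    length = max(len(b) for b in buffers)
    mixed = [0.0] * length
    for b in buffers:
        for i, sample in enumerate(b):
            mixed[i] += sample
    return mixed


mixed = combine_tracks([[0.1, 0.2], [0.3, 0.4, 0.5]])
```

One combined buffer costs the engine one stream of disk bandwidth, however many frozen tracks went into it.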
I understand that there is a reasonable amount of processing power involved in something like this, but there is no reason why you could not select which tracks it applies to.
Think about it: this would, in theory, always give you maximum polyphony, in addition to a number of audio tracks.
You could even extend the principle to the effects...
Of course, you would make it look to the user as though it were all working in real time.
What do you think?