Hello. Big newbie. FrameLib is awesome and I’m interested in using the multi-stream operators to trigger grains at offsets nabbed from a FluCoMa buffer.
The idea is to grab slice-buffer indices from a list that grows and shrinks in size and create one grain voice per offset. I’m trying to wrap my head around this sort of polyphony in FL.
I must be misunderstanding how this is supposed to work, because the density stays the same when I have fewer slices:
I don’t see anything particularly wrong with your patch, although I’d expect it to put 3 streams on the left and 2 on the right at the output (because streams run modulo).
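To make the modulo point concrete, here’s a quick sketch in plain Python (not FrameLib code, just the distribution logic as I understand it applies here):

```python
# How 5 streams distribute across 2 outputs when streams run modulo:
# stream i goes to output i % num_outputs.
num_streams = 5
num_outputs = 2

outputs = {o: [] for o in range(num_outputs)}
for stream in range(num_streams):
    outputs[stream % num_outputs].append(stream)

print(outputs)  # {0: [0, 2, 4], 1: [1, 3]} - 3 streams left, 2 right
```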
Two general tips I’d say are:
1 - I think setting enum parameters with symbols is better practice (I’ve been slightly tempted to remove setting by numbers, because it makes it harder to change objects later, but I haven’t pulled the trigger on it yet).
2 - Rather than reading the slices using the ms input (which needs conversion via fl.samplerate~ before fl.read~), I’d just set the units of fl.read~ to samples. I’d also turn interpolation off for that fl.read~ to make sure you get the exact sample you ask for (the accuracy means that’s probably going to happen the way you’ve done it, but it’s conceptually nicer to set the object up this way).
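For the units point, the conversion fl.samplerate~ is doing for you amounts to a one-liner (sketched in plain Python below, assuming a 44.1 kHz sample rate for the example), which is part of why it’s conceptually nicer to skip it and work in samples directly:

```python
# The ms -> samples conversion that the fl.samplerate~ stage performs:
def ms_to_samples(ms, sample_rate=44100.0):
    return ms * sample_rate / 1000.0

print(ms_to_samples(500))  # 22050.0 samples at 44.1 kHz
```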
Right now my personal build of framelib is not complete, so I’ll look again once I’ve fixed that and see if I can understand why you aren’t getting the sort of results you want.
Yep - I’ve just checked and this does exactly what I’d expect here - I get 3 sounds on the left (each triggering randomly) and 2 on the right (also triggering randomly).
Because the triggers are so fast and the grains so short (a 1899-sample grain every 2500-4500 samples) it’s a bit hard to differentiate the sounds, but it does happen. You’ve also only connected the left output, so you’re only hearing 3 of the 5 layers in your patch.
Let me know if you’re still struggling to understand and we can talk further or even set up a call to go through it if needed.
Hey! Thank you for the speedy response. Yes, I definitely don’t quite understand multistreams yet. Big improvement.
My main issue now is how to add and remove voices. Here’s the idea - I’m using fluid.plotter with a radius to pick up some number of samples. I want one voice to be created/removed for each point the radius picks up - for now, if I have one point selected, it plays that point in every stream created. My guess is I want to mute the streams beyond the count of selected points, but I’m not sure. Hopefully that makes sense.
I’d certainly be down for a call to work through this if you’re up for it. Thanks again!
Yes - in terms of muting streams you have a few options, depending on what you want to happen. The most obvious is to start and stop the fl.interval~ in each stream (you need to set the /switchable parameter to allow this).
Essentially the multi-stream approach here is like having (in this case) 9 copies of the patch - you can think of it as a convenience in terms of not having to make and deal with those copies, but it would be possible to do the same things without multi-stream capabilities if you coded it manually.
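If it helps to see the “copies” idea outside of Max, here’s a rough mental model in plain Python (purely illustrative - nothing here is FrameLib-specific):

```python
# A 9-stream network behaves like 9 independent copies of the patch,
# each with its own state and its own timeline.
import random

class Voice:
    """One 'copy' of the patch: its own interval, its own clock."""
    def __init__(self, stream_index):
        self.stream = stream_index
        self.time = 0.0

    def tick(self):
        # like a per-stream fl.interval~ firing at a random interval
        self.time += random.uniform(2500, 4500)
        return (self.stream, self.time)

voices = [Voice(i) for i in range(9)]
print([v.tick() for v in voices])  # each stream triggers independently
```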
Send me an email off list or direct message here (I think that will trigger an email to me) if you’d like to fix a time for a call.
I’m just having a similar nut to crack… very much a beginner trying to understand how overlapping grain playback with the FluCoMa concat plotter would work with FL as the grain-playback engine… maybe you found a solution in the meantime and care to elaborate?
Here’s something quick and nasty, without any of the plumbing in between that does the map / analysis etc. But the principle is there to build the infrastructure you need around this.
This is great @james.bradbury! Thanks for taking the time.
I somehow missed your post today while I was working on the OLA version, and now I find it very interesting to see/follow how you did the parts in FL that I did in normal Max…
Just in case: I’m referring to my mod of your corpus explorer patch over here:
A question about simultaneity of frames in FrameLib:
Is it correct that as long as grains overlap but didn’t start at the very same time, fl.sink~ can handle it just fine… but if I’d like to have 3 simultaneous instances of a grain at different pitches (each one octave apart), I’d need poly~ to accomplish it?
You don’t need to manage polyphony at all, which is one of the great things about FL. Instead, if a new frame hits the fl.sink~ while another one is “playing”, the new one is added to the old one. This is fundamentally no different from how a poly~ output works anyway, as usually you just have a single out~ 1 where all the voices sum.
I’ve tried to visualise how that works for you below. Time goes from left to right and the big squares represent frames as they would be generated. The first two frames don’t overlap at all because the first is already over by the time the second arrives. The second and third frames however do have some overlap, shown with the red overlay. For “frame 3”, the red portion would be added to “frame 2”.
It shows how increasing the length of the fl.uniform~ (in practice the duration of a grain) makes the output larger, because the frames are being overlapped and added at the output.
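If a mental model in code helps, the summing behaviour in the picture boils down to something like this (plain Python, purely conceptual - not how fl.sink~ is actually implemented):

```python
# Overlap-add at the output: each frame is summed into the output
# buffer at its start time, so overlapping regions add together.
output = [0.0] * 16

def sink(frame, start):
    for i, sample in enumerate(frame):
        output[start + i] += sample

sink([1.0] * 4, 0)  # frame 1: over before frame 2 arrives
sink([1.0] * 4, 6)  # frame 2
sink([1.0] * 4, 8)  # frame 3: overlaps frame 2 by 2 samples

print(output)
# [1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]
# the 2.0s are where frames 2 and 3 overlap and sum
```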
While we’re at multistreams/polyphony/voices: let’s say I’d like to track 3 kdtree neighbours from a corpus and play them all at once via the plotter.
To see how this type of overlapping stream sounds at all, and how it can be done in FrameLib, I modified @james.bradbury’s “quick and nasty” example patch as a starting point. The input is a list of 3 integers that refer to 3 different slice positions. This list is iterated, so I basically send 3 pairs of integers (slice n and slice n+1) to [fl.read~ slices].
For some reason it only plays the last item of the list. Why? Did I miss something?
Maybe I’m still missing something important, but I’m not able to start multiple grains at the same time.
From my tests it seems that if the interval between the grain starts is smaller than 20 ms, it only plays the last in a row. I would like to understand what happens here.
I’ve appended the patch (adapted from @james.bradbury) with 3 different ways of iterating my list (of neighbour indices).
One version uses the iter object, one a deferlow’d uzi (basically a slightly slower iter), and one an iter with a 25 ms interval between iterations. Only the last works.
Maybe @a.harker can shed some light on it?
In the middle of grant writing so no time today to look at the patch, but framelib can trigger grains in extremely rapid succession in one stream (a tiny tiny fraction of a sample apart).
In a single stream you cannot have simultaneous grains - for that you have to consider “polyphony” and multiple streams.
There are thus two options for simultaneous (or near simultaneous) grains.
1 - you have a multi stream network
2 - you trigger grains a tiny distance apart with some suitable method.
So the first thing is to confirm whether you are doing 1 or 2. If you are and it doesn’t work, there’s an implementation issue. If you’re not, then it’s a broader conceptual issue.
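To put the two options side by side in plain Python (conceptual only - the trigger times here are just numbers, not FrameLib schedules):

```python
# Option 1 - multiple streams: one trigger per stream at the same instant.
stream_triggers = [1000.0, 1000.0, 1000.0]  # 3 streams, sample 1000

# Option 2 - one stream: triggers a tiny fraction of a sample apart.
gap = 0.0001  # samples
single_stream_triggers = [1000.0 + i * gap for i in range(3)]

# For practical purposes the option-2 triggers all land on the same
# output sample once rounded to the sample grid.
print([round(t) for t in single_stream_triggers])  # [1000, 1000, 1000]
```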
Thanks Alex, that already gives me a rough idea and a direction.
The grain time distance (25 ms) already worked, but with 8 grains it adds up to 200 ms, and when triggering a big bunch of grains with lots of transients it would be nice to have them in dead sync… I will look closer into the multistream stuff… take care
So here’s an example of 2 (a super short gap of 0.0001 samples between sounds). I can’t remember (given that fl.sink~ is not set to interpolate by default) whether all the grains will trigger on exactly the same sample, or whether the first one might be a single sample earlier than the others, but one of those things will happen, which for most purposes means they are simultaneous. I could dig into the detail if needed.
I’ve just checked, and in practice with interpolation off for fl.sink~ the grains are exactly simultaneous in the above example. There is a limit where enough triggers together make the combined intervals sum to more than half a sample, at which point the grains get split across samples at the output, but that depends on the gap between them (here 0.0001 samples, which is enough for 5000 simultaneous grains) - you can make the gap smaller if you need some ridiculous number of simultaneous grains.
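The half-sample limit is easy to check for yourself (plain Python, just the arithmetic):

```python
gap = 0.0001  # samples between consecutive triggers

# Grains stay on the same output sample while their combined offset is
# under half a sample (beyond that, rounding tips onto the next sample).
print(round(0.5 / gap))      # 5000 grains before they split across samples

# A smaller gap buys proportionally more simultaneous grains.
print(round(0.5 / 0.00001))  # 50000
```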
Thanks for the patch and the explanation. That makes sense…
I read more about multistreams and just tested something that I thought might be easy: creating 4 interval instances that get their interval values from a list.
Hmm, it doesn’t work as I expected: I’m sending the list into a perblock’d fl.frommax~ =4.
Or is there some special treatment of the list values needed when using multistreams?
The bigger goals behind this:
- looping a variable number of grains/neighbours simultaneously from a corpus, each instance looping on its own…
- setting the amplitude of the grains/neighbours according to their distance from the query, so grains further away have lower amplitudes.
Off the top of my head this starts with calculating the length of each grain and setting the fl.intervals properly. Further down the line I would somehow unpack the multistream into individual streams to set the amps. More tests in the next days…
Any hints are appreciated. Peace
The issue here is that you’ve made an assumption about how lists going into fl.frommax~ will behave that sadly doesn’t match how the object works. Lists don’t get split across streams, because lists going into fl.frommax~ are converted to frames (which is very useful for other things), and there’s no way in the patch you’ve made of clearly differentiating between wanting separate values sent to different streams and wanting them grouped into a frame - so all 4 numbers go to all 4 streams in this example.
I could see an argument for fl.frommax~ having multiple inputs when there are multiple streams (to mirror what happens with fl.tomax~) and I’d consider that for a future version, but for now you’d need a way of splitting a frame to streams, or of sending a different number to each stream. So - that could be 4 fl.frommax~ objects each taking a different number, or you could split a frame to 4 streams once you’re in framelib land - I’ve done the second of these below.
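Here’s the broadcast-vs-split distinction in plain Python, in case it helps (purely conceptual - not what the objects literally do internally):

```python
values = [250, 500, 750, 1000]
num_streams = 4

# What happens now: the list becomes one frame, sent to every stream.
broadcast = [values for _ in range(num_streams)]
print(broadcast)  # all 4 numbers go to all 4 streams

# What you wanted: one value per stream, which is what splitting the
# frame to streams (or using 4 separate fl.frommax~ objects) gets you.
split = [[v] for v in values]
print(split)  # [[250], [500], [750], [1000]]
```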
Is there also a way to dynamically change the number of streams by checking the number of list items?
When I send a smaller list it won’t update to, say, 3 or 2 streams but will stay at 4.
Stream counts in a network are fixed - you cannot change them dynamically. You can have streams stay silent by controlling them in some appropriate fashion (such as using a switchable fl.interval~ and turning it off), but you cannot change the number of streams on-the-fly - that only changes when you repatch. In this way the technique is similar to using poly~, where you pick a max number of voices and then manage them appropriately to your needs.
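As a sketch of that voice-management idea in plain Python (conceptual - the on/off flags stand in for whatever you use to drive a switchable fl.interval~ per stream):

```python
MAX_STREAMS = 4  # fixed when you build the network, like poly~ voices

def stream_states(selected_count):
    # One on/off flag per stream: the first `selected_count` streams
    # run, the rest stay silent but still exist in the network.
    return [1 if i < selected_count else 0 for i in range(MAX_STREAMS)]

print(stream_states(4))  # [1, 1, 1, 1]
print(stream_states(2))  # [1, 1, 0, 0] - streams 3 and 4 stay silent
```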