Latency will be an issue for serious jamming


This video has nothing to do with Hifi except that there are 2 guys playing together from 2 different places. I chose this clip because firstly, bass carves ass like nothing else on the planet, and secondly, the speed these guys' head metronomes are running at is blistering.

This jam represents what I want to do in Hifi, but I fear I will only ever get to jam with people in Australia due to the latency currently imposed by sheer distance alone.

We have almost reached a point where we can actually stream music inworld (almost). Is my dream of jamming with others in a live sense doomed until faster-than-light transmission is achieved?

The fact that these 2 guys are both playing bass guitars is just an absolute bonus; get your head around this funk.


Yes, it's doomed. Latency over about 4ms makes jamming horrible. One option is the chaining method, where each person down the chain adds to the tune.
If you could figure out a way to be exactly 12 bars behind, then again you could fake it.
Then there's the Milli Vanilli method, where everyone mimes :smiley:

There may be a 3rd way: if we could calculate the delay each player had, then something with a click track… ah no, still won't work.
Maybe something using a stable wormhole?
ooh ooh
i got it
everyone just play jazz and call it syncopation


What needs to be done is to set up the audio mixers used by the musicians to deliver the same latency to everyone, because with equal latency there is half a chance for people to work out decent chaining.

Without equal latency for all players, it’s pretty much impossible and quite uncomfortable.

And yes, once quantum entanglement modems are available, the latency problem will be mostly solved.



There is no way you could depend on the latency being regular enough to rely on it; it's going to fluctuate over time.
But if you forced the delay to a manageable frame, I can see a path, and without using quantum theories, but it's complex.

Each input (singer, guitarist, drummer, whatever) plays and transmits an embedded timecode with the signal. Someone with the right skills builds a system that captures everybody's audio along with their timecodes, syncs them all up, allowing, say, 3 seconds to account for most circumstances, then sends the mix to each player minus that player's part, and sends the entire synced, mixed result to spatial audio. There would have to be separate audio channels to and from each player.

Each player hears everyone else in sync and has to mix his own sound locally with the delivered sound to jam with; he doesn't listen to the server's copy of his own sound, and only sends his sound with timecode to the synchronizer.

Like a compressed chain: to the audience you are live, but to each player you are playing along with what seems like a pre-recorded track, and there is no immediate human feedback. If you want to change the groove, it's going to take a few cycles.
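The timecode-and-fixed-delay scheme above can be sketched as a toy model (my own sketch in Python; the 3-second window, the per-frame numbers, and every name here are assumptions for illustration, not anything HF actually ships):

```python
DELAY_MS = 3000  # fixed look-back window (the "3 seconds" mentioned above)

class Synchronizer:
    """Toy model of the timecode-sync idea: each player sends
    (timecode, sample) frames; the server mixes every frame stamped
    DELAY_MS in the past and returns, to each player, the full mix
    minus that player's own part."""

    def __init__(self, players):
        self.players = players
        # per-player buffer keyed by timecode in ms
        self.buffers = {p: {} for p in players}

    def receive(self, player, timecode_ms, sample):
        self.buffers[player][timecode_ms] = sample

    def mixes_at(self, now_ms):
        """Return (per-player monitor mixes, full audience mix) for the
        frame DELAY_MS behind 'now'. Missing frames count as silence (0)."""
        t = now_ms - DELAY_MS
        parts = {p: self.buffers[p].get(t, 0) for p in self.players}
        full = sum(parts.values())
        minus_one = {p: full - parts[p] for p in self.players}
        return minus_one, full
```

So with two bassists, each monitor mix is just the other player's part, while the audience mix carries both, always 3 seconds behind real time.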

But there’s the software development to think about, and the channels. There’s a project for someone, shall I put it on the worklist? lol

I'd like Judas's idea of waiting 12 bars and then coming in, if it wasn't for my hatred of 12-bar blues lol.

You could pick any song that just repeats a short loop and come in wherever you feel like it. I wonder how that would sound to the listeners; I would prob still be out of sync because of my own latency. :confused:


That is an old-school approach, doing seconds-long buffering. VOIP can do a fine job maintaining jitter-free latency within 50-100ms (there are existing techniques to do this), and a lot of the infrastructure is already in HF's audio mixer architecture. The key is identifying a geo-location where an audio mixer can give approximately the same latency to all participants (the musicians). Listeners do not need control that tight.
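The de-jitter idea behind that 50-100ms figure can be illustrated with a toy model (my sketch, not HF's code): packets arrive with variable network delay, and the buffer plays them out on a fixed schedule, trading a small constant latency for stability.

```python
def playout_times(send_times, arrival_times, buffer_ms):
    """Toy de-jitter buffer. Packet i, sent at send_times[i], is scheduled
    to play at send_times[i] + buffer_ms; it is late (dropped) if it
    arrives after its scheduled slot. All times in ms."""
    played, dropped = [], []
    for sent, arrived in zip(send_times, arrival_times):
        target = sent + buffer_ms
        if arrived <= target:
            played.append(target)   # plays on schedule, steady latency
        else:
            dropped.append(target)  # too late for its slot
    return played, dropped
```

A bigger `buffer_ms` absorbs more jitter (fewer drops) at the cost of higher fixed latency, which is exactly the knob a VOIP stack tunes.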

A weaker alternative is to find the highest-latency leg, wherever the mixing points end up, keep the mixers there, and then add delays on the shorter legs to match the longest latency.