Join me to make an Interactive Music & Sound piece!


#1

Hey gang,

I’m pretty new here and excited to be exploring possibilities in this new world!

Some of you may remember me from my work in SL as Dizzy Banjo. I have also done a fair bit on iOS working on unusual projects like Inception the App and RjDj. Here is a bit more about me: http://dizzybanjo.wordpress.com/

I would love to explore the possibilities of low-latency interaction with music and sound in this new virtual environment. Unfortunately, my workload is very intense right now and I can’t find the time to make it all myself.

Would anyone here like to collaborate with me to make it? Perhaps someone who could do the visuals / building and someone who could do the scripts? I could contribute interaction designs and sounds/music.

I’d love to do something abstract which explores the unique capabilities of this environment. The idea I had was to convert face-tracking and gestural control data into direct manipulation of “musical voxels”, each playing notes or sounds as the avatar manipulates them… to explore what it would be like to hear the music of your smile :smile: !


#2

Hi @RobertThomas, welcome! @KevinMThomas, @Judas and @Krysania have been playing a lot of music and would probably love to get involved.

Chris


#3

Indeed we would! YAY!


#4

@chris thanks for the pointers!

@KevinMThomas cool - are you a scripter or builder?

When I get a chance I will do some diagrams and notes about some ideas. But does anyone know if the following would be possible (rough sketch of what I mean below):

  • hook the size and colour of voxels (like all of the voxels in a scene) to data from the face-tracking / hand-movement sensor?
  • hook the size and colour of voxels to trigger playback of different sound files?
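
To make that concrete, here is a rough sketch of the kind of mapping I mean. I’m assuming High Fidelity’s JavaScript interfaces here (Entities, SoundCache, Audio and Script), using a box entity as a stand-in for a voxel; getSmileAmount() is a placeholder for whatever the face-tracking / hand sensor actually exposes, and the sound URL is made up:

```javascript
// Sketch: drive one "musical voxel" from a tracked control value.
// getSmileAmount() is a placeholder for the real sensor input.

var noteSound = SoundCache.getSound("http://example.com/sounds/note-c4.wav");

// One box entity standing in for a musical voxel.
var voxel = Entities.addEntity({
    type: "Box",
    position: { x: 0, y: 1, z: 0 },
    dimensions: { x: 0.2, y: 0.2, z: 0.2 },
    color: { red: 0, green: 128, blue: 255 }
});

function getSmileAmount() {
    return 0.0; // placeholder: a 0..1 value from face tracking
}

var wasOverThreshold = false;

Script.update.connect(function (deltaTime) {
    var smile = getSmileAmount();

    // 1. Hook the voxel's size and colour to the tracked value.
    var size = 0.2 + smile * 0.8;
    Entities.editEntity(voxel, {
        dimensions: { x: size, y: size, z: size },
        color: { red: Math.round(smile * 255), green: 128, blue: 255 }
    });

    // 2. Trigger sound playback when the value crosses a threshold.
    var isOver = smile > 0.5;
    if (isOver && !wasOverThreshold) {
        Audio.playSound(noteSound, {
            position: Entities.getEntityProperties(voxel).position,
            volume: smile
        });
    }
    wasOverThreshold = isOver;
});
```
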

#5

@RobertThomas - I dabble in a bit of everything:

Voxel Building/Scripting - https://gist.github.com/kevinmthomas-carpool

Blender - http://kevintown.net/projects.html

To my knowledge, right now you need to use a mesh (.fbx) to communicate with Faceshift; however, I would think it will be possible at some point to connect metavoxels to the blendshapes.
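
If that bridge ever appears, I would imagine the script side looking something like this. To be clear, getBlendshapeCoefficient() is a made-up stub (as far as I know there is no script call for reading Faceshift coefficients yet), though “MouthSmile_L” is one of the standard Faceshift blendshape names:

```javascript
// Purely speculative: bridging Faceshift blendshapes to metavoxels.
// getBlendshapeCoefficient() is a hypothetical stub; today Faceshift
// data only drives the avatar's .fbx mesh directly.

function getBlendshapeCoefficient(name) {
    return 0.0; // would return the 0..1 Faceshift value for this blendshape
}

Script.update.connect(function (deltaTime) {
    var smile = getBlendshapeCoefficient("MouthSmile_L");
    // A metavoxel call could then recolour voxels from the smile, e.g.
    // something like Voxels.setVoxel(x, y, z, scale, red, green, blue).
});
```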


#6

Hey @KevinMThomas, sorry for the time lag in replying!

So perhaps it might be more interesting to do something with the avatar mesh.

Do you know if it’s possible for the avatar mesh itself to emit sounds (aside from those from the mic)?
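
For context, the kind of thing I’m imagining is below, assuming a script can inject spatialized audio at the avatar’s position via SoundCache, Audio and MyAvatar (the sound URL is made up):

```javascript
// Sketch: a script-injected sound positioned at the avatar, so it
// sounds as if the avatar mesh itself were emitting it.

var hum = SoundCache.getSound("http://example.com/sounds/hum.wav");

function emitFromAvatar() {
    Audio.playSound(hum, {
        position: MyAvatar.position, // spatialized at the avatar's location
        volume: 0.5
    });
}

emitFromAvatar();
```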


#7

Hi again. Not sure if this project is still interesting for anyone, but I thought I’d revive it as there is a new technology available now which could be very interesting for High Fidelity. https://enzienaudio.com/ is a service which compiles Pure Data patches to C and also to JS. This means people could author quite elaborate DSP code in a realtime visual programming environment which is very good for music and sound work, and deploy it within High Fidelity. If anyone would like to try a test project like this (perhaps handling all of the visual / 3D content), I’d be interested in trying it!
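
To sketch how the compiled output might be wired up: I haven’t tried the JS target myself yet, so HeavyPatchLoader, start() and sendFloatParameterToReceiver() below are placeholder names for whatever the generated code actually exposes; the point is just that a Pd patch becomes a JS object a script can feed parameters into:

```javascript
// Hypothetical sketch of driving an enzienaudio-compiled Pd patch from JS.
// All names here are placeholders; the real generated API may differ.

var patch = new HeavyPatchLoader("myMusicPatch"); // compiled from the .pd file
patch.start(); // begin realtime audio processing

// A [receive smile] object in the Pd patch would pick this value up,
// so face-tracking data could modulate the DSP directly.
function onSmileChanged(amount) {
    patch.sendFloatParameterToReceiver("smile", amount);
}
```

r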


#8

Hi @RobertThomas & @KevinMThomas !

We’ve been working (www.taxi.design) on a virtual domain to act as a virtual gallery exhibiting works of art for The Wrong Digital Art Biennale. I’d be interested in collaborating with both of you and making it part of the online and offline exhibit taking place at Espacio Pla in Buenos Aires, Argentina!


#9

Also, we have 3D modelers and sound designers who are part of our team. So we would mostly need help with scripting…


#10

Hi there,

This sounds interesting. I’m very busy with work at the moment, but perhaps you could email me at dizzybanjo@gmail.com with details of timescale and budget?

Best,

Rob