Dear Videosync users, I would like to share with you an ongoing project I call "Tabula". It uses Videosync to manipulate visuals with the sounds of your voice. It's a kind of art installation; I'll show it to an audience next week, and they will be invited to interact. If you are interested and want to see some videos of it in action, follow the link. And let me know what you think, that would be great.
Wow, I love the idea of using physical paint as a monochromatic input source and then colorizing it according to the sound! Also, the analog "air gap" between the output image (iPad) and the camera is fascinating. It almost invites the idea of analog post-processing.
Did you use the Tabula plugin for this? It would also be interesting to know how your audio is connected to the Videosync parameters.
Hi Mattijs, thanks a lot! And funny: I didn't have in mind that Tabula is the name of your great plugin, but of course... The name comes from the tabula I've been drawing on for over ten years now.

I started testing the coloring with your Tabula plugin, but I ended up with ColorControl, which is of great value to me because it gives me very good control (as the name says). I have two layers with the same image input: layer 1 has blend mode Screen and layer 0 is inverted. It works a bit like a mask, and this way I can color the same image with two colors that I can adjust 100%.

Sound analysis is done with a cool M4L plugin called Sektor by David Johannes, together with Gate & Pitchtracker. The values are mapped with Expression Control, and I use only incremental parameters; this way you can use your voice as a step controller.

BTW: the iPad is connected by cable, and I use an app called AirServer for streaming via Apple's AirPlay technology. I also picked up a Swift project with camera access so I could make my own button that takes the screenshot in Syphon Recorder via MIDI. A wild thing altogether...
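For anyone curious why the two-layer setup can tint one monochrome image with two independent colors, here is a minimal pixel-math sketch of my reading of it (not the actual Videosync internals): layer 0 is the inverted image tinted with color A, layer 1 is the original image tinted with color B, and the two are combined with the standard Screen blend formula. The function names and example colors are mine, purely for illustration.

```python
# Sketch of the invert + Screen-blend coloring trick, assuming channel
# values in 0..1. This is an illustration, not Videosync's implementation.

def screen(a, b):
    # Standard Screen blend mode: 1 - (1 - a) * (1 - b), per channel.
    return 1.0 - (1.0 - a) * (1.0 - b)

def colorize(v, color_a, color_b):
    """v: monochrome brightness 0..1; returns an RGB tuple.

    Dark areas of the input end up in color_a, bright areas in color_b.
    """
    layer0 = tuple((1.0 - v) * c for c in color_a)  # inverted image, tinted A
    layer1 = tuple(v * c for c in color_b)          # original image, tinted B
    return tuple(screen(x, y) for x, y in zip(layer0, layer1))

# Black pixels take color A, white pixels take color B:
print(colorize(0.0, (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # -> (1.0, 0.0, 0.0)
print(colorize(1.0, (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # -> (0.0, 0.0, 1.0)
```

Because Screen leaves a channel unchanged when the other input is 0, the inverted layer fully determines the dark regions and the original layer the bright ones, which is why the two colors stay independently adjustable.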
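And to make the "voice as step controller" idea concrete, here is a hypothetical sketch of what mapping to an incremental parameter means: each gate trigger from the pitch tracker bumps the parameter one step instead of setting an absolute value. The class and the hue values below are assumptions of mine, not part of Expression Control's API.

```python
# Hypothetical step-controller sketch: every trigger advances the
# parameter by one step (with wrap-around), so a voiced gate event
# behaves like pressing "next" rather than jumping to an absolute value.

class StepController:
    def __init__(self, steps, start=0):
        self.steps = steps      # ordered parameter values to cycle through
        self.index = start      # current position in the step list

    def trigger(self, direction=+1):
        # Move one step forward (or backward) and wrap around the list.
        self.index = (self.index + direction) % len(self.steps)
        return self.steps[self.index]

hue_steps = [0, 60, 120, 180, 240, 300]  # example hue values in degrees
ctl = StepController(hue_steps)
print(ctl.trigger())  # -> 60: first gate event advances from 0 to 60
```

The appeal of the incremental mapping is robustness: a noisy pitch value only decides *whether* to step, not *where* to land, so the parameter never jitters between distant values.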