![Still from the experience of a single performer]()
Time is a strange animal: our relationship to it shifts with how we perceive the future or the past, and our experience of the now is often clouded by what we expect to have to do soon or by reflections on what we did some time ago. The same ideas find their way into how we program machines and how we expect operations to happen: I need something from the past to happen at the present moment. That sounds simple enough on the face of it, but how do we think about it when we are programming? This problem challenged me most while creating Self, and I am glad I arrived at a clear expression of what I meant by it.
At first sight, the name of the project might read as a nod to the selfie: the supposedly addictive practice of photographing one's own image. Project Self certainly refers to that, while also being an interactive and immersive experience. The user is presented with an image of their own outlining contours, but it does not disappear after a movement or a change of posture. It remains there, and the user instantly becomes a stranger to their past images. What is left on the screen is what the user created milliseconds ago, while the present state of the posture is continually being generated on top of it. It is like a painting experience in which a continuum of harmonious colors and shapes appears automatically; all you have to do is move. We made it possible for the user to paint something onto the screen very easily, but also very temporarily. The joy of painting with harmonious colors and bodily shapes comes easily in the digital age, we say, but it does not last long. Therefore, all you have to do is enjoy the moment you are in.
What makes Project Self possible, with its interactive and user-sensing capabilities, is the combination of a Kinect sensor and TouchDesigner, a software developed by the Canada-based company Derivative. The device we used was the Xbox 360 version of the sensor (Model 1414 build). Besides working like a webcam, the Kinect can scan its visible scope and produce a depth map of everything in its sight, and that is the main feature we made use of. During the conceptual development of the project, one of the questions of interactivity was whether we could capture point data from the sensor, so that we could interact with the exact position of the subject and one or more of the subject's extremities, such as the arms, head or legs. Unfortunately, the Kinect model we had access to could not deliver this point data properly on any of the devices we own, namely two Windows 10 PCs and one MacBook Pro. This limitation led the project to develop entirely around the idea of interacting with the user through the user's image. Thus, my first trials at building this interactivity in TouchDesigner started with the webcam of my computer.
Before getting into the development trials, I think it is necessary to give a brief outline of what TouchDesigner is and what it is capable of. Derivative's TouchDesigner is a node-based application that carries its own library of components, capable of generating, controlling, compositing and rendering sets of geometry or images in real time. This simple definition is of course not enough to summarize everything TouchDesigner can do. From building UI systems to running real-time interactive visuals in large venues and exhibitions, it is an industry-standard program used in many large-scale projects and live performances. It is currently at version 099, for which a non-commercial license can be obtained for free from Derivative's website.
Now that we have praised TouchDesigner enough, I think it is better to look at the development phases of the project we created in it. As I have already said, we decided to build the project on a Kinect interaction that transfers a real-time image of the user(s) to the system; the system then generates a differentiated and enriched imagery on the screen, which is fed back to the user. After a few nights of roaming around TouchDesigner, I was able to generate a wireframe portrait of the spectator of the system, namely whoever was in front of my laptop webcam. The node map of such a system basically looked like this:
![Webcam-generated basic tracing system built in TouchDesigner]()
In this network, I first brought in a live camera image from my webcam using a Video Device In TOP (Texture Operator). This camera image was then filtered and keyed to give a better outline of the subject in front of the camera, for which I needed as flat and clean a background as possible. To clear out the background I used a Chroma Key TOP, setting its parameters manually each time I ran the system, because every scene had its own light intensities, background and color settings. After the Chroma Key stage, I transferred the resulting image into a Trace SOP (Surface Operator), which traces the keyed image in real time and generates a set of points in 2D space, recreating a mathematical equivalent of the camera image. This was the crucial part: transferring the camera image into a Geometry COMP (Component), rendering it with a Wireframe material and getting a real-time output from the system. The Trace SOP is significant at every step of our project, because it is where the pixel-based camera image is converted into geometrical information.
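For readers who prefer code to node maps, here is a minimal sketch of how such a chain could be wired up from TouchDesigner's Python API rather than by hand. The operator class names follow TouchDesigner's standard naming, and the Trace SOP's image-reference parameter name is an assumption, so treat it as a sketch of the idea rather than the project's actual file:

```python
# Sketch: webcam -> chroma key -> trace, built programmatically.
n = parent()

cam    = n.create(videodeviceinTOP, 'cam1')     # live webcam feed
chroma = n.create(chromakeyTOP,     'chroma1')  # key out the flat background
trace  = n.create(traceSOP,         'trace1')   # pixels -> 2D geometry

chroma.inputConnectors[0].connect(cam)
trace.par.top = chroma.name   # point the Trace SOP at the keyed image (parameter name assumed)

# The traced SOP is then brought into a Geometry COMP, given a Wireframe MAT,
# and rendered with a Camera COMP and a Render TOP, as described above.
```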
I cannot say that the development of the project exactly coincided with my process of learning to program in TouchDesigner. Because the platform has numerous components, each of which can shape a project in tremendous ways, I first tried to establish a grounding in how to generate custom images, how to feed them into a Geometry COMP and edit them to my needs, and how to control certain parameters interactively. In other words, I had to learn to write cross-referencing Python code in TouchDesigner. Python is the main language for writing expressions in TouchDesigner: an operator and its parameters can be addressed and used to control certain parameters of another operator. During this stage I also learned how to develop an audio analysis tool, which at first we did not think of as an interactive part of our system, but which I later integrated into what I had built.
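To give a flavor of what these cross-referencing expressions look like, here are a few typical one-liners of the kind typed into parameter fields; the operator names are placeholders rather than the project's actual network:

```python
op('audio_level')['chan1']   # current value of a channel in another CHOP
op('level1').par.opacity     # read another operator's parameter directly
absTime.seconds % 10         # built-in clock, handy for slow autonomous changes
```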
Once I more or less had an idea of how image generation with interactive parameters happens in TouchDesigner, I went on to implement our project's requirements. First, I had to think about creating a clear posture of the user of the system. Using the webcam, as I said before, was not an option, because no matter how clean the background was, we were planning to build our space in a dark room, and it is impossible for a webcam or the Kinect's color camera to capture anything in so little light. Therefore, I moved on to another feature of the Kinect: its infrared camera. Through the infrared camera, the Kinect can generate a depth image of its sensible scope. The resulting depth map looked like this in TouchDesigner:
This was something I could work with! Since the Kinect only senses the bulk of an object and creates a solid fill according to that object's distance from the sensor, I was easily able to trace the image into one clear wireframe and transfer it directly into the Geometry COMP, after several transformations to fill the screen properly. These transformations consisted of scaling the image slightly larger and dragging it upwards, so that most of the sensed object stayed in the scene.
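As a rough illustration of this step, the snippet below selects the depth image on a Kinect TOP and nudges the geometry's transform; the names `kinect1` and `geo1` and the `depth` menu token are assumptions, not the project's exact setup:

```python
kin = op('kinect1')
kin.par.image = 'depth'   # use the depth map rather than the color camera (menu token assumed)

geo = op('geo1')          # Geometry COMP holding the traced wireframe
geo.par.sx = 1.2          # scale slightly larger than the raw trace
geo.par.sy = 1.2
geo.par.ty = 0.3          # drag it upwards so most of the body stays in the scene
```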
After I had successfully created a posture-wireframe image of the subject on the screen, there were basically two more implementations needed to generate the image we desired. One was an ever-changing coloring of the wireframe; the second, and more important, was a feedback mechanism within the system, so that the generated images would not disappear instantly but remain visible on the screen a little longer. To achieve this, inspired by a mixture of different methods, I developed a setup that transfers the rendered image into a Composite TOP and re-wires the Composite TOP with a Feedback TOP and a Level TOP in a loop.
![A feedback network to generate and gradually disappear the rendered geometry in TouchDesigner]()
Consequently, the generated images are stored in the Composite TOP after each iteration through the Feedback TOP, while the Level TOP controls how they disappear, with slightly tweaked Pre & Post parameters applied to the generated images.
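In Python terms, the loop boils down to two parameter settings; the operator names below are placeholders for the network in the figure, and opacity stands in for the Pre & Post tweaks actually used:

```python
# Render TOP -> Composite TOP <- Feedback TOP -> Level TOP -> back into the Composite.
op('feedback1').par.top = 'comp1'   # the Feedback TOP re-samples the composited
                                    # result on every frame (Target TOP parameter)
op('level1').par.opacity = 0.97     # anything below 1.0 makes older frames decay;
                                    # the project tweaked the Pre & Post pages instead,
                                    # opacity is simply the most direct knob
```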
The color part was easy to implement. The Geometry COMP can be materialized by any of the MAT (Material) operators, which can in turn be driven by the RGBA color parameters of any color-generating component in TouchDesigner. So I generated a Noise TOP with dimensions of 1x1 pixel, so that it produces only one color at any given time. I then colorized a Constant MAT from a Null TOP placed after that noise channel, and applied the Constant material to the geometry to colorize and solidify the generated wireframe in the Geometry COMP. After all of these operations, the resulting rendered image was something we were satisfied with.
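For illustration, these are the kinds of expressions one would type into the Constant MAT's Color R/G/B parameter fields to pick up the single pixel of the 1x1 noise, read here through an assumed Null TOP called `null_color`:

```python
op('null_color').sample(x=0, y=0)[0]   # red
op('null_color').sample(x=0, y=0)[1]   # green
op('null_color').sample(x=0, y=0)[2]   # blue
```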
![Whole network structure of the second part of the experience: The Feedback Freak]()
![Still from the Feedback Freak phase]()
In pursuit of taking the whole project to a more interactive level, I thought of implementing another reactive mechanism in the system, and that was the point when my previously built audio analysis tool came to mind. With this tool I could capture the sound of the environment through my laptop's built-in microphone and convert it into a single-channel numeric value oscillating between 0 and 1. Silence produced a value very close to 0, and the more sound the environment generated, the closer the value got to 1. For very loud noises, it could reach values around 1.4.
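A rough sketch of such an analysis chain, built from standard CHOP classes, might look like this; the `rms` menu token and the operator names are assumptions rather than the project's exact network:

```python
n = parent()

mic     = n.create(audiodeviceinCHOP, 'mic1')      # laptop's built-in microphone
analyze = n.create(analyzeCHOP,       'analyze1')  # collapse the samples to one value
smooth  = n.create(lagCHOP,           'lag1')      # smooth out frame-to-frame jitter

analyze.inputConnectors[0].connect(mic)
smooth.inputConnectors[0].connect(analyze)
analyze.par.function = 'rms'   # RMS power of the incoming audio (menu token assumed)
```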
"So," I thought, "how can I make use of that sound parameter?" There were several basic options, such as controlling the magnitude or displacement of an image on the screen with the current sound level, but that seemed too simplistic at first thought, so I immediately abandoned the idea. I needed something a little more complicated and responsive. So I built the following system:
![Whole network structure of the Presence on Sound phase]()
![Screenshot of the network part in which the sound level controls the crossfade rate between the noise image and the Kinect image]()
To make this experiment more elaborate, I again made use of a feedback system, which controls how quickly the dots dissolve into noisy particles. As a result, the user is not immediately disappointed by their image vanishing from the screen the moment they make a tiny noise; they can watch it slowly disperse into sparse dots.
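In practice the crossfade reduces to a single expression in the crossfading operator's index parameter; `audio_level` is an assumed name for a Null CHOP holding the sound value:

```python
min(float(op('audio_level')['chan1']), 1.0)   # clamp the ~1.4 peaks so the fade
                                              # never overshoots the second input
```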
![Still from the Presence on Sound phase]()
After I had built these two independent systems of interaction, I spent my last days before the exhibition creating a switching algorithm between them. Once I had integrated all of the nodes into a single file, I connected the two different outputs to a Switch TOP, which works like a Cross CHOP: with an index value between 0 and 1, it chooses which TOP to display. To control the index value, I again thought of interacting with the user and tried to reuse the sound-level value. However, it was not viable to make the user pass into the other experience only when they were making a sound. I considered a logic-and-trigger mechanism that would switch to the other screen only once a certain sound level was reached, but I could not solve the necessary triggering mechanism in TouchDesigner within two days. Therefore, I chose the easy way: within an Animation COMP, I created a keyframed timeline that drives a single channel value between 0 and 1 over a looping timeline of 12,660 frames, or roughly 3.5 minutes at 60 fps. Every 3.5 minutes, the whole system displays the first and second systems one after the other, with crossfades between them.
![The animation keyframe window]()
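As a hedged sketch, the switcher amounts to reading the Animation COMP's looping channel into the Switch TOP's Index parameter; the channel name `index` and the intermediate Null CHOP `null_index` are assumptions about the network:

```python
# Expression typed into the Switch TOP's Index parameter,
# reading the Animation COMP's output through a Null CHOP:
op('null_index')['index']

# Or, setting it from a script while testing scene changes by hand:
op('switch1').par.index = 1   # 0 = first system, 1 = second system
```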
It is impossible to think of an immersive system without sound, right? I was aware of that, too, and I spent the whole semester listening to my Spotify lists while imagining how I could create an aural atmosphere that fit perfectly into the experience we had created. At first I thought of connecting the visual feedback effect to the delay parameter of a certain sound in the system, so that an increasing number of wireframes would create an increasing delay feedback in the audio. Disappointingly, I could not set up a sound interaction and generation system before the exhibition date. So, for the exhibition opening, I created a 3-minute track, 1.5 minutes of which are occupied by the ambient sound of a jungle, which characterized the sparse noisy dots on the screen and let viewers add their own sound and interact with the dots. At the exact moment the system changed to the second scene, I faded in a solo flute track to create a continuous, esoteric and cosmic feeling, so that the continuously appearing wireframes would feel transcendent. Unfortunately, it did not work well. After several loops I realized I needed a more energetic sound with rhythmic elements, so that people could dance and track their movements on the screen with the feeling of painting the screen in sync with the music. I put together a playlist of psychedelic, rhythmic tracks and received a very good response. Some attendees reported that they did not want to leave the system, that they "can stay in there until the morning" and "want a copy of that in their home." I noticed that the average time spent in the experience was about 10 minutes, with outliers ranging from 3 minutes (one loop) to 45 minutes (some children, and people who went wild with the visuals and the music). Some users also reported phosphene-like closed-eye re-visions of the experience after leaving the space.
Obviously the experience gained more meaning with rhythmic music playing along. I therefore thought of bringing the system into a performance hall or nightclub setting, projecting it onto one or more of the walls and creating a space for the audience to interact with it. I think the Kinect's limited sensing range is a lucky coincidence in this setup, because we would probably not want the entire audience interacting with the system at once; that would be pure chaos. However, positioning the Kinect sensibly and capturing five to eight audience members at a time would, I think, generate a great visual to project onto the walls or surfaces of a concert or nightclub. In fact, I am thinking of developing this system for one of the live performances of my own band, Ikaru, which we already consider an "audiovisual" project, out of my pure motivation to eliminate the borders between the aural and visual perceptual spaces.
Credits:
Creative Coding, Programming, Execution: Ali Bozkurt
Concept Development: Ali Bozkurt, Sena Çelebi, Sena Örücü
Self was made possible at Bilkent University's Faculty of Art, Design and Architecture and was presented as a term project for the GRA502 Graduate Studio II course.
Special thanks to Andreas Treske for his valuable motivation and thoughts throughout the process, and to Matthew Ragan for his remarkable online documentation on TouchDesigner, as well as for the inspiration for the opening sentences of this blog post.