Architectural Form Exploration III - Sound Fueling Form / by Andreas Kopriva

Hello everyone and welcome to the latest installment of Architectural Form Exploration. This time I'll be introducing a side-study I carried out during the second term of my first year.

As mentioned in the previous post, the exercise revolved around extracting data to fuel structure and form generation, inevitably leading to the construction of a pavilion of sorts. That will be covered in the next post of this series, but here I'm going to cover a small diversion I took while exploring the concept of the universality of data.


I had set up two cameras to record the new motion I had settled on (having abandoned the stroboscopic movements I initially experimented with) and extracted a series of stills from the feeds. An unintended side effect of this capturing method was that I ended up recording sound as well.

This sound was essentially some rustling of clothing along with the thumps associated with my clumsy jumping maneuvers. While selecting the images to use for my main analyses, I noticed the shape of the captured audio and saw that, much like the move itself (jump-landing-pause-jump-landing-end), it had a series of spikes in activity.

At this point I was also exploring the concept of reactive architecture, i.e. structures that respond to someone moving through or around them. So I decided to carry out a little experiment on how this might be facilitated using the sound generated by the recorded movement.

To begin with, I imported the sound into Audition where I had a few more analytical tools at my disposal. 

I switched the view to a Spectral Display analysis and then to a Pitch Display mode. With the pitch references visible on the right-hand side, I imported a screen grab into Photoshop, drew connecting lines from the centers of most 'activity' in the waveform, and read off the relevant pitches that were generated.

In the interest of full disclosure, I should say that I approximated these values, as ascertaining a true pitch from such a random collection of sounds was quite difficult.
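For what it's worth, the manual pitch-reading could in principle be automated. The sketch below (entirely hypothetical, not part of my actual workflow) picks the strongest spectral peak from a chunk of audio samples and snaps it to the nearest chromatic note:

```python
import numpy as np

# Note names relative to A, for snapping a frequency to a chromatic pitch
NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def dominant_pitch(samples, sample_rate):
    """Return (note_name, frequency_hz) of the strongest spectral peak."""
    window = np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(samples * window))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    # Ignore DC and very low rumble below 40 Hz
    valid = freqs > 40.0
    peak_freq = freqs[valid][np.argmax(spectrum[valid])]
    # Distance in semitones from A4 (440 Hz), rounded to the nearest note
    semitones = int(round(12 * np.log2(peak_freq / 440.0)))
    return NOTE_NAMES[semitones % 12], peak_freq

# Hypothetical usage: a pure 440 Hz sine should come back as an "A"
sr = 44100
t = np.arange(sr) / sr
note, freq = dominant_pitch(np.sin(2 * np.pi * 440.0 * t), sr)
```

For a noisy recording of thumps and rustling this would, of course, only ever give a rough answer, which is exactly why I eyeballed the values instead.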

Having made this selection of pitches, I decided to compose a little melody based on the approximate rhythmic cadence of my inelegant stumbling around (i.e. my analysed move) and drafted the following little composition:

Armed with this cacophony, I loaded up C4D and carried out a few tests using simple geometric shapes and the Sound Effector. This generated the interesting-looking shape illustrated below:
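For the curious, the core of what such an effector samples can be sketched very simply. The toy function below (my own rough approximation, not C4D's actual implementation) reduces an audio buffer to a per-animation-frame loudness envelope, which could then drive the scale or position of each clone:

```python
import numpy as np

def frame_envelope(samples, sample_rate, fps=30):
    """Per-animation-frame RMS amplitude, normalized to 0..1.

    Splits the audio into one chunk per frame and measures its RMS
    level - roughly the value a sound-driven effector would sample
    each frame to displace geometry.
    """
    hop = sample_rate // fps
    n_frames = len(samples) // hop
    env = np.array([np.sqrt(np.mean(samples[i * hop:(i + 1) * hop] ** 2))
                    for i in range(n_frames)])
    return env / env.max() if env.max() > 0 else env

# Hypothetical usage: one second of a steady 440 Hz tone gives a flat envelope
sr = 44100
t = np.arange(sr) / sr
env = frame_envelope(np.sin(2 * np.pi * 440.0 * t), sr)
```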

Having carried out this proof of concept, I went back to the idea of creating a simpler structure: something that could feasibly be built without too much effort and that made some mechanical sense (the shape above relies on magically deforming materials triggered by an input sound, which would be quite challenging to construct... at least given my lacklustre technical capabilities). I therefore opted to keep it fairly simple, settling on a series of plates connected to pistons that react to an input sound.

In a real-life scenario I imagined that such a construction could be linked to a series of microphones set around the structure, connected in turn to a simple processor that would feed each piston the instructions required to react analogously to the incoming sound. I later found a clip from the MIT Architecture lab which shows off the concept very beautifully (around the 11-minute mark): in their example, multiple panels appear to be connected to a variety of sensors (including depth-sensing technology, probably a modified Kinect or something similar) which cause the panels to react to a variety of input information.
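The microphone-to-piston mapping I had in mind could be sketched as something like the following (all numbers and names are hypothetical; a real controller would obviously need smoothing and safety limits):

```python
def piston_heights(mic_levels, max_extension_cm=30.0, noise_floor=0.05):
    """Map normalized microphone amplitudes (0..1) to piston extensions.

    Each plate's piston extends proportionally to the level picked up by
    its nearest microphone; readings below the noise floor are ignored so
    the structure stays at rest in a quiet room.
    """
    heights = []
    for level in mic_levels:
        level = min(max(level, 0.0), 1.0)  # clamp to the valid range
        if level < noise_floor:
            heights.append(0.0)
        else:
            # rescale (noise_floor..1) to (0..max_extension_cm)
            heights.append((level - noise_floor) / (1.0 - noise_floor)
                           * max_extension_cm)
    return heights
```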

I was hoping to explore this idea further, perhaps even building a small scale model illustrating it, but considering the time constraints in place, my lack of any real mechanical engineering knowledge and the fact that this was a diversionary endeavor fueled by a brief spark of curiosity, I decided to settle on a Sound Effector-driven approximation of the concept at this stage:

Following the production of the above visualization, I expanded on the concept a little further before finally laying it to rest. This latest interpretation revolved around a new translation from musical notes to shapes, facilitated by exploring the idea of cymatics.


Cymatics, according to the Wikipedia entry, is a subset of modal vibrational phenomena. Basically, the surface of a plate, membrane or diaphragm is coated with particles, paste or liquid and then exposed to different vibrations which, in turn, cause different patterns to emerge. 

Though the concept is old (the term was coined by Hans Jenny in 1967), it has become quite relevant these days through the following popular video, which serves as an exemplary illustration of the principle:
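For the mathematically inclined, the classic square-plate patterns can be approximated with the idealized standing-wave solution for a plate with free edges. The sketch below is my own rough approximation (not how the diagrams I used were produced): it marks the nodal regions, where the vibration amplitude is near zero and sand would accumulate:

```python
import numpy as np

def chladni_pattern(n, m, resolution=200, threshold=0.02):
    """Approximate the nodal lines of a square Chladni plate for mode (n, m).

    Uses the idealized free-edge standing-wave superposition
    cos(n*pi*x)*cos(m*pi*y) - cos(m*pi*x)*cos(n*pi*y).
    Returns a boolean grid: True where particles would accumulate.
    """
    x = np.linspace(0.0, 1.0, resolution)
    xx, yy = np.meshgrid(x, x)
    amplitude = (np.cos(n * np.pi * xx) * np.cos(m * np.pi * yy)
                 - np.cos(m * np.pi * xx) * np.cos(n * np.pi * yy))
    return np.abs(amplitude) < threshold

# Hypothetical usage: one mode's pattern as a boolean mask
pattern = chladni_pattern(3, 5)
```

Different (n, m) pairs give different patterns, which is the discrete analogue of different pitches producing different figures on a vibrating plate.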

Having briefly looked into this field of study, and still having my cacophonous composition at hand, I decided to see whether I could come full circle and transform this data back into a physical form. A quick Google search led to the discovery of a pitch-based cymatic diagram sheet, which I used to create 3D slices of all 12 chromatic keys.

These were generated fairly simply: I imported the diagram into C4D, drew splines around each shape and extruded them to get three-dimensional volumes. Armed with these building blocks, I laid out the melody I had extracted from the pitch display analysis and proceeded to stack the slices sequentially to create the cymatic pillar below:

To summarize everything, because at this point I myself feel a tiny bit lost:

  1. An arbitrary movement was carried out in space: jumping from a standing position to a crouching position in the middle of the room - with a slight deviation in Z space - and jumping back to the original plane in the other direction
  2. The sound generated from this movement was extracted from the recorded video and translated (approximately) into a melodic piece, with some liberties being taken to ensure cohesion
  3. The pitches comprising that melody were then translated to their cymatic pattern equivalents using a found diagram showing pitch patterns 
  4. These patterns were modelled in 3d space and stacked sequentially into the pillar illustrated above 
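Steps 3 and 4, reduced to their essence, amount to a lookup-and-stack operation. A toy sketch (with a made-up melody and slice height; none of this is actual C4D code) looks like this:

```python
# The 12 chromatic pitch classes, each mapped to one modelled cymatic slice
CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pillar_layout(melody, slice_height=10.0):
    """For each note of the melody, pick the matching cymatic slice
    (indexed by pitch class) and compute its stacking height."""
    layout = []
    for i, note in enumerate(melody):
        layout.append({
            "note": note,
            "slice_index": CHROMATIC.index(note),  # which of the 12 slices
            "z_offset": i * slice_height,          # stacked sequentially upward
        })
    return layout

# Hypothetical usage with a made-up four-note melody
layout = pillar_layout(["A", "C", "E", "A"])
```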

Basically, that pillar is structured based on the melodic encoding of the sound generated from the movement initially recorded. This whole journey of extracting data, transforming it and re-encoding it to create different forms was absolutely exhilarating for me and I must say that it's one of the most entertaining and thrilling explorations I have carried out on this journey exploring architecture. In fact, it's this potential of unifying multiple disciplines under the umbrella of finding form that absolutely fascinates me about this field. 

I didn't end up using this anywhere - besides making some 3d prints of my cymatic columns of course - but I may look into it in more depth in the future, when I have the luxury of time to explore this field further. 

In the next post I will outline the final form generated around a designated site, along with the considerations taken in applying an abstract form-finding methodology within the constraints of the space provided.

Hope you've enjoyed this rather long post and stay tuned for additional stuff over the next few days.