(14/14) Posenet rabbithole

As mentioned previously, my ICM final was to design a Pose-Karaoke experience. For my motivation and background on the project, please go to the post here.

While I had major issues getting my ICM code to work on the ml5js platform, much of it has since been rectified by the maintainers of the code-base, and this current example solves pretty much all the problems.

But while I was working on it, I did not have those luxuries. This forced me to understand how JavaScript works and figure out the issues myself. The main issue was that a p5.Image does not have the img HTML tag that ml5js needs to be able to run the algorithm. (Funnily, it works for video. No clue why it is done this way.) This was solved using an image tag. But the problem of taking a screenshot from the live video still remained. I soldiered on and found my redemption in the toDataURL() method.

But this was the easy part.

While starting the project, I did not realise the complexity of comparing 2 poses. A lot of what I had to do relied on being able to compare 2 images, and it wasn’t a trivial problem. Trawling through the depths of the internet, I came across this post by Google Research where they had worked on a similar problem. The post is a wealth of information on how to compare poses, and it was outside my technical ability to incorporate everything into my work. But the chief things that I could incorporate were:

1) Cosine similarity: a measure of similarity between two vectors. Basically, it measures the angle between them and returns -1 if they point in exactly opposite directions and 1 if they point in exactly the same direction. Importantly, it’s a measure of orientation, not magnitude.

2) L2 normalization: which just means scaling the vector to have a unit norm. This ensures that scale does not factor into the comparison, so the 2 images can be compared directly.

The cosine similarity helped my code run faster and the L2 normalization ensured that the relative distance from the camera won’t play a role in the comparison.
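To make this concrete, here is a rough sketch (in plain JavaScript, outside of p5) of how the two ideas combine, assuming each pose has been flattened into an array of keypoint coordinates [x0, y0, x1, y1, …] of equal length:

```javascript
// L2-normalize a vector: scale it to unit length so that how big the
// person appears in frame (i.e. distance from the camera) drops out.
function l2Normalize(v) {
  const norm = Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return v.map(x => x / norm);
}

// Cosine similarity: the dot product of the two unit vectors.
// Returns 1 for identical orientation, -1 for exactly opposite.
function cosineSimilarity(a, b) {
  const ua = l2Normalize(a);
  const ub = l2Normalize(b);
  return ua.reduce((sum, x, i) => sum + x * ub[i], 0);
}
```

The function names here are my own; the actual per-keypoint weighting described in the Google Research post is more involved, but this is the core of it.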

Getting these 2 things to work proved to be a big challenge and once that was done, the comparison went pretty smoothly as seen in the video below:

I ran out of time to build a complete experience for the users, which would involve an engaging UI, but that gives me something to do over the winter break. While I could not match the scope I had set initially, I am very happy that I could dive into algorithmic complexities and solve those issues to make something that works. This gives me a lot of hope for the future and my coding abilities. All in all, time well spent!

Understanding comics

For animation class, we were asked to read ‘Understanding Comics‘ and reflect on what we have learnt. I had read the book many years ago and it was great to pick it up and go through it all over again after so long. I had originally read it before I started design school, and I realised that I had forgotten so many aspects of the book which emerged the second time around. Here is a list of reflections from my 2nd read-through:

  • The narrative is more immediate as compared to film. While we demand narrative coherence in film because we respond to the flow of ‘time’, a comic is free because it can move through time and space in a matter of a few panels. I hypothesize that it's one of the reasons why comic book plots don’t translate well on screen, where the audience responds more to the flow of events across time rather than the space of a comic book.

  • The role of a narrator: I believe that the narration is the anchor which holds a comic together. Which is why I haven’t seen many works where the narration and visuals are at odds with each other. It would be interesting to see a narrative where the visuals and the narration start diverging and running at total odds with each other.

  • Panel-to-panel transitions: This was the biggest part of the book that I had totally forgotten. Scott McCloud does a great job at explaining the various ways in which a narrative can work across time and space using the 2-dimensional paper grid. This got me thinking about the forms that I see back home. Is there an inherent structure to how a story manifests in a mandala or on the wall of an Indian temple? Do similar rules apply? I believe that there should be, but hopefully I will find a book that talks about it in detail.

One of the biggest things that struck me while reading this book was that the constraints of the medium squeeze out the narrative style and structure. While the boxes might be seen as constraints by some, artists used them to tell their stories in unique ways that have now become representative of the medium. I wonder if there is a similar story with VR. While VR does not have any control over the user’s view-point, what are its unique constraints from which VR-only narratives will emerge?

After Effects project proposal.

For the After Effects project, I am teaming up with the mighty Cara. During our first brainstorm, I fell in love with the spaceman character that Cara had come up with, and we decided to use the character in a museum dedicated to Earth after it has been destroyed.

The plot:

An anonymous astronaut visits the Museum of Earth, the last surviving vestige of the planet destroyed in the water wars of 2050. The screens consist of short montages of what it means to be human. The animation cycles through 4-5 animations.


Pretty excited to see where this goes. After Effects is daunting AF but we can power through this.

FAB 3: Shattered, does it matter?

This week’s assignment was to make something out of acrylic using the laser cutter.

Easy peasy. I have been meaning to make a Voronoi lamp for the longest time and finally, it’s time to scratch that itch. So, off to p5 I went. Things are much easier compared to the last time I worked with Voronoi patterns, and guess what? There are libraries for that now.

After playing around with the shapes and sizes, I brought a few patterns that I liked into Illustrator. I played around with the dimensions of my box and tried to find a cross-section that won’t have very tiny shapes that might mess with the laser cutter.


Once I had that, it was time to find the material. Now, my original plan was to have 6-10 colors but looking at the costs and the availability of plastics, I brought it down to 4.


I separated the colors into individual files and off to the laser cutter I went. Cutting the pieces was pretty uneventful, except the part where I lost a nice slab of acrylic to the 75W cutter which refused to work (the cutting gods always demand a sacrifice), and within 45 minutes I had all my pieces. (Easy peasy!)


Sticking the acrylic together was a different monster altogether. The adhesive that I had was so runny that it was making my life miserable. But thankfully, Lydia got me out of a soup and loaned me her rubber cement, which made my life so much easier. And voila, within 2 hours I had a lamp!


All I need to do is find some LEDs to light it up and it shall be AMAZING!

Finals madness!

For my final in physical computing, my initial direction was to continue working with soft robotics. I wanted to explore the material more and create data-driven experiences with softness, slowness and reflection as guiding principles. I brainstormed multiple ideas and none of them felt satisfying. I was tired of being a one-trick pony with silicone and nothing really felt like it was adding up to a meaningful experience. I spent a lot of time going around in circles till I gave up and focused on everything but physical computing.

And that’s probably the best thing I did.

Not working on P.Comp gave me the time and distance to think about it, and combined with the happy coincidence of my friend Nun continuously going “I wish I could use all the buttons!”, it led me to a happy place that became the start of a promising idea for the final project.

So here’s my 5 minute pitch:

Update: Lillian and Atharva have decided to team up with me! This gives us an actual chance to make a nuanced, complex interactive model so we have updated our deck with the new, refined idea.

We were born in the 80s and were made in the 90s.

It was a great time to be alive. The music was the best.

The cartoons were definitely the best.

The technology was clunky.

But with the physical controls, it was so satisfying.

It was intimidating to approach.

But when you got it, it became a part of your muscles.

And the sounds really brought them to life.

And we all remember clambering to rooftop antennas to fix a TV signal.

But the 90s also had something awesome: insanely obtuse point-and-click adventure games!

Where the instructions were minimal and the user had to play around with the interface to discover the path ahead. (In the image above, you have to throw a bone so that the fire beaver on top jumps from the ledge and then quickly pull out your fire extinguisher on it. Once the flame goes out, you can collect a key that opens a gate. Phew!)

So, we want to build a machine where the instructions are obtuse and the users have to play around with the interface to figure out how to make the machine work.

You walk up to a machine that has a button, knob and dial garden on its face. There’s a single bulb blinking. What will you do?

Fab 2: This is a drill. Repeat.

For the 2nd fabrication assignment, we had to make 5 repeating shapes of the same dimensions. I was still hung up on making lamps and I wanted to combine wood and silicone in such a way that the 2 materials are interlocked with each other. My idea was to split the wood into multiple sections and then fill silicone between them, as you can see in the figure below (the black area is the silicone):


I wanted to illuminate it from the bottom and the angled cuts appealed to me more. I came up with the straight cut option as a backup plan (2 months in ITP has taught me that at least!) and set off on my merry way.

I found a piece of squarish wood from the shop spring cleaning that I cut into wooden blocks using the miter saw.

The next task was to create equal blocks which was achieved using a pencil, ruler, miter saw and the sander.


I forgot to clamp the first piece and lost the whole piece as it flew away and smashed on the wall. Never forget to clamp the wood, kids!

Thankfully, I had extra wood and the breakage proved to be a minor inconvenience.


I traced the diagonal cut shape and went at it with a band saw and sander. I got the shape I wanted but problems were immediately apparent:

1) The sander eats through tiny pieces of wood. The piece in the middle-right became smaller than the rest in no time.

2) With the diagonal cut, it becomes very hard to keep track of the perpendicular surfaces and of each piece's position relative to the others.


In the interest of time, I decided to go with the straight cuts and eliminate the smaller pieces once they were cut.

Measure and draw clearly marked go/no-go lines.


Put your trust and full attention in the band saw and sander.


Cut small dowels and glue them in.


Wait for a few hours and voila! You have your wood shape.


I did not get any time to pour and cure the silicone but that’s next on the item list as soon as the class is done!


A few lessons learnt:

1) The first prototype sucks. ALWAYS. Everything that can go wrong will go wrong. In hindsight, I should have made one first and then made the others.

2) The hardness of wood varies so much that it’s not always possible to predict its actual behavior on the cutter and the sander from the sketch.


4) The sander eats into wood fast and measurements go for a toss. Hand sandpaper is slower but gives you so much more control.

Bonus picture of leftovers:

(9/14) Posenet adventures.

It was quite fortuitous that this week's project focused on machine learning just as our final projects were due. I have always been quite wary of using anything that involves machine learning, as I don't think I have enough programming chops, but ml5js laid that to rest. Picking it up was easy and getting posenet to work was trivial. It gave me the confidence to approach these algorithms and start thinking along those lines.

Back to my concept.

I have always wanted to work with the body as an input (since I don’t move so well in real life). Until now, most of the work related to body, gait and posture has been done with a Kinect or a specialized camera that is currently beyond my technical ability. But posenet makes body-tracking so trivial that I realised I could work with it.

A posture can be so iconic that people are defined by it.


And who hasn’t done this!


So I want to take the energy and goofy fun of dance games and karaoke and combine it with posture and movement.

My idea: People are shown famous postures one after the other and they are supposed to mimic them. The computer compares their posture to the original and scores them on that basis.


To get it working, I realised that I had to work with ratios and angles instead of absolute numbers. My exploration with the posenet library was to use it to calculate angles between various body parts, but I was thrown off by the vector method in p5 which refused to work for me. So, here is a sketch of my failure:


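For what it's worth, the angle calculation itself doesn't strictly need p5's vector methods; plain Math.atan2 can do the job. A hedged sketch (jointAngle is my own hypothetical helper, assuming PoseNet-style {x, y} keypoint positions):

```javascript
// The angle at joint B formed by keypoints A-B-C (e.g.
// shoulder-elbow-wrist), using plain Math.atan2 instead of
// p5's vector methods.
function jointAngle(a, b, c) {
  const angleAB = Math.atan2(a.y - b.y, a.x - b.x); // direction B -> A
  const angleCB = Math.atan2(c.y - b.y, c.x - b.x); // direction B -> C
  let angle = Math.abs(angleAB - angleCB);
  if (angle > Math.PI) angle = 2 * Math.PI - angle; // fold into [0, π]
  return angle; // radians; independent of scale and position in frame
}
```

Because it is an angle, it is already independent of how far the person stands from the camera, which is exactly what the ratios-and-angles approach needs.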
Fab 1: Welcome to the machine

For our first “Intro to Fabrication” class assignment, we had to make a light-switch. The rules were pretty simple:

  • It should be portable.

  • It should create light.

I went through all the other weeks’ assignments and realised that this was going to be the most open-ended assignment we have in Fabrication. So instead of working with wood, acrylic or any other material that is going to be used in the class, I wanted to use something that I would not be able to use again. I had some silicone lying around after my mid-term adventure and I was like “Why not?”. What I wanted to do was make a silicone shape with no exposed electronics that lights up once you touch or squeeze it.

As a first step, I cooked up a batch of silicone and added acrylic color to it to create a batch of colored silicone.

All the materials in the picture above were found in the junk shop. YASS.

Adding the color to the silicone made it cure faster. Weird, but I wasn’t complaining. Some Thai ITP friends pointed out that it looked like a Thai sweet called Khanom chan. The next step was to figure out how to cut it, and that’s where I made a big, big mistake. Cutting silicone with a cold razor was a bad idea and I completely messed up the cuts.

Physically hurts to see this disaster of a cut. Ugh.

Anyways, I soldiered on and cut a small air-pocket inside to fit the LEDs, battery and wire. The LEDs were packed on top of a battery, and I made a copper contact that hovers over the battery and makes a connection when you squeeze the top. With some trial and error, I got it working, slathered on some fresh silicone, put it in a clamp and prayed.

4 hours later, it worked! Here’s the video:

Woohoo! Pretty satisfying except the shitty, shitty cuts. Oh well, onto the next one!

Mid-term fever.

For the midterms, Jeff did a thing with his randomizer bot (Pay no attention to the man behind the curtain! ) and I was paired with the mighty Shu-Ju!

For our first meeting, we spoke about what we liked, enjoyed and wanted for our project. We both agreed that we needed to make something which involved some fabrication, as we had not done any of it till that point. We also spoke of our love for TeamLab and how we enjoy data turning into art. But for Halloween, we agreed (it was a pretty agreeable meeting!) that we wanted to make something highly interactive that involved multiple people playing together. (It’s a people festival, after all!) One of the ideas that emerged was a wand-duel game recreating the Harry Potter-Voldemort face-off in Goblet of Fire. It would look something like this:


Two players would grasp the 2 ends and the middle section would fill up with light, getting more intense through vibration and the brightness of the LEDs. The first person to let go of the wand loses!

We decided to go our separate ways, think of more 2-, 3- or multiplayer games, and meet again in a couple of days. Once we came back together and started finalizing the idea, we threw ideas at each other but nothing seemed to stick. Shu-ju mentioned how everyone was doing scary things and it would be fun to just do a happy candy dispenser. We left the idea behind and started thinking of something else to do.


During a break in our brainstorming session, we started talking about weird and funny interactive objects and Shu-ju showed me this video: https://vimeo.com/52555492. We were both laughing through the whole thing and realised that we could actually incorporate softness into our physical computing project, and the idea of a candy dispenser that needed to be played with emerged.

We thought of a candy machine which dispenses candy only if you press it nicely. If you press it too hard, it gets cross, shrivels back and refuses to dispense any candy.

(It’s quite interesting to see how the final idea did not veer too much from the first sketches.)

Now that we had an initial concept in mind, we quickly realised that the sensor was the hard part. One idea was to put a force sensor inside a soft material and call it a day, but we were not very happy with it. It felt like a cheap way out and we spent a day without moving ahead. Thankfully, Shu-ju was sitting next to one of the residents, Lola, who had worked with silicone and used air pressure to measure squeeze intensity in her work on soft interfaces. (Wish, universe, law of attraction, yada yada.) We quickly booked an office hour with Lola and she gave us everything on a platter: what we needed to do, how to do it, plenty of encouragement and a loan of her sensors so that we could get started asap. Since we were still figuring out how to work with silicone, we raided the junk shop and found a discarded silicone tube. The thing was gunky, dirty and full of holes, but after a bad repair job with some hot silicone, we had a dirty sensor which looked like a dismembered finger (hey! it’s the flavor of the season) but worked well enough for us to start coding.


Looks so gross, works so fine!

Lighting the LEDs went on without a hitch, and using a MAX7219 IC reduced the number of wires into the Arduino, freeing up pins for everything else.


This was trivial!

We were feeling pretty good about our progress: our temporary sensor and LED array worked, and it let us program the logic of the candy dispenser based on the difference in pressure read through the air pressure transducer (MPX5010 for life!). So we started our foray into making the actual air pressure sensor. We first 3D-printed a shape and then fashioned a cover to create the negative space for the air-pocket.

Shu-ju bubbling up some silicone tea! (Notice the upturned lid? See how big it is? That caused the first skin to be fragile and a total loss.)



We printed a total of 3 times as we kept getting the wall thickness wrong. Each failed silicone experiment set us back by 12 hours (considering the 3D-printing and curing time) and we lost a lot of time, but we finally got it right and it felt so nice to squish it!

Once we had everything, Shu-ju quickly put together an enclosure and I got to work on the final coding and assembly. We had thought the structure through during the never-ending wait for our silicone sensor, so we really didn’t face any issues while putting it together. (Touchwood!)

The sound, the mechanism for the candy dispenser and everything else fell together really well, and we thought we were at the finish line, home and dry!

Or so we thought…

Because Murphy came visiting at the last moment: the nose broke along with the sensor, ruining the circuit (broken pins and all!), and we had to hastily put it back together right before the final presentation (panic and duct tape are a match made in heaven!). The thing worked, but the sound wouldn’t, and it was finicky.

Here are a couple of videos which show it working (without sound):

The project went surprisingly smoothly except for the last-minute hiccup. It was a blast working with Shu-Ju and I would love to do it again soon. We have ordered all the materials we need for repair, so I expect the poor box to live again soon. In hindsight, we should have started fabrication a bit earlier to prevent the last-minute rush job and having to rely on a functional form we didn’t have time to think through. Overall, I was very happy with what we had and it was a great start to ITP. Onto the finals, and happy festival season!

(8/14) Sound+video: Unforeseen circumstances

For this week’s assignment, we were asked to go through the video and sound lectures and then form a cohesive project around them. The videos went pretty smoothly this time (I think I prefer the visual aspect of programming over the DOMs and APIs), so I was feeling pretty comfortable working on the assignment.

As a start, I thought of 2 ideas:

  1. Taking a video and playing it from the end to the start while the sound plays normally. Think Coldplay’s ‘The Scientist’ playing back to front while the audio plays normally. (It would make a boring video but would have been cool to code.)

  2. Slice the screen horizontally into 8 parts. Each slice plays the same video with a time offset.


Pretty simple, right? This is where I ran into the first problem: there is no way to play a video backwards in p5.js, so my intended result for idea 1 was out. The other
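For idea 2, the bookkeeping boils down to computing what moment in the clip each slice should currently display. A sketch of that logic (sliceTimes is a hypothetical helper; in an actual p5 sketch you would likely need one video element per slice and would seek each one with video.time(t)):

```javascript
// Given the current playback time, clip length, number of horizontal
// slices, and the delay between slices (all in seconds), return the
// time each slice should be showing. Each slice lags the previous one
// by `delay`, wrapping around at the end of the clip.
function sliceTimes(currentTime, clipLength, numSlices, delay) {
  const times = [];
  for (let i = 0; i < numSlices; i++) {
    // double-modulo keeps the result positive even before the
    // clip has played long enough for slice i to have content
    const t = (((currentTime - i * delay) % clipLength) + clipLength) % clipLength;
    times.push(t);
  }
  return times;
}
```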

Physical computing: Analog/Digital

My first month in the USA and I already fell ill. Not the greatest start to school, and I was horribly behind on all the videos and assignments. But gradually, I managed to dig myself out of that hole and here is my combined blog on weeks 3/4 (the digital and analog projects) for physical computing.

As I was very short on time, I did not go with a big concept but chose to focus on making something which demonstrated my learning of the topics for week 3 & 4.

The project that I chose to work on was:

3 Buttons.

3 LEDs.

If you press the buttons in the correct order one after the other, the LEDs light up together.

If you don’t, the middle one lights up (Literally showing you the middle finger).

To make things more fun, I added a motor which rotates a full 90 degrees when the buttons are pressed in the correct order and gives a small shake when they aren't.

I did not run into major issues with the circuits after following the lab videos, but the code to count the correct order of buttons was a bit tricky. My final code is a jumble of if/else statements but I think there is a more elegant way to do this. I shall speak to some residents and see if it can be done in a better way.
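One common cleanup (sketched here in JavaScript, since my actual Arduino code isn't shown; the same logic ports to C almost line for line) is to treat the sequence check as a tiny state machine instead of nested if/else:

```javascript
// The secret order of the 3 buttons (hypothetical; use your own).
const expected = [0, 1, 2];
// How many correct presses in a row we've seen so far.
let progress = 0;

// Call this whenever a button press is detected.
// Returns "win", "fail", or "continue".
function handlePress(button) {
  if (button === expected[progress]) {
    progress++;
    if (progress === expected.length) {
      progress = 0;
      return "win";   // light all 3 LEDs, rotate the motor 90 degrees
    }
    return "continue";
  }
  progress = 0;
  return "fail";      // light the middle LED, shake the motor
}
```

One variable tracks the whole state, so adding a 4th button only means extending `expected`.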

I did not get time to try out the speaker and tone (also, I did not have any speakers with me) but I am glad that I am not as far behind everyone as I was 1.5 weeks ago.

Onward and upward!

Currently listening: Drive-Incubus

Physical Computing: Let the games begin!

For the first assignment of physical computing, we were asked to interpret a switch and come up with a creative version of it. My first reaction was to make some kind of Rube Goldberg-like contraption, but as all my tools and kit were still back in India (stupid US shipping times! 😠) I had to scale down my idea.

While chewing on the topic to come up with ideas, I started playing a mobile game (the shitty kind whose name I shall refuse to take out of total embarrassment!) and remembered the Oasis puzzle in Legend of Zelda: The Wind Waker. That puzzle was emblematic of the countless hours I have invested in computer games finding buttons and switches, and I thought it would be a good start to my physical computing journey.

Slipping & sliding

So, armed with enthusiasm, I first went and got a pattern, cut it out, stuck it on cardboard, and stuck silver foil below the cardboard so that the pieces line up when placed in the correct pattern.


Did this work? Yeah, kinda, sorta. The pieces slid properly once or twice before the silver foil below ripped off and it became unworkable. Thankfully, I managed to get a picture before that happened.

I was really satisfied with the idea and the first prototype. I just wish it had worked a bit longer. I think it will make an interesting fabrication project and I would like to take this ahead when Intro to Fabrication starts next month.

Looking forward to the next one!

Currently listening: Switch-Will Smith


And it’s a wrap.

Go listen to the sound-walk first before you read ahead.

Ok done? Now let’s go.

Continuing from where we left off, our project went pretty smoothly in creating a consistent soundscape even though we were working on separate sections. We thought we were home and dry before Mr. Murphy showed up: one of our project files was saved at 44.1kHz and the other two were at 48kHz. We ended up exporting one file as a separate audio file and inserting it before the other, but it did leave a bad taste in the mouth. We are still not sure of a better solution, but this worked well enough for our purposes for now.

Reflecting on the whole experience, I feel that my biggest gripe with the whole sound-walk was the ending. We didn’t really know how to end it so we invoked Deus-Red Machina and added her speech at the end (bless her soul!). It was clichéd and hokily sentimental, but it was a desperate Hail Mary to finish the assignment on time.

Another challenge was timing it right with the elevators (and other spaces which have heavy, unpredictable footfall). The same problem was common to the other groups’ sound-walks that we listened to. Providing a visual guide, giving explicit instructions and timing the speech with pauses are the workarounds most sound-walk designers use, but I believe there is quite an untapped potential in using the sensors inside a smartphone to time and trigger sound experiences just right. There is a microphone that listens, a GPS that tracks you, an accelerometer that knows your orientation and so many other sensors stuffed in there! If we are putting up with all this tracking, we might as well put it to some good use.

My favorite part was our world-building. Too many futuristic scenarios focus on pure utopias or dystopias, and our interpretation felt a lot more mundane yet believable, which I really enjoyed. And the description of the toilets! That was really inspired.

All in all, this was such a blast with 2 exceptionally lovely people that I really want to keep working with them on all assignments. There was an unused idea for a speculative futuristic sound-walk on an injectable male contraceptive and its effect on society. Hmm, stored for future reference.

Currently Listening: When the curtain falls- Greta Van Fleet

(2/14) Please insert disk to continue...

Week 2 in ICM was pretty fun and deep. The class-work focused on manipulating shapes using variables and a short introduction to transformations (Link). As a take-away assignment, we were asked to create a piece with:

  • One element controlled by the mouse.

  • One element that changes over time, independently of the mouse.

  • One element that is different every time you run the sketch.

While trying to figure out what to do, I chanced upon the clocks assignment, which sent me down a deep rabbit-hole. (Thanks Cassie!) Going through John Maeda’s work has always been educational, but the clocks piece blew me away. I also realised that it's a wonderful, self-contained assignment for visual programming newbies to test the limits of their creativity. More details can be found below:

Maeda’s 12 clocks

The JS port of the original 12 clocks by Coding Train

Golan Levin’s assignment based on clocks

Golan Levin’s INSANELY DETAILED lecture on clocks, time-keeping and its representation in new media. (It’s really worth your time to read through!)

As a first exploration, I decided to focus on capturing the current time and representing it using the simple shape creation techniques I learnt in Week 1. I got stuck trying to calculate the arc angle, but Shiffman’s coding challenge on clocks came to the rescue.

Tycho’s new album-cover. Prints available on request.

The first clock: Link
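The arc-angle mapping that tripped me up boils down to something like this (a sketch following the clock coding challenge's approach; the -90 offset is there because p5 measures angles from 3 o'clock, while a clock's zero is at 12):

```javascript
// Map the current seconds [0, 60) to an arc angle in degrees,
// with 0 seconds pointing straight up at 12 o'clock.
function secondsToAngle(s) {
  // 60 seconds span a full 360-degree circle
  return (s / 60) * 360 - 90;
}
```

In a p5 sketch you would feed `secondsToAngle(second())` (converted with `radians()` if needed) into `arc()` each frame; minutes and hours map the same way with their own ranges.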

So, now that I was confident I could capture and manipulate the time variables, I decided to go for a number representation. I was looking at Shiffman’s background fade sketch and that jittery pattern was quite interesting, so I decided to use it as a base for the fill of my number shapes.

The numbers were then drawn on a simple grid, and by manipulating the frame rate and opacity, you could achieve a nice blur effect:

Find light in the beautiful sea

I added some interactivity by making the background color change on mouse click. I did not have time to try out more effects (which, I am coming to believe, will be the leitmotif of projects at ITP) but I was pretty happy with the outcome this time. The full sketch can be found at: Link

While I am pretty happy with the lessons so far, I think I need to level up my ability to manipulate shapes using maths(!!!). I wonder if there is a “Maths for programming newbies”. Adding interactivity to shapes using coordinate-based manipulation alone is not going to get me too far.


Confidence: +3

Missions: 3/3

Secrets: 0

Currently listening: Clocks-Coldplay

Week 2: Enter the Machine

Starting a project with unknown people at random is always pretty stressful for me. As an introvert who takes time to open up, it feels like going on a first date and with a tight deadline looming, my anxiety levels were pretty much through the roof.

Which turned out to be completely unfounded. Because my teammates are FREAKING awwwwwesome. (Is that a word? It should be a word.)

In one corner is Lillian Ritchie (she was on the production team of How to Train Your Dragon 2, DAMNNIT!!!): charming, humorous and so good at instinctively slicing a project into timelines and the effort needed for it.

In the other corner is Nuntinee, who is equally pragmatic and wonderful at listening to everyone blabbering on about ideas and morphing them into a cohesive whole.

After our initial discussion, we zeroed in on doing a speculative sound walk, since ITP will move to a new building next year. We agreed to set it in an indeterminate future (where physical travel is optional), for incoming ITPers who would visit the old physical space as a symbolic ritual.

WHICH IS WHERE THE MAGIC HAPPENED. I started on a draft version which the others just riffed on, and we got the whole script down within an hour. It was a Grateful Dead concert on Google Docs!

With Lillian’s exceptional doc-and-sheets ninja skills, we had a sound task list in no time and off we went to collect the sounds! After a lot of flushing, clanging, marching, misplaced batteries, full SD cards and waiting for the ITP floor to quieten down, we had a list of sounds that we threw into Audition. Nun and Lillian took the lead and put a rough cut together, and I trundled along with my part. One of the major issues we faced was finding a narrator voice, which Lillian solved by discovering the “Normalize to -3dB” setting that gave us the perfect robotic yet human voice we needed. (And oh! Lillian’s voice-over kicked ass.)

In -3dB we trust.

Also, note to self: learn from Lillian’s planning. It’s going to come back and bite me, I am sure. Will I learn?

<Deep sigh>

Currently Listening: Truckin’-Grateful Dead

The journey to MoMA

As a part of the first assignment for the sound and video course, we were asked to go on a sound walk and write our reflections on the same.

For people reading this who have no idea what that is, the wikipedia article is a good start. Link

The ITP Sound+Video course has a nice collection of sound walks which can be found here. (If it’s not available, drop me a comment and I will send you the links.)

From the ones on the list, the walks on ‘Whale Creek’ and ‘Central Park’ caught my eye because of their locations.

Dull, grey and FULL!

But New York rains have the uncanny ability to show up when they’re most inconvenient, and the outdoor walks were off the table. Which is why I found myself on a grey, dull morning at the MoMA. (Central heating should be a human right!) The audio walk is called ‘Dust Gathering’ by artist Nina Katchadourian.

She takes the listeners on an audio journey across the MoMA through the unusual perspective of dust in the museum. Her idea of making the invisible visible caught my attention and I was looking forward to the experience which I expected to be similar to one of my favorite podcasts 99% Invisible. Add in the temptation of finally seeing a Picasso, Matisse, Monet and ‘Starry Night’ up close, I was sold!

The nuts & bolts:

The audio walk is divided into 14 audio pieces, each around 2-3 minutes long; they start on the first floor of the exhibition and end on the fifth floor.

The audio pieces are accompanied by a guide which can be found here.

My experience:

One of the things that I forgot to consider was that this audio walk was done 3 years ago, which in MoMA time is ancient history. Artifacts are moved around, new exhibits come up, and there is a general flux on the floor, which makes it pretty unlikely that the original pieces will be found in the same locations. This threw a big curve-ball toward the end of the walk. But more on that later.

The walk started on the first floor, which was milling with visitors, and finding a spot to start listening became a challenge with people moving hastily in and around you. Throw in children, groups moving together and security guards hustling you along, and it got even harder. The audio walk settled into a good groove once the first-floor talks were done, but having to look over your shoulder all the time did take away from the experience.

One of the first things she mentions is the presence of a big dust catcher below the reception floor and invites the user to touch the vents to feel the layer of dust on it.

The vent was conveniently located next to the main door and a cop stood next to it. I was not really sure that as a brown-skinned guy, I could muster up the confidence to squeeze past a cop to touch a vent. So HARD PASS. And that brought me to my first observation:

When an audio talk invites people to touch and feel the environment, what assumptions are we already making about the person listening to it? As an international student who is on-alert all the time and would wish to do nothing to stand out, what kind of an audio experience would actually make me go touch and play with an alien environment?

The audio tour is essentially trying to make the invisible visible through a series of interesting facts about dust, its role in the museum, the weird places it gathers and the unusual ways of cleaning it. This is done primarily through the author’s narration followed by interviews with museum staff. The snippets are not information-dense, and the staff interviews are clear, insightful and accompanied by a single image on the app.

This creates a situation where the listener has nothing to look at except a single picture and ends up staring into the distance while listening to the piece. The sheer amount of activity on the museum floor makes this very distracting, and I found myself zoning out even though I wanted to hear the whole thing. Through this whole experience, a bunch of questions were running through my mind:

What kind of a narrative structure works for stimulating a person’s imagination? Could it have done with a tighter narrative and soundscape (ala 99% invisible and Radiolab)? How much freedom of movement do you give to your listener? What is the appropriate soundscape for use in noisy environments?

The eyes of the ladies of The Young Ladies of Avignon speak of the horror of being drenched in someone else’s spit. FOR ETERNITY.

The peak experience was an old museum staff talking about how she regularly cleaned a Picasso with her spit.



Because the audio recording is a few years old, the museum had since shifted pieces around and the Picasso mentioned in the talk had been replaced by a completely different artist. But no matter, I went and found a Picasso, stood in front of it and imagined a frail, matronly lady going over the whole painting with her tiny brush and spit.

It does make you giggle. A LOT.

Look on my Dust, ye Mighty, and despair!

The final interesting realization: coming from a country where dust is a part of your existence, I never noticed it while I was in India. It’s only after coming to the USA and not feeling it every day that I understand what dust-free means. But I was wondering if anyone else would notice its presence.

Which led me to my magical moment during the audio walk.

As the narrator was describing the sheer impossibility of cleaning the Bell-47D1 helicopter, and while I was pondering the same, a bunch of kids ran past me screaming, “It’s so dusty!”

That was pure joy. Would recommend 10/10.

Final thoughts:

While the audio walk was a pretty sub-par and broken experience, it still managed to create a few moments of serendipity, which suggests the inherent power of the medium. It will be interesting to see the possibilities that open up with location awareness, machine learning, environment detection and especially augmented reality! The walk really opened my eyes to the possibility of creating powerful experiences with sound, and I hopefully will be able to incorporate a lot of this in my work in the future. At some point, I will also get to do the ‘Central Park’ and ‘Whale Creek’ walks and experience what people have been raving about.

Onto the ITP sound walk assignment! The sounds that we collected can be found here.

Currently listening: Sunday morning-Lou Reed

Current level: Code-Scavenger. Update in progress (1/14)...

ICM Assignment 1

Status report:

The task, which I chose to accept, was to use the primitive shapes of p5js and to create a screen drawing of my liking.

The constraints that I set on myself for this assignment were:

  • To use the limits of the videos as a constraint, instead of using concepts I already know (loops, variables etc.) to make my life easier.

  • To focus on understanding the limitations of the shapes for creating an image and where it can get really hairy. One of the first questions in my mind was how shapes could be made parallel, perpendicular or aligned to each other when it is difficult to calculate the exact coordinates of their vertices. (For example: a rectangle which is perpendicular or parallel to the hypotenuse of a triangle.)

While I was thinking of shapes, I started thinking of Monument valley and how pretty the game was but built on repeating shapes and patterns. It seemed like a good enough template to try things with. Also, the player is tasked with recovering ‘sacred geometry’ in the game which fits into the theme of the assignment quite well. =)

So pretty!

I spent some time playing the game again (hey! It was ‘research’ *ahem*) and looking at it as a collection of basic shapes was quite illuminating. I also immediately realised the problem with creating isometric shapes using only coordinate values, without math-magic and vectors. Nevertheless, I decided to press ahead and see where it would take me.

As test subjects, I decided to focus on the main character of the game, Ida and her awesome, mute side-kick, Totem. I felt that they had the right amount of curves and shapes that would make it a challenging trial.

I decided to start with the Totem first because he(?) is awesome. I immediately decided to not do it in an isometric view and go with a flat view instead. In the spirit of the challenge, I decided to focus on the shapes on Totem’s body on the left side as they would be more challenging to pull off.

It started out pretty well…


The first 2 shapes were very simple. I discovered the beginShape() function, which made it a breeze, though the process of finding the coordinates was quite onerous. I haven’t done counting like that since high school!

My initial plan was to make the whole grid of boxes of the Totem which can be then printed and shaped into a 3-D paper object. The best laid plans of mice and men…

Seemingly satisfied with my progress, I decided to tackle the shape pattern inside the 2nd box. And the limitations of using a basic coordinate system were immediately laid bare.

Align weird shapes, they said. It will be easy, they said…



I could not, for the life of me, figure out how to maintain a consistent distance at an angle between two shapes. Well, not that much of a hacker, I guess.
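In hindsight, the vector math I was missing looks something like this (a sketch of my own, with made-up helper names, not part of p5.js): instead of eyeballing raw coordinates, you place a point somewhere along an edge and then offset it along the edge’s unit normal, which keeps the distance consistent no matter how the edge is angled.

```javascript
// Place a point at a fixed perpendicular distance from an edge
// (helper name and signature are my own, for illustration).
function offsetFromEdge(x1, y1, x2, y2, t, dist) {
  // Point along the edge: t = 0 at (x1, y1), t = 1 at (x2, y2).
  const px = x1 + t * (x2 - x1);
  const py = y1 + t * (y2 - y1);
  // Unit normal of the edge (the direction vector rotated 90 degrees).
  const len = Math.hypot(x2 - x1, y2 - y1);
  const nx = -(y2 - y1) / len;
  const ny = (x2 - x1) / len;
  // Step `dist` pixels away from the edge, perpendicular to it.
  return { x: px + dist * nx, y: py + dist * ny };
}

// A point 10px away from the midpoint of a horizontal edge:
const p = offsetFromEdge(0, 0, 100, 0, 0.5, 10);
console.log(p.x, p.y); // 50 10
```

The same call works for a slanted edge (like the hypotenuse that defeated me), because the normal is computed from the edge itself rather than from fixed coordinates.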

The failed sketch can be found at: Link

However, I could not leave Totem unfinished! He(?) has already been through a lot. Since I had 4 surfaces to choose from, I chose a relatively easier one and finished the sketch.


The sketch can be found at: Link

I gave the curve functions a spin and thought that I understood them. But when it came to re-creating Ida or anything with specific, controlled curves, I failed miserably. I have no idea how to control the curves to flow into the shapes that I want.

Ida will have to wait. But I shall be back!

Major learnings:

  • I might be missing something, but shapes seem not to be useful without a relative coordinate system. We invented computers so that we don’t have to manually figure out coordinates, but I have no idea how to do that here.

  • I have no idea how to use the curve functions to form any kind of remotely controlled curve. My respect for the programmers who wrote Illustrator’s Pen tool went up by 100.

  • Must learn curves.
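For future me, the math underneath those curve functions is a cubic Bézier: the curve starts at one anchor, ends at the other, and gets “pulled toward” two control points in between, which is what makes it controllable. A minimal evaluator of my own (not the p5.js API, though I believe p5’s bezier()/bezierPoint() work on the same principle):

```javascript
// Evaluate one coordinate of a cubic Bézier curve at parameter t
// (0 <= t <= 1). Call it once for x values and once for y values.
// This is my own sketch of the formula, for illustration.
function cubicBezier(p0, p1, p2, p3, t) {
  const u = 1 - t;
  return (
    u * u * u * p0 +        // pull of the start anchor
    3 * u * u * t * p1 +    // pull of the first control point
    3 * u * t * t * p2 +    // pull of the second control point
    t * t * t * p3          // pull of the end anchor
  );
}

// x-coordinates of a curve from x = 0 to x = 100,
// with both control points at x = 50:
console.log(cubicBezier(0, 50, 50, 100, 0));   // 0   (start anchor)
console.log(cubicBezier(0, 50, 50, 100, 1));   // 100 (end anchor)
console.log(cubicBezier(0, 50, 50, 100, 0.5)); // 50  (midpoint)
```

Moving p1 and p2 around is exactly what dragging the handles of Illustrator’s Pen tool does, so this one formula is the “remote control” I was missing.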



Confidence: +1

Ego: -5

Missions: 1/2

Secrets: 1

Currently listening: Tycho-Awake