Space between notes

Indian music is about the space between notes. It’s also about the space within.

For over twenty years, every one of my attempts at expressing my musical ideas digitally hit the wall of that realization.

Indian music is fundamentally monophonic. If mono-melodic were a word, it might be more apt. It is about creating a single melodic line that remains complex enough to be interesting. There’s a gap in time that a strand of moving frequencies wants to traverse. Of all the possible paths, I need to find one I haven’t traversed before, yet one that leads to the next.

My favorite part of Indian music, especially Hindustani classical, is the primacy of improvisation. It calls for continuously creating new ways of moving in time and frequency. It’s like building intricate structures on the go while maintaining uniqueness, something like the growth of crystalline structures in a supersaturated liquid. And like that process, here too I need to supersaturate my brain with musical ideas, with dissolved melodic strands.

I like doing that. Very much. For the music that gets realized is something new for me as well. Performance here is not about retelling, nor about recreating.

When a group performs music together, it’s about being part of the whole. Our individual expression needs to be congruent with the rest; it is relative to the rest. Producing music as a group is exhilarating, and very fulfilling. But to make this togetherness happen, there needs to be a musical framework that supports it. European classical music traditions (let’s call it western classical), probably influenced by the group worship traditions of the Judaic religions, came up with a strategy based on creating spectral structures. It tries to fill the audible spectrum with static structures. The movement, then, is the morphing of one of these spectral artifacts into the next in quantized time. Being written for and performed by many people (or on polyphonic instruments like the piano and organ), it also specifies the limits of individual freedom.

In contrast, Indian music revolves around an individual, and it mostly involves monophonic instruments. It is not restricted by conditions relating to other simultaneous sounds. It is about creating a structure in time. Expression doesn’t happen across the spectrum in static time, but as a defined, narrow frequency band continuously moving along the spectrum. It is by exploiting this temporal movement that Indian music finds its expressive power.
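To make that contrast concrete, here is a minimal synthesis sketch in Python (assuming only numpy; the note frequencies and function names are purely illustrative). A keyboard can only jump from one static pitch to the next; a meend lives in the space between them:

    import numpy as np

    SR = 44100  # sample rate in Hz

    def glide(f_start, f_end, seconds):
        """Render a tone whose pitch moves continuously from f_start to
        f_end: a narrow frequency band traversing the spectrum in time."""
        t = np.linspace(0.0, seconds, int(SR * seconds), endpoint=False)
        freq = f_start + (f_end - f_start) * (t / seconds)  # linear pitch path
        phase = 2 * np.pi * np.cumsum(freq) / SR            # integrate frequency to get phase
        return np.sin(phase)

    # Discrete steps, as a keyboard would produce them: Sa, then Re.
    steps = np.concatenate([glide(261.63, 261.63, 0.5), glide(293.66, 293.66, 0.5)])
    # The continuous movement through the space between the same two notes.
    meend = glide(261.63, 293.66, 1.0)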

Digital music, and electronic music in general, is a product of the western classical and art music traditions. Naturally, most of the electronic music products in the market align more with western music. Until two or three years ago, even sampled Indian instruments were a rarity among virtual instrument and synth vendors. And while the harmonium is now widely used in Hindustani music, the piano key arrangement and its playing techniques are not very conducive to producing Indian music.

Another aspect of electronic art music is that a large portion of it is still produced in non-live settings. Even in live EDM, the majority choose live remix/DJ strategies over creating new, improvised elements. There are, of course, exceptions, but tool makers usually fulfill the requirements of the majority before attending to the niche.

So, for a non-commercial musician like me, there were not a lot of options for working purely in the digital domain, especially in a live setting.

In the last couple of years, there have been a few instruments that try to get away from the traditional piano-keyboard approach to MIDI controllers for digital music. In general, these instruments provide a vastly enhanced level of interactivity. While it has long been possible to manipulate and control a large number of parameters of virtual instruments and sound modules, the ways to interact with them were at best cumbersome. One could set up control surfaces with many buttons, knobs and sliders, but remembering and fluidly using them live is a tremendously difficult task. Add to that the demands of improvising something like an Indian raaga, and this very soon becomes impractical.

The key to playability is easy and contextually clear access to the various parameters of the sound. This means I should be able to access and modify a parameter in an intuitive way, without having to come up with difficult-to-execute workflows.

As an example, let’s assume we want to play a sampled violin. In a traditional keyboard setup, I will use the piano keys to communicate the note and its initial velocity with my fingers, the expression pedal for dynamics, additional keyswitches on the keyboard for selecting the right stroke or sound, the pitch bend lever or wheel for intonation, the mod wheel for timbre, and a few buttons on the controller for additional characteristics of the sound like bow pressure or speed. Not only do we run out of limbs pretty fast, it also imposes a huge information and perception load.
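In raw MIDI terms, the traffic for a single expressive note in such a setup looks something like the hedged sketch below (using the mido library; the port, note numbers and the keyswitch convention are assumptions for illustration, since every sample library defines its own):

    import mido

    out = mido.open_output()  # default MIDI output; name a port explicitly in practice

    KEYSWITCH_LEGATO = 24  # hypothetical keyswitch note defined by the sample library

    out.send(mido.Message('note_on', note=KEYSWITCH_LEGATO, velocity=1))  # pick the articulation
    out.send(mido.Message('note_on', note=62, velocity=90))               # the note itself
    out.send(mido.Message('control_change', control=11, value=100))       # expression pedal: dynamics
    out.send(mido.Message('control_change', control=1, value=40))         # mod wheel: timbre
    out.send(mido.Message('pitchwheel', pitch=1024))                      # pitch bend lever
    # ...and bow pressure/speed still need yet more CCs on yet more knobs.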

If one looks at how a violinist plays the physical violin, she is manipulating several things at the same time as well. However, everything a violinist has to do is within the immediate context. There is no need to take your hands off and tweak a knob to change the pressure of the bow, or use a different mechanism to play a legato or portamento. The resulting workflow is much less cumbersome, easier to remember, and easier to build muscle memory for. This is crucial when producing original music, especially in an improv setting, when most of the brain should be working on the evolving structure of the music rather than the details of the workflow.

In a follow-up article, I will describe how some of these new protocols, virtual instruments and hardware controllers come together to create a setup that is very close in its interactivity and expressiveness to some of the traditional physical/analog instruments.


0, 1, infinity: a digital art music tour

It is only recently that digital music production tools have started to provide the means to express the nuances of Indian music. This is true of the software tools, the virtual instruments, and the means of human-computer interaction.

The most commonly used music creation tools, viz. sequencers/DAWs, provide an interface and workflow that suits the creation of orchestral compositions using western musical idioms quite efficiently. Many tools provide advanced capabilities for transforming and creating novel forms that adhere to the principles of western music. Indian music, which is pretty archaic in its representation and even codification,* finds little support in them.

Virtual instruments (I mostly use VSTs these days) are another area of digital music with expansive support for traditional, conventional and novel instruments and sounds, and many of them are eminently playable. However, the same design and implementation decisions that are heavily biased towards western music production make them rather clumsy to use in an Indian music context. It is true that there are Indian instruments available as VSTs (typically packaged under the “ethnic”, “world” or “exotic” category!), but their playability, especially live playability, is quite suspect. Some instrument developers try to compensate for the limitations of the DAW and keyboard by providing prebuilt transitions and phrases. However, I find these of very limited use, since my compositions need more than the occasional riff of an “exotic” instrument.** Recently, however, more VSTs have appeared that consider such dynamic and nuanced articulation and provide a way to execute it more or less naturally, even in a live setup. Many of the new modelled instruments (as against the sampled ones) seem to have a better handle on this at this time.

The influence and adaptation of instruments from other musical cultures, including the European, into Indian music has been going on for some time. However, the major impact on musical expression itself started happening with the widespread adoption of orchestral instruments as the backing for popular music. The harmonium is another of those instruments that changed the way Indian music is expressed. MIDI keyboards and other controllers for interacting with digital music tools were a direct copy of piano keyboards with some additional capabilities. However, except for some fringe, experimental ones, none of them provided the nuanced tonal control that Indian music demands. This was the case until a few years ago, when a new class of MIDI instruments started to appear in the market, collectively referred to as MPE (MIDI Polyphonic Expression) controllers. A new extension to the age-old MIDI standard, accommodating larger amounts of per-note data to support these controllers, was developed alongside them. The result is several new MIDI devices, some looking drastically different from standard piano keyboards, that came to the market in the last two or three years.

One such instrument is the LinnStrument, which has a matrix layout of keys instead of the linear piano one, similar to the fret layout of many string instruments. It also allows four different parameters to be controlled separately for each note, viz. velocity, pressure, timbre and pitch. This is much closer to what a physical instrument like the sitar or violin provides. The Breath and Bite Controller from TEControl is another such device; while it does not produce note data, it captures x-y movement and bite pressure along with breath.
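A rough sketch of what this per-note control means at the protocol level, again with mido (note numbers and values invented for illustration). Under MPE, each sounding note is assigned its own MIDI channel, so bend, pressure and timbre (conventionally CC74) can be shaped per note, the way a finger shapes a string:

    import mido

    out = mido.open_output()

    # Each note gets its own channel (MPE "member" channels).
    out.send(mido.Message('note_on', channel=1, note=60, velocity=100))
    out.send(mido.Message('note_on', channel=2, note=64, velocity=90))

    # A slow meend on the first note only; the second note is untouched.
    out.send(mido.Message('pitchwheel', channel=1, pitch=3000))                # per-note pitch
    out.send(mido.Message('aftertouch', channel=1, value=70))                  # per-note pressure
    out.send(mido.Message('control_change', channel=1, control=74, value=85)) # per-note timbre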

0, 1, infinity is, in some sense, a celebration of these tools and methodologies now available to produce digital music that includes the extensive melodic nuances of Indian music. It is also my journey from being an analog bamboo flautist to a purely digital musician.

The video above is from the premiere performance of the tour at David Hall, Fort Kochi; it is movement 3 from the tone poem “Night in the Meadow”. The tour will continue till March, exploring more and more aspects of these new possibilities.


*While Indian classical music, especially Carnatic, has a very strong body of formal theory, it is more about the static structure of the music than about a dynamic performance. The gamakas (meend), exact durations, microtonal assignments etc. are left out of the representation system. So a written version of Indian classical music only gives an outline.

**The situation is better for percussive instruments, though. There are excellent Indian percussion libraries available. My complaint is mostly about melodic instruments, as this is where Indian music drastically differs from other musical expressions.

Monsanto and the Farmers’ Suicide

Over at Discover magazine, Keith Kloor has a post about the Bt cotton suicide narrative in India (http://blogs.discovermagazine.com/collideascape/2014/01/07/selling-suicide-seeds-narrative/). The post announces a much longer article about the problem (http://blogs.discovermagazine.com/collideascape/files/2014/01/GMOsuicidemyth.pdf).

I am a very big proponent of genetic engineering and believe that genetically modified crops and livestock will help us in a big way to reduce our environmental impact and to produce enough for the growing population. However, I am also a strong critic of the monopolistic practices of the GM companies (Monsanto et al.).

I acknowledge that these companies spend a lot of money to come up with their GM products. They guard their secrets and even put very severe restrictions on replanting or otherwise propagating the changes by the farmer herself.

Vandana Shiva is a person I admire for her zealous fight on the issues of women and the poor. However, I have very grave differences with her ideology, methodology and tactics. The article charges Ms. Shiva with manufacturing a crisis story, or at least with appropriating a real tragedy for her fight against globalization and genetically modified crops. At the same time, I am equally or more alarmed by the fast pace of economic “liberalization” in India, which vastly increases the power and resources of the rich and of business while systematically eroding the social safety nets. Giving a huge multinational corporation any level of power over India’s food supply is quite a scary thought. However, as I said in the beginning, I believe that as a technology, GM has a lot to offer a country like India.

While Ms. Shiva is a particularly vociferous and a bit over-the-top activist, one thing the article does not mention much is the much higher cost of farming with modern farming technologies, including GM (GM seeds typically cost much more and come with severe restrictions on re-seeding). The tightening of credit by the ongoing liberalization of banking (which, btw, is part of the same globalization Ms. Shiva and many others are against), the weakening of social safety nets, and the perennial Indian problems of slow infrastructure growth and uneven investment are all problems affecting India’s poor, which most farmers are.

Again, while I have no undue worries about the technology of genetically engineered agricultural products, the Monsanto (or any other large GM seed company) way might not be the best way to provide agricultural stability in developing countries. The primary concerns are the cost of seeds and the strictly commercial nature of their availability. For example, in the case of a crop failure, a farmer could earlier acquire locally grown seeds for very little money. But when most farms are already growing GM crops (e.g. Bt cotton), the non-GM farms will fare extremely poorly. It is not a far-fetched conclusion that the widespread adoption of GM food crops owned by large multinationals like Monsanto can have a significant effect on the country’s food security.

The problem in India is not the technology of GM, but the way the technology is being adopted. If, for example, Bt cotton had been an open source technology, it could literally have transformed the Indian villages of the cotton belt.

Things are not always black and white.


Pixie Flux theory of Quantum Consciousness

In a video by Sixty Symbols (see below), Prof. Moriarty, while giving a dressing-down to Dr. Lanza and his theory of quantum woo, mentions that one could postulate pixies popping in and out of existence to create spooky effects. I think this is a serious proposition. One could come up with a hypothesis, without violating any of the laws we currently hold, that this is what actually happens.

Prof Moriarty on Quantum Woo

This is how it works. Since anything can happen within a Planck time without violating any physical laws (like the creation and annihilation of particle-antiparticle pairs), one could hypothesize that there are magical Pixies that pop into and poof out of existence in under 10⁻⁴³ seconds. These Pixies are the ones that maintain reality and cause all kinds of quantum spookiness. The Pixies are also the carriers of consciousness. Well, actually, consciousness is produced by the fluttering of the Pixie wings.

This hypothesis is named the Pixie Flux Hypothesis© of Quantum Consciousness. I am claiming the copyright for this hypothesis. Unlike actual theories of physics, for which one can write a paper and get it peer reviewed and published, this one can only be ascertained by the brute force of an enlightened mind (mine).

Pixie Flux Hypothesis by Salim Nair is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Based on a work at https://bamboodreams.wordpress.com/.


Scienciness

I was watching this video of Richard Dawkins debating Deepak Chopra. It is an interesting watch.

While watching it and responding to the comments, I was trying to find the right word for what Deepak Chopra does with science, something that expresses the manipulation and dismemberment of scientific ideas that Chopra indulges in. Then I remembered: Truthiness! But that is about truth. A good version of it for this case would be Scienciness. It appears the word has been used in a similar fashion by a few before me, here and here for example. That is perfect. So now I have a good word for what Chopra does.

He is indulging in Scienciness!!!


Storm!

My visit to India after a year was, as always, filled with situations very close to what Tim Minchin inimitably describes in this video. Enjoy.

Tim Minchin: STORM

I am afraid of superstar developers!

I now think that choosing Roman numerals to represent parts in my series of posts about agile was a bad idea. It is not very scalable! This post, while related to agile, is not part of the series anyway.

A few months ago, I read a blog post about agile development. I think it was called “10 points to consider to make a good agile team”. (Unfortunately, I couldn’t find the article again.) One of the first few points was – I am paraphrasing – “Recruit the best programmers/developers for the team.” When I was reading it, I thought: what sort of advice is that!

If you can get excellent, top-notch, superstar developers on your team, then it is not a big deal to be productive. But it is obviously not practical as a general strategy. To start with, the terms excellent, top-notch, brilliant etc. are quite vague. Even if you are experienced enough to figure out what they mean and are able to determine whether a candidate matches these vague criteria, not every software team in every company will be able to find such people. As a rule, a developer who is considered excellent will be relatively expensive compared to a more average developer. If you are working in a large company, there will be severe competition among teams to steal the better developers, and not all of them are going to get the people they prefer. And since superstars are in high demand, there is always the fear of losing them!

Today I was reading an excellent article by Laurent Bossavit of Institut Agile (in French) named Fact and Folklore in Software Engineering. The article is about the oft-quoted statement that “the best programmers are 10 times better than the worst”. While I had heard this statement in my early days, I had never thought about it in depth and considered it an insignificant observation in practical software production. Bossavit goes into the history and evolution of this statement in great depth.

What attracted me most was the rigor with which the article was written. It is an excellent review in the scientific sense. He starts from the first published record of the statement, a 1968 paper by H. Sackman et al. While this paper describes an experimental study, it actually measures differences between debugging tasks. Bossavit then investigates further papers that repeat this claim, either by referring to the original study or to other studies that supposedly replicate its result. He then looks at a 2008 blog post by Steve McConnell which discusses this problem and surveys papers published after 1968 that apparently support the claim. The fun is when he tries to verify the references, which turn out to be mere repetitions, circular references or opinion pieces. (I am not going to repeat this part. Read the post.)

A common problem in software development is the subjectivity of measurement. There are many metrics and methodologies for measuring developer productivity, both for predicting and for rewarding. But most of them are quite arbitrary. It is one thing to introduce a measure with an express acknowledgement of its arbitrary nature, e.g. the story points used by many agile teams. But when people take these numbers, come up with complicated calculations of velocity, and plug them into traditional project management schemes, things start to crumble.
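A minimal sketch of that pseudo-precision (all numbers invented): a few arbitrary estimates go in, and an authoritative-looking delivery date comes out.

    # Story points are estimates, not measurements; the arithmetic below
    # cannot add the rigor that the inputs never had.
    completed_points_per_sprint = [21, 13, 34, 8]
    velocity = sum(completed_points_per_sprint) / len(completed_points_per_sprint)

    backlog_points = 190
    sprints_remaining = backlog_points / velocity
    print(f"velocity: {velocity:.2f} points/sprint")
    print(f"delivery in {sprints_remaining:.1f} sprints")  # looks exact, isn't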

Another thing I noticed while reviewing the articles was that they all measure the initial time to solve a certain task. In practical software development, the initial solution to a problem is just that. The total cost of that solution will not be clear until it goes into production and actual users start using it. So, just because a person can finish a solution to a problem faster than everybody else does not mean that she is the most productive.

I think the real challenge for a developer, especially a person in a lead/mentor role, is to figure out how to achieve sustainable productivity with an average team comprised of average people. If you don’t have superstars on the team, you don’t have to fear losing them!


Being Agile… Part IV: Falsify me, please

My reference to Adam Savage in Part III was not just incidental. I think it is a very profound one, especially in software development.

Adam Savage, in a later podcast (unfortunately, I was unable to find it), explains how the phrase “Failure is always an option” represents a fundamental fact about scientific enquiry. Unlike in the movies, where mad scientists work like crazy and are heartbroken when their experiment “fails”, scientific enquiry, especially experimental enquiry, thrives on failure. There might be a favored outcome for an experiment, but if the real outcome is different, it provides data. Failures in many cases provide vastly more data than successes. And even when one gets the expected results, they will most likely be falsified later by someone else.

It is hard not to notice the similarity between this and software development.

Every claim about a piece of software is eminently falsifiable.

As software users, many of us are faced with the mysterious ways in which an application works. But this is neither due to supernatural intervention nor to Heisenbergian uncertainty. It is the simple, classical phenomenon of not having enough information about the inner workings of the software. While developing software, however, one cannot really appeal to ignorance. Many things in software development resemble Murphy’s law on steroids: things are guaranteed to go wrong, and they will always go wrong in detectable ways.

Software development has always been considered, and has striven to be, an engineering discipline. This is why we create one engineering process after another to make it behave more like other engineering projects. But the history of these tight engineering controls is at best dismal, and even when they worked, they did so by curtailing innovation and creativity to the extreme. Agile/extreme programming was in many ways a response, a resistance movement if you will, against this tyranny of process. It concentrates on the human element (like the Dow Chemical commercial – but then they went and bought Union Carbide) and on creativity. Instead of trying to control and limit change, agile methodologies embrace change.

Successful software development demands a lot of intellectual commitment from the people involved. It is more like pursuing a scientific experiment. Here we have this hypothesis: what is the best way, in terms of representational accuracy, maintainability and overall usability, to model it? There is always a multitude of choices, and the optimality of any one of them is unlikely to be clear at the beginning of the process. So we start with hypotheses and empirically show that each assertion is either true or false. Irrespective of which answer we get, the data we collect during this process define the problem in a better light. We get to define more variables and obtain values for more constants. Maybe we have to go back to the proverbial “drawing board” and adjust our original hypothesis, except that the drawing board here is a permanent part of the workspace. One difference from a scientific experiment, though, is that at the end of even a partially successful experiment, we get something more tangible.

This is the spirit of scientific enquiry. This is why I think software development should be treated less like an engineering discipline and more like a research activity.

Agile in many ways does this. It unseats many of the mechanistic visions of earlier methodologies by focusing more on team dynamics and accepting change – constant change – as a welcome phenomenon. I am sure many of you are familiar with the old adage that the cost of change in software development increases exponentially as the project progresses, which results in the axiom that we should try to reduce change and capture as much as possible in the beginning. This is a wrong premise. The usual dynamic is that the user finds more things to change as the feature/component nears completion. Users may find that many of their original assumptions were not accurate. We can always shut the user out until we announce that everything is done, and then tell them to live with what they have, just like with their last home improvement job. There was a time when this would have worked. But users now understand more about software and its nature. We can no longer afford to blame the user for everything that went wrong… “they changed the specification, they don’t know what they want!”

There are some parallel efforts at resurrecting the engineering credentials of software; one such attempt is Intentional Programming. An assumption I held earlier was that the current problems in software development exist just because it is a new industry, and that it will eventually find its true calling. However, the nature of software, that of modeling real-world scenarios, makes it unlikely that this will happen soon. The complexity of human society, individuals and interactions, even that of our artificial systems like banking and finance, is so great, and our ability to model or even understand them is still at a very early stage. Software, which tries to create virtual worlds and information models about them, and sometimes even helps create this understanding, is bound to be complex and tentative.

That takes us to the next parallel between software development and scientific enquiry: the tentativeness of the solutions we create. There are many factors that reduce the overall usability of a system and create obsolescence: changes in practices, the advent of new hardware or software technology, changes in social expectations, and so on. Even when we successfully produce a model that satisfies the requirements, we have to constantly question the viability of that model. The trigger could be a new human-computer interaction paradigm like multi-touch or Kinect, the ubiquity of small form factor devices, a change in financial regulations, the expectation of connectivity with the rest of the digital world, the disappearing boundaries between office and home. Just as there are no sacred theories or laws in science, there is no sacred software. There are no eternal killer features. There are no “only I can do this”es.

What this means is that if I don’t poke holes in my model, someone else will. And if that someone else is a customer with the cudgel of a Service Level Agreement, it could bring us a world of hurt. So the best way is to do it proactively. The main function of a software developer is to think about how to break what we have done, how to negate the hypothesis, how to falsify what we just proved.

So, go ahead, break your FitNesse tests. Break the build, and if you cannot fix it within the day, buy donuts (or parippu vada) for the whole team. As long as you take the code to a better place, it will all be forgotten.


Being Agile… Part III: Failure is always an option!*

So, we do TDD. We proudly announce the number of unit tests and the percentage of coverage as part of our scrum achievements. We make demands on minimum coverage (for a brief while, when we had TFS, it was a check-in constraint). But what do we actually gain by testing? Is there a law of diminishing returns in testing?

Uncle Bob, in his first presentation at our company, demonstrated the bowling example. It was such a simple, eye-opening experience to see how easy it is to over-specify.

As I mentioned earlier, we were used to very delayed gratification. There were no demands on checking in code; actually, we encouraged private branches for doing stuff. Sometimes it took months before the changes got into the build.

The fun part about unit testing (especially if you have a “run this test” context menu) is the instant gratification, even in failure. You should first write a test that fails, so says TDD.

In a non-TDD style of development, we always expect to succeed. The first time I press build after a series of changes, I expect it to pass. The first time I run the application after a change, it had better not crash. Even under the best circumstances, a full build and run of the application, which was required if the change was in any of the core units, takes quite a while. Once the application comes up, we need to log in and navigate to the view with the change to see its effect. If we were to find something not working, it would be a downer. So we expected to succeed all the time, thereby accumulating heartbreak upon heartbreak.

What changes when tests become the primary focus of development? When you write a unit test to model a new behavior, the first attempt is not even supposed to compile. Sometimes, if you are just fixing a behavior, you might have a unit test that compiles without changes, but it definitely should not be passing. So most of the time, we are specifically looking for a failure. It grows on you. I am no longer ashamed of a compilation error. It is a piece of information, sometimes a quite valuable insight into the change I am about to make. And since no airplanes fall from the sky (and no angels die) when a unit test breaks, we can afford to do this over and over. Every failing test gives us yet another insight into the problem, one more thing to do; every passing test makes us look for the next best way to fail.
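For the uninitiated, here is what that red-green rhythm looks like in its smallest form (pytest style; add is a made-up stand-in for real production code):

    def add(a, b):
        raise NotImplementedError  # no production code yet: the test must fail first

    def test_add_two_numbers():
        assert add(2, 3) == 5  # red: this failure is information, not shame

    # After watching it fail, replace the body of add() with `return a + b`
    # and run again: green. Then look for the next best way to fail.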

Accepting failure as not just a normal outcome but a desired one makes things much less stressful. If we fear failure, we will build safety measures for every imaginable way something can fail. The problem is that there are more imaginable ways to fail than plausible ones, and far more plausible ways than probable ones.

In a very fast-paced environment, things do go wrong from time to time. Since failure is welcome, there needs to be a way to celebrate it. This is why we invented the Blame Game™. When something goes wrong, when the build turns red, when a test “works on my machine” but nowhere else, when you wipe out the changes for 50 FitNesse scripts with one wrong merge, we blame. Of course, the blamee doesn’t have to accept it. There can always be comebacks, as long as they are more logically consistent and better evidenced than “the dog ate my hard drive”. The key is to embrace the failure.

What TDD, not just unit testing but aggressive acceptance testing, teaches us is to fail often and fail gracefully. As we all know, if your millions of assertions never fail, they are as good as absent. The value of a test is in its failure.

—-

*This is one of the best memes to come out of MythBusters, promoted constantly by Adam Savage. There is a podcast where he describes why it is a fundamental principle for him. I hope it is not copyrighted by Adam or the Discovery Channel.
