Wednesday, December 26, 2012

The land in Laṅkā is wooden

Once Kaṡyapa's son Garuḍa expressed his desire to have a hearty meal to become strong enough to save his mother from the snakes. Kaṡyapa told Garuḍa about a huge elephant and a tortoise at the bottom of the ocean and told him to devour them. Garuḍa carried them onto a nearby Kalpavṛkṣa tree to enjoy his meal. But the weight of the well-built Garuḍa, together with the elephant, caused the tree branch to snap. Garuḍa held the branch with his beak, stopping it from falling on the Vālakhilyas (thumb-sized ṛṣis) who were meditating under the tree. Garuḍa then asked his father what he should do with the broken branch. Kaṡyapa directed him to take it to some place far away. Laṅkā was established on this branch of that Kalpavṛkṣa tree, and therefore the land there is wooden [1, 2].

[1] Tales from the Kathasaritsagara, Arshia Sattar (translator), Penguin Classics, 1994.
[2] The Ocean of Story, C. H. Tawney (translator), C. J. Sawyer Ltd., London, 1924.

Tuesday, December 18, 2012

V for Vendetta [1]


V: [Evey pulls out her mace] I can assure you I mean you no harm.
Evey Hammond: Who are you?
V: Who? Who is but the form following the function of what and what I am is a man in a mask.
Evey Hammond: Well I can see that.
V: Of course you can. I'm not questioning your powers of observation; I'm merely remarking upon the paradox of asking a masked man who he is.
Evey Hammond: Oh. Right.
V: But on this most auspicious of nights, permit me then, in lieu of the more commonplace sobriquet, to suggest the character of this dramatis persona.
V: Voilà! In view, a humble vaudevillian veteran, cast vicariously as both victim and villain by the vicissitudes of Fate. This visage, no mere veneer of vanity, is a vestige of the vox populi, now vacant, vanished. However, this valorous visitation of a by-gone vexation, stands vivified and has vowed to vanquish these venal and virulent vermin vanguarding vice and vouchsafing the violently vicious and voracious violation of volition.
[carves "V" into poster on wall]
V: The only verdict is vengeance; a vendetta, held as a votive, not in vain, for the value and veracity of such shall one day vindicate the vigilant and the virtuous.
V: [giggles]
V: Verily, this vichyssoise of verbiage veers most verbose, so let me simply add that it's my very good honor to meet you and you may call me V.
Evey Hammond: Are you, like, a crazy person?
V: I am quite sure they will say so. But to whom, might I ask, am I speaking?
Evey Hammond: I'm Evey.
V: Evey? E-V. Of course you are.
Evey Hammond: What does that mean?
V: It means that I, like God, do not play with dice and do not believe in coincidence. Are you hurt?

Reference:
[1] IMDb http://www.imdb.com/title/tt0434409/quotes (Accessed Dec 19, 2012)

Friday, November 23, 2012

Concatenation error

Any idea what is wrong here?

>> A = [-1./alpha.^2 -3./2./sig.^2, 1./alpha.^2-1./sig^2, -1./2./sig.^2;... 
1./alpha.^2-1./sig.^2, -2./alpha.^2-1./sig.^2, 1./alpha.^2-1./2./sig.^2;... 
-1./2./sig.^2, 1./alpha.^2-1./2./sig.^2, -1./alpha.^2-1./2./sig.^2];
??? Error using ==> sym.cat>checkDimensions at 76
CAT arguments dimensions are not consistent.

Error in ==> sym.cat>catMany at 39
[resz, ranges] = checkDimensions(sz,dim);

Error in ==> sym.cat at 29
    y = catMany(dim, strs);

Error in ==> sym.vertcat at 26
    y = cat(1,args{:});


Turns out there is an innocuous-looking space on the first line before the -3. Inside square brackets, MATLAB treats a minus sign that is preceded by a space but not followed by one as a unary minus starting a new matrix element, so the first row ends up with four entries while the other two rows have three, and the vertical concatenation fails. Next time, be careful when adding extra spaces to improve readability.
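The parsing rule is easy to see with plain doubles (a minimal sketch, not the symbolic matrix above):

>> x = 2;
>> size([1 -x])    % space before the minus, none after: unary minus, two elements
ans =
     1     2
>> size([1 - x])   % spaces on both sides: binary subtraction, one element
ans =
     1     1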

Wednesday, September 19, 2012

Maps icon in iOS 6.0

Interestingly, the maps icon in iOS 6.0 shows a route starting at BJ's Restaurant and Brewhouse on N De Anza Blvd in Cupertino, CA. Or is it the Apple Company Store nearby? You'll never know.

Monday, July 23, 2012

MathJax TeX Test Page

When $a \ne 0$, there are two solutions to \(ax^2 + bx + c = 0\) and they are $$x = {-b \pm \sqrt{b^2-4ac} \over 2a}.$$ Does that look ok?

Yes. All I did was include this in my HTML code:

<script type="text/x-mathjax-config">
  MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}});
</script>
<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript">
</script>

Saturday, July 21, 2012

A quick energy story in per capita numbers

Here are some interesting numbers related to the energy sector that I obtained from WolframAlpha and the 2012 CO2 Report.

1. Coal consumption
USA 1.122 billion tonnes per year (2008)
India 637.5 million tonnes per year (2008)

2. Oil consumption
USA 18.69 million barrels per day (2009)
India 2.98 million barrels per day (2009)

3. Greenhouse gases (CO2 equivalent) [1]
USA 5500 million tonnes per year (2011)
India 2000 million tonnes per year (2011)

To make it more interesting, let us look at these numbers per person. (The population estimates used here are from 2010.)

1. Coal consumption
USA 3.63 tonnes per person per year
India 0.525 tonnes per person per year

2. Oil consumption
USA 22.1 barrels per person per year
India 0.896 barrels per person per year

3. Greenhouse gases (CO2 equivalent)
USA 17.8 tonnes per person per year
India 1.65 tonnes per person per year

And now, to make it sensational, look at the ratios!
An average American uses about 7 times more coal and 25 times more oil, and produces about 11 times more greenhouse gases (CO2 equivalent) than an average Indian.
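The arithmetic is easy to check in Matlab. Here is a quick sketch, assuming 2010 populations of roughly 309 million (USA) and 1.21 billion (India), which are the values the per-person figures above imply:

% Totals from above, ordered [USA, India]
pop  = [309e6, 1.21e9];            % assumed 2010 population estimates
coal = [1.122e9, 637.5e6];         % tonnes per year
oil  = [18.69e6, 2.98e6] * 365;    % barrels per day -> barrels per year
ghg  = [5500e6, 2000e6];           % tonnes CO2-eq per year

coal_pc = coal ./ pop;             % ~[3.63, 0.53] tonnes per person per year
oil_pc  = oil ./ pop;              % ~[22.1, 0.90] barrels per person per year
ghg_pc  = ghg ./ pop;              % ~[17.8, 1.65] tonnes per person per year
ratios  = [coal_pc(1)/coal_pc(2), oil_pc(1)/oil_pc(2), ghg_pc(1)/ghg_pc(2)]
% ratios comes out to approximately [7, 25, 11]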


[1] Olivier JGJ, Janssens-Maenhout G and Peters JAHW (2012), "Trends in global CO2 emissions," 2012 Report, The Hague: PBL Netherlands Environmental Assessment Agency; Ispra: Joint Research Centre. [Available online at http://edgar.jrc.ec.europa.eu/CO2REPORT2012.pdf. Last viewed July 22, 2012]

Tuesday, June 5, 2012

Mouseless Browsing Add-on

Perhaps the only reason I would recommend installing this add-on is that it lets you change the scrolling shortcuts in Firefox to j and k. I also assign the keys , and . to history backward and forward, respectively. Under Preferences > Layout, make sure to hide all IDs and also uncheck the "Show tab ids" checkbox to get rid of the annoying numbered IDs.

Friday, March 2, 2012

A law that no one can break (not even Apple)


The article ``A Law Apple Would Like to Break'' by James B. Stewart (appeared on-line on Feb 24, 2012 under `Common Sense') [1] is fundamentally flawed in its interpretation of Bernoulli's law. The author uses Bernoulli's law to ``predict'' Apple's growth rate based on past numbers.

Before pointing out the serious flaws in Mr. Stewart's argument, let me formally state Bernoulli's law (also known as the weak law of large numbers) without getting into the mathematical nuances and notation: ``For an infinitely large collection of random numbers, each generated independently and under identical conditions, the average converges to the true mean of the random variable [2].'' Armed with this (almost) precise statement, one can see that the author's arguments are deficient on three counts.

1. Although sales, revenue and other such economic indicators of a company's performance may be regarded as random variables, these numbers are neither independent nor generated under identical conditions. It is easy to convince oneself that a company's performance numbers from the past and the present are correlated. Numbers in quarterly balance sheets are not generated by rolling a fair die over and over again!

2. It is not clear what true mean the author is referring to. Hence, it is illogical to conclude that Apple's growth should slow down by using the ``revert-to-its-mean'' idea. We do not know what the true mean is (no matter how much analysts and experts speculate). I do agree that there are other physical limitations that may lead to slower growth in the case of larger companies (of course, Apple cannot sell an iPhone to every human and rock on this planet), but attributing it to the weak law is absurd.

3. Finally, Bernoulli's law applies to random numbers as the collection grows infinitely large. It says nothing about individual outcomes. Incidentally, colloquial misuse of Bernoulli's law (as in the present article) is so common that it has a name of its own -- the Gambler's fallacy [3]. A streak of ten consecutive heads in ten tosses of a fair coin does not increase the probability of seeing tails in subsequent tosses! It is natural and quite intuitive to think that the coin toss experiment should eventually ``even out.'' In fact, one can justifiably invoke the weak law of large numbers and say that if a fair coin is tossed infinitely many times, then the fraction of heads (and tails) will become arbitrarily close to 1/2. However, twisting this argument to say that a streak of one hundred consecutive heads raises the chances of seeing tails on the 101st toss is naive, as the short simulation below illustrates.
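The simulation (a sketch of my own, not anything from the cited article, using a more modest streak of ten) shows both effects at once: the running fraction of heads converges to 1/2, yet heads remain just as likely immediately after a streak of heads.

% Weak law vs. the Gambler's fallacy, in a million fair coin tosses
n = 1e6;
flips = rand(n, 1) < 0.5;                      % 1 = heads, 0 = tails
runningFrac = cumsum(flips) ./ (1:n)';         % fraction of heads so far
fprintf('Fraction of heads after %d tosses: %.4f\n', n, runningFrac(end));

k = 10;                                        % streak length
windowSums = conv(double(flips), ones(k, 1), 'valid');
streakEnds = find(windowSums(1:end-1) == k);   % k heads in a row, with a toss after
fprintf('Empirical P(heads | %d heads in a row): %.4f\n', ...
    k, mean(flips(streakEnds + k)));
% Both numbers come out close to 0.5 -- the streak does not help.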

To conclude, I agree with the author's intuition that Apple's growth may slow down, but I have serious reservations about ascribing this to Bernoulli's law of large numbers. Apple is not running against this law in any manner whatsoever. No one can.

Notes:
[1] See the news article at NYT.com.
[2] For those with a penchant for mathematics, a precise statement of the weak law can be found in any standard book on probability. For instance, see J. A. Gubner, Probability and Random Processes for Electrical and Computer Engineers, Cambridge University Press, 2006, p. 576.
[3] The term originates from an interesting story that can be found on fallacyfiles.org. Also see the Wikipedia page on this topic and about the rampant misuse of Bernoulli's law under the garb of the law of averages.

Sunday, February 19, 2012

Confusing lim sup and lim inf

The definitions of $\limsup$ and $\liminf$ can get super confusing. I hope typing it out will help me remember them once and for all.

Let $(x_n)$ be a sequence of real numbers indexed by $n \in \mathbb{N}$. Consider the sequence $y_n = \sup_{k \ge n} x_k$, which is formed using the supremums of tails of the original sequence. Observe that $(y_n)$ is non-increasing (since ``the supremum over a smaller set can only get smaller''). We define $$\limsup_{n \to \infty} x_n = \lim_{n \to \infty} y_n = \inf_{n \ge 1} \sup_{k \ge n} x_k.$$

Now if we switch the order of infimum and supremum in the above definition, the new quantity still makes sense (because the sequence of infimums of tails is non-decreasing). It is natural to define $$\liminf_{n \to \infty} x_n = \sup_{n \ge 1} \inf_{k \ge n} x_k.$$

When faced with $\limsup$, think of the sequence of supremums obtained by sequentially chopping off the initial terms of the given sequence. Then take its limit, i.e., its infimum. One can interpret $\liminf$ in a similar way.
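For example, take $x_n = (-1)^n \left(1 + \frac{1}{n}\right)$. The even-indexed terms decrease to $1$ while the odd-indexed terms increase to $-1$, so $$\limsup_{n \to \infty} x_n = 1, \qquad \liminf_{n \to \infty} x_n = -1,$$ even though the sequence itself has no limit.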

Monday, February 6, 2012

*Must* use parfor!

I was recently implementing a Monte Carlo simulation with 50000 particles for estimating a probability density function, which took 1100 seconds of runtime. Using Matlab's profiling tool, I found that 99% of the time was being spent on generating the new set of 50000 sample points from the old sample points. I immediately switched to a parallel looping strategy (parfor) with 8 workers. The new runtime dropped to 80 seconds. Next, I found that I was calling a ``coinflip'' function in every iteration of the loop. I removed it from the loop, instead storing a sequence of coinflips in memory using Matlab's binornd function, and simply accessed this vector of ``already flipped coins'' inside the parfor. The final runtime was 40 seconds.
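The restructured loop looked roughly like this. This is only a sketch with a stand-in update rule (the actual resampling step is beside the point):

N = 50000;                           % number of particles
oldSamples = randn(N, 1);            % previous set of sample points
coins = binornd(1, 0.5, N, 1);       % flip all the coins up front, outside the loop

matlabpool open 8                    % 8 workers (2012-era syntax)
newSamples = zeros(N, 1);
parfor i = 1:N
    % index into the vector of ``already flipped coins'' instead of
    % calling a coinflip function on every iteration
    if coins(i) == 1
        newSamples(i) = oldSamples(i) + randn;   % stand-in update rule
    else
        newSamples(i) = oldSamples(i) - randn;
    end
end
matlabpool close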

Saturday, January 28, 2012

"So, what are you working on?"

I have been asked about my PhD research on multiple occasions. The following description keeps people from yawning and/or dozing off while I ramble about what I work on these days.

Imagine that you are running a 100-meter sprint. Once in a while, you are nudged by a mischievous member of the audience, causing you to speed up or slow down instantaneously (depending on whether the mischievous individual pushed you from behind or from the front, respectively). You continue to run at a constant speed between nudges.

Now, also imagine that there are sensors and timers at every meter mark along the track that log the time you cross each sensor. But these timers are inaccurate; so they record some number that may be slightly bigger or smaller than the actual time you crossed the sensor.

Given these 100 timer readings, can you find the locations where you were nudged by the frolicsome bystander?

Saturday, January 21, 2012

Prior, posterior, likelihood, MAP, ML

Let $\theta$ be a parameter and $x$ be the observation or data.

The posterior density is the probability density of $\theta$ given the observation, that is, $p(\theta \mid x)$.

The a priori (or prior) density is the probability density of $\theta$ before any observations are made, that is, $p(\theta)$.

The likelihood density is the probability of the observed data, given the model parameters, that is, $p(x \mid \theta)$.

Using Bayes' rule, it is easy to see that $$p(\theta \mid x) = \frac{p(x \mid \theta) \, p(\theta)}{p(x)},$$ or, posterior $\propto$ prior $\times$ likelihood.

When learning an unknown parameter from data, two commonly used estimation methods are as follows.

1. Maximum a posteriori (MAP) estimate: $$\hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta} p(\theta \mid x) = \arg\max_{\theta} p(x \mid \theta) \, p(\theta)$$

2. Maximum likelihood (ML) estimate: $$\hat{\theta}_{\mathrm{ML}} = \arg\max_{\theta} p(x \mid \theta)$$

Notice that under a uniform prior, MAP and ML are identical.
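For a concrete (and standard) example: suppose $x_1, \ldots, x_n$ are i.i.d. $\mathcal{N}(\theta, \sigma^2)$ with known $\sigma^2$, and the prior is $\theta \sim \mathcal{N}(0, \tau^2)$. Then $$\hat{\theta}_{\mathrm{ML}} = \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i, \qquad \hat{\theta}_{\mathrm{MAP}} = \frac{\tau^2}{\tau^2 + \sigma^2/n} \, \bar{x},$$ so the MAP estimate shrinks the sample mean toward the prior mean $0$. As the prior flattens ($\tau \to \infty$), the MAP estimate approaches the ML estimate, in line with the uniform-prior remark above.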