2015 Z06 Ascending Page Mill Rd to Skyline Blvd


One of my favorite roads in Silicon Valley. From I-280, go west on Page Mill Rd, climbing all the way to Skyline Blvd. The pavement is a little rough in several places, which kills some of the advantage of this car's tires, but that unpredictability is part of the fun. Choose early morning or just before sunset, and you might get the whole ride without having to pass even a single car. This was my first ride with the camera setup installed, and I was barely using the car's capabilities, but it was exhilarating nonetheless.

2015 Corvette Z06 with Z07 upgrade, full convertible, 650 HP, 7-speed manual transmission, white with black interior, Michelin Pilot Sport Cup 2 tires.
Main camera: Corvette built-in PDR in sport mode
Top-left inset: GoPro mounted between seats
Top-middle inset: GPS map

Lego EV3 Cobra vs NXT Alligator vs EV3 Elephant

Watch an autonomous EV3 Cobra with fangs go after an aggressive NXT Alligator, with a lumbering Elephant to round out the competition! Each one was built and programmed by my 9-year-old son. The Cobra and Alligator are autonomous, their movements guided solely by their sensors and programming. The Elephant's movement and trunk are remote controlled.

Ten Thousand Miles in Fourteen Years

Fourteen years after landing at JFK airport in NY..

Sitting on the curb outside with nowhere to go..

Eating a 99c McBurger two times a day for months on end..

After experiencing the pure ecstasy of being in Times Square at the millennium’s turn..

After sleeping overnight several times on the cold sidewalk outside the US Consulate in New Delhi..

After experiencing George Bush vs Al Gore in 2000..

After watching the towers go down in 2001..

The smell of ground-zero, and the cold stares immediately after..

After getting laid-off abruptly and having to find a job within 10 days or go back..

Not being able to attend my father-in-law’s funeral because of visa issues..

Not being able to change jobs for years because the green card was “in-process”..

After experiencing John McCain vs Barack Obama in 2008..

After filing 19 US patents..

Getting a chance to touch millions of lives while working at eBay..

Finally being “free” to quit working for someone else, to test my mettle against the raw elements..

Feeling the joy of helping with safety for millions of people while running WeatherSphere..

After experiencing Mitt Romney vs Barack Obama in 2012..

Ten thousand miles from where I was born..

On June 19, 2013.. I finally got my US citizenship!


WeatherSphere Has 3 Apps In Top 58 Paid iOS Apps

Gratifying to see that when the weather gets really nasty, millions of people across the country turn to WeatherSphere apps to ensure their loved ones’ safety. Today (May 20, 2013) has been a busy day, with massive tornadoes hitting the Midwest that led to significant loss of life. Millions of existing users were using our apps simultaneously to track the storms in real-time, and thousands of new users were downloading the apps and trying them for the first time.

Today, our virtual “cloud” at RackSpace actually came into use to foil the real storm clouds! At peak, we were serving nearly a gigabit of traffic per second, something we couldn’t hope to do with our original fixed infrastructure.

WeatherSphere Tops Apps

There is no question about it; the cloud plays a huge role in leveling the playing field. Without it, there is no way we could have scaled up in time to serve today’s traffic.

Why Apple’s App Store Still Has Its Mojo

There is a lot to say about Apple in general, but the evolution of its App Store is an important subject in itself. The App Store app is the single largest gateway to all of Apple’s revenue from apps. This app is the embodiment of many design, technological, product, and business concerns necessarily combined together, each conflicting with the others in some way, and yet it somehow works.

The App Store ecosystem is also responsible for the livelihoods of many developers (including my company WeatherSphere). I would like to pay homage by highlighting some of the good things it does (before ripping it apart in a future post).

These are in no particular order, and some affect us more than others. I am basing this on my experience building apps full-time for the last several years, initially by myself, now with a small team.


Awesome Notifications Platform

This is probably the most under-appreciated service Apple provides to all its developers, at no charge whatsoever no matter how much you use it, and available to all apps whether paid or free. This single service by itself is responsible for killing the carriers’ revenue from text messages, or rather, forcing them to stop ridiculously overcharging for text messages.

I can attest that every single text blurb sent via the notification system reaches its target user’s device within a few seconds, even when we are sending hundreds of messages a minute. Multiply that by the tens of thousands of other developers doing the same thing at the same time, with recipient users spread across the globe, and you can imagine the scale. The very fact that it continues to work is mind-boggling. Having built many of eBay’s high-volume backend services, I speak from experience.

Developer Tools

In the very beginning, like many developers coming from the Linux/Emacs world, or the MS/Visual Studio world, or the C/C++ world, I was aghast at having to kowtow to Apple’s edict that we learn the archaic “Objective-C” programming language as a prerequisite to building iOS apps. On top of that, imagine the surprise many of us had after attempting to download the developer toolkit on our trusty old Dell laptops, only to find out that iPhone apps could only be developed on Macs!

But over time, this was offset by the fact that the developer toolkit was free, the documentation was excellent, and there were not too many bugs.


Limited Screen Sizes

This is a boon. Until the iPhone 4 came along, there were only two screen sizes to design for: iPhone and iPad. The iPhone 4’s retina display caused a disruption, but Apple wisely kept the “virtual” screen pixel size the same as before and allowed for automatic scaling. For us, the biggest irritant was the iPhone 5 screen-size change. But even then, there is broad stability in the number of screens we have to design for, and this is a huge time saver. And as we all know, time is money.


App Security

I love the intrinsic security built into the system, where apps cannot be installed and run on random devices unless explicitly authorized by both the developer AND the device owner, or unless they are released via the App Store. For sure there is the jailbreak community and cracked versions of paid apps, but by and large the system works.

In-App Purchases

Did I mention earlier how in awe I am of how well Apple’s in-app purchases work? Yes, they take 30%, but oh well. From a security standpoint, we are totally happy not having to deal with building payment transaction systems or storing customer credit card data. Apple does all of that for paid apps, and for in-app purchases within free or paid apps. As a developer, it is one less reinvent-the-wheel thing for us to build. Plus, average users are far more willing to simply authorize a payment on their existing iTunes accounts than to fork over their credit card number on a random developer’s website.

There are definitely more good things Apple does, but these have affected us the most. Please feel free to remind me of any major ones I have missed. In my next post, I will take Apple to task for the abysmal decline in the quality of the consumer App Store app.

Ode to eBay

Came to this land,
To survive another day..
Day after day, still too bland
Hung on by just a strand,
Until I landed at eBay.

It was a beacon of light
Its foes cowered in fright,
The strength of its community,
Granted it virtual immunity
Under custodians’ sight,
No one dared slight,
Unto this giant’s might.

As with all blinding power
The custodians went slowly sour
Took the community for granted
Seeds of dissent met with dour
The ecosystem was finally salted.

Failed at my task I did
Couldn’t raise enough to bid
Change for even an hour
To lighten up the day
Of even one user on eBay.

Some will be sad,
Maybe some happy
A few ambivalent,
Some downright ecstatic,
That I am leaving eBay

Losing all right to preach,
I urge one and every each
Heed the user, not the ladder
Now is the time, now the hour
Give more than is your due
To restore this mighty power
To what was known as eBay.

Monday, October the 13th will be my last day at eBay. And no, I was not laid off; my decision to exit into the current hellish market environment is entirely voluntary.

I express my sincere gratitude to all who keep the miracle of eBay alive.

-Raghav Gupta
October 10, 2008

A definition of “language”?

Lately, while doing research on automated language translation, I’ve come to realize that there isn’t a clear, concise, well-accepted definition of human language itself. A quick check on Google reveals the wealth of interpretations. So then, kindly, let me proffer one more.

Language is the serialization of thought.

The term “serialization” should be readily accessible to programmers et al. For others, a quick explanation is in order. Serialization is the process of taking a complex (e.g. wide) entity and transforming it, reversibly, such that it can be passed through a much simpler (e.g. narrower) channel. A typical requirement for correct serialization is that de-serializing the serialized data yields exactly the original entity.

A very simple analogy would be “serializing” a bunch of untidy children through a narrow gate, one child at a time. The only caveat is that, if the serialization process was perfect, the children would regroup on the other side in the exact same configuration as before the serialization started.
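In programming terms, the round-trip requirement looks like this. A minimal Python sketch follows; the "thought" structure and its fields are my own invented illustration, not anything from the post:

```python
import json

# A toy "complex entity": a thought with several parallel facets.
thought = {
    "subject": "rain",
    "feeling": "calm",
    "associations": ["petrichor", "childhood", "tea"],
}

# Serialize: flatten the wide structure into a narrow channel (a string).
wire = json.dumps(thought, sort_keys=True)

# De-serialize: reconstruct the original entity on the other side.
received = json.loads(wire)

# Perfect serialization: the round trip loses nothing.
assert received == thought
```

The `assert` at the end is exactly the "children regroup in the same configuration" test from the analogy above.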
Once the concept of serialization is clear, the intent of my definition of language should also be clear, although you may or may not agree with it.

If we go along this line of thought (pardon the pun), a corollary immediately follows:

Language is the ultimate compression engine.


If we believe that human thought is one of the most complex phenomena known to us, and if language allows serialization of a complex thought into a small compact representation that can be communicated and stored in a myriad of ways, and ultimately easily de-serialized by target humans to reveal the original thought, then the corollary must be true.

When one thinks hard to solve a problem, it is likely he or she uses bits of language in self-communication to focus attention on particular aspects, annotate the intermediate results, and proceed one by one onto higher level steps. Although the most revealing flashes of insight most likely occur during a moment of unbounded “massively parallel” thought, knowledge of language undoubtedly plays a role in allowing the thinker to carry out elaborate thought experiments.

In primitive cultures (or in someone never exposed to the concept of language), undoubtedly the enterprising inventors of that time devised their own methods of mentally labelling specific ideas with individual symbols, and then using those symbols to ease the task of deriving higher level constructs.

Also, if the above corollary holds weight, I think it has some additional fantastic implications.

This means that if we were to one day achieve “brain dumps”, or “downloading a brain” and their ilk, the best format to allow accurate storage and re-construction would be plain text! If the brain could somehow be tricked into emitting a high speed lecture on its current and past states, then the language best suited for that would be the mother-tongue of the brain’s owner. Nothing else we conceive will probably ever come close in accuracy or compactness.

Of course, this does not bode well for automated machine translation attempts. To be fully successful at that task implies the ability to de-serialize a piece of text into the original speaker’s thoughts. If we accept that human thought is one of the most complex and mysterious activities known to mankind, then we are accepting that automated machine translation is a pipe dream for many years to come.

On the other hand, if the code of language does get cracked soon, will it mean that human thought is not so complex after all?

Why the word “simulation” is a misnomer

Reading all the recent brouhaha over the “are we living in a simulation” argument, I’m compelled to write this post.

I seriously wish the people who talk authoritatively about “simulation”, or “simulating life”, have actually tried it themselves. You know, modelled the behaviors, the interactions, the environment, and actually written some code.

Realise this: We cannot yet even accurately simulate a teaspoon of water!

Why? We are talking about 10^24 molecules of H2O, that’s why! *Even* if every molecule’s state could be accurately saved in one byte of memory, it would require a billion petabytes. Even if we somehow managed that, the processing power necessary to move to the next state would be enormous. It would take minutes to perform the same movements of all the molecules that reality does in an instant.
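The back-of-the-envelope arithmetic is worth checking directly. Note that 10^24 bytes works out to 10^9 petabytes, i.e. a billion, and the molecule count per teaspoon (~5 g of water) is approximate:

```python
# Roughly 5 g of water per teaspoon, at 18 g/mol, times Avogadro's number:
molecules = (5 / 18) * 6.022e23   # ~1.7e23 molecules; round up to 1e24 to be safe

bytes_needed = 1e24               # one byte of state per molecule
petabytes = bytes_needed / 1e15   # 1 petabyte = 1e15 bytes
print(petabytes)                  # 1e9 petabytes, i.e. a billion
```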

Of course, this is assuming that one byte per molecule would be enough. I think, when we actually do reach that capability in a few years or decades, we’ll find that it’s not enough. We’ll find that the simulation still doesn’t behave accurately like water when, for example, the temperature is dropped: “It doesn’t freeze into ice at a simulated temperature of 32°F.” Then we’ll find it necessary to model the states of the individual electrons in each molecule, their orbits and spins, and so on. And of course, the problem will once again become as difficult as it is today.

And may I remind you, the proponents of the WALIAS theory argue that our entire population, all the species, the oceans, the planet, the *entire* universe is a simulation being run by some post-human species. Now, if one teaspoon of water is so much trouble, why do we expect to be able to “simulate” even one single human brain, which, uhm.., is arguably more complicated?

Does this sound like pessimism? It will to some. I don’t think that way at all. Rather, I think the wrong kind of people have jumped onto the wrong type of bandwagon and are pushing a wrong kind of science.

If we are all indeed living in a simulation, then the only way for any of us to accurately re-simulate anything existing within our world, is to do it using the constructs used by those who are simulating us. We, the simulated, cannot use new constructs created by us to simulate anything in our world. We can only use new constructs created by us to simulate new things, using new rules in a new simulated world.

Therefore, in regular discussion, a simulation should *always* be interpreted as “a crude approximation”. We, or anybody else, *cannot* look at something built according to the rules and constructs of one world-system,  and create an *exact* copy using the rules and constructs of another world-system. It is a physical impossibility.

But what if, through persistent advancement, we are able to correctly figure out our exact building blocks, our constructs, the lowest-level rules of our world? My own personal view is that we cannot; for that matter, no entity within a world-system can ever realize its own lowest-level building blocks, because doing so would violate the laws of its existence. This is where I’ll run afoul of lots of folks, but that is a different matter.

Now, even if we do allow that concession: suppose we do figure it out, and we use it to create a copy of some less advanced species known to us, let’s say chimps. Suppose we succeed, and we have a living, breathing chimp in front of us. Then that chimp is *NOT* a simulation; it is the real thing! We will have as much control over its behavior as we do over a real chimp. And what’s more, it will be *existing* in our own world, our own universe, not within a walled-garden simulation. Therefore, if the post-humans have followed this scenario, then we exist in the same world as them, and therefore, we are not a simulation.

And for all of us trying to “simulate life” at some time or other, I’d like to make this comment: the simulated “life form”, if implemented correctly, no matter how similar its behavior to some real-world life form, is its own species. It is pointless to compare it with the existing real-world thing, because that is, in effect, like comparing apples and screen pixels. Yes, one day we will succeed in creating artificial life forms, and yes, they will have “consciousness”. But the moment they learn to communicate with us in a meaningful way, the word “simulation” will become taboo pretty soon.

Why Genetic Programming cannot solve the most important AI problem

In a comment to my last post, Upgrade Zero One A asks:

I was wondering what your opinion was on John Koza’s work specifically (Invention Machine) and Genetic Programming in general.
Is there any hope that GP could help solve, or improve upon existing approaches in, any of the three areas mentioned?

1. Natural language understanding.
2. Vision, Image/Scene understanding.
3. Creating “consciousness”.

Since this excellent question really made me think beyond the scope of the original post, I decided to create a separate post for the answer, which follows.

This is really asking me to predict the future of mankind here.. but I’ll give it my best shot.
(I’ve changed the order of the areas.)

1. Creating consciousness: No, I don’t think this is an area needing more direct interest. When we have more intelligent algorithms for processing input and turning it into output, the arising consciousness will automatically seem more real. Just program in a way for it to “wail” whenever a specific combination of sensory inputs happens, and I guarantee you, it will *be* real. That’s when the “AI Rights” bill will be presented in the House. One of its clauses will deem the “Artificial” in Artificial Intelligence a politically incorrect/insensitive word.

2. Vision, Image/Scene understanding: Yes. Not so much for the “understanding” part, but very much so for the segmentation part. A perfect or even near-perfect image segmentation algorithm is yet to be found/invented/discovered, even though the task happens at the lowest level of processing in most beings. Separating a tiger or a zebra from the background of a forest is trivial even for tiny animals, but it stumps the best of today’s generic algorithms (they might work ONLY if they have been specially constructed to solve this specific problem, or if they are run in a “supervised” manner). The final breakthrough might very well come from a GP approach.

3. Natural Language Understanding: I highly doubt it. Look at it from “real” evolution’s point of view. It took a gazillion species living concurrently, some new ones arising, some going extinct, over billions of years, for ultimately *one* of them to be able to process natural language. The level of *concurrency* and *interaction* between the species is enormous. Furthermore, “real” evolution is able to transcend all “local” maxima problems because, in effect, it has an unlimited time scale. Dinosaurs evolved to be the dominant species 65 million years ago, but the next dominant species (us) was not a descendant of dinosaurs; it came from a totally different branch. And after all these years, this one species developed natural language.
One might say that GP is ideally suited to exactly this kind of task, but I think the scale is way off.

The task was to build an algorithm for understanding natural language, but I equated it to almost having to emulate the human species itself, or in essence, a human-level intelligence.
Since GP tries to emulate evolution, saying that current or somewhat futuristic GP could accomplish this task (NLU) is saying that GP can compress into a few days or weeks what real evolution did over billions of years. Give it a few years to run, maybe. But then we would never know whether it will ever really converge. That’s the curse of evolution, presented to GP.
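For readers unfamiliar with how evolutionary search works mechanically, here is a toy genetic-algorithm sketch in Python: it evolves a random string toward a fixed target by mutation and selection. The target, alphabet, and acceptance rule are all arbitrary choices of mine, and this is a deliberately trivial stand-in for real GP; note that even this 16-character task typically needs on the order of a thousand generations, which hints at the scale problem for anything as open-ended as language:

```python
import random

random.seed(42)
TARGET = "natural language"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # Number of characters matching the target in place.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate):
    # Flip one random position to a random character.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

# Start from a random "genome" and evolve by mutation plus selection.
best = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while best != TARGET:
    child = mutate(best)
    if fitness(child) >= fitness(best):
        best = child  # keep the child if it is no worse than the parent
    generations += 1

print(best, generations)
```

Scaling this blind search from a 16-character string to something with the combinatorics of natural language is exactly the gap the paragraph above is pointing at.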

Therefore, I think the solution to this problem will come not from GP, but from other traditional directions, where a spark in the mind of some genius will be able to “utilize” the inherent knowledge built into the human brain and bootstrap an algorithm with a human-like capacity to learn.

The most difficult AI problem?

What do you think is the most difficult AI problem of all? I’m not sure there is even a debate around this, but regardless, I’d like to clarify the issue, at least for my own sake.
Here are some candidates, let me know if I missed something important.

1. Natural language understanding.
2. Vision, Image/Scene understanding.
3. Creating “consciousness”.

I’m very sure some of you will immediately rank the difficulty in the order presented above, with natural language understanding being easiest and creating consciousness most difficult, and scene understanding being somewhere in the middle.

I, however, would like to argue exactly the opposite.

Looking at things from an evolutionary point of view, what evolved first, and what evolved last?

1. “Conscious” beings emerged first as life evolved, reacting to various stimuli in their environment.

2. Soon various life-forms were able to “see” their surroundings and react accordingly.

3. Much later, only humans, the pinnacle of evolution’s achievement so far, became able to create and learn an innumerable number of communication mechanisms using symbolic languages, in a multitude of mediums including sound, light, surface markings, and so on.

Consciousness, at one level, could be treated simply as the presence of a feedback loop. It could be argued that a vast number of computer systems in existence today are already “conscious”, although the degree of consciousness varies significantly.
Vision, or image understanding, is present in species vastly lower on the scale than humans, i.e. insects, birds, and so on. Since the known communication mechanisms in those species are enormously rudimentary compared to human speech and language, and since even species only a notch below humans cannot “talk” in the fine detail that we do, language must be the most difficult piece of engineering to achieve (I know this one personally; I’ve been trying very hard to build an NLU algorithm for a while).
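The feedback-loop view of consciousness, at its most reductive, fits in a few lines of code. The thermostat framing below is my own illustration, not a claim about minds:

```python
def thermostat(readings, setpoint=20.0):
    """A minimal feedback loop: sense, compare to an internal state, act."""
    actions = []
    for temperature in readings:       # "stimulus" from the environment
        if temperature < setpoint:
            actions.append("heat on")  # the system acts to alter its own future input
        else:
            actions.append("heat off")
    return actions

print(thermostat([18.5, 19.0, 21.2, 20.4]))
# -> ['heat on', 'heat on', 'heat off', 'heat off']
```

By this bar, countless control systems already qualify as “conscious” in the degenerate sense; the interesting question is how many layers of such loops have to stack up before the word starts to mean something.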

If you find flaws in my reasoning, please do point them out.

Because if my assertion is true, the corollary would be: “The moment a complete NLU system is built, it means the code of intelligence is cracked, and that fully intelligent self-aware beings are then only a matter of combining the various existing building blocks”.