An Absurd Situation

Given the complexity of our reward circuitry, its output is remarkably simple.

When we do something that isn’t good for us (i.e. that lowers fitness), it produces the experience of pain (a generic term for all manner of unpleasant sensations). When we do something that’s good for us, we experience pleasure (a generic term for all manner of pleasant sensations).

It’s a system honed over millions of years of human evolution, and it should serve as our most trusted guide for how to stay alive and thrive.

And yet, in this day and age, we find ourselves routinely second-guessing that feedback system.

That second-guessing is so commonplace that we seldom stop to consider how absurd it is. How absurd to think that maintaining health should require sacrifice and willpower.

Consider: when we avoid things we find pleasurable, we’re effectively saying that we believe our brain is trying to convince us to do something that would hasten its own demise.

It’s like walking towards a fire, getting unbearably hot, and concluding that moving closer to the flames is still in our best interest. That the searing pain generated by our brain is a lie that can’t be trusted.

And yet, absurd as it sounds when framed like this, there are indeed cases where our reward signals are misaligned with our best interests. There are times when our brain does encourage us to avoid things that are good for us, or to do things that are bad for us (things that hasten its own demise).

Our brain encourages us to eat more cake, veg out on the sofa all day, smoke cigarettes, or inject heroin. This should strike you as absurd.

There was no second-guessing for our wild ancestors living in our natural habitat. I guarantee they never once felt guilty about eating too much mastodon, or lazing around too long after a hunt.

Their pleasure and pain signals, the output of a reward system with millions of years of evolved wisdom embedded into it, could be wholly trusted as a guide for what was in their best interest.

Why can’t we do the same?

Because the world in which our reward signals were forged over eons is not the same as the world we live in now. In the face of evolutionarily familiar inputs, the system can be trusted. In the face of evolutionarily novel inputs, however, all bets are off, especially with inputs that have been expressly designed to hijack our reward centers (see cake, heroin, and the like).

Wouldn’t it be great if there were no second guessing? No more guilt?

Wouldn’t it be great if maintaining a healthy lifestyle required nothing more than avoiding pain and seeking pleasure?