The $5.00 Toaster Bet: A Lesson in Probabilistic Thinking
By Hector C. Ortiz
Note: This post is wife-approved.
The other day my wife complained that I hadn’t put the toaster away and that she always has to do it for me (the wonders of marriage).
My response, of course, was that I put the toaster away all the time. She just never notices because when I do, the toaster is already back where it belongs and no crime scene remains. I also pointed out that I put it away for her plenty of times and don’t keep a running ledger of it.
She was unmoved.
So she proposed a system. The next time she puts the toaster away for me, I owe her $5.
I thought about it for a moment and said, “Deal, but let’s tweak it. If I put the toaster away for you, you pay me $5. If you put it away for me, I pay you $2.50.”
She immediately declined, which surprised me.
When I asked why, she said, “Because that doesn’t feel fair.”
That answer fascinated me, because the deal that felt unfair to her wasn’t just mathematically fair. Based on her own estimates, it was mathematically advantageous to her.
Her reasoning packed in several probabilistic mistakes almost all of us make, ones we’re never taught to spot. It was a blog post waiting to happen.
The first one is subtle but incredibly common: treating absence of evidence as evidence of absence.
My wife wasn’t seeing me put the toaster away, so she concluded that I simply wasn’t doing it. The times the toaster was used responsibly left no physical evidence. No toaster. No memory. As far as her brain was concerned, those events never happened.
This is also a form of survivorship bias. We reason only from the outcomes that survive long enough to be noticed, while quietly ignoring the invisible cases that matter just as much, if not more.
A classic example comes from World War II. Engineers studied returning aircraft to decide where to add armor. They mapped the bullet holes on the planes that made it back and naturally concluded those were the areas that needed reinforcement.

The insight was exactly backward.
Those bullet holes showed where planes could be hit and still fly home. The truly dangerous spots were the areas with no bullet holes at all, because planes hit there never returned to complain about it.

The correct probabilistic conclusion wasn’t “protect the damaged areas,” but “protect the areas where there’s no evidence, because those planes didn’t survive to be counted.”
The next mistake showed up when she fixated on the part where she might lose $5 but only win $2.50. Her brain immediately decided this was a terrible deal and shut the whole thing down for safety reasons.
That’s a classic case of reward-to-risk fixation, focusing on how bad a loss feels while quietly ignoring how unlikely it is to happen.
Probabilistic decisions don’t make sense one toaster incident at a time. Evaluating them that way guarantees you’ll reject good bets, because every favorable bet includes outcomes that are emotionally unpleasant in isolation.
This is also where base rate neglect sneaks in. She implicitly treated the odds as if they were close to 50/50, even though her own estimates put them nowhere near that. Once you ignore base rates, payoff sizes start lying to you.
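To make the trap concrete, here’s the expected value of a single round written out. The payouts are from the deal; the probabilities below are made up purely to show how base rates flip the sign (a sketch, not her actual numbers):

```python
# Expected value of one round: p_gain * gain - p_loss * loss.
# Payouts come from the deal; both probability pairs are illustrative.
gain, loss = 2.50, 5.00

# Implicitly assuming 50/50 makes the lopsided payout look awful:
print(0.50 * gain - 0.50 * loss)   # -1.25 per round

# With skewed base rates, the very same payout favors you:
print(0.30 * gain - 0.10 * loss)   # +0.25 per round
```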
Casinos understand this perfectly.
They don’t make money because every game favors them by a lot. They make money because almost every game favors them by a little.
At a blackjack table, the house edge might be only one or two percent. On any single hand, that advantage is basically invisible. The player often wins, but casinos don’t care.
They aren’t thinking about this hand. They’re thinking about millions of hands, across thousands of players, over years. In that context, a tiny edge stops being cute and starts paying for very large buildings with no clocks.
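A back-of-the-envelope sketch makes the point; the edge and bet size below are illustrative assumptions, not real blackjack figures:

```python
# A small edge is invisible per hand and enormous in aggregate.
edge = 0.01    # assumed 1% house edge (illustrative)
bet = 10.00    # assumed flat $10 bet per hand (illustrative)

for hands in (1, 1_000, 1_000_000):
    print(f"{hands:>9,} hands -> expected house profit: ${edge * bet * hands:,.2f}")
```

One hand costs the player a forgettable ten cents in expectation; a million hands pay for the chandeliers.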
The final mistake was the most human one of all: she assumed she needed to be right about the odds for the deal to be good. She didn’t.
The best probabilistic decisions don’t require precision. They’re robust to error. They’re designed so that even if your estimates are off, sometimes embarrassingly off, the decision still works.
Weather forecasts understand this better than we do.
Meteorologists don’t try to predict the exact driveway a hurricane will destroy. They publish giant probability cones that basically say, “Something bad might happen somewhere in this entire region, good luck.” And somehow, that’s still useful.

The forecast doesn’t need to be precise. It just needs to be directionally correct.
So let’s bring it all back to the toaster.
On the surface, the deal looked terrible. A 2-to-1 payout against her. For every $1 she might gain, she could lose $2. If the odds were 50/50, that would indeed be a bad idea and an excellent way to ruin toaster-adjacent marital peace.
But the odds weren’t even close to 50/50.
She believed that she would end up putting the toaster away for me about 35% of the time, a number that generously assumes I occasionally remember things, while I would put it away for her only about 5% of the time.
Once you plug those numbers into the math, everything changes.
Over 100 toaster uses per person:
- I would owe her $87.50
- She would owe me $25.00
That’s a net gain of $62.50 for her, for doing exactly what she already believed she was doing anyway.
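For the skeptical, here is the arithmetic spelled out as a quick sketch (the probabilities are her own estimates from above):

```python
# Her estimates: she covers for me 35% of my uses; I cover for her 5% of hers.
p_she_covers, p_i_cover = 0.35, 0.05
i_pay, she_pays = 2.50, 5.00   # per-incident payouts from the deal
uses_each = 100                # toaster uses per person

i_owe_her = uses_each * p_she_covers * i_pay     # 100 * 0.35 * 2.50 = 87.50
she_owes_me = uses_each * p_i_cover * she_pays   # 100 * 0.05 * 5.00 = 25.00
print(f"Net to her: ${i_owe_her - she_owes_me:.2f}")   # $62.50
```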
Even better, for the deal to merely break even, she would need to put the toaster away for me only 10% of the time, a number so low that, in her mind, it was science fiction.
And here’s the truly absurd part: she could be wrong by 25 percentage points, wildly misjudging how often she puts the toaster away, and still not lose money.
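Here’s that cushion, for the record (holding my rate fixed at her 5% estimate):

```python
# She breaks even when expected winnings match expected losses:
# 2.50 * p == 5.00 * 0.05
p_breakeven = (5.00 * 0.05) / 2.50   # = 0.10
cushion = 0.35 - p_breakeven         # = 0.25

print(f"Break-even rate: {p_breakeven:.0%}")                       # 10%
print(f"Room for error: {cushion * 100:.0f} percentage points")    # 25
```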
She didn’t need to be right.
She just needed to be not catastrophically wrong.
Which, it turns out, is how most good probabilistic decisions actually work.