
Listen to the Experts: 4333 Patterns in Notrump

The Question

A recent discussion on rec.games.bridge turned into a debate about the expert practice of treating the 4333 pattern as a liability when deciding whether to play notrump.

If you've looked at my data, you'll have seen that the expected number of tricks in notrump is slightly higher when you hold a 4333 pattern than when you hold 4432 or 5332. This would indicate that the 4333 pattern actually plays better in notrump, contrary to expert practice.

So, are the experts wrong?

A number of the posters in the debate used my data to argue that the experts were wrong, but I took the view that the experts might well be right.

Understanding The Data

My research evaluates patterns based on an average across all deals with a hand holding the pattern. This includes lots of deals you'd never want to play in notrump, such as deals where the partnership clearly belongs in a suit contract.

Additionally, remember that the average value for a 4432 hand includes hands where there are honors in the doubleton suit, honors which experts already devalue.

For example, while 4333 is better on average than 4432 in the data, Binky Points evaluates the hand ♠ A-J-8-2 ♥ Q-7-5 ♦ Q-6-3 ♣ 6-5-2 as very slightly worse for notrump than ♠ A-J-8-2 ♥ Q-7-5 ♦ Q-6-3-2 ♣ 6-5.

The entire statistical advantage of the 4333 pattern is likely due to the fact that other shapes can have badly placed honors in doubletons and singletons. (This is also probably why patterns with voids average better in notrump than patterns with singletons.)
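To make that conjecture concrete, here is a small illustrative check in Python. This is not my actual tooling; the hand representation and the helper name are mine for this sketch. A hand is flagged when an honor sits in a singleton or doubleton, a flaw a 4333 hand can never have:

    HONORS = set("AKQJ")

    def honors_in_short_suits(hand):
        # hand: mapping from suit to the cards held, e.g.
        # {"S": "AJ82", "H": "Q75", "D": "Q63", "C": "652"}
        # True when a singleton or doubleton contains an honor.
        return any(len(cards) <= 2 and HONORS & set(cards)
                   for cards in hand.values())

    # A 4333 hand has no suit shorter than three cards, so it is never
    # flagged. The 4432 hand from the example above has a small doubleton
    # and is not flagged either, but move the diamond queen into the
    # doubleton and the flaw appears:
    assert not honors_in_short_suits(
        {"S": "AJ82", "H": "Q75", "D": "Q632", "C": "65"})
    assert honors_in_short_suits(
        {"S": "AJ82", "H": "Q75", "D": "632", "C": "Q6"})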

Running a Crude Experiment

So, I ran a crude experiment. I dealt a large selection of hands where south holds a 4333 pattern, and then checked whether notrump was the best double-dummy denomination for the north/south pair. (When no contract makes, I still count notrump as best as long as no suit contract takes more tricks.) I then computed, on just these hands, the average difference between the actual number of tricks in notrump and the expected number of tricks according to Binky Points.
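In outline, the experiment looks like the sketch below. This is not my actual code: deal_with_south_4333(), dd_tricks(), and binky_expected_tricks() are hypothetical stand-ins for a dealer, a double-dummy solver, and a Binky Points lookup, and "best" here compares raw trick counts, which is one plausible reading of the criterion above:

    SUITS = ["S", "H", "D", "C"]

    def notrump_is_best(deal):
        # Notrump is "best" when no suit denomination takes more
        # double-dummy tricks for the north/south pair.
        nt = dd_tricks(deal, "NT")
        return all(dd_tricks(deal, suit) <= nt for suit in SUITS)

    def average_trick_diff(num_deals):
        total, kept = 0.0, 0
        for _ in range(num_deals):
            deal = deal_with_south_4333()  # south holds some 4333 hand
            if notrump_is_best(deal):      # keep only deals belonging in notrump
                actual = dd_tricks(deal, "NT")
                expected = binky_expected_tricks(deal.south, "NT")
                total += actual - expected
                kept += 1
        return total / kept                # the "Tricks Diff" column below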

This is a crude simulation related to the discussion above. I conjectured that expert practice differs from my data because expert practice is based on the experience of actually playing notrump with 4333 patterns, which experts probably won't do when the hands "obviously" belong in a suit contract, for some suitable definition of "obviously." In this experiment, I'm using an extreme definition of "obviously": I'm assuming experts always know when to play in a suit and when to play in notrump.

The data below distinguishes between 4333 hands with a 4-card major and 4333 hands with a 4-card minor, because the odds of belonging in a suit contract differ between the two sets. Also, the average difference is expected to be positive: when we belong in notrump, it is often because we make more tricks in notrump than we might expect.
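In terms of the sketch above, this split just means bucketing the per-deal differences by south's 4-card suit before averaging (again, the representation and helpers are my illustrative ones):

    def four_card_suit(hand):
        # A 4333 hand has exactly one 4-card suit; hands use the same
        # suit-to-cards mapping as in the earlier sketches.
        return next(s for s, cards in hand.items() if len(cards) == 4)

    def holds_four_card_major(hand):
        return four_card_suit(hand) in ("S", "H")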

So, here's the data, for 4333 patterns and for other balanced patterns:

Pattern       Tricks
(♠ ♥ ♦ ♣)       Diff
--------------------
3 3 4 3         0.56
4 3 3 3         0.60
3 2 4 4         0.70
4 2 4 3         0.77
4 3 4 2         0.79
3 2 5 3         0.81
3 3 5 2         0.84
4 4 3 2         0.85
5 2 3 3         0.92
5 3 3 2         0.95

So, what we see is that when we are dealt a flat hand and we belong in notrump, Binky Points underestimates the notrump trick-taking value of the hand by 0.60 tricks or so. But when we're dealt any other balanced shape and we belong in notrump, Binky Points underestimates the notrump tricks by even more. In other words, when we belong in notrump, 4333 patterns are the worst case among balanced hands.

The obvious problem with this experiment is that it uses a fairly extreme definition of "obvious." I doubt the disadvantage of 4333 is as big as this experiment shows.

Still, it suggests that there is a basis in the data for my guess: expert practice looks correct once we consider which deals actually get played in notrump.

Respecting Expert Practice

It's remarkable how many people want to reject expert practice based on my data.

The first big problem is that expert practice is not based on double-dummy results, while my data is.

But even if double-dummy data is a useful measure of the real-world tricks available on a deal, my view is that when the data appears to disagree with expert practice, it is best to explore the data (and expert practice) in more depth. Experts are not necessarily right, but that does not mean we should jump to conclusions.

There is further evidence for trusting expert practice in my (warning: LARGE) article When Partner Opens 2NT, which looks at borderline hands for 3NT opposite a 2NT opener.

Copyright 2009.
Thomas Andrews (bridge@thomasoandrews.com).