I really like the distinction between reducible and irreducible uncertainty, because it reflects how engineering decisions actually get made. Even when uncertainty is epistemic and theoretically reducible, in real projects we almost always stop reducing it early, due to time, cost, or institutional pressure. That stopping point is often where bias enters most strongly.
This is also where the difference between risk and uncertainty becomes practical. We tend to call it risk once we assign consequences and some form of likelihood, even if that likelihood is based partly on assumptions rather than solid data. In practice, we translate uncertainty into risk long before uncertainty is resolved, because decisions cannot wait.
From that perspective, probabilistic risk assessments are still very useful, not as precise predictors, but as structured ways to make assumptions visible, explore sensitivities, and support discussion. Their real value is often less in the numbers themselves than in showing where judgment, bias, and incomplete knowledge are playing an important role. Becoming comfortable with that reality, and being explicit about it, feels like an essential part of engineering.
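To make that concrete, here is a minimal sketch (with entirely hypothetical numbers) of how a probabilistic assessment can make an assumption visible: the same Monte Carlo exceedance calculation is run under two different judgments about the spread of the demand, and the gap between the two answers is itself the useful output.

```python
import random

def failure_probability(demand_mean, demand_sd, capacity, n=100_000, seed=42):
    """Monte Carlo estimate of P(demand > capacity) under an assumed
    normal demand model (the choice of distribution is itself a judgment)."""
    rng = random.Random(seed)
    exceed = sum(rng.gauss(demand_mean, demand_sd) > capacity for _ in range(n))
    return exceed / n

# Baseline assumption vs. a plausible alternative for the demand spread.
# Neither number is "the" answer; the sensitivity is what informs discussion.
p_base = failure_probability(demand_mean=100, demand_sd=15, capacity=140)
p_alt = failure_probability(demand_mean=100, demand_sd=25, capacity=140)
```

A roughly order-of-magnitude difference between `p_base` and `p_alt` from one assumption change is exactly the kind of thing worth surfacing explicitly in a review.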
Original Message:
Sent: 01-29-2026 09:37 AM
From: Jacob Davis
Subject: How do you distinguish between risk and uncertainty?
This is great discussion that I'd like to add on to.
First, let's not confuse risk and uncertainty, though they are certainly related.
It is more correct to say "aleatory uncertainty" and "epistemic uncertainty". Rolling a die is not a risk unless you put money on the uncertain 1/6 outcome.
Aleatory is also known as natural variability, usually associated with temporal and spatial randomness, and is irreducible. Epistemic is knowledge uncertainty, usually associated with data and models, and this can be reduced by collecting more data, refining the model or math, etc.
For example, a flood event involves aleatory uncertainty because you can never know precisely the timing, location, and magnitude of the flood. I feel sorry for meteorologists - it seems like they are wrong 99% of the time. However, the modeling of floods incorporates a bevy of knowledge uncertainty: atmospheric conditions; antecedent moisture; ground saturation; river dynamics; inflow volumes; model type; depth, location, and timing of precipitation; etc., all of which can be reduced, but not eliminated, through data collection and model refinements. Thus, we are always getting better at modeling storms.
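The reduction of knowledge uncertainty through data collection can be sketched with a standard conjugate Bayesian update (the prior and observation counts below are made-up illustration values): as seasons of record accumulate, the posterior spread on an exceedance frequency shrinks, while the year-to-year randomness of the events themselves of course remains.

```python
def beta_posterior(alpha, beta, exceedances, seasons):
    """Conjugate update: a Beta(alpha, beta) prior on an annual exceedance
    probability, updated with `exceedances` hits observed in `seasons` years.
    Returns the posterior mean and variance."""
    a = alpha + exceedances
    b = beta + (seasons - exceedances)
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

# Vague prior, then the same observed rate over more and more seasons:
mean_10, var_10 = beta_posterior(1, 9, exceedances=1, seasons=10)
mean_100, var_100 = beta_posterior(1, 9, exceedances=10, seasons=100)
# var_100 < var_10: the epistemic (knowledge) part of the uncertainty
# is reduced by data; the aleatory variability is untouched.
```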
The first step in dealing with uncertainty is to acknowledge it and its sources. Put the sources into bins: reducible and irreducible. If an uncertainty can be reduced by field testing/sampling or other means, and if reducing it would change the decision, then take the time to go get the data. Other uncertainties might not be reducible, and decision-makers should know which is which - usually, though, people in the C-suite are used to making decisions under uncertainty.
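The "go get the data only if it would change the decision" test is essentially a value-of-information calculation. A minimal sketch, with hypothetical costs for a foundation design under uncertain soil strength (assuming, for simplicity, a perfect test that reveals the true state):

```python
p_weak = 0.3                      # current belief that the soil is weak
cost = {                          # cost of each (design, true state) pair
    ("robust", "weak"): 10.0, ("robust", "strong"): 10.0,
    ("cheap", "weak"): 25.0, ("cheap", "strong"): 6.0,
}

def expected_cost(design, p_weak):
    return p_weak * cost[(design, "weak")] + (1 - p_weak) * cost[(design, "strong")]

# Decide now with current knowledge: pick the design with lowest expected cost.
decide_now = min(expected_cost(d, p_weak) for d in ("robust", "cheap"))

# Decide after a perfect test: pick the best design in each revealed state.
after_test = (p_weak * min(cost[(d, "weak")] for d in ("robust", "cheap"))
              + (1 - p_weak) * min(cost[(d, "strong")] for d in ("robust", "cheap")))

value_of_information = decide_now - after_test
# If the field test costs less than this, reducing the uncertainty pays off;
# if not, it is not worth reducing even though it is technically reducible.
```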
When it comes to biases, it is also important to recognize them; there are ways to counteract nearly all of them. Engineers are susceptible to overconfidence bias. This is best countered by working with dynamic teams of varied backgrounds and experience (include scientists and technicians in discussions with engineers). Problem-solving and trivia games have the best chance of success when done in groups. Reference and availability biases are countered by being familiar with case histories and performance.
There is no magic salve for dealing with uncertainty, except perhaps becoming comfortable with it.
------------------------------
Jacob Davis P.E., PMP, M.ASCE
Special Assistant for Dam Safety
U.S. Army Corps of Engineers
Washington, DC
Original Message:
Sent: 01-28-2026 09:07 PM
From: Olga Marin
Subject: How do you distinguish between risk and uncertainty?
Thanks for the clarification. I'm not a risk-management specialist, but I'm very interested in this topic because, as engineers, we deal with risk and uncertainty every day and still have to make decisions.
I agree that, in practice, it is often more useful to focus on likelihood, consequences, consequence trees, and residual risk than to try to justify precise probability distributions. Bounding impacts and thinking through failure paths is usually more robust, especially in complex systems where response actions can fail and secondary effects matter.
At the same time, I see this approach as a practical way of working under uncertainty rather than removing it. When we move from probability to likelihood, or rely mainly on consequence analysis, we are implicitly accepting that the numbers are shaped by assumptions, experience, and judgment, particularly in epistemic cases and in the presence of unknown unknowns.
That's really what I'm trying to understand better: how should we understand and use probabilistic risk assessments, if we accept bias and incomplete knowledge are unavoidable? In other words, what kind of confidence should we place in those numbers, and how explicitly should we acknowledge their limits when they are guiding real engineering decisions?
------------------------------
Olga Marin
Tafur Marin Ingenieria Estructural SAS
+57 3148944222
Colombia
olgalmarinc@...
Original Message:
Sent: 01-28-2026 07:50 AM
From: Joerg-Martin Hohberg
Subject: How do you distinguish between risk and uncertainty?
I would like to disagree:
- There is "aleatoric risk", like throwing a die, where the possible outcomes are known, each with probability 1/6. These are the "known unknowns".
- But there is also "epistemic risk", i.e. risk that cannot be perceived and assessed due to lack of knowledge. This is typically the case in engineering before certain phenomena are recognized for the first time, such as soil liquefaction or the pressure sensitivity of the permeability of certain schistose rocks (which led to the failure of the Malpasset Dam in France in 1959). These are the "unknown unknowns", which are worse than uncertainty, since the very existence of such a risk is unknown (a "black swan").
A useful concept - even though difficult to handle - is a "sword of Damocles" sort of risk, amounting to the product (0 × ∞), as is the case for nuclear power plants.
I suggest we should rather discuss how to estimate likelihood (which is different from probability when the distribution is not known) and impact/consequences. Usually the latter is easier to ascertain (value at risk), but if response actions that were supposed to work properly fail, assessing the impact may entail conditional probabilities on the downstream side of the consequence tree.
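As a small illustration of those downstream conditional probabilities, here is an event-tree sketch with made-up numbers: an initiating flood, a response action (a spillway gate) that may fail, and consequences that depend conditionally on whether the response works.

```python
# Illustrative numbers only - not from any real facility.
p_flood = 0.01                 # annual chance of the initiating event
p_gate_fails = 0.05            # conditional on the flood occurring
loss_if_gate_works = 1e6       # controlled release, limited damage
loss_if_gate_fails = 5e7       # overtopping, severe damage

# Expected annual loss, summing over the two conditional branches:
expected_annual_loss = p_flood * (
    (1 - p_gate_fails) * loss_if_gate_works
    + p_gate_fails * loss_if_gate_fails
)
```

Note how the low-probability branch (gate failure) dominates the expected loss here, which is exactly why assuming response actions always work can badly understate the impact side.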
We might also discuss additional secondary risks created by preventive action, and the nature of residual risk, which is not the same as accepted risk.
------------------------------
J.-Martin Hohberg
Dr.sc.techn, M.ASCE FED
Sr. Consultant, IABSE e-Learning Board
Bremgarten / Berne, Switzerland
Original Message:
Sent: 01-27-2026 03:34 PM
From: Olga Marin
Subject: How do you distinguish between risk and uncertainty?
I like your proposal. To keep it simple and consistent with Frank Knight:
Risk: The possible outcomes are known, and probabilities can be assigned to them.
Uncertainty: The possible outcomes, their probabilities, or both are not known.
In real life many "risk models" are in fact uncertainty models built on assumptions. When those assumptions fail, risk collapses back into uncertainty.
Some argue that uncertainty disappears if one "believes" probabilities exist. However, this brings the discussion close to bias: subjective beliefs, prior assumptions, and model choices inevitably shape the probabilities we assign.
Key question: How can we rigorously distinguish between genuine probabilities and assumed ones, and how should we deal, cleanly and honestly, with irreducible uncertainty, if such a distinction is possible at all?
------------------------------
Olga Marin
Tafur Marin Ingenieria Estructural SAS
+57 3148944222
Colombia
olgalmarinc@...
Original Message:
Sent: 11-21-2025 08:23 AM
From: Mitchell Winkler
Subject: How do you distinguish between risk and uncertainty?
I recently received a request from ASCE for my support in developing the ASCE CEC Early Career cross-discipline certification. This request prompted me to review the CEBOK, which includes a section on Risk and Uncertainty. It further prompted me to examine the distinction between risk and uncertainty. I found this reference on Frank Knight:
https://news.mit.edu/2010/explained-knightian-0602
Frank Knight was an idiosyncratic economist who formalized a distinction between risk and uncertainty in his 1921 book, Risk, Uncertainty, and Profit. As Knight saw it, an ever-changing world brings new opportunities for businesses to make profits, but also means we have imperfect knowledge of future events. Therefore, according to Knight, risk applies to situations where we do not know the outcome of a given situation, but can accurately measure the odds. Uncertainty, on the other hand, applies to situations where we cannot know all the information we need in order to set accurate odds in the first place.
Searching for Frank Knight and risk and uncertainty will yield many more references.
How do you view the difference between risk and uncertainty? Sharpening this distinction would help everyone.
------------------------------
Mitch Winkler P.E.(inactive), M.ASCE
Houston, TX
------------------------------