|dc.description.abstract||Petroleum industry performance has consistently fallen below expectations. This underperformance has been attributed in part to cognitive biases in project evaluation, which result in poor project valuation and selection. The literature has demonstrated that chronic overconfidence and optimism (estimated distributions of project value that are too narrow and shifted positively), both common in the industry, produce substantial disappointment (realized portfolio values less than estimated).
In this work, I aim to evaluate the impact of overconfidence as well as underconfidence (estimated distributions too wide) on portfolio performance; to determine whether it is more beneficial to reduce biases and improve calibration or to reduce uncertainty; to provide a simple way of measuring biases from historical assessments; to determine the relationship between the number of probabilistic assessments and the accuracy of these bias measurements; and to establish guidelines for minimizing biases in new assessments through external adjustment.
I simulated the selection and performance of projects in a typical portfolio of O&G projects to determine the effects of biases on portfolio performance and to compare bias reduction against uncertainty reduction. Next, I generated calibration curves from historical probabilistic assessments and used these curves to calculate different reliability measures. I then generated varying numbers of biased assessments and used them to determine the relationship between the number of assessments and the accuracy of the bias measurements. Finally, I used the calibration curves to adjust new forecasts, measuring the reliability of the adjusted forecasts as a function of the number of historical assessments and other parameters.
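To illustrate the calibration-curve step, the following is a minimal sketch (not the dissertation's actual implementation) of one common approach: evaluate each historical assessment's stated CDF at the realized outcome (a probability integral transform, or PIT, value), compare the observed frequencies against the nominal probabilities, and summarize the deviation from the diagonal as a reliability score. The function names `calibration_curve` and `reliability_score`, and the use of normal distributions for the forecasts, are assumptions for illustration only.

```python
import numpy as np
from statistics import NormalDist

def calibration_curve(pit_values, probs=np.linspace(0.1, 0.9, 9)):
    """Empirical calibration curve from PIT values.

    pit_values: each assessment's stated CDF evaluated at the realized
    outcome. For well-calibrated assessments these are uniform on [0, 1],
    so the observed frequency at each nominal probability lies on the
    diagonal; overconfidence (distributions too narrow) pushes PIT values
    toward the extremes and bends the curve away from the diagonal.
    """
    pit = np.asarray(pit_values)
    observed = np.array([(pit <= p).mean() for p in probs])
    return probs, observed

def reliability_score(probs, observed):
    """Mean squared deviation from perfect calibration (the diagonal).
    0 = perfectly calibrated; larger values = more miscalibrated."""
    return float(np.mean((observed - probs) ** 2))

# Hypothetical example: an overconfident assessor states N(0, 0.5)
# distributions while outcomes are actually drawn from N(0, 1).
rng = np.random.default_rng(0)
outcomes = rng.normal(0.0, 1.0, 500)   # true spread = 1.0
assessed = NormalDist(0.0, 0.5)        # stated spread too narrow
pit = [assessed.cdf(x) for x in outcomes]
probs, observed = calibration_curve(pit)
score = reliability_score(probs, observed)  # clearly above 0: miscalibrated
```

The same curve can then drive the external adjustment described above: mapping a new forecast's stated probabilities through the observed frequencies moves them back toward calibration.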
This research demonstrates that underconfidence is just as detrimental to portfolio performance as overconfidence: decision error is minimized and portfolio value is maximized only when there is no bias in project estimation. I also found that reducing biases consistently generates more value than reducing uncertainty. Moreover, using more historical assessments to measure biases typically improves the accuracy of the bias measurements; however, even a small number of assessments is enough to detect moderate and extreme biases. Finally, this research shows that production forecasts that were updated frequently with newly available data and externally adjusted using the most recent bias measurements were better calibrated than forecasts that were neither updated nor externally adjusted.
The methods presented in this work can be used to measure and improve the reliability of probabilistic assessments in many petroleum engineering applications. Implemented over the long run, they yield the best-calibrated assessments. Well-calibrated assessments enable better identification of superior and inferior projects and, ultimately, better investment decision making and increased profitability.||en