Post-Estimation Adjustments in Data-Driven Decision-Making with Applications in Pricing
In "Seminars and talks"

Speakers

Chen Ningyuan

Associate Professor, Department of Management, University of Toronto Mississauga, and Rotman School of Management, University of Toronto

Dr. Ningyuan Chen is currently an associate professor at the Department of Management, University of Toronto Mississauga, and at the Rotman School of Management, University of Toronto. Before joining the University of Toronto, he was an assistant professor at the Hong Kong University of Science and Technology. Prior to that, he was a postdoctoral fellow at the Yale School of Management. He received his Ph.D. from the Industrial Engineering and Operations Research (IEOR) department at Columbia University in 2015. His research has been published in Management Science, Operations Research, Annals of Statistics, NeurIPS, and other journals and conference proceedings. His research is supported by the UGC of Hong Kong and the Discovery Grants Program of Canada. He is the recipient of the Roger Martin Award for Excellence in Research and the IMI Research Award.


Date:
Wednesday, 17 September 2025
Time:
10:00 am - 11:30 am
Venue:
NUS Business School
Mochtar Riady Building BIZ1-0202
15 Kent Ridge Drive
Singapore 119245

Abstract

The predict-then-optimize (PTO) framework is a standard approach in data-driven decision-making, where a decision-maker first estimates an unknown parameter from historical data and then uses this estimate to solve an optimization problem. While widely used for its simplicity and modularity, PTO can lead to suboptimal decisions because the estimation step does not account for the structure of the downstream optimization problem. We study a class of problems where the objective function, evaluated at the PTO decision, is asymmetric with respect to estimation errors. This asymmetry causes the expected outcome to be systematically degraded by noise in the parameter estimate, as the penalty for underestimation differs from that of overestimation. To address this, we develop a data-driven post-estimation adjustment that improves decision quality while preserving the practicality and modularity of PTO. We show that when the objective function satisfies a particular curvature condition, based on the ratio of its third and second derivatives, the adjustment simplifies to a closed-form expression. This condition holds for a broad range of pricing problems, including those with linear, log-linear, and power-law demand models. Under this condition, we establish theoretical guarantees that our adjustment uniformly and asymptotically outperforms standard PTO, and we precisely characterize the resulting improvement. Additionally, we extend our framework to multi-parameter optimization settings. Numerical pricing experiments demonstrate that our method consistently improves revenue, particularly in small-sample regimes where estimation uncertainty is most pronounced. This makes our approach especially well-suited for pricing new products or in settings with limited historical price variation.
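To make the predict-then-optimize pipeline concrete, the sketch below simulates a single-product pricing problem with a linear demand model D(p) = a - b*p, for which the revenue-maximizing price is p* = a/(2b). It is a minimal illustration only: the demand parameters, sample size, and noise level are invented for this example, and the post-estimation adjustment studied in the talk is not reproduced here. The Monte Carlo loop simply demonstrates the phenomenon the abstract describes, namely that plugging a noisy estimate into the pricing rule systematically lowers expected revenue relative to the true optimum, especially when the sample is small.

# Hypothetical illustration of predict-then-optimize (PTO) for pricing with
# linear demand D(p) = a - b*p; the optimal price is p* = a/(2b). All numbers
# below are made up; the talk's adjustment is not implemented here.
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true, sigma = 10.0, 2.0, 1.0   # assumed "ground truth" demand curve
p_star = a_true / (2 * b_true)           # true revenue-maximizing price

def revenue(p):                          # expected revenue at price p
    return p * (a_true - b_true * p)

n, n_trials = 20, 5000                   # small-sample regime, Monte Carlo runs
gaps = []
for _ in range(n_trials):
    # Step 1 (predict): estimate (a, b) by OLS from noisy historical sales
    prices = rng.uniform(1.0, 4.0, size=n)
    demand = a_true - b_true * prices + sigma * rng.standard_normal(n)
    X = np.column_stack([np.ones(n), -prices])
    a_hat, b_hat = np.linalg.lstsq(X, demand, rcond=None)[0]
    # Step 2 (optimize): plug the estimates into the pricing rule
    p_hat = a_hat / (2 * max(b_hat, 1e-6))
    gaps.append(revenue(p_star) - revenue(p_hat))

# The average shortfall is strictly positive: estimation noise systematically
# degrades expected revenue, which is the loss a post-estimation adjustment
# would aim to reduce.
print(f"optimal revenue       : {revenue(p_star):.3f}")
print(f"mean revenue shortfall: {np.mean(gaps):.3f}")

Rerunning the sketch with a larger sample size n shrinks the mean shortfall, consistent with the abstract's observation that the gains from correcting for estimation noise are most pronounced in small-sample regimes.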