Asymptotic Optimality of Open-Loop Policies in Lost-Sales Inventory Models with Stochastic Lead Times
In "Seminars and talks"

Speaker

Xingyu Bai

University of Illinois at Urbana-Champaign

Xingyu Bai is a PhD student in the Department of Industrial and Enterprise Systems Engineering at the University of Illinois at Urbana-Champaign, advised by Professor Xin Chen and Professor Alexander Stolyar. Prior to that, he received a Bachelor of Management degree from Shanghai University of Finance and Economics in 2018. His research interests include inventory and supply chain management, revenue management, asymptotic analysis, and approximation algorithms. His PhD thesis focuses on inventory management problems characterized by incomplete information.


Date:
Tuesday, 12 December 2023
Time:
10:00 am - 11:30 am
Venue:
NUS Business School
Mochtar Riady Building BIZ1 0302
15 Kent Ridge Drive
Singapore 119245

Abstract

Inventory models with lost sales and large lead times are notoriously difficult to manage due to the curse of dimensionality. It has recently been proved that in the lost-sales inventory model with divisible products, as the lead time grows large, a simple open-loop constant-order policy is asymptotically optimal. In this paper, we consider the lost-sales inventory model in which the lead time is not only large but also random. Under the assumption that placed orders cannot cross in time, we establish the asymptotic optimality of constant-order policies as the lead time grows for the model with divisible products. For the model with indivisible products, we propose an open-loop bracket policy, which alternates deterministically between two consecutive integer order quantities. By employing the concept of multimodularity, we prove that the bracket policy is asymptotically optimal. Our results for divisible products also hold for models with order crossover and random supply functions. Finally, we provide a numerical study to demonstrate the good performance of the proposed open-loop policies and derive further insights.
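
For readers unfamiliar with open-loop policies, the Python sketch below (not taken from the talk or the paper) illustrates the idea in a deliberately simplified setting: it builds a bracket-policy order sequence that alternates between the two integers bracketing a target order rate, and evaluates it in a toy lost-sales simulation with a deterministic lead time. The demand distribution, cost parameters h and p, the target rate 4.6, and all function names are hypothetical choices; the paper's stochastic lead times and no-order-crossover assumption are not modeled here.

import math
import random

def bracket_order_sequence(r, horizon):
    # Open-loop bracket policy: alternate deterministically between
    # floor(r) and ceil(r) so the long-run average order quantity equals r.
    lo, hi = math.floor(r), math.ceil(r)
    frac = r - lo
    orders, acc = [], 0.0
    for _ in range(horizon):
        acc += frac
        if acc >= 1.0 - 1e-12:
            orders.append(hi)
            acc -= 1.0
        else:
            orders.append(lo)
    return orders

def simulate_lost_sales(orders, lead_time, demand_sampler, h=1.0, p=4.0, seed=0):
    # Toy lost-sales simulation under a given open-loop order sequence with a
    # fixed (deterministic) lead time. Per-period cost: h * ending inventory
    # plus p * lost sales. All parameter values here are illustrative.
    random.seed(seed)
    pipeline = [0.0] * lead_time   # outstanding orders, oldest first
    on_hand = 0.0
    total_cost = 0.0
    for q in orders:
        arrival = pipeline.pop(0) if lead_time > 0 else q
        if lead_time > 0:
            pipeline.append(q)
        on_hand += arrival
        d = demand_sampler()
        lost = max(d - on_hand, 0.0)
        on_hand = max(on_hand - d, 0.0)
        total_cost += h * on_hand + p * lost
    return total_cost / len(orders)

# Example: bracket policy with target rate 4.6 against exponential demand with mean 5
orders = bracket_order_sequence(4.6, horizon=10_000)
avg_cost = simulate_lost_sales(orders, lead_time=20,
                               demand_sampler=lambda: random.expovariate(1 / 5.0))
print(f"average cost per period under bracket policy: {avg_cost:.2f}")

A constant-order policy corresponds to the special case where the target rate is an integer, so the same order quantity is placed every period regardless of the observed inventory state; this state-independence is what makes such open-loop policies attractive when the lead time, and hence the state space, is large.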