Audrey Bazerghi

Lecturer in Operations Management, PhD Candidate

Contact Information

audrey dot bazerghi at kellogg.northwestern.edu

CV

About me

I teach Strategic Decisions in Operations (OPNS454), an elective in Kellogg's Full-Time and Evening & Weekend MBA programs. As the main instructor in spring 2024, I also created an AI teaching assistant for the course.

I am a fourth-year PhD candidate in Operations Management and a Lecturer at the Kellogg School of Management, Northwestern University. I received a B.Eng. in Engineering Physics from Polytechnique Montréal (Canada) in 2015 and SM and MBA degrees from the Massachusetts Institute of Technology (USA) as a Leaders for Global Operations fellow in 2020. From 2015 to 2018, I worked as a management consultant at Oliver Wyman, advising North American clients on procurement and logistics. In my spare time, I enjoy hiking, climbing, and learning new languages.

Papers

  1. Relative Monte Carlo for Reinforcement Learning. Bazerghi, A., Martin, S., and van Ryzin, G. J. (2024). Journal version in preparation. Available at: http://dx.doi.org/10.2139/ssrn.4857358
    * Accepted at MSOM annual conference 2024, Minneapolis, MN
  2. Last Time Buys during Product Rollovers: Manufacturer and Supplier Equilibria. Bazerghi, A. and Van Mieghem, J. A. (2024). Production and Operations Management 33(3): 757–774. Available at: https://doi.org/10.1177/10591478241231859
    * Winner of 2023 POMS College of Supply Chain Management Best Student Paper Competition
    * Accepted at MSOM annual conference 2023, Montreal, QC
  3. Last Time Buy or Last Resort? Insights from the Field. Bazerghi, A. and Van Mieghem, J. A. (2022). Available at: http://dx.doi.org/10.2139/ssrn.4114913

Research Interests

My research interests include supply chain management (procurement, inventory control, product life cycle) and optimization & learning (sequential optimization, reinforcement learning).

Current work

I am developing a new, general-purpose policy gradient algorithm for reinforcement learning with discrete actions, called relative Monte Carlo (rMC). The policy is improved in real time using relative returns between a root sample path and counterfactual simulated paths instantiated by taking a different action from the root. The method is guaranteed to converge for episodic and average-reward tasks. We are testing rMC with a policy network on a two-tier inventory fulfillment problem, where it learns better policies faster than comparable policy gradient algorithms. I will present this work at MSOM 2024 and ISMP in the coming months.
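A minimal sketch of the core idea, on a hypothetical two-action task (the full rMC algorithm, its convergence guarantees, and the inventory benchmark are in the paper; the toy environment, softmax policy, learning rate, and single-counterfactual estimator here are my illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical one-step episodic task: action 0 yields reward ~ N(1, 0.1),
# action 1 yields reward ~ N(0, 0.1).
def rollout(action):
    mean = 1.0 if action == 0 else 0.0
    return mean + 0.1 * rng.standard_normal()

theta = np.zeros(2)  # logits of a softmax policy over 2 actions
lr = 0.5

for _ in range(200):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)                          # root action
    g_root = rollout(a)                                 # return of the root path
    # Counterfactual paths: branch once at the root with each alternative action
    g_cf = np.mean([rollout(b) for b in range(2) if b != a])
    relative_return = g_root - g_cf                     # root vs. counterfactual
    grad_log = -probs
    grad_log[a] += 1.0                                  # grad of log pi(a | theta)
    theta += lr * relative_return * grad_log            # policy gradient step

print(softmax(theta))  # learned action probabilities
```

Using the counterfactual return as the comparison point plays the role of a baseline: each update is driven by how much better the sampled action's path was than the paths branching off with the other actions.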


Recent work

I published a research paper with Jan A. Van Mieghem entitled "Last Time Buys during Product Rollovers: Manufacturer and Supplier Equilibria" and a companion white paper, "Last Time Buy or Last Resort? Insights from the Field." When a supplier decides to obsolete a legacy component to focus resources on new growth products, original equipment manufacturers face a difficult question: how can they continue to support old products with large user bases after a supplier obsoletes a critical component? Last time buys are one of the most widespread solutions. Our research paper uses game theory to study how the supplier and manufacturer interact during a negotiation that covers both the last time buy of an old part and the launch of its successor.