Jose Blanchet (Stanford University)
Understanding Strength in Polymers: A Stochastic Networks Approach
We report results from a collaboration with a group of materials scientists, including the groups led by Prof. Wei Cai (Stanford) and Prof. Zhigang Suo (Harvard). The results are motivated (as we shall discuss) by fundamental questions that (surprisingly!) are still not well understood in polymer science. Understanding what makes a polymer strong has fundamental implications for real-life applications (such as extending the lifetime of vehicle tires and other polymer products, which can be a significant source of pollution). Materials scientists use data-driven evidence (based on experiments) and computational methods (based on molecular dynamics simulations) to study these questions. While these methods are powerful, they can be expensive, time-consuming, and sometimes difficult to interpret. We propose a model based on a network of branching random walks, which is validated by coarse-grained molecular dynamics simulations. This model is then used to predict the critical strain at which a polymer network will experience significant deformation. The model helps explain experimental phenomena at the core of questions about the strength of polymers, and its results enhance predictions based on other models that are widely used in materials science. Our analysis is based on large deviations theory and extends recent results on first passage times for branching random walks.
This is joint work with Wei Cai, Shaswat Mohanty, Zhenyuan Zhang and sponsored by the DoD through a MURI grant.
Xinyun Chen (The Chinese University of Hong Kong, Shenzhen)
Online learning for dynamic pricing with delay-sensitive customers
Motivated by the sharing economy and online marketplaces, we propose an online learning approach to optimizing a state-dependent pricing policy in service systems where customers are delay-sensitive. Leveraging the underlying queueing structures, the proposed online learning method features improved sample efficiency and a faster convergence rate compared to standard online learning algorithms.
The talk is based on a joint work with Guiyu Hong (CUHK-Shenzhen) and Yunan Liu (NCSU).
Richard Combes (Laboratory of Signals and Systems, CentraleSupélec)
An extension of McDiarmid’s inequality
We generalize McDiarmid’s inequality to functions with bounded differences on a high-probability set, using an extension argument. Such functions concentrate around their conditional expectations. We illustrate the usefulness of this generalized inequality on a few examples, and further extend the results to concentration in general metric spaces.
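For context, the classical inequality being extended can be stated as follows: if changing the $i$-th coordinate of $f$ changes its value by at most $c_i$ (the bounded-differences property), then for independent $X_1,\dots,X_n$ and all $t > 0$,

```latex
\mathbb{P}\Big( f(X_1,\dots,X_n) - \mathbb{E}\, f(X_1,\dots,X_n) \ge t \Big)
  \;\le\; \exp\!\left( \frac{-2 t^2}{\sum_{i=1}^{n} c_i^{2}} \right).
```

The extension discussed in the talk requires the bounded-differences condition to hold only on a high-probability set, with concentration then holding around a conditional expectation rather than the mean.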
Jing Dong (Columbia Business School)
Differentiable Discrete Event Simulation for Queuing Network Control
Queueing network control is a ubiquitous task for managing congestion in job-processing systems, such as service systems, communication networks, and manufacturing processes. Recently, there has been growing interest in applying reinforcement learning (RL) techniques to learn good control policies for general queueing networks. However, queueing network control poses unique challenges when applying standard RL methods, including (1) high stochasticity, (2) large state and action spaces, and (3) lack of stability guarantees. In this work, we propose a scalable framework for policy optimization in multi-class queueing networks based on differentiable simulation, which tackles these challenges and substantially improves sample efficiency over existing approaches. The framework rests on a novel approach for computing pathwise gradients of discrete event dynamical systems with auto-differentiation, and we address the non-differentiability of the dynamics through carefully designed smoothing. Our proposed method is easy to train (e.g., not sensitive to hyperparameter tuning), highly scalable (e.g., can solve very large-scale problems), and can handle realistic instances outside the scope of existing RL methods, including systems operating in a non-stationary environment with non-exponential inputs (i.e., interarrival and service times).
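The paper's specific smoothing construction is not reproduced here, but the basic idea of replacing a non-differentiable event-selection operation (picking the minimum residual clock) with a soft surrogate can be sketched as follows; the softmin form and the temperature parameter `tau` are illustrative assumptions:

```python
import numpy as np

def softmin(times, tau):
    """Differentiable surrogate for min(times): a softmax-weighted average.

    As tau -> 0 this approaches the hard minimum, while its gradient with
    respect to each entry of `times` remains well defined at tau > 0.
    """
    w = np.exp(-(times - times.min()) / tau)  # numerically stabilized weights
    w /= w.sum()
    return float(w @ times)

times = np.array([1.3, 0.4, 2.1])   # e.g., residual event clocks
hard = times.min()                   # non-differentiable event selection
smooth = softmin(times, tau=0.01)    # close to 0.4, but differentiable
```

At small `tau` the surrogate tracks the true next-event time almost exactly; at larger `tau` the event times are blended, trading bias for smoother gradients.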
This is joint work with Ethan Che and Hongseok Namkoong.
Harsha Honnappa (Purdue University)
Doubly Stochastic Point Processes: From Representation to Nonparametric Maximum Likelihood Estimation
Doubly stochastic point process models are invaluable for modeling discrete-event systems exhibiting nonstationarities and high variability. The probability measure over the sample paths of the point process is typically represented as an infinite mixture model. The doubly stochastic Poisson (DSPP) or Cox process serves as the primary example in this talk. Statistical inference for these models is challenging due to the need to estimate the mixing probability measure from point process observations. This challenge is known as nonparametric maximum likelihood estimation (NPMLE) in the statistics literature. While much of the existing literature focuses on mixing measures supported on finite-dimensional subspaces, doubly stochastic processes often involve mixing measures supported on infinite-dimensional or path spaces, further complicating statistical estimation and inference.
This talk addresses two key questions:
- The “representation” power of DSPP or Cox models: Which classes of point processes can be exactly represented using a mixture model? We will revisit JFC Kingman’s classic result on representing renewal processes as DSPPs and present a new, simplified proof of his main finding.
- Estimating the mixing measure from sample paths: After reviewing classical work on estimating finitely parameterized mixing models using the expectation-maximization (EM) algorithm, we will explore tight lower-bounding variational inference (VI) approximations to the NPMLE objective in more complicated settings that use neural networks to parametrize the mixing measure. We will provide statistical guarantees and insights into the accuracy of these approximate solutions.
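As a point of reference (written here in generic form, not necessarily the talk's exact notation), the NPMLE over the mixing measure $Q$, given observations $x_1,\dots,x_n$ with conditional likelihood $p(x \mid \theta)$, is

```latex
\widehat{Q} \in \arg\max_{Q \in \mathcal{Q}}
  \;\sum_{i=1}^{n} \log \int p(x_i \mid \theta)\, \mathrm{d}Q(\theta),
```

where for a Cox process $\theta$ is itself a random intensity path, which is what makes the support of $Q$ infinite-dimensional; the variational inference approximations discussed in the talk replace the intractable inner integral with a tight lower bound.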
This ongoing research is conducted in collaboration with Xinlong Du at Purdue University.
Rouba Ibrahim (University College London)
Communication in Social Services Operations
We study the effectiveness of information design as a managerial lever to mitigate the overuse of critical resources in congestion-prone social service systems. Leveraging the service provider’s informational advantage about relevant aspects of the system, effective communication requires the sharing of carefully curated information to persuade low-need customers to forgo service for the benefit of customers with higher service needs. To study whether effective communication can arise in equilibrium, we design controlled laboratory experiments to test the predictions of a queueing-game theoretic model that endogenizes the implementation of information-sharing policies. Our main result is that communication increases social welfare even when the service provider lacks the ability to formally commit to their information policy (as is usually the case in practical settings), i.e., under conditions where standard theory predicts that communication lacks credibility and thus fails to affect customer behaviour.
(Joint work with Arturo Estrada Rodriguez and Mirko Kremer.)
Janusz Meylahn (University of Twente)
Can pricing algorithms learn to collude?
Pricing algorithms may be able to learn to work together to raise prices, in ways that are legal under current antitrust laws, instead of competing. Many simulation studies have shown that reinforcement learning algorithms may be capable of this feat, but the interpretation of these results is hotly debated among economists, legal scholars, computer scientists, and mathematicians. A central issue obscuring the debate is the lack of a common definition of what it means for algorithms to learn to collude. In this talk, I will propose two such definitions and present results on the collusive capabilities of various algorithms in light of these definitions.
(Based on joint work with Arnoud den Boer, Maarten Pieter Schinkel, Ibrahim Abada, Joe Harrington and Xavier Lambin.)
Bart van Parys (MIT Sloan School of Management)
Disciplined Decisions in the Face of Uncertainty and Data
Problem uncertainty typically limits how well decisions can be tailored to the problem at hand but often cannot be avoided. The availability of large quantities of data in modern applications, however, presents an exciting opportunity to nevertheless make better-informed decisions. Capitalizing on this opportunity requires developing novel tools at the intersection of operations research, stochastics, and data science. In a modern setting, the primitive describing uncertainty is often messy data rather than classical distributions. Simply quantifying the probability of an undesirable outcome becomes a challenging uncertainty quantification problem, which I approach through a distributional optimization lens. Distributionally robust optimization (DRO) has recently gained prominence as a paradigm for making data-driven decisions that are protected against adverse overfitting effects. We justify the effectiveness of this paradigm by pointing out that certain DRO formulations indeed enjoy optimal statistical properties. Furthermore, DRO formulations can also be tailored to efficiently protect decisions against overfitting even when working with messy, corrupted data. Finally, as such formulations are often computationally tractable, they provide a practical road to the development of tomorrow’s trustworthy decision systems.
Alexandre Proutiere (KTH Royal Institute of Technology)
Pure Exploration with Fixed Budget: A Large Deviation Perspective
Pure exploration with fixed budget refers to the task of answering a question with a finite set of possible answers about a stochastic environment, based on a fixed budget of noisy observations. The collection of these observations can be made online in an adaptive manner. Examples of such tasks include A/B testing, identifying the best arm in bandits, and identifying the best policy in reinforcement learning. Surprisingly, even for the simplest of these tasks, researchers have not managed to characterize the minimal achievable error probability. When observations are selected using a static sampling strategy, the error probability can be explicitly derived via large deviation techniques. Analyzing the performance of algorithms with adaptive sampling strategies is significantly more challenging. In this talk, we discuss recent advances towards addressing this challenge using large deviations. For example, in the best arm identification task, we establish a connection between the large deviation principle (LDP) satisfied by the empirical exploration process and that satisfied by the empirical arm rewards. This connection holds for any adaptive algorithm, and is leveraged to improve error probability upper bounds of some existing algorithms, such as the celebrated SR (Successive Rejects) algorithm, and to devise and analyze new algorithms with improved performance.
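For readers unfamiliar with it, the SR algorithm mentioned above splits the budget into phases and discards the empirically worst arm at the end of each phase. A compact sketch with the standard budget allocation (variable names are ours) is:

```python
import math

def successive_rejects(pull, n_arms, budget):
    """Successive Rejects for fixed-budget best-arm identification.

    `pull(i)` returns one reward sample from arm i. The budget is split
    into n_arms - 1 phases; after each phase the arm with the worst
    empirical mean is discarded, and the last surviving arm is returned.
    """
    log_bar = 0.5 + sum(1.0 / i for i in range(2, n_arms + 1))
    pulls = [0] * n_arms       # pull counts per arm
    total = [0.0] * n_arms     # cumulative rewards per arm
    active = list(range(n_arms))
    n_prev = 0
    for k in range(1, n_arms):
        # target cumulative pulls per surviving arm in phase k
        n_k = math.ceil((budget - n_arms) / (log_bar * (n_arms + 1 - k)))
        for i in active:
            for _ in range(n_k - n_prev):
                total[i] += pull(i)
                pulls[i] += 1
        n_prev = n_k
        worst = min(active, key=lambda i: total[i] / pulls[i])
        active.remove(worst)   # reject the empirically worst arm
    return active[0]
```

Note that the sampling schedule is static (fixed in advance), while the rejection decisions are adaptive, which is precisely the regime where the large deviation analysis described in the talk becomes delicate.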
Sanjay Shakkottai (University of Texas at Austin)
On solving inverse problems in computer vision using latent diffusion based generative models
Diffusion models have emerged as a powerful new approach to generative modeling. In this talk, we present the first framework that uses pre-trained latent diffusion models to solve linear inverse problems such as image denoising, inpainting, and super-resolution. Previously proposed algorithms (such as DPS and DDRM) only apply to pixel-space diffusion models. We theoretically analyze our algorithm showing provable sample recovery in a linear model setting. The algorithmic insight obtained from our analysis extends to more general settings often considered in practice. Experimentally, we outperform previously proposed posterior sampling algorithms in a wide variety of problems including random inpainting, block inpainting, denoising, deblurring, destriping, and super-resolution. Next, we present an efficient second-order approximation using Tweedie’s formula to mitigate the bias incurred in the widely used first-order samplers. With this method, we devise a surrogate loss function to refine the reverse process at every diffusion step to address inverse problems and perform high-fidelity text-guided image editing.
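The first-order bias mentioned above stems from estimating a posterior mean via Tweedie's formula; for a Gaussian corruption $y = x + \sigma z$ with $z \sim \mathcal{N}(0, I)$, the first-order formula reads

```latex
\mathbb{E}[\,x \mid y\,] \;=\; y + \sigma^{2}\, \nabla_{y} \log p(y),
```

with the score $\nabla_{y} \log p(y)$ supplied by the learned diffusion network; the second-order approximation in the talk refines this estimate with higher-order information.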
Based on joint work with Litu Rout, Negin Raoof, Giannis Daras, Constantine Caramanis, Alex Dimakis, Yujia Chen, Abhishek Kumar, and Wen-Sheng Chu.
Pengyi Shi (Purdue University)
Breaking the Revolving Doors: Combining Machine Learning and Queueing Theory for Incarceration Diversion Programs
Recidivism, where formerly incarcerated individuals reoffend, is a significant challenge in the U.S. criminal justice system, with a reoffending rate of over 50%. Incarceration diversion programs aim to break this revolving door by focusing on rehabilitation rather than incarceration. Machine learning (ML) algorithms assist in admission decisions to these programs by assigning recidivism risk scores to potential participants, helping to balance the program benefits against the risk of in-program reoffense. However, ML risk estimation can be imperfect and biased, leading to potentially suboptimal and unfair admission decisions. We formulate an admission control problem using a queueing model with heterogeneous classes and covariate-based risk estimation. Our model accounts for in-service reneging, where participants may reoffend before completing the program. We derive a priority score policy from a static control problem, analyze its performance under imperfect estimation, and establish conditions for its optimality. Additionally, we leverage Markov Decision Processes to develop dynamic decision support. If time permits, I will also discuss ongoing work that leverages generative AI models to assist in identifying individuals at risk, and our engagement with community partners to deploy the decision support tools. This talk is based on joint work with Xiaoquan Gao and Bingxuan Li from Purdue University, and Zhiqiang Zhang and Amy Ward from the University of Chicago.
Ran Snitkovsky (Tel Aviv University)
Learning Algorithms for Queues with Strategic Customers
This talk will discuss novel iterative algorithms for computing, or learning, Nash equilibria and optimal regulation in queueing games. The purpose of this line of research is to suggest robust, easy-to-implement algorithms, with theoretical guarantees, that learn customer equilibrium behavior and related quantities based on adaptive simulation of the underlying queueing game. Such algorithms potentially provide a powerful tool for approximating equilibrium strategies and optimal regulation in a broad range of strategic queueing models in which classical queueing-analytic methods are impractical. In the talk I will introduce the general framework, present several established results, and suggest some promising paths for future research.
R Srikant (University of Illinois at Urbana-Champaign)
Learning and Control in Countable State Spaces
We will consider policy optimization methods in reinforcement learning where the state space is countably infinite. The motivation arises from control problems in communication networks, matching markets, and other queueing systems. We consider the Natural Policy Gradient (NPG) algorithm, which is popular for finite state spaces, and show three results in the context of countable state spaces: (i) when perfect policy evaluation is possible, standard NPG converges with a small modification; (ii) when the policy evaluation error is within a factor of the true value function, one can obtain bounds on the performance of the NPG algorithm; and (iii) we will discuss the ability of neural network-based function approximations to satisfy the condition in (ii) above.
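To fix ideas, in the standard tabular softmax setting the NPG update takes a multiplicative-weights form,

```latex
\pi_{t+1}(a \mid s) \;\propto\; \pi_{t}(a \mid s)\,
  \exp\!\big( \eta\, A^{\pi_t}(s, a) \big),
```

where $\eta$ is the step size and $A^{\pi_t}$ is the advantage function of the current policy; with countably many states, controlling the evaluation error in $A^{\pi_t}$ uniformly is precisely the difficulty behind conditions (ii) and (iii).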
Neil Walton (University of Durham)
Understanding Waiting List Pressures
NHS waiting lists currently sit at record lengths due to a combination of the immediate impact of the pandemic and long-run pressures requiring investment in NHS resources. These factors have left managers and clinicians with increasingly complex decisions when scheduling elective operations. It is imperative that managers understand the basic dynamics, tradeoffs, and pressures involved in managing waiting lists.
Queueing theory is a key part of operational research, extensively used throughout manufacturing, retail, information technology, and other sectors. This article provides an exposition of the theory of queues within the context of the current NHS backlog. With this information, a manager will be able to understand the demand, queue size, waiting times, capacity requirements, and trade-offs for different waiting lists. We describe the metrics and a reporting system developed to understand waiting list pressures in a large NHS trust. Our aim is to enable managers to better understand their waiting lists, to achieve targets, and to improve health outcomes.
Mengdi Wang (Princeton University)
Capitalizing on Generative AI: Guided Diffusion Models Towards Generative Optimization
Weina Wang (Carnegie Mellon University)
Recent Advances in Average-Reward Restless Bandits
We consider the infinite-horizon, average-reward restless bandit problem. A central challenge is designing computationally efficient policies that achieve a diminishing optimality gap as the number of arms, N, grows large. Existing policies, including the renowned Whittle index policy, all rely on a uniform global attractor property (UGAP) assumption to achieve asymptotic optimality, and this assumption is complex and difficult to verify. In this talk, I will present new, sampling-based policy designs for restless bandits. One of our proposed policies breaks the long-standing reliance on UGAP for the first time, and the subsequent policies eliminate the need for the UGAP assumption to achieve asymptotic optimality. Our techniques offer new insights into guaranteeing convergence (avoiding undesirable attractors or cycles) in large stochastic systems.
This talk is based on joint work with Yige Hong (CMU), Qiaomin Xie (UW–Madison), and Yudong Chen (UW–Madison).
Xingyu Wang (University of Amsterdam)
Sharp Characterization and Control of Global Dynamics of SGDs with Heavy Tails
The empirical success of deep learning is often attributed to the mysterious ability of stochastic gradient descent (SGD) to avoid sharp local minima in the loss landscape, as sharp minima are believed to lead to poor generalization. To unravel this mystery and potentially further enhance this capability of SGD, it is imperative to go beyond traditional local convergence analysis and obtain a comprehensive understanding of SGD’s global dynamics within complex non-convex loss landscapes. In this talk, we characterize the global dynamics of SGD through a heavy-tailed large deviations and local stability framework. This framework systematically characterizes rare events in heavy-tailed dynamical systems; building on it, we characterize intricate phase transitions in the first exit times, yielding heavy-tailed counterparts of the classical Freidlin-Wentzell and Eyring-Kramers theories. Moreover, applying this framework to SGD, we reveal a fascinating phenomenon in deep learning: by injecting and then truncating heavy-tailed noise during the training phase, SGD can almost completely avoid sharp minima and hence achieve better generalization performance on test data.
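The inject-and-truncate idea can be sketched in a few lines; the Pareto noise distribution, parameter names, and default values below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def heavy_tailed_sgd_step(w, grad_fn, lr, rng, alpha=1.5, clip=1.0):
    """One SGD step with injected, then truncated, heavy-tailed noise.

    A symmetric Pareto-type perturbation with tail index `alpha` is added
    to the gradient, and the resulting update increment is clipped at
    `clip` (the truncation step).
    """
    signs = rng.choice([-1.0, 1.0], size=w.shape)
    noise = signs * rng.pareto(alpha, size=w.shape)  # heavy-tailed injection
    step = lr * (grad_fn(w) + noise)
    return w - np.clip(step, -clip, clip)            # truncation

# Minimal usage on the quadratic loss f(w) = ||w||^2 (gradient 2w):
rng = np.random.default_rng(0)
w = np.array([2.0, -3.0])
for _ in range(100):
    w = heavy_tailed_sgd_step(w, lambda v: 2 * v, lr=0.05, rng=rng)
```

Truncation caps every individual jump at `clip`, so escaping a basin takes roughly width/`clip` jumps: narrow (sharp) basins are escaped quickly while wide (flat) basins retain the iterate, biasing the dynamics toward flat minima.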
Zicheng Wang (The Chinese University of Hong Kong, Shenzhen)
Human-in-the-Loop Automation: Ride-Hailing with Remote (Tele-)Drivers
Tele-driving refers to a novel concept where drivers can remotely operate vehicles. By putting the human back “in the loop,” tele-driving has emerged recently as a more viable alternative to fully automated vehicles, with ride-hailing being an important application. Because remote drivers can be pooled as a shared resource, it may be possible for such services to deploy fewer drivers than vehicles without significantly reducing service quality. In this paper, we examine the extent to which this is possible. Using a spatial queueing model that captures the dynamics of both pick-up and trip times, we show that the impact of reducing the number of drivers depends crucially on system workload relative to the number of vehicles. In particular, when workload is sufficiently high relative to the number of vehicles, we show that, perhaps surprisingly, reducing the number of drivers relative to the number of vehicles can actually improve service level. Having fewer drivers than vehicles ensures that there are always idle vehicles, and the fewer the drivers, the more idle vehicles there are likely to be. Consequently, with fewer drivers, pick-up times are likely to be shorter, making shorter overall service times likelier. The impact of shorter service times is particularly significant when the workload is high and, in this case, it is enough to overcome the loss in driver capacity. When workload is sufficiently low relative to the number of vehicles, we show that it is possible to significantly reduce the number of drivers without significantly reducing service level.
Xiaowei Zhang (Hong Kong University of Science and Technology)
Staffing Under Taylor’s Law
Staffing rules serve as an essential management tool in service industries to attain target service levels. Traditionally, the square-root safety rule, based on the Poisson arrival assumption, has been commonly used. However, empirical findings suggest that arrival processes often exhibit an 'over-dispersion' phenomenon, in which the variance of the arrival count exceeds its mean. In this paper, we develop a new doubly stochastic Poisson process model to capture a significant dispersion scaling law, known as Taylor's law, which states that the variance is a power function of the mean. We further examine how over-dispersion affects staffing, providing a closed-form staffing formula that ensures a desired service level. Interestingly, the additional staffing level beyond the nominal load is a power function of the nominal load, with the power exponent lying between 1/2 (the square-root safety rule) and 1 (the linear safety rule), depending on the degree of over-dispersion. Simulation studies and a large-scale call center case study indicate that our staffing rule outperforms classical alternatives.
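To make the exponent statement concrete, a toy version of such a rule can be written down; the constant `beta` and the exact functional form are illustrative assumptions, not the paper's closed-form formula:

```python
import math

def staffing_level(load, p, beta=1.0):
    """Toy safety-staffing rule under Taylor's law, Var ~ mean**p with
    1 <= p <= 2: staff the nominal load plus a safety term whose exponent
    is p/2. Setting p = 1 (Poisson arrivals) recovers the square-root
    safety rule; p = 2 gives a linear safety rule. `beta` is an
    illustrative service-level constant.
    """
    return math.ceil(load + beta * load ** (p / 2))

# For a nominal load of 100 servers' worth of work:
poisson_staff = staffing_level(100, p=1.0)   # square-root safety
overdispersed = staffing_level(100, p=1.5)   # heavier safety margin
```

The safety margin grows from order load**(1/2) to order load as the over-dispersion exponent p rises from 1 to 2, mirroring the range of exponents reported in the abstract.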