TY - JOUR
T1 - Learning policies for resource allocation in business processes
AU - Middelhuis, Jeroen
AU - Bianco, Riccardo Lo
AU - Sherzer, Eliran
AU - Bukhsh, Zaharah
AU - Adan, Ivo
AU - Dijkman, Remco
N1 - Publisher Copyright:
© 2024
PY - 2025/2
Y1 - 2025/2
N2 - Efficient allocation of resources to activities is pivotal in executing business processes but remains challenging. While resource allocation methodologies are well-established in domains like manufacturing, their application within business process management remains limited. Existing methods often do not scale well to large processes with numerous activities or optimize across multiple cases. This paper aims to address this gap by proposing two learning-based methods for resource allocation in business processes to minimize the average cycle time of cases. The first method leverages Deep Reinforcement Learning (DRL) to learn policies by allocating resources to activities. The second method is a score-based value function approximation approach, which learns the weights of a set of curated features to prioritize resource assignments. We evaluated the proposed approaches on six distinct business processes with archetypal process flows, referred to as scenarios, and three realistically sized business processes, referred to as composite business processes, which are a combination of the scenarios. We benchmarked our methods against traditional heuristics and existing resource allocation methods. The results show that our methods learn adaptive resource allocation policies that outperform or are competitive with the benchmarks in five out of six scenarios. The DRL approach outperforms all benchmarks in all three composite business processes and finds a policy that is, on average, 12.7% better than the best-performing benchmark.
KW - Bayesian optimization
KW - Business process optimization
KW - Deep reinforcement learning
KW - Resource allocation
UR - http://www.scopus.com/inward/record.url?scp=85210061122&partnerID=8YFLogxK
DO - 10.1016/j.is.2024.102492
M3 - Article
AN - SCOPUS:85210061122
SN - 0306-4379
VL - 128
JO - Information Systems
JF - Information Systems
M1 - 102492
ER -