
In working with a social services agency that provides a continuum of programs across the behavioral health and child welfare spectrum, adequately managing individual worker and total program caseloads has become increasingly difficult due to burgeoning demand.

Each program has already developed a profile for ranking the complexity of a client based on a select number of factors (e.g., diagnosis, previous encounters, results of assessment instruments).

What we are trying to develop now is an approach that acts as a load balancer for new cases, ensuring that individual workers' caseloads are balanced with respect to case complexity. We can do this manually to some degree, but the goal is to automate it and take human bias out of the process.

Are there any algorithms or adaptable approaches that have been used for such a task (even in other sectors)?

patrick
  • I think this is a very good, high-level question on approaches. (Hopefully it will garner an answer that can point you in the right direction for deep exploration on one of the related AI stacks, such as Data Science.) Welcome to AI! – DukeZhou Aug 29 '17 at 23:01
  • Just to clarify, you are asking for an algorithm that takes in a new case, estimates its load on the average worker, and assigns it to a worker such that no one is overburdened? Is caseload measured in difficulty or in estimated time to complete? – Jaden Travnik Feb 14 '18 at 13:50

1 Answer


It sounds like you're looking at the Partition Problem. https://en.wikipedia.org/wiki/Partition_problem

That is the task of partitioning one set into N subsets so that the subsets' sums are equal, or as close to equal as possible.

Obtaining an exact solution is NP-hard (you can't do much better than trying all combinations); however, you can get an approximate answer in polynomial time.

Greedy approach:

  1. Create N sets, each initially empty.
  2. Sort all items in descending order by their complexity score.
  3. For each item, in order, add it to the set with the lowest current sum.

If N is large, you may want to keep the sets in a min-heap/priority queue, so that finding the lowest-sum set is cheap.
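The greedy steps above can be sketched as follows. This is a minimal illustration, not a production scheduler; the complexity scores and worker count are made-up inputs, and ties between equally loaded workers are broken arbitrarily by the heap:

```python
import heapq

def greedy_partition(complexities, n_workers):
    """Assign case complexity scores to n_workers caseloads greedily.

    Returns a list of score lists, one per worker.
    """
    # Min-heap of (current total complexity, worker index),
    # so the least-loaded worker is always at the top.
    heap = [(0, i) for i in range(n_workers)]
    heapq.heapify(heap)
    assignments = [[] for _ in range(n_workers)]

    # Process cases from most to least complex.
    for score in sorted(complexities, reverse=True):
        total, worker = heapq.heappop(heap)
        assignments[worker].append(score)
        heapq.heappush(heap, (total + score, worker))

    return assignments

# Example: five cases split across two workers.
loads = greedy_partition([8, 7, 6, 5, 4], 2)
print([sum(a) for a in loads])  # e.g. caseload totals of 17 and 13
```

Note that the example output (17 and 13) shows the approximation at work: the optimal split here is 15 and 15 (8+7 vs. 6+5+4), but the greedy result is still close, and the algorithm runs in O(n log n) rather than exponential time.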

thayne