For node-aware partitioning, we want the Ramba workers/RemoteStates to be grouped so that all the workers on a given node are laid out consecutively. In MPI mode, there are configurations where the ranks are not natively laid out consecutively on the nodes. Thus, we need a mapping between MPI ranks and workers so that, from Ramba's perspective, the invariant is maintained that each chunk of num_workers/num_nodes consecutive entries in the worker array resides on a single node.
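One way to build such a mapping can be sketched as follows. This is a hypothetical illustration, not Ramba's actual implementation: it assumes each rank's node is known (e.g. gathered via hostname exchange) and derives a permutation so that workers on the same node get consecutive worker indices.

```python
def node_aware_mapping(rank_nodes):
    """Compute a rank<->worker permutation from a per-rank node assignment.

    rank_nodes: list where rank_nodes[r] is the node name/id hosting MPI rank r.
    Returns (worker_to_rank, rank_to_worker) such that all workers backed by
    ranks on the same node occupy a consecutive block of worker indices.
    """
    # Sort ranks by (node, rank): ranks on the same node become consecutive,
    # and the original rank order within each node is preserved.
    worker_to_rank = sorted(range(len(rank_nodes)),
                            key=lambda r: (rank_nodes[r], r))
    # Invert the permutation for use on the rank side: rank r acts as
    # worker rank_to_worker[r].
    rank_to_worker = [0] * len(worker_to_rank)
    for w, r in enumerate(worker_to_rank):
        rank_to_worker[r] = w
    return worker_to_rank, rank_to_worker


# Example: 4 ranks placed round-robin across 2 nodes, so ranks on a node
# are NOT natively consecutive.
w2r, r2w = node_aware_mapping(["node0", "node1", "node0", "node1"])
# node0's ranks (0, 2) become workers 0-1; node1's ranks (1, 3) become workers 2-3.
```

A stable sort keyed on the node keeps intra-node rank order intact, so the mapping is deterministic and identical on every rank without extra communication.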