I submitted multiple jobs on a Linux machine using Python's multiprocessing Pool (from multiprocessing import Pool):

parallel = Pool(processes=10)
I had about 4,000 jobs. The pool used all 10 processes for the first few days, but then the number of active processes started dropping. Now I have around 300 jobs left and it is down to 4 processes; it is no longer using all 10.
My guess is that Python divided the jobs among the workers up front, and the workers that finished their share early are now sitting idle.

Is this assumption correct? Is there any way around this so that all processes stay busy until the end and the runs finish faster? Here is the relevant method:
def run_parallel(self, my_jobs):
    # create a pool with a fixed number of worker processes
    parallel = Pool(processes=10)
    # run the jobs in parallel and block until they all finish
    parallel.map(run_jobs, my_jobs)
    # shut the pool down cleanly once all jobs are done
    parallel.close()
    parallel.join()
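
For reference, here is the workaround I am considering, based on my reading of the Pool.map docs: pass chunksize=1 so jobs are handed to workers one at a time instead of being pre-split into large chunks. This is just a minimal sketch of what I have in mind; run_jobs and my_jobs stand in for my actual function and job list above:

from multiprocessing import Pool

def run_parallel(self, my_jobs):
    with Pool(processes=10) as parallel:
        # chunksize=1 hands out one job at a time, so a worker that
        # finishes early immediately picks up the next remaining job
        # instead of sitting idle once its pre-assigned chunk is done
        parallel.map(run_jobs, my_jobs, chunksize=1)

Would this keep all 10 processes busy until the last ~300 jobs are done, or would imap_unordered make more sense here?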