Asked 2014-10-09

Python multiprocessing - errors with multiple processes

I have tried (unsuccessfully) to use multiprocessing to parallelize a loop. Here is my Python code:

from MMTK import * 
from MMTK.Trajectory import Trajectory, TrajectoryOutput, SnapshotGenerator 
from MMTK.Proteins import Protein, PeptideChain 
import numpy as np 

filename = 'traj_prot_nojump.nc' 

trajectory = Trajectory(None, filename) 
universe = trajectory.universe 
proteins = universe.objectList(Protein) 
chain = proteins[0][0] 

def calpha_2dmap_mult(t=range(0, len(trajectory))): 
    global trajectory 
    dist = [] 
    universe = trajectory.universe 
    proteins = universe.objectList(Protein) 
    chain = proteins[0][0] 
    traj = trajectory[t] 
    dt = 1000  # calculate distance every 1000 steps 
    for n, step in enumerate(traj): 
        if n % dt == 0: 
            universe.setConfiguration(step['configuration']) 
            for i in np.arange(len(chain)-1): 
                for j in np.arange(len(chain)-1): 
                    dist.append(universe.distance(chain[i].peptide.C_alpha, 
                                                  chain[j].peptide.C_alpha)) 
    return dist 

dist1 = calpha_2dmap_mult(range(1000,2000)) 
dist2 = calpha_2dmap_mult(range(2000,3000)) 

# Multiprocessing 
from multiprocessing import Pool, cpu_count 

pool = Pool(processes=2) 
dist_pool = [pool.apply(calpha_2dmap_mult, args=(t,)) for t in [range(1000,2000), range(2000,3000)]] 

print(dist_pool[0]==dist1) 
print(dist_pool[1]==dist2) 

If I try Pool(processes=1), the code works fine, but as soon as I ask for more than one process, it crashes with this error:

python: posixio.c:286: px_pgin: Assertion `*posp == ((off_t)(-1)) || *posp == lseek(nciop->fd, 0, 1)' failed. 

If anyone has a suggestion, it would be much appreciated ;-)

Answers


I suspect this is because of this line:

trajectory = Trajectory(None, filename) 

You open the file only once, at the start. You should instead pass just the filename to the multiprocessing target function and open the file there, so that each worker process gets its own handle.


If you run this code on OS X or any other Unix-like system, multiprocessing uses fork to create the child processes.

When forking, file descriptors are shared with the parent process. As far as I know, the trajectory object holds a reference to a file descriptor.

To fix this, you should move

trajectory = Trajectory(None, filename)

inside calpha_2dmap_mult, to make sure each child opens the file on its own.


Thanks for your comments (@John and @Wynand), I can now use multiple processes... but there is no performance improvement! The new script is posted in the next answer! – guillaume 2014-10-13 12:27:57


Here is the new script, which allows using multiple processes (but without any performance improvement):

from MMTK import * 
from MMTK.Trajectory import Trajectory, TrajectoryOutput, SnapshotGenerator 
from MMTK.Proteins import Protein, PeptideChain 
import numpy as np 
import time 

filename = 'traj_prot_nojump.nc' 


trajectory = Trajectory(None, filename) 
universe = trajectory.universe 
proteins = universe.objectList(Protein) 
chain = proteins[0][0] 

def calpha_2dmap_mult(trajectory=trajectory, t=range(0, len(trajectory))): 
    dist = [] 
    universe = trajectory.universe 
    proteins = universe.objectList(Protein) 
    chain = proteins[0][0] 
    traj = trajectory[t] 
    dt = 1000  # calculate distance every 1000 steps 
    for n, step in enumerate(traj): 
        if n % dt == 0: 
            universe.setConfiguration(step['configuration']) 
            for i in np.arange(len(chain)-1): 
                for j in np.arange(len(chain)-1): 
                    dist.append(universe.distance(chain[i].peptide.C_alpha, 
                                                  chain[j].peptide.C_alpha)) 
    return dist 

c0 = time.time() 
dist1 = calpha_2dmap_mult(trajectory, range(0,11001)) 
#dist1 = calpha_2dmap_mult(trajectory, range(0,11001)) 
c1 = time.time() - c0 
print(c1) 


# Multiprocessing 
from multiprocessing import Pool, cpu_count 

pool = Pool(processes=4) 
c0 = time.time() 
dist_pool = [pool.apply(calpha_2dmap_mult, args=(trajectory, t,)) for t in 
      [range(0,2001), range(3000,5001), range(6000,8001), 
       range(9000,11001)]] 
c1 = time.time() - c0 
print(c1) 


dist1 = np.array(dist1) 
dist_pool = np.array(dist_pool) 
dist_pool = dist_pool.flatten() 
print(np.all((dist_pool == dist1))) 

The time spent calculating the distances is "the same" without (70.1 s) and with multiprocessing (70.2 s)! I was perhaps not expecting a factor-of-4 improvement, but I was at least expecting some improvement!


Sounds like this could be a problem with reading a netCDF file over NFS. Is traj_prot_nojump.nc on NFS storage? See this Unidata mailing list post and this post to the IDL newsgroup. The latter suggests a workaround of first copying the file to local storage.
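A minimal sketch of that workaround (the paths and helper name are illustrative): copy the file to local temporary storage first, then open the local copy:

```python
import os
import shutil
import tempfile


def copy_to_local(remote_path):
    """Copy a (possibly NFS-hosted) file into local temporary storage
    and return the local path; the caller then opens the local copy."""
    local_path = os.path.join(tempfile.gettempdir(),
                              os.path.basename(remote_path))
    shutil.copyfile(remote_path, local_path)
    return local_path


# hypothetical usage with the trajectory file from the question:
# local_nc = copy_to_local('/nfs/data/traj_prot_nojump.nc')
# trajectory = Trajectory(None, local_nc)
```

Each worker process could call such a helper before opening the file, avoiding concurrent netCDF reads over NFS entirely.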


The trick is to use pool.apply_async instead of pool.apply to get the expected performance. For an explanation, see http://stackoverflow.com/questions/26356757/python-multiprocessing-no-performance-gain-with-multiple-processes. – guillaume 2015-04-13 12:29:42