After a few days of trying, here is the best I have come up with.
My initial attempt, once I realized that submitted jobs are not killed when the terminal detaches, was to submit the job and then kill the submitting process from a bash script. That did not work well, though: the AWS calls to EMR take time, so some jobs were killed before they had actually been submitted.
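For reference, the abandoned approach looked roughly like this (a hypothetical sketch, not the original script; `submit_job.py` is a made-up name, and `sleep 60` stands in for the real submission command so the snippet is self-contained):

```shell
#!/bin/sh
# Submit in the background, give the submission a fixed head start,
# then kill the local process. The flaw described above: if the kill
# fires before the EMR API call completes, the job is never submitted.
sleep 60 &        # stand-in for: python submit_job.py -r emr ... &
pid=$!
sleep 2           # guess at how long submission takes -- the race condition
kill "$pid"
wait "$pid" 2>/dev/null
echo "killed $pid"
```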
Current best solution:
from jobs import MyMRJob
import logging

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)-15s %(levelname)-8s %(message)s',
)
log = logging.getLogger('submitjobs')

def main():
    cluster_id = 'x-MXMXMX'
    log.info('Cluster: %s', cluster_id)
    for i in range(10):
        n = '%04d' % i
        log.info('Adding job: %s', n)
        mr_job = MyMRJob(args=[
            '-r', 'emr',
            '--conf-path', 'mrjob.conf',
            '--no-output',
            '--output-dir', 's3://mybucket/mrjob/%s' % n,
            '--cluster-id', cluster_id,
            'input/file.%s' % n,
        ])
        runner = mr_job.make_runner()
        # The following is the secret sauce: it submits the job and returns
        # without waiting. It is a private method, though, so it may be
        # changed without notice.
        runner._launch()

if __name__ == '__main__':
    main()
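Since `_launch()` is private API, a defensive check can make an mrjob upgrade fail loudly at startup instead of partway through a batch. This is my suggestion, not part of the original answer; `FakeRunner` below is a stand-in so the snippet runs without mrjob installed:

```python
# Look the private method up with getattr before relying on it, so a
# renamed or removed _launch() raises a clear error up front.
class FakeRunner:            # stand-in for the real EMR runner object
    def _launch(self):
        return 'launched'

runner = FakeRunner()
launch = getattr(runner, '_launch', None)
if launch is None:
    raise RuntimeError('mrjob private API changed: _launch() is gone')
print(launch())  # → launched
```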