How to improve the efficiency of for loops in Python
A single day of taxi data for one city contains 33,210,000 records. How do you pull each vehicle's records out into its own dedicated file?
The idea is simple:
loop over the 33,210,000 records and move each one into the file belonging to its vehicle.
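That record-by-record loop can be sketched as follows. The column layout and vehicle IDs are invented for illustration, and in-memory StringIO buffers stand in for the real per-vehicle files that would be opened on disk:

```python
import csv
import io

# Hypothetical sample standing in for the taxi CSV: each row holds a
# vehicle ID followed by its GPS fields (made-up schema).
rows = [
    ["car_001", "2016-06-01 08:00:00", "104.06", "30.67"],
    ["car_002", "2016-06-01 08:00:01", "104.07", "30.65"],
    ["car_001", "2016-06-01 08:00:05", "104.08", "30.66"],
]

# One in-memory "file" per vehicle; on disk these would be real files
# opened with open(f"{vehicle_id}.csv", "a").
per_vehicle_files = {}

for row in rows:                      # one pass over every record
    vehicle_id = row[0]
    f = per_vehicle_files.setdefault(vehicle_id, io.StringIO())
    csv.writer(f).writerow(row)       # append the record to its vehicle's file

print(sorted(per_vehicle_files))      # -> ['car_001', 'car_002']
```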
But for 30+ million records, iterating one at a time is far too slow: moving 600,000 records took me 2 hours, so 30 million would take about 100 hours, i.e. 4-5 days. The machine would also have to stay powered on for all five days without a single freeze.
So we need a way to run the for loop in parallel.
A CSV with 30 million rows is too big to even open, so I used a split tool to cut it into 53 CSVs of 600,000 rows each.
My original plan was to read the folder, build a list of those 600,000-row CSV files, and process them one after another, which still amounts to 33,210,000 iterations. Parallelizing the for loop means handling several of the 600,000-row CSVs at the same time, which cuts the total time by a corresponding factor.
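A minimal sketch of that per-chunk parallel approach, assuming a hypothetical directory of split files (here three stand-in chunks are generated on the fly, and the per-chunk work is reduced to a row count):

```python
from multiprocessing.dummy import Pool as ThreadPool  # thread-based pool
import glob
import os
import tempfile

# Create a hypothetical stand-in for the 53 split files (3 here).
chunk_dir = tempfile.mkdtemp()
for i in range(3):
    with open(os.path.join(chunk_dir, f"chunk_{i}.csv"), "w") as f:
        f.write("car_001,2016-06-01 08:00:00\ncar_002,2016-06-01 08:00:01\n")

def process_csv(path):
    # Placeholder for the real per-chunk work (dispatching each row to
    # its vehicle's file); here it just counts the rows in the chunk.
    with open(path) as f:
        return sum(1 for _ in f)

csv_files = sorted(glob.glob(os.path.join(chunk_dir, "*.csv")))

pool = ThreadPool(4)                           # up to 4 chunks in flight at once
row_counts = pool.map(process_csv, csv_files)  # parallel "for" over the chunks
pool.close()
pool.join()
print(row_counts)  # -> [2, 2, 2]
```

Because the per-chunk work is dominated by file I/O, a thread pool is enough here; `pool.map` keeps the results in the same order as the input list.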
The parallel for loop was inspired by the pattern below.
My original code looked something like this:

words = ['apple', 'banana', 'cake', 'dumpling']
for word in words:
    print(word)
The parallel version looks like this:

from multiprocessing.dummy import Pool as ThreadPool

items = list()
pool = ThreadPool()
pool.map(process, items)
pool.close()
pool.join()
where process is the function that does the actual work.
A complete example:

# -*- coding: utf-8 -*-
import time
from multiprocessing.dummy import Pool as ThreadPool

def process(item):
    print('running the for loop in parallel')
    print(item)
    time.sleep(5)

items = ['apple', 'banana', 'cake', 'dumpling']
pool = ThreadPool()
pool.map(process, items)
pool.close()
pool.join()
Bonus: replacing a for loop with multithreading in Python 3 to speed up a program
The code before and after the optimization:
from git_tools.git_tool import get_collect_projects, QQNews_Git
from threading import Thread, Lock
import datetime

base_url = "http://git.xx.com"
project_members_commits_lang_info = {}
lock = Lock()
threads = []

'''
Author: zenkilan
'''

def count_time(func):
    def took_up_time(*args, **kwargs):
        start_time = datetime.datetime.now()
        ret = func(*args, **kwargs)
        end_time = datetime.datetime.now()
        took_up_time = (end_time - start_time).total_seconds()
        print(f"{func.__name__} execution took up time: {took_up_time}")
        return ret
    return took_up_time

def get_project_member_lang_code_lines(git, member, begin_date, end_date):
    global project_members_commits_lang_info
    global lock
    member_name = member["username"]
    r = git.get_user_info(member_name)
    if not r["id"]:
        return
    user_commits_lang_info = git.get_commits_user_lang_diff_between(r["id"], begin_date, end_date)
    if len(user_commits_lang_info) == 0:
        return
    lock.acquire()
    project_members_commits_lang_info.setdefault(git.project, dict())
    project_members_commits_lang_info[git.project][member_name] = user_commits_lang_info
    lock.release()

def get_project_lang_code_lines(project, begin_date, end_date):
    global threads
    git = QQNews_Git(project[1], base_url, project[0])
    project_members = git.get_project_members()
    if len(project_members) == 0:
        return
    for member in project_members:
        thread = Thread(target=get_project_member_lang_code_lines, args=(git, member, begin_date, end_date))
        threads.append(thread)
        thread.start()

@count_time
def get_projects_lang_code_lines(begin_date, end_date):
    """
    Collect per-project language/code-line statistics -- new method (faster)
    Replaces the for loops with threads that concurrently access a shared
    external resource
    :return:
    """
    global project_members_commits_lang_info
    global threads
    for project in get_collect_projects():
        thread = Thread(target=get_project_lang_code_lines, args=(project, begin_date, end_date))
        threads.append(thread)
        thread.start()

@count_time
def get_projects_lang_code_lines_old(begin_date, end_date):
    """
    Collect per-project language/code-line statistics -- old method (very slow)
    Straightforward implementation: two nested for loops, each containing a
    time-consuming call
    :return:
    """
    project_members_commits_lang_info = {}
    for project in get_collect_projects():
        git = QQNews_Git(project[1], base_url, project[0])
        project_members = git.get_project_members()
        user_commits_lang_info_dict = {}
        if len(project_members) == 0:
            continue
        for member in project_members:
            member_name = member["username"]
            r = git.get_user_info(member_name, debug=False)
            if not r["id"]:
                continue
            try:
                user_commits_lang_info = git.get_commits_user_lang_diff_between(r["id"], begin_date, end_date)
                if len(user_commits_lang_info) == 0:
                    continue
                user_commits_lang_info_dict[member_name] = user_commits_lang_info
                project_members_commits_lang_info[git.project] = user_commits_lang_info_dict
            except:
                pass
    return project_members_commits_lang_info

def test_results_equal(resultA, resultB):
    """
    Test helper
    :param resultA:
    :param resultB:
    :return:
    """
    print(resultA)
    print(resultB)
    assert len(str(resultA)) == len(str(resultB))

if __name__ == '__main__':
    from git_tools.config import begin_date, end_date
    get_projects_lang_code_lines(begin_date, end_date)
    for t in threads:
        t.join()
    old_result = get_projects_lang_code_lines_old(begin_date, end_date)
    test_results_equal(old_result, project_members_commits_lang_info)
The old method contains a time-consuming call in both the outer and the inner for loop:
1) git.get_project_members()
2) git.get_user_info(member_name, debug=False)
The optimization is done in two steps, inner loop first or outer loop first, either order works: replace each for loop with threads that run concurrently and share the external resource, taking a lock to avoid write conflicts.
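The thread-plus-lock pattern used in the optimized code boils down to this sketch; `worker` and its doubling step are placeholders for the real git calls:

```python
from threading import Thread, Lock

shared = {}   # shared external resource, like project_members_commits_lang_info
lock = Lock()
threads = []

def worker(name, value):
    result = value * 2   # the slow part (e.g. a git API call) runs unlocked
    with lock:           # only the shared-dict write is serialized
        shared[name] = result

for i in range(5):
    t = Thread(target=worker, args=(f"task_{i}", i))
    threads.append(t)
    t.start()

for t in threads:        # wait for every worker before reading the result
    t.join()

print(sorted(shared))  # -> ['task_0', 'task_1', 'task_2', 'task_3', 'task_4']
```

Holding the lock only around the dictionary update, not around the slow call, is what lets the threads actually overlap their work.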
The test passes, and the timing decorator reports (in seconds):
get_projects_lang_code_lines execution took up time: 1.85294
get_projects_lang_code_lines_old execution took up time: 108.604177
That is a speedup of roughly 58x (108.604 / 1.853 ≈ 58.6).