Comparing two text files and removing duplicates in Python


Problem Description

I have two text files, file1 and file2.

File1 contains a bunch of random words, and file2 contains words that I want to remove from file1 when they occur. Is there a way of doing this?

I know I probably should include my own attempt at a script, to at least show effort, but to be honest it's laughable and wouldn't be of any help.

If someone could at least give a tip about where to start, it would be greatly appreciated.

Recommended Answer

Get each word from both files:

# Read both files with context managers so they are closed automatically
with open("/path/to/file1", "r") as f1:
    file1_raw = f1.read()
with open("/path/to/file2", "r") as f2:
    file2_raw = f2.read()

# Split on whitespace to get a list of words from each file
file1_words = file1_raw.split()
file2_words = file2_raw.split()

If you want the unique words from file1 that aren't in file2:

result = set(file1_words).difference(set(file2_words))
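As a quick sanity check, the set difference can be tried on small hypothetical word lists (these sample words are invented for illustration and are not from the original question):

```python
# Hypothetical word lists standing in for the split file contents
file1_words = ["apple", "banana", "cherry", "banana"]
file2_words = ["banana", "date"]

# Words from file1 that do not appear in file2; sets also drop
# duplicates within file1, and ordering is not preserved
result = set(file1_words).difference(set(file2_words))
print(sorted(result))  # ['apple', 'cherry']
```

Note that converting to a set discards both the original order and any repeated occurrences within file1, which is fine when only the vocabulary matters.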

If you care about removing those words from the text of file1:

# Caveat: str.replace removes every occurrence of w as a substring,
# so removing "cat" would also strip the "cat" inside "catalog"
for w in file2_words:
    file1_raw = file1_raw.replace(w, "")
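If only whole-word matches should be removed (so that, say, `cat` is not stripped out of `catalog`), a regex with `\b` word boundaries is one alternative. The `clean_text` helper below is a hypothetical sketch, not part of the original answer:

```python
import re

def clean_text(text, stopwords):
    # Remove each stopword only when it appears as a whole word;
    # re.escape guards against regex metacharacters in the word
    for w in stopwords:
        text = re.sub(r"\b" + re.escape(w) + r"\b", "", text)
    # Collapse the leftover runs of whitespace into single spaces
    return " ".join(text.split())

print(clean_text("the cat sat in the catalog", ["cat"]))
# the sat in the catalog
```

The trade-off is that rejoining on whitespace also normalizes the original spacing and line breaks, which may or may not be acceptable depending on how file1 is used afterwards.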
