Same code gives different output depending on whether it uses a list comprehension or a generator

Asked: 2017-12-24 08:59:59

Tags: python web-scraping beautifulsoup generator list-comprehension

I am trying to scrape this website and collect every word on it, but using generators gives me more words than using lists, and the extra words are not consistent: sometimes there is 1 extra word, sometimes none, sometimes more than 30. I have read the Python docs on generators and looked up several questions about them, and my understanding is that it should not make any difference, so I don't understand what is going on under the hood. I am using Python 3.6. I have also read Generator Comprehension different output from list comprehension?, but I can't see how it applies to this situation.

Here is the first function, which uses generator expressions.

import re

import requests
from bs4 import BeautifulSoup
from nltk.corpus import stopwords


def text_cleaner1(website):
    '''
    This function just cleans up the raw html so that I can look at it.
    Inputs: a URL to investigate
    Outputs: Cleaned text only
    '''
    try:
        site = requests.get(website).text # Connect to the job posting
    except: 
        return   # Need this in case the website isn't there anymore or some other weird connection problem 

    soup_obj = BeautifulSoup(site, "lxml") # Get the html from the site


    for script in soup_obj(["script", "style"]):
        script.extract() # Remove these two elements from the BS4 object

    text = soup_obj.get_text() # Get the text from this

    lines = (line.strip() for line in text.splitlines()) # break into lines

    print(type(lines))

    chunks = (phrase.strip() for line in lines for phrase in line.split("  ")) # break multi-headlines into a line each

    print(type(chunks))

    def chunk_space(chunk):
        chunk_out = chunk + ' ' # Need to fix spacing issue
        return chunk_out  

    text = ''.join(chunk_space(chunk) for chunk in chunks if chunk).encode('utf-8') # Get rid of all blank lines and ends of line

    # Now clean out all of the unicode junk (this line works great!!!)


    try:
        text = text.decode('unicode_escape').encode('ascii', 'ignore') # Need this as some websites aren't formatted
    except:                                                            # in a way that this works, can occasionally throw
        return                                                         # an exception  

    text = str(text)

    text = re.sub("[^a-zA-Z.+3]"," ", text)  # Now get rid of any terms that aren't words (include 3 for d3.js)
                                             # Also include + for C++


    text = text.lower().split()  # Go to lower case and split them apart


    stop_words = set(stopwords.words("english")) # Filter out any stop words
    text = [w for w in text if not w in stop_words]



    text = set(text) # Last, just get the set of these. Ignore counts (we are just looking at whether a term existed
                            # or not on the website)

    return text

Here is the second function, which uses list comprehensions.

def text_cleaner2(website):
    '''
    This function just cleans up the raw html so that I can look at it.
    Inputs: a URL to investigate
    Outputs: Cleaned text only
    '''
    try:
        site = requests.get(website).text # Connect to the job posting
    except: 
        return   # Need this in case the website isn't there anymore or some other weird connection problem 

    soup_obj = BeautifulSoup(site, "lxml") # Get the html from the site


    for script in soup_obj(["script", "style"]):
        script.extract() # Remove these two elements from the BS4 object

    text = soup_obj.get_text() # Get the text from this

    lines = [line.strip() for line in text.splitlines()] # break into lines

    chunks = [phrase.strip() for line in lines for phrase in line.split("  ")] # break multi-headlines into a line each

    def chunk_space(chunk):
        chunk_out = chunk + ' ' # Need to fix spacing issue
        return chunk_out  

    text = ''.join(chunk_space(chunk) for chunk in chunks if chunk).encode('utf-8') # Get rid of all blank lines and ends of line

    # Now clean out all of the unicode junk (this line works great!!!)


    try:
        text = text.decode('unicode_escape').encode('ascii', 'ignore') # Need this as some websites aren't formatted
    except:                                                            # in a way that this works, can occasionally throw
        return                                                         # an exception  

    text = str(text)

    text = re.sub("[^a-zA-Z.+3]"," ", text)  # Now get rid of any terms that aren't words (include 3 for d3.js)
                                             # Also include + for C++


    text = text.lower().split()  # Go to lower case and split them apart


    stop_words = set(stopwords.words("english")) # Filter out any stop words
    text = [w for w in text if not w in stop_words]



    text = set(text) # Last, just get the set of these. Ignore counts (we are just looking at whether a term existed
                            # or not on the website)

    return text

This code gives me different results at random:

text_cleaner1("https://www.indeed.com/rc/clk?jk=02ecc871f377f959&fccid=c46d0116f6e69eae") - text_cleaner2("https://www.indeed.com/rc/clk?jk=02ecc871f377f959&fccid=c46d0116f6e69eae")

1 Answer:

Answer 0 (score: 0):

A generator is "lazy" - it does not execute its code right away, but only later, when the results are needed. That means it does not take values from variables or functions immediately; instead it keeps references to those variables and functions and looks the values up later.
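A minimal sketch of that laziness (the names n, gen and lst are just for illustration): the generator expression only reads n when it is consumed, so it sees whatever value n holds at that moment, while the list comprehension captured the value at the time it was built.

n = 1
gen = (n * x for x in [1, 2, 3])   # nothing runs yet; only a reference to n is kept
lst = [n * x for x in [1, 2, 3]]   # runs immediately, while n == 1

n = 10                             # rebind n before the generator is consumed

print(list(gen))  # [10, 20, 30] - the generator reads n only when iterated
print(lst)        # [1, 2, 3]    - the list was already built with the old n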

Example from the linked question:

all_configs = [
    {'a': 1, 'b':3},
    {'a': 2, 'b':2}
]
unique_keys = ['a','b']


# inner list comprehensions are evaluated while c still refers to the current config
for x in zip( *([c[k] for k in unique_keys] for c in all_configs) ):
    print(x)       # (1, 2) then (3, 2)

print('---')
# inner generators are evaluated later, after c has moved on to the last config
for x in zip( *((c[k] for k in unique_keys) for c in all_configs) ):
    print(list(x)) # [2, 2] then [2, 2]

In the generator version there is one for loop nested inside another for loop.

The inner generators keep a reference to c, not the values taken from c, and they only look those values up later.

Later (when the results actually have to be produced) the outer generator for c in all_configs starts executing. As it runs, it loops and creates the two inner generators, which hold a reference to c rather than values from c, but the looping also keeps rebinding c - so at the end you have a "list" of two inner generators, and c is left holding {'a': 2, 'b': 2}.

Only after that are the inner generators executed, and they finally read the values from c - but by then c already holds {'a': 2, 'b': 2}.
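One way to make the nested-generator version behave like the list version is to materialize the inner expression (for example with list()) while c still refers to the current config, so that only the outer level stays lazy - a sketch of that idea:

for x in zip( *(list(c[k] for k in unique_keys) for c in all_configs) ):
    print(x)   # (1, 2) then (3, 2), matching the list-comprehension version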

By the way: a similar problem shows up in tkinter when buttons created in a for loop use lambda as their command - each lambda keeps a reference to the loop variable instead of the value it had when the button was created.
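The same late-binding effect can be reproduced without a GUI, and the usual workaround (which also works for tkinter button commands) is to bind the current value through a default argument - a small sketch, not tied to tkinter itself:

callbacks_late  = [lambda: i for i in range(3)]      # every lambda sees the final i
callbacks_bound = [lambda i=i: i for i in range(3)]  # default argument captures i now

print([f() for f in callbacks_late])    # [2, 2, 2]
print([f() for f in callbacks_bound])   # [0, 1, 2]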