How to filter web links in nested Python dictionaries and lists

Date: 2018-04-23 17:22:27

Tags: python list dictionary

I used a recursive function to list the web links from a nested structure of Python dictionaries and lists. I would like to know how to filter those web links on a string condition: specifically, I want to keep only the links that contain the word "questions". A simple list comprehension could probably do it, but how? Input JSON (truncated):

{
  "items": [
    {
      "tags": [
        "python-3.x",
        "web-scraping",
        "beautifulsoup",
        "urllib2",
        "urllib"
      ],
      "owner": {
        "reputation": 6,
        "user_id": 7726653,
        "user_type": "registered",
        "profile_image": "https://lh3.googleusercontent.com/-KAJHpZe2R2c/AAAAAAAAAAI/AAAAAAAAD54/UfehMYOIApw/photo.jpg?sz=128",
        "display_name": "Dhanush M",
        "link": "https://stackoverflow.com/users/7726653/dhanush-m"
      },
      "is_answered": false,
      "view_count": 24,
      "answer_count": 1,
      "score": 1,
      "last_activity_date": 1524279738,
      "creation_date": 1524243890,
      "last_edit_date": 1524279738,
      "question_id": 49946597,
      "link": "https://stackoverflow.com/questions/49946597/python-web-scraping-on-large-html-webpages",
      "title": "Python web scraping on large html webpages"
    },
    {
      "tags": [
        "python",
        "string",
        "encoding",
        "python-2.x",
        "python-internals"
      ],
      "owner": {
        "reputation": 16,
        "user_id": 3742872,
        "user_type": "registered",
        "profile_image": "https://i.stack.imgur.com/jx8Ij.jpg?s=128&g=1",
        "display_name": "Cortex",
        "link": "https://stackoverflow.com/users/3742872/cortex"
      },
      "is_answered": true,
      "view_count": 57,
      "answer_count": 3,
      "score": 3,
      "last_activity_date": 1524275907,
      "creation_date": 1524225161,
      "last_edit_date": 1524225665,
      "question_id": 49941052,
      "link": "https://stackoverflow.com/questions/49941052/which-encoding-is-used-for-strings-in-python-2-x",
      "title": "Which encoding is used for strings in Python 2.x?"
    },
    {
      "tags": [
        "python",
        "unicode",
        "ascii",
        "decode"
      ],
      "owner": {
        "reputation": 8,
        "user_id": 6835833,
        "user_type": "registered",
        "profile_image": "https://www.gravatar.com/avatar/1d2ff8635bc92955344c0affc23fe731?s=128&d=identicon&r=PG&f=1",
        "display_name": "double_negative",
        "link": "https://stackoverflow.com/users/6835833/double-negative"
      },
      "is_answered": true,
      "view_count": 28,
      "accepted_answer_id": 49945731,
      "answer_count": 3,
      "score": 0,
      "last_activity_date": 1524240482,
      "creation_date": 1524216785,
      "last_edit_date": 1524221387,
      "question_id": 49938517,
      "link": "https://stackoverflow.com/questions/49938517/python-unreproducible-unicodedecodeerror",
      "title": "Python Unreproducible UnicodeDecodeError"
    },
    {
      "tags": [
        "python",
        "list",
        "unicode"
      ],
      "owner": {
        "reputation": 1,
        "user_id": 9158477,
        "user_type": "registered",
        "profile_image": "https://lh6.googleusercontent.com/-SiMJZAP0eNU/AAAAAAAAAAI/AAAAAAAAA24/pWt8jygFt1w/photo.jpg?sz=128",
        "display_name": "Erwin Schleier",
        "link": "https://stackoverflow.com/users/9158477/erwin-schleier"
      },
      "is_answered": false,
      "view_count": 47,
      "answer_count": 0,
      "score": 0,
      "last_activity_date": 1524232267,
      "creation_date": 1524174371,
      "last_edit_date": 1524180439,
      "question_id": 49930710,
      "link": "https://stackoverflow.com/questions/49930710/python-unicode-to-list-reciving-double-double-quotes",
      "title": "python unicode to list, reciving double double quotes"
    }
  ]
}

import json

data=r'C:\Users\Kaleab\Desktop\GISN24\Python Programming\Python_Real_World_Exercises\unicode_json.json'
def js_r(data):
    with open(data, encoding='utf-8') as f_in:
        return json.load(f_in)


my_dic_data = js_r(data)
def get_recursively(search_dict=my_dic_data, field='link'):
    '''Takes a dict with nested lists and dicts, and searches all dicts for a key matching the field provided.'''
    fields_found = []
    for key, value in search_dict.items():
        if key == field:
            fields_found.append(value)
        elif isinstance(value, dict):
            # Recurse into nested dicts
            results = get_recursively(value, field=field)
            for result in results:
                fields_found.append(result)
        elif isinstance(value, list):
            # Recurse into any dicts found inside nested lists
            for item in value:
                if isinstance(item, dict):
                    more_results = get_recursively(item, field=field)
                    for another_result in more_results:
                        fields_found.append(another_result)
    return fields_found
if __name__ == "__main__":
    my_dic_data = js_r(data)
    print("This is my dictionary", my_dic_data)
    my_search_field=get_recursively()
    print("Searching field='link' yielded",my_search_field)
    final=[print(x) for x in my_search_field if 'questions' in my_search_field]
    print (final)

Searching field='link' yielded ['https://www.example.com/users/123456/abbe-m', 'https://example.com/questions/125678/python-web-scraping-on-large-html-webpages']. Expected result: ['https://stackoverflow.com/questions/125678/python-web-scraping-on-large-html-webpages']. I want to keep the question links, not the user links. How can I do that?
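For reference, a minimal sketch of such a filter, assuming my_search_field is the list of links returned by get_recursively above; it uses the standard-library urllib.parse to check the URL path rather than a plain substring, so only /questions/ pages are kept:

from urllib.parse import urlparse

# Keep only links whose URL path starts with /questions/,
# i.e. question pages rather than user profiles.
question_links = [link for link in my_search_field
                  if urlparse(link).path.startswith('/questions/')]
print(question_links)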

1 Answer:

Answer 0 (score: 1)

Solution

final=[x for x in my_search_field if 'questions' in x]

Explanation

1) Use x, not my_search_field, in the condition of this line:

final=[print(x) for x in my_search_field if 'questions' in my_search_field]

2) The list comprehension does not do what you want: with print(x) inside it, final becomes a list of None values. Remove the "print" and keep just x.
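Putting both points together, the end of the script might look like this (a sketch, assuming my_search_field has already been built by get_recursively as in the question):

# Build the filtered list first, then print the links one per line;
# 'questions' is checked against each individual link x, not the whole list.
final = [x for x in my_search_field if 'questions' in x]
for link in final:
    print(link)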