I'm trying to use Python's Requests to capture a unique URL.
The source page is https://www.realestate.com.au/property/1-10-grosvenor-rd-terrigal-nsw-2260
and the target URL is http://www.realestate.com.au/sold/property-unit-nsw-terrigal-124570934
This is what I try:

import requests
import csv
import datetime
import pandas as pd
from lxml import html

df = pd.read_excel(r"C:\Python27\Projects\REA_UNIQUE_ID\UN.xlsx", sheetname="UN")
dnc = df['Property']
dnc_list = list(dnc)
url_base = "https://www.realestate.com.au/property/"
URL_LIST = []
for nd in dnc_list:
    nd = nd.strip()
    nd = nd.lower()
    nd = nd.replace(" ", "-")
    URL_LIST.append(url_base + nd)
text2search = '''The information provided'''
with open('Auctions.csv', 'wb') as csv_file:
    writer = csv.writer(csv_file)
    for index, url in enumerate(URL_LIST):
        page = requests.get(url)
        print '\r' 'Scraping URL ' + str(index + 1) + ' of ' + str(len(URL_LIST)),
        if text2search in page.text:
            tree = html.fromstring(page.content)
            (title,) = (x.text_content() for x in tree.xpath('//title'))
            (Unique_ID,) = (x.text_content() for x in tree.xpath('//a[@class="property-value__link--muted rui-button-brand property-value__btn-listing"]'))
            # (sold,) = (x.text_content().strip() for x in tree.xpath('//p[@class="property-value__agent"]'))
            writer.writerow([title, Unique_ID])
The CSV comes back with "View Listing" (the link text) instead of the URL.
Unless I'm mistaken, I'm searching on the correct class, so is the problem that the href isn't unique enough? Should I be taking a different approach to capture the URL rather than the text?
The full code is included above in case it's needed.
Thanks in advance.
Answer 0 (score: 1)
text_content() gives you only the element's text. Try grabbing the @href attribute instead, like this:
(Unique_ID,) = (x for x in tree.xpath('//a[@class="property-value__link--muted rui-button-brand property-value__btn-listing"]/@href'))
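To see the difference between the two XPath expressions without hitting the live site, here is a minimal, self-contained sketch. It parses a hard-coded HTML snippet (assumed markup, using the same class name as in the question) rather than a real page:

```python
from lxml import html

# Hard-coded stand-in for the live page; the anchor mirrors the
# markup the question's XPath is targeting.
snippet = '''<div>
<a class="property-value__link--muted rui-button-brand property-value__btn-listing"
   href="http://www.realestate.com.au/sold/property-unit-nsw-terrigal-124570934">View Listing</a>
</div>'''

tree = html.fromstring(snippet)
anchor = '//a[@class="property-value__link--muted rui-button-brand property-value__btn-listing"]'

# text_content() on the element returns only the visible link text
(text,) = (x.text_content() for x in tree.xpath(anchor))

# appending /@href returns the attribute value, i.e. the URL itself
(href,) = tree.xpath(anchor + '/@href')

print(text)  # View Listing
print(href)  # http://www.realestate.com.au/sold/property-unit-nsw-terrigal-124570934
```

Note that an `/@href` XPath already yields a list of strings, so no extra `text_content()` call is needed on the result.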