I am trying to scrape data from the following URL (http://www.ancient-hebrew.org/m/dictionary/1000.html).
Each Hebrew word section starts with img URLs, followed by two pieces of text: the actual Hebrew word and its pronunciation. For example, the first entry on the page is "img1 img2 img3 אֶלֶף e-leph", where the Hebrew word shows up as Unicode after downloading the HTML with wget.
I am trying to collect this information in order: first the image files, then the Hebrew word, then the pronunciation. Finally, I want to find the URL of the audio file.
Answer (score: 0)
I wrote a script that should help you. It gathers all the information you asked for. Because of the Hebrew letters it could not be saved as a JSON file, otherwise the text would be stored as escaped bytes. I know you posted this question a while ago, but I found it today and decided to give it a try. Anyway, here it is:
import requests
from bs4 import BeautifulSoup
import re
url = 'http://www.ancient-hebrew.org/m/dictionary/1000.html'
page = requests.get(url)
soup = BeautifulSoup(page.text, 'html.parser')
def images():
    # Gathers all the images (this includes unwanted gifs)
    imgs = soup.find_all('img')
    # Gets the src attribute to form the full url
    srcs = [img['src'] for img in imgs]
    base_url = 'https://www.ancient-hebrew.org/files/'
    imgs = {}
    section = 0
    # Goes through the source of every image
    for src in srcs:
        # Checks if it is a gif; these act as separators
        if src.endswith('.gif'):
            # If it is a gif, change sections (acts as separator)
            section += 1
        else:
            # If it is a letter image, use regex to extract the part of src we want and form the full url
            actual_link = re.search(r'files/(.+\.jpg)', src)
            imgs.setdefault(section, []).append(base_url + actual_link.group(1))
    return imgs
def hebrew_letters():
    # Gets the Hebrew words and strips surrounding whitespace
    h_letters = [h_letter.text.strip() for h_letter in soup.find_all('font', attrs={'face': 'arial'})]
    return h_letters
def english_letters():
    # Gets the English pronunciations by regex; this part was difficult because these letters are not surrounded by tags in the html
    letters = ''.join(str(content) for content in soup.find('table', attrs={'width': '90%'}).td.contents)
    search_text = re.finditer(r'/font>\s+(.+?)\s+<br/>', letters)
    e_letters = [letter.group(1) for letter in search_text]
    return e_letters
def get_audio_urls():
    # Gets all the mp3 hrefs for the audio part
    base_url = 'https://www.ancient-hebrew.org/m/dictionary/'
    links = soup.find_all('a', href=re.compile(r'\d+\s*\.mp3$'))
    audio_urls = [base_url + link['href'].replace('\t', '') for link in links]
    return audio_urls
def main():
    # Gathers scraped data
    imgs = images()
    h_letters = hebrew_letters()
    e_letters = english_letters()
    audio_urls = get_audio_urls()
    # Writes the data to a UTF-8 encoded text file (needed for the Hebrew letters)
    with open('scraped_hebrew.txt', 'w', encoding='utf-8') as text_file:
        for img, h_letter, e_letter, audio_url in zip(imgs.values(), h_letters, e_letters, audio_urls):
            text_file.write('Image Urls: ' + ' - '.join(im for im in img) + '\n')
            text_file.write('Hebrew Letters: ' + h_letter + '\n')
            text_file.write('English Letters: ' + e_letter + '\n')
            text_file.write('Audio Urls: ' + audio_url + '\n\n')
if __name__ == '__main__':
    main()
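If JSON output is still wanted, the Hebrew letters can usually be kept readable by passing ensure_ascii=False to json.dump and writing the file as UTF-8, rather than letting them be escaped. The following is only a minimal sketch built on the same data main() collects; the save_as_json name and the output filename are made up for illustration:

import json

def save_as_json(imgs, h_letters, e_letters, audio_urls, path='scraped_hebrew.json'):
    # Builds one record per word section from the scraped data above
    records = []
    for img, h_letter, e_letter, audio_url in zip(imgs.values(), h_letters, e_letters, audio_urls):
        records.append({
            'image_urls': img,           # list of jpg urls for this section
            'hebrew': h_letter,          # the Hebrew word
            'transliteration': e_letter, # the pronunciation text
            'audio_url': audio_url,      # link to the mp3 file
        })
    # ensure_ascii=False keeps the Hebrew characters as readable text
    # instead of \uXXXX escapes; the file itself is written as UTF-8
    with open(path, 'w', encoding='utf-8') as f:
        json.dump(records, f, ensure_ascii=False, indent=2)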