I am new to web scraping, and while running the following code I got unexpected results:
from bs4 import BeautifulSoup as soup
import requests
header = {'User-Agent':'Chrome/71.0.3578.98 Mozilla/64.0'}
url = 'https://www.amazon.in/Philips-Trimmer-Cordless-QT4001-15/dp/B00L8PEEAI'
res = requests.get(url, headers = header)
bs = soup(res.text, 'html.parser')
print(bs.div)
which yields the following output:
<!-- From remote config v3-->
<script type="text/javascript">
(function(d) {
    document.createElement("header");
    function b(e) {
        return [].slice.call(e)
    }
    function c(f, e) {
        return {
            m: f,
            a: b(e)
        }
    }
    var a = function(f) {
        var g = {};
        g._sourceName = f;
        g._replay = [];
        g.getNow = function(i, h) {
            return h
        };
        function e(i, h, j) {
            i[j] = function() {
                g._replay.push(h.concat(c(j, arguments)))
            }
        }
        g.when = function() {
            var i = [c("when", arguments)];
            var h = {};
            e(h, i, "run");
            e(h, i, "declare");
            e(h, i, "publish");
            e(h, i, "build");
            return h
        };
        e(g, [], "declare");
        e(g, [], "build");
        e(g, [], "publish");
        e(g, [], "importEvent");
        a._shims.push(g);
        return g
    };
    a._shims = [];
    if (!d.$Nav) {
        d.$Nav = a("rcx-nav")
    }
    if (!d.$Nav.make) {
        d.$Nav.make = a
    }
}(window));
which clearly isn't what I was looking for, and I cannot work with it. For example, to fetch the product name I would write something like:
bs.findAll("div",{"id":"titleSection"})[0].span.text.strip()
which, given the output above, throws an IndexError. Oddly, when I run this code in a loop over several similar Amazon products from a terminal, it works fine for the first 10-15 products and then throws the same error; sometimes it gets through 130 items before breaking down, and on other occasions it fails on the very first iteration. I would be very thankful for an explanation (and possibly a solution) for this seemingly random behaviour.
Answer 0 (score: 1)
The requests library cannot fetch pages that are rendered with JavaScript, so to work around the problem you can choose:
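One common option is to let a real browser render the page and then hand the resulting HTML to BeautifulSoup. Below is a minimal sketch using Selenium, assuming Selenium and a compatible ChromeDriver are installed; the selector simply mirrors the code in the question:

from bs4 import BeautifulSoup as soup
from selenium import webdriver

url = 'https://www.amazon.in/Philips-Trimmer-Cordless-QT4001-15/dp/B00L8PEEAI'

driver = webdriver.Chrome()  # assumes a compatible ChromeDriver is available
try:
    driver.get(url)  # the browser executes the page's JavaScript
    bs = soup(driver.page_source, 'html.parser')
    title_section = bs.find('div', {'id': 'titleSection'})
    if title_section and title_section.span:
        print(title_section.span.text.strip())
    else:
        print('titleSection not found - the page may be a bot check')
finally:
    driver.quit()

The guard around title_section also avoids the IndexError from the question when Amazon serves a response that does not contain the expected element.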