I'm trying to find the minimum and maximum values in an array.
I've tried various approaches, but I can't find a solution that does what I need.
I have an array-like object with the following values:
{
    0: { exchange: 'bitfix', totaleth: 334 },
    1: { exchange: 'coinbase', totaleth: 6000 },
    2: { exchange: 'koinex', totaleth: 3000 }
}
This object behaves like an array, and I want to display the minimum and maximum entries based on the totaleth field. Is there a solution for this?
I've tried various approaches such as map, reduce, and a for loop, but nothing seems to work.
Answer 0 (score: 1)
Assuming an array of objects, you can reduce the array down to its minimum and maximum entries.
var array = [
        { exchange: 'bitfix', totaleth: 334 },
        { exchange: 'coinbase', totaleth: 6000 },
        { exchange: 'koinex', totaleth: 3000 }
    ],
    { min, max } = array.reduce(
        (r, o, i) => i
            ? {
                min: r.min.totaleth < o.totaleth ? r.min : o,
                max: r.max.totaleth > o.totaleth ? r.max : o
            }
            : { min: o, max: o },
        undefined
    );
console.log('min', min);
console.log('max', max);
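As a side note (not part of the original answer): if only the numeric extremes are needed rather than the whole objects, a shorter sketch using map with Math.min/Math.max would also work, assuming the same sample data:

```javascript
const array = [
    { exchange: 'bitfix', totaleth: 334 },
    { exchange: 'coinbase', totaleth: 6000 },
    { exchange: 'koinex', totaleth: 3000 }
];

// Pull out the totaleth values, then take the numeric extremes.
// This yields the numbers only, not the surrounding objects.
const values = array.map(o => o.totaleth);
const minEth = Math.min(...values); // 334
const maxEth = Math.max(...values); // 6000

console.log('min', minEth);
console.log('max', maxEth);
```

The reduce version above is preferable when you need to know which exchange the extreme value belongs to.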
Answer 1 (score: 1)
OK, so you can use the sort function, but keep in mind that it modifies the original array.
const someArr = [
    { "_id": "5c585ed5a9a5b931c3057d48", "exchange": "Poloniex", "totaleth": -338, "dt": "2019-02-04T15:48:37.232Z", "__v": 0 },
    { "_id": "5c585ed5a9a5b931c3057d47", "exchange": "Bitrex", "totaleth": -227, "dt": "2019-02-04T15:48:37.222Z", "__v": 0 },
    { "_id": "5c585ed5a9a5b931c3057d46", "exchange": "Gemini", "totaleth": 86, "dt": "2019-02-04T15:48:37.220Z", "__v": 0 },
    { "_id": "5c585ed5a9a5b931c3057d45", "exchange": "Bitfinex", "totaleth": -373, "dt": "2019-02-04T15:48:37.219Z", "__v": 0 },
    { "_id": "5c585ed4a9a5b931c3057d44", "exchange": "Binance", "totaleth": 6531, "dt": "2019-02-04T15:48:36.586Z", "__v": 0 }
];
console.log(
someArr.sort((a, b) => {
return a.totaleth - b.totaleth;
})
);
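To address the mutation caveat mentioned above: sorting a copy (via slice) leaves the original array intact, and the first and last elements of the sorted copy are then the min and max by totaleth. A minimal sketch, using the same sample data with the _id/dt/__v fields trimmed for brevity:

```javascript
const someArr = [
    { exchange: 'Poloniex', totaleth: -338 },
    { exchange: 'Bitrex', totaleth: -227 },
    { exchange: 'Gemini', totaleth: 86 },
    { exchange: 'Bitfinex', totaleth: -373 },
    { exchange: 'Binance', totaleth: 6531 }
];

// slice() makes a shallow copy, so the original order is preserved.
const sorted = someArr.slice().sort((a, b) => a.totaleth - b.totaleth);
const min = sorted[0];                 // { exchange: 'Bitfinex', totaleth: -373 }
const max = sorted[sorted.length - 1]; // { exchange: 'Binance', totaleth: 6531 }

console.log('min', min);
console.log('max', max);
```

Sorting is O(n log n) versus a single O(n) pass with reduce, which only matters for very large arrays.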
Answer 2 (score: 0)
Your code worked, thank you very much nina scholz.
from bs4 import BeautifulSoup
from selenium import webdriver as wd
from selenium.common.exceptions import StaleElementReferenceException
from selenium.common.exceptions import TimeoutException
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

driver = wd.Firefox()
quote_page = "https://www.zillow.com/homes/for_sale/Minneapolis-MN/condo_type/5983_rid/0-175000_price/0-685_mp" \
             "/globalrelevanceex_sort/45.075097,-93.09248,44.866211,-93.430309_rect/11_zm/"
# print (soup.prettify())
driver.get(quote_page)
html = driver.page_source
soup = BeautifulSoup(html, 'html.parser')

webURL = []
while True:
    # Wait for the page to load before scraping links.
    try:
        element = WebDriverWait(driver, 5).until(EC.presence_of_element_located((By.ID, "element_id")))
    except TimeoutException:
        print("Timeout Exception")
    # Collect every listing link on the current page.
    elems = driver.find_elements_by_xpath("//a[@href]")
    for elem in elems:
        try:
            if 'homedetails' in elem.get_attribute("href"):
                print(elem.get_attribute("href"))
                webURL.append(elem.get_attribute("href"))
        except StaleElementReferenceException:
            print("test")
    # Advance through the result pages until there is no NEXT link.
    try:
        driver.find_element_by_link_text('NEXT').click()
        print('Going to next page')
    except NoSuchElementException:
        break

for item in webURL:
    print(item)

newPage = webURL[0]
driver.get(newPage)
price = driver.find_element_by_class_name("price").text
print(price)