I looked through several Jenkins shared library examples and noticed that some of them define the call method like this:
def call (Map parameters) { .... }
and others like this:
def call (Map parameters = [:]) { .... }
i.e., one version with = [:] and one without it.
What is the difference between these two parameter definitions?
Answer 0 (score: 1)
Groovy supports a feature called default arguments.
The first example requires you to pass a value for the parameter:
call(['key': 'value'])
The second example can be called the same way, but it can also be called without any argument, in which case the parameter falls back to its default value (an empty map):
call()
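To make the difference concrete, here is a minimal sketch of a shared library step; the file name vars/greet.groovy and the message key are made up for illustration:

// vars/greet.groovy -- hypothetical shared library step
def call(Map parameters = [:]) {
    // parameters defaults to an empty map when the caller passes nothing
    def message = parameters.message ?: 'hello'
    echo message
}

In a pipeline that loads the library, greet(message: 'hi') prints the supplied value, while greet() still works and prints 'hello'. With the first form, def call(Map parameters) { .... }, calling greet() with no argument would fail at runtime with a MissingMethodException, because there is no matching signature.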