Previously it was the System.Drawing namespace for images. I'm not sure which library I need in ASP.NET 5 to get an object of type Image.
Answer 0 (score: 8)
There are several libraries trying to fill this gap, even though there is no drop-in replacement for System.Drawing.
I ran into the same problem and was looking at these libraries:
Update
It looks like these libs are not yet ready to work without System.Drawing. There are reports that you can still reference System.Drawing if an older .NET version is installed on the server. Add the following bit to project.json:
"frameworks": {
"aspnet50": {
"dependencies": {
"System.Drawing": ""
}
}
}
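With that reference in place (and the full .NET Framework available on the server), the familiar System.Drawing types should work as before. A minimal sketch of loading and resizing an image; the file paths here are placeholders:

using System.Drawing;

// Load an image from disk, create a 200x200 copy, and save it.
using (var original = Image.FromFile("photo.jpg"))
using (var thumbnail = new Bitmap(original, new Size(200, 200)))
{
    thumbnail.Save("photo-thumb.jpg");
}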
More on this topic here: https://github.com/imazen/Graphics-vNext. The gist of that discussion is that there is no real alternative yet.
更新2
Scott Hanselman blogged about this. It seems server-side image processing is simply not a priority for Microsoft at this point.
So for now the best options are still the projects mentioned above.
Answer 1 (score: 0)
I added this package from NuGet and it solved the problem.
import requests
import sqlite3
import Keywords
from bs4 import BeautifulSoup
from time import sleep
from random import randint
from datetime import datetime
from datetime import timedelta
# ----- Initializing Database & Notification Service -----
connect = sqlite3.connect('StoredArticles.db')
cursor = connect.cursor()
print("Connection created.")
try:
    cursor.execute('''CREATE TABLE articlestable (article_time TEXT, article_title TEXT, article_keyword TEXT,
                      article_link TEXT, article_description TEXT, article_entry_time DATETIME)''')
    cursor.execute('''CREATE UNIQUE INDEX index_article_link ON articlestable(article_link)''')
except sqlite3.OperationalError:
    # Table and index already exist from an earlier run.
    pass
print("Table ready.")
while True:
    class Scrapers:
        # ----- Initialize Keywords -----
        def __init__(self):
            self.article_keyword = None
            self.article_title = None
            self.article_link = None
            self.article_time = None
            self.article_time_drop = None
            self.article_description = None
            self.article_entry_time = None
            self.headers = {
                'User-Agent':
                    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) ' +
                    'Version/14.0.1 Safari/605.1.15'
            }

        def scraping_globenewswire(self, page):
            # Fetch one page of the GlobeNewswire newsroom listing.
            url = 'https://www.globenewswire.com/NewsRoom?page=' + str(page)
            r = requests.get(url, headers=self.headers)
            soup = BeautifulSoup(r.text, 'html.parser')
            articles = soup.select('.main-container > .row')
            print("GlobeNewswire - Scraping page " + str(page) + "...")
            sleep(randint(0, 1))
            for item in articles:
                self.article_title = item.select_one('a[data-autid="article-url"]').text.strip()
                self.article_time = item.select_one('span[data-autid="article-published-date"]').text.strip()
                self.article_link = 'https://www.globenewswire.com' + \
                    item.select_one('a[data-autid="article-url"]')['href']
                self.article_description = item.select_one('span.pagging-list-item-text-body').text.strip()
                self.article_entry_time = datetime.now()
                # The UNIQUE index on article_link makes INSERT OR IGNORE skip duplicates.
                cursor.execute('''INSERT OR IGNORE INTO articlestable VALUES(?,?,?,?,?,?)''',
                               (self.article_time, self.article_title, self.article_keyword, self.article_link,
                                self.article_description, self.article_entry_time))
                print(self.article_title)
            return

    # ----- End of Loops -----
    scraper = Scrapers()

    # ----- Range of Pages to scrape through -----
    for x in range(1, 3):
        scraper.scraping_globenewswire(x)

    # ----- Add to Database -----
    connect.commit()
    print("Process done. Starting to sleep again. Time: " + str(datetime.now()))
    sleep(randint(5, 12))