I have a Python script (written in a Jupyter notebook) that I would like to run in Azure. The script pulls data from an API source (which is updated every 24 hours) and writes it into an Azure SQL database, so each run of this automated script should refresh the database table.
Could someone please help me with this?
Below is the Python code I have written.
Answer 0 (score: 0)
I don't use Azure or Jupyter notebooks, but I think I have a solution. If you don't shut your computer down overnight, change your code to:
import time
import json
import pandas as pd
import pyodbc
import requests

while 1:
    # Pull the latest data from the API (the feed is refreshed every 24 hours)
    response = requests.get("https://data.buffalony.gov/resource/d6g9-xbgu.json")
    crime_data = json.loads(response.text)
    df = pd.DataFrame.from_dict(crime_data)

    # Keep only the columns that map to the SQL table
    dff = df[['case_number', 'day_of_week', 'incident_datetime',
              'incident_description', 'incident_id', 'incident_type_primary']].copy()

    # Connect to the Azure SQL database (replace server, database and credentials)
    connection = pyodbc.connect('Driver={ODBC Driver 17 for SQL Server};'
                                'Server=servername;Database=Databasename;UID=admin;PWD=admin')
    cur = connection.cursor()

    # Convert each DataFrame row into a plain list of parameter values
    rows = []
    for i in range(dff.shape[0]):
        rows.append(dff.iloc[i].tolist())

    sql = '''\
    INSERT INTO [dbo].[FF] ([case_number],[day_of_week],[incident_datetime],[incident_description],[incident_id],[incident_type_primary]) VALUES (?,?,?,?,?,?)
    '''

    # Insert the rows one at a time, then commit and close the connection
    for i in range(dff.shape[0]):
        cur.execute(sql, rows[i])
    connection.commit()
    connection.close()

    # Wait 24 hours (86400 seconds) before the next refresh
    time.sleep(86400)
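A side note on the insert loop above: pyodbc can also send all of the rows in a single call with cursor.executemany, which is usually faster once the pull gets larger. A minimal sketch, assuming the same dff DataFrame, sql statement and connection as above:

    # Build the parameter list once and let pyodbc batch the insert
    rows = [list(r) for r in dff.itertuples(index=False)]
    cur = connection.cursor()
    cur.fast_executemany = True  # optional pyodbc switch that speeds up bulk inserts
    cur.executemany(sql, rows)
    connection.commit()

Also note that a plain INSERT appends every record on every run, so if the feed returns overlapping records the table will accumulate duplicates unless they are filtered out or merged on a key such as case_number.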
If that is not the case (the computer does get shut down), create a new Python program in your startup folder, like this:
import time, os

while 1:
    # Replace "update hour" with a two-digit hour string such as "03".
    # path/to/any_file.txt (create it beforehand) stores the day-of-week
    # abbreviation of the last run, so the script fires at most once per day.
    if time.ctime()[11:13] >= "update hour" and time.ctime()[0:4] != open("path/to/any_file.txt").read():
        file = open("path/to/any_file.txt", "w")
        file.write(time.ctime()[0:4])  # remember the day the update already ran
        file.close()
        os.system("python /path/to/file.py")
    time.sleep(60)  # added so the loop checks once a minute instead of spinning at full CPU
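The string slicing of time.ctime() works but is easy to misread; the same once-a-day check can be written with the standard datetime module. A sketch under the assumption that 3 a.m. is the update hour and last_run.txt stores the date of the last run (both are placeholders to adjust):

    import datetime
    import os
    import time

    UPDATE_HOUR = 3                       # placeholder: hour of day to run the update
    STATE_FILE = "path/to/last_run.txt"   # placeholder: remembers the last run date

    while True:
        now = datetime.datetime.now()
        last_run = open(STATE_FILE).read().strip() if os.path.exists(STATE_FILE) else ""
        if now.hour >= UPDATE_HOUR and last_run != now.date().isoformat():
            with open(STATE_FILE, "w") as f:
                f.write(now.date().isoformat())  # mark today's update as done
            os.system("python /path/to/file.py")
        time.sleep(60)  # check once a minute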
Answer 1 (score: -1)
A task scheduler such as Azure WebJobs will do this for you.
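For example, an App Service triggered WebJob can be deployed as a zip that contains the script plus a settings.job file holding a CRON schedule, so nothing has to stay running on a local machine. A minimal sketch, assuming a daily 03:00 run and a script named run.py; the file names and schedule are illustrative, and the App Service needs a Python runtime available to the WebJob host:

settings.job (six-field CRON: seconds minutes hours day month day-of-week):

    { "schedule": "0 0 3 * * *" }

run.py:

    # Entry point the WebJob host executes on each scheduled run; do one
    # fetch-and-insert pass here (the body of the loop from Answer 0, without
    # the while loop and time.sleep, since the scheduler now handles timing).
    import datetime
    print("update started at", datetime.datetime.now())
    # ... fetch the API data and insert it into the Azure SQL table ...

Scheduled triggered WebJobs generally need the App Service "Always On" setting enabled so the internal scheduler keeps running between invocations.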