I am just getting started with Python and am trying to use an open-source model of the German energy network. The model uses a makefile to automate several steps of the code. The problem is that I cannot get the makefile to run. I am on Windows, using JupyterLab in the Mozilla (Firefox) browser, and I have already completed the steps of the "Before running the makefile..." section.
I opened the code path with JupyterLab, but if I simply type "make test" in the console (as given in the model documentation), I get a syntax error:
File "<ipython-input-14-8152ed2c6ffd>", line 1
make test
^
SyntaxError: invalid syntax
I tried to google a solution, tried installing py-make (a tool whose documentation I don't really understand), and tried using cmd instead of JupyterLab. None of these worked.
I suspect this is more of a beginner's question, since my googling suggests the command should normally work. Anyway, here is the code of the Makefile:
###################################################################################
# #
# Copyright "2015" "NEXT ENERGY" #
# #
# Licensed under the Apache License, Version 2.0 (the "License"); #
# you may not use this file except in compliance with the License. #
# You may obtain a copy of the License at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# Unless required by applicable law or agreed to in writing, software #
# distributed under the License is distributed on an "AS IS" BASIS, #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
# See the License for the specific language governing permissions and #
# limitations under the License. #
# #
###################################################################################
################################## IMPORTANT ######################################
#
# Before running the makefile check the following:
#
# I. Make sure that you installed PostgreSQL, osmosis and osm2pgsql on your system.
#
# II. Adjust the config.txt file according to your environment (e.g. adapted
# folder structure, location of binary files, database connection, etc.).
#
# III. Make sure that the different paths, options, etc. in the config file are set
# correctly. These settings are passed to variables of this makefile in the
# section 'Environment Variables'. The paths are based on the folder structure
# delivered with the SciGRID code. Change the paths according to where the data,
# the tools and software used are located on your system.
#
# IV. Make sure that the name of the databases provided in the config.txt is unique.
# Otherwise, the makefile will overwrite existing data in the database.
# Note: you can run "make drop_database" to drop an existing database.
################################## OUTLINE ########################################
# This makefile executes the following tasks:
#
# Step1.
# Download the OSM raw data.
#
# Step2.
# Filter the OSM raw data from step1 spatially (polyfile) for OSM raw power data.
#
# Step3.
# Export the OSM filtered power data (from step2) to the created database.
#
# Step4.
# Execute the abstraction script on the database created in step3 to obtain the
# abstracted transmission network.
# Stores a visualization of the abstracted network.
#
# Step5.
# Stores the vertices and links of the abstracted network to .csv files.
#
# Step Update.
# Update your database and the network topology.
#=================================================================================#
# Setting the config file #
#=================================================================================#
config:=default_config.mk
include $(config)
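# Note: command-line variable assignments override the default above, so a
# different config can be used without editing this makefile, e.g.
# (hypothetical file name): make config=my_config.mk scigrid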
#=================================================================================#
# Environment Variables #
#=================================================================================#
export PGCLUSTER=$(postgres_cluster)
export PGDATABASE=$(postgres_database)
export PGUSER=$(postgres_user)
export PGPORT=$(postgres_port)
export PGHOSTADDR=$(postgres_host)
export PGPASS=$(postgres_password)
export JAVACMD_OPTIONS=-Djava.io.tmpdir=$(osmosis_tmp_folder)
#=================================================================================#
# Output Files #
#=================================================================================#
TOPOLOGY_CSV:= $(network_folder)/vertices_$(postgres_database).csvdata $(network_folder)/links_$(postgres_database).csvdata $(network_folder)/vertices_ref_id_$(postgres_database).csvdata
TOPOLOGY_PLOT:= $(visualization_folder)/topology_$(postgres_database).png
#=================================================================================#
# Definition of tasks #
#=================================================================================#
.PHONY: all
.PHONY: scigrid
.PHONY: filter_OSM
.PHONY: download
.PHONY: clean_all
.PHONY: clean
.PHONY: drop_database
.PHONY: topology
.PHONY: test
# Automatically performs the steps that are needed.
scigrid: topology
@echo "--> All done."
all: clean_all download filter_OSM scigrid topology
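# The 'test' target re-invokes make recursively via $(MAKE), overriding the
# data paths and database name on the command line so that only the small
# Bremen extract is downloaded, filtered, and abstracted.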
test:
@echo 'Running a test SciGRID abstraction using the OSM data for the state of Bremen (Germany):'
@$(MAKE) OSM_raw_data_URL=http://download.geofabrik.de/europe/germany/bremen-latest.osm.pbf OSM_raw_data=../data/01_osm_raw_data/bremen-latest.osm.pbf download
@$(MAKE) OSM_raw_power_data=../data/02_osm_raw_power_data/br_power_latest.osm.pbf OSM_raw_data=../data/01_osm_raw_data/bremen-latest.osm.pbf osmosis_tmp_folder=/tmp filter_OSM
@$(MAKE) OSM_raw_power_data=../data/02_osm_raw_power_data/br_power_latest.osm.pbf postgres_database=br_power_latest scigrid
# Step5: Save the network topology as CSV files
topology: log/abstraction.done
@echo "\n### Step5 ### \nSaving network topology as CSV files to folder '$(network_folder)':"
@psql --dbname=$(postgres_database) --username=$(postgres_user) --port=$(postgres_port) --host=$(postgres_host) -c "COPY (SELECT * FROM vertices ORDER BY v_id) TO STDOUT WITH CSV HEADER DELIMITER ',' QUOTE '''' ENCODING 'UTF8';" > $(network_folder)/vertices_$(postgres_database).csvdata
@psql --dbname=$(postgres_database) --username=$(postgres_user) --port=$(postgres_port) --host=$(postgres_host) -c "COPY (SELECT * FROM links ORDER BY l_id) TO STDOUT WITH CSV HEADER DELIMITER ',' QUOTE '''' ENCODING 'UTF8';" > $(network_folder)/links_$(postgres_database).csvdata
@psql --dbname=$(postgres_database) --username=$(postgres_user) --port=$(postgres_port) --host=$(postgres_host) -c "COPY (SELECT * FROM vertices_ref_id ORDER BY v_id) TO STDOUT WITH CSV HEADER DELIMITER ',' QUOTE '''' ENCODING 'UTF8';" > $(network_folder)/vertices_ref_id_$(postgres_database).csvdata
@echo "--> Done. Saving network topology as CSV files."
# Step4: Execute the abstraction script on the database created in step3
log/abstraction.done: log/database_import.done
@echo "\n### Step4 ### \nRunning the abstraction script SciGRID.py on the database '$(postgres_database)':"
@if [ -z "$(postgres_password)" ]; \
then \
python SciGRID.py -U $(postgres_user) -P $(postgres_port) -H $(postgres_host) -D $(postgres_database) ;\
else \
python SciGRID.py -U $(postgres_user) -P $(postgres_port) -H $(postgres_host) -D $(postgres_database) -X $(postgres_password) ; \
fi
@if [ -e ../data/04_visualization/topology_$(postgres_database).png ]; then mv ../data/04_visualization/topology_$(postgres_database).png $(visualization_folder)/topology_$(postgres_database).png; fi
@touch log/abstraction.done
@echo "--> Done. SciGRID abstraction."
# Step3: Export the OSM filtered power data (from step2) to the created database.
log/database_import.done:
@if [ -e $(OSM_raw_power_data) ]; then echo "\n### Step3 ### \nExport the OSM filtered power data \n '$(OSM_raw_power_data)' \nto the database \n '$(postgres_database)':"; else echo "$(OSM_raw_power_data) does not exist."; exit 1; fi
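# If the target database does not exist yet: create it, enable PostGIS and
# hstore, and (if available) pre-load the vertices_ref_id mapping from the
# .csvdata file of a previous release. Otherwise just reset the 'visible'
# flags of the existing mapping before the new import.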
@if (! psql --username=$(postgres_user) --port=$(postgres_port) --host=$(postgres_host) -lqt | cut -d \| -f 1 | grep -wq $(postgres_database)); \
then \
createdb --username=$(postgres_user) --port=$(postgres_port) --host=$(postgres_host) $(postgres_database) > log/database.log 2>&1; \
psql --dbname=$(postgres_database) --username=$(postgres_user) --port=$(postgres_port) --host=$(postgres_host) -q -f $(postgis) >> log/database.log 2>&1; \
psql --dbname=$(postgres_database) --username=$(postgres_user) --port=$(postgres_port) --host=$(postgres_host) -q -f $(spatial_ref_sys) >> log/database.log 2>&1; \
psql --dbname=$(postgres_database) --username=$(postgres_user) --port=$(postgres_port) --host=$(postgres_host) -c "CREATE EXTENSION hstore;" >> log/database.log 2>&1; \
psql --dbname=$(postgres_database) --username=$(postgres_user) --port=$(postgres_port) --host=$(postgres_host) -c "CREATE TABLE vertices_ref_id (v_id serial PRIMARY KEY NOT NULL, osm_id bigint, osm_id_typ char, visible smallint);" >> log/database.log 2>&1; \
if [ -e ../data/03_network/vertices_ref_id.csvdata ] ; \
then \
psql --dbname=$(postgres_database) --username=$(postgres_user) --port=$(postgres_port) --host=$(postgres_host) -q -c "COPY vertices_ref_id FROM STDIN WITH CSV HEADER DELIMITER ',' QUOTE '''' ENCODING 'UTF8';" < ../data/03_network/vertices_ref_id.csvdata >> log/database.log 2>&1; \
psql --dbname=$(postgres_database) --username=$(postgres_user) --port=$(postgres_port) --host=$(postgres_host) -q -c "SELECT setval('vertices_ref_id_v_id_seq', (SELECT MAX(v_id) FROM vertices_ref_id));" >> log/database.log 2>&1; \
psql --dbname=$(postgres_database) --username=$(postgres_user) --port=$(postgres_port) --host=$(postgres_host) -q -c "UPDATE vertices_ref_id SET visible = '0';" >> log/database.log 2>&1; \
echo "Created new database and imorted vertices_ref_id.csvdata into table vertices_ref_id."; \
else \
echo "Did not find vertices_ref_id.csvdata. \nThus, created new database with an empty vertices_ref_id table. \nBe aware, that the network topology may has different v_id's compared to the SciGRID release v0.2. "; \
fi \
else \
psql --dbname=$(postgres_database) --username=$(postgres_user) --port=$(postgres_port) --host=$(postgres_host) -q -c "UPDATE vertices_ref_id SET visible = '0';" >> log/database.log 2>&1; \
fi
@$(osm2pgsql_bin) -r pbf --username=$(postgres_user) --database=$(postgres_database) --host=$(postgres_host) --port=$(postgres_port) -s \
-C $(osm2pgsql_cache) --hstore --number-processes $(osm2pgsql_num_processes) --style $(stylefile) $(OSM_raw_power_data) > log/osm2pgsql.log 2>&1
@touch log/database_import.done
@echo "--> Done. Database import."
# Step2: Filter the OSM raw data from step1 spatially (polyfile) for OSM raw power data.
filter_OSM:
@if [ -e $(OSM_raw_data) ]; then echo "\n### Step2 ### \nFilter the OSM raw data from step1 for power data and spatially with \n $(polyfile):"; else echo "$(OSM_raw_data) does not exist."; exit 1; fi
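# The osmosis pipeline below reads the raw .pbf in four parallel streams
# (power route relations, power relations, power ways, power nodes), clips
# each stream to the polyfile, and merges the four streams pairwise into a
# single filtered output file.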
@$(osmosis_bin) \
--read-pbf file=$(OSM_raw_data) \
--tag-filter accept-relations route=power \
--used-way --used-node \
--bounding-polygon file=$(polyfile) completeRelations=yes \
--buffer outPipe.0=route \
--read-pbf file=$(OSM_raw_data) \
--tag-filter accept-relations power=* \
--used-way --used-node \
--bounding-polygon file=$(polyfile) completeRelations=yes \
--buffer outPipe.0=power \
--read-pbf file=$(OSM_raw_data) \
--tag-filter reject-relations \
--tag-filter accept-ways power=* \
--used-node \
--bounding-polygon file=$(polyfile) completeWays=yes \
--buffer outPipe.0=pways \
--read-pbf file=$(OSM_raw_data) \
--tag-filter reject-relations \
--tag-filter reject-ways \
--tag-filter accept-nodes power=* \
--bounding-polygon file=$(polyfile) \
--buffer outPipe.0=pnodes \
--merge inPipe.0=route inPipe.1=power \
--buffer outPipe.0=mone \
--merge inPipe.0=pways inPipe.1=pnodes \
--buffer outPipe.0=mtwo \
--merge inPipe.0=mone inPipe.1=mtwo \
--write-pbf file=$(OSM_raw_power_data) > log/osmosis.log 2>&1
@echo "--> Done. OSM filtered power data."
# Step1: Download the OSM raw data.
download:
@echo "\n### Step1 ### \nDownload the OSM raw data from \n '$(OSM_raw_data_URL)' \nand saving it to \n '$(OSM_raw_data)':"
@wget -nv -O $(OSM_raw_data) $(OSM_raw_data_URL) > log/download.log 2>&1
@echo "--> Done. Download OSM raw data."
# If you wish to drop your database
drop_database:
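# Ask for confirmation interactively: $(shell read ...) prompts the user and
# $(eval ...) stores the reply in the make variable 'answer'.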
$(eval answer := $(shell read -r -p "Do you really want to delete the SciGRID database '$(postgres_database)'? Type 'yes' if you are sure you wish to continue: " ANSWER; echo $$ANSWER))
@if [ "$(answer)" = "yes" ]; then if(dropdb --username=$(postgres_user) --port=$(postgres_port) --host=$(postgres_host) $(postgres_database)); then echo "The SciGRID database '$(postgres_database)' has been dropped."; fi else echo "\nDid not drop the SciGRID database '$(postgres_database)'."; fi
# Use 'make clean' if you intend to re-run all necessary steps afterwards, except the download and filtering of the OSM raw data
clean:
@rm -f log/*.done
@find . -name "*.pyc" -delete
@echo 'Done. clean'
# Use 'make clean_all' only if you really want to start from scratch by downloading the OSM raw data and performing all steps
clean_all:
@rm -f $(TOPOLOGY_CSV)
@rm -f $(TOPOLOGY_PLOT)
@rm -f log/*
@rm -f *.pyc
@echo 'Done. clean_all'
Answer (score: 0):
A makefile is not a Python script, and Jupyter code cells expect Python. You probably need to run the makefile from a terminal instead.
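From within JupyterLab you can also escape to the system shell by prefixing a command with an exclamation mark. Assuming GNU Make is installed and on the PATH of the environment Jupyter was started from (which it usually is not on a stock Windows setup), a notebook cell like this would run the test:

!make test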
I am not sure how well make works on Windows; you may need Linux (or at least a Unix-like environment) for this, since the recipes in this makefile use POSIX shell constructs and tools such as psql, osmosis, and wget. That is just my assumption, though, as I have not read the model's documentation.
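As a minimal sketch of one way to get a suitable environment on Windows 10 or later, assuming WSL is installed (package names are those of Ubuntu/Debian; the project path is hypothetical, so adjust it to your checkout):

wsl
sudo apt install make wget postgresql postgis osmosis osm2pgsql
cd /mnt/c/Users/you/SciGRID/code
make test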