I'm building an application in AWS for which I'll be writing a lot of Lambda functions, all of which will set up the database connection they query by running the following code:
import mysql.connector

mydb = mysql.connector.connect(
    host="endpoint.rds.amazonaws.com",
    user="user",
    passwd="password",
    database="dbname"
)
Now, I don't want to include this code explicitly in every Lambda function. I'd rather put it somewhere else (in a Layer, or in a separate Lambda function) so that each function can simply do something like this:
mydb = ConnectToDB()
Any ideas on how to do this?
Answer 0 (score: 1)
Solved it! I created a Python file named DBConnections.py and included it in the deployment package for my AWS Lambda Layer.
Once the layer is deployed, the function below is the only connection code I have to maintain:
import mysql.connector

# Shared helper: every Lambda gets its MySQL connection from here
def Connect():
    mydb = mysql.connector.connect(
        host="endpoint.amazonaws.com",
        user="user",
        passwd="password",
        database="mydbname"
    )
    return mydb
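Inside each Lambda, the call site then looks roughly like this (a minimal sketch, assuming DBConnections.py ends up on the function's Python path, e.g. under the layer's python/ directory; the handler body and query are purely illustrative):

import DBConnections

def lambda_handler(event, context):
    # Open a connection through the shared helper from the layer
    mydb = DBConnections.Connect()
    cur = mydb.cursor()
    cur.execute("SELECT 1")
    return cur.fetchone()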
Voilà。
Answer 1 (score: 1)
You have the right idea. Assuming you're using Python, I would create a layer package structured like this (Lambda expects a Python layer's contents under a top-level python/ directory):
python/
    myPackage.py
    mysql/

where mysql/ holds the mysql connector package (e.g. installed with pip install mysql-connector-python -t python/) and myPackage.py holds some variation of the following:

import mysql.connector

def ConnectToDB(**kwargs):
    # Build a connection from whatever parameters the caller passes in
    return mysql.connector.connect(
        host=kwargs.get("YOUR_ENDPOINT"),
        user=kwargs.get("YOUR_USER"),
        passwd=kwargs.get("YOUR_PASSWORD"),
        database=kwargs.get("YOUR_DBNAME")
    )

Then use this script to create the layer in Lambda:
#!/bin/bash
# Required variables
LAYER_NAME="YOUR_LAYER_NAME"
LAYER_DESCRIPTION="YOUR_LAYER_DESCRIPTION"
LAYER_RUNTIMES="python3.6 python3.7"
S3_BUCKET="YOUR_S3_BUCKET"

# Zip the package files
zip -r ${LAYER_NAME}.zip .
echo "Zipped ${LAYER_NAME}"

# Upload the package to S3
aws s3 cp ./${LAYER_NAME}.zip s3://${S3_BUCKET}

# Publish a new layer version from the uploaded zip
aws lambda publish-layer-version --layer-name ${LAYER_NAME} --description "${LAYER_DESCRIPTION}" --content S3Bucket=${S3_BUCKET},S3Key=${LAYER_NAME}.zip --compatible-runtimes ${LAYER_RUNTIMES}

# Clean up the local zip file
rm ${LAYER_NAME}.zip

Then you can attach the layer to your Lambda function and import the package in your Lambda code using syntax like the following:
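A minimal sketch of that import and call, assuming the layout above so that myPackage is importable by name; the handler and the connection values are illustrative, and the keyword names just have to match the keys ConnectToDB reads via kwargs.get():

from myPackage import ConnectToDB

def lambda_handler(event, context):
    # Open a connection through the layer's shared helper
    mydb = ConnectToDB(
        YOUR_ENDPOINT="endpoint.rds.amazonaws.com",
        YOUR_USER="user",
        YOUR_PASSWORD="password",
        YOUR_DBNAME="dbname"
    )
    cur = mydb.cursor()
    cur.execute("SELECT 1")
    return cur.fetchone()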