I'm trying to configure authentication for mongo on a kubernetes cluster. I deployed the following yaml:
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongodb
        image: mongo:4.0.0
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: "admin"
        - name: MONGO_INITDB_ROOT_PASSWORD
          # Get password from secret
          value: "abc123changeme"
        command:
        - mongod
        - --auth
        - --replSet
        - rs0
        - --bind_ip
        - 0.0.0.0
        ports:
        - containerPort: 27017
          name: web
        volumeMounts:
        - name: mongo-ps
          mountPath: /data/db
      volumes:
      - name: mongo-ps
        persistentVolumeClaim:
          claimName: mongodb-pvc
When I try to authenticate with username "admin" and password "abc123changeme" I get "Authentication failed.".
How do I configure the mongo admin username and password (and I want to get the password from a secret)?
Thanks
Answer 0 (score: 2)
The reason the environment variables don't work is that the docker-entrypoint.sh script in the image (https://github.com/docker-library/mongo/tree/master/4.0) is what consumes the MONGO_INITDB variables, but when you define "command:" in kubernetes you override that entrypoint (see the notes at https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/).
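A minimal sketch of what that implies for the question's yaml (my illustration, not part of the answer; the secret name is hypothetical): pass the daemon flags through args: instead of command:, so docker-entrypoint.sh still runs and processes the MONGO_INITDB_* variables:

```yaml
# Sketch only: args are appended to the image's entrypoint
# (docker-entrypoint.sh); command: would replace the entrypoint entirely.
containers:
- name: mongodb
  image: mongo:4.0.0
  args: ["--auth", "--bind_ip", "0.0.0.0"]
  env:
  - name: MONGO_INITDB_ROOT_USERNAME
    value: "admin"
  - name: MONGO_INITDB_ROOT_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mongo-init-credentials  # hypothetical secret
        key: init.password
```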
See the YML below, adapted from some examples I found online. Note my learning points:
cvallance/mongo-k8s-sidecar looks for any mongo instance matching the POD labels REGARDLESS of namespace, so it will try to hook up with any old instance in the cluster. This cost me hours, because I had dropped the environment= label from the examples since we use namespaces to separate our environments. Silly and obvious in retrospect, but very confusing at first (the mongo logs were throwing all sorts of authentication errors and service-down type errors caused by the cross-talk).
I was new to ClusterRoleBindings and it took me a while to work out they are cluster level, which I know seems obvious (despite having to supply a namespace for kubectl to accept the definition). This caused my bindings to be overwritten between namespaces, so make sure you create unique names per environment: ClusterRoleBindings that aren't unique within the cluster get overwritten, and a deployment in one namespace can mess up another.
MONGODB_DATABASE must be set to 'admin' for authentication to work.
Depending on the size of the underlying VMs, I found the replica set could fail to initialise, leading to authentication errors. This was caused by postStart executing before mongod was fully up and accepting connections, so the admin user was never created. I upped the wait to 20 seconds to get around it, but a better way would be to probe the daemon for a connection within a timeout, rather than arbitrarily waiting x seconds before issuing the create-user command. Redirecting the output of the mongo eval line to /data/db/config.log is what helped me determine the problem was a connection refused during postStart (as postStart wasn't logging to Stackdriver for me).
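A sketch of that "better way" (my code, not the answer's): poll mongod with a ping inside a timeout instead of sleeping a fixed 20 seconds.

```shell
# Poll until mongod accepts connections, giving up after max_tries attempts
# (roughly one per second). Intended to replace the fixed "sleep 20" in
# the postStart hook below.
wait_for_mongod() {
  max_tries=${1:-60}
  tries=0
  until mongo --quiet --eval "db.adminCommand('ping').ok" >/dev/null 2>&1; do
    tries=$((tries + 1))
    if [ "$tries" -ge "$max_tries" ]; then
      echo "mongod not up after $max_tries attempts" >&2
      return 1
    fi
    sleep 1
  done
}

# Usage in postStart, before creating the admin user:
#   wait_for_mongod 60 || exit 1
```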
From https://docs.mongodb.com/manual/reference/program/mongod/:
If you run mongod in a container (e.g. lxc, cgroups, Docker, etc.) that does not have access to all of the RAM available in a system, you must set --wiredTigerCacheSizeGB to a value less than the amount of RAM available in the container. The exact amount depends on the other processes running in the container.
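A sketch of honouring that advice (assumptions mine: cgroup v1 path, halving the limit; the YML below simply hard-codes 0.25): derive the cache size from the container's memory limit instead of hard-coding it.

```shell
# Derive --wiredTigerCacheSizeGB from the container memory limit rather
# than hard-coding it. Inside the pod the limit is readable from
# /sys/fs/cgroup/memory/memory.limit_in_bytes (cgroup v1) or
# /sys/fs/cgroup/memory.max (cgroup v2); hard-coded here for illustration.
limit_bytes=$((350 * 1024 * 1024))  # the 350Mi limit used in the YML

cache_gb=$(awk -v b="$limit_bytes" 'BEGIN {
  gb = b / (1024 * 1024 * 1024) / 2  # use roughly half the limit
  if (gb < 0.25) gb = 0.25           # 0.25GB is the mongod minimum
  printf "%.2f", gb
}')
echo "would run: mongod --wiredTigerCacheSizeGB $cache_gb ..."
```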
The YML below should spin up and configure a mongo replicaset in kubernetes with persistent storage and authentication enabled. If you connect into the pod...
kubectl exec -ti mongo-db-0 --namespace somenamespace /bin/bash
The mongo shell is installed in the image, so you should be able to connect to the replicaset with...
mongo mongodb://mongoadmin:adminpassword@mongo-db/admin?replicaSet=rs0
and see that you get either rs0:PRIMARY> or rs0:SECONDARY, indicating the two pods are in a mongo replicaset. Verify with rs.conf() from the PRIMARY.
#Create a Secret to hold the MONGO_INITDB_ROOT_USERNAME/PASSWORD
#so we can enable authentication
apiVersion: v1
data:
  #echo -n "mongoadmin" | base64
  init.userid: bW9uZ29hZG1pbg==
  #echo -n "adminpassword" | base64
  init.password: YWRtaW5wYXNzd29yZA==
kind: Secret
metadata:
  name: mongo-init-credentials
  namespace: somenamespace
type: Opaque
---
# Create a secret to hold a keyfile used to authenticate between replicaset members
# this seems to need to be base64 encoded twice (might not be the case if this
# was an actual file reference as per the examples, but we're using a simple key
# here)
apiVersion: v1
data:
  #echo -n "CHANGEMECHANGEMECHANGEME" | base64 | base64
  mongodb-keyfile: UTBoQlRrZEZUVVZEU0VGT1IwVk5SVU5JUVU1SFJVMUYK
kind: Secret
metadata:
  name: mongo-key
  namespace: somenamespace
type: Opaque
---
# Create a service account for Mongo and give it Pod List role
# note this is a ClusterRoleBinding - the Mongo Pod will be able
# to list all pods present in the cluster regardless of namespace
# (and this is exactly what it does...see below)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mongo-serviceaccount
  namespace: somenamespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: mongo-somenamespace-serviceaccount-view
  namespace: somenamespace
subjects:
- kind: ServiceAccount
  name: mongo-serviceaccount
  namespace: somenamespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-viewer
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-viewer
  namespace: somenamespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
---
#Create a Storage Class for Google Container Engine
#Note fstype: xfs isn't supported by GCE yet and the
#Pod startup will hang if you try to specify it.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  namespace: somenamespace
  name: mongodb-ssd-storage
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
allowVolumeExpansion: true
---
#Headless Service for StatefulSets
apiVersion: v1
kind: Service
metadata:
  namespace: somenamespace
  name: mongo-db
  labels:
    name: mongo-db
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    app: mongo
---
# Now the fun part
#
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  namespace: somenamespace
  name: mongo-db
spec:
  serviceName: mongo-db
  replicas: 2
  template:
    metadata:
      labels:
        # Labels MUST match MONGO_SIDECAR_POD_LABELS
        # and MUST differentiate between other mongo
        # instances in the CLUSTER not just the namespace
        # as the sidecar will search the entire cluster
        # for something to configure
        app: mongo
        environment: somenamespace
    spec:
      #Run the Pod using the service account
      serviceAccountName: mongo-serviceaccount
      terminationGracePeriodSeconds: 10
      #Prevent a Mongo Replica running on the same node as another (avoid single point of failure)
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - mongo
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: mongo
        image: mongo
        command:
          #Authentication adapted from https://gist.github.com/thilinapiy/0c5abc2c0c28efe1bbe2165b0d8dc115
          #in order to pass the new admin user id and password in
          - /bin/sh
          - -c
          - >
            if [ -f /data/db/admin-user.lock ]; then
              echo "KUBERNETES LOG $HOSTNAME- Starting Mongo Daemon with runtime settings (clusterAuthMode)"
              #ensure wiredTigerCacheSize is set within the size of the containers memory limit
              mongod --wiredTigerCacheSizeGB 0.25 --replSet rs0 --bind_ip 0.0.0.0 --smallfiles --noprealloc --clusterAuthMode keyFile --keyFile /etc/secrets-volume/mongodb-keyfile --setParameter authenticationMechanisms=SCRAM-SHA-1;
            else
              echo "KUBERNETES LOG $HOSTNAME- Starting Mongo Daemon with setup setting (authMode)"
              mongod --auth;
            fi;
        lifecycle:
          postStart:
            exec:
              command:
                - /bin/sh
                - -c
                - >
                  if [ ! -f /data/db/admin-user.lock ]; then
                    echo "KUBERNETES LOG $HOSTNAME- no admin-user.lock file found yet"
                    #upped this to 20 to 'ensure' mongod is accepting connections
                    sleep 20;
                    touch /data/db/admin-user.lock
                    if [ "$HOSTNAME" = "mongo-db-0" ]; then
                      echo "KUBERNETES LOG $HOSTNAME- creating admin user ${MONGODB_USERNAME}"
                      mongo --eval "db = db.getSiblingDB('admin'); db.createUser({ user: '${MONGODB_USERNAME}', pwd: '${MONGODB_PASSWORD}', roles: [{ role: 'root', db: 'admin' }]});" >> /data/db/config.log
                    fi;
                    echo "KUBERNETES LOG $HOSTNAME- shutting mongod down for final restart"
                    mongod --shutdown;
                  fi;
        env:
        - name: MONGODB_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongo-init-credentials
              key: init.userid
        - name: MONGODB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongo-init-credentials
              key: init.password
        ports:
        - containerPort: 27017
        resources:
          requests:
            memory: "250Mi"
          limits:
            memory: "350Mi"
        volumeMounts:
        - name: mongo-key
          mountPath: "/etc/secrets-volume"
          readOnly: true
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
          # Sidecar searches for any POD in the CLUSTER with these labels
          # not just the namespace..so we need to ensure the POD is labelled
          # to differentiate it from other PODS in different namespaces
          - name: MONGO_SIDECAR_POD_LABELS
            value: "app=mongo,environment=somenamespace"
          - name: MONGODB_USERNAME
            valueFrom:
              secretKeyRef:
                name: mongo-init-credentials
                key: init.userid
          - name: MONGODB_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mongo-init-credentials
                key: init.password
          #don't be fooled by this..it's not your DB that
          #needs specifying, it's the admin DB as that
          #is what you authenticate against with mongo.
          - name: MONGODB_DATABASE
            value: admin
      volumes:
      - name: mongo-key
        secret:
          defaultMode: 0400
          secretName: mongo-key
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "mongodb-ssd-storage"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
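For reference, the same two secrets can be created without hand-encoding base64 in a manifest (a sketch; kubectl base64-encodes --from-literal values once itself, which is why the keyfile is pre-encoded only a single time here):

```shell
# Sketch: create the secrets used above with kubectl instead of YAML.
create_mongo_secrets() {
  kubectl create secret generic mongo-init-credentials \
    --namespace somenamespace \
    --from-literal=init.userid=mongoadmin \
    --from-literal=init.password=adminpassword

  # The manifest encodes the keyfile twice; kubectl handles one layer,
  # so only pre-encode the key once here.
  kubectl create secret generic mongo-key \
    --namespace somenamespace \
    --from-literal=mongodb-keyfile="$(printf '%s' CHANGEMECHANGEMECHANGEME | base64)"
}
```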
Answer 1 (score: 0)
Assuming you created a secret:
The secret's values are then pulled into the kubernetes yaml file with secretKeyRef references. The secret itself:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
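A minimal sketch of consuming that secret from a container spec (standard secretKeyRef, key names matching the secret above):

```yaml
env:
- name: MONGO_INITDB_ROOT_USERNAME
  valueFrom:
    secretKeyRef:
      name: mysecret
      key: username
- name: MONGO_INITDB_ROOT_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysecret
      key: password
```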
Answer 2 (score: 0)
I found this issue is related to a bug in docker-entrypoint.sh that occurs when numactl is detected on the node.
Try this simplified code (which moves numactl out of the way):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
  labels:
    app: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo:4.0.0
        command:
        - /bin/bash
        - -c
        # mv is not needed for later versions e.g. 3.4.19 and 4.1.7
        - mv /usr/bin/numactl /usr/bin/numactl1 && source docker-entrypoint.sh mongod
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: "xxxxx"
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: "xxxxx"
        ports:
        - containerPort: 27017
I raised an issue at: https://github.com/docker-library/mongo/issues/330
Hopefully it will be fixed at some point, so the hack won't be needed :o)