People exploiting my queries

Asked: 2018-08-29 19:21:05

Tags: javascript mysql node.js

I have been trying to handle a transaction with queries. The way I do it is: first, I run a query to check whether the user has enough balance; second, I deduct his balance and process the transaction.

The problem is that if someone downloads a tool, or uses a macro that can click 200 times per second (I think), the requests arrive faster than the queries are processed, so the code still thinks the user has enough balance when he has actually run out, and his balance ends up negative.

Here is the code, in short:

var processTransaction = function(userid, cost){
    database.query('SELECT `balance` FROM `user` WHERE `id` = ' + database.pool.escape(userid), function(err, row){
        if(err){
            return;
        }

        if(!row.length){
            return;
        }

        var userBalance = row[0].balance;
        if(userBalance >= cost){
            /* User has enough, process */

            addBalance(userid, -cost); //deduct query
        }
    });
}

Am I making any mistakes here? Should I take a different approach?

The query function:

var query = function(sql, callback) {
  if (typeof callback === 'undefined') {
    callback = function() {};
  }
  pool.getConnection(function(err, connection) {
    if(err) return callback(err);
    connection.query(sql, function(err, rows) {
      // Release the connection in every case; releasing only on success
      // would leak pooled connections whenever a query fails.
      connection.release();
      if(err) return callback(err);
      return callback(null, rows);
    });
  });
};

3 Answers:

Answer 0 (score: 1):

You need to make sure the database stays in a consistent state. There are several ways to do this; the simplest is:

  • Use LOCK TABLE user WRITE to prevent access to the user table (and unlock the table as soon as you are done). If you click 200 times, rest assured the clicks will all be queued and cannot run concurrently. A Node.js sketch of this approach follows the manual excerpt below.

From the manual:

The correct way to use LOCK TABLES and UNLOCK TABLES with transactional tables, such as InnoDB tables, is to begin a transaction with SET autocommit = 0 (not START TRANSACTION) followed by LOCK TABLES, and to not call UNLOCK TABLES until you commit the transaction explicitly. For example, if you need to write to table t1 and read from table t2, you can do this:

SET autocommit=0;
LOCK TABLES t1 WRITE, t2 READ, ...;
... do something with tables t1 and t2 here ...
COMMIT;
UNLOCK TABLES;

When you call LOCK TABLES, InnoDB internally takes its own table lock, and MySQL takes its own table lock. InnoDB releases its internal table lock at the next commit, but for MySQL to release its table lock, you have to call UNLOCK TABLES. You should not have autocommit = 1, because then InnoDB releases its internal table lock immediately after the call of LOCK TABLES, and deadlocks can very easily happen. InnoDB does not acquire the internal table lock at all if autocommit = 1, to help old applications avoid unnecessary deadlocks.
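A minimal sketch of how that could look from Node.js with the mysql pool used in the question. It is only an illustration, not the answer's code: the done callback is an added assumption, and error handling is kept deliberately simple.

var processTransaction = function(userid, cost, done){
    pool.getConnection(function(err, connection){
        if(err) return done(err);

        var fail = function(e){
            // Roll back and unlock before handing the connection back.
            connection.query('ROLLBACK', function(){
                connection.query('UNLOCK TABLES', function(){
                    connection.release();
                    done(e);
                });
            });
        };

        connection.query('SET autocommit = 0', function(err){
            if(err){ connection.release(); return done(err); }
            connection.query('LOCK TABLES `user` WRITE', function(err){
                if(err){ connection.release(); return done(err); }

                connection.query('SELECT `balance` FROM `user` WHERE `id` = ?', [userid], function(err, rows){
                    if(err || !rows.length) return fail(err || new Error('user not found'));
                    if(rows[0].balance < cost) return fail(new Error('insufficient balance'));

                    connection.query('UPDATE `user` SET `balance` = `balance` - ? WHERE `id` = ?', [cost, userid], function(err){
                        if(err) return fail(err);
                        connection.query('COMMIT', function(err){
                            if(err) return fail(err);
                            // Unlock only after the commit, as the manual describes.
                            // (A real app would also restore autocommit before releasing.)
                            connection.query('UNLOCK TABLES', function(){
                                connection.release();
                                done(null);
                            });
                        });
                    });
                });
            });
        });
    });
}

While the lock is held, any other connection's query against user simply waits, so the 200 clicks are processed one after another instead of all reading the same stale balance.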

Answer 1 (score: 1):

Using a "queue" architecture can also make it easier to handle concurrent requests. The idea is to put every request into a queue and have a worker set up to "poll" that queue (possibly on some kind of cron) to read from it and hand out work. When the worker picks up an item, it issues your update/create requests in order, which prevents the race condition. With this approach you will need to be more comfortable handling asynchronous events.
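A minimal in-memory sketch of the idea, just to illustrate the serialization. It is event-driven rather than cron-polled, and it assumes processTransaction has been given a completion callback; both details are assumptions, not part of the answer.

var queue = [];
var working = false;

var enqueueTransaction = function(userid, cost){
    queue.push({ userid: userid, cost: cost });
    drain();
};

var drain = function(){
    if(working || !queue.length) return;
    working = true;

    var job = queue.shift();
    // Hypothetical callback-taking version of the question's processTransaction.
    processTransaction(job.userid, job.cost, function(err){
        if(err) console.error('transaction failed:', err);
        working = false;
        drain(); // pick up the next queued request, one at a time
    });
};

Because only one job is in flight at any moment, two clicks for the same user can never read the same stale balance. A production setup would usually back the queue with something persistent (a database table or a message broker) so queued requests survive a restart.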

Answer 2 (score: 0):

What about making the UPDATE without a SELECT first?

I believe this approach, and the transaction itself, should help to avoid a negative balance.

[UPD] And to detect whether the balance was actually updated (if it was not, that is surely because the balance was insufficient), you can read the number of affected rows (depending on the library used, that is either a separate method or part of the result of the UPDATE query).
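A sketch of what that could look like with the mysql pool from the question: the balance check moves into the UPDATE's WHERE clause, so the deduction is atomic, and affectedRows (which the node mysql driver returns for an UPDATE) tells you whether it actually happened. The done callback is an illustrative addition.

var processTransaction = function(userid, cost, done){
    pool.query(
        'UPDATE `user` SET `balance` = `balance` - ? WHERE `id` = ? AND `balance` >= ?',
        [cost, userid, cost],
        function(err, result){
            if(err) return done(err);
            if(result.affectedRows === 0){
                // Either the user does not exist or the balance was too low:
                // either way, nothing was deducted and nothing can go negative.
                return done(new Error('insufficient balance'));
            }
            done(null); // balance deducted atomically
        }
    );
}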