I'm writing a SQL query to process product orders (eyeglass lenses).
In the database, an order can have several rows, each row representing a different item in the order. So one row will be the right lens, another row will be right.optionA, another right.optionB, and so on. The same goes for the left lens.
In my query, one row is one lens together with all the options that come with it.
My problem is that a few orders contain an error: one of the options has been entered twice, even though there can never be two options of the same type.
My query was written with only one of each option type per lens (and therefore per row) in mind, so when this error occurs it generates an extra row (and throws off the quantities, sales, etc.).
I don't want to change my query to handle multiple options of that type, since it isn't supposed to happen, so I can't just SUM the data across those rows and GROUP BY.
So my question is: how do I tell SQL that I only want to consider the first row?
My current idea is to include a COUNT(*) over the rows of that option type, and when it is greater than 1 it should do something, but I don't know what.
Below is a screenshot of an order with the error. The CompType column indicates the item's type. CompType 03 is where the error is. You can see the row appears twice, which is the problem:
I tried to keep this as short as possible; I know it's still a bit long to read, sorry.
Thank you, and have a nice day.
Answer 0 (score: 4)
The rows aren't identical because of the position column. The simplest fix is to take MIN(position) or MAX(position) and GROUP BY the other columns. Example using MIN():
SELECT OrdNumb,
       Side,
       MIN(Position),
       Comptype,
       --Rest of cols
FROM YourTable
GROUP BY OrdNumb,
         Side,
         Comptype,
         --Rest of cols
Answer 1 (score: 1)
It depends on which "position" is the correct one. The query has to return multiple rows because there are multiple values, but as you said, they are errors. What I would do is assume that if there is more than one "position" per "comptype" (which seems to be the problem), then to get rid of one of the values, simply select the MIN or MAX of the position column and only GROUP BY the other columns. Hope this helps.
Answer 2 (score: 1)
I think the most important thing is to clean up the data in the system. That said, assuming you always want the first duplicate by position and the problematic rows are always exact duplicates of the good rows, this should do the trick:
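A minimal sketch of that approach (assuming a dialect with window functions, e.g. SQL Server or PostgreSQL, and the column names used in the answer above: OrdNumb, Side, Comptype, Position):

SELECT OrdNumb,
       Side,
       Position,
       Comptype
       --Rest of cols
FROM (
    SELECT t.*,
           ROW_NUMBER() OVER (
               PARTITION BY OrdNumb, Side, Comptype
               ORDER BY Position
           ) AS rn   -- rn = 1 marks the first row by position within each lens/option type
    FROM YourTable t
) AS numbered
WHERE rn = 1;   -- keep only the first row, drop the duplicated option rows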