I'm trying to wrap my head around Datadog APM and how to use it with a Go app. I've been reading this documentation: https://docs.datadoghq.com/tracing/setup/go/
But I still don't really understand how it works. For example,
say I have an app like this (pseudo code here) that consumes some Kafka messages and then writes batches to an Elasticsearch DB:
package main

import (
	"gopkg.in/DataDog/dd-trace-go.v1/ddtrace/tracer"
)

func main() {
	tracer.Start(tracer.WithServiceName("program1"))
	defer tracer.Stop()
	getKafkaMessages()
}

func getKafkaMessages() {
	var messages []string
	consumed := 0
ConsumerLoop:
	for {
		select {
		case msg := <-partitionConsumer.Messages():
			consumed++
			messages = append(messages, string(msg.Value))
			if consumed >= 200 {
				sendMessages(messages)
				messages = nil
				consumed = 0
			}
		case <-signals:
			break ConsumerLoop
		}
	}
}

func sendMessages(messages []string) {
	// send `messages` somewhere like Elasticsearch
}
How do I add tracing so I can see how long receiving messages from Kafka (getKafkaMessages()) takes and how long writing to Elasticsearch (sendMessages) takes? Do I just add tracing in main, and nothing in the functions themselves, like this?
package main

import (
	"gopkg.in/DataDog/dd-trace-go.v1/ddtrace/tracer"
)

func main() {
	tracer.Start(tracer.WithServiceName("some-app"))
	defer tracer.Stop()
	span := tracer.StartSpan("kafka-to-es", tracer.ResourceName("consumer"))
	defer span.Finish()
	span.SetTag("env", "dev")
	span.SetTag("region", "us-east-1")
	getKafkaMessages()
}

func getKafkaMessages() {
	var messages []string
	consumed := 0
ConsumerLoop:
	for {
		select {
		case msg := <-partitionConsumer.Messages():
			consumed++
			messages = append(messages, string(msg.Value))
			if consumed >= 200 {
				sendMessages(messages)
				messages = nil
				consumed = 0
			}
		case <-signals:
			break ConsumerLoop
		}
	}
}

func sendMessages(messages []string) {
	// send `messages` somewhere like Elasticsearch
}
I have also been reading http://opentracing.io/, but I still can't quite figure it out.
I think I understand the concept of spans and children, so should I start a span at the beginning of the app and then start a child span in each function? Is that the right way to think about it?