Build fails when importing an external package with Golang and Docker

Date: 2020-04-18 13:26:55

Tags: docker go

I can't get this simple Confluent Kafka example to build with Docker. There is probably some trick involving the go path or special build arguments that I can't find; I've tried all the default folders with no success.

Dockerfile

FROM golang:alpine AS builder

# Set necessary environment variables needed for our image
ENV GO111MODULE=on \
    CGO_ENABLED=0 \
    GOOS=linux \
    GOARCH=amd64

ADD . /go/app

# Install librdkafka
RUN apk add librdkafka-dev pkgconf

# Move to working directory /build
WORKDIR /go/app

# Copy and download dependency using go mod
COPY go.mod .
RUN go mod download

# Copy the code into the container
COPY . .

# Build the application
RUN go build -o main .

# Run test
RUN go test ./... -v

# Move to /dist directory as the place for resulting binary folder
WORKDIR /dist

# Copy binary from build to main folder
RUN cp /go/app/main .

############################
# STEP 2 build a small image
############################
FROM scratch

COPY --from=builder /dist/main /

# Command to run the executable
ENTRYPOINT ["/main"]

Source

package main

import (
    "fmt"
    "github.com/confluentinc/confluent-kafka-go/kafka"
    "os"
)

func main() {

    if len(os.Args) != 3 {
        fmt.Fprintf(os.Stderr, "Usage: %s <broker> <topic>\n",
            os.Args[0])
        os.Exit(1)
    }

    broker := os.Args[1]
    topic := os.Args[2]

    p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": broker})

    if err != nil {
        fmt.Printf("Failed to create producer: %s\n", err)
        os.Exit(1)
    }

    fmt.Printf("Created Producer %v\n", p)

    deliveryChan := make(chan kafka.Event)

    value := "Hello Go!"
    err = p.Produce(&kafka.Message{
        TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
        Value:          []byte(value),
        Headers:        []kafka.Header{{Key: "myTestHeader", Value: []byte("header values are binary")}},
    }, deliveryChan)
    if err != nil {
        fmt.Printf("Failed to produce message: %s\n", err)
        os.Exit(1)
    }

    e := <-deliveryChan
    m := e.(*kafka.Message)

    if m.TopicPartition.Error != nil {
        fmt.Printf("Delivery failed: %v\n", m.TopicPartition.Error)
    } else {
        fmt.Printf("Delivered message to topic %s [%d] at offset %v\n",
            *m.TopicPartition.Topic, m.TopicPartition.Partition, m.TopicPartition.Offset)
    }

    close(deliveryChan)
}

Errors

./producer_example.go:37:12: undefined: kafka.NewProducer
./producer_example.go:37:31: undefined: kafka.ConfigMap
./producer_example.go:48:28: undefined: kafka.Event
./producer_example.go:51:19: undefined: kafka.Message

1 Answer:

Answer 0 (score: 1)

EDIT

I can confirm that building with the musl build tag works:

FROM golang:alpine as build
WORKDIR /go/src/app
# Set necessary environment variables needed for our image
ENV GOOS=linux GOARCH=amd64 
COPY . .
RUN apk update && apk add gcc librdkafka-dev openssl-libs-static zlib-static zstd-libs libsasl librdkafka-static lz4-dev lz4-static zstd-static libc-dev musl-dev 
RUN go build -tags musl -ldflags '-w -extldflags "-static"' -o main

FROM scratch
COPY --from=build /go/src/app/main /
# Command to run the executable
ENTRYPOINT ["/main"]

Tested with the setup shown below.


OK, version 1.4.0 of github.com/confluentinc/confluent-kafka-go/kafka seems to be incompatible with the current state of Alpine 3.11, at least. Furthermore, despite my best efforts, I was not able to build a statically compiled binary suitable for use with FROM scratch.

However, I was able to get your code running against the current version of Kafka. The image is a bit larger, but I suppose working and slightly bigger beats elegant and broken.

Todos

1. Downgrade to confluent-kafka-go@v1.1.0

This is as simple as:

$ go get -u -v github.com/confluentinc/confluent-kafka-go@v1.1.0
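After the downgrade, go.mod should pin the older release. For reference, it would look roughly like this (the module path and Go version here are illustrative, not from the question):

```
module example.com/kafka-producer

go 1.14

require github.com/confluentinc/confluent-kafka-go v1.1.0
```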

2. Modify your Dockerfile

You were missing some build dependencies at the start. Obviously, we also need the runtime dependencies now, since we no longer use FROM scratch. Note that I also tried to simplify the Dockerfile, and I added jwilder/dockerize so that we don't have to spend time waiting on the test setup:

FROM golang:alpine as build

# The default location is /go/src
WORKDIR /go/src/app
ENV GOOS=linux \
    GOARCH=amd64
# We simply copy everything to /go/src/app    
COPY . .
# Add the required build libraries
RUN apk update && apk add gcc librdkafka-dev zstd-libs libsasl lz4-dev libc-dev musl-dev 
# Run the build
RUN go build -o main


FROM alpine
# We use dockerize to make sure the kafka server is up and running before the command starts.
ENV DOCKERIZE_VERSION v0.6.1
ENV KAFKA kafka
# Add dockerize
RUN apk --no-cache upgrade && apk --no-cache --virtual .get add curl \
 && curl -L -O https://github.com/jwilder/dockerize/releases/download/${DOCKERIZE_VERSION}/dockerize-linux-amd64-${DOCKERIZE_VERSION}.tar.gz \
 && tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
 && rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
 && apk del .get \
 # Add the runtime dependency.
 && apk add --no-cache librdkafka
# Fetch the binary 
COPY --from=build /go/src/app/main /
# Wait for kafka to come up, only then start /main
ENTRYPOINT ["sh","-c","/usr/local/bin/dockerize -wait tcp://${KAFKA}:9092 /main kafka test"]

3. Test

I created a docker-compose.yaml to check that everything works:

version: "3.7"

services:
  zookeeper:
    image: 'bitnami/zookeeper:3'
    ports:
      - '2181:2181'
    volumes:
      - 'zookeeper_data:/bitnami'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: 'bitnami/kafka:2'
    ports:
      - '9092:9092'
    volumes:
      - 'kafka_data:/bitnami'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
  server:
    image: fals/kafka-main
    build: .
    command: "kafka test"
volumes:
  zookeeper_data:
  kafka_data:

You can verify that the setup works with:

$  docker-compose build && docker-compose up -d && docker-compose logs -f server
[...]
server_1     | 2020/04/18 18:37:33 Problem with dial: dial tcp 172.24.0.4:9092: connect: connection refused. Sleeping 1s
server_1     | 2020/04/18 18:37:34 Connected to tcp://kafka:9092
server_1     | Created Producer rdkafka#producer-1
server_1     | Delivered message to topic test [0] at offset 0
server_1     | 2020/04/18 18:37:36 Command finished successfully.
kfka_server_1 exited with code 0