Is Kafka with KRaft supported? #290

Closed
A1EF opened this issue Sep 29, 2022 · 4 comments

A1EF commented Sep 29, 2022

I have a single-node Kafka running KRaft instead of ZooKeeper, and I can't create this simple example topic.
My tf file:

terraform {
  required_providers {
    kafka = {
      source = "Mongey/kafka"
    }
  }
}

provider "kafka" {
  bootstrap_servers = ["localhost:9092"]
}

resource "kafka_topic" "logs" {
  name               = "systemd_logs"
  replication_factor = 1
  partitions         = 1

  config = {
    "segment.ms"     = "20000"
    "cleanup.policy" = "compact"
  }
}

terraform plan shows no errors:

~ $ terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # kafka_topic.logs will be created
  + resource "kafka_topic" "logs" {
      + config             = {
          + "cleanup.policy" = "compact"
          + "segment.ms"     = "20000"
        }
      + id                 = (known after apply)
      + name               = "systemd_logs"
      + partitions         = 1
      + replication_factor = 1
    }

Plan: 1 to add, 0 to change, 0 to destroy.

──────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.

But when I try to apply, it just gets stuck:

~ $ terraform apply -auto-approve

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # kafka_topic.logs will be created
  + resource "kafka_topic" "logs" {
      + config             = {
          + "cleanup.policy" = "compact"
          + "segment.ms"     = "20000"
        }
      + id                 = (known after apply)
      + name               = "systemd_logs"
      + partitions         = 1
      + replication_factor = 1
    }

Plan: 1 to add, 0 to change, 0 to destroy.
kafka_topic.logs: Creating...
kafka_topic.logs: Still creating... [10s elapsed]
kafka_topic.logs: Still creating... [20s elapsed]
kafka_topic.logs: Still creating... [30s elapsed]
kafka_topic.logs: Still creating... [40s elapsed]
kafka_topic.logs: Still creating... [50s elapsed]
kafka_topic.logs: Still creating... [1m0s elapsed]
kafka_topic.logs: Still creating... [1m10s elapsed]
kafka_topic.logs: Still creating... [1m20s elapsed]
kafka_topic.logs: Still creating... [1m30s elapsed]
kafka_topic.logs: Still creating... [1m40s elapsed]

...

kafka_topic.logs: Still creating... [39m10s elapsed]
kafka_topic.logs: Still creating... [39m20s elapsed]
kafka_topic.logs: Still creating... [39m30s elapsed]
kafka_topic.logs: Still creating... [39m40s elapsed]
kafka_topic.logs: Still creating... [39m50s elapsed]
kafka_topic.logs: Still creating... [40m0s elapsed]
╷
│ Error: kafka: client has run out of available brokers to talk to: EOF
│
│   with kafka_topic.logs,
│   on topic.tf line 13, in resource "kafka_topic" "logs":
│   13: resource "kafka_topic" "logs" {
│
╵

I see this in the Kafka log:

ERROR Processor got uncaught exception. (kafka.network.Processor)
java.lang.OutOfMemoryError: Cannot reserve 369295617 bytes of direct buffer memory (allocated: 738616451, limit: 1073741824)
    at java.base/java.nio.Bits.reserveMemory(Bits.java:178)
    at java.base/java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:121)
    at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:332)
    at java.base/sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:243)
    at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:293)
    at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:259)
    at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:417)
    at org.apache.kafka.common.network.PlaintextTransportLayer.read(PlaintextTransportLayer.java:103)
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:118)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
    at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
    at kafka.network.Processor.poll(SocketServer.scala:1144)
    at kafka.network.Processor.run(SocketServer.scala:1047)
    at java.base/java.lang.Thread.run(Thread.java:833)
Mongey (Owner) commented Oct 14, 2022

Hi @A1EF, yes, KRaft should be supported. It looks like your cluster doesn't have enough memory?

A1EF (Author) commented Oct 14, 2022

Thank you for the answer!

It's a test single-node Kafka without any data: 2 GB total RAM and the default heap size (1 GB), and I can't create even a simple topic with 1 partition and a replication factor of 1. Anyway, I increased total RAM to 8 GB and the heap to 4 GB, and that didn't help either. Is that not enough for this task?

Mongey (Owner) commented Oct 14, 2022

I can't speak for your setup, but I am unable to replicate.

I have confirmed that the following KRaft setup works.

docker run -p 9092:9092 bashj79/kafka-kraft

terraform {
  required_providers {
    kafka = {
      source = "Mongey/kafka"
    }
  }
}

provider "kafka" {
  bootstrap_servers = ["localhost:9092"]

  tls_enabled = false
}

resource "kafka_topic" "syslog" {
  name               = "syslog"
  replication_factor = 1
  partitions         = 4

  config = {
    "segment.ms"   = "4000"
    "retention.ms" = "86400000"
  }
}
Container resource usage (docker stats):

CONTAINER ID   NAME          CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O    PIDS
e8e35d2769e5   busy_bouman   11.86%    540.6MiB / 7.653GiB   6.90%     5.69kB / 4.91kB   0B / 479kB   97
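Note that the provider block above sets tls_enabled = false explicitly. As the next comment confirms, that turns out to be the key difference from the original configuration.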

A1EF (Author) commented Oct 14, 2022

@mileswilson thank you very much! It was my fault. I just added tls_enabled = false and everything works fine! It's an important option that I missed.
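For anyone who hits the same hang: the broker-side OutOfMemoryError above is consistent with a TLS client talking to a plaintext listener. The provider evidently attempts TLS unless told otherwise, so the broker reads the first four bytes of the TLS ClientHello record (16 03 01 01) as a Kafka request length and tries to allocate exactly that many bytes:

0x16 0x03 0x01 0x01  ->  0x16030101  =  369295617 bytes

which matches the "Cannot reserve 369295617 bytes" in the log. Applied to the original example, the fix is the one provider setting (a minimal sketch, assuming the same plaintext localhost:9092 listener as above):

provider "kafka" {
  bootstrap_servers = ["localhost:9092"]

  # The listener is plaintext, so don't attempt a TLS handshake;
  # otherwise the broker misreads the ClientHello as a huge request.
  tls_enabled = false
}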

A1EF closed this as completed Oct 14, 2022