I have configured the connector to write to Iceberg using a Hive metastore with a PostgreSQL backend. I have only one connector and it uses a single task/thread. When I start the connector, it works and writes to the Iceberg table as expected, but after some minutes or hours it stops working and throws a waiting-for-lock exception. Here is the traceback:
[2024-12-19 06:54:25,103] WARN Retrying task after failure: Waiting for lock on table default.master_instance_report (org.apache.iceberg.util.Tasks)
org.apache.iceberg.hive.MetastoreLock$WaitingForLockException: Waiting for lock on table default.master_instance_report
at org.apache.iceberg.hive.MetastoreLock.lambda$acquireLock$1(MetastoreLock.java:217)
at org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:413)
at org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:219)
at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:203)
at org.apache.iceberg.hive.MetastoreLock.acquireLock(MetastoreLock.java:209)
at org.apache.iceberg.hive.MetastoreLock.lock(MetastoreLock.java:146)
at org.apache.iceberg.hive.HiveTableOperations.doCommit(HiveTableOperations.java:182)
at org.apache.iceberg.BaseMetastoreTableOperations.commit(BaseMetastoreTableOperations.java:135)
at org.apache.iceberg.BaseTransaction.lambda$commitSimpleTransaction$3(BaseTransaction.java:427)
at org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:413)
at org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:219)
at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:203)
at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:196)
When this exception happens, I see an entry in the PostgreSQL hive_locks table. If I delete that entry, the connector resumes working, but after a few hours the same thing happens again.
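For reference, this is roughly the manual cleanup I run against the metastore database (a sketch only; the quoted upper-case table name and HL_* column names are assumptions based on the stock Hive metastore PostgreSQL schema and may differ in other deployments):

-- Inspect the stuck lock row for the affected table
SELECT "HL_LOCK_EXT_ID", "HL_DB", "HL_TABLE", "HL_LOCK_STATE", "HL_LAST_HEARTBEAT"
FROM "HIVE_LOCKS"
WHERE "HL_DB" = 'default' AND "HL_TABLE" = 'master_instance_report';

-- Deleting the row lets the connector commit again, until the lock gets stuck once more
DELETE FROM "HIVE_LOCKS"
WHERE "HL_DB" = 'default' AND "HL_TABLE" = 'master_instance_report';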
My config looks like this:
{
  "connector.class": "io.tabular.iceberg.connect.IcebergSinkConnector",
  "iceberg.control.commit.threads": "1",
  "iceberg.catalog.s3.secret-access-key": "XXXXXXXXXX",
  "iceberg.catalog.s3.endpoint": "https://",
  "iceberg.tables.default-partition-by": "cloud_name,instance_id",
  "topics": "instance-report",
  "tasks.max": "1",
  "iceberg.catalog.io-impl": "org.apache.iceberg.aws.s3.S3FileIO",
  "iceberg.control.commit.interval-ms": "180000",
  "iceberg.catalog.client.region": "us-east-1",
  "iceberg.catalog.uri": "thrift://1.1.1.107:9083",
  "iceberg.hive.client-pool-size": "10",
  "iceberg.tables.auto-create-enabled": "false",
  "iceberg.tables": "default.master_instance_report",
  "value.converter.schemas.enable": "false",
  "name": "iceberg-sink-connector",
  "iceberg.catalog.warehouse": "s3a://",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "iceberg.tables.default-id-columns": "cloud_name,instance_id",
  "iceberg.catalog.type": "hive",
  "key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
  "iceberg.catalog.s3.access-key-id": "XXXXX"
}
Any help please?