Hi, I am hitting `java.lang.IllegalStateException: Error while creating OptimisticTransactionDb instance lock : file:/home/centos/rocksdb/rdb/db/state_-1619083917/0/0/LOCK: No locks available` while running a stream-stream join on spark-shell. Steps to reproduce:
1. Start spark-shell on Spark 2.4.0 with `--packages com.qubole.spark:spark-rocksdb-state-store_2.11:1.0.0`.
2. Run the following in the shell:
```scala
spark.conf.set("spark.sql.streaming.stateStore.providerClass", "org.apache.spark.sql.execution.streaming.state.RocksDbStateStoreProvider")
spark.conf.set("spark.sql.streaming.stateStore.rocksDb.localDir", "file:///home/centos/rocksdb/rdb")

import org.apache.spark.sql.types.StructType

val schemaUntyped = new StructType().add("a", "int").add("b", "string")
val schemaUntyped1 = new StructType().add("a1", "int").add("b1", "string")

val stream1 = spark.readStream.schema(schemaUntyped).csv("file:///home/centos/rocksdb/s1")
val stream2 = spark.readStream.schema(schemaUntyped1).csv("file:///home/centos/rocksdb/s2")

stream1.join(stream2, stream2.col("b1").equalTo(stream1.col("b")))
  .writeStream
  .option("checkpointLocation", "file:///home/centos/rocksdb/checkpoint")
  .format("console")
  .start()
```
3. Sample data:
```
1,asd
2,dfsf
3,df
4,fdvfdv
5,fdv
6,dv
```
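For reference, here is a minimal sketch of feeding these rows to both source directories so the first micro-batch has matching join keys. Only the directory paths come from the steps above; the file name and the use of `java.nio.file` are my own illustration.

```scala
import java.nio.file.{Files, Paths}
import java.nio.charset.StandardCharsets

// The sample rows from step 3; both sides get the same key column values,
// so every row finds a join partner.
val rows = Seq("1,asd", "2,dfsf", "3,df", "4,fdvfdv", "5,fdv", "6,dv").mkString("\n")

// Source directories from the readStream calls above; "batch-0.csv" is an
// arbitrary illustrative file name.
for (dir <- Seq("/home/centos/rocksdb/s1", "/home/centos/rocksdb/s2")) {
  Files.createDirectories(Paths.get(dir))
  Files.write(Paths.get(dir, "batch-0.csv"), rows.getBytes(StandardCharsets.UTF_8))
}
```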
Please let me know if you need more info. Thanks!
**Full stack trace:**
```
2020-10-07 03:19:54 ERROR Utils:91 - Aborting task
java.lang.IllegalStateException: Error while creating OptimisticTransactionDb instance lock : file:/home/centos/rocksdb/rdb/db/state_-1619083917/0/0/LOCK: No locks available
at org.apache.spark.sql.execution.streaming.state.OptimisticTransactionDbInstance.open(RocksDbInstance.scala:281)
at org.apache.spark.sql.execution.streaming.state.OptimisticTransactionDbInstance.open(RocksDbInstance.scala:267)
at org.apache.spark.sql.execution.streaming.state.RocksDbStateStoreProvider$RocksDbStateStore.initTransaction(RocksDbStateStoreProvider.scala:119)
at org.apache.spark.sql.execution.streaming.state.RocksDbStateStoreProvider$RocksDbStateStore.get(RocksDbStateStoreProvider.scala:126)
at org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager$KeyToNumValuesStore.get(SymmetricHashJoinStateManager.scala:354)
at org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.append(SymmetricHashJoinStateManager.scala:84)
at org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinExec$OneSideHashJoiner$$anonfun$storeAndJoinWithOtherSide$1.apply(StreamingSymmetricHashJoinExec.scala:454)
at org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinExec$OneSideHashJoiner$$anonfun$storeAndJoinWithOtherSide$1.apply(StreamingSymmetricHashJoinExec.scala:440)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
at scala.collection.Iterator$JoinIterator.hasNext(Iterator.scala:212)
at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)
at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$$anonfun$run$3.apply(WriteToDataSourceV2Exec.scala:117)
at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$$anonfun$run$3.apply(WriteToDataSourceV2Exec.scala:116)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:146)
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$doExecute$2.apply(WriteToDataSourceV2Exec.scala:67)
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$doExecute$2.apply(WriteToDataSourceV2Exec.scala:66)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.rocksdb.RocksDBException: lock : file:/home/centos/rocksdb/rdb/db/state_-1619083917/0/0/LOCK: No locks available
at org.rocksdb.OptimisticTransactionDB.open(Native Method)
at org.rocksdb.OptimisticTransactionDB.open(OptimisticTransactionDB.java:40)
at org.apache.spark.sql.execution.streaming.state.OptimisticTransactionDbInstance.open(RocksDbInstance.scala:276)
... 27 more
2020-10-07 03:19:54 ERROR Utils:91 - Aborting task
java.lang.IllegalStateException: Error while creating OptimisticTransactionDb instance lock : file:/home/centos/rocksdb/rdb/db/state_-1619083917/0/1/LOCK: No locks available
at org.apache.spark.sql.execution.streaming.state.OptimisticTransactionDbInstance.open(RocksDbInstance.scala:281)
at org.apache.spark.sql.execution.streaming.state.OptimisticTransactionDbInstance.open(RocksDbInstance.scala:267)
at org.apache.spark.sql.execution.streaming.state.RocksDbStateStoreProvider$RocksDbStateStore.initTransaction(RocksDbStateStoreProvider.scala:119)
at org.apache.spark.sql.execution.streaming.state.RocksDbStateStoreProvider$RocksDbStateStore.get(RocksDbStateStoreProvider.scala:126)
at org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager$KeyToNumValuesStore.get(SymmetricHashJoinStateManager.scala:354)
at org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.append(SymmetricHashJoinStateManager.scala:84)
at org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinExec$OneSideHashJoiner$$anonfun$storeAndJoinWithOtherSide$1.apply(StreamingSymmetricHashJoinExec.scala:454)
at org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinExec$OneSideHashJoiner$$anonfun$storeAndJoinWithOtherSide$1.apply(StreamingSymmetricHashJoinExec.scala:440)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
at scala.collection.Iterator$JoinIterator.hasNext(Iterator.scala:212)
at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)
at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$$anonfun$run$3.apply(WriteToDataSourceV2Exec.scala:117)
at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$$anonfun$run$3.apply(WriteToDataSourceV2Exec.scala:116)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:146)
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$doExecute$2.apply(WriteToDataSourceV2Exec.scala:67)
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$doExecute$2.apply(WriteToDataSourceV2Exec.scala:66)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.rocksdb.RocksDBException: lock : file:/home/centos/rocksdb/rdb/db/state_-1619083917/0/1/LOCK: No locks available
at org.rocksdb.OptimisticTransactionDB.open(Native Method)
at org.rocksdb.OptimisticTransactionDB.open(OptimisticTransactionDB.java:40)
at org.apache.spark.sql.execution.streaming.state.OptimisticTransactionDbInstance.open(RocksDbInstance.scala:276)
... 27 more
2020-10-07 03:19:54 ERROR DataWritingSparkTask:70 - Aborting commit for partition 0 (task 2, attempt 0stage 2.0)
2020-10-07 03:19:54 ERROR DataWritingSparkTask:70 - Aborting commit for partition 1 (task 3, attempt 0stage 2.0)
2020-10-07 03:19:54 ERROR DataWritingSparkTask:70 - Aborted commit for partition 0 (task 2, attempt 0stage 2.0)
2020-10-07 03:19:54 ERROR DataWritingSparkTask:70 - Aborted commit for partition 1 (task 3, attempt 0stage 2.0)
2020-10-07 03:19:54 ERROR TaskContextImpl:91 - Error in TaskCompletionListener
java.lang.IllegalArgumentException: requirement failed: No DB to close
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.sql.execution.streaming.state.OptimisticTransactionDbInstance.close(RocksDbInstance.scala:346)
at org.apache.spark.sql.execution.streaming.state.RocksDbStateStoreProvider$RocksDbStateStore.close(RocksDbStateStoreProvider.scala:205)
at org.apache.spark.sql.execution.streaming.state.RocksDbStateStoreProvider$RocksDbStateStore.abort(RocksDbStateStoreProvider.scala:199)
at org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager$StateStoreHandler.abortIfNeeded(SymmetricHashJoinStateManager.scala:314)
at org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.abortIfNeeded(SymmetricHashJoinStateManager.scala:258)
at org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager$$anonfun$2$$anonfun$apply$1.apply(SymmetricHashJoinStateManager.scala:298)
at org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager$$anonfun$2$$anonfun$apply$1.apply(SymmetricHashJoinStateManager.scala:298)
at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2020-10-07 03:19:54 ERROR TaskContextImpl:91 - Error in TaskCompletionListener
java.lang.IllegalArgumentException: requirement failed: No DB to close
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.sql.execution.streaming.state.OptimisticTransactionDbInstance.close(RocksDbInstance.scala:346)
at org.apache.spark.sql.execution.streaming.state.RocksDbStateStoreProvider$RocksDbStateStore.close(RocksDbStateStoreProvider.scala:205)
at org.apache.spark.sql.execution.streaming.state.RocksDbStateStoreProvider$RocksDbStateStore.abort(RocksDbStateStoreProvider.scala:199)
at org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager$StateStoreHandler.abortIfNeeded(SymmetricHashJoinStateManager.scala:314)
at org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.abortIfNeeded(SymmetricHashJoinStateManager.scala:258)
at org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager$$anonfun$2$$anonfun$apply$1.apply(SymmetricHashJoinStateManager.scala:298)
at org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager$$anonfun$2$$anonfun$apply$1.apply(SymmetricHashJoinStateManager.scala:298)
at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2020-10-07 03:19:54 ERROR Executor:91 - Exception in task 0.0 in stage 2.0 (TID 2)
org.apache.spark.util.TaskCompletionListenerException: requirement failed: No DB to close
Previous exception in task: Error while creating OptimisticTransactionDb instance lock : file:/home/centos/rocksdb/rdb/db/state_-1619083917/0/0/LOCK: No locks available
org.apache.spark.sql.execution.streaming.state.OptimisticTransactionDbInstance.open(RocksDbInstance.scala:281)
org.apache.spark.sql.execution.streaming.state.OptimisticTransactionDbInstance.open(RocksDbInstance.scala:267)
org.apache.spark.sql.execution.streaming.state.RocksDbStateStoreProvider$RocksDbStateStore.initTransaction(RocksDbStateStoreProvider.scala:119)
org.apache.spark.sql.execution.streaming.state.RocksDbStateStoreProvider$RocksDbStateStore.get(RocksDbStateStoreProvider.scala:126)
org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager$KeyToNumValuesStore.get(SymmetricHashJoinStateManager.scala:354)
org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.append(SymmetricHashJoinStateManager.scala:84)
org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinExec$OneSideHashJoiner$$anonfun$storeAndJoinWithOtherSide$1.apply(StreamingSymmetricHashJoinExec.scala:454)
org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinExec$OneSideHashJoiner$$anonfun$storeAndJoinWithOtherSide$1.apply(StreamingSymmetricHashJoinExec.scala:440)
scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
scala.collection.Iterator$JoinIterator.hasNext(Iterator.scala:212)
org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)
scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)
org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$$anonfun$run$3.apply(WriteToDataSourceV2Exec.scala:117)
org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$$anonfun$run$3.apply(WriteToDataSourceV2Exec.scala:116)
org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:146)
org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$doExecute$2.apply(WriteToDataSourceV2Exec.scala:67)
org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$doExecute$2.apply(WriteToDataSourceV2Exec.scala:66)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
org.apache.spark.scheduler.Task.run(Task.scala:121)
org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2020-10-07 03:19:54 ERROR Executor:91 - Exception in task 1.0 in stage 2.0 (TID 3)
org.apache.spark.util.TaskCompletionListenerException: requirement failed: No DB to close
Previous exception in task: Error while creating OptimisticTransactionDb instance lock : file:/home/centos/rocksdb/rdb/db/state_-1619083917/0/1/LOCK: No locks available
org.apache.spark.sql.execution.streaming.state.OptimisticTransactionDbInstance.open(RocksDbInstance.scala:281)
org.apache.spark.sql.execution.streaming.state.OptimisticTransactionDbInstance.open(RocksDbInstance.scala:267)
org.apache.spark.sql.execution.streaming.state.RocksDbStateStoreProvider$RocksDbStateStore.initTransaction(RocksDbStateStoreProvider.scala:119)
org.apache.spark.sql.execution.streaming.state.RocksDbStateStoreProvider$RocksDbStateStore.get(RocksDbStateStoreProvider.scala:126)
org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager$KeyToNumValuesStore.get(SymmetricHashJoinStateManager.scala:354)
org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.append(SymmetricHashJoinStateManager.scala:84)
org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinExec$OneSideHashJoiner$$anonfun$storeAndJoinWithOtherSide$1.apply(StreamingSymmetricHashJoinExec.scala:454)
org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinExec$OneSideHashJoiner$$anonfun$storeAndJoinWithOtherSide$1.apply(StreamingSymmetricHashJoinExec.scala:440)
scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
scala.collection.Iterator$JoinIterator.hasNext(Iterator.scala:212)
org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)
scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)
org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$$anonfun$run$3.apply(WriteToDataSourceV2Exec.scala:117)
org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$$anonfun$run$3.apply(WriteToDataSourceV2Exec.scala:116)
org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:146)
org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$doExecute$2.apply(WriteToDataSourceV2Exec.scala:67)
org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$doExecute$2.apply(WriteToDataSourceV2Exec.scala:66)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
org.apache.spark.scheduler.Task.run(Task.scala:121)
org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2020-10-07 03:19:54 WARN TaskSetManager:66 - Lost task 0.0 in stage 2.0 (TID 2, localhost, executor driver): org.apache.spark.util.TaskCompletionListenerException: requirement failed: No DB to close
Previous exception in task: Error while creating OptimisticTransactionDb instance lock : file:/home/centos/rocksdb/rdb/db/state_-1619083917/0/0/LOCK: No locks available
org.apache.spark.sql.execution.streaming.state.OptimisticTransactionDbInstance.open(RocksDbInstance.scala:281)
org.apache.spark.sql.execution.streaming.state.OptimisticTransactionDbInstance.open(RocksDbInstance.scala:267)
org.apache.spark.sql.execution.streaming.state.RocksDbStateStoreProvider$RocksDbStateStore.initTransaction(RocksDbStateStoreProvider.scala:119)
org.apache.spark.sql.execution.streaming.state.RocksDbStateStoreProvider$RocksDbStateStore.get(RocksDbStateStoreProvider.scala:126)
org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager$KeyToNumValuesStore.get(SymmetricHashJoinStateManager.scala:354)
org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.append(SymmetricHashJoinStateManager.scala:84)
org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinExec$OneSideHashJoiner$$anonfun$storeAndJoinWithOtherSide$1.apply(StreamingSymmetricHashJoinExec.scala:454)
org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinExec$OneSideHashJoiner$$anonfun$storeAndJoinWithOtherSide$1.apply(StreamingSymmetricHashJoinExec.scala:440)
scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
scala.collection.Iterator$JoinIterator.hasNext(Iterator.scala:212)
org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)
scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)
org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$$anonfun$run$3.apply(WriteToDataSourceV2Exec.scala:117)
org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$$anonfun$run$3.apply(WriteToDataSourceV2Exec.scala:116)
org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:146)
org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$doExecute$2.apply(WriteToDataSourceV2Exec.scala:67)
org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$doExecute$2.apply(WriteToDataSourceV2Exec.scala:66)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
org.apache.spark.scheduler.Task.run(Task.scala:121)
org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2020-10-07 03:19:54 ERROR TaskSetManager:70 - Task 0 in stage 2.0 failed 1 times; aborting job
2020-10-07 03:19:54 WARN TaskSetManager:66 - Lost task 1.0 in stage 2.0 (TID 3, localhost, executor driver): org.apache.spark.util.TaskCompletionListenerException: requirement failed: No DB to close
Previous exception in task: Error while creating OptimisticTransactionDb instance lock : file:/home/centos/rocksdb/rdb/db/state_-1619083917/0/1/LOCK: No locks available
org.apache.spark.sql.execution.streaming.state.OptimisticTransactionDbInstance.open(RocksDbInstance.scala:281)
org.apache.spark.sql.execution.streaming.state.OptimisticTransactionDbInstance.open(RocksDbInstance.scala:267)
org.apache.spark.sql.execution.streaming.state.RocksDbStateStoreProvider$RocksDbStateStore.initTransaction(RocksDbStateStoreProvider.scala:119)
org.apache.spark.sql.execution.streaming.state.RocksDbStateStoreProvider$RocksDbStateStore.get(RocksDbStateStoreProvider.scala:126)
org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager$KeyToNumValuesStore.get(SymmetricHashJoinStateManager.scala:354)
org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.append(SymmetricHashJoinStateManager.scala:84)
org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinExec$OneSideHashJoiner$$anonfun$storeAndJoinWithOtherSide$1.apply(StreamingSymmetricHashJoinExec.scala:454)
org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinExec$OneSideHashJoiner$$anonfun$storeAndJoinWithOtherSide$1.apply(StreamingSymmetricHashJoinExec.scala:440)
scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
scala.collection.Iterator$JoinIterator.hasNext(Iterator.scala:212)
org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)
scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)
org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$$anonfun$run$3.apply(WriteToDataSourceV2Exec.scala:117)
org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$$anonfun$run$3.apply(WriteToDataSourceV2Exec.scala:116)
org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:146)
org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$doExecute$2.apply(WriteToDataSourceV2Exec.scala:67)
org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$doExecute$2.apply(WriteToDataSourceV2Exec.scala:66)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
org.apache.spark.scheduler.Task.run(Task.scala:121)
org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2020-10-07 03:19:54 ERROR WriteToDataSourceV2Exec:70 - Data source writer org.apache.spark.sql.execution.streaming.sources.MicroBatchWriter@2c643740 is aborting.
2020-10-07 03:19:54 ERROR WriteToDataSourceV2Exec:70 - Data source writer org.apache.spark.sql.execution.streaming.sources.MicroBatchWriter@2c643740 aborted.
2020-10-07 03:19:54 ERROR MicroBatchExecution:91 - Query [id = 0ae82f91-8457-4efd-983e-93abe15c5caf, runId = aedfa63f-e4fc-4dd1-9ec9-1584394f0d3a] terminated with error
org.apache.spark.SparkException: Writing job aborted.
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2Exec.scala:92)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:247)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:296)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3384)
at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2783)
at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2783)
at org.apache.spark.sql.Dataset$$anonfun$53.apply(Dataset.scala:3365)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3364)
at org.apache.spark.sql.Dataset.collect(Dataset.scala:2783)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$5$$anonfun$apply$17.apply(MicroBatchExecution.scala:537)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$5.apply(MicroBatchExecution.scala:532)
at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch(MicroBatchExecution.scala:531)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(MicroBatchExecution.scala:198)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1.apply$mcZ$sp(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:160)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:279)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 1 times, most recent failure: Lost task 0.0 in stage 2.0 (TID 2, localhost, executor driver): org.apache.spark.util.TaskCompletionListenerException: requirement failed: No DB to close
Previous exception in task: Error while creating OptimisticTransactionDb instance lock : file:/home/centos/rocksdb/rdb/db/state_-1619083917/0/0/LOCK: No locks available
org.apache.spark.sql.execution.streaming.state.OptimisticTransactionDbInstance.open(RocksDbInstance.scala:281)
org.apache.spark.sql.execution.streaming.state.OptimisticTransactionDbInstance.open(RocksDbInstance.scala:267)
org.apache.spark.sql.execution.streaming.state.RocksDbStateStoreProvider$RocksDbStateStore.initTransaction(RocksDbStateStoreProvider.scala:119)
org.apache.spark.sql.execution.streaming.state.RocksDbStateStoreProvider$RocksDbStateStore.get(RocksDbStateStoreProvider.scala:126)
org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager$KeyToNumValuesStore.get(SymmetricHashJoinStateManager.scala:354)
org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.append(SymmetricHashJoinStateManager.scala:84)
org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinExec$OneSideHashJoiner$$anonfun$storeAndJoinWithOtherSide$1.apply(StreamingSymmetricHashJoinExec.scala:454)
org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinExec$OneSideHashJoiner$$anonfun$storeAndJoinWithOtherSide$1.apply(StreamingSymmetricHashJoinExec.scala:440)
scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
scala.collection.Iterator$JoinIterator.hasNext(Iterator.scala:212)
org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)
scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)
org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$$anonfun$run$3.apply(WriteToDataSourceV2Exec.scala:117)
org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$$anonfun$run$3.apply(WriteToDataSourceV2Exec.scala:116)
org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:146)
org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$doExecute$2.apply(WriteToDataSourceV2Exec.scala:67)
org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$doExecute$2.apply(WriteToDataSourceV2Exec.scala:66)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
org.apache.spark.scheduler.Task.run(Task.scala:121)
org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2Exec.scala:64)
... 35 more
```
@nbalusu - I couldn't look into it. I think there is some corner case we need to handle when dealing with stream-stream joins. Unfortunately I won't be able to fix it myself since I have moved out of Qubole, but I can help review the design/code if you want to work on it.
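A speculative note based only on the trace above: `No locks available` is the OS message for `ENOLCK`, raised when the advisory `fcntl` lock RocksDB takes on its `LOCK` file fails at the filesystem level (an intra-process double-open of the same directory produces a different "lock hold by current process" message). The usual trigger is a directory on NFS without a working lock daemon. Since `localDir` here is under `/home/centos`, one quick check is pointing it at node-local disk; a stream-stream join opens several state stores per partition, so it takes many more of these locks than a simple aggregation and may simply be the first operator to expose the problem. A hedged sketch of that check, with an assumed local path:

```scala
// Hypothetical workaround: point the RocksDB working directory at a path on a
// local (non-NFS) filesystem so fcntl locks on the LOCK file can succeed.
// "/tmp/rocksdb-state" is an assumed example path, not from the original repro.
spark.conf.set(
  "spark.sql.streaming.stateStore.rocksDb.localDir",
  "file:///tmp/rocksdb-state")
```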