--- command line
Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

Install the latest PowerShell for new features and improvements! https://aka.ms/PSWindows

PS C:\Users\Lenovo\Desktop\BD_Elastic_Leyla_Baxridinova> docker compose up
[+] Running 35/3
 ✔ elasticsearch Pulled  3354.9s
 ✔ logstash Pulled       3288.6s
 ✔ kibana Pulled         2166.2s
[+] Running 1/1
 ✘ Network bd_elastic_leyla_baxridinova_bigdata_network  Error  15.0s
failed to create network bd_elastic_leyla_baxridinova_bigdata_network: Error response from daemon: plugin "local" not found
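--- note
The first "docker compose up" fails because Docker cannot find a network plugin
named "local": "local" is a volume driver, not a network driver, so a compose
file that declares driver: local under networks: produces exactly this error.
A minimal sketch of a corrected block, assuming bigdata_network was declared in
docker-compose.yml (the name is taken from the error message):

networks:
  bigdata_network:
    driver: bridge   # "local" is only valid for volumes; bridge is the default network driver

The retry below succeeds by creating the project default network
(bd_elastic_leyla_baxridinova_default) instead, which suggests the custom
network block was removed rather than fixed.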
PS C:\Users\Lenovo\Desktop\BD_Elastic_Leyla_Baxridinova> docker compose up
[+] Running 4/4
 ✔ Network bd_elastic_leyla_baxridinova_default  Created  0.0s
 ✔ Container es                                  Created  0.1s
 ✔ Container kib                                 Created  0.1s
 ✔ Container log                                 Created  0.1s
Attaching to es, kib, log
log  | Using bundled JDK: /usr/share/logstash/jdk
log  | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Gracefully stopping... (press Ctrl+C again to force)
[+] Stopping 3/3
 ✔ Container log  Stopped  0.7s
 ✔ Container kib  Stopped  0.6s
 ✔ Container es   Stopped  0.4s
PS C:\Users\Lenovo\Desktop\BD_Elastic_Leyla_Baxridinova> docker ps -a
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS                             PORTS                                                                                            NAMES
2f581b737b07   kibana:7.16.1          "/bin/tini -- /usr/l…"   40 seconds ago   Up 16 seconds                      0.0.0.0:5601->5601/tcp                                                                           kib
264e1808c009   logstash:7.16.1        "/usr/local/bin/dock…"   40 seconds ago   Up 16 seconds                      0.0.0.0:5000->5000/tcp, 0.0.0.0:5044->5044/tcp, 0.0.0.0:9600->9600/tcp, 0.0.0.0:5000->5000/udp   log
6d87bd20cd3e   elasticsearch:7.16.1   "/bin/tini -- /usr/l…"   40 seconds ago   Up 16 seconds (health: starting)   0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp                                                   es

PS C:\Users\Lenovo\Desktop\BD_Elastic_Leyla_Baxridinova> docker inspect bd_elastic_leyla_baxridinova_default
[
    {
        "Name": "bd_elastic_leyla_baxridinova_default",
        "Id": "dd4b26d8eef6e45f91a8690b9e1dee3f46911d083f7aa30c3436f6d9d04fddd7",
        "Created": "2024-05-19T07:17:41.837439845Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "2f581b737b073fbf446b4425a23f76442fa66ca77ae9210229709b149930406e": {
                "Name": "kib",
                "EndpointID": "396c1875dfd567fd1ab3d2036a15a49612b9a62f9260f4d6a8a3511d7977fc1e",
                "MacAddress": "02:42:ac:13:00:03",
                "IPv4Address": "172.19.0.3/16",
                "IPv6Address": ""
            },
            "6d87bd20cd3e3f86e67d863e264f57818bbd3151aba0ee6eaec62ad0136a9b04": {
                "Name": "es",
                "EndpointID": "59ba7b8221eb3a5c91b41b448725deb80a1d1e39a4e1a563a47e0bbec6b1ade2",
                "MacAddress": "02:42:ac:13:00:02",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "bd_elastic_leyla_baxridinova",
            "com.docker.compose.version": "2.27.0"
        }
    }
]
PS C:\Users\Lenovo\Desktop\BD_Elastic_Leyla_Baxridinova>
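--- note
The inspect output shows "es" and "kib" on the compose bridge network with the
internal addresses 172.19.0.2 and 172.19.0.3. On Docker Desktop for Windows
these bridge IPs are not routable from the host; host-side clients must go
through the published ports (0.0.0.0:9200->9200/tcp above). One way to verify
from the same PowerShell session (curl.exe ships with current Windows 10/11):

curl.exe http://localhost:9200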
--- stacktrace
Connected to the target VM, address: '127.0.0.1:55141', transport: 'socket'
Connecting to Elasticsearch...
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/C:/Users/Lenovo/.m2/repository/org/apache/spark/spark-unsafe_2.12/3.1.2/spark-unsafe_2.12-3.1.2.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
24/05/19 13:21:32 INFO SparkContext: Running Spark version 3.1.2
24/05/19 13:21:32 INFO ResourceUtils: ==============================================================
24/05/19 13:21:32 INFO ResourceUtils: No custom resources configured for spark.driver.
24/05/19 13:21:32 INFO ResourceUtils: ==============================================================
24/05/19 13:21:32 INFO SparkContext: Submitted application: StreamingElastic
24/05/19 13:21:32 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
24/05/19 13:21:32 INFO ResourceProfile: Limiting resource is cpu
24/05/19 13:21:32 INFO ResourceProfileManager: Added ResourceProfile id: 0
24/05/19 13:21:32 INFO SecurityManager: Changing view acls to: Lenovo
24/05/19 13:21:32 INFO SecurityManager: Changing modify acls to: Lenovo
24/05/19 13:21:32 INFO SecurityManager: Changing view acls groups to:
24/05/19 13:21:32 INFO SecurityManager: Changing modify acls groups to:
24/05/19 13:21:32 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(Lenovo); groups with view permissions: Set(); users  with modify permissions: Set(Lenovo); groups with modify permissions: Set()
24/05/19 13:21:33 INFO Utils: Successfully started service 'sparkDriver' on port 55181.
24/05/19 13:21:33 INFO SparkEnv: Registering MapOutputTracker
24/05/19 13:21:33 INFO SparkEnv: Registering BlockManagerMaster
24/05/19 13:21:33 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
24/05/19 13:21:33 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
24/05/19 13:21:33 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
24/05/19 13:21:33 INFO DiskBlockManager: Created local directory at C:\Users\Lenovo\AppData\Local\Temp\blockmgr-7c0bf709-1376-42b9-86c9-eaf06a20e829
24/05/19 13:21:33 INFO MemoryStore: MemoryStore started with capacity 4.6 GiB
24/05/19 13:21:33 INFO SparkEnv: Registering OutputCommitCoordinator
24/05/19 13:21:33 INFO Utils: Successfully started service 'SparkUI' on port 4040.
24/05/19 13:21:33 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://Lenovo:4040
24/05/19 13:21:34 INFO Executor: Starting executor ID driver on host Lenovo
24/05/19 13:21:34 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 55232.
24/05/19 13:21:34 INFO NettyBlockTransferService: Server created on Lenovo:55232
24/05/19 13:21:34 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
24/05/19 13:21:34 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, Lenovo, 55232, None)
24/05/19 13:21:34 INFO BlockManagerMasterEndpoint: Registering block manager Lenovo:55232 with 4.6 GiB RAM, BlockManagerId(driver, Lenovo, 55232, None)
24/05/19 13:21:34 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, Lenovo, 55232, None)
24/05/19 13:21:34 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, Lenovo, 55232, None)
Preparing simple data...
24/05/19 13:21:34 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/D:/IdeaProjects/University/Year%202/Semester%204/Big%20data/learning_bigdata_elasticsearch_spark_to_es_simple_app-main/spark-warehouse').
24/05/19 13:21:34 INFO SharedState: Warehouse path is 'file:/D:/IdeaProjects/University/Year%202/Semester%204/Big%20data/learning_bigdata_elasticsearch_spark_to_es_simple_app-main/spark-warehouse'.
24/05/19 13:21:35 INFO Version: Elasticsearch Hadoop v8.12.0 [e138d23add]
24/05/19 13:21:35 INFO InMemoryFileIndex: It took 46 ms to list leaf files for 1 paths.
24/05/19 13:21:35 INFO InMemoryFileIndex: It took 1 ms to list leaf files for 1 paths.
24/05/19 13:21:36 INFO FileSourceStrategy: Pushed Filters:
24/05/19 13:21:36 INFO FileSourceStrategy: Post-Scan Filters: (length(trim(value#0, None)) > 0)
24/05/19 13:21:36 INFO FileSourceStrategy: Output Data Schema: struct<value: string>
24/05/19 13:21:37 INFO CodeGenerator: Code generated in 104.9 ms
24/05/19 13:21:37 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 174.5 KiB, free 4.6 GiB)
24/05/19 13:21:37 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 27.7 KiB, free 4.6 GiB)
24/05/19 13:21:37 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on Lenovo:55232 (size: 27.7 KiB, free: 4.6 GiB)
24/05/19 13:21:37 INFO SparkContext: Created broadcast 0 from csv at SparkJavaElasticStreamTest.java:23
24/05/19 13:21:37 INFO FileSourceScanExec: Planning scan with bin packing, max size: 4194304 bytes, open cost is considered as scanning 4194304 bytes.
24/05/19 13:21:37 INFO SparkContext: Starting job: csv at SparkJavaElasticStreamTest.java:23
24/05/19 13:21:37 INFO DAGScheduler: Got job 0 (csv at SparkJavaElasticStreamTest.java:23) with 1 output partitions
24/05/19 13:21:37 INFO DAGScheduler: Final stage: ResultStage 0 (csv at SparkJavaElasticStreamTest.java:23)
24/05/19 13:21:37 INFO DAGScheduler: Parents of final stage: List()
24/05/19 13:21:37 INFO DAGScheduler: Missing parents: List()
24/05/19 13:21:37 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[3] at csv at SparkJavaElasticStreamTest.java:23), which has no missing parents
24/05/19 13:21:37 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 10.8 KiB, free 4.6 GiB)
24/05/19 13:21:37 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 5.4 KiB, free 4.6 GiB)
24/05/19 13:21:37 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on Lenovo:55232 (size: 5.4 KiB, free: 4.6 GiB)
24/05/19 13:21:37 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1388
24/05/19 13:21:37 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[3] at csv at SparkJavaElasticStreamTest.java:23) (first 15 tasks are for partitions Vector(0))
24/05/19 13:21:37 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks resource profile 0
24/05/19 13:21:37 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0) (Lenovo, executor driver, partition 0, PROCESS_LOCAL, 4984 bytes) taskResourceAssignments Map()
24/05/19 13:21:37 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
24/05/19 13:21:37 INFO FileScanRDD: Reading File path: file:///D:/IdeaProjects/University/Year%202/Semester%204/Big%20data/learning_bigdata_elasticsearch_spark_to_es_simple_app-main/src/main/resources/test/test.csv, range: 0-248, partition values: [empty row]
24/05/19 13:21:37 INFO CodeGenerator: Code generated in 11.7618 ms
24/05/19 13:21:37 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1659 bytes result sent to driver
24/05/19 13:21:37 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 188 ms on Lenovo (executor driver) (1/1)
24/05/19 13:21:37 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
24/05/19 13:21:37 INFO DAGScheduler: ResultStage 0 (csv at SparkJavaElasticStreamTest.java:23) finished in 0.298 s
24/05/19 13:21:37 INFO DAGScheduler: Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
24/05/19 13:21:37 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage finished
24/05/19 13:21:37 INFO DAGScheduler: Job 0 finished: csv at SparkJavaElasticStreamTest.java:23, took 0.339042 s
24/05/19 13:21:37 INFO CodeGenerator: Code generated in 7.8712 ms
24/05/19 13:21:37 INFO FileSourceStrategy: Pushed Filters:
24/05/19 13:21:37 INFO FileSourceStrategy: Post-Scan Filters:
24/05/19 13:21:37 INFO FileSourceStrategy: Output Data Schema: struct<value: string>
24/05/19 13:21:37 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 174.5 KiB, free 4.6 GiB)
24/05/19 13:21:37 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 27.7 KiB, free 4.6 GiB)
24/05/19 13:21:37 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on Lenovo:55232 (size: 27.7 KiB, free: 4.6 GiB)
24/05/19 13:21:37 INFO SparkContext: Created broadcast 2 from csv at SparkJavaElasticStreamTest.java:23
24/05/19 13:21:37 INFO FileSourceScanExec: Planning scan with bin packing, max size: 4194304 bytes, open cost is considered as scanning 4194304 bytes.
24/05/19 13:21:37 INFO InMemoryFileIndex: It took 2 ms to list leaf files for 1 paths.
Writing simple data...
24/05/19 13:21:38 INFO FileSourceStrategy: Pushed Filters:
24/05/19 13:21:38 INFO FileSourceStrategy: Post-Scan Filters:
24/05/19 13:21:38 INFO FileSourceStrategy: Output Data Schema: struct<id: string, franchise_id: string, franchise_name: string, restaurant_franchise_id: string, country: string ... 10 more fields>
24/05/19 13:21:38 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 174.4 KiB, free 4.6 GiB)
24/05/19 13:21:38 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 27.6 KiB, free 4.6 GiB)
24/05/19 13:21:38 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on Lenovo:55232 (size: 27.6 KiB, free: 4.6 GiB)
24/05/19 13:21:38 INFO SparkContext: Created broadcast 3 from rdd at EsSparkSQL.scala:103
24/05/19 13:21:38 INFO FileSourceScanExec: Planning scan with bin packing, max size: 4194304 bytes, open cost is considered as scanning 4194304 bytes.
24/05/19 13:21:38 INFO SparkContext: Starting job: runJob at EsSparkSQL.scala:103
24/05/19 13:21:38 INFO DAGScheduler: Got job 1 (runJob at EsSparkSQL.scala:103) with 1 output partitions
24/05/19 13:21:38 INFO DAGScheduler: Final stage: ResultStage 1 (runJob at EsSparkSQL.scala:103)
24/05/19 13:21:38 INFO DAGScheduler: Parents of final stage: List()
24/05/19 13:21:38 INFO DAGScheduler: Missing parents: List()
24/05/19 13:21:38 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[14] at rdd at EsSparkSQL.scala:103), which has no missing parents
24/05/19 13:21:38 INFO MemoryStore: Block broadcast_4 stored as values in memory (estimated size 16.2 KiB, free 4.6 GiB)
24/05/19 13:21:38 INFO MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 7.9 KiB, free 4.6 GiB)
24/05/19 13:21:38 INFO BlockManagerInfo: Added broadcast_4_piece0 in memory on Lenovo:55232 (size: 7.9 KiB, free: 4.6 GiB)
24/05/19 13:21:38 INFO SparkContext: Created broadcast 4 from broadcast at DAGScheduler.scala:1388
24/05/19 13:21:38 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[14] at rdd at EsSparkSQL.scala:103) (first 15 tasks are for partitions Vector(0))
24/05/19 13:21:38 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks resource profile 0
24/05/19 13:21:38 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1) (Lenovo, executor driver, partition 0, PROCESS_LOCAL, 4984 bytes) taskResourceAssignments Map()
24/05/19 13:21:38 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
24/05/19 13:21:38 INFO CodeGenerator: Code generated in 19.5392 ms
24/05/19 13:21:38 WARN Resource: Detected type name in resource [receipt_restaurants/data]. Type names are deprecated and will be removed in a later release.
24/05/19 13:21:38 INFO EsDataFrameWriter: Writing to [receipt_restaurants/data]
24/05/19 13:21:48 WARN ProcfsMetricsGetter: Exception when trying to compute pagesize, as a result reporting of ProcessTree metrics is stopped
24/05/19 13:21:59 INFO HttpMethodDirector: I/O exception (java.net.ConnectException) caught when processing request: Connection timed out: connect
24/05/19 13:21:59 INFO HttpMethodDirector: Retrying request
24/05/19 13:22:20 INFO HttpMethodDirector: I/O exception (java.net.ConnectException) caught when processing request: Connection timed out: connect
24/05/19 13:22:20 INFO HttpMethodDirector: Retrying request
24/05/19 13:22:41 INFO HttpMethodDirector: I/O exception (java.net.ConnectException) caught when processing request: Connection timed out: connect
24/05/19 13:22:41 INFO HttpMethodDirector: Retrying request
24/05/19 13:23:02 ERROR NetworkClient: Node [172.19.0.2:9200] failed (java.net.ConnectException: Connection timed out: connect); no other nodes left - aborting...
24/05/19 13:23:02 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[172.19.0.2:9200]]
    at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:160)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:442)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:438)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:398)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:402)
    at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:178)
    at org.elasticsearch.hadoop.rest.request.GetAliasesRequestBuilder.execute(GetAliasesRequestBuilder.java:68)
    at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:620)
    at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:71)
    at org.elasticsearch.spark.sql.EsSparkSQL$.$anonfun$saveToEs$1(EsSparkSQL.scala:103)
    at org.elasticsearch.spark.sql.EsSparkSQL$.$anonfun$saveToEs$1$adapted(EsSparkSQL.scala:103)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
24/05/19 13:23:02 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID 1) (Lenovo executor driver): org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[172.19.0.2:9200]]
    at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:160)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:442)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:438)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:398)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:402)
    at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:178)
    at org.elasticsearch.hadoop.rest.request.GetAliasesRequestBuilder.execute(GetAliasesRequestBuilder.java:68)
    at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:620)
    at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:71)
    at org.elasticsearch.spark.sql.EsSparkSQL$.$anonfun$saveToEs$1(EsSparkSQL.scala:103)
    at org.elasticsearch.spark.sql.EsSparkSQL$.$anonfun$saveToEs$1$adapted(EsSparkSQL.scala:103)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)

24/05/19 13:23:02 ERROR TaskSetManager: Task 0 in stage 1.0 failed 1 times; aborting job
24/05/19 13:23:02 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
24/05/19 13:23:02 INFO TaskSchedulerImpl: Cancelling stage 1
24/05/19 13:23:02 INFO TaskSchedulerImpl: Killing all running tasks in stage 1: Stage cancelled
24/05/19 13:23:02 INFO DAGScheduler: ResultStage 1 (runJob at EsSparkSQL.scala:103) failed in 84.345 s due to Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1) (Lenovo executor driver): org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[172.19.0.2:9200]]
    at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:160)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:442)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:438)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:398)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:402)
    at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:178)
    at org.elasticsearch.hadoop.rest.request.GetAliasesRequestBuilder.execute(GetAliasesRequestBuilder.java:68)
    at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:620)
    at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:71)
    at org.elasticsearch.spark.sql.EsSparkSQL$.$anonfun$saveToEs$1(EsSparkSQL.scala:103)
    at org.elasticsearch.spark.sql.EsSparkSQL$.$anonfun$saveToEs$1$adapted(EsSparkSQL.scala:103)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)

Driver stacktrace:
24/05/19 13:23:02 INFO DAGScheduler: Job 1 failed: runJob at EsSparkSQL.scala:103, took 84.348645 s
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1) (Lenovo executor driver): org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[172.19.0.2:9200]]
    at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:160)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:442)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:438)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:398)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:402)
    at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:178)
    at org.elasticsearch.hadoop.rest.request.GetAliasesRequestBuilder.execute(GetAliasesRequestBuilder.java:68)
    at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:620)
    at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:71)
    at org.elasticsearch.spark.sql.EsSparkSQL$.$anonfun$saveToEs$1(EsSparkSQL.scala:103)
    at org.elasticsearch.spark.sql.EsSparkSQL$.$anonfun$saveToEs$1$adapted(EsSparkSQL.scala:103)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2258)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2207)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2206)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2206)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1079)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1079)
    at scala.Option.foreach(Option.scala:407)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1079)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2445)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2387)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2376)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:868)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2196)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2217)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2249)
    at org.elasticsearch.spark.sql.EsSparkSQL$.saveToEs(EsSparkSQL.scala:103)
    at org.elasticsearch.spark.sql.ElasticsearchRelation.insert(DefaultSource.scala:629)
    at org.elasticsearch.spark.sql.DefaultSource.createRelation(DefaultSource.scala:107)
    at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:90)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:132)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:131)
    at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:989)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:989)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:438)
    at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:415)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:301)
    at biz.svyatoslav.learning.bigdata.elasticsearch.SparkJavaElasticStreamTest.main(SparkJavaElasticStreamTest.java:38)
Caused by: org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[172.19.0.2:9200]]
    at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:160)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:442)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:438)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:398)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:402)
    at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:178)
    at org.elasticsearch.hadoop.rest.request.GetAliasesRequestBuilder.execute(GetAliasesRequestBuilder.java:68)
    at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:620)
    at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:71)
    at org.elasticsearch.spark.sql.EsSparkSQL$.$anonfun$saveToEs$1(EsSparkSQL.scala:103)
    at org.elasticsearch.spark.sql.EsSparkSQL$.$anonfun$saveToEs$1$adapted(EsSparkSQL.scala:103)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
24/05/19 13:23:02 INFO SparkContext: Invoking stop() from shutdown hook
24/05/19 13:23:02 INFO SparkUI: Stopped Spark web UI at http://Lenovo:4040
24/05/19 13:23:02 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
24/05/19 13:23:02 INFO MemoryStore: MemoryStore cleared
24/05/19 13:23:02 INFO BlockManager: BlockManager stopped
24/05/19 13:23:02 INFO BlockManagerMaster: BlockManagerMaster stopped
24/05/19 13:23:02 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
24/05/19 13:23:02 INFO SparkContext: Successfully stopped SparkContext
24/05/19 13:23:02 INFO ShutdownHookManager: Shutdown hook called
24/05/19 13:23:02 INFO ShutdownHookManager: Deleting directory C:\Users\Lenovo\AppData\Local\Temp\spark-bea50fa0-d01b-46c9-865c-b815d505e006
Disconnected from the target VM, address: '127.0.0.1:55141', transport: 'socket'

Process finished with exit code 0
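--- diagnosis
The write fails with EsHadoopNoNodesLeftException against [172.19.0.2:9200]:
the elasticsearch-hadoop connector, running on the Windows host, resolved the
Elasticsearch node to its internal bridge address and timed out, because that
address is only reachable from inside the Docker network. The usual fix is to
point the connector at the published port and set es.nodes.wan.only so it
stops discovering and dialing internal node addresses. A minimal sketch in
Java, assuming the SparkJavaElasticStreamTest entry point and test.csv path
seen in the log (class and option placement here are illustrative, not the
project's actual code):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkToEsSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("StreamingElastic")
                .master("local[*]")
                .getOrCreate();

        // Read the same test CSV the log shows being scanned.
        Dataset<Row> df = spark.read()
                .option("header", "true")
                .csv("src/main/resources/test/test.csv");

        df.write()
                .format("org.elasticsearch.spark.sql")
                // Published port on the host, not the container's bridge IP.
                .option("es.nodes", "localhost")
                .option("es.port", "9200")
                // Do not discover internal node addresses (172.19.0.x);
                // talk only to the configured endpoint.
                .option("es.nodes.wan.only", "true")
                .mode("append")
                // ES 7+: index only, no "/data" mapping type.
                .save("receipt_restaurants");

        spark.stop();
    }
}

Two smaller issues also show up in the log: the connector is Elasticsearch
Hadoop v8.12.0 while the containers run Elasticsearch 7.16.1, so the client
and cluster versions are worth aligning, and the resource
[receipt_restaurants/data] carries a mapping type, which triggers the
deprecation warning; writing to "receipt_restaurants" alone avoids it.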