kriteshpokharel

HP Vertica Sqoop

Oct 18th, 2021
-jt <local|jobtracker:port>                     specify a job tracker
-files <comma separated list of files>          specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>         specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]


At minimum, you must specify --connect and --table
Arguments to mysqldump and other subprograms may be supplied
after a '--' on the command line.
[cloudera@localhost ~]$ sqoop import --connect "jdbc:mysql://localhost/kp" --username root --table sty --target-dir /user/cloudera/perus432 --incremental append --check-column rno --lastvalue 105 -m 1
21/10/18 03:24:31 ERROR tool.BaseSqoopTool: Error parsing arguments for import:
21/10/18 03:24:31 ERROR tool.BaseSqoopTool: Unrecognized argument: --lastvalue
21/10/18 03:24:31 ERROR tool.BaseSqoopTool: Unrecognized argument: 105
21/10/18 03:24:31 ERROR tool.BaseSqoopTool: Unrecognized argument: -m
21/10/18 03:24:31 ERROR tool.BaseSqoopTool: Unrecognized argument: 1

Try --help for usage instructions.
usage: sqoop import [GENERIC-ARGS] [TOOL-ARGS]

Common arguments:
--connect <jdbc-uri>                          Specify JDBC connect string
--connection-manager <class-name>             Specify connection manager class name
--connection-param-file <properties-file>     Specify connection parameters file
--driver <class-name>                         Manually specify JDBC driver class to use
--hadoop-home <hdir>                          Override $HADOOP_MAPRED_HOME_ARG
--hadoop-mapred-home <dir>                    Override $HADOOP_MAPRED_HOME_ARG
--help                                        Print usage instructions
-P                                            Read password from console
--password <password>                         Set authentication password
--password-file <password-file>               Set authentication password file path
--username <username>                         Set authentication username
--verbose                                     Print more information while working

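A note on the authentication flags above: the session below passes only --username, with no password flag at all, which happens to work on this VM but will not against a secured database. A minimal sketch using the flags just listed (sqoop list-tables is used here only as a harmless probe):

sqoop list-tables --connect "jdbc:mysql://localhost/kp" --username root -P

-P prompts for the password on the console; for unattended runs, --password-file keeps the password out of shell history and process listings.
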
Import control arguments:
--append                                                  Imports data in append mode
--as-avrodatafile                                         Imports data to Avro data files
--as-sequencefile                                         Imports data to SequenceFiles
--as-textfile                                             Imports data as plain text (default)
--boundary-query <statement>                              Set boundary query for retrieving max and min value of the primary key
--columns <col,col,col...>                                Columns to import from table
--compression-codec <codec>                               Compression codec to use for import
--delete-target-dir                                       Imports data in delete mode
--direct                                                  Use direct import fast path
--direct-split-size <n>                                   Split the input stream every 'n' bytes when importing in direct mode
-e,--query <statement>                                    Import results of SQL 'statement'
--fetch-size <n>                                          Set number 'n' of rows to fetch from the database when more rows are needed
--inline-lob-limit <n>                                    Set the maximum size for an inline LOB
-m,--num-mappers <n>                                      Use 'n' map tasks to import in parallel
--mapreduce-job-name <name>                               Set name for generated mapreduce job
--split-by <column-name>                                  Column of the table used to split work units
--table <table-name>                                      Table to read
--target-dir <dir>                                        HDFS plain table destination
--validate                                                Validate the copy using the configured validator
--validation-failurehandler <validation-failurehandler>   Fully qualified class name for ValidationFailureHandler
--validation-threshold <validation-threshold>             Fully qualified class name for ValidationThreshold
--validator <validator>                                   Fully qualified class name for the Validator
--warehouse-dir <dir>                                     HDFS parent for table destination
--where <where clause>                                    WHERE clause to use during import
-z,--compress                                             Enable compression

Incremental import arguments:
--check-column <column>        Source column to check for incremental change
--incremental <import-type>    Define an incremental import of type 'append' or 'lastmodified'
--last-value <value>           Last imported value in the incremental check column

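Note the exact flag spelling here: the failed invocation at the top of this session used --lastvalue, which the parser rejected; the flag is --last-value. The corrected command, rerun below, combines the three incremental flags like so:

sqoop import --connect "jdbc:mysql://localhost/kp" --username root \
    --table sty --target-dir /user/cloudera/perus432 \
    --incremental append --check-column rno --last-value 105 -m 1
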
Output line formatting arguments:
--enclosed-by <char>               Sets a required field enclosing character
--escaped-by <char>                Sets the escape character
--fields-terminated-by <char>      Sets the field separator character
--lines-terminated-by <char>       Sets the end-of-line character
--mysql-delimiters                 Uses MySQL's default delimiter set: fields: , lines: \n escaped-by: \ optionally-enclosed-by: '
--optionally-enclosed-by <char>    Sets a field enclosing character

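As an example of these formatting flags: the imports below keep Sqoop's default comma-separated text, but tab-separated output could be requested like this (a sketch; the target directory /user/cloudera/sty_tsv is made up for illustration):

sqoop import --connect "jdbc:mysql://localhost/kp" --username root \
    --table sty --target-dir /user/cloudera/sty_tsv \
    --fields-terminated-by '\t' --lines-terminated-by '\n' -m 1
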
Input parsing arguments:
--input-enclosed-by <char>               Sets a required field encloser
--input-escaped-by <char>                Sets the input escape character
--input-fields-terminated-by <char>      Sets the input field separator
--input-lines-terminated-by <char>       Sets the input end-of-line char
--input-optionally-enclosed-by <char>    Sets a field enclosing character

Hive arguments:
--create-hive-table                          Fail if the target hive table exists
--hive-database <database-name>              Sets the database name to use when importing to hive
--hive-delims-replacement <arg>              Replace Hive record \0x01 and row delimiters (\n\r) from imported string fields with user-defined string
--hive-drop-import-delims                    Drop Hive record \0x01 and row delimiters (\n\r) from imported string fields
--hive-home <dir>                            Override $HIVE_HOME
--hive-import                                Import tables into Hive (Uses Hive's default delimiters if none are set.)
--hive-overwrite                             Overwrite existing data in the Hive table
--hive-partition-key <partition-key>         Sets the partition key to use when importing to hive
--hive-partition-value <partition-value>     Sets the partition value to use when importing to hive
--hive-table <table-name>                    Sets the table name to use when importing to hive
--map-column-hive <arg>                      Override mapping for specific column to hive types.

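The same source table could also be landed directly in Hive rather than a bare HDFS directory. A sketch using the flags above, assuming Hive is configured on the cluster (the Hive table name sty simply mirrors the MySQL table and is an assumption):

sqoop import --connect "jdbc:mysql://localhost/kp" --username root \
    --table sty --hive-import --hive-table sty -m 1
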
HBase arguments:
--column-family <family>    Sets the target column family for the import
--hbase-create-table        If specified, create missing HBase tables
--hbase-row-key <col>       Specifies which input column to use as the row key
--hbase-table <table>       Import to <table> in HBase

HCatalog arguments:
--hcatalog-database <arg>                    HCatalog database name
--hcatalog-home <hdir>                       Override $HCAT_HOME
--hcatalog-table <arg>                       HCatalog table name
--hive-home <dir>                            Override $HIVE_HOME
--hive-partition-key <partition-key>         Sets the partition key to use when importing to hive
--hive-partition-value <partition-value>     Sets the partition value to use when importing to hive
--map-column-hive <arg>                      Override mapping for specific column to hive types.

HCatalog import specific options:
--create-hcatalog-table            Create HCatalog before import
--hcatalog-storage-stanza <arg>    HCatalog storage stanza for table creation

Code generation arguments:
--bindir <dir>                         Output directory for compiled objects
--class-name <name>                    Sets the generated class name. This overrides --package-name. When combined with --jar-file, sets the input class.
--input-null-non-string <null-str>     Input null non-string representation
--input-null-string <null-str>         Input null string representation
--jar-file <file>                      Disable code generation; use specified jar
--map-column-java <arg>                Override mapping for specific columns to java types
--null-non-string <null-str>           Null non-string representation
--null-string <null-str>               Null string representation
--outdir <dir>                         Output directory for generated code
--package-name <name>                  Put auto-generated classes in this package

Generic Hadoop command-line arguments:
(must precede any tool-specific arguments)
Generic options supported are
-conf <configuration file>                      specify an application configuration file
-D <property=value>                             use value for given property
-fs <local|namenode:port>                       specify a namenode
-jt <local|jobtracker:port>                     specify a job tracker
-files <comma separated list of files>          specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>         specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]


At minimum, you must specify --connect and --table
Arguments to mysqldump and other subprograms may be supplied
after a '--' on the command line.
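To illustrate that last note about '--': with --direct, Sqoop shells out to mysqldump, and arguments after a bare -- are handed to that subprogram instead of being parsed by Sqoop. A sketch (the character-set option is illustrative, not something run in this session):

sqoop import --connect "jdbc:mysql://localhost/kp" --username root \
    --table sty --direct -m 1 -- --default-character-set=latin1
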
[cloudera@localhost ~]$ sqoop import --connect "jdbc:mysql://localhost/kp" --username root --table sty --target-dir /user/cloudera/perus432 --incremental append --check-column rno --last-value 105 -m 1
21/10/18 03:24:38 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
21/10/18 03:24:38 INFO tool.CodeGenTool: Beginning code generation
21/10/18 03:24:38 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sty` AS t LIMIT 1
21/10/18 03:24:38 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sty` AS t LIMIT 1
21/10/18 03:24:38 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-0.20-mapreduce
21/10/18 03:24:38 INFO orm.CompilationManager: Found hadoop core jar at: /usr/lib/hadoop-0.20-mapreduce/hadoop-core.jar
Note: /tmp/sqoop-cloudera/compile/bc530dde82a1aca35c52bb74329c6299/sty.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
21/10/18 03:24:40 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/bc530dde82a1aca35c52bb74329c6299/sty.jar
21/10/18 03:24:42 INFO tool.ImportTool: Maximal id query for free form incremental import: SELECT MAX(`rno`) FROM sty
21/10/18 03:24:42 INFO tool.ImportTool: Incremental import based on column `rno`
21/10/18 03:24:42 INFO tool.ImportTool: No new rows detected since last import.
[cloudera@localhost ~]$ sqoop eval -connect "jdbc:mysql://localhost/kp" -username root -query "insert into sty values(107)"
21/10/18 03:25:52 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
21/10/18 03:25:52 INFO tool.EvalSqlTool: 1 row(s) updated.
[cloudera@localhost ~]$ sqoop import --connect "jdbc:mysql://localhost/kp" --username root --table sty --target-dir /user/cloudera/perus432 --incremental append --check-column rno --last-value 105 -m 1
21/10/18 03:25:57 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
21/10/18 03:25:57 INFO tool.CodeGenTool: Beginning code generation
21/10/18 03:25:57 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sty` AS t LIMIT 1
21/10/18 03:25:57 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sty` AS t LIMIT 1
21/10/18 03:25:57 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-0.20-mapreduce
21/10/18 03:25:57 INFO orm.CompilationManager: Found hadoop core jar at: /usr/lib/hadoop-0.20-mapreduce/hadoop-core.jar
Note: /tmp/sqoop-cloudera/compile/bd2a85bb59dad2946414a6b49cd4efc5/sty.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
21/10/18 03:25:58 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/bd2a85bb59dad2946414a6b49cd4efc5/sty.jar
21/10/18 03:25:58 INFO tool.ImportTool: Maximal id query for free form incremental import: SELECT MAX(`rno`) FROM sty
21/10/18 03:25:58 INFO tool.ImportTool: Incremental import based on column `rno`
21/10/18 03:25:58 INFO tool.ImportTool: Lower bound value: 105
21/10/18 03:25:58 INFO tool.ImportTool: Upper bound value: 107
21/10/18 03:25:58 WARN manager.MySQLManager: It looks like you are importing from mysql.
21/10/18 03:25:58 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
21/10/18 03:25:58 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
21/10/18 03:25:58 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
21/10/18 03:25:58 INFO mapreduce.ImportJobBase: Beginning import of sty
21/10/18 03:25:59 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
21/10/18 03:26:02 INFO mapred.JobClient: Running job: job_202110180153_0013
21/10/18 03:26:03 INFO mapred.JobClient: map 0% reduce 0%
21/10/18 03:26:13 INFO mapred.JobClient: map 100% reduce 0%
21/10/18 03:26:14 INFO mapred.JobClient: Job complete: job_202110180153_0013
21/10/18 03:26:14 INFO mapred.JobClient: Counters: 23
21/10/18 03:26:14 INFO mapred.JobClient: File System Counters
21/10/18 03:26:14 INFO mapred.JobClient: FILE: Number of bytes read=0
21/10/18 03:26:14 INFO mapred.JobClient: FILE: Number of bytes written=175604
21/10/18 03:26:14 INFO mapred.JobClient: FILE: Number of read operations=0
21/10/18 03:26:14 INFO mapred.JobClient: FILE: Number of large read operations=0
21/10/18 03:26:14 INFO mapred.JobClient: FILE: Number of write operations=0
21/10/18 03:26:14 INFO mapred.JobClient: HDFS: Number of bytes read=87
21/10/18 03:26:14 INFO mapred.JobClient: HDFS: Number of bytes written=4
21/10/18 03:26:14 INFO mapred.JobClient: HDFS: Number of read operations=1
21/10/18 03:26:14 INFO mapred.JobClient: HDFS: Number of large read operations=0
21/10/18 03:26:14 INFO mapred.JobClient: HDFS: Number of write operations=1
21/10/18 03:26:14 INFO mapred.JobClient: Job Counters
21/10/18 03:26:14 INFO mapred.JobClient: Launched map tasks=1
21/10/18 03:26:14 INFO mapred.JobClient: Total time spent by all maps in occupied slots (ms)=7526
21/10/18 03:26:14 INFO mapred.JobClient: Total time spent by all reduces in occupied slots (ms)=0
21/10/18 03:26:14 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
21/10/18 03:26:14 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
21/10/18 03:26:14 INFO mapred.JobClient: Map-Reduce Framework
21/10/18 03:26:14 INFO mapred.JobClient: Map input records=1
21/10/18 03:26:14 INFO mapred.JobClient: Map output records=1
21/10/18 03:26:14 INFO mapred.JobClient: Input split bytes=87
21/10/18 03:26:14 INFO mapred.JobClient: Spilled Records=0
21/10/18 03:26:14 INFO mapred.JobClient: CPU time spent (ms)=40
21/10/18 03:26:14 INFO mapred.JobClient: Physical memory (bytes) snapshot=102674432
21/10/18 03:26:14 INFO mapred.JobClient: Virtual memory (bytes) snapshot=656605184
21/10/18 03:26:14 INFO mapred.JobClient: Total committed heap usage (bytes)=60751872
21/10/18 03:26:14 INFO mapreduce.ImportJobBase: Transferred 4 bytes in 15.1659 seconds (0.2637 bytes/sec)
21/10/18 03:26:14 INFO mapreduce.ImportJobBase: Retrieved 1 records.
21/10/18 03:26:14 INFO util.AppendUtils: Creating missing output directory - perus432
21/10/18 03:26:14 INFO tool.ImportTool: Incremental import complete! To run another incremental import of all data following this import, supply the following arguments:
21/10/18 03:26:14 INFO tool.ImportTool: --incremental append
21/10/18 03:26:14 INFO tool.ImportTool: --check-column rno
21/10/18 03:26:14 INFO tool.ImportTool: --last-value 107
21/10/18 03:26:14 INFO tool.ImportTool: (Consider saving this with 'sqoop job --create')
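The suggestion in that last log line is worth taking: a saved job records the incremental parameters in Sqoop's metastore and advances --last-value automatically after each run, which would have avoided the stale --last-value 107 rerun later in this session. A sketch (the job name sty_append is made up):

sqoop job --create sty_append -- import --connect "jdbc:mysql://localhost/kp" \
    --username root --table sty --target-dir /user/cloudera/perus432 \
    --incremental append --check-column rno --last-value 105 -m 1
sqoop job --exec sty_append
sqoop job --list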
[cloudera@localhost ~]$ sqoop import --connect "jdbc:mysql://localhost/kp" --username root --table sty --target-dir /user/cloudera/perus432 --incremental append --check-column rno --last-value 107 -m 1
21/10/18 03:28:47 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
21/10/18 03:28:47 INFO tool.CodeGenTool: Beginning code generation
21/10/18 03:28:47 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sty` AS t LIMIT 1
21/10/18 03:28:47 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sty` AS t LIMIT 1
21/10/18 03:28:47 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-0.20-mapreduce
21/10/18 03:28:47 INFO orm.CompilationManager: Found hadoop core jar at: /usr/lib/hadoop-0.20-mapreduce/hadoop-core.jar
Note: /tmp/sqoop-cloudera/compile/69c522efe01549fb21b889101a795694/sty.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
21/10/18 03:28:49 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/69c522efe01549fb21b889101a795694/sty.jar
21/10/18 03:28:49 INFO tool.ImportTool: Maximal id query for free form incremental import: SELECT MAX(`rno`) FROM sty
21/10/18 03:28:49 INFO tool.ImportTool: Incremental import based on column `rno`
21/10/18 03:28:49 INFO tool.ImportTool: No new rows detected since last import.
[cloudera@localhost ~]$ sqoop eval -connect "jdbc:mysql://localhost/kp" -username root -query "insert into sty values(108)"
21/10/18 03:29:34 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
21/10/18 03:29:35 INFO tool.EvalSqlTool: 1 row(s) updated.
[cloudera@localhost ~]$ sqoop import --connect "jdbc:mysql://localhost/kp" --username root --table sty --target-dir /user/cloudera/perus432 --incremental append --check-column rno --last-value 107 -m 1
21/10/18 03:29:38 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
21/10/18 03:29:38 INFO tool.CodeGenTool: Beginning code generation
21/10/18 03:29:39 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sty` AS t LIMIT 1
21/10/18 03:29:39 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sty` AS t LIMIT 1
21/10/18 03:29:39 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-0.20-mapreduce
21/10/18 03:29:39 INFO orm.CompilationManager: Found hadoop core jar at: /usr/lib/hadoop-0.20-mapreduce/hadoop-core.jar
Note: /tmp/sqoop-cloudera/compile/dcc01946ba20b8b06410dd8ef51c770d/sty.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
21/10/18 03:29:40 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/dcc01946ba20b8b06410dd8ef51c770d/sty.jar
21/10/18 03:29:40 INFO tool.ImportTool: Maximal id query for free form incremental import: SELECT MAX(`rno`) FROM sty
21/10/18 03:29:40 INFO tool.ImportTool: Incremental import based on column `rno`
21/10/18 03:29:40 INFO tool.ImportTool: Lower bound value: 107
21/10/18 03:29:40 INFO tool.ImportTool: Upper bound value: 108
21/10/18 03:29:40 WARN manager.MySQLManager: It looks like you are importing from mysql.
21/10/18 03:29:40 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
21/10/18 03:29:40 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
21/10/18 03:29:40 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
21/10/18 03:29:40 INFO mapreduce.ImportJobBase: Beginning import of sty
21/10/18 03:29:41 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
21/10/18 03:29:42 INFO mapred.JobClient: Running job: job_202110180153_0014
21/10/18 03:29:43 INFO mapred.JobClient: map 0% reduce 0%
21/10/18 03:29:54 INFO mapred.JobClient: map 100% reduce 0%
21/10/18 03:29:55 INFO mapred.JobClient: Job complete: job_202110180153_0014
21/10/18 03:29:55 INFO mapred.JobClient: Counters: 23
21/10/18 03:29:55 INFO mapred.JobClient: File System Counters
21/10/18 03:29:55 INFO mapred.JobClient: FILE: Number of bytes read=0
21/10/18 03:29:55 INFO mapred.JobClient: FILE: Number of bytes written=175602
21/10/18 03:29:55 INFO mapred.JobClient: FILE: Number of read operations=0
21/10/18 03:29:55 INFO mapred.JobClient: FILE: Number of large read operations=0
21/10/18 03:29:55 INFO mapred.JobClient: FILE: Number of write operations=0
21/10/18 03:29:55 INFO mapred.JobClient: HDFS: Number of bytes read=87
21/10/18 03:29:55 INFO mapred.JobClient: HDFS: Number of bytes written=4
21/10/18 03:29:55 INFO mapred.JobClient: HDFS: Number of read operations=1
21/10/18 03:29:55 INFO mapred.JobClient: HDFS: Number of large read operations=0
21/10/18 03:29:55 INFO mapred.JobClient: HDFS: Number of write operations=1
21/10/18 03:29:55 INFO mapred.JobClient: Job Counters
21/10/18 03:29:55 INFO mapred.JobClient: Launched map tasks=1
21/10/18 03:29:55 INFO mapred.JobClient: Total time spent by all maps in occupied slots (ms)=8746
21/10/18 03:29:55 INFO mapred.JobClient: Total time spent by all reduces in occupied slots (ms)=0
21/10/18 03:29:55 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
21/10/18 03:29:55 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
21/10/18 03:29:55 INFO mapred.JobClient: Map-Reduce Framework
21/10/18 03:29:55 INFO mapred.JobClient: Map input records=1
21/10/18 03:29:55 INFO mapred.JobClient: Map output records=1
21/10/18 03:29:55 INFO mapred.JobClient: Input split bytes=87
21/10/18 03:29:55 INFO mapred.JobClient: Spilled Records=0
21/10/18 03:29:55 INFO mapred.JobClient: CPU time spent (ms)=540
21/10/18 03:29:55 INFO mapred.JobClient: Physical memory (bytes) snapshot=98377728
21/10/18 03:29:55 INFO mapred.JobClient: Virtual memory (bytes) snapshot=656605184
21/10/18 03:29:55 INFO mapred.JobClient: Total committed heap usage (bytes)=60751872
21/10/18 03:29:55 INFO mapreduce.ImportJobBase: Transferred 4 bytes in 14.4759 seconds (0.2763 bytes/sec)
21/10/18 03:29:55 INFO mapreduce.ImportJobBase: Retrieved 1 records.
21/10/18 03:29:55 INFO util.AppendUtils: Appending to directory perus432
21/10/18 03:29:55 INFO util.AppendUtils: Using found partition 1
21/10/18 03:29:55 INFO tool.ImportTool: Incremental import complete! To run another incremental import of all data following this import, supply the following arguments:
21/10/18 03:29:55 INFO tool.ImportTool: --incremental append
21/10/18 03:29:55 INFO tool.ImportTool: --check-column rno
21/10/18 03:29:55 INFO tool.ImportTool: --last-value 108
21/10/18 03:29:55 INFO tool.ImportTool: (Consider saving this with 'sqoop job --create')
[cloudera@localhost ~]$ sqoop eval -connect "jdbc:mysql://localhost/kp" -username root -query "select * from sty"
21/10/18 03:32:09 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
---------------
| rno |
---------------
| 100 |
| 101 |
| 102 |
| 103 |
| 104 |
| 105 |
| 107 |
| 108 |
---------------
[cloudera@localhost ~]$ sqoop eval -connect "jdbc:mysql://localhost/kp" -username root -query "insert into sty values(109)"
21/10/18 03:32:19 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
21/10/18 03:32:19 INFO tool.EvalSqlTool: 1 row(s) updated.
[cloudera@localhost ~]$ sqoop import --connect "jdbc:mysql://localhost/kp" --username root --table sty --target-dir /user/cloudera/perus432 --incremental append --check-column rno --last-value 107 -m 1
21/10/18 03:32:25 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
21/10/18 03:32:25 INFO tool.CodeGenTool: Beginning code generation
21/10/18 03:32:26 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sty` AS t LIMIT 1
21/10/18 03:32:26 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sty` AS t LIMIT 1
21/10/18 03:32:26 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-0.20-mapreduce
21/10/18 03:32:26 INFO orm.CompilationManager: Found hadoop core jar at: /usr/lib/hadoop-0.20-mapreduce/hadoop-core.jar
Note: /tmp/sqoop-cloudera/compile/812090ff49459670f8f1112b64bc5bcd/sty.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
21/10/18 03:32:27 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/812090ff49459670f8f1112b64bc5bcd/sty.jar
21/10/18 03:32:27 INFO tool.ImportTool: Maximal id query for free form incremental import: SELECT MAX(`rno`) FROM sty
21/10/18 03:32:27 INFO tool.ImportTool: Incremental import based on column `rno`
21/10/18 03:32:27 INFO tool.ImportTool: Lower bound value: 107
21/10/18 03:32:27 INFO tool.ImportTool: Upper bound value: 109
21/10/18 03:32:27 WARN manager.MySQLManager: It looks like you are importing from mysql.
21/10/18 03:32:27 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
21/10/18 03:32:27 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
21/10/18 03:32:27 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
21/10/18 03:32:27 INFO mapreduce.ImportJobBase: Beginning import of sty
21/10/18 03:32:28 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
21/10/18 03:32:29 INFO mapred.JobClient: Running job: job_202110180153_0015
21/10/18 03:32:30 INFO mapred.JobClient: map 0% reduce 0%
21/10/18 03:32:36 INFO mapred.JobClient: map 100% reduce 0%
21/10/18 03:32:38 INFO mapred.JobClient: Job complete: job_202110180153_0015
21/10/18 03:32:38 INFO mapred.JobClient: Counters: 23
21/10/18 03:32:38 INFO mapred.JobClient: File System Counters
21/10/18 03:32:38 INFO mapred.JobClient: FILE: Number of bytes read=0
21/10/18 03:32:38 INFO mapred.JobClient: FILE: Number of bytes written=175598
21/10/18 03:32:38 INFO mapred.JobClient: FILE: Number of read operations=0
21/10/18 03:32:38 INFO mapred.JobClient: FILE: Number of large read operations=0
21/10/18 03:32:38 INFO mapred.JobClient: FILE: Number of write operations=0
21/10/18 03:32:38 INFO mapred.JobClient: HDFS: Number of bytes read=87
21/10/18 03:32:38 INFO mapred.JobClient: HDFS: Number of bytes written=8
21/10/18 03:32:38 INFO mapred.JobClient: HDFS: Number of read operations=1
21/10/18 03:32:38 INFO mapred.JobClient: HDFS: Number of large read operations=0
21/10/18 03:32:38 INFO mapred.JobClient: HDFS: Number of write operations=1
21/10/18 03:32:38 INFO mapred.JobClient: Job Counters
21/10/18 03:32:38 INFO mapred.JobClient: Launched map tasks=1
21/10/18 03:32:38 INFO mapred.JobClient: Total time spent by all maps in occupied slots (ms)=6547
21/10/18 03:32:38 INFO mapred.JobClient: Total time spent by all reduces in occupied slots (ms)=0
21/10/18 03:32:38 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
21/10/18 03:32:38 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
21/10/18 03:32:38 INFO mapred.JobClient: Map-Reduce Framework
21/10/18 03:32:38 INFO mapred.JobClient: Map input records=2
21/10/18 03:32:38 INFO mapred.JobClient: Map output records=2
21/10/18 03:32:38 INFO mapred.JobClient: Input split bytes=87
21/10/18 03:32:38 INFO mapred.JobClient: Spilled Records=0
21/10/18 03:32:38 INFO mapred.JobClient: CPU time spent (ms)=40
21/10/18 03:32:38 INFO mapred.JobClient: Physical memory (bytes) snapshot=98889728
21/10/18 03:32:38 INFO mapred.JobClient: Virtual memory (bytes) snapshot=656605184
21/10/18 03:32:38 INFO mapred.JobClient: Total committed heap usage (bytes)=60751872
21/10/18 03:32:38 INFO mapreduce.ImportJobBase: Transferred 8 bytes in 10.5294 seconds (0.7598 bytes/sec)
21/10/18 03:32:38 INFO mapreduce.ImportJobBase: Retrieved 2 records.
21/10/18 03:32:38 INFO util.AppendUtils: Appending to directory perus432
21/10/18 03:32:38 INFO util.AppendUtils: Using found partition 2
21/10/18 03:32:38 INFO tool.ImportTool: Incremental import complete! To run another incremental import of all data following this import, supply the following arguments:
21/10/18 03:32:38 INFO tool.ImportTool: --incremental append
21/10/18 03:32:38 INFO tool.ImportTool: --check-column rno
21/10/18 03:32:38 INFO tool.ImportTool: --last-value 109
21/10/18 03:32:38 INFO tool.ImportTool: (Consider saving this with 'sqoop job --create')
[cloudera@localhost ~]$
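
One caveat about the final run: it reused --last-value 107 even though row 108 had already been imported by job _0014, so both 108 and 109 matched the window 107 < rno <= 109, and 108 now sits in the target directory twice. That can be confirmed with a plain HDFS read (part-file names vary by run):

hadoop fs -cat /user/cloudera/perus432/part-m-*

Given the byte counts logged above, the appended partitions should hold 107, 108, 108, and 109. This duplicate is exactly what the saved-job approach sketched earlier prevents.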