[cloudera@localhost ~]$ sqoop import --connect "jdbc:mysql://localhost/kp" --username root --table sty --target-dir /user/cloudera/perus432 --incremental append --check-column rno --lastvalue 105 -m 1
21/10/18 03:24:31 ERROR tool.BaseSqoopTool: Error parsing arguments for import:
21/10/18 03:24:31 ERROR tool.BaseSqoopTool: Unrecognized argument: --lastvalue
21/10/18 03:24:31 ERROR tool.BaseSqoopTool: Unrecognized argument: 105
21/10/18 03:24:31 ERROR tool.BaseSqoopTool: Unrecognized argument: -m
21/10/18 03:24:31 ERROR tool.BaseSqoopTool: Unrecognized argument: 1
Try --help for usage instructions.
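
The parse failure above comes from the misspelled flag --lastvalue: the option Sqoop defines is --last-value (listed under "Incremental import arguments" in the usage text below). Once the parser hits the unknown flag, every remaining token (105, -m, 1) is also reported as unrecognized. The fix is the corrected option pair, as in the invocation that succeeds later in this session:

    --check-column rno --last-value 105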
usage: sqoop import [GENERIC-ARGS] [TOOL-ARGS]

Common arguments:
   --connect <jdbc-uri>                        Specify JDBC connect string
   --connection-manager <class-name>           Specify connection manager class name
   --connection-param-file <properties-file>   Specify connection parameters file
   --driver <class-name>                       Manually specify JDBC driver class to use
   --hadoop-home <hdir>                        Override $HADOOP_MAPRED_HOME_ARG
   --hadoop-mapred-home <dir>                  Override $HADOOP_MAPRED_HOME_ARG
   --help                                      Print usage instructions
-P                                             Read password from console
   --password <password>                       Set authentication password
   --password-file <password-file>             Set authentication password file path
   --username <username>                       Set authentication username
   --verbose                                   Print more information while working
Import control arguments:
   --append                          Imports data in append mode
   --as-avrodatafile                 Imports data to Avro data files
   --as-sequencefile                 Imports data to SequenceFiles
   --as-textfile                     Imports data as plain text (default)
   --boundary-query <statement>      Set boundary query for retrieving max and min value of the primary key
   --columns <col,col,col...>        Columns to import from table
   --compression-codec <codec>       Compression codec to use for import
   --delete-target-dir               Imports data in delete mode
   --direct                          Use direct import fast path
   --direct-split-size <n>           Split the input stream every 'n' bytes when importing in direct mode
-e,--query <statement>               Import results of SQL 'statement'
   --fetch-size <n>                  Set number 'n' of rows to fetch from the database when more rows are needed
   --inline-lob-limit <n>            Set the maximum size for an inline LOB
-m,--num-mappers <n>                 Use 'n' map tasks to import in parallel
   --mapreduce-job-name <name>       Set name for generated mapreduce job
   --split-by <column-name>          Column of the table used to split work units
   --table <table-name>              Table to read
   --target-dir <dir>                HDFS plain table destination
   --validate                        Validate the copy using the configured validator
   --validation-failurehandler <validation-failurehandler>   Fully qualified class name for ValidationFailureHandler
   --validation-threshold <validation-threshold>             Fully qualified class name for ValidationThreshold
   --validator <validator>           Fully qualified class name for the Validator
   --warehouse-dir <dir>             HDFS parent for table destination
   --where <where clause>            WHERE clause to use during import
-z,--compress                        Enable compression
Incremental import arguments:
   --check-column <column>         Source column to check for incremental change
   --incremental <import-type>     Define an incremental import of type 'append' or 'lastmodified'
   --last-value <value>            Last imported value in the incremental check column

Output line formatting arguments:
   --enclosed-by <char>               Sets a required field enclosing character
   --escaped-by <char>                Sets the escape character
   --fields-terminated-by <char>      Sets the field separator character
   --lines-terminated-by <char>       Sets the end-of-line character
   --mysql-delimiters                 Uses MySQL's default delimiter set: fields: , lines: \n escaped-by: \ optionally-enclosed-by: '
   --optionally-enclosed-by <char>    Sets a field enclosing character
Input parsing arguments:
   --input-enclosed-by <char>                Sets a required field encloser
   --input-escaped-by <char>                 Sets the input escape character
   --input-fields-terminated-by <char>       Sets the input field separator
   --input-lines-terminated-by <char>        Sets the input end-of-line char
   --input-optionally-enclosed-by <char>     Sets a field enclosing character
Hive arguments:
   --create-hive-table                         Fail if the target hive table exists
   --hive-database <database-name>             Sets the database name to use when importing to hive
   --hive-delims-replacement <arg>             Replace Hive record \0x01 and row delimiters (\n\r) from imported string fields with user-defined string
   --hive-drop-import-delims                   Drop Hive record \0x01 and row delimiters (\n\r) from imported string fields
   --hive-home <dir>                           Override $HIVE_HOME
   --hive-import                               Import tables into Hive (Uses Hive's default delimiters if none are set.)
   --hive-overwrite                            Overwrite existing data in the Hive table
   --hive-partition-key <partition-key>        Sets the partition key to use when importing to hive
   --hive-partition-value <partition-value>    Sets the partition value to use when importing to hive
   --hive-table <table-name>                   Sets the table name to use when importing to hive
   --map-column-hive <arg>                     Override mapping for specific column to hive types.
HBase arguments:
   --column-family <family>    Sets the target column family for the import
   --hbase-create-table        If specified, create missing HBase tables
   --hbase-row-key <col>       Specifies which input column to use as the row key
   --hbase-table <table>       Import to <table> in HBase

HCatalog arguments:
   --hcatalog-database <arg>                   HCatalog database name
   --hcatalog-home <hdir>                      Override $HCAT_HOME
   --hcatalog-table <arg>                      HCatalog table name
   --hive-home <dir>                           Override $HIVE_HOME
   --hive-partition-key <partition-key>        Sets the partition key to use when importing to hive
   --hive-partition-value <partition-value>    Sets the partition value to use when importing to hive
   --map-column-hive <arg>                     Override mapping for specific column to hive types.

HCatalog import specific options:
   --create-hcatalog-table            Create HCatalog before import
   --hcatalog-storage-stanza <arg>    HCatalog storage stanza for table creation
Code generation arguments:
   --bindir <dir>                        Output directory for compiled objects
   --class-name <name>                   Sets the generated class name. This overrides --package-name. When combined with --jar-file, sets the input class.
   --input-null-non-string <null-str>    Input null non-string representation
   --input-null-string <null-str>        Input null string representation
   --jar-file <file>                     Disable code generation; use specified jar
   --map-column-java <arg>               Override mapping for specific columns to java types
   --null-non-string <null-str>          Null non-string representation
   --null-string <null-str>              Null string representation
   --outdir <dir>                        Output directory for generated code
   --package-name <name>                 Put auto-generated classes in this package
Generic Hadoop command-line arguments:
(must precede any tool-specific arguments)
Generic options supported are
-conf <configuration file>                      specify an application configuration file
-D <property=value>                             use value for given property
-fs <local|namenode:port>                       specify a namenode
-jt <local|jobtracker:port>                     specify a job tracker
-files <comma separated list of files>          specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>         specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]

At minimum, you must specify --connect and --table
Arguments to mysqldump and other subprograms may be supplied after a '--' on the command line.
[cloudera@localhost ~]$ sqoop import --connect "jdbc:mysql://localhost/kp" --username root --table sty --target-dir /user/cloudera/perus432 --incremental append --check-column rno --last-value 105 -m 1
21/10/18 03:24:38 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
21/10/18 03:24:38 INFO tool.CodeGenTool: Beginning code generation
21/10/18 03:24:38 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sty` AS t LIMIT 1
21/10/18 03:24:38 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sty` AS t LIMIT 1
21/10/18 03:24:38 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-0.20-mapreduce
21/10/18 03:24:38 INFO orm.CompilationManager: Found hadoop core jar at: /usr/lib/hadoop-0.20-mapreduce/hadoop-core.jar
Note: /tmp/sqoop-cloudera/compile/bc530dde82a1aca35c52bb74329c6299/sty.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
21/10/18 03:24:40 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/bc530dde82a1aca35c52bb74329c6299/sty.jar
21/10/18 03:24:42 INFO tool.ImportTool: Maximal id query for free form incremental import: SELECT MAX(`rno`) FROM sty
21/10/18 03:24:42 INFO tool.ImportTool: Incremental import based on column `rno`
21/10/18 03:24:42 INFO tool.ImportTool: No new rows detected since last import.
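
Nothing was imported here because --last-value 105 equals the current maximum of the check column: in append mode Sqoop first runs SELECT MAX(`rno`) and only fetches rows whose rno is greater than the supplied last value, so the range (105, 105] is empty. A quick way to confirm the current maximum before an import, using the same sqoop eval tool this session relies on (a sketch; connection details as above):

sqoop eval --connect "jdbc:mysql://localhost/kp" --username root --query "SELECT MAX(rno) FROM sty"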
[cloudera@localhost ~]$ sqoop eval -connect "jdbc:mysql://localhost/kp" -username root -query "insert into sty values(107)"
21/10/18 03:25:52 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
21/10/18 03:25:52 INFO tool.EvalSqlTool: 1 row(s) updated.
[cloudera@localhost ~]$ sqoop import --connect "jdbc:mysql://localhost/kp" --username root --table sty --target-dir /user/cloudera/perus432 --incremental append --check-column rno --last-value 105 -m 1
21/10/18 03:25:57 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
21/10/18 03:25:57 INFO tool.CodeGenTool: Beginning code generation
21/10/18 03:25:57 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sty` AS t LIMIT 1
21/10/18 03:25:57 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sty` AS t LIMIT 1
21/10/18 03:25:57 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-0.20-mapreduce
21/10/18 03:25:57 INFO orm.CompilationManager: Found hadoop core jar at: /usr/lib/hadoop-0.20-mapreduce/hadoop-core.jar
Note: /tmp/sqoop-cloudera/compile/bd2a85bb59dad2946414a6b49cd4efc5/sty.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
21/10/18 03:25:58 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/bd2a85bb59dad2946414a6b49cd4efc5/sty.jar
21/10/18 03:25:58 INFO tool.ImportTool: Maximal id query for free form incremental import: SELECT MAX(`rno`) FROM sty
21/10/18 03:25:58 INFO tool.ImportTool: Incremental import based on column `rno`
21/10/18 03:25:58 INFO tool.ImportTool: Lower bound value: 105
21/10/18 03:25:58 INFO tool.ImportTool: Upper bound value: 107
21/10/18 03:25:58 WARN manager.MySQLManager: It looks like you are importing from mysql.
21/10/18 03:25:58 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
21/10/18 03:25:58 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
21/10/18 03:25:58 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
21/10/18 03:25:58 INFO mapreduce.ImportJobBase: Beginning import of sty
21/10/18 03:25:59 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
21/10/18 03:26:02 INFO mapred.JobClient: Running job: job_202110180153_0013
21/10/18 03:26:03 INFO mapred.JobClient: map 0% reduce 0%
21/10/18 03:26:13 INFO mapred.JobClient: map 100% reduce 0%
21/10/18 03:26:14 INFO mapred.JobClient: Job complete: job_202110180153_0013
21/10/18 03:26:14 INFO mapred.JobClient: Counters: 23
21/10/18 03:26:14 INFO mapred.JobClient: File System Counters
21/10/18 03:26:14 INFO mapred.JobClient: FILE: Number of bytes read=0
21/10/18 03:26:14 INFO mapred.JobClient: FILE: Number of bytes written=175604
21/10/18 03:26:14 INFO mapred.JobClient: FILE: Number of read operations=0
21/10/18 03:26:14 INFO mapred.JobClient: FILE: Number of large read operations=0
21/10/18 03:26:14 INFO mapred.JobClient: FILE: Number of write operations=0
21/10/18 03:26:14 INFO mapred.JobClient: HDFS: Number of bytes read=87
21/10/18 03:26:14 INFO mapred.JobClient: HDFS: Number of bytes written=4
21/10/18 03:26:14 INFO mapred.JobClient: HDFS: Number of read operations=1
21/10/18 03:26:14 INFO mapred.JobClient: HDFS: Number of large read operations=0
21/10/18 03:26:14 INFO mapred.JobClient: HDFS: Number of write operations=1
21/10/18 03:26:14 INFO mapred.JobClient: Job Counters
21/10/18 03:26:14 INFO mapred.JobClient: Launched map tasks=1
21/10/18 03:26:14 INFO mapred.JobClient: Total time spent by all maps in occupied slots (ms)=7526
21/10/18 03:26:14 INFO mapred.JobClient: Total time spent by all reduces in occupied slots (ms)=0
21/10/18 03:26:14 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
21/10/18 03:26:14 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
21/10/18 03:26:14 INFO mapred.JobClient: Map-Reduce Framework
21/10/18 03:26:14 INFO mapred.JobClient: Map input records=1
21/10/18 03:26:14 INFO mapred.JobClient: Map output records=1
21/10/18 03:26:14 INFO mapred.JobClient: Input split bytes=87
21/10/18 03:26:14 INFO mapred.JobClient: Spilled Records=0
21/10/18 03:26:14 INFO mapred.JobClient: CPU time spent (ms)=40
21/10/18 03:26:14 INFO mapred.JobClient: Physical memory (bytes) snapshot=102674432
21/10/18 03:26:14 INFO mapred.JobClient: Virtual memory (bytes) snapshot=656605184
21/10/18 03:26:14 INFO mapred.JobClient: Total committed heap usage (bytes)=60751872
21/10/18 03:26:14 INFO mapreduce.ImportJobBase: Transferred 4 bytes in 15.1659 seconds (0.2637 bytes/sec)
21/10/18 03:26:14 INFO mapreduce.ImportJobBase: Retrieved 1 records.
21/10/18 03:26:14 INFO util.AppendUtils: Creating missing output directory - perus432
21/10/18 03:26:14 INFO tool.ImportTool: Incremental import complete! To run another incremental import of all data following this import, supply the following arguments:
21/10/18 03:26:14 INFO tool.ImportTool: --incremental append
21/10/18 03:26:14 INFO tool.ImportTool: --check-column rno
21/10/18 03:26:14 INFO tool.ImportTool: --last-value 107
21/10/18 03:26:14 INFO tool.ImportTool: (Consider saving this with 'sqoop job --create')
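
This run shows the append-mode bounds at work: the logged lower bound is the supplied --last-value (105) and the upper bound is the current MAX(`rno`) (107), so only rows with rno greater than 105 and at most 107 are pulled. Since 106 was never inserted, exactly one record (107) qualifies, matching "Retrieved 1 records." A sketch of the equivalent range check via sqoop eval (the exact SQL Sqoop generates may differ):

sqoop eval --connect "jdbc:mysql://localhost/kp" --username root --query "SELECT * FROM sty WHERE rno > 105 AND rno <= 107"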
[cloudera@localhost ~]$ sqoop import --connect "jdbc:mysql://localhost/kp" --username root --table sty --target-dir /user/cloudera/perus432 --incremental append --check-column rno --last-value 107 -m 1
21/10/18 03:28:47 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
21/10/18 03:28:47 INFO tool.CodeGenTool: Beginning code generation
21/10/18 03:28:47 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sty` AS t LIMIT 1
21/10/18 03:28:47 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sty` AS t LIMIT 1
21/10/18 03:28:47 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-0.20-mapreduce
21/10/18 03:28:47 INFO orm.CompilationManager: Found hadoop core jar at: /usr/lib/hadoop-0.20-mapreduce/hadoop-core.jar
Note: /tmp/sqoop-cloudera/compile/69c522efe01549fb21b889101a795694/sty.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
21/10/18 03:28:49 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/69c522efe01549fb21b889101a795694/sty.jar
21/10/18 03:28:49 INFO tool.ImportTool: Maximal id query for free form incremental import: SELECT MAX(`rno`) FROM sty
21/10/18 03:28:49 INFO tool.ImportTool: Incremental import based on column `rno`
21/10/18 03:28:49 INFO tool.ImportTool: No new rows detected since last import.
[cloudera@localhost ~]$ sqoop eval -connect "jdbc:mysql://localhost/kp" -username root -query "insert into sty values(108)"
21/10/18 03:29:34 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
21/10/18 03:29:35 INFO tool.EvalSqlTool: 1 row(s) updated.
[cloudera@localhost ~]$ sqoop import --connect "jdbc:mysql://localhost/kp" --username root --table sty --target-dir /user/cloudera/perus432 --incremental append --check-column rno --last-value 107 -m 1
21/10/18 03:29:38 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
21/10/18 03:29:38 INFO tool.CodeGenTool: Beginning code generation
21/10/18 03:29:39 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sty` AS t LIMIT 1
21/10/18 03:29:39 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sty` AS t LIMIT 1
21/10/18 03:29:39 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-0.20-mapreduce
21/10/18 03:29:39 INFO orm.CompilationManager: Found hadoop core jar at: /usr/lib/hadoop-0.20-mapreduce/hadoop-core.jar
Note: /tmp/sqoop-cloudera/compile/dcc01946ba20b8b06410dd8ef51c770d/sty.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
21/10/18 03:29:40 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/dcc01946ba20b8b06410dd8ef51c770d/sty.jar
21/10/18 03:29:40 INFO tool.ImportTool: Maximal id query for free form incremental import: SELECT MAX(`rno`) FROM sty
21/10/18 03:29:40 INFO tool.ImportTool: Incremental import based on column `rno`
21/10/18 03:29:40 INFO tool.ImportTool: Lower bound value: 107
21/10/18 03:29:40 INFO tool.ImportTool: Upper bound value: 108
21/10/18 03:29:40 WARN manager.MySQLManager: It looks like you are importing from mysql.
21/10/18 03:29:40 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
21/10/18 03:29:40 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
21/10/18 03:29:40 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
21/10/18 03:29:40 INFO mapreduce.ImportJobBase: Beginning import of sty
21/10/18 03:29:41 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
21/10/18 03:29:42 INFO mapred.JobClient: Running job: job_202110180153_0014
21/10/18 03:29:43 INFO mapred.JobClient: map 0% reduce 0%
21/10/18 03:29:54 INFO mapred.JobClient: map 100% reduce 0%
21/10/18 03:29:55 INFO mapred.JobClient: Job complete: job_202110180153_0014
21/10/18 03:29:55 INFO mapred.JobClient: Counters: 23
21/10/18 03:29:55 INFO mapred.JobClient: File System Counters
21/10/18 03:29:55 INFO mapred.JobClient: FILE: Number of bytes read=0
21/10/18 03:29:55 INFO mapred.JobClient: FILE: Number of bytes written=175602
21/10/18 03:29:55 INFO mapred.JobClient: FILE: Number of read operations=0
21/10/18 03:29:55 INFO mapred.JobClient: FILE: Number of large read operations=0
21/10/18 03:29:55 INFO mapred.JobClient: FILE: Number of write operations=0
21/10/18 03:29:55 INFO mapred.JobClient: HDFS: Number of bytes read=87
21/10/18 03:29:55 INFO mapred.JobClient: HDFS: Number of bytes written=4
21/10/18 03:29:55 INFO mapred.JobClient: HDFS: Number of read operations=1
21/10/18 03:29:55 INFO mapred.JobClient: HDFS: Number of large read operations=0
21/10/18 03:29:55 INFO mapred.JobClient: HDFS: Number of write operations=1
21/10/18 03:29:55 INFO mapred.JobClient: Job Counters
21/10/18 03:29:55 INFO mapred.JobClient: Launched map tasks=1
21/10/18 03:29:55 INFO mapred.JobClient: Total time spent by all maps in occupied slots (ms)=8746
21/10/18 03:29:55 INFO mapred.JobClient: Total time spent by all reduces in occupied slots (ms)=0
21/10/18 03:29:55 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
21/10/18 03:29:55 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
21/10/18 03:29:55 INFO mapred.JobClient: Map-Reduce Framework
21/10/18 03:29:55 INFO mapred.JobClient: Map input records=1
21/10/18 03:29:55 INFO mapred.JobClient: Map output records=1
21/10/18 03:29:55 INFO mapred.JobClient: Input split bytes=87
21/10/18 03:29:55 INFO mapred.JobClient: Spilled Records=0
21/10/18 03:29:55 INFO mapred.JobClient: CPU time spent (ms)=540
21/10/18 03:29:55 INFO mapred.JobClient: Physical memory (bytes) snapshot=98377728
21/10/18 03:29:55 INFO mapred.JobClient: Virtual memory (bytes) snapshot=656605184
21/10/18 03:29:55 INFO mapred.JobClient: Total committed heap usage (bytes)=60751872
21/10/18 03:29:55 INFO mapreduce.ImportJobBase: Transferred 4 bytes in 14.4759 seconds (0.2763 bytes/sec)
21/10/18 03:29:55 INFO mapreduce.ImportJobBase: Retrieved 1 records.
21/10/18 03:29:55 INFO util.AppendUtils: Appending to directory perus432
21/10/18 03:29:55 INFO util.AppendUtils: Using found partition 1
21/10/18 03:29:55 INFO tool.ImportTool: Incremental import complete! To run another incremental import of all data following this import, supply the following arguments:
21/10/18 03:29:55 INFO tool.ImportTool: --incremental append
21/10/18 03:29:55 INFO tool.ImportTool: --check-column rno
21/10/18 03:29:55 INFO tool.ImportTool: --last-value 108
21/10/18 03:29:55 INFO tool.ImportTool: (Consider saving this with 'sqoop job --create')
[cloudera@localhost ~]$ sqoop eval -connect "jdbc:mysql://localhost/kp" -username root -query "select * from sty"
21/10/18 03:32:09 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
---------------
| rno |
---------------
| 100 |
| 101 |
| 102 |
| 103 |
| 104 |
| 105 |
| 107 |
| 108 |
---------------
[cloudera@localhost ~]$ sqoop eval -connect "jdbc:mysql://localhost/kp" -username root -query "insert into sty values(109)"
21/10/18 03:32:19 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
21/10/18 03:32:19 INFO tool.EvalSqlTool: 1 row(s) updated.
[cloudera@localhost ~]$ sqoop import --connect "jdbc:mysql://localhost/kp" --username root --table sty --target-dir /user/cloudera/perus432 --incremental append --check-column rno --last-value 107 -m 1
21/10/18 03:32:25 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
21/10/18 03:32:25 INFO tool.CodeGenTool: Beginning code generation
21/10/18 03:32:26 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sty` AS t LIMIT 1
21/10/18 03:32:26 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sty` AS t LIMIT 1
21/10/18 03:32:26 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-0.20-mapreduce
21/10/18 03:32:26 INFO orm.CompilationManager: Found hadoop core jar at: /usr/lib/hadoop-0.20-mapreduce/hadoop-core.jar
Note: /tmp/sqoop-cloudera/compile/812090ff49459670f8f1112b64bc5bcd/sty.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
21/10/18 03:32:27 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/812090ff49459670f8f1112b64bc5bcd/sty.jar
21/10/18 03:32:27 INFO tool.ImportTool: Maximal id query for free form incremental import: SELECT MAX(`rno`) FROM sty
21/10/18 03:32:27 INFO tool.ImportTool: Incremental import based on column `rno`
21/10/18 03:32:27 INFO tool.ImportTool: Lower bound value: 107
21/10/18 03:32:27 INFO tool.ImportTool: Upper bound value: 109
21/10/18 03:32:27 WARN manager.MySQLManager: It looks like you are importing from mysql.
21/10/18 03:32:27 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
21/10/18 03:32:27 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
21/10/18 03:32:27 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
21/10/18 03:32:27 INFO mapreduce.ImportJobBase: Beginning import of sty
21/10/18 03:32:28 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
21/10/18 03:32:29 INFO mapred.JobClient: Running job: job_202110180153_0015
21/10/18 03:32:30 INFO mapred.JobClient: map 0% reduce 0%
21/10/18 03:32:36 INFO mapred.JobClient: map 100% reduce 0%
21/10/18 03:32:38 INFO mapred.JobClient: Job complete: job_202110180153_0015
21/10/18 03:32:38 INFO mapred.JobClient: Counters: 23
21/10/18 03:32:38 INFO mapred.JobClient: File System Counters
21/10/18 03:32:38 INFO mapred.JobClient: FILE: Number of bytes read=0
21/10/18 03:32:38 INFO mapred.JobClient: FILE: Number of bytes written=175598
21/10/18 03:32:38 INFO mapred.JobClient: FILE: Number of read operations=0
21/10/18 03:32:38 INFO mapred.JobClient: FILE: Number of large read operations=0
21/10/18 03:32:38 INFO mapred.JobClient: FILE: Number of write operations=0
21/10/18 03:32:38 INFO mapred.JobClient: HDFS: Number of bytes read=87
21/10/18 03:32:38 INFO mapred.JobClient: HDFS: Number of bytes written=8
21/10/18 03:32:38 INFO mapred.JobClient: HDFS: Number of read operations=1
21/10/18 03:32:38 INFO mapred.JobClient: HDFS: Number of large read operations=0
21/10/18 03:32:38 INFO mapred.JobClient: HDFS: Number of write operations=1
21/10/18 03:32:38 INFO mapred.JobClient: Job Counters
21/10/18 03:32:38 INFO mapred.JobClient: Launched map tasks=1
21/10/18 03:32:38 INFO mapred.JobClient: Total time spent by all maps in occupied slots (ms)=6547
21/10/18 03:32:38 INFO mapred.JobClient: Total time spent by all reduces in occupied slots (ms)=0
21/10/18 03:32:38 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
21/10/18 03:32:38 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
21/10/18 03:32:38 INFO mapred.JobClient: Map-Reduce Framework
21/10/18 03:32:38 INFO mapred.JobClient: Map input records=2
21/10/18 03:32:38 INFO mapred.JobClient: Map output records=2
21/10/18 03:32:38 INFO mapred.JobClient: Input split bytes=87
21/10/18 03:32:38 INFO mapred.JobClient: Spilled Records=0
21/10/18 03:32:38 INFO mapred.JobClient: CPU time spent (ms)=40
21/10/18 03:32:38 INFO mapred.JobClient: Physical memory (bytes) snapshot=98889728
21/10/18 03:32:38 INFO mapred.JobClient: Virtual memory (bytes) snapshot=656605184
21/10/18 03:32:38 INFO mapred.JobClient: Total committed heap usage (bytes)=60751872
21/10/18 03:32:38 INFO mapreduce.ImportJobBase: Transferred 8 bytes in 10.5294 seconds (0.7598 bytes/sec)
21/10/18 03:32:38 INFO mapreduce.ImportJobBase: Retrieved 2 records.
21/10/18 03:32:38 INFO util.AppendUtils: Appending to directory perus432
21/10/18 03:32:38 INFO util.AppendUtils: Using found partition 2
21/10/18 03:32:38 INFO tool.ImportTool: Incremental import complete! To run another incremental import of all data following this import, supply the following arguments:
21/10/18 03:32:38 INFO tool.ImportTool: --incremental append
21/10/18 03:32:38 INFO tool.ImportTool: --check-column rno
21/10/18 03:32:38 INFO tool.ImportTool: --last-value 109
21/10/18 03:32:38 INFO tool.ImportTool: (Consider saving this with 'sqoop job --create')
[cloudera@localhost ~]$
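
Note that this last import reused the stale value --last-value 107 even though 108 had already been copied in the previous run, so 108 was imported a second time (2 records: 108 and 109) and now exists twice under /user/cloudera/perus432. The saved-job approach the log itself suggests avoids tracking the last value by hand, since Sqoop records and updates it after each successful execution. A minimal sketch of that approach (the job name sty_inc is hypothetical):

sqoop job --create sty_inc -- import --connect "jdbc:mysql://localhost/kp" --username root --table sty --target-dir /user/cloudera/perus432 --incremental append --check-column rno --last-value 109 -m 1
sqoop job --exec sty_inc

Each subsequent sqoop job --exec sty_inc then picks up only the rows added since the previous execution.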