This repository was archived by the owner on Apr 5, 2022. It is now read-only.
Adding SQL support for JDBC incremental sink to HDFS #1894
Open
sridharpaladugu wants to merge 5 commits into spring-attic:master
Conversation
Contributor
I did a quick read over this. A couple of concerns:
Author
Added a test case. The SQL incremental load works only when a check column is used. If the check column is not specified, there is no way to identify an incremental load, so it falls back to the legacy code path where we just run the SQL and load the data. In that case every subsequent job invocation produces a file containing duplicate data (this is where I have doubts too). If the check column is present, the incremental load works fine.
Contributor
The test case you added demonstrates the partitioning capabilities, but not the incremental load part. Can you please address that part as well?
Adding SQL support to load data incrementally to HDFS. This can also support queries that join on a column.
For example, if I have the following tables in MySQL:
CREATE TABLE `user` (
  `userid` varchar(25) NOT NULL,
  `firstName` varchar(50) NOT NULL,
  `lastName` varchar(50) NOT NULL,
  `email` varchar(125) NOT NULL,
  PRIMARY KEY (`userid`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE `tweet` (
  `userid` varchar(25) NOT NULL,
  `msgid` int(11) NOT NULL AUTO_INCREMENT,
  `message` varchar(2096) NOT NULL,
  `timestamp` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`msgid`),
  KEY `userid_idx` (`userid`),
  CONSTRAINT `useridFK` FOREIGN KEY (`userid`) REFERENCES `user` (`userid`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=4705 DEFAULT CHARSET=latin1;
I use the job definition below to load the data into HDFS/HAWQ:
job create loadTweetsJobTest --definition "jdbchdfs --driverClassName='com.mysql.jdbc.Driver' --url='jdbc:mysql://localhost/tweets?autoReconnect=true&useSSL=false' --username='root' --password='omsai' --sql='select msgid, firstname, lastname, message, timestamp from tweets.tweet join tweets.user on tweets.tweet.userid = tweets.user.userid' --checkColumn='msgid' --restartable=true"
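With --checkColumn='msgid', the intent is that each subsequent run loads only rows past the highest msgid already written. A rough illustration of the effective query on a later run (illustrative SQL only, not the module's actual generated statement; 4704 is a hypothetical stand-in for the maximum persisted by the previous run):

select msgid, firstname, lastname, message, timestamp
from tweets.tweet
join tweets.user on tweets.tweet.userid = tweets.user.userid
where msgid > 4704;

On the first run there is no recorded maximum, so the full result set is loaded.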
HAWQ table:
CREATE EXTERNAL TABLE tweets.extn_tweet
(
msgid integer,
firstname text,
lastname text,
message text,
ts timestamp without time zone
)
LOCATION (
'pxf://172.16.65.129:50070/xd/loadTweetsJob/*.csv?profile=HdfsTextSimple'
)
FORMAT 'text' (delimiter ',' null '\N' escape '')
ENCODING 'UTF8';
ALTER TABLE tweets.extn_tweet
OWNER TO gpadmin;
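Once the external table is in place, the CSV files the job writes to HDFS can be queried straight from HAWQ via PXF, for example (using tweets.extn_tweet as defined above):

SELECT msgid, firstname, lastname, message, ts
FROM tweets.extn_tweet
ORDER BY msgid
LIMIT 10;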