However, the trade-off for this speed is that BCP bypasses much of the usual error checking and handling when importing data. As a result, any error messages you do get from the database server are often cryptic, or can even appear unrelated to what you're trying to accomplish!
In addition to the command-line BCP tool, there is JBCP, a pure-Java library that reimplements the BCP protocol for Java developers. Unfortunately it's still relatively immature, and you may find that Sybase responds to almost any JBCP issue with the following error message:
Bad row data received from the client while bulk copying into object 1874356894 in database 4
This can be caused by several things. The most common is simply that the table definition has not been set up correctly to allow rows to be BCP'd into it. One way to resolve the issue is to ensure that the table you are copying into has 'lock allpages' as part of its definition, for example:
CREATE TABLE bcptest (some_key int not null, mydata varchar(50) null) lock allpages
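If the table already exists, you may not have to drop and recreate it: as far as I'm aware, Sybase ASE lets you change the locking scheme in place with ALTER TABLE, though doing so rewrites the table's storage, so expect it to take time (and hold locks) on a large table. A minimal sketch:

```sql
-- Sketch: switch an existing table to allpages locking on Sybase ASE.
-- This rebuilds the table's pages, so run it in a quiet window.
ALTER TABLE bcptest LOCK ALLPAGES
```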
Other issues can be caused by using univarchar columns that exceed 255 bytes. Since each univarchar character occupies two bytes, this means that every univarchar column you attempt to copy into must be declared no larger than univarchar(127). A valid workaround is to split large column definitions into several smaller columns, and then use an UPDATE statement to reassemble the data from the split columns after it has been uploaded.
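The client-side half of that workaround can be sketched in plain Java. The helper below is my own illustration, not part of JBCP: it splits each oversized value into pieces of at most 127 characters, ready to be bound to the smaller split columns before upload.

```java
import java.util.ArrayList;
import java.util.List;

public class ColumnSplitter {
    // Maximum characters per chunk: univarchar(127) stays under the
    // 255-byte limit because each univarchar character occupies 2 bytes.
    private static final int MAX_CHUNK = 127;

    /** Split a value into pieces of at most MAX_CHUNK characters each. */
    public static List<String> split(String value) {
        List<String> chunks = new ArrayList<>();
        for (int i = 0; i < value.length(); i += MAX_CHUNK) {
            chunks.add(value.substring(i, Math.min(value.length(), i + MAX_CHUNK)));
        }
        return chunks;
    }

    public static void main(String[] args) {
        // A 300-character value splits into three chunks: 127 + 127 + 46.
        String big = "x".repeat(300);
        List<String> parts = split(big);
        System.out.println(parts.size());           // 3
        System.out.println(parts.get(2).length());  // 46
    }
}
```

On the server side, once the rows are in, an UPDATE along the lines of `update mytable set fulldata = part1 + part2 + part3` (column names hypothetical) glues the pieces back together.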
To help googlers, here is the complete error output I was typically seeing from JBCP before I created my table with the additional lock allpages clause.
java.sql.SQLException: Bad row data received from the client while bulk
copying into object 1874356894 in database 4. Received a row of length
72 whilst maximum or expected row length is 15.
[19/12 10:46:37.160] [TID:lee2BCP1]:Upload of 1 result: FAILED
One final warning when using JBCP: you don't always even see the problem by default, because some exceptions are suppressed, causing silent failures. There be dragons!