How to implement a bulk load to Amazon Redshift from an S3 bucket using Pentaho

I have downloaded Pentaho 9.2 Community Edition to implement a bulk load to Amazon Redshift from an AWS S3 bucket, but I am not able to find a way to do this in Pentaho.

Can anyone please help me implement this?

Thanks in advance.



Solution 1:[1]

If you really have to use Pentaho, use the "Execute SQL Script" step: create a connection to Redshift and run a COPY command. See the example below:


COPY s3_sch.event_log_int_stg
  (LOG_ID, EVENT_TYPE_ID, USER_ID, LOG_DESC, EVENT_LOG_DATE,
   OLD_VALUE, NEW_VALUE, APP_SERVER, CLIENT_ADDRESS)
FROM '${s3redshiftpath}${SHORT_FILENAME}'
CREDENTIALS 'aws_access_key_id=${accessKeyId};aws_secret_access_key=${secretKey}'
TIMEFORMAT 'DD-MON-YY HH:MI:SS';
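
The ${...} placeholders are Pentaho (Kettle) variables; for them to be expanded at runtime, the "Variable substitution" option in the Execute SQL Script step needs to be enabled. One way to supply the values is through kettle.properties in the user's .kettle directory; a minimal sketch with placeholder values (not real paths or credentials):

# Hypothetical entries in ~/.kettle/kettle.properties
s3redshiftpath=s3://my-bucket/redshift-staging/
accessKeyId=AKIAXXXXXXXXXXXXXXXX
secretKey=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx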

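If the Redshift cluster has an IAM role attached that grants read access to the bucket, the same COPY can authenticate with IAM_ROLE instead of embedding access keys in the job. A minimal sketch, assuming a hypothetical role ARN:

-- Same load as above, but authenticating via an attached IAM role
-- (the role ARN below is a placeholder)
COPY s3_sch.event_log_int_stg
  (LOG_ID, EVENT_TYPE_ID, USER_ID, LOG_DESC, EVENT_LOG_DATE,
   OLD_VALUE, NEW_VALUE, APP_SERVER, CLIENT_ADDRESS)
FROM '${s3redshiftpath}${SHORT_FILENAME}'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-copy-role'
TIMEFORMAT 'DD-MON-YY HH:MI:SS';
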
Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source

[1] Nikhil