How to copy data from Amazon S3 to DDB using AWS Glue
I am following the AWS documentation on how to transfer a DDB table from one account to another. There are two steps:
- Export DDB table into Amazon S3
- Use a Glue job to read the files from the Amazon S3 bucket and write them to the target DynamoDB table
I was able to do the first step. Unfortunately, the instructions don't say how to do the second step. I have worked with Glue a couple of times, but the console UI is very user-unfriendly and I have no idea how to achieve this.
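For reference, the first step amounts to a single API call; a minimal boto3 sketch (the table ARN, bucket name, and prefix are placeholders, and point-in-time recovery must be enabled on the source table):

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Kicks off an asynchronous export of the table to S3 in DYNAMODB_JSON format.
    dynamodb.export_table_to_point_in_time(
        TableArn="arn:aws:dynamodb:us-east-1:111122223333:table/my-source-table",
        S3Bucket="my-export-bucket",
        S3Prefix="exports/",
        ExportFormat="DYNAMODB_JSON",
    )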
Can somebody please explain how to import the data from S3 into DDB?
Solution 1:[1]
You could use Glue Studio to generate a script.
- Log in to AWS.
- Go to Glue.
- Go to Glue Studio.
- Set up the source: point it at the S3 bucket that holds the export (a minimal source-read sketch follows this list), then use something like the mapping snippet further below. That snippet is for a DynamoDB table with pk and sk as a composite primary key.
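Before the mapping step can run, the job needs a source node that reads the exported files. A minimal sketch, assuming the export was written in DYNAMODB_JSON format; the bucket name and path are placeholders:

    from awsglue.context import GlueContext
    from pyspark.context import SparkContext

    glueContext = GlueContext(SparkContext.getOrCreate())

    # Read the exported items; DynamoDB exports land as gzipped JSON files
    # under AWSDynamoDB/<export-id>/data/ in the chosen bucket.
    S3bucket_node1 = glueContext.create_dynamic_frame.from_options(
        connection_type="s3",
        connection_options={
            "paths": ["s3://my-export-bucket/AWSDynamoDB/data/"],
            "recurse": True,
        },
        format="json",
        transformation_ctx="S3bucket_node1",
    )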
This is just the mapping to a new DynamicFrame and the write to DynamoDB:
    from awsglue.transforms import ApplyMapping

    # Select and map attributes; the export stores each item under an "Item" key,
    # with attribute values wrapped in DynamoDB type descriptors (S, N, ...).
    ApplyMapping_node2 = ApplyMapping.apply(
        frame=S3bucket_node1,
        mappings=[
            ("Item.pk.S", "string", "Item.pk.S", "string"),
            ("Item.sk.S", "string", "Item.sk.S", "string"),
        ],
        transformation_ctx="ApplyMapping_node2",
    )

    # Write the mapped frame to the target table.
    S3bucket_node3 = glueContext.write_dynamic_frame.from_options(
        frame=ApplyMapping_node2,
        connection_type="dynamodb",
        connection_options={"dynamodb.output.tableName": "my-target-table"},
    )
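A note on the write: by default the Glue DynamoDB sink only consumes part of the target table's write capacity, so a large import can be slow. The dynamodb.throughput.write.percent connection option raises that share; a variant of the write call above (the value shown is an assumption to tune for your table):

    # Optional: consume a larger share of the table's write capacity.
    S3bucket_node3 = glueContext.write_dynamic_frame.from_options(
        frame=ApplyMapping_node2,
        connection_type="dynamodb",
        connection_options={
            "dynamodb.output.tableName": "my-target-table",
            "dynamodb.throughput.write.percent": "1.0",
        },
    )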
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source | 
|---|---|
| Solution 1 | fedonev | 

amazon-web-services