PDF merge using iText 7 throwing NullPointerException

We have a requirement to merge multiple PDF documents, and I am using iText 7 to achieve this. I have around 20k files to merge and I am processing them in chunks. During the merge I get the exception below. Can someone help me figure out the issue?

2022-04-21 05:55:27 ERROR AggregatorServiceItext: - Exception occurred while append file letter_aggregation/PDFFiles/Test149.pdf for date 04-11-2022 , hour 4:
java.lang.NullPointerException: null
    at com.itextpdf.kernel.pdf.PdfOutline.addOutline(PdfOutline.java:280) ~[kernel-7.2.1.jar!/:?]
    at com.itextpdf.kernel.pdf.PdfOutline.addOutline(PdfOutline.java:313) ~[kernel-7.2.1.jar!/:?]
    at com.itextpdf.kernel.pdf.PdfDocument.cloneOutlines(PdfDocument.java:2408) ~[kernel-7.2.1.jar!/:?]
    at com.itextpdf.kernel.pdf.PdfDocument.copyOutlines(PdfDocument.java:2370) ~[kernel-7.2.1.jar!/:?]
    at com.itextpdf.kernel.pdf.PdfDocument.copyPagesTo(PdfDocument.java:1325) ~[kernel-7.2.1.jar!/:?]
    at com.itextpdf.kernel.pdf.PdfDocument.copyPagesTo(PdfDocument.java:1366) ~[kernel-7.2.1.jar!/:?]
    at com.itextpdf.kernel.pdf.PdfDocument.copyPagesTo(PdfDocument.java:1345) ~[kernel-7.2.1.jar!/:?]
    at com.itextpdf.kernel.utils.PdfMerger.merge(PdfMerger.java:140) ~[kernel-7.2.1.jar!/:?]
    at com.itextpdf.kernel.utils.PdfMerger.merge(PdfMerger.java:117) ~[kernel-7.2.1.jar!/:?]
    at com.test.letterbatch.service.AggregatorServiceItext.lambda$null$3(AggregatorServiceItext.java:177) ~[pdf-aggregator-utils-1.7.jar!/:?]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_275]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_275]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_275]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_275]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_275]

Here is the code I am using to merge. I am fetching the documents from S3 and merging them in chunks.

s3ObjectReplicatedSummaries.stream().collect(Collectors.groupingBy(x -> index.getAndIncrement() / chunkSize)).entrySet().forEach(integerListEntry -> {
    try {
        PdfDocument pdfDoc = new PdfDocument(new PdfWriter(aggregatedFileLocalPath + FOLDER_PATH_SEPARATE + "aggregated_" + integerListEntry.getKey() + ".pdf", new WriterProperties().useSmartMode()));
        PdfMerger merger = new PdfMerger(pdfDoc);
        final CountDownLatch latch = new CountDownLatch(integerListEntry.getValue().size());
        integerListEntry.getValue().forEach(s3ObjectSummary -> executorService.submit(() -> {
            try {
                S3Object s3Object = s3Client.getObject(s3ObjectSummary.getBucketName(), s3ObjectSummary.getKey());
                PdfDocument pdfDocument = new PdfDocument(new PdfReader(s3Object.getObjectContent().getDelegateStream()));
                merger.merge(pdfDocument, 1, pdfDocument.getNumberOfPages());
                s3Object.close();
                pdfDocument.close();
            } catch (Exception ex) {
                logger.error("Exception occurred while append file {} for date {} , hour {}:", s3ObjectSummary.getKey(), dateOfProcessing, hourOfProcessing, ex);
            } finally {
                latch.countDown();
                logger.info("Down:{}", latch.getCount());
            }
        }));
        logger.info("Before latch await");
        latch.await();
        logger.info("After latch await");
        pdfDoc.close();
        logger.info("Successfully created Chunk pdf");
        filesList.add(aggregatedFileLocalPath + FOLDER_PATH_SEPARATE + "aggregated_" + integerListEntry.getKey() + ".pdf");
    } catch (Exception ex) {
        logger.error("Exception occurred while append for date {} , hour {}:", dateOfProcessing, hourOfProcessing, ex);
    }
});
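Note on the failure mode: the trace dies inside outline cloning (`PdfOutline.addOutline`) while several pool threads call `merger.merge(...)` on the same `PdfMerger`/`PdfDocument`. iText document objects are not thread-safe, so concurrent merges can corrupt shared state such as the outline tree and surface as a `NullPointerException`. A common restructuring is to parallelize only the S3 downloads and perform every merge call on a single thread, in order. Below is a minimal, JDK-only sketch of that pattern; the `fetch` method and string concatenation are hypothetical stand-ins for the real `s3Client.getObject(...)` and `merger.merge(...)` calls, which are not reproduced here:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelFetchSerialMerge {

    // Hypothetical stand-in for the S3 download (s3Client.getObject in the real code).
    static byte[] fetch(int key) {
        return ("doc-" + key).getBytes();
    }

    // Downloads run in parallel on the pool; the "merge" step (string concatenation
    // here, merger.merge(...) in the real code) runs only on the calling thread.
    static String mergeAll(List<Integer> keys) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<byte[]>> downloads = new ArrayList<>();
        for (int key : keys) {
            downloads.add(pool.submit(() -> fetch(key))); // parallel I/O only
        }
        StringBuilder merged = new StringBuilder();
        for (Future<byte[]> f : downloads) {
            // Future.get() blocks until that download finishes; merging stays
            // single-threaded and preserves chunk order.
            merged.append(new String(f.get())).append(';');
        }
        pool.shutdown();
        return merged.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(mergeAll(List.of(1, 2, 3, 4, 5)));
    }
}
```

Because the futures are consumed in submission order, the merged output is deterministic even though the fetches complete in any order, which also keeps the chunked output files stable across runs.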


Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
