Laravel error: SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry (only when run as artisan command)

I am experiencing a strange issue where, on my local machine, my code runs perfectly fine without any errors, but on my live server (production) I get these errors when jobs run automatically (which goes through the CLI rather than the web server, I believe):

SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry

...this only seems to happen when the function that performs the SQL inserts is run via a "php artisan" command (or when that command is run automatically by the server via scheduled jobs/cron jobs). The inserts are done like this:

    $carValues = [
        'name' => $carName,
        'car_id' => $carID,
        'count' => $count
    ];

    if ($car) {
        // Record already exists - update it
        $car->fill($carValues);
        $car->save(); // save changes
    } else {
        // Record does not exist - add a new record
        $car = Car::create($carValues);
    }

In the above example, 'name' has a unique key and is what triggers the errors. Basically, if $car was already not null before the above code segment, we update it; otherwise we create the record. This works 100% of the time on my local machine. Somehow, on the live server only (when using a php artisan command or letting the command run through the CLI/scheduled jobs), it runs into these duplicate entries, but the error doesn't point to any specific segment of my code; the exceptions surface as PDOException from .../vendor/laravel/framework/src/Illuminate/Database/Connection.php

I'm really confused on this one. Is there perhaps some way to have these PDOExceptions ignored? They're stopping my scheduled jobs from running consistently, when ideally the jobs should continue on without throwing these errors. Again, it works on my local machine (running a Homestead/Vagrant box) which matches my online server's setup.



Solution 1:[1]

This is a concurrency problem.

You are only experiencing this in your production environment because that is where you have set up queued execution of jobs.

This means that multiple jobs may run simultaneously.

So this happens:

  • job A tries to fetch $car (does not exist)
  • job B tries to fetch $car (does not exist)
  • job A then inserts it into the database
  • job B then tries to insert it into the database but can't because it has just been inserted by job A.

So you have to either add retries, or make use of INSERT ... ON DUPLICATE KEY UPDATE, which performs the "update or create" at the database level.
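
For the retry route, one pattern is to catch the duplicate-key exception and fall back to an update. A rough sketch (the 1062 error-code check assumes MySQL; Car and $carValues are from the question):

    use Illuminate\Database\QueryException;

    try {
        $car = Car::create($carValues);
    } catch (QueryException $e) {
        // 1062 is MySQL's duplicate-entry error: another job won the race
        // and inserted the row first, so update that row instead.
        if (($e->errorInfo[1] ?? null) === 1062) {
            Car::where('name', $carValues['name'])->update($carValues);
        } else {
            throw $e; // unrelated database error, rethrow
        }
    }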

Even though Laravel has a built-in updateOrCreate method, it is not concurrency-safe either, because it performs the same separate read-then-write sequence.
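
For the database-level route, Laravel 8+ ships an Eloquent upsert() method that compiles to INSERT ... ON DUPLICATE KEY UPDATE on MySQL, so the check and the write happen in a single atomic statement. A minimal sketch, assuming the Car model and the unique key on name from the question:

    use App\Models\Car; // adjust the namespace to your app

    // Insert the row, or update car_id and count if a row
    // with the same unique 'name' already exists.
    Car::upsert(
        [['name' => $carName, 'car_id' => $carID, 'count' => $count]],
        ['name'],            // column(s) that uniquely identify a row
        ['car_id', 'count']  // columns to update on conflict
    );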

An easy fix, and a way to test that this is actually the case, is to wrap your code in:

    DB::transaction(function () {
        // ... your code ...
    }, 5);

This will retry the transaction up to 5 times if it fails (note that Laravel only retries failures it recognizes as concurrency errors, such as deadlocks).

Solution 2:[2]

Another way to solve this is to ensure that retry_after is longer than the time your longest job takes to run.

So, if a job takes 120 seconds, set this value to at least 180 seconds. You can find the retry_after value in the config/queue.php file.

Here is what mine looks like for the Redis connection:

    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => 'default',
        'retry_after' => 240, // longest job runs for 180 seconds; keep this higher to prevent job duplication
        'block_for' => 2,
        'after_commit' => false,
    ],
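
Relatedly, if you process the queue with a worker, the Laravel docs recommend keeping the worker's --timeout at least several seconds shorter than retry_after, so a job is never released for a retry while the original attempt is still running. For example (the connection name and the value 230 are illustrative):

    php artisan queue:work redis --timeout=230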

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

[1] Solution 1: thephper
[2] Solution 2: Koushik Das