How to circumvent ActiveRecord query optimization when counter cache is 0
It seems that ActiveRecord optimizes the loading of associated models: when the counter cache column is 0, it assumes there are no associated models and therefore does not execute a SELECT, and immediately returns an empty CollectionProxy.
This causes an annoyance in model tests, where fixtures are INSERTed into the database outside of the ActiveRecord lifecycle, so all counter caches are 0. One workaround is to explicitly define values for the counter cache attributes in the fixture files. Another is to invoke update_counters in the test.
But, is there a way to "force" ActiveRecord to execute an association query when a counter cache column is 0? (And, why the heck does it execute the query within the debugger but not in the test? See below for the suspense on that...)
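To make the two workarounds (and one way of forcing the query) concrete, here is a hedged sketch. It is a Rails test fragment, not a standalone program; I am assuming the fixture accessors `customers(:one)` / `customer_types(:one)` from the fixtures shown below. `reset_counters` and `CollectionProxy#reload` are both public ActiveRecord APIs:

```ruby
# Option A: recompute the stale counter from the actual rows before asserting.
setup do
  CustomerType.reset_counters(customer_types(:one).id, :customers)
end

# Option B: force the association to hit the database, bypassing the cached count.
test 'cannot be deleted if it has associated customers' do
  customer_type = customers(:one).customer_type

  assert_not_empty customer_type.customers.reload

  customer_type.destroy
  refute customer_type.destroyed?
end
```

Option B sidesteps the question's optimization entirely, at the cost of an extra SELECT per call; Option A keeps the test exercising the same code path as production.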
Here are the details of a scenario to illustrate.
After adding a counter cache column to a model, a deletion test is now failing, because the fixture data does not seem to be loading associated objects the same way as it was before the counter cache was added.
Given existing Customer and CustomerType models:
```ruby
class Customer < ApplicationRecord
  belongs_to :customer_type
  # ...
end

class CustomerType < ApplicationRecord
  has_many :customers, dependent: :restrict_with_error
  # ...
end
```
I have the following fixture in customers.yml:
```yaml
one:
  name: Foo
  customer_type: one
```
And the following fixture in customer_types.yml:
```yaml
one:
  name: Fake Customer Type 1
```
Prior to the addition of the counter cache, this test passes:
```ruby
test 'cannot be deleted if it has associated customers' do
  customer_type = customers.first.customer_type
  assert_not_empty customer_type.customers

  customer_type.destroy
  refute customer_type.destroyed?
end
```
I run the following migration:
```ruby
class AddCustomersCountToCustomerType < ActiveRecord::Migration[7.0]
  def change
    add_column :customer_types, :customers_count, :integer, default: 0, null: false

    reversible do |dir|
      dir.up { update_counter }
    end
  end

  def update_counter
    execute <<-SQL.squish
      UPDATE customer_types
      SET customers_count = (SELECT count(1)
                             FROM customers
                             WHERE customers.customer_type_id = customer_types.id);
    SQL
  end
end
```
And update the belongs_to declaration in Customer:
```ruby
class Customer < ApplicationRecord
  belongs_to :customer_type, counter_cache: true
  # ...
end
```
And then the test fails on the first assertion!
```ruby
test 'cannot be deleted if it has associated customers' do
  customer_type = customers.first.customer_type
  assert_not_empty customer_type.customers # <-- fails here
```
Now, when I add a `binding.break` before the assertion and evaluate:

```ruby
customer_type.customers
```
The collection has a customer in it and is not empty. Continuing from the breakpoint then passes the assert_not_empty assertion!
But if I rerun the test and, at the breakpoint, only call `assert_not_empty customer_type.customers`, the assertion fails, and calling `customer_type.customers` returns an empty list. (!)
What is maddening is that, if I invoke customer_type.customers in the test before the assertion, it still fails, despite seeing different behavior when I drop into the breakpoint!
This fails:
```ruby
customer_type = customers.first.customer_type
customer_type.customers
binding.break # in the debugger, immediately continue
assert_not_empty customer_type.customers
```
But this passes:
```ruby
customer_type = customers.first.customer_type
customer_type.customers
binding.break # in the debugger, call customer_type.customers
assert_not_empty customer_type.customers
```
There are no other callbacks or interesting persistence behaviors in these models - they are a very vanilla 1:M.
Here are some observations of the test log.
Here is the test.
```ruby
test 'cannot be deleted if it has associated customers' do
  Rails.logger.info("A")
  customer_type = customers.first.customer_type
  Rails.logger.info("B")
  customer_type.customers
  Rails.logger.info("C")
  binding.break
  Rails.logger.info("D")
  assert_not_empty customer_type.customers
  Rails.logger.info("E")

  customer_type.destroy
  refute customer_type.destroyed?
end
```
Now, without the counter_cache option, the test passes and I see this in the log:
```
-----------------------------------------------------------------------
CustomerTypeTest: test_cannot_be_deleted_if_it_has_associated_customers
-----------------------------------------------------------------------
A
  Customer Load (0.4ms)  SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2  [["id", 980190962], ["LIMIT", 1]]
  Customer Load (0.5ms)  SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2  [["id", 298486374], ["LIMIT", 1]]
  Customer Load (0.3ms)  SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2  [["id", 338564009], ["LIMIT", 1]]
  CustomerType Load (0.5ms)  SELECT "customer_types".* FROM "customer_types" WHERE "customer_types"."id" = $1 ORDER BY name ASC LIMIT $2  [["id", 980190962], ["LIMIT", 1]]
B
C
D
  Customer Exists? (0.9ms)  SELECT 1 AS one FROM "customers" WHERE "customers"."customer_type_id" = $1 LIMIT $2  [["customer_type_id", 980190962], ["LIMIT", 1]]
E
  CACHE Customer Exists? (0.0ms)  SELECT 1 AS one FROM "customers" WHERE "customers"."customer_type_id" = $1 LIMIT $2  [["customer_type_id", 980190962], ["LIMIT", 1]]
  TRANSACTION (0.8ms)  ROLLBACK
```
Makes sense.
Now, I re-enable the counter_cache option. When I hit the breakpoint, I immediately continue. Test fails. Here is the log output:
```
-----------------------------------------------------------------------
CustomerTypeTest: test_cannot_be_deleted_if_it_has_associated_customers
-----------------------------------------------------------------------
A
  Customer Load (0.2ms)  SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2  [["id", 980190962], ["LIMIT", 1]]
  Customer Load (0.4ms)  SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2  [["id", 298486374], ["LIMIT", 1]]
  Customer Load (0.4ms)  SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2  [["id", 338564009], ["LIMIT", 1]]
  CustomerType Load (0.6ms)  SELECT "customer_types".* FROM "customer_types" WHERE "customer_types"."id" = $1 ORDER BY name ASC LIMIT $2  [["id", 980190962], ["LIMIT", 1]]
B
C
D
  TRANSACTION (0.6ms)  ROLLBACK
```
So here it seems that, when an association has a counter_cache column and its value is 0 (it is zero because loading fixtures does not trigger the counter-incrementing callbacks), ActiveRecord "optimizes" by not executing a query at all. (Can anyone confirm this in the source / changelog?)
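I have not verified this against the exact Rails 7.0 source, but from memory the short-circuit lives in `ActiveRecord::Associations::HasManyAssociation#count_records`: when a counter cache is present it reads the column instead of querying, and when that count is 0 it marks the association loaded with an empty target. A paraphrase (my reconstruction, not verbatim Rails code):

```ruby
# Paraphrased from memory of the Rails internals, not verbatim source:
def count_records
  count = if reflection.has_cached_counter?
    owner.read_attribute(reflection.counter_cache_column).to_i # no SQL issued
  else
    scope.count(:all)
  end

  # If the counter says 0, mark the association loaded and empty, so
  # subsequent calls (empty?, each, to_a) never issue a SELECT.
  if count == 0
    @target ||= []
    loaded!
  end

  count
end
```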
Now, here is the messed up thing. Same test, but when I hit the breakpoint, in the debugger I invoke customer_type.customers. Test passes. Here is the log.
```
-----------------------------------------------------------------------
CustomerTypeTest: test_cannot_be_deleted_if_it_has_associated_customers
-----------------------------------------------------------------------
A
  Customer Load (0.6ms)  SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2  [["id", 980190962], ["LIMIT", 1]]
  Customer Load (0.4ms)  SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2  [["id", 298486374], ["LIMIT", 1]]
  Customer Load (0.3ms)  SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2  [["id", 338564009], ["LIMIT", 1]]
  CustomerType Load (0.8ms)  SELECT "customer_types".* FROM "customer_types" WHERE "customer_types"."id" = $1 ORDER BY name ASC LIMIT $2  [["id", 980190962], ["LIMIT", 1]]
B
C
  Customer Load (48.1ms)  SELECT "customers".* FROM "customers" WHERE "customers"."customer_type_id" = $1 ORDER BY last_name ASC  [["customer_type_id", 980190962]]
D
E
  TRANSACTION (1.6ms)  ROLLBACK
```
Why the heck is an explicit customer_type.customers invocation in the debugger causing a query, but that exact same statement in my test is not?
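One hedged explanation (my reading, not confirmed from the source): `customer_type.customers` returns a lazy CollectionProxy, so as a bare statement in the test it loads nothing, and `assert_not_empty` then consults only the cached counter via `empty?`. A debugger REPL, however, echoes the value of every expression it evaluates, and inspecting a CollectionProxy enumerates its records, which issues the SELECT and flips the association's loaded flag, so a later `empty?` is answered from the loaded target instead of the counter. A sketch of that hypothesis as a Rails fragment:

```ruby
customers = customer_type.customers # lazy CollectionProxy; no SQL yet
customers.empty?                    # with counter_cache: reads customers_count only

# What the debugger's echo effectively does when you evaluate the proxy:
customers.inspect                   # enumerates records => SELECT runs, proxy loaded
customers.empty?                    # now answered from the loaded (non-empty) target
```

If that is right, it would also explain why a bare `customer_type.customers` line in the test body changes nothing: its return value is discarded without ever being inspected.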
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow