ClickHouse is using only one core after upgrading to version 22.3.2.1
I am using ClickHouse version 22.3.2.1 and I want it to utilise multiple cores.
This is my profile configuration:
```xml
<?xml version="1.0"?>
<yandex>
    <profiles>
        <default>
            <max_insert_threads>12</max_insert_threads>
            <max_threads>12</max_threads>
            <min_insert_block_size_bytes>536870912</min_insert_block_size_bytes>
            <min_insert_block_size_rows>1000000</min_insert_block_size_rows>
        </default>
    </profiles>
</yandex>
```
I had the same configuration with version 21.12 and it was working fine, but after upgrading ClickHouse to the latest version it is not using multiple cores.
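As a sanity check, the effective thread settings and the actual per-query thread usage can be inspected through the system tables. This is a generic sketch, assuming the query log is enabled (log_queries is 1 in the settings below):

```sql
-- Effective values of the thread settings for the current session.
SELECT name, value, changed
FROM system.settings
WHERE name IN ('max_threads', 'max_insert_threads');

-- How many threads recent queries actually used (requires log_queries = 1).
SELECT query, length(thread_ids) AS threads_used
FROM system.query_log
WHERE type = 'QueryFinish'
ORDER BY event_time DESC
LIMIT 5;
```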
This is my settings file:
```text
min_compress_block_size 65536
max_compress_block_size 1048576
max_block_size 65505
max_insert_block_size 1048545
min_insert_block_size_rows 1000000
min_insert_block_size_bytes 536870912
min_insert_block_size_rows_for_materialized_views 0
min_insert_block_size_bytes_for_materialized_views 0
max_joined_block_size_rows 65505
max_insert_threads 12
max_final_threads 16
max_threads 12
max_read_buffer_size 1048576
max_distributed_connections 1024
max_query_size 262144
interactive_delay 100000
connect_timeout 10
connect_timeout_with_failover_ms 50
connect_timeout_with_failover_secure_ms 100
receive_timeout 300
send_timeout 300
drain_timeout 3
tcp_keep_alive_timeout 290
hedged_connection_timeout_ms 100
receive_data_timeout_ms 2000
use_hedged_requests 1
allow_changing_replica_until_first_data_packet 0
queue_max_wait_ms 0
connection_pool_max_wait_ms 0
replace_running_query_max_wait_ms 5000
kafka_max_wait_ms 5000
rabbitmq_max_wait_ms 5000
poll_interval 10
idle_connection_timeout 3600
distributed_connections_pool_size 1024
connections_with_failover_max_tries 3
s3_min_upload_part_size 16777216
s3_upload_part_size_multiply_factor 2
s3_upload_part_size_multiply_parts_count_threshold 1000
s3_max_single_part_upload_size 33554432
s3_max_single_read_retries 4
s3_max_redirects 10
s3_max_connections 1024
s3_truncate_on_insert 0
s3_create_new_file_on_insert 0
hdfs_replication 0
hdfs_truncate_on_insert 0
hdfs_create_new_file_on_insert 0
hsts_max_age 0
extremes 0
use_uncompressed_cache 0
replace_running_query 0
background_buffer_flush_schedule_pool_size 16
background_pool_size 16
background_merges_mutations_concurrency_ratio 2
background_move_pool_size 8
background_fetches_pool_size 8
background_common_pool_size 8
background_schedule_pool_size 128
background_message_broker_schedule_pool_size 16
background_distributed_schedule_pool_size 16
max_replicated_fetches_network_bandwidth_for_server 0
max_replicated_sends_network_bandwidth_for_server 0
stream_like_engine_allow_direct_select 0
distributed_directory_monitor_sleep_time_ms 100
distributed_directory_monitor_max_sleep_time_ms 30000
distributed_directory_monitor_batch_inserts 0
distributed_directory_monitor_split_batch_on_failure 0
optimize_move_to_prewhere 1
optimize_move_to_prewhere_if_final 0
replication_alter_partitions_sync 1
replication_wait_for_inactive_replica_timeout 120
load_balancing random
load_balancing_first_offset 0
totals_mode after_having_exclusive
totals_auto_threshold 0.5
allow_suspicious_low_cardinality_types 0
compile_expressions 1
min_count_to_compile_expression 3
compile_aggregate_expressions 1
min_count_to_compile_aggregate_expression 3
group_by_two_level_threshold 100000
group_by_two_level_threshold_bytes 50000000
distributed_aggregation_memory_efficient 1
aggregation_memory_efficient_merge_threads 0
enable_positional_arguments 0
max_parallel_replicas 1
parallel_replicas_count 0
parallel_replica_offset 0
allow_experimental_parallel_reading_from_replicas 0
skip_unavailable_shards 0
parallel_distributed_insert_select 0
distributed_group_by_no_merge 0
distributed_push_down_limit 1
optimize_distributed_group_by_sharding_key 1
optimize_skip_unused_shards_limit 1000
optimize_skip_unused_shards 0
optimize_skip_unused_shards_rewrite_in 1
allow_nondeterministic_optimize_skip_unused_shards 0
force_optimize_skip_unused_shards 0
optimize_skip_unused_shards_nesting 0
force_optimize_skip_unused_shards_nesting 0
input_format_parallel_parsing 1
min_chunk_bytes_for_parallel_parsing 10485760
output_format_parallel_formatting 1
merge_tree_min_rows_for_concurrent_read 163840
merge_tree_min_bytes_for_concurrent_read 251658240
merge_tree_min_rows_for_seek 0
merge_tree_min_bytes_for_seek 0
merge_tree_coarse_index_granularity 8
merge_tree_max_rows_to_use_cache 1048576
merge_tree_max_bytes_to_use_cache 2013265920
do_not_merge_across_partitions_select_final 0
mysql_max_rows_to_insert 65536
optimize_min_equality_disjunction_chain_length 3
min_bytes_to_use_direct_io 0
min_bytes_to_use_mmap_io 0
checksum_on_read 1
force_index_by_date 0
force_primary_key 0
use_skip_indexes 1
use_skip_indexes_if_final 0
force_data_skipping_indices
max_streams_to_max_threads_ratio 1
max_streams_multiplier_for_merge_tables 5
network_compression_method LZ4
network_zstd_compression_level 1
priority 0
os_thread_priority 0
log_queries 1
log_formatted_queries 0
log_queries_min_type QUERY_START
log_queries_min_query_duration_ms 0
log_queries_cut_to_length 100000
log_queries_probability 1
distributed_product_mode deny
max_concurrent_queries_for_all_users 0
max_concurrent_queries_for_user 0
insert_deduplicate 1
insert_quorum 0
insert_quorum_timeout 600000
insert_quorum_parallel 1
select_sequential_consistency 0
table_function_remote_max_addresses 1000
read_backoff_min_latency_ms 1000
read_backoff_max_throughput 1048576
read_backoff_min_interval_between_events_ms 1000
read_backoff_min_events 2
read_backoff_min_concurrency 1
memory_tracker_fault_probability 0
enable_http_compression 0
http_zlib_compression_level 3
http_native_compression_disable_checksumming_on_decompress 0
count_distinct_implementation uniqExact
add_http_cors_header 0
max_http_get_redirects 0
use_client_time_zone 0
send_progress_in_http_headers 0
http_headers_progress_interval_ms 100
fsync_metadata 1
join_use_nulls 0
join_default_strictness ALL
any_join_distinct_right_table_keys 0
preferred_block_size_bytes 1000000
max_replica_delay_for_distributed_queries 300
fallback_to_stale_replicas_for_distributed_queries 1
preferred_max_column_in_block_size_bytes 0
insert_distributed_sync 0
insert_distributed_timeout 0
distributed_ddl_task_timeout 180
stream_flush_interval_ms 7500
stream_poll_timeout_ms 500
sleep_in_send_tables_status_ms 0
sleep_in_send_data_ms 0
unknown_packet_in_send_data 0
sleep_in_receive_cancel_ms 0
insert_allow_materialized_columns 0
http_connection_timeout 1
http_send_timeout 180
http_receive_timeout 180
http_max_uri_size 1048576
http_max_fields 1000000
http_max_field_name_size 1048576
http_max_field_value_size 1048576
http_skip_not_found_url_for_globs 1
optimize_throw_if_noop 0
use_index_for_in_with_subqueries 1
joined_subquery_requires_alias 1
empty_result_for_aggregation_by_empty_set 0
empty_result_for_aggregation_by_constant_keys_on_empty_set 1
allow_distributed_ddl 1
allow_suspicious_codecs 0
allow_experimental_codecs 0
query_profiler_real_time_period_ns 1000000000
query_profiler_cpu_time_period_ns 1000000000
metrics_perf_events_enabled 0
metrics_perf_events_list
opentelemetry_start_trace_probability 0
prefer_column_name_to_alias 0
prefer_global_in_and_join 0
max_rows_to_read 0
max_bytes_to_read 0
read_overflow_mode throw
max_rows_to_read_leaf 0
max_bytes_to_read_leaf 0
read_overflow_mode_leaf throw
max_rows_to_group_by 0
group_by_overflow_mode throw
max_bytes_before_external_group_by 0
max_rows_to_sort 0
max_bytes_to_sort 0
sort_overflow_mode throw
max_bytes_before_external_sort 0
max_bytes_before_remerge_sort 1000000000
remerge_sort_lowered_memory_bytes_ratio 2
max_result_rows 0
max_result_bytes 0
result_overflow_mode throw
max_execution_time 0
timeout_overflow_mode throw
min_execution_speed 0
max_execution_speed 0
min_execution_speed_bytes 0
max_execution_speed_bytes 0
timeout_before_checking_execution_speed 10
max_columns_to_read 0
max_temporary_columns 0
max_temporary_non_const_columns 0
max_subquery_depth 100
max_pipeline_depth 1000
max_ast_depth 1000
max_ast_elements 50000
max_expanded_ast_elements 500000
readonly 0
max_rows_in_set 0
max_bytes_in_set 0
set_overflow_mode throw
max_rows_in_join 0
max_bytes_in_join 0
join_overflow_mode throw
join_any_take_last_row 0
join_algorithm hash
default_max_bytes_in_join 1000000000
partial_merge_join_left_table_buffer_bytes 0
partial_merge_join_rows_in_right_blocks 65536
join_on_disk_max_files_to_merge 64
temporary_files_codec LZ4
max_rows_to_transfer 0
max_bytes_to_transfer 0
transfer_overflow_mode throw
max_rows_in_distinct 0
max_bytes_in_distinct 0
distinct_overflow_mode throw
max_memory_usage 28000000000
max_guaranteed_memory_usage 0
max_memory_usage_for_user 0
max_guaranteed_memory_usage_for_user 0
max_untracked_memory 4194304
memory_profiler_step 4194304
memory_profiler_sample_probability 0
memory_usage_overcommit_max_wait_microseconds 0
max_network_bandwidth 0
max_network_bytes 0
max_network_bandwidth_for_user 0
max_network_bandwidth_for_all_users 0
max_backup_threads 0
log_profile_events 1
log_query_settings 1
log_query_threads 1
log_query_views 1
log_comment
send_logs_level fatal
enable_optimize_predicate_expression 1
enable_optimize_predicate_expression_to_final_subquery 1
allow_push_predicate_when_subquery_contains_with 1
low_cardinality_max_dictionary_size 8192
low_cardinality_use_single_dictionary_for_part 0
decimal_check_overflow 1
prefer_localhost_replica 1
max_fetch_partition_retries_count 5
http_max_multipart_form_data_size 1073741824
calculate_text_stack_trace 1
allow_ddl 1
parallel_view_processing 0
enable_unaligned_array_join 0
optimize_read_in_order 1
optimize_aggregation_in_order 0
aggregation_in_order_max_block_bytes 50000000
read_in_order_two_level_merge_threshold 100
low_cardinality_allow_in_native_format 1
cancel_http_readonly_queries_on_client_close 0
external_table_functions_use_nulls 1
external_table_strict_query 0
allow_hyperscan 1
max_hyperscan_regexp_length 0
max_hyperscan_regexp_total_length 0
allow_simdjson 1
allow_introspection_functions 0
max_partitions_per_insert_block 100
max_partitions_to_read -1
check_query_single_value_result 1
allow_drop_detached 0
postgresql_connection_pool_size 16
postgresql_connection_pool_wait_timeout 5000
glob_expansion_max_elements 1000
odbc_bridge_connection_pool_size 16
distributed_replica_error_half_life 60
distributed_replica_error_cap 1000
distributed_replica_max_ignored_errors 0
allow_experimental_live_view 0
live_view_heartbeat_interval 15
max_live_view_insert_blocks_before_refresh 64
allow_experimental_window_view 0
window_view_clean_interval 5
window_view_heartbeat_interval 15
min_free_disk_space_for_temporary_data 0
default_database_engine Atomic
default_table_engine None
show_table_uuid_in_table_create_query_if_not_nil 0
database_atomic_wait_for_drop_and_detach_synchronously 0
enable_scalar_subquery_optimization 1
optimize_trivial_count_query 1
optimize_respect_aliases 1
mutations_sync 0
optimize_move_functions_out_of_any 0
optimize_normalize_count_variants 1
optimize_injective_functions_inside_uniq 1
convert_query_to_cnf 0
optimize_arithmetic_operations_in_aggregate_functions 1
optimize_duplicate_order_by_and_distinct 1
optimize_redundant_functions_in_order_by 1
optimize_if_chain_to_multiif 0
optimize_if_transform_strings_to_enum 0
optimize_monotonous_functions_in_order_by 1
optimize_functions_to_subcolumns 0
optimize_using_constraints 0
optimize_substitute_columns 0
optimize_append_index 0
normalize_function_names 1
allow_experimental_alter_materialized_view_structure 0
enable_early_constant_folding 1
deduplicate_blocks_in_dependent_materialized_views 0
use_compact_format_in_distributed_parts_names 1
validate_polygons 1
max_parser_depth 1000
temporary_live_view_timeout 5
periodic_live_view_refresh 60
transform_null_in 0
allow_nondeterministic_mutations 0
lock_acquire_timeout 120
materialize_ttl_after_modify 1
function_implementation
allow_experimental_geo_types 0
data_type_default_nullable 0
cast_keep_nullable 0
cast_ipv4_ipv6_default_on_conversion_error 0
alter_partition_verbose_result 0
allow_experimental_database_materialized_mysql 0
allow_experimental_database_materialized_postgresql 0
system_events_show_zero_values 0
mysql_datatypes_support_level
optimize_trivial_insert_select 1
allow_non_metadata_alters 1
enable_global_with_statement 1
aggregate_functions_null_for_empty 0
optimize_syntax_fuse_functions 0
optimize_fuse_sum_count_avg 0
flatten_nested 1
asterisk_include_materialized_columns 0
asterisk_include_alias_columns 0
optimize_skip_merged_partitions 0
optimize_on_insert 1
force_optimize_projection 0
async_socket_for_remote 1
insert_null_as_default 1
describe_extend_object_types 0
describe_include_subcolumns 0
optimize_rewrite_sum_if_to_count_if 1
insert_shard_id 0
allow_experimental_query_deduplication 0
engine_file_empty_if_not_exists 0
engine_file_truncate_on_insert 0
engine_file_allow_create_multiple_files 0
allow_experimental_database_replicated 0
database_replicated_initial_query_timeout_sec 300
max_distributed_depth 5
database_replicated_always_detach_permanently 0
database_replicated_allow_only_replicated_engine 0
distributed_ddl_output_mode throw
distributed_ddl_entry_format_version 1
external_storage_max_read_rows 0
external_storage_max_read_bytes 0
external_storage_connect_timeout_sec 10
external_storage_rw_timeout_sec 300
union_default_mode
optimize_aggregators_of_group_by_keys 1
optimize_group_by_function_keys 1
legacy_column_name_of_tuple_literal 0
query_plan_enable_optimizations 1
query_plan_max_optimizations_to_apply 10000
query_plan_filter_push_down 1
regexp_max_matches_per_row 1000
limit 0
offset 0
function_range_max_elements_in_block 500000000
short_circuit_function_evaluation enable
local_filesystem_read_method pread
remote_filesystem_read_method threadpool
local_filesystem_read_prefetch 0
remote_filesystem_read_prefetch 1
read_priority 0
merge_tree_min_rows_for_concurrent_read_for_remote_filesystem 163840
merge_tree_min_bytes_for_concurrent_read_for_remote_filesystem 251658240
remote_read_min_bytes_for_seek 4194304
async_insert_threads 16
async_insert 0
wait_for_async_insert 1
wait_for_async_insert_timeout 120
async_insert_max_data_size 100000
async_insert_busy_timeout_ms 200
async_insert_stale_timeout_ms 0
remote_fs_read_max_backoff_ms 10000
remote_fs_read_backoff_max_tries 5
remote_fs_enable_cache 1
remote_fs_cache_max_wait_sec 5
http_max_tries 10
http_retry_initial_backoff_ms 100
http_retry_max_backoff_ms 10000
force_remove_data_recursively_on_drop 0
check_table_dependencies 1
use_local_cache_for_remote_storage 1
allow_unrestricted_reads_from_keeper 0
allow_experimental_funnel_functions 0
allow_experimental_nlp_functions 0
allow_experimental_object_type 0
insert_deduplication_token
max_memory_usage_for_all_queries 0
multiple_joins_rewriter_version 0
enable_debug_queries 0
allow_experimental_database_atomic 1
allow_experimental_bigint_types 1
allow_experimental_window_functions 1
handle_kafka_error_mode default
database_replicated_ddl_output 1
replication_alter_columns_timeout 60
odbc_max_field_size 0
allow_experimental_map_type 1
merge_tree_clear_old_temporary_directories_interval_seconds 60
merge_tree_clear_old_parts_interval_seconds 1
partial_merge_join_optimizations 0
max_alter_threads 'auto(12)'
allow_experimental_projection_optimization 1
format_csv_delimiter ,
format_csv_allow_single_quotes 1
format_csv_allow_double_quotes 1
output_format_csv_crlf_end_of_line 0
input_format_csv_enum_as_number 0
input_format_csv_arrays_as_nested_csv 0
input_format_skip_unknown_fields 0
input_format_with_names_use_header 1
input_format_with_types_use_header 1
input_format_import_nested_json 0
input_format_defaults_for_omitted_fields 1
input_format_csv_empty_as_default 1
input_format_tsv_empty_as_default 0
input_format_tsv_enum_as_number 0
input_format_null_as_default 1
input_format_use_lowercase_column_name 0
input_format_arrow_import_nested 0
input_format_orc_import_nested 0
input_format_orc_row_batch_size 100000
input_format_parquet_import_nested 0
input_format_allow_seeks 1
input_format_orc_allow_missing_columns 0
input_format_parquet_allow_missing_columns 0
input_format_arrow_allow_missing_columns 0
input_format_hive_text_fields_delimiter
input_format_hive_text_collection_items_delimiter
input_format_hive_text_map_keys_delimiter
input_format_msgpack_number_of_columns 0
output_format_msgpack_uuid_representation ext
input_format_max_rows_to_read_for_schema_inference 100
date_time_input_format basic
date_time_output_format simple
bool_true_representation true
bool_false_representation false
input_format_values_interpret_expressions 1
input_format_values_deduce_templates_of_expressions 1
input_format_values_accurate_types_of_literals 1
input_format_avro_allow_missing_fields 0
format_avro_schema_registry_url
output_format_json_quote_64bit_integers 1
output_format_json_quote_denormals 0
output_format_json_escape_forward_slashes 1
output_format_json_named_tuples_as_objects 0
output_format_json_array_of_rows 0
output_format_pretty_max_rows 10000
output_format_pretty_max_column_pad_width 250
output_format_pretty_max_value_width 10000
output_format_pretty_color 1
output_format_pretty_grid_charset UTF-8
output_format_parquet_row_group_size 1000000
output_format_avro_codec
output_format_avro_sync_interval 16384
output_format_avro_string_column_pattern
output_format_avro_rows_in_file 1
output_format_tsv_crlf_end_of_line 0
format_csv_null_representation \N
format_tsv_null_representation \N
output_format_decimal_trailing_zeros 0
input_format_allow_errors_num 0
input_format_allow_errors_ratio 0
format_schema
format_template_resultset
format_template_row
format_template_rows_between_delimiter \n
format_custom_escaping_rule Escaped
format_custom_field_delimiter \t
format_custom_row_before_delimiter
format_custom_row_after_delimiter \n
format_custom_row_between_delimiter
format_custom_result_before_delimiter
format_custom_result_after_delimiter
format_regexp
format_regexp_escaping_rule Raw
format_regexp_skip_unmatched 0
output_format_enable_streaming 0
output_format_write_statistics 1
output_format_pretty_row_numbers 0
insert_distributed_one_random_shard 0
cross_to_inner_join_rewrite 1
output_format_arrow_low_cardinality_as_dictionary 0
format_capn_proto_enum_comparising_mode by_values
```
Any help would be appreciated. Thanks!
Solution 1:[1]
It looks like you run ClickHouse inside Docker. The issue is related to the cgroups limits calculation and is fixed in the next 22.3.x release.
See details: https://github.com/ClickHouse/ClickHouse/pull/35815
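One way to check whether this is the problem, a sketch based on the settings dump above (where max_alter_threads resolves to 'auto(12)'): compare the running server version with the patched release, and look at the 'auto(N)' values the server resolved at startup. N is normally the detected core count, and with the cgroups bug it can collapse to 1 inside a container:

```sql
-- The cgroups fix landed in a later 22.3.x patch release; check what is running.
SELECT version();

-- Settings left at their default display as 'auto(N)', where N is the value
-- the server resolved automatically (normally the detected number of cores).
SELECT name, value
FROM system.settings
WHERE name = 'max_alter_threads';
```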
Solution 2:[2]
After your comments, it looks like you need to increase max_insert_threads for INSERT ... SELECT ... queries:
https://clickhouse.com/docs/en/operations/settings/settings/#settings-max-insert-threads
and check EXPLAIN for the SELECT part:
https://clickhouse.com/docs/en/sql-reference/statements/explain/
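As a rough illustration of both suggestions (target_table and source_table are placeholder names, not from the question):

```sql
-- Raise insert parallelism for the session; INSERT ... SELECT honours this.
SET max_insert_threads = 12;
SET max_threads = 12;

-- Hypothetical tables, purely for illustration.
INSERT INTO target_table
SELECT * FROM source_table;

-- EXPLAIN PIPELINE shows how many parallel streams the SELECT plan uses.
EXPLAIN PIPELINE
SELECT * FROM source_table;
```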
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Slach |
| Solution 2 | Slach |
