Does it make sense to use multithreading, multiprocessing or concurrent.futures to speed up a process that is basically just iterating?

In an endeavour to find the best starting values for a minimisation problem, I built something similar to the following code:

def minimize(values):
    value1, value2, value3 = values
    return (complex_bunch_of_maths)
    # e.g. (((value1_o_t_c*0.95)*t_i*value1_w_a*value1_coe*value4_w_a*value4_coe)-(value1_cv*0.98))**2+(((value1_o_t_c*0.05)*t_i*value1_w_a*value1_coe*value5_w_a*value5_coe)-(value1_cv*0.02))**2+(((value2_o_t_c*0.95)*t_i*value2_w_a*value2_coe*value4_w_a*value4_coe)-(value2_cv*0.98))**2+(((value2_o_t_c*0.05)*t_i*value2_w_a*value2_coe*value5_w_a*value5_coe)-(value2_cv*0.02))**2+(((value3_o_t_c*0.95)*t_i*value3_w_a*value3_coe*value4_w_a*value4_coe)-(value3_cv*0.98))**2+(((value3_o_t_c*0.05)*t_i*value3_w_a*value3_coe*value5_w_a*value5_coe)-(value3_cv*0.02))**2

def fun():
    best = None
    for x in range(start, iterations):
        for y in range(start, iterations):
            for z in range(start, iterations):
                res = optimize.minimize(minimize, (x, y, z))
                # If it's the best result so far, keep it, otherwise continue
                if best is None or res.fun < best.fun:
                    best = res
    # Print the best result
    print(best)

Would it make sense to instead run each "x" iteration separately, so they execute "parallel", "threaded" or "concurrent"?

I don't know what any of these terms truly mean, but since the runtime grows with the cube of the iteration count, I want to use all tools at my disposal (that I actually have the skills to implement).
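For context, here is a minimal sketch of what parallelising the outer "x" loop could look like with `concurrent.futures.ProcessPoolExecutor` (processes rather than threads, since pure-Python number crunching is CPU-bound and threads are limited by the GIL). The objective function, the tiny ranges, and the `best_for_x` helper are stand-ins invented for illustration; in the real code the inner call would be `scipy.optimize.minimize`:

```python
import concurrent.futures

START, ITERATIONS = 0, 4  # tiny stand-in ranges so the sketch runs quickly


def objective(values):
    # Hypothetical stand-in for the real complex_bunch_of_maths.
    v1, v2, v3 = values
    return (v1 - 1.5) ** 2 + (v2 - 2.5) ** 2 + (v3 - 0.5) ** 2


def best_for_x(x):
    """Run the full y/z sweep for one fixed x; return the best (value, start_point)."""
    best = None
    for y in range(START, ITERATIONS):
        for z in range(START, ITERATIONS):
            # In the real code this would be:
            #   res = optimize.minimize(minimize, (x, y, z))
            # and you would compare res.fun instead.
            fun = objective((x, y, z))
            if best is None or fun < best[0]:
                best = (fun, (x, y, z))
    return best


if __name__ == "__main__":
    # Each worker process handles one value of x independently.
    with concurrent.futures.ProcessPoolExecutor() as ex:
        per_x = list(ex.map(best_for_x, range(START, ITERATIONS)))
    overall = min(per_x, key=lambda b: b[0])
    print(overall)
```

The split by "x" works because the iterations are independent: no pass needs the result of another, so each worker only has to return its local best and the parent picks the minimum of the minima.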



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
