What's the performance difference between atomic.AddInt64 and sync.Mutex?

I'm learning about Go's sync/atomic package and ran a performance test. Below is the test code.

var TestNum int64

var benchCount = 1000000

func Benchmark_T1(b *testing.B) {
    wg := sync.WaitGroup{}
    for i := 0; i < benchCount; i++ {
        wg.Add(1) // Add before starting the goroutine; calling it inside races with wg.Wait
        go func() {
            defer wg.Done()
            t1()
        }()
    }
    wg.Wait()
}

func Benchmark_T2(b *testing.B) {
    wg := sync.WaitGroup{}
    mutex := &sync.Mutex{}
    for i := 0; i < benchCount; i++ {
        wg.Add(1) // Add before starting the goroutine; calling it inside races with wg.Wait
        go func() {
            defer wg.Done()
            t2(mutex)
        }()
    }
    wg.Wait()
}

func t1() {
    atomic.AddInt64(&TestNum, 1)
    defer atomic.AddInt64(&TestNum, -1) // register the decrement before sleeping
    time.Sleep(time.Second)
}

func t2(mutex *sync.Mutex) {
    mutex.Lock()
    TestNum++
    mutex.Unlock()
    defer func() { // register the decrement before sleeping
        mutex.Lock()
        TestNum--
        mutex.Unlock()
    }()
    time.Sleep(time.Second)
}

The benchmark result is

goos: darwin
goarch: amd64
cpu: Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
Benchmark_T1
Benchmark_T1-16            1    2561935189 ns/op
Benchmark_T2
Benchmark_T2-16            1    1612732160 ns/op

The result shows sync.Mutex performing better than atomic.AddInt64. I thought the atomic version should be faster, since it compiles down to an atomic CPU instruction while a mutex involves more machinery. So why did I get the opposite result?



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow