
Go Concurrency: Mutexes vs Channels with Examples

Can we use buffered and unbuffered channels instead of mutexes to handle synchronization in Go? Let's find out.

Introduction

When building concurrent applications in Go, synchronization is crucial to ensure that shared data is accessed safely. In Go, Mutexes and Channels are the primary tools used for synchronization.

Motivation

I am learning Go these days, and I came across an interesting problem: building a counter that is safe to use concurrently.

The article where I found it solves the problem using a single approach: mutexes. I wondered whether I could solve the same problem using buffered and unbuffered channels instead.

Have a look at the counter code:

package main

// Counter holds a count that is not yet safe for concurrent use.
type Counter struct {
	count int
}

// Inc increments the counter by one.
func (c *Counter) Inc() {
	c.count++
}

// Value returns the current count.
func (c *Counter) Value() int {
	return c.count
}


To ensure our code is safe to use concurrently, let’s dive into writing some tests.

Let’s start with the simplest approach first.

1) Mutexes

A Mutex (short for “mutual exclusion”) is a synchronization primitive that ensures only one goroutine can access a critical section of code at a time.

It provides a locking mechanism: when one goroutine locks a mutex, other goroutines trying to lock it block until the mutex is unlocked. Mutexes are typically used to protect shared variables or resources from race conditions.

package main

import (
	"sync"
	"testing"
)

func TestCounter(t *testing.T) {
	t.Run("using mutexes and wait groups", func(t *testing.T) {
		counter := Counter{}
		wantedCount := 1000

		var wg sync.WaitGroup
		var mut sync.Mutex

		wg.Add(wantedCount)

		for i := 0; i < wantedCount; i++ {
			go func() {
				mut.Lock()
				counter.Inc()
				mut.Unlock()
				wg.Done()
			}()
		}

		wg.Wait()

		if counter.Value() != wantedCount {
			t.Errorf("got %d, want %d", counter.Value(), wantedCount)
		}
	})
}
  • sync.WaitGroup: a WaitGroup is used to track the completion of all goroutines.
  • sync.Mutex: the mutex prevents multiple goroutines from accessing the shared counter at the same time (avoiding a race condition).
  • The loop starts 1000 goroutines. Each goroutine does the following:
    1. mut.Lock(): Locks the mutex before accessing the counter and calling its Inc() method. This ensures that only one goroutine can increment the counter at a time, preventing race conditions.
    2. counter.Inc(): Only one goroutine can call this method at a time, thanks to the mutex lock.
    3. mut.Unlock(): Unlocks the mutex after the counter is incremented, allowing other goroutines to acquire the lock and perform their own increments.
    4. wg.Done(): Signals that the goroutine has completed its work (incrementing the counter), decrementing the WaitGroup counter by one.
  • wg.Wait(): Makes the main goroutine wait until all 1000 worker goroutines have completed. Wait() blocks until the WaitGroup counter reaches zero (when all wg.Done() calls have been made).

2) Buffered Channels


Channels are Go’s way of allowing goroutines to communicate with each other safely. They enable the transfer of data between goroutines and provide synchronization by controlling access to the data being passed.

That said, in our example we will leverage this blocking behavior to let only one goroutine access the shared data at a time.

Buffered channels have a fixed capacity: they can hold a predefined number of elements before blocking the sender. The sender blocks only when the buffer is full.

package main

import (
	"sync"
	"testing"
)

func TestCounter(t *testing.T) {
	t.Run("using buffered channels and wait groups", func(t *testing.T) {
		counter := Counter{}
		wantedCount := 1000

		var wg sync.WaitGroup
		wg.Add(wantedCount)

		ch := make(chan struct{}, 1)

		ch <- struct{}{}

		for i := 0; i < wantedCount; i++ {
			go func() {
				<-ch
				counter.Inc()
				ch <- struct{}{}
				wg.Done()
			}()
		}

		wg.Wait()

		if counter.Value() != wantedCount {
			t.Errorf("got %d, want %d", counter.Value(), wantedCount)
		}
	})
}
  • ch := make(chan struct{}, 1): A buffered channel ch with a capacity of 1 is created. Its single slot holds a "token": a goroutine must receive the token before touching the counter, so only one goroutine can be in the critical section at a time.
  • chan struct{}: An empty struct is used instead of other types (like int or bool) because it occupies zero bytes, which makes it ideal for signaling where you don't need to pass any actual data.
  • ch <- struct{}{}: The first signal is sent to the buffered channel from the test to allow the first goroutine to start. Since the channel has a capacity of 1, this send doesn't block.
  • The loop starts 1000 goroutines. Each goroutine does the following:
    1. <-ch: Waits for the signal from the previously finished goroutine (or the initial signal, in the first iteration) before incrementing the counter.
    2. counter.Inc(): Once the signal is received, the counter is incremented by 1.
    3. ch <- struct{}{}: After incrementing, the goroutine sends the signal back to the channel, allowing the next goroutine to proceed.
    4. wg.Done(): Signals that the goroutine has finished, decrementing the WaitGroup counter by one.

3) Unbuffered Channels


These channels have no buffer: a send blocks until a receiver is ready, and a receive blocks until a sender sends. This provides strict synchronization, with data handed off between goroutines one at a time.

package main

import (
	"sync"
	"testing"
)

func TestCounter(t *testing.T) {
	t.Run("using unbuffered channels and wait groups", func(t *testing.T) {
		counter := Counter{}
		wantedCount := 1000

		var wg sync.WaitGroup
		wg.Add(wantedCount)

		ch := make(chan struct{})

		go func() {
			ch <- struct{}{}
		}()

		for i := 0; i < wantedCount; i++ {
			go func() {
				<-ch

				counter.Inc()
				
				go func() {
					ch <- struct{}{}
				}()

				wg.Done()
			}()
		}

		wg.Wait()

		if counter.Value() != wantedCount {
			t.Errorf("got %d, want %d", counter.Value(), wantedCount)
		}
	})
}
  • ch := make(chan struct{}): An unbuffered channel is created with the type struct{}, which holds no data; the channel is used purely for signaling.
  • go func() { ch <- struct{}{} }(): This anonymous goroutine sends the initial signal that allows the first worker goroutine to start. Because the channel is unbuffered, this send blocks until a worker is ready to receive, which is why it runs in its own goroutine instead of in the test itself.
  • The loop starts 1000 goroutines. Each goroutine does the following:
    1. <-ch: Waits for the signal (the struct{}{} value) from the channel. Since the channel is unbuffered, the goroutine blocks until another goroutine sends, ensuring that only one goroutine runs the critical section at a time.
    2. counter.Inc(): Increments the counter by 1 once the signal is received. This is the critical section, protected by the signaling mechanism.
    3. go func() { ch <- struct{}{} }(): Sends the signal back after incrementing, allowing the next waiting goroutine to start. The send is done in a separate goroutine so that it doesn't block the worker; in particular, after the final increment there is no receiver left, so an inline send would block forever and the test would never finish.

4) Buffered Channels without Wait Groups

After solving this problem using the aforementioned solutions, I asked myself, “Can I solve it without Wait Groups?”. Actually, I came up with two solutions.

In fact, Wait Groups make the main function wait until all the sub-goroutines are completed. So I thought that we could use either an infinite loop that breaks in a condition or we can use another channel to track the goroutines’ completion.

Let’s jump into the code using the infinite loop.

package main

import (
	"sync"
	"testing"
)

func TestCounter(t *testing.T) {
	t.Run("using buffered channels without wait groups (infinite loop)", func(t *testing.T) {
		counter := Counter{}
		wantedCount := 1000

		ch := make(chan struct{}, 1)

		ch <- struct{}{}

		for i := 0; i < wantedCount; i++ {
			go func() {
				<-ch
				counter.Inc()
				ch <- struct{}{}
			}()
		}

		for {
			if counter.Value() == wantedCount {
				break
			}
		}

		if counter.Value() != wantedCount {
			t.Errorf("got %d, want %d", counter.Value(), wantedCount)
		}
	})
}

As you can see, this is a naive solution: instead of a WaitGroup, an infinite loop spins until counter.Value() == wantedCount, which means all goroutines have completed. Simple. Note, however, that the loop busy-waits (burning CPU) and reads the counter without synchronization while the goroutines are still writing it, so go test -race will flag it; it is shown for illustration only.

The other solution is using another channel.

package main

import (
	"sync"
	"testing"
)

func TestCounter(t *testing.T) {
	t.Run("using buffered channels without wait groups (another channel)", func(t *testing.T) {
		counter := Counter{}
		wantedCount := 1000

		ch := make(chan struct{}, 1)
		wc := make(chan struct{}, 1)

		ch <- struct{}{}

		for i := 0; i < wantedCount; i++ {
			go func() {
				<-ch
				counter.Inc()

				// Check while still holding the token: exactly one
				// goroutine observes the final value, so close(wc)
				// runs exactly once (a double close would panic).
				if counter.Value() == wantedCount {
					close(wc)
				}

				ch <- struct{}{}
			}()
		}

		<-wc

		if counter.Value() != wantedCount {
			t.Errorf("got %d, want %d", counter.Value(), wantedCount)
		}
	})
}
  • As you can see, I am using another waiting channel, wc, which is closed (close(wc)) by the goroutine that performs the last increment.
  • <-wc: While the goroutines are working, this receive blocks the main test goroutine.
  • close(wc): Closing wc releases the pending receive, since a receive on a closed channel returns immediately.
  • At that point the block is released, which guarantees that all increments have completed.

Conclusion

In this article, we explored different ways to solve the problem of building a counter that is safe to use concurrently in Go. While the article we referenced implemented the solution using Mutexes, we also discussed alternative approaches using Buffered and Unbuffered Channels.

Understanding these tools and when to use them is key to writing efficient and safe concurrent Go programs.

So, whether you choose mutexes, buffered channels, or unbuffered channels, mastering synchronization in Go is essential; it will help you build robust applications that handle concurrency with ease.

Resources

In fact, this article is inspired by the Sync chapter of Learn Go with Tests.

Think about it

If you liked this article, please rate and share it to spread the word; that really encourages me to create more content like this.

If you found my content helpful, consider subscribing to my newsletter. I respect your inbox: I'll never spam you, and you can unsubscribe at any time!

Thanks a lot for staying with me up to this point. I hope you enjoyed reading this article.
