Go (or Golang) has revolutionized concurrent programming with its elegant implementation of coroutines, known as goroutines. If you’re coming from languages that make concurrency complicated, you’re in for a treat. Go’s approach makes concurrent programming accessible, efficient, and dare I say it—fun!
Today, we’ll dive deep into goroutines and explore how Go’s concurrency model works.
What Are Coroutines?
Coroutines are computer program components that allow execution to be suspended and resumed. Unlike traditional functions that must run to completion before returning control, coroutines can pause their execution, yield control, and then resume where they left off.
In Go, these are implemented as goroutines—lightweight threads managed by the Go runtime rather than the operating system. This makes them extremely efficient, allowing you to run thousands (or even millions) of concurrent tasks with minimal overhead.
Creating Your First Goroutine
Let’s start with a simple example:
package main
import (
"fmt"
"time"
)
func sayHello() {
fmt.Println("Hello Mum!")
}
func main() {
// Start a goroutine
go sayHello()
// Give the goroutine time to run; when main returns, the program exits, so without this pause sayHello might never print (real code would use a WaitGroup or channel instead of sleeping)
time.Sleep(100 * time.Millisecond)
fmt.Println("Main function")
}
The magic happens with that simple go keyword. By prefixing a function call with go, you launch it as a separate goroutine that runs concurrently with the rest of your program.
Communication Between Goroutines: Channels
One of Go’s mottos is: “Do not communicate by sharing memory; instead, share memory by communicating.”
This is where channels come in. Channels are the pipes that connect concurrent goroutines, allowing them to send and receive values:
package main
import "fmt"
func sum(s []int, c chan int) {
sum := 0
for _, v := range s {
sum += v
}
c <- sum // Send sum to channel
}
func main() {
s := []int{7, 2, 8, -9, 4, 0}
c := make(chan int)
go sum(s[:len(s)/2], c)
go sum(s[len(s)/2:], c)
x, y := <-c, <-c // Receive from channel
fmt.Println(x, y, x+y)
}
Buffered Channels
By default, channels are unbuffered, meaning they only accept sends if there is a corresponding receive ready. Buffered channels accept a limited number of values without a corresponding receiver:
ch := make(chan int, 100)
This creates a buffered channel with a capacity of 100 integers.
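To see the difference in practice, here's a minimal sketch (the buffer size of 2 is arbitrary): sends succeed immediately while the buffer has room, and block only once it is full.
package main

import "fmt"

func main() {
    ch := make(chan int, 2) // Buffered: room for two values

    ch <- 1 // Does not block, buffer has space
    ch <- 2 // Does not block, buffer is now full
    // ch <- 3 would block here until a receiver drains the channel

    fmt.Println(<-ch) // 1
    fmt.Println(<-ch) // 2
}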
Select Statement: Managing Multiple Channels
The select statement lets a goroutine wait on multiple communication operations:
package main
import (
"fmt"
"time"
)
func main() {
c1 := make(chan string)
c2 := make(chan string)
go func() {
time.Sleep(1 * time.Second)
c1 <- "one"
}()
go func() {
time.Sleep(2 * time.Second)
c2 <- "two"
}()
for i := 0; i < 2; i++ {
select {
case msg1 := <-c1:
fmt.Println("Received", msg1)
case msg2 := <-c2:
fmt.Println("Received", msg2)
}
}
}
The select statement blocks until one of its cases can proceed, then executes that case. If multiple cases are ready, it chooses one at random.
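If you don't want to block at all, select also accepts a default case that runs when no other case is ready. A minimal sketch:
package main

import "fmt"

func main() {
    messages := make(chan string)

    // Nothing is sending on messages, so the default case runs immediately.
    select {
    case msg := <-messages:
        fmt.Println("Received", msg)
    default:
        fmt.Println("No message ready, moving on")
    }
}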
Synchronization with WaitGroups
What if we need to wait for multiple goroutines to finish? That’s where sync.WaitGroup comes in:
package main
import (
"fmt"
"sync"
"time"
)
func worker(id int, wg *sync.WaitGroup) {
defer wg.Done() // Mark this goroutine as done when the function completes
fmt.Printf("Worker %d starting\n", id)
time.Sleep(time.Second)
fmt.Printf("Worker %d done\n", id)
}
func main() {
var wg sync.WaitGroup
for i := 1; i <= 5; i++ {
wg.Add(1) // Increment the counter
go worker(i, &wg)
}
wg.Wait() // Wait for all goroutines to complete
fmt.Println("All workers done")
}
Advanced Patterns: Worker Pools
Now let’s look at a common pattern: worker pools. This pattern allows you to process many tasks with a fixed number of workers:
package main
import (
"fmt"
"time"
)
func worker(id int, jobs <-chan int, results chan<- int) {
for j := range jobs {
fmt.Printf("Worker %d started job %d\n", id, j)
time.Sleep(time.Second) // Simulating work
fmt.Printf("Worker %d finished job %d\n", id, j)
results <- j * 2
}
}
func main() {
const numJobs = 10
jobs := make(chan int, numJobs)
results := make(chan int, numJobs)
// Start 3 workers
for w := 1; w <= 3; w++ {
go worker(w, jobs, results)
}
// Send jobs
for j := 1; j <= numJobs; j++ {
jobs <- j
}
close(jobs)
// Collect results
for a := 1; a <= numJobs; a++ {
<-results
}
}
Context for Cancellation and Timeouts
For more sophisticated control of goroutines, Go provides the context package, which allows for cancellation, timeouts, and passing request-scoped values:
package main
import (
"context"
"fmt"
"time"
)
func doSomething(ctx context.Context) {
select {
case <-time.After(5 * time.Second):
fmt.Println("Work completed")
case <-ctx.Done():
fmt.Println("Work cancelled:", ctx.Err())
}
}
func main() {
// Create a context with a timeout of 2 seconds
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
defer cancel() // Always cancel when done to release resources
go doSomething(ctx)
// Simulate other work
time.Sleep(3 * time.Second)
fmt.Println("Main function done")
}
Common Pitfalls and Best Practices
1. Race Conditions
Go provides a race detector to help identify race conditions:
go run -race yourprogram.go
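As an illustration, here is a deliberately racy sketch (a shared counter incremented from many goroutines) that the -race flag will report:
package main

import (
    "fmt"
    "sync"
)

func main() {
    counter := 0
    var wg sync.WaitGroup

    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            counter++ // Unsynchronized write from many goroutines: a data race
        }()
    }

    wg.Wait()
    fmt.Println("Counter:", counter) // Often less than 1000
}
The usual fix is to guard the counter with a sync.Mutex or switch to the sync/atomic package.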
2. Deadlocks
When every goroutine is blocked waiting on another, the Go runtime detects the deadlock and aborts:
func main() {
c := make(chan int)
c <- 1 // This will deadlock since nobody is receiving
<-c
}
Run this and Go will helpfully tell you: fatal error: all goroutines are asleep - deadlock!
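One way to fix this particular example (a sketch, not the only option) is to move the send into its own goroutine, or give the channel a buffer, so the send and receive can actually meet:
package main

func main() {
    c := make(chan int)
    go func() { c <- 1 }() // The send now runs in its own goroutine
    <-c                    // ...so the receive in main can pair with it
}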
3. Memory Leaks
Close channels (from the sender side) when no more values will be sent, and make sure every goroutine has a way to terminate; a goroutine blocked forever holds its memory for the life of the program:
// Correct way to close a channel (done by the sender)
close(channel)
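A common leak is a goroutine stuck forever on a channel that nobody will ever send to or close. Here is a minimal sketch, using a hypothetical cancellableWorker, of giving such a goroutine an escape hatch:
package main

import (
    "fmt"
    "time"
)

// cancellableWorker is a hypothetical worker that exits when its done
// channel is closed, so it never blocks (and leaks) forever.
func cancellableWorker(jobs <-chan int, done <-chan struct{}) {
    for {
        select {
        case j, ok := <-jobs:
            if !ok {
                return // Sender closed the jobs channel: normal shutdown
            }
            fmt.Println("processing", j)
        case <-done:
            return // Caller gave up: exit instead of waiting forever
        }
    }
}

func main() {
    jobs := make(chan int)
    done := make(chan struct{})

    go cancellableWorker(jobs, done)

    jobs <- 1
    jobs <- 2

    close(done)                        // Tell the worker to stop
    time.Sleep(100 * time.Millisecond) // Give it a moment to exit (demo only)
}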
Real-world Example: Concurrent Web Scraper
Let’s put everything together with a practical example—a simple concurrent web scraper:
package main
import (
"fmt"
"io"
"net/http"
"sync"
"time"
)
func fetchURL(url string, wg *sync.WaitGroup, results chan<- string) {
defer wg.Done()
start := time.Now()
resp, err := http.Get(url)
if err != nil {
results <- fmt.Sprintf("Error fetching %s: %v", url, err)
return
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
results <- fmt.Sprintf("Error reading %s: %v", url, err)
return
}
elapsed := time.Since(start)
results <- fmt.Sprintf("Fetched %s: %d bytes in %v", url, len(body), elapsed)
}
func main() {
urls := []string{
"https://golang.org",
"https://gitlab.com",
"https://stackoverflow.com”, // Still not dead, Impressive.
"https://smsk.dev",
}
var wg sync.WaitGroup
results := make(chan string, len(urls))
for _, url := range urls {
wg.Add(1)
go fetchURL(url, &wg, results)
}
// Start a goroutine to close the results channel when all fetches are done
go func() {
wg.Wait()
close(results)
}()
// Collect and print results
for result := range results {
fmt.Println(result)
}
}
Conclusion
Goroutines and channels are what make Go special. They provide a simple, powerful approach to concurrency that avoids many of the pitfalls found in other languages.
By mastering these concepts, you’ll be able to write concurrent programs that are both efficient and maintainable. Remember, Go’s philosophy is all about simplicity and practicality—let the language do the heavy lifting while you focus on solving the problem at hand.
Happy coding, Gophers!
Did you find this tutorial helpful? Let me know in the comments below!