March 15, 2026 · 10 min read

Why I'm Learning Go in 2026

Go · CLI Tools · Backend Development · Systems Programming

After years of TypeScript and a recent deep dive into Ruby on Rails, I'm adding Go to my toolkit. This isn't about chasing trends. It's about aligning my skills with the kind of systems I want to build.

The Why

TypeScript is great for web applications. Rails is great for rapid prototyping. But when I look at the infrastructure and platform-level work I'm gravitating toward (CLI tools, distributed systems, high-performance APIs), Go keeps showing up.

The language is intentionally simple. The concurrency model is built-in. The binaries are small and fast. And the ecosystem around DevOps and cloud-native tooling is dominated by Go: Docker, Kubernetes, Terraform, Prometheus. If you're building infrastructure tools, Go is the lingua franca.

Learning by Building: Cerebro

I didn't learn Go from tutorials. I built a real tool called Cerebro: a CLI that wraps the Datadog API for fast incident diagnosis. Instead of clicking through dashboards in a browser, you type a command and get a color-coded health check across all your services. That project taught me more about Go in a few weeks than months of reading docs would have.

Here's what I picked up along the way.

Package Main and the Entry Point

Every Go program starts with package main and a main() function. That's it. No framework, no bootstrap file, no dependency injection container. Compare that to a NestJS app where you need a module, a bootstrap function, and a decorator-heavy setup.

package main

import "github.com/EDU-20/cerebro/cmd"

func main() {
    cmd.Execute()
}

Three lines. The entire application entry point. The cmd package handles everything else. This is one of Go's core principles: simplicity at the top level, complexity pushed down into packages.

Structs Instead of Classes

Go doesn't have classes. It has structs. If you're coming from TypeScript or Java, this feels limiting at first. But structs with methods give you everything you need without inheritance hierarchies.

type Client struct {
    Ctx       context.Context
    APIClient *datadog.APIClient
    Config    *config.Config
}

func NewClient(cfg *config.Config) (*Client, error) {
    if cfg.APIKey == "" {
        return nil, fmt.Errorf("DD_API_KEY is required")
    }

    ctx := context.WithValue(context.Background(),
        datadog.ContextAPIKeys, map[string]datadog.APIKey{
            "apiKeyAuth": {Key: cfg.APIKey},
            "appKeyAuth": {Key: cfg.AppKey},
        })

    configuration := datadog.NewConfiguration()
    apiClient := datadog.NewAPIClient(configuration)

    return &Client{
        Ctx:       ctx,
        APIClient: apiClient,
        Config:    cfg,
    }, nil
}

No class, no constructor, no this. You define a struct, then write a NewXxx function that returns a pointer to it. That's the constructor pattern in Go. Validation happens before construction: if the API key is missing, you get an error back immediately.
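For readers without a Datadog account, the same shape works with nothing but the standard library. A minimal sketch of the constructor pattern with a made-up Server type (the fields and validation rules here are hypothetical, just to show the shape):

```go
package main

import (
	"errors"
	"fmt"
)

// Server is a hypothetical type illustrating the NewXxx pattern.
type Server struct {
	Addr    string
	Retries int
}

// NewServer validates its inputs before returning a usable value,
// so callers never see a half-constructed Server.
func NewServer(addr string, retries int) (*Server, error) {
	if addr == "" {
		return nil, errors.New("addr is required")
	}
	if retries < 0 {
		return nil, fmt.Errorf("retries must be >= 0, got %d", retries)
	}
	return &Server{Addr: addr, Retries: retries}, nil
}

func main() {
	if _, err := NewServer("", 3); err != nil {
		fmt.Println("construction failed:", err)
	}
	srv, err := NewServer("localhost:8080", 3)
	if err != nil {
		panic(err)
	}
	fmt.Println("listening on", srv.Addr)
}
```

Because the only way to get a *Server is through NewServer, the invalid states simply never exist.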

Custom Types and Enums with iota

Go doesn't have enums the way TypeScript does. Instead, you create a custom type and use iota to auto-increment constants. Then you attach a String() method so the type can describe itself.

type ServiceStatus int

const (
    StatusGreen  ServiceStatus = iota // 0
    StatusYellow                      // 1
    StatusRed                         // 2
)

func (s ServiceStatus) String() string {
    switch s {
    case StatusGreen:
        return "GREEN"
    case StatusYellow:
        return "YELLOW"
    case StatusRed:
        return "RED"
    default:
        return "UNKNOWN"
    }
}

In Cerebro, this powers the health check output. Each service gets evaluated against thresholds, and the status drives the terminal color: green, yellow, or red.
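Cerebro's actual formatter lives elsewhere, but the idea can be sketched with nothing beyond standard ANSI escape codes. The ANSI() method below is hypothetical, not from the real code:

```go
package main

import "fmt"

type ServiceStatus int

const (
	StatusGreen ServiceStatus = iota
	StatusYellow
	StatusRed
)

func (s ServiceStatus) String() string {
	switch s {
	case StatusGreen:
		return "GREEN"
	case StatusYellow:
		return "YELLOW"
	case StatusRed:
		return "RED"
	default:
		return "UNKNOWN"
	}
}

// ANSI returns the standard ANSI color escape for each status.
func (s ServiceStatus) ANSI() string {
	switch s {
	case StatusGreen:
		return "\033[32m" // green
	case StatusYellow:
		return "\033[33m" // yellow
	case StatusRed:
		return "\033[31m" // red
	default:
		return "\033[0m" // reset
	}
}

func main() {
	// fmt's %s verb calls String() automatically because
	// ServiceStatus satisfies the fmt.Stringer interface.
	for _, s := range []ServiceStatus{StatusGreen, StatusYellow, StatusRed} {
		fmt.Printf("%s%s\033[0m\n", s.ANSI(), s)
	}
}
```

The nice part is that fmt picks up String() for free: any type satisfying fmt.Stringer prints itself.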

Error Handling: Verbose, but Honest

This is the one that trips up every developer coming from try/catch languages. Go doesn't have exceptions. Every function that can fail returns an error as its last return value, and you check it immediately.

func Load() (*Config, error) {
    home, err := os.UserHomeDir()
    if err != nil {
        return nil, fmt.Errorf("finding home directory: %w", err)
    }

    viper.SetConfigName(".cerebro")
    viper.SetConfigType("yaml")
    viper.AddConfigPath(home)

    if err := viper.ReadInConfig(); err != nil {
        if _, ok := err.(viper.ConfigFileNotFoundError); !ok {
            return nil, fmt.Errorf("reading config: %w", err)
        }
    }

    var cfg Config
    if err := viper.Unmarshal(&cfg); err != nil {
        return nil, fmt.Errorf("parsing config: %w", err)
    }

    return &cfg, nil
}

The %w verb wraps the original error with context so you get error chains like "reading config: file is corrupt." The type assertion (err.(viper.ConfigFileNotFoundError)) lets you check for specific error types and handle them differently. A missing config file is fine (use defaults). A corrupt config file is not.

Yes, it's verbose. But after working with it, I started to appreciate how explicit it is. Every failure path is visible. There's no hidden exception that bubbles up three layers and crashes your program at 2am.

Goroutines and Concurrency

This is where Go really shines. Concurrency is a first-class citizen, not a library you bolt on. When Cerebro checks the health of all services, it fans out the requests in parallel using goroutines and a WaitGroup.

results := make([]result, len(svcNames))
var wg sync.WaitGroup

for i, name := range svcNames {
    wg.Add(1)
    go func(idx int, svcName string) {
        defer wg.Done()
        svc, err := services.GetService(cfg, svcName)
        if err != nil {
            results[idx] = result{name: svcName, err: err}
            return
        }
        // ... query metrics, evaluate thresholds
        overall := services.OverallStatus(metricStatuses)
        results[idx] = result{name: svcName, status: overall}
    }(i, name)
}

wg.Wait()

Each goroutine writes to its own index in the results slice, so no mutex is needed. But when multiple goroutines append to a shared slice, you need a mutex to prevent race conditions:

var mu sync.Mutex
var wg sync.WaitGroup

for metricName, metric := range svc.Metrics {
    wg.Add(1)
    go func(mName string, m config.Metric) {
        defer wg.Done()
        ms := services.EvaluateMetric(mName, m, res)
        mu.Lock()
        metricStatuses = append(metricStatuses, ms)
        mu.Unlock()
    }(metricName, metric)
}

wg.Wait()

In TypeScript, you'd use Promise.all() for parallel execution. In Go, you spawn goroutines with go, wait with sync.WaitGroup, and protect shared state with sync.Mutex. The patterns map to each other conceptually, but Go gives you lower-level control.
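If the result count isn't known up front, or you'd rather not index into a shared slice, a channel is the closer analogue to Promise.all's collected results. A stdlib-only sketch of the same fan-out, with checkService standing in for the real Datadog call:

```go
package main

import (
	"fmt"
	"sort"
)

type result struct {
	name   string
	status string
}

// checkService is a stand-in for the real per-service health check.
func checkService(name string) result {
	return result{name: name, status: "GREEN"}
}

// checkAll fans out one goroutine per service and collects the
// results over a channel; receiving len(names) values doubles as
// the wait, so no WaitGroup is needed.
func checkAll(names []string) []result {
	ch := make(chan result, len(names)) // buffered: senders never block
	for _, name := range names {
		go func(n string) {
			ch <- checkService(n)
		}(name)
	}

	results := make([]result, 0, len(names))
	for range names {
		results = append(results, <-ch)
	}

	// Goroutines finish in any order, so sort for stable output.
	sort.Slice(results, func(i, j int) bool { return results[i].name < results[j].name })
	return results
}

func main() {
	for _, r := range checkAll([]string{"api", "worker", "db"}) {
		fmt.Printf("%-8s %s\n", r.name, r.status)
	}
}
```

Both approaches are race-free; the channel version just moves the synchronization into the receive loop instead of per-index writes.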

The internal/ Convention

Go has a built-in access control mechanism that I wish TypeScript had. Any package under an internal/ directory cannot be imported by code outside your module. In Cerebro, the structure looks like:

cerebro/
  main.go
  cmd/
    root.go
    status.go
    metrics.go
    alerts.go
    logs.go
  internal/
    config/config.go
    datadog/client.go
    datadog/metrics.go
    datadog/monitors.go
    datadog/logs.go
    output/formatter.go
    services/registry.go

Everything under internal/ is private to the module. The cmd package orchestrates, the internal packages do the actual work. This is enforced by the Go compiler, not by convention or a linter rule. If someone tries to import internal/datadog from outside the module, it won't compile.

CLI Tooling with Cobra

Building CLI tools is where Go feels most natural. The Cobra framework (used by kubectl, Hugo, and GitHub CLI) gives you subcommands, flags, and help text generation.

var rootCmd = &cobra.Command{
    Use:   "cerebro",
    Short: "Datadog CLI for rapid incident diagnosis",
}

func Execute() {
    if err := rootCmd.Execute(); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}

func init() {
    rootCmd.PersistentFlags().StringVarP(&flagEnv, "env", "e", "", "environment")
    rootCmd.PersistentFlags().StringVarP(&flagFormat, "format", "f", "table", "output format")
}

Go runs each package's init() function automatically during package initialization, before main executes. No explicit registration needed. And PersistentFlags means those flags are inherited by every subcommand. So cerebro status --env prod and cerebro alerts --env staging both work without duplicating flag definitions.

HTTP Without a Framework

Go's standard library includes a production-ready HTTP server. No Express, no Fastify, no framework required.

package main

import (
    "encoding/json"
    "log"
    "net/http"
)

type HealthResponse struct {
    Status  string `json:"status"`
    Version string `json:"version"`
}

func healthHandler(w http.ResponseWriter, r *http.Request) {
    resp := HealthResponse{Status: "ok", Version: "1.0.0"}
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(resp)
}

func main() {
    http.HandleFunc("/health", healthHandler)
    // ListenAndServe always returns a non-nil error on shutdown,
    // so surface it instead of silently dropping it.
    log.Fatal(http.ListenAndServe(":8080", nil))
}

That's a complete HTTP server with JSON responses. No dependencies. The struct tags (json:"status") control serialization, similar to decorators in NestJS but built into the language.

What I'd Tell a TypeScript Developer Starting Go

  • Stop looking for the framework. Go's standard library covers HTTP, JSON, testing, and concurrency. You don't need an Express equivalent.
  • Embrace the verbosity. The if err != nil pattern feels repetitive, but it makes every failure explicit. After a few weeks, you stop fighting it.
  • Think in packages, not classes. Go organizes code by package, not by class hierarchy. Group by responsibility, not by type.
  • Start with a CLI tool. Go compiles to a single binary with zero dependencies. No node_modules, no runtime. Build a small CLI tool and experience the deployment story firsthand.
  • Concurrency is not parallelism. Goroutines are lightweight, but shared state still needs protection. Learn WaitGroup and Mutex before reaching for channels.
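Once WaitGroup and Mutex feel natural, a buffered channel makes a simple semaphore for bounding how many goroutines run at once, which matters when fanning out against a rate-limited API. A stdlib-only sketch (the job names and limit are arbitrary):

```go
package main

import (
	"fmt"
	"sync"
)

// checkAll runs one goroutine per job but uses a buffered channel
// as a semaphore so at most `limit` run at the same time.
func checkAll(jobs []string, limit int) []string {
	sem := make(chan struct{}, limit)
	var (
		wg      sync.WaitGroup
		mu      sync.Mutex
		checked []string
	)
	for _, job := range jobs {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot (blocks when full)
			defer func() { <-sem }() // release it

			mu.Lock()
			checked = append(checked, name)
			mu.Unlock()
		}(job)
	}
	wg.Wait()
	return checked
}

func main() {
	done := checkAll([]string{"api", "worker", "db", "cache", "queue"}, 2)
	fmt.Println("checked", len(done), "services")
}
```

All three primitives appear here doing distinct jobs: the channel bounds concurrency, the WaitGroup waits for completion, and the mutex guards the shared slice.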

What's Next

Go is becoming a permanent part of my toolkit alongside TypeScript. For web applications and frontends, TypeScript is still my first choice. For CLI tools, infrastructure automation, and high-performance backend services, Go is the better fit.

I'll be writing more about specific patterns as I go deeper. The goal is to build production-quality tools, not just toy projects. Cerebro was the first one. More to come.