In the world of Go development, writing code is only half the battle. The other half—often more challenging and equally critical—is ensuring that your code works correctly, performs efficiently, and remains resilient under all conditions. As Go continues to power mission-critical systems across industries, from financial services to cloud infrastructure, the importance of comprehensive testing strategies cannot be overstated.
This guide explores advanced testing techniques that go beyond basic unit tests to help you build truly bulletproof Go applications. We’ll dive deep into sophisticated testing patterns, benchmarking strategies, and quality assurance practices that experienced Go developers use to ensure their systems perform reliably at scale.
Advanced Unit Testing Patterns
While basic unit testing is familiar to most Go developers, advanced patterns can dramatically improve test quality, maintainability, and coverage.
Table-Driven Tests with Sophisticated Fixtures
Table-driven tests are a Go testing staple, but they can be elevated to handle complex scenarios with sophisticated fixtures:
package payment
import (
	"context"
	"errors"
	"testing"
	"time"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/require"
)
// TestProcessPayment demonstrates advanced table-driven tests with complex fixtures
func TestProcessPayment(t *testing.T) {
// Define reusable test fixtures
type testAccount struct {
id string
balance float64
currency string
isBlocked bool
}
type testPayment struct {
amount float64
currency string
destination string
metadata map[string]interface{}
}
// Define test case structure
type testCase struct {
name string
account testAccount
payment testPayment
setupMock func(*mockPaymentProcessor)
expectedResult *PaymentResult
expectedError error
timeout time.Duration
}
// Create test cases
testCases := []testCase{
{
name: "successful payment with sufficient funds",
account: testAccount{
id: "acc-123",
balance: 1000.00,
currency: "USD",
},
payment: testPayment{
amount: 50.00,
currency: "USD",
destination: "acc-456",
metadata: map[string]interface{}{
"purpose": "subscription",
},
},
setupMock: func(m *mockPaymentProcessor) {
m.On("ValidateAccount", "acc-123").Return(true, nil)
m.On("ProcessTransaction",
mock.MatchedBy(func(tx Transaction) bool {
return tx.Amount == 50.00 && tx.Currency == "USD"
}),
).Return("tx-789", nil)
},
expectedResult: &PaymentResult{
TransactionID: "tx-789",
Status: "completed",
ProcessedAt: time.Now(),
},
timeout: 500 * time.Millisecond,
},
{
name: "insufficient funds",
account: testAccount{
id: "acc-123",
balance: 30.00,
currency: "USD",
},
payment: testPayment{
amount: 50.00,
currency: "USD",
destination: "acc-456",
},
setupMock: func(m *mockPaymentProcessor) {
m.On("ValidateAccount", "acc-123").Return(true, nil)
m.On("ProcessTransaction", mock.Anything).Return("",
ErrInsufficientFunds)
},
expectedError: ErrInsufficientFunds,
timeout: 500 * time.Millisecond,
},
{
name: "currency mismatch",
account: testAccount{
id: "acc-123",
balance: 1000.00,
currency: "USD",
},
payment: testPayment{
amount: 50.00,
currency: "EUR",
destination: "acc-456",
},
setupMock: func(m *mockPaymentProcessor) {
m.On("ValidateAccount", "acc-123").Return(true, nil)
m.On("ProcessTransaction", mock.Anything).Return("",
ErrCurrencyMismatch)
},
expectedError: ErrCurrencyMismatch,
timeout: 500 * time.Millisecond,
},
{
name: "blocked account",
account: testAccount{
id: "acc-123",
balance: 1000.00,
currency: "USD",
isBlocked: true,
},
payment: testPayment{
amount: 50.00,
currency: "USD",
destination: "acc-456",
},
setupMock: func(m *mockPaymentProcessor) {
m.On("ValidateAccount", "acc-123").Return(false,
ErrAccountBlocked)
},
expectedError: ErrAccountBlocked,
timeout: 500 * time.Millisecond,
},
{
name: "timeout during processing",
account: testAccount{
id: "acc-123",
balance: 1000.00,
currency: "USD",
},
payment: testPayment{
amount: 50.00,
currency: "USD",
destination: "acc-456",
},
setupMock: func(m *mockPaymentProcessor) {
m.On("ValidateAccount", "acc-123").Return(true, nil)
m.On("ProcessTransaction", mock.Anything).Run(func(args mock.Arguments) {
// Simulate a slow operation that will trigger timeout
time.Sleep(200 * time.Millisecond)
}).Return("", nil)
},
expectedError: context.DeadlineExceeded,
timeout: 100 * time.Millisecond, // Set timeout shorter than processing time
},
}
// Execute test cases
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Create mock processor and set up expectations
mockProcessor := new(mockPaymentProcessor)
if tc.setupMock != nil {
tc.setupMock(mockProcessor)
}
// Create payment service with mock dependencies
service := NewPaymentService(mockProcessor)
// Create context with timeout
ctx, cancel := context.WithTimeout(context.Background(), tc.timeout)
defer cancel()
// Execute the function being tested
result, err := service.ProcessPayment(ctx, tc.account.id, tc.payment.amount,
tc.payment.currency, tc.payment.destination, tc.payment.metadata)
// Verify results
if tc.expectedError != nil {
assert.True(t, errors.Is(err, tc.expectedError),
"Expected error %v, got %v", tc.expectedError, err)
} else {
require.NoError(t, err)
assert.Equal(t, tc.expectedResult.TransactionID, result.TransactionID)
assert.Equal(t, tc.expectedResult.Status, result.Status)
assert.WithinDuration(t, tc.expectedResult.ProcessedAt, result.ProcessedAt,
2*time.Second)
}
// Verify all mock expectations were met
mockProcessor.AssertExpectations(t)
})
}
}
This advanced table-driven test demonstrates several sophisticated techniques:
- Complex test fixtures: Structured test data with nested types
- Mock setup functions: Each test case configures its own mocks
- Context with timeouts: Testing timeout behavior explicitly
- Matcher functions: Using mock.MatchedBy for flexible argument matching
- Comprehensive assertions: Checking both success and error cases
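The test also assumes a mockPaymentProcessor built on testify's mock package, which isn't shown above. A minimal sketch of what it might look like, with the PaymentProcessor interface and method signatures inferred from the expectations in the test (all names here are assumptions):
// PaymentProcessor is the dependency the payment service talks to.
// The method set is inferred from the mock expectations above.
type PaymentProcessor interface {
	ValidateAccount(accountID string) (bool, error)
	ProcessTransaction(tx Transaction) (string, error)
}

// mockPaymentProcessor is a testify-based mock of PaymentProcessor.
type mockPaymentProcessor struct {
	mock.Mock
}

func (m *mockPaymentProcessor) ValidateAccount(accountID string) (bool, error) {
	args := m.Called(accountID)
	return args.Bool(0), args.Error(1)
}

func (m *mockPaymentProcessor) ProcessTransaction(tx Transaction) (string, error) {
	args := m.Called(tx)
	return args.String(0), args.Error(1)
}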
Behavioral Testing with BDD-Style Assertions
Behavioral testing focuses on describing the expected behavior of your code in a more natural language format:
package user
import (
	"context"
	"testing"
	"time"
	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/suite"
)
// UserServiceSuite is a test suite for the UserService
type UserServiceSuite struct {
suite.Suite
service *UserService
mockRepo *mockUserRepository
mockAuth *mockAuthProvider
mockNotifier *mockNotificationService
ctx context.Context
}
// SetupTest runs before each test
func (s *UserServiceSuite) SetupTest() {
s.mockRepo = new(mockUserRepository)
s.mockAuth = new(mockAuthProvider)
s.mockNotifier = new(mockNotificationService)
s.service = NewUserService(s.mockRepo, s.mockAuth, s.mockNotifier)
s.ctx = context.Background()
}
// TearDownTest runs after each test
func (s *UserServiceSuite) TearDownTest() {
// Verify all expectations were met
s.mockRepo.AssertExpectations(s.T())
s.mockAuth.AssertExpectations(s.T())
s.mockNotifier.AssertExpectations(s.T())
}
// TestUserRegistration tests the user registration flow
func (s *UserServiceSuite) TestUserRegistration() {
// Given a valid user registration request
req := &RegistrationRequest{
Email: "[email protected]",
Password: "secureP@ssw0rd",
FirstName: "Test",
LastName: "User",
}
// And the email is not already in use
s.mockRepo.On("FindByEmail", s.ctx, "[email protected]").Return(nil, ErrUserNotFound)
// And the password meets security requirements
s.mockAuth.On("ValidatePassword", "secureP@ssw0rd").Return(true, nil)
// And the password can be hashed
s.mockAuth.On("HashPassword", "secureP@ssw0rd").Return("hashed_password_123", nil)
// And the user can be saved to the repository
s.mockRepo.On("Create", s.ctx, mock.MatchedBy(func(u *User) bool {
return u.Email == "[email protected]" &&
u.FirstName == "Test" &&
u.LastName == "User" &&
u.PasswordHash == "hashed_password_123"
})).Return("user_123", nil)
// And a welcome notification can be sent
s.mockNotifier.On("SendWelcomeMessage", s.ctx, "[email protected]", "Test").Return(nil)
// When registering a new user
userID, err := s.service.RegisterUser(s.ctx, req)
// Then the registration should succeed
s.NoError(err)
s.Equal("user_123", userID)
}
// TestUserRegistrationWithExistingEmail tests registration with an email that's already in use
func (s *UserServiceSuite) TestUserRegistrationWithExistingEmail() {
// Given a user registration request with an email that's already in use
req := &RegistrationRequest{
Email: "[email protected]",
Password: "secureP@ssw0rd",
FirstName: "Test",
LastName: "User",
}
// And the email is already in use
existingUser := &User{
ID: "existing_user_123",
Email: "[email protected]",
FirstName: "Existing",
LastName: "User",
CreatedAt: time.Now().Add(-24 * time.Hour),
}
s.mockRepo.On("FindByEmail", s.ctx, "[email protected]").Return(existingUser, nil)
// When registering a new user
userID, err := s.service.RegisterUser(s.ctx, req)
// Then the registration should fail with an appropriate error
s.Empty(userID)
s.ErrorIs(err, ErrEmailAlreadyExists)
}
// TestUserRegistrationWithWeakPassword tests registration with a weak password
func (s *UserServiceSuite) TestUserRegistrationWithWeakPassword() {
// Given a user registration request with a weak password
req := &RegistrationRequest{
Email: "[email protected]",
Password: "weak",
FirstName: "Test",
LastName: "User",
}
// And the email is not already in use
s.mockRepo.On("FindByEmail", s.ctx, "[email protected]").Return(nil, ErrUserNotFound)
// And the password fails security validation
s.mockAuth.On("ValidatePassword", "weak").Return(false, ErrPasswordTooWeak)
// When registering a new user
userID, err := s.service.RegisterUser(s.ctx, req)
// Then the registration should fail with an appropriate error
s.Empty(userID)
s.ErrorIs(err, ErrPasswordTooWeak)
}
// Run the test suite
func TestUserServiceSuite(t *testing.T) {
suite.Run(t, new(UserServiceSuite))
}
This BDD-style test suite demonstrates:
- Test suite structure: Using setup and teardown for consistent test environments
- Given-When-Then format: Clear separation of test arrangement, action, and assertion
- Descriptive test names: Tests that document behavior
- Comprehensive scenarios: Testing both happy paths and error cases
- Mock verification: Ensuring all expected interactions occurred
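The suite also leans on a few collaborator interfaces and sentinel errors that aren't shown. A hedged sketch of what they might look like, with method sets inferred from the expectations above (all names are assumptions, and the sketch assumes the errors and context packages are imported):
// Sentinel errors referenced by the tests.
var (
	ErrUserNotFound       = errors.New("user not found")
	ErrEmailAlreadyExists = errors.New("email already exists")
	ErrPasswordTooWeak    = errors.New("password too weak")
)

// Collaborators the mocks stand in for.
type UserRepository interface {
	FindByEmail(ctx context.Context, email string) (*User, error)
	Create(ctx context.Context, user *User) (string, error)
}

type AuthProvider interface {
	ValidatePassword(password string) (bool, error)
	HashPassword(password string) (string, error)
}

type NotificationService interface {
	SendWelcomeMessage(ctx context.Context, email, firstName string) error
}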
Advanced Mocking Strategies
Effective mocking is crucial for isolating the code under test. Here’s an example of advanced mocking techniques:
package database
import (
"context"
"database/sql"
"errors"
"testing"
"time"
"github.com/DATA-DOG/go-sqlmock"
"github.com/jmoiron/sqlx"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// ProductRepository handles database operations for products
type ProductRepository struct {
db *sqlx.DB
}
// Product represents a product in the database
type Product struct {
ID string `db:"id"`
Name string `db:"name"`
Description string `db:"description"`
Price float64 `db:"price"`
CategoryID string `db:"category_id"`
CreatedAt time.Time `db:"created_at"`
UpdatedAt time.Time `db:"updated_at"`
}
// FindByID retrieves a product by its ID
func (r *ProductRepository) FindByID(ctx context.Context, id string) (*Product, error) {
query := `
SELECT id, name, description, price, category_id, created_at, updated_at
FROM products
WHERE id = $1 AND deleted_at IS NULL
`
var product Product
err := r.db.GetContext(ctx, &product, query, id)
if err != nil {
if errors.Is(err, sql.ErrNoRows) {
return nil, ErrProductNotFound
}
return nil, err
}
return &product, nil
}
// TestProductRepositoryFindByID demonstrates advanced database mocking
func TestProductRepositoryFindByID(t *testing.T) {
// Create a new mock database connection
mockDB, mock, err := sqlmock.New()
require.NoError(t, err)
defer mockDB.Close()
// Wrap with sqlx
sqlxDB := sqlx.NewDb(mockDB, "postgres")
// Create repository with mocked DB
repo := &ProductRepository{db: sqlxDB}
// Test case: successful product retrieval
t.Run("successful product retrieval", func(t *testing.T) {
// Arrange
productID := "prod-123"
expectedProduct := &Product{
ID: productID,
Name: "Test Product",
Description: "A test product",
Price: 29.99,
CategoryID: "cat-456",
CreatedAt: time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC),
UpdatedAt: time.Date(2025, 1, 2, 0, 0, 0, 0, time.UTC),
}
// Set up the expected query with precise argument and column matching
rows := sqlmock.NewRows([]string{
"id", "name", "description", "price", "category_id", "created_at", "updated_at",
}).AddRow(
expectedProduct.ID,
expectedProduct.Name,
expectedProduct.Description,
expectedProduct.Price,
expectedProduct.CategoryID,
expectedProduct.CreatedAt,
expectedProduct.UpdatedAt,
)
mock.ExpectQuery("^SELECT (.+) FROM products WHERE id = \\$1 AND deleted_at IS NULL$").
WithArgs(productID).
WillReturnRows(rows)
// Act
ctx := context.Background()
product, err := repo.FindByID(ctx, productID)
// Assert
require.NoError(t, err)
assert.Equal(t, expectedProduct.ID, product.ID)
assert.Equal(t, expectedProduct.Name, product.Name)
assert.Equal(t, expectedProduct.Price, product.Price)
assert.Equal(t, expectedProduct.CategoryID, product.CategoryID)
assert.Equal(t, expectedProduct.CreatedAt, product.CreatedAt)
// Verify all expectations were met
assert.NoError(t, mock.ExpectationsWereMet())
})
// Test case: product not found
t.Run("product not found", func(t *testing.T) {
// Arrange
productID := "non-existent"
mock.ExpectQuery("^SELECT (.+) FROM products WHERE id = \\$1 AND deleted_at IS NULL$").
WithArgs(productID).
WillReturnError(sql.ErrNoRows)
// Act
ctx := context.Background()
product, err := repo.FindByID(ctx, productID)
// Assert
assert.Nil(t, product)
assert.ErrorIs(t, err, ErrProductNotFound)
// Verify all expectations were met
assert.NoError(t, mock.ExpectationsWereMet())
})
// Test case: database error
t.Run("database error", func(t *testing.T) {
// Arrange
productID := "prod-123"
dbError := errors.New("connection reset by peer")
mock.ExpectQuery("^SELECT (.+) FROM products WHERE id = \\$1 AND deleted_at IS NULL$").
WithArgs(productID).
WillReturnError(dbError)
// Act
ctx := context.Background()
product, err := repo.FindByID(ctx, productID)
// Assert
assert.Nil(t, product)
assert.ErrorIs(t, err, dbError)
// Verify all expectations were met
assert.NoError(t, mock.ExpectationsWereMet())
})
// Test case: context cancellation
t.Run("context cancellation", func(t *testing.T) {
// Arrange
productID := "prod-123"
// Create a cancelled context
ctx, cancel := context.WithCancel(context.Background())
cancel() // Cancel immediately
// Set up mock to delay response to ensure context cancellation takes effect
mock.ExpectQuery("^SELECT (.+) FROM products WHERE id = \\$1 AND deleted_at IS NULL$").
WithArgs(productID).
WillDelayFor(10 * time.Millisecond).
WillReturnError(context.Canceled)
// Act
product, err := repo.FindByID(ctx, productID)
// Assert
assert.Nil(t, product)
assert.ErrorIs(t, err, context.Canceled)
// Verify all expectations were met
assert.NoError(t, mock.ExpectationsWereMet())
})
}
This example demonstrates:
- SQL mocking: Using go-sqlmock to simulate database interactions
- Precise query matching: Using regex to match SQL queries
- Multiple test scenarios: Testing success, not found, errors, and cancellation
- Argument verification: Ensuring correct parameters are passed
- Expectations verification: Confirming all expected database calls were made
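The same technique covers write paths. Below is a hedged sketch of testing a Create method with ExpectExec; the repository's Create method and its INSERT statement aren't shown above, so the SQL and argument order here are assumptions:
func TestProductRepositoryCreate(t *testing.T) {
	mockDB, mock, err := sqlmock.New()
	require.NoError(t, err)
	defer mockDB.Close()

	repo := &ProductRepository{db: sqlx.NewDb(mockDB, "postgres")}

	product := &Product{
		ID:         "prod-123",
		Name:       "Test Product",
		Price:      29.99,
		CategoryID: "cat-456",
	}

	// Expect an INSERT and report one affected row.
	mock.ExpectExec("^INSERT INTO products").
		WithArgs(product.ID, product.Name, product.Description, product.Price, product.CategoryID).
		WillReturnResult(sqlmock.NewResult(0, 1))

	err = repo.Create(context.Background(), product)
	require.NoError(t, err)
	assert.NoError(t, mock.ExpectationsWereMet())
}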
Integration and End-to-End Testing
While unit tests are valuable, they don’t verify how components work together. Integration and end-to-end tests fill this gap.
Integration Testing with Test Containers
Test containers allow you to run real dependencies like databases in isolated environments:
package integration
import (
	"context"
"fmt"
"testing"
"time"
"github.com/google/uuid"
"github.com/jmoiron/sqlx"
_ "github.com/lib/pq"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/testcontainers/testcontainers-go"
"github.com/testcontainers/testcontainers-go/wait"
"myapp/internal/repository"
"myapp/internal/models"
)
// PostgresContainer represents a PostgreSQL container for testing
type PostgresContainer struct {
Container testcontainers.Container
URI string
DB *sqlx.DB
}
// SetupPostgresContainer creates a new PostgreSQL container for testing
func SetupPostgresContainer(ctx context.Context) (*PostgresContainer, error) {
// Define container request
req := testcontainers.ContainerRequest{
Image: "postgres:14-alpine",
ExposedPorts: []string{"5432/tcp"},
Env: map[string]string{
"POSTGRES_USER": "testuser",
"POSTGRES_PASSWORD": "testpass",
"POSTGRES_DB": "testdb",
},
WaitingFor: wait.ForLog("database system is ready to accept connections"),
}
// Start container
container, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
ContainerRequest: req,
Started: true,
})
if err != nil {
return nil, fmt.Errorf("failed to start container: %w", err)
}
// Get host and port
host, err := container.Host(ctx)
if err != nil {
return nil, fmt.Errorf("failed to get container host: %w", err)
}
port, err := container.MappedPort(ctx, "5432")
if err != nil {
return nil, fmt.Errorf("failed to get container port: %w", err)
}
// Construct connection URI
uri := fmt.Sprintf("postgres://%s:%s@%s:%s/%s?sslmode=disable",
"testuser", "testpass", host, port.Port(), "testdb")
// Connect to the database
db, err := sqlx.Connect("postgres", uri)
if err != nil {
return nil, fmt.Errorf("failed to connect to database: %w", err)
}
// Set connection pool settings
db.SetMaxOpenConns(5)
db.SetMaxIdleConns(5)
db.SetConnMaxLifetime(5 * time.Minute)
// Create a container wrapper
pgContainer := &PostgresContainer{
Container: container,
URI: uri,
DB: db,
}
// Run migrations
if err := runMigrations(db); err != nil {
return nil, fmt.Errorf("failed to run migrations: %w", err)
}
return pgContainer, nil
}
// runMigrations applies database migrations
func runMigrations(db *sqlx.DB) error {
// Create products table
_, err := db.Exec(`
CREATE TABLE IF NOT EXISTS products (
id VARCHAR(36) PRIMARY KEY,
name VARCHAR(255) NOT NULL,
description TEXT,
price DECIMAL(10, 2) NOT NULL,
category_id VARCHAR(36) NOT NULL,
created_at TIMESTAMP NOT NULL DEFAULT NOW(),
updated_at TIMESTAMP NOT NULL DEFAULT NOW(),
deleted_at TIMESTAMP
)
`)
if err != nil {
return err
}
// Create categories table
_, err = db.Exec(`
CREATE TABLE IF NOT EXISTS categories (
id VARCHAR(36) PRIMARY KEY,
name VARCHAR(255) NOT NULL,
created_at TIMESTAMP NOT NULL DEFAULT NOW(),
updated_at TIMESTAMP NOT NULL DEFAULT NOW()
)
`)
return err
}
// TestProductRepositoryIntegration demonstrates integration testing with a real database
func TestProductRepositoryIntegration(t *testing.T) {
// Skip in short mode
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
// Set up test container
ctx := context.Background()
pgContainer, err := SetupPostgresContainer(ctx)
require.NoError(t, err)
defer func() {
// Clean up
if err := pgContainer.Container.Terminate(ctx); err != nil {
			t.Errorf("Failed to terminate container: %v", err)
}
}()
// Create repository
productRepo := repository.NewProductRepository(pgContainer.DB)
// Test case: create and retrieve a product
t.Run("create and retrieve product", func(t *testing.T) {
// Create a category first
categoryID := uuid.New().String()
_, err := pgContainer.DB.ExecContext(ctx,
"INSERT INTO categories (id, name) VALUES ($1, $2)",
categoryID, "Test Category")
require.NoError(t, err)
// Create a new product
product := &models.Product{
ID: uuid.New().String(),
Name: "Test Product",
Description: "A test product for integration testing",
Price: 99.99,
CategoryID: categoryID,
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
}
// Save the product
err = productRepo.Create(ctx, product)
require.NoError(t, err)
// Retrieve the product
retrieved, err := productRepo.FindByID(ctx, product.ID)
require.NoError(t, err)
require.NotNil(t, retrieved)
// Verify the retrieved product
assert.Equal(t, product.ID, retrieved.ID)
assert.Equal(t, product.Name, retrieved.Name)
assert.Equal(t, product.Description, retrieved.Description)
assert.Equal(t, product.Price, retrieved.Price)
assert.Equal(t, product.CategoryID, retrieved.CategoryID)
})
// Test case: update a product
t.Run("update product", func(t *testing.T) {
// Create a category first
categoryID := uuid.New().String()
_, err := pgContainer.DB.ExecContext(ctx,
"INSERT INTO categories (id, name) VALUES ($1, $2)",
categoryID, "Another Category")
require.NoError(t, err)
// Create a new product
product := &models.Product{
ID: uuid.New().String(),
Name: "Product to Update",
Description: "This product will be updated",
Price: 50.00,
CategoryID: categoryID,
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
}
// Save the product
err = productRepo.Create(ctx, product)
require.NoError(t, err)
// Update the product
product.Name = "Updated Product Name"
product.Price = 75.00
product.UpdatedAt = time.Now()
err = productRepo.Update(ctx, product)
require.NoError(t, err)
// Retrieve the updated product
retrieved, err := productRepo.FindByID(ctx, product.ID)
require.NoError(t, err)
require.NotNil(t, retrieved)
// Verify the updated fields
assert.Equal(t, "Updated Product Name", retrieved.Name)
assert.Equal(t, 75.00, retrieved.Price)
})
// Test case: delete a product
t.Run("delete product", func(t *testing.T) {
// Create a category first
categoryID := uuid.New().String()
_, err := pgContainer.DB.ExecContext(ctx,
"INSERT INTO categories (id, name) VALUES ($1, $2)",
categoryID, "Delete Test Category")
require.NoError(t, err)
// Create a new product
product := &models.Product{
ID: uuid.New().String(),
Name: "Product to Delete",
Description: "This product will be deleted",
Price: 25.00,
CategoryID: categoryID,
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
}
// Save the product
err = productRepo.Create(ctx, product)
require.NoError(t, err)
// Delete the product
err = productRepo.Delete(ctx, product.ID)
require.NoError(t, err)
// Try to retrieve the deleted product
retrieved, err := productRepo.FindByID(ctx, product.ID)
assert.Error(t, err)
assert.Nil(t, retrieved)
assert.ErrorIs(t, err, repository.ErrProductNotFound)
})
}
This integration test demonstrates:
- Test containers: Using Docker containers for real database testing
- Database setup: Creating schema and initial data
- Full CRUD operations: Testing create, read, update, and delete
- Proper cleanup: Ensuring containers are terminated after tests
- Test isolation: Each test case operates independently
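Starting a fresh container in every test function gets slow as the suite grows. A common refinement is to start one container per package in TestMain and share it; here is a sketch, assuming the SetupPostgresContainer helper above plus an os import:
var testContainer *PostgresContainer

func TestMain(m *testing.M) {
	ctx := context.Background()

	// Start a single PostgreSQL container for the whole package.
	pg, err := SetupPostgresContainer(ctx)
	if err != nil {
		fmt.Fprintf(os.Stderr, "failed to set up test container: %v\n", err)
		os.Exit(1)
	}
	testContainer = pg

	code := m.Run()

	// Always clean up, even when tests fail.
	if err := pg.Container.Terminate(ctx); err != nil {
		fmt.Fprintf(os.Stderr, "failed to terminate container: %v\n", err)
	}
	os.Exit(code)
}
Individual tests can then reach the shared database through testContainer.DB instead of provisioning their own container.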
API Testing with HTTP Handlers
Testing HTTP handlers ensures your API behaves correctly:
package api
import (
	"bytes"
	"context"
	"encoding/json"
	"errors"
"net/http"
"net/http/httptest"
"testing"
"time"
"github.com/gin-gonic/gin"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
"myapp/internal/models"
"myapp/internal/service"
)
// MockProductService is a mock implementation of the ProductService interface
type MockProductService struct {
mock.Mock
}
// GetProduct mocks the GetProduct method
func (m *MockProductService) GetProduct(ctx context.Context, id string) (*models.Product, error) {
args := m.Called(ctx, id)
if args.Get(0) == nil {
return nil, args.Error(1)
}
return args.Get(0).(*models.Product), args.Error(1)
}
// ListProducts mocks the ListProducts method
func (m *MockProductService) ListProducts(ctx context.Context, limit, offset int) ([]*models.Product, error) {
args := m.Called(ctx, limit, offset)
if args.Get(0) == nil {
return nil, args.Error(1)
}
return args.Get(0).([]*models.Product), args.Error(1)
}
// CreateProduct mocks the CreateProduct method
func (m *MockProductService) CreateProduct(ctx context.Context, product *models.Product) error {
args := m.Called(ctx, product)
return args.Error(0)
}
// Setup initializes the test environment
func Setup() *gin.Engine {
// Set Gin to test mode
gin.SetMode(gin.TestMode)
// Create a new Gin engine
r := gin.New()
r.Use(gin.Recovery())
return r
}
// TestGetProductHandler tests the GetProduct HTTP handler
func TestGetProductHandler(t *testing.T) {
// Setup
router := Setup()
mockService := new(MockProductService)
// Create handler with mock service
handler := NewProductHandler(mockService)
// Register routes
router.GET("/products/:id", handler.GetProduct)
// Test cases
testCases := []struct {
name string
productID string
setupMock func()
expectedStatus int
expectedBody map[string]interface{}
}{
{
name: "successful product retrieval",
productID: "prod-123",
setupMock: func() {
product := &models.Product{
ID: "prod-123",
Name: "Test Product",
Description: "A test product",
Price: 29.99,
CategoryID: "cat-456",
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
}
mockService.On("GetProduct", mock.Anything, "prod-123").Return(product, nil)
},
expectedStatus: http.StatusOK,
expectedBody: map[string]interface{}{
"id": "prod-123",
"name": "Test Product",
"description": "A test product",
"price": 29.99,
"category_id": "cat-456",
},
},
{
name: "product not found",
productID: "non-existent",
setupMock: func() {
mockService.On("GetProduct", mock.Anything, "non-existent").Return(nil, service.ErrProductNotFound)
},
expectedStatus: http.StatusNotFound,
expectedBody: map[string]interface{}{
"error": "Product not found",
},
},
{
name: "internal server error",
productID: "error-id",
setupMock: func() {
mockService.On("GetProduct", mock.Anything, "error-id").Return(nil, errors.New("database connection failed"))
},
expectedStatus: http.StatusInternalServerError,
expectedBody: map[string]interface{}{
"error": "Internal server error",
},
},
}
// Run test cases
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Setup mock expectations
tc.setupMock()
// Create request
req, err := http.NewRequest(http.MethodGet, "/products/"+tc.productID, nil)
require.NoError(t, err)
// Create response recorder
w := httptest.NewRecorder()
// Serve the request
router.ServeHTTP(w, req)
// Check status code
assert.Equal(t, tc.expectedStatus, w.Code)
// Parse response body
var response map[string]interface{}
err = json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
// Check response body
for key, expectedValue := range tc.expectedBody {
assert.Equal(t, expectedValue, response[key])
}
// Verify mock expectations
mockService.AssertExpectations(t)
})
}
}
// TestCreateProductHandler tests the CreateProduct HTTP handler
func TestCreateProductHandler(t *testing.T) {
// Setup
router := Setup()
mockService := new(MockProductService)
// Create handler with mock service
handler := NewProductHandler(mockService)
// Register routes
router.POST("/products", handler.CreateProduct)
// Test cases
testCases := []struct {
name string
requestBody map[string]interface{}
setupMock func(map[string]interface{})
expectedStatus int
expectedBody map[string]interface{}
}{
{
name: "successful product creation",
requestBody: map[string]interface{}{
"name": "New Product",
"description": "A new product",
"price": 39.99,
"category_id": "cat-789",
},
setupMock: func(body map[string]interface{}) {
mockService.On("CreateProduct", mock.Anything, mock.MatchedBy(func(p *models.Product) bool {
return p.Name == body["name"] &&
p.Description == body["description"] &&
p.Price == body["price"] &&
p.CategoryID == body["category_id"]
})).Return(nil)
},
expectedStatus: http.StatusCreated,
expectedBody: map[string]interface{}{
"message": "Product created successfully",
},
},
{
name: "invalid request body",
requestBody: map[string]interface{}{
"name": "",
"description": "Missing name",
"price": 39.99,
"category_id": "cat-789",
},
setupMock: func(body map[string]interface{}) {
// No mock setup needed as validation should fail before service is called
},
expectedStatus: http.StatusBadRequest,
expectedBody: map[string]interface{}{
"error": "Invalid request body",
},
},
{
name: "service error",
requestBody: map[string]interface{}{
"name": "Error Product",
"description": "A product that causes an error",
"price": 39.99,
"category_id": "cat-789",
},
setupMock: func(body map[string]interface{}) {
mockService.On("CreateProduct", mock.Anything, mock.Anything).Return(errors.New("database error"))
},
expectedStatus: http.StatusInternalServerError,
expectedBody: map[string]interface{}{
"error": "Internal server error",
},
},
}
// Run test cases
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Setup mock expectations
tc.setupMock(tc.requestBody)
// Create request body
jsonBody, err := json.Marshal(tc.requestBody)
require.NoError(t, err)
// Create request
req, err := http.NewRequest(http.MethodPost, "/products", bytes.NewBuffer(jsonBody))
require.NoError(t, err)
req.Header.Set("Content-Type", "application/json")
// Create response recorder
w := httptest.NewRecorder()
// Serve the request
router.ServeHTTP(w, req)
// Check status code
assert.Equal(t, tc.expectedStatus, w.Code)
// Parse response body
var response map[string]interface{}
err = json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
// Check response body
for key, expectedValue := range tc.expectedBody {
assert.Equal(t, expectedValue, response[key])
}
// Verify mock expectations
mockService.AssertExpectations(t)
})
}
}
This API testing approach demonstrates:
- HTTP handler testing: Using httptest to simulate HTTP requests
- Response validation: Checking status codes and response bodies
- Multiple scenarios: Testing success, validation errors, and server errors
- Mock services: Isolating handlers from business logic
- Matcher functions: Using mock.MatchedBy for flexible argument matching
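These tests implicitly define the handler's contract: map service errors onto HTTP status codes and JSON error bodies. A minimal sketch of a GetProduct handler consistent with the expected responses above (the ProductService interface and exact error messages are assumptions, not the project's actual implementation):
// ProductHandler exposes product endpoints backed by a ProductService.
type ProductHandler struct {
	service ProductService
}

func NewProductHandler(s ProductService) *ProductHandler {
	return &ProductHandler{service: s}
}

// GetProduct translates service results into the responses the tests expect.
func (h *ProductHandler) GetProduct(c *gin.Context) {
	id := c.Param("id")

	product, err := h.service.GetProduct(c.Request.Context(), id)
	switch {
	case errors.Is(err, service.ErrProductNotFound):
		c.JSON(http.StatusNotFound, gin.H{"error": "Product not found"})
	case err != nil:
		c.JSON(http.StatusInternalServerError, gin.H{"error": "Internal server error"})
	default:
		c.JSON(http.StatusOK, product)
	}
}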
End-to-End Testing with API Clients
For true end-to-end testing, we can test the entire API as a client would:
package e2e
import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"os"
	"testing"
	"time"
	"github.com/gin-gonic/gin"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
	"myapp/internal/api"
	"myapp/internal/config"
	"myapp/internal/db"
	"myapp/internal/repository"
	"myapp/internal/service"
)
// TestEnvironment holds all the components needed for E2E testing
type TestEnvironment struct {
Router *gin.Engine
Server *httptest.Server
Config *config.Config
DB *db.Database
PgContainer testcontainers.Container
ProductRepo *repository.ProductRepository
ProductSvc *service.ProductService
CategoryRepo *repository.CategoryRepository
CategorySvc *service.CategoryService
}
// SetupTestEnvironment creates a complete test environment with real dependencies
func SetupTestEnvironment(t *testing.T) *TestEnvironment {
// Skip in short mode
if testing.Short() {
t.Skip("Skipping E2E test in short mode")
}
// Create context
ctx := context.Background()
// Start PostgreSQL container
pgContainer, pgURI := startPostgresContainer(t, ctx)
// Create configuration
cfg := &config.Config{
Database: config.DatabaseConfig{
URI: pgURI,
},
Server: config.ServerConfig{
Port: 8080,
},
}
// Initialize database
database, err := db.NewDatabase(cfg)
require.NoError(t, err)
// Run migrations
err = database.Migrate()
require.NoError(t, err)
// Create repositories
productRepo := repository.NewProductRepository(database.DB)
categoryRepo := repository.NewCategoryRepository(database.DB)
// Create services
productSvc := service.NewProductService(productRepo, categoryRepo)
categorySvc := service.NewCategoryService(categoryRepo)
// Create API handlers
productHandler := api.NewProductHandler(productSvc)
categoryHandler := api.NewCategoryHandler(categorySvc)
// Set up router
gin.SetMode(gin.TestMode)
router := gin.New()
router.Use(gin.Recovery())
// Register routes
api.RegisterRoutes(router, productHandler, categoryHandler)
// Create test server
server := httptest.NewServer(router)
return &TestEnvironment{
Router: router,
Server: server,
Config: cfg,
DB: database,
PgContainer: pgContainer,
ProductRepo: productRepo,
ProductSvc: productSvc,
CategoryRepo: categoryRepo,
CategorySvc: categorySvc,
}
}
// CleanupTestEnvironment tears down the test environment
func CleanupTestEnvironment(env *TestEnvironment) {
	// Close test server
	if env.Server != nil {
		env.Server.Close()
	}
	// Close database connection
	if env.DB != nil {
		env.DB.Close()
	}
	// Stop container
	if env.PgContainer != nil {
		ctx := context.Background()
		if err := env.PgContainer.Terminate(ctx); err != nil {
			fmt.Fprintf(os.Stderr, "Failed to terminate container: %v\n", err)
		}
	}
}
// startPostgresContainer starts a PostgreSQL container for testing
func startPostgresContainer(t *testing.T, ctx context.Context) (testcontainers.Container, string) {
// Define container request
req := testcontainers.ContainerRequest{
Image: "postgres:14-alpine",
ExposedPorts: []string{"5432/tcp"},
Env: map[string]string{
"POSTGRES_USER": "testuser",
"POSTGRES_PASSWORD": "testpass",
"POSTGRES_DB": "testdb",
},
WaitingFor: wait.ForLog("database system is ready to accept connections"),
}
// Start container
container, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
ContainerRequest: req,
Started: true,
})
require.NoError(t, err)
// Get host and port
host, err := container.Host(ctx)
require.NoError(t, err)
port, err := container.MappedPort(ctx, "5432")
require.NoError(t, err)
// Construct connection URI
uri := fmt.Sprintf("postgres://%s:%s@%s:%s/%s?sslmode=disable",
"testuser", "testpass", host, port.Port(), "testdb")
return container, uri
}
// TestProductAPI performs end-to-end testing of the product API
func TestProductAPI(t *testing.T) {
// Set up test environment
env := SetupTestEnvironment(t)
defer CleanupTestEnvironment(env)
// Create a client for making requests
client := &http.Client{
Timeout: 5 * time.Second,
}
// Test case: create a category
t.Run("create category", func(t *testing.T) {
// Create request body
categoryData := map[string]interface{}{
"name": "Test Category",
}
jsonData, err := json.Marshal(categoryData)
require.NoError(t, err)
// Create request
req, err := http.NewRequest(http.MethodPost,
env.Server.URL+"/api/categories",
bytes.NewBuffer(jsonData))
require.NoError(t, err)
req.Header.Set("Content-Type", "application/json")
// Send request
resp, err := client.Do(req)
require.NoError(t, err)
defer resp.Body.Close()
// Check status code
assert.Equal(t, http.StatusCreated, resp.StatusCode)
// Parse response
var response map[string]interface{}
err = json.NewDecoder(resp.Body).Decode(&response)
require.NoError(t, err)
// Verify response
assert.Contains(t, response, "id")
assert.Contains(t, response, "message")
assert.Equal(t, "Category created successfully", response["message"])
// Store category ID for later use
categoryID := response["id"].(string)
assert.NotEmpty(t, categoryID)
// Test case: create a product in that category
t.Run("create product", func(t *testing.T) {
// Create request body
productData := map[string]interface{}{
"name": "Test Product",
"description": "A test product for E2E testing",
"price": 99.99,
"category_id": categoryID,
}
jsonData, err := json.Marshal(productData)
require.NoError(t, err)
// Create request
req, err := http.NewRequest(http.MethodPost,
env.Server.URL+"/api/products",
bytes.NewBuffer(jsonData))
require.NoError(t, err)
req.Header.Set("Content-Type", "application/json")
// Send request
resp, err := client.Do(req)
require.NoError(t, err)
defer resp.Body.Close()
// Check status code
assert.Equal(t, http.StatusCreated, resp.StatusCode)
// Parse response
var response map[string]interface{}
err = json.NewDecoder(resp.Body).Decode(&response)
require.NoError(t, err)
// Verify response
assert.Contains(t, response, "id")
assert.Contains(t, response, "message")
assert.Equal(t, "Product created successfully", response["message"])
// Store product ID for later use
productID := response["id"].(string)
assert.NotEmpty(t, productID)
// Test case: get the created product
t.Run("get product", func(t *testing.T) {
// Create request
req, err := http.NewRequest(http.MethodGet,
env.Server.URL+"/api/products/"+productID, nil)
require.NoError(t, err)
// Send request
resp, err := client.Do(req)
require.NoError(t, err)
defer resp.Body.Close()
// Check status code
assert.Equal(t, http.StatusOK, resp.StatusCode)
// Parse response
var product map[string]interface{}
err = json.NewDecoder(resp.Body).Decode(&product)
require.NoError(t, err)
// Verify product data
assert.Equal(t, productID, product["id"])
assert.Equal(t, "Test Product", product["name"])
assert.Equal(t, "A test product for E2E testing", product["description"])
assert.Equal(t, 99.99, product["price"])
assert.Equal(t, categoryID, product["category_id"])
})
// Test case: list all products
t.Run("list products", func(t *testing.T) {
// Create request
req, err := http.NewRequest(http.MethodGet,
env.Server.URL+"/api/products", nil)
require.NoError(t, err)
// Send request
resp, err := client.Do(req)
require.NoError(t, err)
defer resp.Body.Close()
// Check status code
assert.Equal(t, http.StatusOK, resp.StatusCode)
// Parse response
var response map[string]interface{}
err = json.NewDecoder(resp.Body).Decode(&response)
require.NoError(t, err)
// Verify response
assert.Contains(t, response, "products")
products := response["products"].([]interface{})
assert.GreaterOrEqual(t, len(products), 1)
// Find our product in the list
found := false
for _, p := range products {
product := p.(map[string]interface{})
if product["id"] == productID {
found = true
assert.Equal(t, "Test Product", product["name"])
break
}
}
assert.True(t, found, "Created product not found in product list")
})
// Test case: update the product
t.Run("update product", func(t *testing.T) {
// Create request body
updateData := map[string]interface{}{
"name": "Updated Product Name",
"description": "Updated product description",
"price": 129.99,
"category_id": categoryID,
}
jsonData, err := json.Marshal(updateData)
require.NoError(t, err)
// Create request
req, err := http.NewRequest(http.MethodPut,
env.Server.URL+"/api/products/"+productID,
bytes.NewBuffer(jsonData))
require.NoError(t, err)
req.Header.Set("Content-Type", "application/json")
// Send request
resp, err := client.Do(req)
require.NoError(t, err)
defer resp.Body.Close()
// Check status code
assert.Equal(t, http.StatusOK, resp.StatusCode)
// Verify the update with a GET request
req, err = http.NewRequest(http.MethodGet,
env.Server.URL+"/api/products/"+productID, nil)
require.NoError(t, err)
resp, err = client.Do(req)
require.NoError(t, err)
defer resp.Body.Close()
assert.Equal(t, http.StatusOK, resp.StatusCode)
var product map[string]interface{}
err = json.NewDecoder(resp.Body).Decode(&product)
require.NoError(t, err)
assert.Equal(t, "Updated Product Name", product["name"])
assert.Equal(t, "Updated product description", product["description"])
assert.Equal(t, 129.99, product["price"])
})
// Test case: delete the product
t.Run("delete product", func(t *testing.T) {
// Create request
req, err := http.NewRequest(http.MethodDelete,
env.Server.URL+"/api/products/"+productID, nil)
require.NoError(t, err)
// Send request
resp, err := client.Do(req)
require.NoError(t, err)
defer resp.Body.Close()
// Check status code
assert.Equal(t, http.StatusOK, resp.StatusCode)
// Verify deletion with a GET request
req, err = http.NewRequest(http.MethodGet,
env.Server.URL+"/api/products/"+productID, nil)
require.NoError(t, err)
resp, err = client.Do(req)
require.NoError(t, err)
defer resp.Body.Close()
assert.Equal(t, http.StatusNotFound, resp.StatusCode)
})
})
})
}
This end-to-end test demonstrates:
- Complete environment setup: Creating a full test environment with real dependencies
- HTTP client testing: Testing the API as a client would
- Test flow: Creating, reading, updating, and deleting resources
- Nested tests: Building a logical test flow with dependencies
- Proper cleanup: Ensuring all resources are released after testing
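The nested subtests repeat a fair amount of marshal/send/decode boilerplate. A small helper keeps E2E flows readable; here is a sketch (the helper name and signature are assumptions, and it additionally needs the io package):
// doJSON sends a JSON request to the test server, optionally decodes the JSON
// response into out, and returns the response status code.
func doJSON(t *testing.T, client *http.Client, method, url string, body, out interface{}) int {
	t.Helper()

	var reader io.Reader
	if body != nil {
		data, err := json.Marshal(body)
		require.NoError(t, err)
		reader = bytes.NewReader(data)
	}

	req, err := http.NewRequest(method, url, reader)
	require.NoError(t, err)
	req.Header.Set("Content-Type", "application/json")

	resp, err := client.Do(req)
	require.NoError(t, err)
	defer resp.Body.Close()

	if out != nil {
		require.NoError(t, json.NewDecoder(resp.Body).Decode(out))
	}
	return resp.StatusCode
}
With a helper like this, the "get product" step collapses to a single call plus assertions on the decoded map.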
Performance Benchmarking and Profiling
Performance testing is crucial for Go applications, especially those with high throughput requirements.
Writing Effective Benchmarks
Go’s testing package provides excellent support for benchmarking:
package performance
import (
"bytes"
"crypto/sha256"
"encoding/json"
"fmt"
"sync"
"testing"
)
// Item represents a data structure to benchmark
type Item struct {
ID string `json:"id"`
Name string `json:"name"`
Tags []string `json:"tags"`
Count int `json:"count"`
Value float64 `json:"value"`
IsEnabled bool `json:"is_enabled"`
}
// generateItem creates a test item
func generateItem(id string) Item {
return Item{
ID: id,
Name: "Test Item " + id,
Tags: []string{"tag1", "tag2", "tag3", "tag4", "tag5"},
Count: 42,
Value: 99.99,
IsEnabled: true,
}
}
// BenchmarkJSONMarshal benchmarks JSON marshaling performance
func BenchmarkJSONMarshal(b *testing.B) {
item := generateItem("test-1")
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := json.Marshal(item)
if err != nil {
b.Fatal(err)
}
}
}
// BenchmarkJSONMarshalParallel benchmarks JSON marshaling in parallel
func BenchmarkJSONMarshalParallel(b *testing.B) {
item := generateItem("test-1")
b.ResetTimer()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
_, err := json.Marshal(item)
if err != nil {
b.Fatal(err)
}
}
})
}
// ItemCache is a simple cache implementation to benchmark
type ItemCache struct {
items map[string]Item
mu sync.RWMutex
}
// NewItemCache creates a new item cache
func NewItemCache() *ItemCache {
return &ItemCache{
items: make(map[string]Item),
}
}
// Get retrieves an item from the cache
func (c *ItemCache) Get(id string) (Item, bool) {
c.mu.RLock()
defer c.mu.RUnlock()
item, ok := c.items[id]
return item, ok
}
// Set adds an item to the cache
func (c *ItemCache) Set(id string, item Item) {
c.mu.Lock()
defer c.mu.Unlock()
c.items[id] = item
}
// BenchmarkCacheGet benchmarks cache retrieval
func BenchmarkCacheGet(b *testing.B) {
// Setup
cache := NewItemCache()
for i := 0; i < 1000; i++ {
id := fmt.Sprintf("item-%d", i)
cache.Set(id, generateItem(id))
}
// Benchmark different cache sizes
benchmarks := []struct {
name string
cacheSize int
}{
{"Small_10", 10},
{"Medium_100", 100},
{"Large_1000", 1000},
}
for _, bm := range benchmarks {
b.Run(bm.name, func(b *testing.B) {
// Create cache with specified size
cache := NewItemCache()
for i := 0; i < bm.cacheSize; i++ {
id := fmt.Sprintf("item-%d", i)
cache.Set(id, generateItem(id))
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
// Get random item
id := fmt.Sprintf("item-%d", i%bm.cacheSize)
_, found := cache.Get(id)
if !found {
b.Fatalf("Item %s not found", id)
}
}
})
}
}
// BenchmarkCacheGetParallel benchmarks parallel cache retrieval
func BenchmarkCacheGetParallel(b *testing.B) {
// Setup
cache := NewItemCache()
for i := 0; i < 1000; i++ {
id := fmt.Sprintf("item-%d", i)
cache.Set(id, generateItem(id))
}
b.ResetTimer()
b.RunParallel(func(pb *testing.PB) {
i := 0
for pb.Next() {
id := fmt.Sprintf("item-%d", i%1000)
_, found := cache.Get(id)
if !found {
b.Fatalf("Item %s not found", id)
}
i++
}
})
}
// HashItem hashes an item using SHA-256
func HashItem(item Item) []byte {
data, _ := json.Marshal(item)
hash := sha256.Sum256(data)
return hash[:]
}
// BenchmarkHashingComparison compares different hashing strategies
func BenchmarkHashingComparison(b *testing.B) {
item := generateItem("test-1")
b.Run("JSON_Marshal_Then_Hash", func(b *testing.B) {
for i := 0; i < b.N; i++ {
data, _ := json.Marshal(item)
hash := sha256.Sum256(data)
_ = hash
}
})
b.Run("Direct_Field_Concatenation", func(b *testing.B) {
for i := 0; i < b.N; i++ {
var buf bytes.Buffer
buf.WriteString(item.ID)
buf.WriteString(item.Name)
for _, tag := range item.Tags {
buf.WriteString(tag)
}
buf.WriteString(fmt.Sprintf("%d", item.Count))
buf.WriteString(fmt.Sprintf("%f", item.Value))
buf.WriteString(fmt.Sprintf("%t", item.IsEnabled))
hash := sha256.Sum256(buf.Bytes())
_ = hash
}
})
}
// BenchmarkWithMemoryTracking demonstrates memory allocation tracking
func BenchmarkWithMemoryTracking(b *testing.B) {
// Run with: go test -bench=BenchmarkWithMemoryTracking -benchmem
b.Run("WithPreallocation", func(b *testing.B) {
for i := 0; i < b.N; i++ {
// Preallocate slice with capacity
data := make([]Item, 0, 1000)
for j := 0; j < 1000; j++ {
id := fmt.Sprintf("item-%d", j)
data = append(data, generateItem(id))
}
_ = data
}
})
b.Run("WithoutPreallocation", func(b *testing.B) {
for i := 0; i < b.N; i++ {
// No preallocation
var data []Item
for j := 0; j < 1000; j++ {
id := fmt.Sprintf("item-%d", j)
data = append(data, generateItem(id))
}
_ = data
}
})
}
To run these benchmarks:
# Run all benchmarks
go test -bench=. ./performance
# Run specific benchmark
go test -bench=BenchmarkJSONMarshal ./performance
# Run benchmarks with memory allocation statistics
go test -bench=. -benchmem ./performance
# Run benchmarks with more iterations for statistical significance
go test -bench=. -benchtime=5s ./performance
# Compare benchmarks before and after changes
go test -bench=. -benchmem ./performance > before.txt
# Make changes
go test -bench=. -benchmem ./performance > after.txt
benchstat before.txt after.txt
These benchmarks demonstrate:
- Basic benchmarking: Using testing.B to measure performance
- Parallel benchmarks: Testing concurrent performance with b.RunParallel
- Sub-benchmarks: Using b.Run to organize related benchmarks
- Memory tracking: Measuring allocations with -benchmem
- Comparison benchmarks: Comparing different implementations
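One pitfall to keep in mind: if a benchmark never uses the value it computes, the compiler may optimize part of the work away and the numbers will look suspiciously good. A common defence is writing the result to a package-level sink variable; a sketch building on the JSON example above:
// sink keeps benchmark results reachable so the compiler cannot
// eliminate the work being measured.
var sink []byte

func BenchmarkJSONMarshalSink(b *testing.B) {
	item := generateItem("test-1")
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		data, err := json.Marshal(item)
		if err != nil {
			b.Fatal(err)
		}
		sink = data
	}
}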
CPU and Memory Profiling
Profiling helps identify performance bottlenecks in your code:
package main
import (
"flag"
"fmt"
"log"
"os"
"runtime"
"runtime/pprof"
"sync"
"time"
"crypto/sha256"
)
var cpuprofile = flag.String("cpuprofile", "", "write cpu profile to file")
var memprofile = flag.String("memprofile", "", "write memory profile to file")
// Worker represents a task processor
type Worker struct {
ID int
Tasks chan Task
Results chan Result
QuitChan chan bool
wg *sync.WaitGroup
}
// Task represents a unit of work
type Task struct {
ID int
Payload string
Strength int // Computational intensity
}
// Result represents the outcome of processing a task
type Result struct {
TaskID int
WorkerID int
Output string
TimeNanos int64
}
// NewWorker creates a new worker
func NewWorker(id int, tasks chan Task, results chan Result, wg *sync.WaitGroup) *Worker {
return &Worker{
ID: id,
Tasks: tasks,
Results: results,
QuitChan: make(chan bool),
wg: wg,
}
}
// Start begins the worker's processing loop
func (w *Worker) Start() {
go func() {
defer w.wg.Done()
for {
select {
			case task, ok := <-w.Tasks:
				if !ok {
					// Task channel closed: all work has been dispatched,
					// so exit instead of spinning on zero-value tasks.
					return
				}
				// Process the task
				result := w.processTask(task)
				w.Results <- result
case <-w.QuitChan:
return
}
}
}()
}
// Stop signals the worker to stop processing
func (w *Worker) Stop() {
go func() {
w.QuitChan <- true
}()
}
// processTask handles the actual work
func (w *Worker) processTask(task Task) Result {
// Simulate CPU-intensive work
start := time.Now()
// This is our "hot" function that will show up in CPU profiles
output := performComputation(task.Payload, task.Strength)
elapsed := time.Since(start)
return Result{
TaskID: task.ID,
WorkerID: w.ID,
Output: output,
TimeNanos: elapsed.Nanoseconds(),
}
}
// performComputation is a CPU-intensive function
func performComputation(input string, strength int) string {
// Create a large slice to show up in memory profiles
data := make([]byte, 0, strength*1000)
// Perform some CPU-intensive work
for i := 0; i < strength; i++ {
h := sha256.New()
h.Write([]byte(input))
hash := h.Sum(nil)
data = append(data, hash...)
input = fmt.Sprintf("%x", hash)
}
return fmt.Sprintf("%x", sha256.Sum256(data))
}
func main() {
flag.Parse()
// CPU profiling
if *cpuprofile != "" {
f, err := os.Create(*cpuprofile)
if err != nil {
log.Fatal("could not create CPU profile: ", err)
}
defer f.Close()
if err := pprof.StartCPUProfile(f); err != nil {
log.Fatal("could not start CPU profile: ", err)
}
defer pprof.StopCPUProfile()
}
// Run the workload
runWorkload()
// Memory profiling
if *memprofile != "" {
f, err := os.Create(*memprofile)
if err != nil {
log.Fatal("could not create memory profile: ", err)
}
defer f.Close()
runtime.GC() // Get up-to-date statistics
if err := pprof.WriteHeapProfile(f); err != nil {
log.Fatal("could not write memory profile: ", err)
}
}
}
func runWorkload() {
numWorkers := runtime.NumCPU()
numTasks := 100
// Create channels
tasks := make(chan Task, numTasks)
results := make(chan Result, numTasks)
// Create worker pool
var wg sync.WaitGroup
wg.Add(numWorkers)
workers := make([]*Worker, numWorkers)
for i := 0; i < numWorkers; i++ {
workers[i] = NewWorker(i, tasks, results, &wg)
workers[i].Start()
}
// Generate tasks
go func() {
for i := 0; i < numTasks; i++ {
tasks <- Task{
ID: i,
Payload: fmt.Sprintf("task-%d", i),
Strength: i % 10, // Vary computational intensity
}
}
close(tasks)
}()
// Collect results
go func() {
for i := 0; i < numTasks; i++ {
result := <-results
fmt.Printf("Task %d completed by Worker %d in %d ns\n",
result.TaskID, result.WorkerID, result.TimeNanos)
}
}()
// Wait for all workers to finish
wg.Wait()
}
To run with profiling:
# CPU profiling
go build -o app
./app -cpuprofile=cpu.prof
# Memory profiling
./app -memprofile=mem.prof
# Analyze profiles
go tool pprof -http=:8080 cpu.prof
go tool pprof -http=:8080 mem.prof
This profiling example demonstrates:
- CPU profiling: Capturing CPU usage patterns
- Memory profiling: Tracking heap allocations
- Profile visualization: Using pprof’s web interface
- Hotspot identification: Finding performance bottlenecks
- Workload simulation: Creating realistic test scenarios
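For long-running services, the same profiles can be exposed over HTTP with net/http/pprof instead of being written to files at exit; go test also accepts -cpuprofile and -memprofile directly when profiling benchmarks. A minimal sketch of the HTTP approach (the localhost:6060 address is just a convention):
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
	// Serve profiling endpoints on a separate, non-public port.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... start the real application here ...
	select {}
}
Profiles can then be fetched on demand, for example with go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30 for CPU or .../debug/pprof/heap for memory.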
Benchmarking HTTP Handlers
For web services, benchmarking HTTP handlers is crucial:
package api
import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"testing"
	"time"
	"github.com/gin-gonic/gin"
	"github.com/stretchr/testify/mock"
	"myapp/internal/models"
)
// BenchmarkProductListHandler benchmarks the product listing endpoint
func BenchmarkProductListHandler(b *testing.B) {
	// Setup: each sub-benchmark below builds its own mock, handler, and router.
	gin.SetMode(gin.ReleaseMode) // Disable debug mode for benchmarking
// Benchmark with different dataset sizes
benchmarks := []struct {
name string
numItems int
setupMock func(*MockProductService, int)
}{
{
name: "small_10_items",
numItems: 10,
setupMock: func(m *MockProductService, n int) {
products := generateTestProducts(n)
m.On("ListProducts", mock.Anything, 100, 0).Return(products, nil)
},
},
{
name: "medium_100_items",
numItems: 100,
setupMock: func(m *MockProductService, n int) {
products := generateTestProducts(n)
m.On("ListProducts", mock.Anything, 100, 0).Return(products, nil)
},
},
{
name: "large_1000_items",
numItems: 1000,
setupMock: func(m *MockProductService, n int) {
products := generateTestProducts(n)
m.On("ListProducts", mock.Anything, 1000, 0).Return(products, nil)
},
},
}
for _, bm := range benchmarks {
b.Run(bm.name, func(b *testing.B) {
// Setup mock for this benchmark
mockService := new(MockProductService)
bm.setupMock(mockService, bm.numItems)
// Create handler with this mock
handler := NewProductHandler(mockService)
router := gin.New()
router.GET("/products", handler.ListProducts)
// Create request
req, _ := http.NewRequest(http.MethodGet, "/products", nil)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
// Create a response recorder for each iteration
w := httptest.NewRecorder()
// Serve the request
router.ServeHTTP(w, req)
// Verify response code (but don't parse body in benchmark)
if w.Code != http.StatusOK {
b.Fatalf("Expected status code 200, got %d", w.Code)
}
}
})
}
}
// BenchmarkProductCreateHandler benchmarks the product creation endpoint
func BenchmarkProductCreateHandler(b *testing.B) {
// Setup
gin.SetMode(gin.ReleaseMode) // Disable debug mode for benchmarking
// Prepare test data
product := &models.Product{
Name: "Test Product",
Description: "A test product for benchmarking",
Price: 99.99,
CategoryID: "cat-123",
}
// Serialize once outside the benchmark loop
jsonData, _ := json.Marshal(product)
// Benchmark with different response scenarios
benchmarks := []struct {
name string
setupMock func(*MockProductService)
delay time.Duration // Simulate processing time
}{
{
name: "fast_response",
setupMock: func(m *MockProductService) {
m.On("CreateProduct", mock.Anything, mock.Anything).Return(nil)
},
delay: 0,
},
{
name: "medium_response_time",
setupMock: func(m *MockProductService) {
m.On("CreateProduct", mock.Anything, mock.Anything).
Run(func(args mock.Arguments) {
time.Sleep(10 * time.Millisecond)
}).
Return(nil)
},
delay: 10 * time.Millisecond,
},
{
name: "slow_response_time",
setupMock: func(m *MockProductService) {
m.On("CreateProduct", mock.Anything, mock.Anything).
Run(func(args mock.Arguments) {
time.Sleep(50 * time.Millisecond)
}).
Return(nil)
},
delay: 50 * time.Millisecond,
},
}
for _, bm := range benchmarks {
b.Run(bm.name, func(b *testing.B) {
// Setup mock for this benchmark
mockService := new(MockProductService)
bm.setupMock(mockService)
// Create handler with this mock
handler := NewProductHandler(mockService)
router := gin.New()
router.POST("/products", handler.CreateProduct)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
// Create a new request for each iteration
req, _ := http.NewRequest(http.MethodPost, "/products", bytes.NewBuffer(jsonData))
req.Header.Set("Content-Type", "application/json")
// Create a response recorder
w := httptest.NewRecorder()
// Serve the request
router.ServeHTTP(w, req)
// Verify response code
if w.Code != http.StatusCreated {
b.Fatalf("Expected status code 201, got %d", w.Code)
}
}
})
}
}
// generateTestProducts creates a slice of test products
func generateTestProducts(n int) []*models.Product {
products := make([]*models.Product, n)
now := time.Now()
for i := 0; i < n; i++ {
products[i] = &models.Product{
ID: fmt.Sprintf("prod-%d", i),
Name: fmt.Sprintf("Product %d", i),
Description: fmt.Sprintf("Description for product %d", i),
Price: float64(10 + i%90),
CategoryID: fmt.Sprintf("cat-%d", i%5),
CreatedAt: now.Add(-time.Duration(i) * time.Hour),
UpdatedAt: now,
}
}
return products
}
This HTTP benchmarking demonstrates:
- Handler benchmarking: Measuring API endpoint performance
- Data size impact: Testing with different payload sizes
- Response time simulation: Measuring the impact of backend delays
- Memory allocation tracking: Using b.ReportAllocs() to monitor memory usage
- Realistic scenarios: Testing with representative data volumes
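Handlers can also be benchmarked under concurrent load with b.RunParallel; a sketch reusing the mock service, handler constructor, and generateTestProducts helper assumed above:
// BenchmarkProductListHandlerParallel exercises the listing endpoint from
// multiple goroutines to approximate concurrent request handling.
func BenchmarkProductListHandlerParallel(b *testing.B) {
	gin.SetMode(gin.ReleaseMode)

	mockService := new(MockProductService)
	mockService.On("ListProducts", mock.Anything, 100, 0).
		Return(generateTestProducts(100), nil)

	handler := NewProductHandler(mockService)
	router := gin.New()
	router.GET("/products", handler.ListProducts)

	b.ReportAllocs()
	b.ResetTimer()
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			req, _ := http.NewRequest(http.MethodGet, "/products", nil)
			w := httptest.NewRecorder()
			router.ServeHTTP(w, req)
			if w.Code != http.StatusOK {
				b.Errorf("expected status 200, got %d", w.Code)
			}
		}
	})
}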
Analyzing Benchmark Results
Interpreting benchmark results is crucial for making informed optimizations:
package main
import (
"fmt"
"math"
"sort"
"strings"
)
// BenchmarkResult represents the outcome of a benchmark run
type BenchmarkResult struct {
Name string
NsPerOp float64
AllocsPerOp int64
BytesPerOp int64
MBPerSecond float64
Measurements []float64
}
// AnalyzeBenchmarks demonstrates how to analyze benchmark data
func AnalyzeBenchmarks(results []BenchmarkResult) {
// Sort by ns/op (fastest first)
sort.Slice(results, func(i, j int) bool {
return results[i].NsPerOp < results[j].NsPerOp
})
// Print summary table
fmt.Println("Performance Summary (sorted by ns/op):")
fmt.Printf("%-30s %-15s %-15s %-15s %-15s\n",
"Benchmark", "Time (ns/op)", "Allocs (count)", "Memory (B/op)", "Throughput (MB/s)")
fmt.Println(strings.Repeat("-", 90))
for _, r := range results {
fmt.Printf("%-30s %-15.2f %-15d %-15d %-15.2f\n",
r.Name, r.NsPerOp, r.AllocsPerOp, r.BytesPerOp, r.MBPerSecond)
}
// Statistical analysis for a specific benchmark
if len(results) > 0 {
result := results[0]
if len(result.Measurements) > 0 {
mean, stdDev := calculateStats(result.Measurements)
cv := (stdDev / mean) * 100 // Coefficient of variation
fmt.Printf("\nStatistical Analysis for %s:\n", result.Name)
fmt.Printf(" Mean: %.2f ns/op\n", mean)
fmt.Printf(" Standard Deviation: %.2f ns/op\n", stdDev)
fmt.Printf(" Coefficient of Var: %.2f%%\n", cv)
// Interpret the results
fmt.Println("\nInterpretation:")
if cv < 1.0 {
fmt.Println(" Excellent stability (CV < 1%)")
} else if cv < 5.0 {
fmt.Println(" Good stability (CV < 5%)")
} else if cv < 10.0 {
fmt.Println(" Moderate stability (CV < 10%)")
} else {
fmt.Println(" Poor stability (CV >= 10%) - Results may not be reliable")
}
// Performance comparison
if len(results) > 1 {
				fastest := results[0]
				runnerUp := results[1]
				// Results are sorted ascending, so results[0] is the fastest entry.
				delta := runnerUp.NsPerOp - fastest.NsPerOp
				pct := delta / runnerUp.NsPerOp * 100
				fmt.Printf("\nComparison (%s vs %s):\n", fastest.Name, runnerUp.Name)
				fmt.Printf(" Time difference: %.2f ns/op (%.2f%% less time)\n", delta, pct)
				fmt.Printf(" Memory difference: %d bytes/op\n",
					runnerUp.BytesPerOp-fastest.BytesPerOp)
				fmt.Printf(" Allocation difference: %d allocs/op\n",
					runnerUp.AllocsPerOp-fastest.AllocsPerOp)
}
}
}
}
// calculateStats computes mean and standard deviation
func calculateStats(measurements []float64) (float64, float64) {
sum := 0.0
for _, m := range measurements {
sum += m
}
mean := sum / float64(len(measurements))
sumSquaredDiff := 0.0
for _, m := range measurements {
diff := m - mean
sumSquaredDiff += diff * diff
}
variance := sumSquaredDiff / float64(len(measurements))
stdDev := math.Sqrt(variance)
return mean, stdDev
}
This analysis approach demonstrates:
- Result sorting: Ranking implementations by performance
- Statistical analysis: Computing mean, standard deviation, and coefficient of variation
- Stability assessment: Evaluating the reliability of benchmark results
- Comparative analysis: Quantifying improvements between implementations
- Throughput reporting: Presenting throughput (MB/s) alongside the timing results
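As a usage sketch, the analyzer above can be driven from the same package with results you have collected by hand or parsed from go test -bench output; the benchmark names and numbers below are purely illustrative:
func main() {
	// Illustrative numbers for two competing implementations; in practice these
	// would be parsed from `go test -bench=. -benchmem -count=10` output.
	results := []BenchmarkResult{
		{
			Name:         "BenchmarkEncodeJSON",
			NsPerOp:      1250.5,
			AllocsPerOp:  12,
			BytesPerOp:   864,
			MBPerSecond:  410.2,
			Measurements: []float64{1248.1, 1251.3, 1252.0, 1250.6},
		},
		{
			Name:         "BenchmarkEncodeGob",
			NsPerOp:      980.2,
			AllocsPerOp:  8,
			BytesPerOp:   512,
			MBPerSecond:  523.7,
			Measurements: []float64{978.9, 981.0, 980.7},
		},
	}
	AnalyzeBenchmarks(results)
}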
Test Automation and CI/CD Integration
Integrating tests into your CI/CD pipeline ensures code quality throughout the development lifecycle.
Configuring GitHub Actions for Go Testing
GitHub Actions provides a powerful platform for automating Go tests:
# .github/workflows/go-test.yml
name: Go Tests
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
test:
name: Test
runs-on: ubuntu-latest
services:
# Add PostgreSQL service container
postgres:
image: postgres:14
env:
POSTGRES_USER: testuser
POSTGRES_PASSWORD: testpass
POSTGRES_DB: testdb
ports:
- 5432:5432
# Set health checks to wait until postgres has started
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
# Add Redis service container
redis:
image: redis:6
ports:
- 6379:6379
options: >-
--health-cmd "redis-cli ping"
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: '1.20'
cache: true
- name: Install dependencies
run: go mod download
- name: Verify dependencies
run: go mod verify
- name: Run linters
uses: golangci/golangci-lint-action@v3
with:
version: latest
args: --timeout=5m
- name: Run unit tests
run: go test -v -race -coverprofile=coverage.txt -covermode=atomic ./...
- name: Run integration tests
run: go test -v -tags=integration ./...
env:
POSTGRES_HOST: localhost
POSTGRES_PORT: 5432
POSTGRES_USER: testuser
POSTGRES_PASSWORD: testpass
POSTGRES_DB: testdb
REDIS_HOST: localhost
REDIS_PORT: 6379
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v3
with:
file: ./coverage.txt
flags: unittests
fail_ci_if_error: true
benchmark:
name: Performance Benchmarks
runs-on: ubuntu-latest
needs: test
steps:
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: '1.20'
cache: true
- name: Install dependencies
run: go mod download
- name: Run benchmarks
run: |
go test -bench=. -benchmem ./... > benchmark_results.txt
cat benchmark_results.txt
- name: Store benchmark result
uses: actions/upload-artifact@v3
with:
name: benchmark-results
path: benchmark_results.txt
This GitHub Actions workflow demonstrates:
- Multi-service testing: Setting up PostgreSQL and Redis for integration tests
- Test separation: Running unit and integration tests separately (the build-tag convention this relies on is sketched after this list)
- Race detection: Using -race to find concurrency issues
- Coverage reporting: Generating and uploading coverage reports
- Performance tracking: Running and storing benchmark results
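The separate integration step assumes the integration tests are guarded by a build tag, so plain go test ./... never touches them. A minimal sketch of that convention (the package layout and test name are illustrative):
//go:build integration

// Package integration holds tests that talk to the real services started by CI.
package integration

import (
	"os"
	"testing"
)

// TestPostgresEnv only compiles and runs when `go test -tags=integration ./...`
// is invoked; the environment variables match those exported by the workflow above.
func TestPostgresEnv(t *testing.T) {
	if os.Getenv("POSTGRES_HOST") == "" {
		t.Skip("POSTGRES_HOST not set; integration environment unavailable")
	}
	// ... connect to the database and exercise repository code here ...
}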
Makefile for Local Test Automation
A well-structured Makefile simplifies local testing:
# Makefile
# Variables
GOCMD=go
GOBUILD=$(GOCMD) build
GOCLEAN=$(GOCMD) clean
GOTEST=$(GOCMD) test
GOGET=$(GOCMD) get
GOMOD=$(GOCMD) mod
GOLINT=golangci-lint
BINARY_NAME=myapp
COVERAGE_FILE=coverage.txt
# Build targets
.PHONY: all build clean test coverage lint integration benchmark docker-test
all: lint test build
build:
$(GOBUILD) -o $(BINARY_NAME) -v ./cmd/main.go
clean:
$(GOCLEAN)
rm -f $(BINARY_NAME)
rm -f $(COVERAGE_FILE)
# Dependency management
deps:
$(GOMOD) download
$(GOMOD) tidy
# Testing targets
test: deps
$(GOTEST) -v -race -coverprofile=$(COVERAGE_FILE) -covermode=atomic ./...
test-short:
$(GOTEST) -v -short ./...
integration: deps
$(GOTEST) -v -tags=integration ./...
benchmark:
$(GOTEST) -bench=. -benchmem ./...
benchmark-compare:
$(GOTEST) -bench=. -benchmem ./... > old.txt
@echo "Make your changes, then run: make benchmark-compare-after"
benchmark-compare-after:
$(GOTEST) -bench=. -benchmem ./... > new.txt
benchstat old.txt new.txt
coverage: test
go tool cover -html=$(COVERAGE_FILE)
# Code quality
lint:
$(GOLINT) run
# Docker-based testing
docker-test:
docker-compose -f docker-compose.test.yml up --build --abort-on-container-exit
# Generate mocks
generate-mocks:
mockery --all --keeptree --outpkg=mocks --output=./internal/mocks
# Database migrations
migrate-up:
migrate -path ./migrations -database "postgres://user:pass@localhost:5432/dbname?sslmode=disable" up
migrate-down:
migrate -path ./migrations -database "postgres://user:pass@localhost:5432/dbname?sslmode=disable" down
This Makefile demonstrates:
- Comprehensive targets: Covering build, test, and quality assurance
- Test variations: Supporting different test types and modes
- Benchmark comparison: Facilitating before/after performance analysis
- Docker integration: Running tests in containerized environments
- Development utilities: Including mock generation and database migrations
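The test-short target works because individual tests opt out of slow paths via testing.Short(); a minimal sketch (the package and test name are illustrative):
package report

import "testing"

// TestExpensiveReport honors `go test -short`, which is exactly what the
// Makefile's test-short target passes, by skipping its slow path.
func TestExpensiveReport(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping expensive report generation in short mode")
	}
	// ... slow, resource-heavy assertions would run here ...
}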
Continuous Testing with Test Containers
For more complex integration testing in CI/CD environments:
package integration
import (
"context"
"database/sql"
"fmt"
"os"
"testing"
"time"
"github.com/testcontainers/testcontainers-go"
"github.com/testcontainers/testcontainers-go/wait"
)
// TestMain sets up the test environment for all integration tests
func TestMain(m *testing.M) {
// Skip integration tests in short mode
if testing.Short() {
fmt.Println("Skipping integration tests in short mode")
os.Exit(0)
}
// Check if we're running in CI environment
inCI := os.Getenv("CI") == "true"
var cleanup func()
var err error
// Set up test environment
if inCI {
// In CI, use the services provided by the CI environment
err = setupCIEnvironment()
} else {
// Locally, use test containers
cleanup, err = setupLocalTestContainers()
}
if err != nil {
fmt.Printf("Failed to set up test environment: %v\n", err)
os.Exit(1)
}
// Run tests
code := m.Run()
// Clean up
if cleanup != nil {
cleanup()
}
os.Exit(code)
}
// setupCIEnvironment configures the test environment for CI
func setupCIEnvironment() error {
// In CI, services are already running, just set up connections
dbHost := os.Getenv("POSTGRES_HOST")
dbPort := os.Getenv("POSTGRES_PORT")
dbUser := os.Getenv("POSTGRES_USER")
dbPass := os.Getenv("POSTGRES_PASSWORD")
dbName := os.Getenv("POSTGRES_DB")
// Construct connection string
dbURI := fmt.Sprintf("postgres://%s:%s@%s:%s/%s?sslmode=disable",
dbUser, dbPass, dbHost, dbPort, dbName)
// Set environment variable for tests to use
os.Setenv("DATABASE_URL", dbURI)
// Verify connection
db, err := sql.Open("postgres", dbURI)
if err != nil {
return fmt.Errorf("failed to connect to database: %w", err)
}
defer db.Close()
// Check connection
if err := db.Ping(); err != nil {
return fmt.Errorf("failed to ping database: %w", err)
}
fmt.Println("Successfully connected to CI database")
return nil
}
// setupLocalTestContainers creates and configures test containers
func setupLocalTestContainers() (func(), error) {
ctx := context.Background()
var containers []testcontainers.Container
var err error
// Start PostgreSQL container
pgContainer, pgURI, err := startPostgresContainer(ctx)
if err != nil {
return nil, err
}
containers = append(containers, pgContainer)
os.Setenv("DATABASE_URL", pgURI)
// Start Redis container
redisContainer, redisURI, err := startRedisContainer(ctx)
if err != nil {
// Clean up PostgreSQL container
for _, c := range containers {
c.Terminate(ctx)
}
return nil, err
}
containers = append(containers, redisContainer)
os.Setenv("REDIS_URL", redisURI)
// Return cleanup function
cleanup := func() {
for _, c := range containers {
c.Terminate(ctx)
}
}
fmt.Println("Test containers started successfully")
return cleanup, nil
}
// startPostgresContainer starts a PostgreSQL container
func startPostgresContainer(ctx context.Context) (testcontainers.Container, string, error) {
req := testcontainers.ContainerRequest{
Image: "postgres:14-alpine",
ExposedPorts: []string{"5432/tcp"},
Env: map[string]string{
"POSTGRES_USER": "testuser",
"POSTGRES_PASSWORD": "testpass",
"POSTGRES_DB": "testdb",
},
WaitingFor: wait.ForLog("database system is ready to accept connections"),
}
container, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
ContainerRequest: req,
Started: true,
})
if err != nil {
return nil, "", fmt.Errorf("failed to start postgres container: %w", err)
}
// Get host and port
host, err := container.Host(ctx)
if err != nil {
container.Terminate(ctx)
return nil, "", fmt.Errorf("failed to get container host: %w", err)
}
port, err := container.MappedPort(ctx, "5432")
if err != nil {
container.Terminate(ctx)
return nil, "", fmt.Errorf("failed to get container port: %w", err)
}
// Construct connection URI
uri := fmt.Sprintf("postgres://%s:%s@%s:%s/%s?sslmode=disable",
"testuser", "testpass", host, port.Port(), "testdb")
// Verify connection
db, err := sql.Open("postgres", uri)
if err != nil {
container.Terminate(ctx)
return nil, "", fmt.Errorf("failed to connect to database: %w", err)
}
defer db.Close()
// Wait for database to be ready
for i := 0; i < 10; i++ {
err = db.Ping()
if err == nil {
break
}
time.Sleep(time.Second)
}
if err != nil {
container.Terminate(ctx)
return nil, "", fmt.Errorf("failed to ping database: %w", err)
}
return container, uri, nil
}
// startRedisContainer starts a Redis container
func startRedisContainer(ctx context.Context) (testcontainers.Container, string, error) {
req := testcontainers.ContainerRequest{
Image: "redis:6-alpine",
ExposedPorts: []string{"6379/tcp"},
WaitingFor: wait.ForLog("Ready to accept connections"),
}
container, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
ContainerRequest: req,
Started: true,
})
if err != nil {
return nil, "", fmt.Errorf("failed to start redis container: %w", err)
}
// Get host and port
host, err := container.Host(ctx)
if err != nil {
container.Terminate(ctx)
return nil, "", fmt.Errorf("failed to get container host: %w", err)
}
port, err := container.MappedPort(ctx, "6379")
if err != nil {
container.Terminate(ctx)
return nil, "", fmt.Errorf("failed to get container port: %w", err)
}
// Construct connection URI
uri := fmt.Sprintf("redis://%s:%s", host, port.Port())
return container, uri, nil
}
This test container setup demonstrates:
- Environment detection: Adapting to CI or local environments
- Container orchestration: Managing multiple service containers
- Connection verification: Ensuring services are ready before testing
- Resource cleanup: Properly terminating containers after tests
- Configuration sharing: Setting environment variables for tests to use
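Individual integration tests in the same package can then stay oblivious to where the database came from and simply read the DATABASE_URL that TestMain exported. A hypothetical example, reusing the imports shown above:
// TestDatabaseConnection is an illustrative integration test; it relies only on
// the DATABASE_URL set by TestMain, so it runs unchanged in CI and locally.
func TestDatabaseConnection(t *testing.T) {
	db, err := sql.Open("postgres", os.Getenv("DATABASE_URL"))
	if err != nil {
		t.Fatalf("failed to open database: %v", err)
	}
	defer db.Close()

	if err := db.Ping(); err != nil {
		t.Fatalf("database not reachable: %v", err)
	}
	// ... exercise repository code against the real schema here ...
}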
Test Matrix with Multiple Go Versions
Testing across multiple Go versions ensures compatibility:
# .github/workflows/go-matrix.yml
name: Go Matrix Tests
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
test:
name: Test Go ${{ matrix.go-version }} on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: ['1.18', '1.19', '1.20']
os: [ubuntu-latest, macos-latest, windows-latest]
steps:
- uses: actions/checkout@v3
- name: Set up Go ${{ matrix.go-version }}
uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}
cache: true
- name: Install dependencies
run: go mod download
- name: Run tests
run: go test -v ./...
compatibility:
name: API Compatibility Check
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: '1.20'
cache: true
- name: Check API compatibility
uses: smola/go-compat-check@v1
with:
go-version: '1.18'
This matrix testing approach demonstrates:
- Version matrix: Testing across multiple Go versions
- Platform matrix: Testing on different operating systems
- Compatibility checking: Ensuring API backward compatibility
- Parallel execution: Running tests concurrently for efficiency
- Conditional testing: Adapting tests to different environments
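The OS leg of the matrix pays off when tests adapt to the platform they run on; a minimal sketch of such a conditional test (the package, test name, and skipped behavior are illustrative):
package config

import (
	"runtime"
	"testing"
)

// TestDefaultConfigPath keeps the Windows leg of the matrix green by skipping
// behavior that only exists on Unix-like systems.
func TestDefaultConfigPath(t *testing.T) {
	if runtime.GOOS == "windows" {
		t.Skip("per-user config path not implemented on Windows yet")
	}
	// ... assert Unix-style path behavior here ...
}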
Property-Based and Fuzz Testing
Traditional unit tests verify specific inputs and outputs, but property-based and fuzz testing take a more comprehensive approach by testing properties and invariants across a wide range of inputs.
Property-Based Testing with Rapid
Property-based testing verifies that your code satisfies certain properties for all valid inputs:
package property
import (
"strings"
"testing"
"unicode"
"pgregory.net/rapid"
)
// PasswordValidator validates password strength
type PasswordValidator struct {
MinLength int
RequireUpper bool
RequireLower bool
RequireNumber bool
RequireSpecial bool
}
// NewPasswordValidator creates a new password validator with default settings
func NewPasswordValidator() *PasswordValidator {
return &PasswordValidator{
MinLength: 8,
RequireUpper: true,
RequireLower: true,
RequireNumber: true,
RequireSpecial: true,
}
}
// Validate checks if a password meets the requirements
func (v *PasswordValidator) Validate(password string) (bool, []string) {
var issues []string
if len(password) < v.MinLength {
issues = append(issues, "password too short")
}
if v.RequireUpper && !containsUpper(password) {
issues = append(issues, "missing uppercase letter")
}
if v.RequireLower && !containsLower(password) {
issues = append(issues, "missing lowercase letter")
}
if v.RequireNumber && !containsNumber(password) {
issues = append(issues, "missing number")
}
if v.RequireSpecial && !containsSpecial(password) {
issues = append(issues, "missing special character")
}
return len(issues) == 0, issues
}
// Helper functions
func containsUpper(s string) bool {
for _, r := range s {
if unicode.IsUpper(r) {
return true
}
}
return false
}
func containsLower(s string) bool {
for _, r := range s {
if unicode.IsLower(r) {
return true
}
}
return false
}
func containsNumber(s string) bool {
for _, r := range s {
if unicode.IsNumber(r) {
return true
}
}
return false
}
func containsSpecial(s string) bool {
for _, r := range s {
if !unicode.IsLetter(r) && !unicode.IsNumber(r) && !unicode.IsSpace(r) {
return true
}
}
return false
}
// TestPasswordValidatorProperties demonstrates property-based testing
func TestPasswordValidatorProperties(t *testing.T) {
validator := NewPasswordValidator()
// Property: A valid password should pass validation
t.Run("valid password passes", func(t *testing.T) {
rapid.Check(t, func(t *rapid.T) {
// Generate a valid password
length := rapid.IntRange(8, 100).Draw(t, "length")
hasUpper := true
hasLower := true
hasNumber := true
hasSpecial := true
password := generatePassword(t, length, hasUpper, hasLower, hasNumber, hasSpecial)
// Verify the property
valid, issues := validator.Validate(password)
if !valid {
t.Fatalf("Expected valid password, but got issues: %v for password: %s", issues, password)
}
})
})
// Property: A password missing uppercase should fail validation
t.Run("password without uppercase fails", func(t *testing.T) {
rapid.Check(t, func(t *rapid.T) {
// Generate a password without uppercase
length := rapid.IntRange(8, 100).Draw(t, "length")
hasUpper := false
hasLower := true
hasNumber := true
hasSpecial := true
password := generatePassword(t, length, hasUpper, hasLower, hasNumber, hasSpecial)
// Verify the property
valid, issues := validator.Validate(password)
if valid {
t.Fatalf("Expected invalid password, but it passed validation: %s", password)
}
if !containsIssue(issues, "missing uppercase") {
t.Fatalf("Expected 'missing uppercase' issue, but got: %v", issues)
}
})
})
// Property: A password missing lowercase should fail validation
t.Run("password without lowercase fails", func(t *rapid.T) {
rapid.Check(t, func(t *rapid.T) {
// Generate a password without lowercase
length := rapid.IntRange(8, 100).Draw(t, "length")
hasUpper := true
hasLower := false
hasNumber := true
hasSpecial := true
password := generatePassword(t, length, hasUpper, hasLower, hasNumber, hasSpecial)
// Verify the property
valid, issues := validator.Validate(password)
if valid {
t.Fatalf("Expected invalid password, but it passed validation: %s", password)
}
if !containsIssue(issues, "missing lowercase") {
t.Fatalf("Expected 'missing lowercase' issue, but got: %v", issues)
}
})
})
// Property: A password shorter than minimum length should fail validation
t.Run("short password fails", func(t *rapid.T) {
rapid.Check(t, func(t *rapid.T) {
// Generate a short password
length := rapid.IntRange(1, 7).Draw(t, "length")
hasUpper := true
hasLower := true
hasNumber := true
hasSpecial := true
password := generatePassword(t, length, hasUpper, hasLower, hasNumber, hasSpecial)
// Verify the property
valid, issues := validator.Validate(password)
if valid {
t.Fatalf("Expected invalid password, but it passed validation: %s", password)
}
if !containsIssue(issues, "too short") {
t.Fatalf("Expected 'too short' issue, but got: %v", issues)
}
})
})
// Property: Validation requirements should be configurable
t.Run("configurable validator", func(t *rapid.T) {
rapid.Check(t, func(t *rapid.T) {
// Create a custom validator
customValidator := &PasswordValidator{
MinLength: rapid.IntRange(5, 15).Draw(t, "minLength"),
RequireUpper: rapid.Bool().Draw(t, "requireUpper"),
RequireLower: rapid.Bool().Draw(t, "requireLower"),
RequireNumber: rapid.Bool().Draw(t, "requireNumber"),
RequireSpecial: rapid.Bool().Draw(t, "requireSpecial"),
}
// Generate a password that meets all possible requirements
password := generatePassword(t, 20, true, true, true, true)
// Verify the property
valid, _ := customValidator.Validate(password)
if !valid && len(password) >= customValidator.MinLength {
t.Fatalf("Expected valid password for relaxed validator, but it failed: %s", password)
}
})
})
}
// Helper function to generate passwords with specific characteristics
func generatePassword(t *rapid.T, length int, hasUpper, hasLower, hasNumber, hasSpecial bool) string {
var chars []rune
var builder strings.Builder
// Ensure we have at least one of each required character type
if hasUpper {
chars = append(chars, rapid.RuneFrom([]rune("ABCDEFGHIJKLMNOPQRSTUVWXYZ")).Draw(t, "upper"))
}
if hasLower {
chars = append(chars, rapid.RuneFrom([]rune("abcdefghijklmnopqrstuvwxyz")).Draw(t, "lower"))
}
if hasNumber {
chars = append(chars, rapid.RuneFrom([]rune("0123456789")).Draw(t, "number"))
}
if hasSpecial {
chars = append(chars, rapid.RuneFrom([]rune("!@#$%^&*()_+-=[]{}|;:,.<>?")).Draw(t, "special"))
}
// Fill the rest with allowed characters
for len(chars) < length {
var char rune
switch rapid.IntRange(0, 3).Draw(t, "charType") {
case 0:
if hasUpper {
char = rapid.RuneFrom([]rune("ABCDEFGHIJKLMNOPQRSTUVWXYZ")).Draw(t, "fillUpper")
}
case 1:
if hasLower {
char = rapid.RuneFrom([]rune("abcdefghijklmnopqrstuvwxyz")).Draw(t, "fillLower")
}
case 2:
if hasNumber {
char = rapid.RuneFrom([]rune("0123456789")).Draw(t, "fillNumber")
}
case 3:
if hasSpecial {
char = rapid.RuneFrom([]rune("!@#$%^&*()_+-=[]{}|;:,.<>?")).Draw(t, "fillSpecial")
}
}
if char != 0 {
chars = append(chars, char)
}
}
// Shuffle the characters with a Fisher-Yates pass driven by rapid draws,
// so replay and shrinking remain deterministic
for i := len(chars) - 1; i > 0; i-- {
j := rapid.IntRange(0, i).Draw(t, "shuffleIndex")
chars[i], chars[j] = chars[j], chars[i]
}
// Build the password
for _, c := range chars {
builder.WriteRune(c)
}
return builder.String()
}
// Helper function to check if an issue is in the list
func containsIssue(issues []string, substring string) bool {
for _, issue := range issues {
if strings.Contains(issue, substring) {
return true
}
}
return false
}
This property-based test demonstrates:
- Property verification: Testing that code satisfies general properties
- Random input generation: Creating diverse test inputs automatically
- Shrinking: Finding the simplest failing case when a test fails
- Comprehensive coverage: Testing edge cases that might be missed in traditional tests
- Configurable generators: Creating custom input generators for domain-specific testing
Fuzz Testing with Go's Built-in Fuzzer
Go 1.18+ includes built-in support for fuzz testing:
package fuzz
import (
"testing"
"unicode/utf8"
)
// ReverseString reverses a UTF-8 string
func ReverseString(s string) string {
// Convert string to runes to handle multi-byte characters correctly
runes := []rune(s)
// Reverse the runes
for i, j := 0, len(runes)-1; i < j; i, j = i+1, j-1 {
runes[i], runes[j] = runes[j], runes[i]
}
// Convert back to string
return string(runes)
}
// TestReverseString is a traditional unit test
func TestReverseString(t *testing.T) {
testCases := []struct {
name string
input string
expected string
}{
{"empty string", "", ""},
{"single character", "a", "a"},
{"simple string", "hello", "olleh"},
{"with spaces", "hello world", "dlrow olleh"},
{"palindrome", "racecar", "racecar"},
{"unicode", "你好世界", "界世好你"},
{"mixed characters", "hello世界", "界世olleh"},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
result := ReverseString(tc.input)
if result != tc.expected {
t.Errorf("ReverseString(%q) = %q, expected %q", tc.input, result, tc.expected)
}
})
}
}
// FuzzReverseString is a fuzz test that verifies properties of string reversal
func FuzzReverseString(f *testing.F) {
// Seed corpus with some interesting values
seeds := []string{"", "a", "ab", "hello", "hello world", "racecar", "你好世界", "hello世界"}
for _, seed := range seeds {
f.Add(seed)
}
// Fuzz test function
f.Fuzz(func(t *testing.T, s string) {
// Skip invalid UTF-8 strings
if !utf8.ValidString(s) {
t.Skip("Skipping invalid UTF-8 string")
}
// Property 1: Reversing twice should return the original string
reversed := ReverseString(s)
doubleReversed := ReverseString(reversed)
if s != doubleReversed {
t.Errorf("ReverseString(ReverseString(%q)) = %q, expected %q", s, doubleReversed, s)
}
// Property 2: Length should be preserved
if len([]rune(s)) != len([]rune(reversed)) {
t.Errorf("len([]rune(%q)) = %d, len([]rune(%q)) = %d", s, len([]rune(s)), reversed, len([]rune(reversed)))
}
// Property 3: Character frequency should be preserved
if !sameCharFrequency(s, reversed) {
t.Errorf("Character frequency differs: %q vs %q", s, reversed)
}
})
}
// Helper function to check if two strings have the same character frequency
func sameCharFrequency(s1, s2 string) bool {
freq1 := make(map[rune]int)
freq2 := make(map[rune]int)
for _, r := range s1 {
freq1[r]++
}
for _, r := range s2 {
freq2[r]++
}
if len(freq1) != len(freq2) {
return false
}
for r, count := range freq1 {
if freq2[r] != count {
return false
}
}
return true
}
To run the fuzz test:
# Run the fuzz test for a short time
go test -fuzz=FuzzReverseString -fuzztime=10s
# Run the fuzz test with a longer budget and explicit worker parallelism
go test -fuzz=FuzzReverseString -fuzztime=5m -parallel=4
# Replay a specific failing input saved under testdata/fuzz/FuzzReverseString/
go test -run=FuzzReverseString/<corpus-entry-name>
This fuzz testing approach demonstrates:
- Automated input generation: Generating diverse test inputs automatically
- Property verification: Testing invariants that should hold for all inputs
- Corpus management: Building and maintaining a corpus of interesting inputs
- Crash reproduction: Automatically saving and replaying failing inputs
- Integration with Go tooling: Using built-in Go fuzzing support
Combining Property-Based Testing with Fuzzing
For maximum coverage, combine property-based testing with fuzzing:
package combined
import (
"fmt"
"testing"
"unicode/utf8"
"pgregory.net/rapid"
)
// JSONEscaper escapes special characters in JSON strings
type JSONEscaper struct {
EscapeUnicode bool
}
// Escape escapes special characters in a string for JSON
func (e *JSONEscaper) Escape(s string) string {
var result []rune
for _, r := range s {
switch r {
case '"':
result = append(result, '\\', '"')
case '\\':
result = append(result, '\\', '\\')
case '/':
result = append(result, '\\', '/')
case '\b':
result = append(result, '\\', 'b')
case '\f':
result = append(result, '\\', 'f')
case '\n':
result = append(result, '\\', 'n')
case '\r':
result = append(result, '\\', 'r')
case '\t':
result = append(result, '\\', 't')
default:
if e.EscapeUnicode && r > 127 {
// Escape as \uXXXX
result = append(result, []rune(fmt.Sprintf("\\u%04x", r))...)
} else {
result = append(result, r)
}
}
}
return string(result)
}
// Unescape unescapes a JSON string
func (e *JSONEscaper) Unescape(s string) (string, error) {
var result []rune
runes := []rune(s)
for i := 0; i < len(runes); i++ {
if runes[i] == '\\' && i+1 < len(runes) {
i++
switch runes[i] {
case '"':
result = append(result, '"')
case '\\':
result = append(result, '\\')
case '/':
result = append(result, '/')
case 'b':
result = append(result, '\b')
case 'f':
result = append(result, '\f')
case 'n':
result = append(result, '\n')
case 'r':
result = append(result, '\r')
case 't':
result = append(result, '\t')
case 'u':
if i+4 < len(runes) {
var codePoint int
_, err := fmt.Sscanf(string(runes[i+1:i+5]), "%04x", &codePoint)
if err != nil {
return "", fmt.Errorf("invalid unicode escape: %s", string(runes[i-1:i+5]))
}
result = append(result, rune(codePoint))
i += 4
} else {
return "", fmt.Errorf("incomplete unicode escape")
}
default:
return "", fmt.Errorf("invalid escape sequence: \\%c", runes[i])
}
} else {
result = append(result, runes[i])
}
}
return string(result), nil
}
// TestJSONEscaperProperties demonstrates property-based testing
func TestJSONEscaperProperties(t *testing.T) {
t.Run("escape-unescape roundtrip", func(t *testing.T) {
rapid.Check(t, func(t *rapid.T) {
// Generate a valid UTF-8 string
s := rapid.StringN(0, 100, -1).Filter(utf8.ValidString).Draw(t, "input")
// Create escaper
escapeUnicode := rapid.Bool().Draw(t, "escapeUnicode")
escaper := &JSONEscaper{EscapeUnicode: escapeUnicode}
// Escape and then unescape
escaped := escaper.Escape(s)
unescaped, err := escaper.Unescape(escaped)
// Verify properties
if err != nil {
t.Fatalf("Failed to unescape: %v", err)
}
if unescaped != s {
t.Fatalf("Roundtrip failed: %q -> %q -> %q", s, escaped, unescaped)
}
})
})
}
// FuzzJSONEscaper is a fuzz test for the JSONEscaper
func FuzzJSONEscaper(f *testing.F) {
// Seed corpus
seeds := []string{"", "hello", "\"quoted\"", "line\nbreak", "tab\there", "unicode: 你好"}
for _, seed := range seeds {
f.Add(seed, true) // With Unicode escaping
f.Add(seed, false) // Without Unicode escaping
}
f.Fuzz(func(t *testing.T, s string, escapeUnicode bool) {
// Skip invalid UTF-8
if !utf8.ValidString(s) {
t.Skip("Invalid UTF-8")
}
escaper := &JSONEscaper{EscapeUnicode: escapeUnicode}
// Test escape-unescape roundtrip
escaped := escaper.Escape(s)
unescaped, err := escaper.Unescape(escaped)
if err != nil {
t.Fatalf("Failed to unescape: %v", err)
}
if unescaped != s {
t.Fatalf("Roundtrip failed: %q -> %q -> %q", s, escaped, unescaped)
}
// Verify that raw control characters never survive escaping; a double quote
// may legitimately remain in the output, preceded by a backslash
for _, r := range escaped {
switch r {
case '\b', '\f', '\n', '\r', '\t':
t.Fatalf("Escaped string contains unescaped control character: %q", escaped)
}
}
// Verify Unicode escaping behavior
if escapeUnicode {
for _, r := range s {
if r > 127 {
// Check that the escaped string contains the Unicode escape sequence
escapeSeq := fmt.Sprintf("\\u%04x", r)
if !contains(escaped, escapeSeq) {
t.Fatalf("Unicode character %q not properly escaped in %q", r, escaped)
}
}
}
}
})
}
// Helper function to check if a string contains a substring
func contains(s, substr string) bool {
for i := 0; i <= len(s)-len(substr); i++ {
if s[i:i+len(substr)] == substr {
return true
}
}
return false
}
This combined approach demonstrates:
- Comprehensive testing: Using both property-based testing and fuzzing
- Shared properties: Testing the same invariants with different techniques
- Complementary strengths: Property tests for structured inputs, fuzzing for edge cases
- Seed corpus sharing: Using similar seed values for both approaches
- Focused verification: Testing specific properties with each technique
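One concrete way to share properties between the two techniques is to pull the invariant into a helper that both rapid.Check and f.Fuzz call. A sketch in the same package (the helper, interface, and test names are illustrative; it assumes rapid.T exposes Fatalf, mirroring testing.T's reporting methods):
// failReporter is the small reporting surface shared by *testing.T and *rapid.T,
// so one helper can serve both the property test and the fuzz target.
type failReporter interface {
	Fatalf(format string, args ...interface{})
}

// checkRoundtrip holds the shared invariant: escape then unescape must return the input.
func checkRoundtrip(t failReporter, s string, escapeUnicode bool) {
	escaper := &JSONEscaper{EscapeUnicode: escapeUnicode}
	escaped := escaper.Escape(s)
	unescaped, err := escaper.Unescape(escaped)
	if err != nil {
		t.Fatalf("failed to unescape %q: %v", escaped, err)
	}
	if unescaped != s {
		t.Fatalf("roundtrip failed: %q -> %q -> %q", s, escaped, unescaped)
	}
}

// TestSharedRoundtrip drives the helper with rapid-generated inputs...
func TestSharedRoundtrip(t *testing.T) {
	rapid.Check(t, func(rt *rapid.T) {
		s := rapid.StringN(0, 100, -1).Filter(utf8.ValidString).Draw(rt, "input")
		checkRoundtrip(rt, s, rapid.Bool().Draw(rt, "escapeUnicode"))
	})
}

// ...while FuzzSharedRoundtrip feeds the very same helper from the fuzzer.
func FuzzSharedRoundtrip(f *testing.F) {
	f.Add("hello", true)
	f.Fuzz(func(t *testing.T, s string, escapeUnicode bool) {
		if !utf8.ValidString(s) {
			t.Skip("invalid UTF-8")
		}
		checkRoundtrip(t, s, escapeUnicode)
	})
}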
Testing in Production and Observability
While pre-deployment testing is essential, modern systems also require testing and monitoring in production environments.
Feature Flags and Canary Deployments
Feature flags enable controlled rollouts and testing in production:
package featureflags
import (
"context"
"fmt"
"math/rand"
"sync"
"time"
)
// FeatureFlag represents a configurable feature flag
type FeatureFlag struct {
Name string
Description string
Enabled bool
Percentage float64 // 0.0 to 1.0 for percentage rollout
UserGroups []string
}
// FeatureFlagService manages feature flags
type FeatureFlagService struct {
flags map[string]*FeatureFlag
userGroups map[string][]string
mu sync.RWMutex
}
// NewFeatureFlagService creates a new feature flag service
func NewFeatureFlagService() *FeatureFlagService {
return &FeatureFlagService{
flags: make(map[string]*FeatureFlag),
userGroups: make(map[string][]string),
}
}
// RegisterFlag adds a new feature flag
func (s *FeatureFlagService) RegisterFlag(flag *FeatureFlag) {
s.mu.Lock()
defer s.mu.Unlock()
s.flags[flag.Name] = flag
}
// AssignUserToGroup assigns a user to a group
func (s *FeatureFlagService) AssignUserToGroup(userID, group string) {
s.mu.Lock()
defer s.mu.Unlock()
if groups, ok := s.userGroups[userID]; ok {
// Check if user is already in the group
for _, g := range groups {
if g == group {
return
}
}
s.userGroups[userID] = append(groups, group)
} else {
s.userGroups[userID] = []string{group}
}
}
// IsEnabled checks if a feature flag is enabled for a specific user
func (s *FeatureFlagService) IsEnabled(ctx context.Context, flagName, userID string) bool {
s.mu.RLock()
defer s.mu.RUnlock()
flag, ok := s.flags[flagName]
if !ok || !flag.Enabled {
return false
}
// Check if user is in an enabled group
if userGroups, ok := s.userGroups[userID]; ok {
for _, userGroup := range userGroups {
for _, flagGroup := range flag.UserGroups {
if userGroup == flagGroup {
return true
}
}
}
}
// Check percentage rollout
if flag.Percentage > 0 {
// Use consistent hashing to ensure the same user always gets the same result
hash := consistentHash(flagName + userID)
return hash <= flag.Percentage
}
return false
}
// consistentHash generates a consistent hash value between 0.0 and 1.0
func consistentHash(s string) float64 {
h := 0
for i := 0; i < len(s); i++ {
h = 31*h + int(s[i])
}
// Convert to a value between 0.0 and 1.0
return float64(h&0x7fffffff) / float64(0x7fffffff)
}
// CanaryDeployment demonstrates a canary deployment strategy
func CanaryDeployment(ctx context.Context, flagService *FeatureFlagService) {
// Register feature flag for new algorithm
flagService.RegisterFlag(&FeatureFlag{
Name: "new-recommendation-algorithm",
Description: "New machine learning recommendation algorithm",
Enabled: true,
Percentage: 0.05, // Start with 5% of users
UserGroups: []string{"beta-testers", "internal-users"},
})
// Monitor the canary deployment
go func() {
ticker := time.NewTicker(1 * time.Hour)
defer ticker.Stop()
for {
select {
case <-ticker.C:
// Collect metrics for the canary deployment
metrics := collectMetrics("new-recommendation-algorithm")
// If metrics look good, increase the rollout percentage
if metrics.ErrorRate < 0.01 && metrics.Latency < 100*time.Millisecond {
flag, ok := flagService.flags["new-recommendation-algorithm"]
if ok && flag.Percentage < 1.0 {
flagService.mu.Lock()
// Increase by 10% each time
flag.Percentage = min(flag.Percentage+0.1, 1.0)
flagService.mu.Unlock()
fmt.Printf("Increased rollout to %.1f%%\n", flag.Percentage*100)
}
} else {
// If metrics look bad, roll back
flagService.mu.Lock()
flag, ok := flagService.flags["new-recommendation-algorithm"]
if ok {
flag.Enabled = false
fmt.Println("Rolling back due to poor metrics")
}
flagService.mu.Unlock()
return
}
case <-ctx.Done():
return
}
}
}()
}
// DeploymentMetrics represents metrics for a deployment
type DeploymentMetrics struct {
ErrorRate float64
Latency time.Duration
Throughput int
}
// collectMetrics simulates collecting metrics for a feature
func collectMetrics(featureName string) DeploymentMetrics {
// In a real system, this would collect actual metrics from monitoring systems
return DeploymentMetrics{
ErrorRate: rand.Float64() * 0.02, // 0-2% error rate
Latency: time.Duration(rand.Intn(200)) * time.Millisecond,
Throughput: rand.Intn(1000),
}
}
// min returns the minimum of two float64 values
func min(a, b float64) float64 {
if a < b {
return a
}
return b
}
This feature flag implementation demonstrates:
- Gradual rollouts: Controlling feature exposure with percentage-based rollouts
- User targeting: Enabling features for specific user groups
- Consistent hashing: Ensuring users get consistent experiences
- Metric-based decisions: Automatically adjusting rollout based on metrics
- Rollback capability: Quickly disabling features if problems arise
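A minimal sketch of how request-handling code might consult the service defined above (the flag, group, and user names are illustrative):
// illustrativeRollout shows how calling code would check the flag before
// choosing a code path; it lives in the same package as the service above.
func illustrativeRollout(ctx context.Context) {
	svc := NewFeatureFlagService()
	svc.RegisterFlag(&FeatureFlag{
		Name:       "new-recommendation-algorithm",
		Enabled:    true,
		Percentage: 0.05, // 5% of users outside the listed groups
		UserGroups: []string{"beta-testers"},
	})
	svc.AssignUserToGroup("user-42", "beta-testers")

	if svc.IsEnabled(ctx, "new-recommendation-algorithm", "user-42") {
		fmt.Println("serving the new recommendation algorithm")
	} else {
		fmt.Println("serving the legacy algorithm")
	}
}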
Distributed Tracing and Observability
Integrating tracing into your tests ensures observability in production:
package observability
import (
"context"
"fmt"
"log"
"time"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
"go.opentelemetry.io/otel/sdk/resource"
sdktrace "go.opentelemetry.io/otel/sdk/trace"
semconv "go.opentelemetry.io/otel/semconv/v1.7.0"
"go.opentelemetry.io/otel/trace"
)
// OrderService handles order processing
type OrderService struct {
inventoryClient *InventoryClient
paymentClient *PaymentClient
shippingClient *ShippingClient
notificationClient *NotificationClient
tracer trace.Tracer
}
// NewOrderService creates a new order service with tracing
func NewOrderService() (*OrderService, error) {
// Initialize tracer
exporter, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
if err != nil {
return nil, fmt.Errorf("failed to create exporter: %w", err)
}
res := resource.NewWithAttributes(
semconv.SchemaURL,
semconv.ServiceNameKey.String("order-service"),
semconv.ServiceVersionKey.String("1.0.0"),
)
provider := sdktrace.NewTracerProvider(
sdktrace.WithBatcher(exporter),
sdktrace.WithResource(res),
sdktrace.WithSampler(sdktrace.AlwaysSample()),
)
otel.SetTracerProvider(provider)
tracer := otel.Tracer("order-service")
return &OrderService{
inventoryClient: NewInventoryClient(),
paymentClient: NewPaymentClient(),
shippingClient: NewShippingClient(),
notificationClient: NewNotificationClient(),
tracer: tracer,
}, nil
}
// ProcessOrder handles the end-to-end order processing flow
func (s *OrderService) ProcessOrder(ctx context.Context, order *Order) error {
ctx, span := s.tracer.Start(ctx, "ProcessOrder",
trace.WithAttributes(
attribute.String("order.id", order.ID),
attribute.Float64("order.total", order.Total),
))
defer span.End()
// Check inventory
if err := s.checkInventory(ctx, order); err != nil {
span.RecordError(err)
return fmt.Errorf("inventory check failed: %w", err)
}
// Process payment
if err := s.processPayment(ctx, order); err != nil {
span.RecordError(err)
return fmt.Errorf("payment processing failed: %w", err)
}
// Create shipment
if err := s.createShipment(ctx, order); err != nil {
span.RecordError(err)
return fmt.Errorf("shipment creation failed: %w", err)
}
// Send notification
if err := s.sendNotification(ctx, order); err != nil {
// Non-critical error, just log it
span.RecordError(err)
log.Printf("Failed to send notification: %v", err)
}
return nil
}
// checkInventory verifies that all items are in stock
func (s *OrderService) checkInventory(ctx context.Context, order *Order) error {
ctx, span := s.tracer.Start(ctx, "CheckInventory")
defer span.End()
// Add order items as span attributes for better debugging
for i, item := range order.Items {
span.SetAttributes(
attribute.String(fmt.Sprintf("item.%d.id", i), item.ProductID),
attribute.Int(fmt.Sprintf("item.%d.quantity", i), item.Quantity),
)
}
return s.inventoryClient.CheckAvailability(ctx, order.Items)
}
// processPayment handles payment processing
func (s *OrderService) processPayment(ctx context.Context, order *Order) error {
ctx, span := s.tracer.Start(ctx, "ProcessPayment",
trace.WithAttributes(
attribute.String("payment.method", order.PaymentMethod),
attribute.Float64("payment.amount", order.Total),
))
defer span.End()
return s.paymentClient.ProcessPayment(ctx, order.ID, order.Total, order.PaymentMethod)
}
// createShipment creates a shipment for the order
func (s *OrderService) createShipment(ctx context.Context, order *Order) error {
ctx, span := s.tracer.Start(ctx, "CreateShipment")
defer span.End()
return s.shippingClient.CreateShipment(ctx, order.ID, order.ShippingAddress)
}
// sendNotification sends an order confirmation
func (s *OrderService) sendNotification(ctx context.Context, order *Order) error {
ctx, span := s.tracer.Start(ctx, "SendNotification")
defer span.End()
return s.notificationClient.SendOrderConfirmation(ctx, order.ID, order.CustomerEmail)
}
// Order represents a customer order
type Order struct {
ID string
CustomerID string
CustomerEmail string
Items []OrderItem
Total float64
PaymentMethod string
ShippingAddress string
}
// OrderItem represents an item in an order
type OrderItem struct {
ProductID string
Quantity int
Price float64
}
// Mock clients for demonstration
type InventoryClient struct{}
type PaymentClient struct{}
type ShippingClient struct{}
type NotificationClient struct{}
func NewInventoryClient() *InventoryClient { return &InventoryClient{} }
func NewPaymentClient() *PaymentClient { return &PaymentClient{} }
func NewShippingClient() *ShippingClient { return &ShippingClient{} }
func NewNotificationClient() *NotificationClient { return &NotificationClient{} }
func (c *InventoryClient) CheckAvailability(ctx context.Context, items []OrderItem) error {
// Simulate API call with tracing
tracer := otel.Tracer("inventory-client")
_, span := tracer.Start(ctx, "InventoryAPI.CheckAvailability")
defer span.End()
// Simulate processing time
time.Sleep(50 * time.Millisecond)
return nil
}
func (c *PaymentClient) ProcessPayment(ctx context.Context, orderID string, amount float64, method string) error {
tracer := otel.Tracer("payment-client")
_, span := tracer.Start(ctx, "PaymentAPI.ProcessPayment")
defer span.End()
// Simulate processing time
time.Sleep(100 * time.Millisecond)
return nil
}
func (c *ShippingClient) CreateShipment(ctx context.Context, orderID, address string) error {
tracer := otel.Tracer("shipping-client")
_, span := tracer.Start(ctx, "ShippingAPI.CreateShipment")
defer span.End()
// Simulate processing time
time.Sleep(75 * time.Millisecond)
return nil
}
func (c *NotificationClient) SendOrderConfirmation(ctx context.Context, orderID, email string) error {
tracer := otel.Tracer("notification-client")
_, span := tracer.Start(ctx, "NotificationAPI.SendOrderConfirmation")
defer span.End()
// Simulate processing time
time.Sleep(25 * time.Millisecond)
return nil
}
This tracing implementation demonstrates:
- Distributed tracing: Tracking requests across service boundaries
- Context propagation: Passing trace context between components
- Span attributes: Adding metadata to traces for debugging
- Error recording: Capturing errors in the trace
- Sampling control: Configuring trace sampling rates
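To tie this back to testing, a test in the same package can construct the service and run a traced order end to end. A hedged sketch (the test name and order values below are illustrative; the stdout exporter configured above prints the resulting spans under go test -v):
package observability

import (
	"context"
	"testing"
)

// TestProcessOrderIsTraced exercises the traced order flow; span output from the
// stdout exporter makes unexpected failures easy to diagnose in test logs.
func TestProcessOrderIsTraced(t *testing.T) {
	svc, err := NewOrderService()
	if err != nil {
		t.Fatalf("failed to create order service: %v", err)
	}

	order := &Order{
		ID:            "order-123",
		CustomerEmail: "customer@example.com",
		Items:         []OrderItem{{ProductID: "prod-1", Quantity: 2, Price: 19.99}},
		Total:         39.98,
		PaymentMethod: "card",
	}

	if err := svc.ProcessOrder(context.Background(), order); err != nil {
		t.Fatalf("ProcessOrder failed: %v", err)
	}
}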
Chaos Testing
Chaos testing verifies system resilience by deliberately introducing failures:
package chaos
import (
"context"
"errors"
"fmt"
"math/rand"
"sync"
"time"
)
// ChaosMonkey introduces controlled failures into the system
type ChaosMonkey struct {
enabled bool
failureRate float64
latencyRange [2]time.Duration // Min and max latency
mu sync.RWMutex
targetService string
}
// NewChaosMonkey creates a new chaos monkey
func NewChaosMonkey(targetService string) *ChaosMonkey {
return &ChaosMonkey{
enabled: false,
failureRate: 0.05, // 5% failure rate by default
latencyRange: [2]time.Duration{50 * time.Millisecond, 500 * time.Millisecond},
targetService: targetService,
}
}
// Enable activates the chaos monkey
func (c *ChaosMonkey) Enable() {
c.mu.Lock()
defer c.mu.Unlock()
c.enabled = true
fmt.Printf("Chaos Monkey enabled for %s\n", c.targetService)
}
// Disable deactivates the chaos monkey
func (c *ChaosMonkey) Disable() {
c.mu.Lock()
defer c.mu.Unlock()
c.enabled = false
fmt.Printf("Chaos Monkey disabled for %s\n", c.targetService)
}
// SetFailureRate sets the probability of failures
func (c *ChaosMonkey) SetFailureRate(rate float64) {
c.mu.Lock()
defer c.mu.Unlock()
c.failureRate = rate
fmt.Printf("Chaos Monkey failure rate set to %.1f%% for %s\n", rate*100, c.targetService)
}
// SetLatencyRange sets the min and max latency range
func (c *ChaosMonkey) SetLatencyRange(min, max time.Duration) {
c.mu.Lock()
defer c.mu.Unlock()
c.latencyRange = [2]time.Duration{min, max}
fmt.Printf("Chaos Monkey latency range set to %v-%v for %s\n", min, max, c.targetService)
}
// MaybeInjectFailure potentially injects a failure based on configuration
func (c *ChaosMonkey) MaybeInjectFailure(ctx context.Context) error {
c.mu.RLock()
defer c.mu.RUnlock()
if !c.enabled {
return nil
}
// Check if context is already canceled
if err := ctx.Err(); err != nil {
return err
}
// Randomly inject failures
if rand.Float64() < c.failureRate {
failureType := rand.Intn(3)
switch failureType {
case 0:
// Error injection
return errors.New("chaos monkey injected error")
case 1:
// Latency injection
latency := c.latencyRange[0] + time.Duration(rand.Int63n(int64(c.latencyRange[1]-c.latencyRange[0])))
fmt.Printf("Chaos Monkey injecting %v latency in %s\n", latency, c.targetService)
select {
case <-time.After(latency):
return nil
case <-ctx.Done():
return ctx.Err()
}
case 2:
// Context cancellation
fmt.Printf("Chaos Monkey canceling context in %s\n", c.targetService)
return context.Canceled
}
}
return nil
}
// ResilientService demonstrates resilience patterns
type ResilientService struct {
dependencies map[string]Service
chaos map[string]*ChaosMonkey
}
// Service represents a dependency service
type Service interface {
Call(ctx context.Context, request interface{}) (interface{}, error)
}
// NewResilientService creates a new resilient service
func NewResilientService(dependencies map[string]Service) *ResilientService {
chaos := make(map[string]*ChaosMonkey)
for name := range dependencies {
chaos[name] = NewChaosMonkey(name)
}
return &ResilientService{
dependencies: dependencies,
chaos: chaos,
}
}
// EnableChaosFor enables chaos testing for a specific dependency
func (s *ResilientService) EnableChaosFor(dependency string) {
if monkey, ok := s.chaos[dependency]; ok {
monkey.Enable()
}
}
// DisableChaosFor disables chaos testing for a specific dependency
func (s *ResilientService) DisableChaosFor(dependency string) {
if monkey, ok := s.chaos[dependency]; ok {
monkey.Disable()
}
}
// CallWithRetry demonstrates resilient service calls with retries
func (s *ResilientService) CallWithRetry(ctx context.Context, dependency string, request interface{}) (interface{}, error) {
if service, ok := s.dependencies[dependency]; ok {
monkey := s.chaos[dependency]
// Retry configuration
maxRetries := 3
backoff := 100 * time.Millisecond
var lastErr error
for attempt := 0; attempt <= maxRetries; attempt++ {
// Maybe inject chaos
if err := monkey.MaybeInjectFailure(ctx); err != nil {
fmt.Printf("Chaos injected failure in %s: %v (attempt %d)\n", dependency, err, attempt)
lastErr = err
// Exponential backoff
if attempt < maxRetries {
sleepTime := backoff * time.Duration(1<<attempt)
time.Sleep(sleepTime)
}
continue
}
// Make the actual service call
result, err := service.Call(ctx, request)
if err != nil {
fmt.Printf("Service call to %s failed: %v (attempt %d)\n", dependency, err, attempt)
lastErr = err
// Exponential backoff
if attempt < maxRetries {
sleepTime := backoff * time.Duration(1<<attempt)
time.Sleep(sleepTime)
}
continue
}
// Success
return result, nil
}
return nil, fmt.Errorf("service call to %s failed after %d attempts: %w", dependency, maxRetries+1, lastErr)
}
return nil, fmt.Errorf("unknown dependency: %s", dependency)
}
// CircuitBreaker implements the circuit breaker pattern
type CircuitBreaker struct {
name string
state string // "closed", "open", "half-open"
failureCount int
failureThreshold int
resetTimeout time.Duration
lastFailure time.Time
mu sync.RWMutex
}
// NewCircuitBreaker creates a new circuit breaker
func NewCircuitBreaker(name string) *CircuitBreaker {
return &CircuitBreaker{
name: name,
state: "closed",
failureCount: 0,
failureThreshold: 5,
resetTimeout: 30 * time.Second,
}
}
// Execute runs a function with circuit breaker protection
func (cb *CircuitBreaker) Execute(ctx context.Context, fn func(context.Context) error) error {
cb.mu.RLock()
state := cb.state
cb.mu.RUnlock()
// If circuit is open, check if we should try again
if state == "open" {
cb.mu.RLock()
timeSinceLastFailure := time.Since(cb.lastFailure)
cb.mu.RUnlock()
if timeSinceLastFailure < cb.resetTimeout {
return fmt.Errorf("circuit breaker %s is open", cb.name)
}
// Transition to half-open
cb.mu.Lock()
cb.state = "half-open"
cb.mu.Unlock()
fmt.Printf("Circuit breaker %s transitioning to half-open\n", cb.name)
}
// Execute the function
err := fn(ctx)
// Update circuit breaker state based on result
cb.mu.Lock()
defer cb.mu.Unlock()
if err != nil {
// Handle failure
cb.failureCount++
cb.lastFailure = time.Now()
// Check if we should open the circuit
if cb.state == "closed" && cb.failureCount >= cb.failureThreshold {
cb.state = "open"
fmt.Printf("Circuit breaker %s opened after %d failures\n", cb.name, cb.failureCount)
} else if cb.state == "half-open" {
cb.state = "open"
fmt.Printf("Circuit breaker %s reopened after failure in half-open state\n", cb.name)
}
return err
}
// Handle success
if cb.state == "half-open" {
cb.state = "closed"
cb.failureCount = 0
fmt.Printf("Circuit breaker %s closed after success in half-open state\n", cb.name)
} else if cb.state == "closed" {
cb.failureCount = 0
}
return nil
}
This chaos testing implementation demonstrates:
- Controlled failure injection: Introducing errors, latency, and cancellations
- Resilience patterns: Implementing retries and circuit breakers
- Configurable chaos: Adjusting failure rates and types
- Service targeting: Applying chaos to specific dependencies
- Failure recovery: Testing system recovery from failures
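A short usage sketch wiring these pieces together in a test (the fake dependency and test name are illustrative):
package chaos

import (
	"context"
	"testing"
)

// fakeService is an illustrative dependency that always succeeds.
type fakeService struct{}

func (fakeService) Call(ctx context.Context, request interface{}) (interface{}, error) {
	return "ok", nil
}

// TestCallWithRetrySurvivesChaos enables the chaos monkey for one dependency and
// checks that the retry loop still tends to produce a result.
func TestCallWithRetrySurvivesChaos(t *testing.T) {
	svc := NewResilientService(map[string]Service{
		"inventory": fakeService{},
	})
	svc.EnableChaosFor("inventory")

	if _, err := svc.CallWithRetry(context.Background(), "inventory", "list-items"); err != nil {
		// Chaos is probabilistic: with the default 5% failure rate and three retries
		// a final failure should be rare, so log it rather than hard-fail the suite.
		t.Logf("call failed even after retries: %v", err)
	}
}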
Parting Thoughts
Advanced testing strategies are essential for building robust, high-performance Go applications. By combining traditional unit tests with more sophisticated approaches like property-based testing, fuzzing, benchmarking, and chaos testing, you can ensure your systems perform reliably under all conditions.
The techniques covered in this guide—from table-driven tests and mocking to distributed tracing and canary deployments—provide a comprehensive toolkit for Go developers building mission-critical systems. By implementing these strategies, you can catch bugs earlier, optimize performance, and build confidence in your code’s resilience.
Remember that testing is not a one-time activity but an ongoing process. As your systems evolve, so should your testing strategies. Continuously refine your approach based on production insights, and always be prepared to adapt to new challenges.
With these advanced testing techniques in your arsenal, you’re well-equipped to build truly bulletproof Go applications that can withstand the demands of modern production environments.