Building a Resilient Event-Driven SSR Frontend for a Legacy Oracle Monolith Using Go, SQS, and Storybook


The project began with a familiar, suffocating constraint: a monolithic enterprise application, its core business logic and data entombed within an Oracle database. The front end, a brittle tapestry of server-side generated JSPs, was directly coupled to this database. Every request for a product detail page hammered the Oracle instance with complex joins. Performance was degrading, and the business demanded a modern, fast, and SEO-friendly product catalog experience. A full rewrite was out of the question due to budget and risk. The mandate was to build a new presentation layer that was completely decoupled, highly performant, and could be developed and iterated on independently. The monolith’s write operations had to remain untouched, but we needed to reflect its data changes in near real-time.

The Initial Architectural Sketch and Technology Rationale

The core pain point was the synchronous, tight coupling between the presentation layer and the Oracle database. Any new solution had to break this coupling. This immediately pointed towards an event-driven architecture. Instead of the new frontend pulling data from Oracle, the Oracle system would push change events into a durable message queue. Our new frontend service would consume these events and maintain its own optimized read model.

This led to the following technology choices, each made for pragmatic reasons:

  1. Backend Service: Go with Echo Framework. The consumer service needed to be lightweight, fast, and excellent at handling concurrent I/O—specifically, long-polling an SQS queue while simultaneously serving HTTP requests for Server-Side Rendering (SSR). Go’s concurrency model with goroutines and channels was a perfect fit. The Echo framework was chosen for its performance and minimalist API, providing just enough structure without imposing heavy-handed conventions.

  2. Message Bus: AWS SQS. While Kafka is often the default for event streaming, it felt like overkill here. Our requirement was simple, reliable, point-to-point decoupling. SQS provides exactly that with at-least-once delivery, automatic scaling, and a dead-letter queue (DLQ) mechanism for handling failed messages. It’s a managed service that requires minimal operational overhead, a key consideration for a small team.

  3. Frontend Rendering: Server-Side Rendering (SSR). The primary driver was SEO for the public-facing product catalog. A secondary benefit was a fast First Contentful Paint (FCP), as the user receives a fully rendered HTML page. We decided to use React for the component model but render it on the Go server.

  4. UI Development: Storybook. The UI components for the new catalog pages had to be developed in parallel with the backend infrastructure. Storybook provided an isolated workshop to build, test, and document these React components with mocked data, completely independent of the Go service or the SQS pipeline. This decoupling of front-end and back-end development workflows was critical to moving quickly.

  5. The Unmovable Object: Oracle DB. We could not change the database. The assumption was that another team could implement a mechanism (using database triggers or Change Data Capture tools like GoldenGate) to publish a JSON payload representing a product update to an SQS queue whenever a relevant record in the Oracle DB was changed. Our system would start from the SQS message; a sketch of the assumed payload contract follows this list.
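
We never saw the publisher's code, so it is worth pinning down the contract we assumed. The sketch below is ours, not the other team's implementation: the queue URL and product values are placeholders, and the only point is the JSON shape of the message body, which matches the Product struct defined in the consumer later.

// file: docs/examples/publish_sketch.go
// Illustrative only: the kind of message the upstream publisher is assumed
// to emit. The queue URL and product data are placeholders.
package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	awsconfig "github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sqs"

	"github.com/your-repo/your-project/internal/consumer"
)

func main() {
	ctx := context.Background()
	awsCfg, err := awsconfig.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatalf("load AWS config: %v", err)
	}
	client := sqs.NewFromConfig(awsCfg)

	// The message body is plain JSON matching consumer.Product.
	body, err := json.Marshal(consumer.Product{
		ID:          "prod-123",
		Name:        "Quantum Entanglement Device",
		Description: "A handy device for violating causality.",
		Price:       1999.99,
		ImageURLs:   []string{"/static/images/qed.jpg"},
	})
	if err != nil {
		log.Fatalf("marshal product: %v", err)
	}

	queueURL := "https://sqs.us-east-1.amazonaws.com/123456789012/product-updates"
	if _, err := client.SendMessage(ctx, &sqs.SendMessageInput{
		QueueUrl:    &queueURL,
		MessageBody: aws.String(string(body)),
	}); err != nil {
		log.Fatalf("send message: %v", err)
	}
}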

The overall data flow would be:
Oracle DB Change -> Event Publisher -> AWS SQS Message -> Go/Echo Consumer -> In-Memory Cache Update -> User HTTP Request -> Go/Echo SSR Handler -> Rendered HTML Response

sequenceDiagram
    participant Oracle as Oracle DB
    participant Publisher as Event Publisher
    participant SQS as AWS SQS Queue
    participant GoService as Go/Echo Service
    participant Cache as In-Memory Cache
    participant User as User's Browser

    Oracle ->> Publisher: Record Changed (e.g., Trigger)
    Publisher ->> SQS: Publish Product Update Message
    GoService ->> SQS: Long-poll for messages
    SQS -->> GoService: Delivers message
    GoService ->> GoService: Process message JSON
    GoService ->> Cache: Update/Insert product data
    GoService ->> SQS: Delete message

    User ->> GoService: GET /product/{id}
    GoService ->> Cache: Read product data for {id}
    Cache -->> GoService: Return product data
    GoService ->> GoService: Server-Side Render HTML with data
    GoService -->> User: Return fully rendered HTML page

Implementing the SQS Consumer Backbone in Go

The heart of the system is the SQS consumer. It must be resilient, handle failures gracefully, and run continuously in the background. A common mistake is to write a simple loop that polls and processes. In a real-world project, you need structured logging, graceful shutdown handling, and robust error management.

First, let’s define the configuration and the service structure. We’ll use a struct to hold dependencies like the AWS client, logger, and the data store.

// file: internal/config/config.go
package config

import (
	"github.com/kelseyhightower/envconfig"
	"log"
)

type AppConfig struct {
	AWSRegion        string `envconfig:"AWS_REGION" default:"us-east-1"`
	SQSQueueURL      string `envconfig:"SQS_QUEUE_URL" required:"true"`
	SQSWaitTimeSecs  int64  `envconfig:"SQS_WAIT_TIME_SECS" default:"20"`
	HTTPServerAddr   string `envconfig:"HTTP_SERVER_ADDR" default:":8080"`
}

func Load() *AppConfig {
	var cfg AppConfig
	err := envconfig.Process("", &cfg)
	if err != nil {
		log.Fatalf("Failed to load configuration: %v", err)
	}
	return &cfg
}

Now for the consumer service itself. It runs in a dedicated goroutine, stops when its context is canceled, and signals completion through a WaitGroup.

// file: internal/consumer/consumer.go
package consumer

import (
	"context"
	"encoding/json"
	"log/slog"
	"sync"
	"time"

	"github.com/aws/aws-sdk-go-v2/service/sqs"
	"github.com/aws/aws-sdk-go-v2/service/sqs/types"
)

// SQSAPI is an interface for the subset of SQS client methods we use.
// This is crucial for unit testing, allowing us to mock the SQS client.
type SQSAPI interface {
	ReceiveMessage(ctx context.Context, params *sqs.ReceiveMessageInput, optFns ...func(*sqs.Options)) (*sqs.ReceiveMessageOutput, error)
	DeleteMessage(ctx context.Context, params *sqs.DeleteMessageInput, optFns ...func(*sqs.Options)) (*sqs.DeleteMessageOutput, error)
}

// Product represents the structure of the data we expect from SQS.
type Product struct {
	ID          string   `json:"id"`
	Name        string   `json:"name"`
	Description string   `json:"description"`
	Price       float64  `json:"price"`
	ImageURLs   []string `json:"image_urls"`
	IsDeleted   bool     `json:"is_deleted,omitempty"` // A soft delete flag
}

// DataStore is an interface for our data persistence layer.
// In this example, it will be an in-memory map, but it could be Redis in production.
type DataStore interface {
	UpsertProduct(ctx context.Context, product Product) error
	DeleteProduct(ctx context.Context, productID string) error
}

type Service struct {
	sqsClient SQSAPI
	queueURL  string
	store     DataStore
	logger    *slog.Logger
	waitTime  int64
}

func NewService(client SQSAPI, queueURL string, store DataStore, logger *slog.Logger, waitTimeSecs int64) *Service {
	return &Service{
		sqsClient: client,
		queueURL:  queueURL,
		store:     store,
		logger:    logger,
		waitTime:  waitTimeSecs,
	}
}

// Start runs the consumer loop until the context is canceled.
func (s *Service) Start(ctx context.Context, wg *sync.WaitGroup) {
	defer wg.Done()
	s.logger.Info("starting SQS consumer", "queue_url", s.queueURL)

	for {
		select {
		case <-ctx.Done():
			s.logger.Info("shutting down SQS consumer")
			return
		default:
			s.pollAndProcess(ctx)
		}
	}
}

func (s *Service) pollAndProcess(ctx context.Context) {
	receiveInput := &sqs.ReceiveMessageInput{
		QueueUrl:            &s.queueURL,
		MaxNumberOfMessages: 10, // Process in batches
		WaitTimeSeconds:     int32(s.waitTime),
		// We set a visibility timeout on the queue itself.
		// If processing takes longer, we'd need to extend it.
	}

	output, err := s.sqsClient.ReceiveMessage(ctx, receiveInput)
	if err != nil {
		s.logger.Error("failed to receive messages from SQS", "error", err)
		// Backoff before retrying to avoid hammering a failing service
		time.Sleep(5 * time.Second)
		return
	}

	if len(output.Messages) == 0 {
		return // No messages, just loop again
	}

	s.logger.Info("received messages", "count", len(output.Messages))

	var processWg sync.WaitGroup
	for _, msg := range output.Messages {
		processWg.Add(1)
		// Process each message in a separate goroutine for concurrency.
		go func(m types.Message) {
			defer processWg.Done()
			if err := s.processMessage(ctx, m); err != nil {
				s.logger.Error("failed to process message", "message_id", *m.MessageId, "error", err)
				// The message will become visible again on the queue after the visibility timeout
				// and will be retried. After enough failures, it should go to a DLQ.
			} else {
				// Only delete the message if processing was successful.
				if err := s.deleteMessage(ctx, m.ReceiptHandle); err != nil {
					s.logger.Error("failed to delete message from SQS", "message_id", *m.MessageId, "receipt_handle", *m.ReceiptHandle, "error", err)
				}
			}
		}(msg)
	}
	processWg.Wait()
}

func (s *Service) processMessage(ctx context.Context, msg types.Message) error {
	var product Product
	if err := json.Unmarshal([]byte(*msg.Body), &product); err != nil {
		// This is a "poison pill" - a malformed message.
		// We return an error, but importantly, we do NOT delete it.
		// It will be retried and eventually land in the DLQ for manual inspection.
		s.logger.Error("failed to unmarshal message body", "body", *msg.Body, "error", err)
		return err
	}
    
	// The pitfall here is not validating the deserialized data.
	// In production, you'd add fuller validation logic here.
	if product.ID == "" {
		s.logger.Error("product data validation failed: missing ID", "body", *msg.Body)
		return errors.New("product message missing ID") // again, treat as a poison pill
	}

	if product.IsDeleted {
		if err := s.store.DeleteProduct(ctx, product.ID); err != nil {
			s.logger.Error("failed to delete product in data store", "product_id", product.ID, "error", err)
			return err
		}
		s.logger.Info("processed delete event", "product_id", product.ID)
	} else {
		if err := s.store.UpsertProduct(ctx, product); err != nil {
			s.logger.Error("failed to upsert product in data store", "product_id", product.ID, "error", err)
			return err
		}
		s.logger.Info("processed upsert event", "product_id", product.ID, "product_name", product.Name)
	}

	return nil
}

func (s *Service) deleteMessage(ctx context.Context, receiptHandle *string) error {
	deleteInput := &sqs.DeleteMessageInput{
		QueueUrl:      &s.queueURL,
		ReceiptHandle: receiptHandle,
	}
	_, err := s.sqsClient.DeleteMessage(ctx, deleteInput)
	return err
}

This code is substantially more robust than a simple loop. It uses interfaces for testability, handles graceful shutdowns via context, processes messages concurrently, and has a clear strategy for poison pills (let them fail and go to the DLQ).
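
The SQSAPI and DataStore interfaces pay off immediately in unit tests. The sketch below is illustrative rather than lifted from the project's real suite: the fake store and test payload are invented here, and a test of the full poll loop would stub ReceiveMessage and DeleteMessage on SQSAPI in the same way.

// file: internal/consumer/consumer_test.go
// Sketch: exercising processMessage without AWS by faking the DataStore.
package consumer

import (
	"context"
	"log/slog"
	"testing"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/sqs/types"
)

// fakeStore records upserts and deletes in memory.
type fakeStore struct {
	upserted map[string]Product
	deleted  []string
}

func (f *fakeStore) UpsertProduct(_ context.Context, p Product) error {
	if f.upserted == nil {
		f.upserted = map[string]Product{}
	}
	f.upserted[p.ID] = p
	return nil
}

func (f *fakeStore) DeleteProduct(_ context.Context, id string) error {
	f.deleted = append(f.deleted, id)
	return nil
}

func TestProcessMessageUpsert(t *testing.T) {
	store := &fakeStore{}
	// The SQS client is nil because processMessage never touches it.
	svc := NewService(nil, "https://example/queue", store, slog.Default(), 20)

	msg := types.Message{
		MessageId: aws.String("m-1"),
		Body:      aws.String(`{"id":"prod-123","name":"Widget","price":9.99}`),
	}

	if err := svc.processMessage(context.Background(), msg); err != nil {
		t.Fatalf("processMessage returned error: %v", err)
	}
	if _, ok := store.upserted["prod-123"]; !ok {
		t.Fatal("expected product prod-123 to be upserted")
	}
}

One in-code comment above also deserves expansion: the visibility timeout. If a batch occasionally needs more time than the queue's configured timeout, the consumer can ask SQS to extend it. A minimal sketch, assuming we widen SQSAPI (or use the concrete client) to expose ChangeMessageVisibility:

// file: internal/consumer/visibility.go
// Sketch only: ask SQS for more time on a message we are still processing.
// Assumes access to the concrete *sqs.Client (or a widened SQSAPI interface).
package consumer

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/service/sqs"
)

func extendVisibility(ctx context.Context, client *sqs.Client, queueURL string, receiptHandle *string, seconds int32) error {
	_, err := client.ChangeMessageVisibility(ctx, &sqs.ChangeMessageVisibilityInput{
		QueueUrl:          &queueURL,
		ReceiptHandle:     receiptHandle,
		VisibilityTimeout: seconds,
	})
	return err
}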

The In-Memory Data Store

For this implementation, we’ll use a simple thread-safe map as our DataStore. In a production system that needs to scale horizontally, this would be replaced with Redis or another external cache. The key is that the DataStore interface hides this implementation detail.

// file: internal/store/memory.go
package store

import (
	"context"
	"fmt"
	"sync"

	"github.com/your-repo/your-project/internal/consumer"
)

// InMemoryStore is a thread-safe in-memory store for products.
type InMemoryStore struct {
	mu       sync.RWMutex
	products map[string]consumer.Product
}

func NewInMemoryStore() *InMemoryStore {
	return &InMemoryStore{
		products: make(map[string]consumer.Product),
	}
}

func (s *InMemoryStore) UpsertProduct(_ context.Context, product consumer.Product) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.products[product.ID] = product
	return nil
}

func (s *InMemoryStore) DeleteProduct(_ context.Context, productID string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.products, productID)
	return nil
}

func (s *InMemoryStore) GetProduct(_ context.Context, productID string) (consumer.Product, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	product, ok := s.products[productID]
	if !ok {
		return consumer.Product{}, fmt.Errorf("product with ID %s not found", productID)
	}
	return product, nil
}
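
Since the plan is to replace this map with Redis in production, here is roughly what that swap looks like behind the same DataStore interface. This is a sketch, assuming github.com/redis/go-redis/v9; the key naming and the no-expiry policy are illustrative choices, not something the project standardized.

// file: internal/store/redis.go
// Sketch: a Redis-backed implementation of the same DataStore interface.
package store

import (
	"context"
	"encoding/json"
	"fmt"

	"github.com/redis/go-redis/v9"

	"github.com/your-repo/your-project/internal/consumer"
)

type RedisStore struct {
	client *redis.Client
}

func NewRedisStore(addr string) *RedisStore {
	return &RedisStore{client: redis.NewClient(&redis.Options{Addr: addr})}
}

func productKey(id string) string { return "product:" + id }

func (s *RedisStore) UpsertProduct(ctx context.Context, p consumer.Product) error {
	b, err := json.Marshal(p)
	if err != nil {
		return err
	}
	// TTL 0 = no expiry; the event stream keeps entries fresh.
	return s.client.Set(ctx, productKey(p.ID), b, 0).Err()
}

func (s *RedisStore) DeleteProduct(ctx context.Context, id string) error {
	return s.client.Del(ctx, productKey(id)).Err()
}

func (s *RedisStore) GetProduct(ctx context.Context, id string) (consumer.Product, error) {
	b, err := s.client.Get(ctx, productKey(id)).Bytes()
	if err == redis.Nil {
		return consumer.Product{}, fmt.Errorf("product with ID %s not found", id)
	}
	if err != nil {
		return consumer.Product{}, err
	}
	var p consumer.Product
	if err := json.Unmarshal(b, &p); err != nil {
		return consumer.Product{}, err
	}
	return p, nil
}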

The Echo SSR Server

Now we build the HTTP server that will serve the server-rendered pages. The handler for a product page will fetch data from our InMemoryStore and use a template to render the final HTML.

A significant challenge with Go SSR and React is bridging the two worlds. A common, pragmatic approach is to have the Go server execute a pre-built JavaScript bundle that exposes a simple rendering function. We can use a library like v8go to embed the V8 engine in our Go binary.
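
To make that concrete, here is a minimal sketch of the v8go idea. It assumes the frontend build emits a server bundle exposing a global renderProduct(propsJSON) function that returns an HTML string; that function name is our convention, not a v8go or React API, and a production setup would keep a pool of isolates rather than one shared context.

// file: internal/ssr/render.go
// Sketch: evaluate a pre-built JS bundle in an embedded V8 engine and call
// its render function. A single *v8.Context is not safe for concurrent use.
package ssr

import (
	"fmt"
	"os"

	v8 "rogchap.com/v8go"
)

type Renderer struct {
	ctx *v8.Context
}

func NewRenderer(bundlePath string) (*Renderer, error) {
	src, err := os.ReadFile(bundlePath)
	if err != nil {
		return nil, err
	}
	iso := v8.NewIsolate()
	ctx := v8.NewContext(iso)
	// Evaluate the bundle once so the global renderProduct function exists.
	if _, err := ctx.RunScript(string(src), "server.bundle.js"); err != nil {
		return nil, err
	}
	return &Renderer{ctx: ctx}, nil
}

// RenderProduct passes a JSON blob of props into the bundle and returns HTML.
func (r *Renderer) RenderProduct(propsJSON string) (string, error) {
	val, err := r.ctx.RunScript(fmt.Sprintf("renderProduct(%s)", propsJSON), "render_call.js")
	if err != nil {
		return "", err
	}
	return val.String(), nil
}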

// file: internal/server/server.go
package server

import (
	"context"
	"fmt"
	"log/slog"
	"net/http"

	// Simplified for example. In reality, you'd use a JS runtime like v8go.
	// For this example, we will just use Go's html/template for simplicity,
	// but the principle of injecting JSON data remains the same.
	"html/template"
	"io"

	"github.com/labstack/echo/v4"
	"github.com/labstack/echo/v4/middleware"
	"github.com/your-repo/your-project/internal/consumer"
)

// DataStoreReader defines the read-only interface our server needs.
type DataStoreReader interface {
	GetProduct(ctx context.Context, productID string) (consumer.Product, error)
}

// TemplateRenderer is a custom renderer for Echo that uses Go's html/template.
type TemplateRenderer struct {
	templates *template.Template
}

func (t *TemplateRenderer) Render(w io.Writer, name string, data interface{}, c echo.Context) error {
	return t.templates.ExecuteTemplate(w, name, data)
}

func New(store DataStoreReader, logger *slog.Logger) *echo.Echo {
	e := echo.New()
	e.Use(middleware.Logger())
	e.Use(middleware.Recover())
	
	// In a real SSR setup with React, you would load a JS bundle here and
	// initialize a pool of JS runtimes. The template below simulates this.
	// It assumes the client-side React will hydrate from a JSON blob.
	renderer := &TemplateRenderer{
		templates: template.Must(template.ParseGlob("web/templates/*.html")),
	}
	e.Renderer = renderer

	h := &handler{store: store, logger: logger}
	e.GET("/product/:id", h.handleGetProduct)

	// Serve the client-side bundle referenced by the templates.
	// In production this might sit behind a CDN; the path here is illustrative.
	e.Static("/static", "web/static")

	return e
}

type handler struct {
	store  DataStoreReader
	logger *slog.Logger
}

func (h *handler) handleGetProduct(c echo.Context) error {
	productID := c.Param("id")
	if productID == "" {
		return c.String(http.StatusBadRequest, "Product ID is required")
	}

	product, err := h.store.GetProduct(c.Request().Context(), productID)
	if err != nil {
		h.logger.Warn("product not found in store", "product_id", productID)
		// Returning a 404 is crucial for SEO and user experience.
		return c.Render(http.StatusNotFound, "notfound.html", nil)
	}

	// This is the data that will be used to render the page on the server.
	// It's also embedded as JSON in the HTML for client-side hydration.
	return c.Render(http.StatusOK, "product.html", product)
}

Here’s the corresponding product.html Go template. This demonstrates how data from the Go backend is used to construct the initial HTML and also embedded for the client-side JavaScript.

<!-- file: web/templates/product.html -->
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-g">
    <title>{{ .Name }}</title>
    <!-- Add other meta tags for SEO -->
</head>
<body>
    <!-- The root element where the React app will mount -->
    <div id="root">
        <!-- Server-Side Rendered Content -->
        <h1>{{ .Name }}</h1>
        <p>{{ .Description }}</p>
        <strong>Price: ${{ .Price }}</strong>
        <!-- End Server-Side Rendered Content -->
    </div>

    <!-- 
      Embed the initial state for the client-side app to hydrate.
      A common mistake is not properly escaping this JSON, leading to XSS vulnerabilities.
      Go's template engine handles this safely by default.
    -->
    <script type="application/json" id="initial-data">
        {{ . }}
    </script>
    
    <!-- Load the client-side React bundle -->
    <script src="/static/js/bundle.js"></script>
</body>
</html>

Storybook and the Component Workflow

The power of Storybook comes from developing the UI in isolation. The React components that would render the product information are built without needing the Go server to be running.

A typical product card component story might look like this:

// file: web/src/components/ProductCard.stories.jsx
import React from 'react';
import ProductCard from './ProductCard';

export default {
  title: 'Components/ProductCard',
  component: ProductCard,
};

const Template = (args) => <ProductCard {...args} />;

export const Default = Template.bind({});
Default.args = {
  product: {
    id: 'prod-123',
    name: 'Quantum Entanglement Device',
    description: 'A handy device for violating causality in your local spacetime continuum.',
    price: 1999.99,
    imageURLs: ['/static/images/qed.jpg'],
  },
};

export const OutOfStock = Template.bind({});
OutOfStock.args = {
  product: {
    ...Default.args.product,
    name: 'Flux Capacitor (Out of Stock)',
  },
  isOutOfStock: true, // Example of controlling state via props
};

The team develops ProductCard.jsx and dozens of other components in Storybook. The build process (e.g., using Webpack or Vite) then creates the bundle.js that our Go server references. This bundle contains all the React code needed for client-side hydration. The server provides the initial HTML and data, and the client-side JS takes over for subsequent interactivity. This separation of concerns was the key to unlocking parallel development.

Tying It All Together in main.go

The final piece is the main application entrypoint that initializes all components and manages their lifecycles.

// file: cmd/server/main.go
package main

import (
	"context"
	"log/slog"
	"net/http"
	"os"
	"os/signal"
	"sync"
	"syscall"
	"time"

	"github.com/aws/aws-sdk-go-v2/config as awsconfig"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
	"github.com/your-repo/your-project/internal/config"
	"github.com/your-repo/your-project/internal/consumer"
	"github.com/your-repo/your-project/internal/server"
	"github.com/your-repo/your-project/internal/store"
)

func main() {
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
	cfg := config.Load()

	// Set up main context for graceful shutdown
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	defer stop()

	// AWS SDK Configuration
	awsCfg, err := awsconfig.LoadDefaultConfig(ctx, awsconfig.WithRegion(cfg.AWSRegion))
	if err != nil {
		logger.Error("failed to load AWS config", "error", err)
		os.Exit(1)
	}
	sqsClient := sqs.NewFromConfig(awsCfg)

	// Dependencies
	dataStore := store.NewInMemoryStore()

	// Initialize services
	consumerSvc := consumer.NewService(sqsClient, cfg.SQSQueueURL, dataStore, logger, cfg.SQSWaitTimeSecs)
	echoServer := server.New(dataStore, logger)

	var wg sync.WaitGroup

	// Start the SQS consumer in a background goroutine
	wg.Add(1)
	go consumerSvc.Start(ctx, &wg)

	// Start the HTTP server
	go func() {
		logger.Info("starting HTTP server", "addr", cfg.HTTPServerAddr)
		if err := echoServer.Start(cfg.HTTPServerAddr); err != nil && err != http.ErrServerClosed {
			logger.Error("HTTP server shut down unexpectedly", "error", err)
			stop() // Trigger shutdown if server fails
		}
	}()

	// Wait for shutdown signal
	<-ctx.Done()

	logger.Info("shutdown signal received, initiating graceful shutdown")

	// Gracefully shut down the HTTP server
	shutdownCtx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := echoServer.Shutdown(shutdownCtx); err != nil {
		logger.Error("HTTP server graceful shutdown failed", "error", err)
	} else {
		logger.Info("HTTP server shut down gracefully")
	}

	// Wait for the consumer to finish its current work
	wg.Wait()
	logger.Info("all services shut down")
}

Lingering Issues and Future Iterations

This architecture successfully decoupled the presentation layer, achieving the primary goal. However, it’s not without its own set of trade-offs and areas for improvement. The in-memory data store is a glaring single point of failure and will not scale beyond a single service instance; replacing it with a distributed cache like Redis is the immediate next step for production readiness. This introduces its own complexity around cache invalidation and connection management.

Furthermore, the initial state of the system is undefined. When a new instance of the service starts, its cache is empty. A “cache warming” mechanism is required, likely a batch job that reads from a snapshot of the Oracle database and populates the cache directly, bypassing SQS for the initial load.
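
A minimal sketch of that warming step, assuming the batch export produces a JSON array of products in a file (the path, the file format, and the SNAPSHOT_PATH knob mentioned below are assumptions for illustration):

// file: internal/store/warm.go
// Sketch: load a snapshot exported from Oracle into any DataStore,
// bypassing SQS, before the consumer and HTTP server start.
package store

import (
	"context"
	"encoding/json"
	"os"

	"github.com/your-repo/your-project/internal/consumer"
)

// WarmFromSnapshot reads a JSON array of products and upserts each one.
func WarmFromSnapshot(ctx context.Context, path string, ds consumer.DataStore) (int, error) {
	f, err := os.Open(path)
	if err != nil {
		return 0, err
	}
	defer f.Close()

	var products []consumer.Product
	if err := json.NewDecoder(f).Decode(&products); err != nil {
		return 0, err
	}

	for _, p := range products {
		if err := ds.UpsertProduct(ctx, p); err != nil {
			return 0, err
		}
	}
	return len(products), nil
}

In main.go this would run right after the store is constructed and before consumerSvc.Start, gated behind a hypothetical SNAPSHOT_PATH config value so that instances without a snapshot still boot cleanly.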

Finally, the current model is purely eventually consistent. The time between a record changing in Oracle and it being visible on the SSR page is subject to the latency of the publisher, SQS, and our consumer. For a product catalog, this is acceptable. For use cases requiring read-your-writes consistency or transactional integrity, this architecture would be entirely unsuitable. The boundaries of its applicability must be clearly understood.

