Automating React Native Client Credential Rotation Using a Spinnaker Pipeline and DynamoDB Audit Trail


The incident that forced this entire project began with a routine penetration test report. A set of client-side API credentials for our primary React Native application was found hardcoded within a transpiled JavaScript bundle. While obfuscated, they were recoverable. The immediate mitigation was painful: manually generate a new key, deploy a new backend configuration, and push a mandatory app update to every user, invalidating the old key after a frantic 72-hour grace period. This process was manual, error-prone, and unsustainable. It was clear we needed a system for short-lived, dynamically rotated credentials on the client side, with a robust, auditable trail and a zero-downtime guarantee for legitimate users.

Our initial concept was to build a credential rotation orchestrator. The mobile client would hold a credential valid for a short period, say 7 days. Before expiration, the client would need to fetch a new one. In the event of a suspected leak, we needed a “kill switch” to revoke a specific credential set instantly. The challenge was not just in the client-side logic, but in the backend automation. A collection of cron jobs and shell scripts felt fragile. We already used Spinnaker for our backend deployments, and its core competency is orchestrating complex, multi-stage workflows. The decision was made to model this security process as a first-class delivery pipeline within Spinnaker.

For the audit trail, a traditional relational database felt like overkill. We didn’t need complex joins or transactions. What we needed was a highly available, scalable, write-optimized ledger to record every event in a credential’s lifecycle: issuance, supersession, and revocation. DynamoDB was the logical choice. Its schema-less nature, pay-per-request model, and Time-to-Live (TTL) feature for purging old records fit the requirements perfectly.

The Foundation: DynamoDB as an Immutable Audit Log

Before orchestrating anything, we needed a source of truth. The DynamoDB table acts as this immutable log. Every action taken by the Spinnaker pipeline against a credential must be recorded here first.

Here is the Terraform configuration for the table. In a real-world project, you never configure production infrastructure manually.

# terraform/dynamodb/main.tf

resource "aws_dynamodb_table" "credential_audit_log" {
  name           = "mobile-credential-audit-log"
  billing_mode   = "PAY_PER_REQUEST"
  hash_key       = "credential_id"
  range_key      = "event_timestamp"

  attribute {
    name = "credential_id"
    type = "S"
  }

  attribute {
    name = "event_timestamp"
    type = "N"
  }
  
  attribute {
    name = "status"
    type = "S"
  }

  # Global Secondary Index to query by status
  global_secondary_index {
    name            = "StatusIndex"
    hash_key        = "status"
    projection_type = "ALL"
  }

  ttl {
    attribute_name = "expires_at"
    enabled        = true
  }

  tags = {
    Project     = "ClientSecurity"
    ManagedBy   = "Terraform"
  }
}

# IAM policy for Spinnaker to interact with the table and Secrets Manager
resource "aws_iam_policy" "spinnaker_credential_manager_policy" {
  name        = "SpinnakerCredentialManagerPolicy"
  description = "Allows Spinnaker to manage client credentials and the audit log"

  policy = jsonencode({
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DynamoDBAuditLogAccess",
            "Effect": "Allow",
            "Action": [
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:Query"
            ],
            "Resource": aws_dynamodb_table.credential_audit_log.arn
        },
        {
            "Sid": "SecretsManagerAccess",
            "Effect": "Allow",
            "Action": [
                "secretsmanager:CreateSecret",
                "secretsmanager:UpdateSecret",
                "secretsmanager:PutSecretValue",
                "secretsmanager:DescribeSecret",
                "secretsmanager:TagResource"
            ],
            // A common mistake is using a wildcard here.
            // Lock this down to secrets with a specific path or tag.
            "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:mobile/client/*"
        },
        {
            "Sid": "ConfigPointerAccess",
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-app-config-bucket/mobile/active-credential-pointer.json"
        }
    ]
  })
}

The schema is critical. The primary key is a composite of credential_id (a UUID for a specific secret) and event_timestamp. This allows us to store an immutable history for each credential. The GSI on the status field (ACTIVE, SUPERSEDED, REVOKED) is for operational queries, like “show me all currently active keys.” We also enable TTL on an expires_at attribute, which we’ll set to 90 days post-revocation, ensuring we don’t store audit data forever.
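
For example, the “show me all currently active keys” query runs against that GSI. Here is a minimal sketch using the AWS SDK for JavaScript v3 (the file name is illustrative); note that status is a DynamoDB reserved word, so it has to be aliased in the expression.

// ops/query-active-credentials.js (illustrative sketch)
import { DynamoDBClient, QueryCommand } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({ region: "us-east-1" });

export const listActiveCredentials = async () => {
  // "status" is a reserved word, so it must be aliased via ExpressionAttributeNames.
  const result = await client.send(new QueryCommand({
    TableName: "mobile-credential-audit-log",
    IndexName: "StatusIndex",
    KeyConditionExpression: "#st = :active",
    ExpressionAttributeNames: { "#st": "status" },
    ExpressionAttributeValues: { ":active": { S: "ACTIVE" } },
  }));
  return result.Items ?? [];
};

Because the log is append-only, a credential’s original ACTIVE item remains in the index after a later SUPERSEDED or REVOKED item is written, so consumers of this query should deduplicate by credential_id and keep only the latest event_timestamp per key.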

The Orchestrator: The Spinnaker Rotation Pipeline

With the data store defined, the core logic resides in a Spinnaker pipeline. This isn’t a deployment pipeline; it’s a security process pipeline. It’s triggered on a weekly schedule.

graph TD
    A[Trigger: Cron Weekly] --> B{Generate New Secret};
    B --> C{Log Issuance in DynamoDB};
    C --> D{Update Client Config Pointer};
    D --> E[Wait: 6 Days];
    E --> F{Log Supersession in DynamoDB};
    F --> G{Decommission Old Secret};

Here’s a simplified representation of the pipeline’s JSON definition. In practice, this is managed via Spinnaker’s pipeline templates or a Git-based workflow.

{
  "name": "Weekly-React-Native-Credential-Rotation",
  "application": "mobile-security",
  "triggers": [
    {
      "type": "cron",
      "cronExpression": "0 0 0 ? * MON *" // Every Monday at midnight UTC
    }
  ],
  "stages": [
    {
      "name": "1-GenerateNewSecret",
      "type": "script",
      "refId": "1",
      "requisiteStageRefIds": [],
      "script.source": "inline",
      "script.command": [
        "set -e",
        "NEW_CRED_ID=$(uuidgen)",
        "NEW_API_KEY=$(openssl rand -hex 32)",
        "SECRET_ARN=$(aws secretsmanager create-secret --name mobile/client/$NEW_CRED_ID --secret-string \"{\\\"apiKey\\\":\\\"$NEW_API_KEY\\\"}\" --query ARN --output text)",
        "echo \"{\\\"newCredentialId\\\":\\\"$NEW_CRED_ID\\\", \\\"newSecretArn\\\":\\\"$SECRET_ARN\\\"}\" > /tmp/context.json"
      ],
      "producesArtifacts": [
        { "name": "context.json", "location": "/tmp" }
      ]
    },
    {
      "name": "2-LogNewSecretIssuance",
      "type": "script",
      "refId": "2",
      "requisiteStageRefIds": ["1"],
      "script.source": "inline",
      "script.command": [
        "set -e",
        "TIMESTAMP=$(date +%s)",
        "CRED_ID=$(cat /tmp/context.json | jq -r .newCredentialId)",
        "EXPIRES_AT=$(date -d '+7 days' +%s)",
        "aws dynamodb put-item --table-name mobile-credential-audit-log \\",
        "--item '{ \\",
        "  \"credential_id\": {\"S\": \"'$CRED_ID'\"}, \\",
        "  \"event_timestamp\": {\"N\": \"'$TIMESTAMP'\"}, \\",
        "  \"status\": {\"S\": \"ACTIVE\"}, \\",
        "  \"expires_at\": {\"N\": \"'$EXPIRES_AT'\"} \\",
        "}'"
      ],
      "consumesArtifacts": [
        { "name": "context.json", "location": "/tmp" }
      ]
    },
    {
      "name": "3-UpdateClientConfigPointer",
      "type": "script",
      "refId": "3",
      "requisiteStageRefIds": ["2"],
      "script.source": "inline",
      "script.command": [
        "set -e",
        "SECRET_ARN=$(cat /tmp/context.json | jq -r .newSecretArn)",
        "echo \"{\\\"currentSecretArn\\\":\\\"$SECRET_ARN\\\"}\" > /tmp/pointer.json",
        "aws s3 cp /tmp/pointer.json s3://my-app-config-bucket/mobile/active-credential-pointer.json --acl public-read"
      ],
      "consumesArtifacts": [
        { "name": "context.json", "location": "/tmp" }
      ]
    },
    {
      "name": "4-GracePeriod",
      "type": "wait",
      "refId": "4",
      "requisiteStageRefIds": ["3"],
      "waitTime": 518400 // 6 days in seconds
    },
    {
      "name": "5-LogOldSecretSupersession",
      "type": "script",
      "refId": "5",
      "requisiteStageRefIds": ["4"],
      "script.source": "inline",
      "script.command": [
        "set -e",
        "# This part is more complex. It needs to find the *previous* active credential.",
        "# A real implementation would fetch this from a reliable source, maybe a pipeline parameter.",
        "OLD_CRED_ID=$(aws dynamodb query ... --filter-expression 'status = :s' --expression-attribute-values '{\":s\":{\"S\":\"ACTIVE\"}}' ...)",
        "TIMESTAMP=$(date +%s)",
        "aws dynamodb put-item --table-name mobile-credential-audit-log \\",
        "--item '{ \\",
        "  \"credential_id\": {\"S\": \"'$OLD_CRED_ID'\"}, \\",
        "  \"event_timestamp\": {\"N\": \"'$TIMESTAMP'\"}, \\",
        "  \"status\": {\"S\": \"SUPERSEDED\"} \\",
        "}'"
      ]
    },
    {
      "name": "6-DecommissionOldSecret",
      "type": "script",
      "refId": "6",
      "requisiteStageRefIds": ["5"],
      "script.source": "inline",
      "script.command": [
        "set -e",
        "# Same logic to find OLD_CRED_ID as previous step",
        "OLD_SECRET_ARN=\"arn:aws:secretsmanager:us-east-1:123456789012:secret:mobile/client/$OLD_CRED_ID\"",
        "# This doesn't delete it, just prevents new clients from fetching it.",
        "aws secretsmanager update-secret --secret-id $OLD_SECRET_ARN --description \"Decommissioned at $(date)\""
      ]
    }
  ]
}

A key decision here is how the client discovers the latest secret. Instead of knowing a secret’s name, the client fetches a pointer file from a well-known S3 location. This active-credential-pointer.json file contains only the ARN of the currently active secret in AWS Secrets Manager. The mobile client has IAM permission to read a secret’s value from Secrets Manager, but not to list or discover secrets. This pointer acts as a level of indirection, which is critical for the rotation mechanism.

The Consumer: React Native Client-Side Logic

The React Native application can no longer have a static API key. It needs a small state machine to manage credentials. The core of this is an API request interceptor. We use axios for this example.

First, a secure storage module to handle the credential on the device. react-native-keychain is a solid choice as it uses the platform’s native secure storage (Keychain on iOS, Keystore on Android).

// src/services/SecureCredentialStorage.js
import * as Keychain from 'react-native-keychain';

const SERVICE_NAME = 'com.yourapp.api';

export const saveCredential = async (credential) => {
  try {
    // We store the ARN and the actual key value
    await Keychain.setGenericPassword(
      credential.arn, // username field
      credential.apiKey, // password field
      { service: SERVICE_NAME }
    );
  } catch (error) {
    console.error('Failed to save credential securely.', error);
    // In a real app, log this error to your monitoring service
  }
};

export const getCredential = async () => {
  try {
    const credentials = await Keychain.getGenericPassword({ service: SERVICE_NAME });
    if (credentials) {
      return { arn: credentials.username, apiKey: credentials.password };
    }
    return null;
  } catch (error) {
    console.error('Failed to retrieve credential.', error);
    return null;
  }
};

export const clearCredential = async () => {
  await Keychain.resetGenericPassword({ service: SERVICE_NAME });
};

Next, the service that handles fetching the fresh credential. This is the only part that needs to know about the S3 pointer.

// src/services/CredentialManager.js

// This is a minimal-privilege AWS client, configured with a low-permission IAM user
// that can ONLY call `secretsmanager:GetSecretValue` and `s3:GetObject`.
// The credentials for THIS client are the only static ones left in the app.
// Their compromise is low-impact as they can only fetch secrets, not manage them.
import { SecretsManagerClient, GetSecretValueCommand } from "@aws-sdk/client-secrets-manager";
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { saveCredential } from './SecureCredentialStorage';

const POINTER_BUCKET = 'my-app-config-bucket';
const POINTER_KEY = 'mobile/active-credential-pointer.json';

// IMPORTANT: Configure your AWS clients with static, low-privilege credentials.
// In a production app, these would be managed by a service like AppCenter or a build variable system.
const s3Client = new S3Client({ region: 'us-east-1', credentials: { ... } });
const secretsClient = new SecretsManagerClient({ region: 'us-east-1', credentials: { ... } });

export const fetchAndStoreNewCredential = async () => {
  try {
    console.log('Attempting to fetch new credential pointer...');
    // 1. Fetch the pointer file from S3
    const getObjectCmd = new GetObjectCommand({ Bucket: POINTER_BUCKET, Key: POINTER_KEY });
    const s3Response = await s3Client.send(getObjectCmd);
    const pointerData = JSON.parse(await s3Response.Body.transformToString());
    const newSecretArn = pointerData.currentSecretArn;

    if (!newSecretArn) {
        throw new Error('Invalid credential pointer format.');
    }

    console.log(`Pointer acquired. Fetching secret from ARN: ${newSecretArn}`);
    // 2. Fetch the actual secret value from Secrets Manager
    const getSecretCmd = new GetSecretValueCommand({ SecretId: newSecretArn });
    const secretResponse = await secretsClient.send(getSecretCmd);
    const secretValue = JSON.parse(secretResponse.SecretString);
    const newApiKey = secretValue.apiKey;
    
    if (!newApiKey) {
        throw new Error('Secret format is incorrect.');
    }

    // 3. Save the new credential securely on the device
    const newCredential = { arn: newSecretArn, apiKey: newApiKey };
    await saveCredential(newCredential);
    
    console.log('Successfully fetched and stored new credential.');
    return newCredential;
  } catch (error) {
    console.error('CRITICAL: Credential refresh failed.', error);
    // This is a fatal error. The app should probably force the user to log out.
    // Notify your observability platform.
    throw error; // Re-throw to be handled by the caller
  }
};

Finally, the axios interceptor that ties it all together. This logic is transparent to the rest of the application code.

// src/api/ApiClient.js
import axios from 'axios';
import { getCredential, clearCredential } from '../services/SecureCredentialStorage';
import { fetchAndStoreNewCredential } from '../services/CredentialManager';

const apiClient = axios.create({
  baseURL: 'https://api.yourapp.com',
});

let isRefreshing = false;
let failedQueue = [];

const processQueue = (error, token = null) => {
  failedQueue.forEach(prom => {
    if (error) {
      prom.reject(error);
    } else {
      prom.resolve(token);
    }
  });
  failedQueue = [];
};

apiClient.interceptors.request.use(
  async (config) => {
    const credential = await getCredential();
    if (credential?.apiKey) {
      config.headers['X-API-Key'] = credential.apiKey;
    }
    return config;
  },
  (error) => Promise.reject(error)
);

apiClient.interceptors.response.use(
  (response) => response,
  async (error) => {
    const originalRequest = error.config;

    // The key conditions: 401 Unauthorized and not a retry request
    if (error.response?.status === 401 && !originalRequest._retry) {
      if (isRefreshing) {
        // If a refresh is already in progress, queue this request
        return new Promise((resolve, reject) => {
          failedQueue.push({ resolve, reject });
        })
        .then(apiKey => {
          originalRequest.headers['X-API-Key'] = apiKey;
          return apiClient(originalRequest);
        });
      }

      originalRequest._retry = true;
      isRefreshing = true;

      try {
        const newCredential = await fetchAndStoreNewCredential();
        apiClient.defaults.headers.common['X-API-Key'] = newCredential.apiKey;
        originalRequest.headers['X-API-Key'] = newCredential.apiKey;
        processQueue(null, newCredential.apiKey);
        return apiClient(originalRequest);
      } catch (refreshError) {
        processQueue(refreshError, null);
        // If refresh fails, it's game over.
        await clearCredential();
        // Here you would navigate the user to a login/error screen.
        // For example: RootNavigation.navigate('Auth');
        return Promise.reject(refreshError);
      } finally {
        isRefreshing = false;
      }
    }
    return Promise.reject(error);
  }
);

export default apiClient;

This interceptor logic is non-trivial. The isRefreshing flag and failedQueue are essential to handle concurrent API calls failing with a 401. Without this, each failed call would trigger a separate fetchAndStoreNewCredential invocation, which is inefficient and a potential race condition.
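
The interceptor only covers the reactive path: a request has to fail with a 401 before anything refreshes. To honor the original goal of refreshing before expiration, it’s worth running a proactive check on app start as well. Here is a minimal sketch, assuming we also persist a fetched-at timestamp alongside the credential (the module name and storage key below are illustrative, not part of the pipeline contract).

// src/services/ProactiveRefresh.js (illustrative sketch)
import AsyncStorage from '@react-native-async-storage/async-storage';
import { getCredential } from './SecureCredentialStorage';
import { fetchAndStoreNewCredential } from './CredentialManager';

const FETCHED_AT_KEY = 'credential_fetched_at';
// Refresh after 5 days, comfortably inside the 7-day validity / 6-day grace window.
const MAX_AGE_MS = 5 * 24 * 60 * 60 * 1000;

export const refreshIfStale = async () => {
  const credential = await getCredential();
  const fetchedAt = Number(await AsyncStorage.getItem(FETCHED_AT_KEY)) || 0;

  if (!credential || Date.now() - fetchedAt > MAX_AGE_MS) {
    // Fetch proactively so well-behaved clients never have to hit the 401 path.
    await fetchAndStoreNewCredential();
    await AsyncStorage.setItem(FETCHED_AT_KEY, String(Date.now()));
  }
};

Calling refreshIfStale() from the app’s bootstrap (for example, a useEffect in the root component) keeps the 401 handler as a fallback rather than the primary rotation mechanism.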

The Kill Switch: The Emergency Revocation Pipeline

The final piece is a separate, manually triggered Spinnaker pipeline for emergencies. It’s much simpler.

graph TD
    A[Trigger: Manual with Credential ID] --> B{Log Revocation in DynamoDB};
    B --> C{Decommission Secret Immediately};

This pipeline takes one parameter: the credential_id to revoke.

  1. Stage 1: Log Revocation: Immediately writes a new item to DynamoDB for the given credential_id with status: "REVOKED".
  2. Stage 2: Decommission Secret: Schedules the secret for deletion via aws secretsmanager delete-secret with a short recovery window, which immediately blocks any further fetches of its value while leaving a path to restore it if the revocation turns out to be a false alarm (a sketch of both stages follows below).
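
Here is that sketch, expressed with the AWS SDK for JavaScript v3 (in the real pipeline these run as Spinnaker script stages, just like the rotation pipeline; the function name and secret path handling are illustrative).

// Illustrative sketch of the two revocation stages.
import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";
import { SecretsManagerClient, DeleteSecretCommand } from "@aws-sdk/client-secrets-manager";

const dynamo = new DynamoDBClient({ region: "us-east-1" });
const secrets = new SecretsManagerClient({ region: "us-east-1" });

export const revokeCredential = async (credentialId) => {
  const now = Math.floor(Date.now() / 1000);

  // Stage 1: write the REVOKED event to the audit log first, so the trail
  // exists even if the decommission call fails and has to be retried.
  await dynamo.send(new PutItemCommand({
    TableName: "mobile-credential-audit-log",
    Item: {
      credential_id: { S: credentialId },
      event_timestamp: { N: String(now) },
      status: { S: "REVOKED" },
      expires_at: { N: String(now + 90 * 24 * 60 * 60) }, // purge the audit record ~90 days post-revocation
    },
  }));

  // Stage 2: schedule the secret for deletion. GetSecretValue fails immediately,
  // but the recovery window leaves a way back if the revocation was a false alarm.
  await secrets.send(new DeleteSecretCommand({
    SecretId: `mobile/client/${credentialId}`,
    RecoveryWindowInDays: 7,
  }));
};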

When the client app attempts to use a revoked key, it gets a 401; the API gateway should be configured to reject requests carrying decommissioned keys before they even hit the application logic. That 401 triggers the interceptor’s refresh flow, which fetches the new active key, and the user session continues uninterrupted. The only time the user is impacted is if the refresh mechanism itself fails.
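
What “reject requests from decommissioned keys” looks like is backend-specific, but the check itself is a lookup of the credential’s most recent audit event. Here is a minimal Express-style sketch; it assumes the backend’s normal key-validation step has already resolved the presented X-API-Key to its credential_id, and the middleware name is illustrative.

// Illustrative middleware: deny any key whose latest audit event is not ACTIVE.
import { DynamoDBClient, QueryCommand } from "@aws-sdk/client-dynamodb";

const dynamo = new DynamoDBClient({ region: "us-east-1" });

export const rejectDecommissionedKeys = async (req, res, next) => {
  const credentialId = req.credentialId; // populated upstream by key validation (assumed)

  // Latest event wins: read the credential's history newest-first and take one item.
  const result = await dynamo.send(new QueryCommand({
    TableName: "mobile-credential-audit-log",
    KeyConditionExpression: "credential_id = :id",
    ExpressionAttributeValues: { ":id": { S: credentialId } },
    ScanIndexForward: false, // newest event first
    Limit: 1,
  }));

  const latestStatus = result.Items?.[0]?.status?.S;
  if (latestStatus !== "ACTIVE") {
    // SUPERSEDED or REVOKED (or unknown) keys get a 401, which drives the client refresh flow.
    return res.status(401).json({ error: "credential_decommissioned" });
  }
  return next();
};

In production this lookup would be cached aggressively, since a credential’s status changes at most a few times a week, but the shape of the check is the point.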

This entire system, orchestrated by Spinnaker, moves client credential management from a reactive, high-stress manual task to a proactive, automated, and fully audited security process.

The pitfall here is complexity. The interaction between the Spinnaker pipeline, IAM roles, DynamoDB, and the client-side state machine has many moving parts. A failure in the rotation pipeline could, in a worst-case scenario, leave clients without a valid credential path. Rigorous testing of the pipeline logic in a staging environment is not optional; it’s a core requirement for system stability. The bootstrapping credential for the AWS SDK in the client remains a point of static secret exposure, albeit one with heavily restricted permissions. A future iteration would involve integrating a full OIDC flow, where the user’s identity token is exchanged for temporary AWS credentials, removing the last static secret from the app entirely. The current design, however, represents a massive leap in security posture from the original hardcoded key.

