
Convert to Monorepo.

#monorepo

#workspace

#infra

Yarn Workspaces

For my side project, I created a new GitHub organization to manage everything in one place. Initially, I ran into a few challenges:
  • Deploying via Vercel made a monorepo setup difficult to manage.
  • So instead of a monorepo, I kept separate, scattered repositories.
I had considered a monorepo from the start, but Vercel's limitations around monorepo management added complexity, so I stuck with separate repositories. Once I began managing my own server on EC2, I was able to transition to a monorepo.

Project Structure

The structure is as follows:
root
|-- apps
|   |-- client
|   |-- server
|   |-- react-native-app
|-- packages
|   |-- design-system
|   |-- notion-utils
|   |-- prisma-schema
|   |-- storybook
|-- docker
|-- terraform
In apps, I have actual service projects like web, mobile, and server apps. These services are deployed on EC2 using Docker, which I’ll cover shortly.
In packages, I collected reusable libraries like utility functions and the design system. With a monorepo setup, I didn’t need to publish these to npm, which simplified reuse.
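For reference, here is a minimal sketch of the workspace wiring. The package name @coldsurf/design-system is a placeholder, and the dependency specifier depends on the Yarn version (workspace:* with Yarn Berry, a plain version range with classic workspaces):

root package.json (sketch):
{
  "private": true,
  "workspaces": ["apps/*", "packages/*"]
}

apps/client/package.json (sketch), consuming a local package without publishing it:
{
  "dependencies": {
    "@coldsurf/design-system": "workspace:*"
  }
}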

React Native in a Monorepo

An interesting aspect is integrating the React Native project as a package in the monorepo. I referred to react-native-universal-monorepo for guidance. After adjusting the node_modules paths and Podfile references, the setup worked smoothly.
Using the react-native-monorepo-tools npm package, I modified metro.config.js as follows:
/* eslint-disable @typescript-eslint/no-var-requires */
const { getDefaultConfig, mergeConfig } = require('@react-native/metro-config')
const exclusionList = require('metro-config/src/defaults/exclusionList')
const { getMetroTools, getMetroAndroidAssetsResolutionFix } = require('react-native-monorepo-tools')

// Resolves watch folders, duplicate package copies, and hoisted deps for the monorepo
const monorepoMetroTools = getMetroTools()
// Fixes Android asset resolution when node_modules is hoisted to the workspace root
const androidAssetsResolutionFix = getMetroAndroidAssetsResolutionFix()

const config = {
  transformer: {
    publicPath: androidAssetsResolutionFix.publicPath,
    getTransformOptions: async () => ({
      transform: {
        experimentalImportSupport: false,
        inlineRequires: false,
      },
    }),
  },
  server: {
    // Serve Android assets from the adjusted public path during development
    enhanceMiddleware: (middleware) => {
      return androidAssetsResolutionFix.applyMiddleware(middleware)
    },
  },
  // Let Metro watch the other workspace packages, not just this app
  watchFolders: monorepoMetroTools.watchFolders,
  resolver: {
    // Ignore duplicate copies of packages that exist in several node_modules folders
    blockList: exclusionList(monorepoMetroTools.blockList),
    // Map hoisted dependencies back to locations Metro can resolve
    extraNodeModules: monorepoMetroTools.extraNodeModules,
  },
}

module.exports = mergeConfig(getDefaultConfig(__dirname), config)
For Android, I modified paths in app/build.gradle to reference the packages correctly.
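As a rough sketch of the kind of change involved (assuming a recent React Native version where app/build.gradle uses the react {} plugin block; the number of ../ segments depends on where node_modules ends up hoisted in your monorepo):

react {
    // Point the React Native Gradle plugin at the node_modules hoisted to the monorepo root
    reactNativeDir = file("../../../../node_modules/react-native")
    codegenDir = file("../../../../node_modules/@react-native/codegen")
    cliFile = file("../../../../node_modules/react-native/cli.js")
}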

Docker and Docker Compose

With the monorepo established, I moved on to setting up CI/CD pipelines.
Initially, I ran the infrastructure behind a Load Balancer + EC2, but since this is a side project, an always-on 24/7 setup was unnecessary and costly. So I switched to an NGINX + EC2 stack, which significantly reduced costs.
I also settled on a naming convention for Dockerfiles and wrote the docker-compose.yml file as follows:
version: '3.8'
services:
  coldsurf-io:
    platform: linux/amd64
    build:
      context: ../
      dockerfile: ./docker/Dockerfile.coldsurf-io
      args:
        GITHUB_TOKEN: '${GITHUB_TOKEN}'
        PORT: 4001
    image: '${ECR_REMOTE_HOST}/coldsurf/coldsurf-io:latest'
    env_file:
      - ../apps/coldsurf-io/.env
    ports:
      - '4001:4001'
  ...
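The compose file references docker/Dockerfile.coldsurf-io, which isn't shown above. A minimal sketch of building a single app out of a Yarn workspaces monorepo might look like this (the workspace name, scripts, and GITHUB_TOKEN usage are assumptions, and the install flags depend on your Yarn version):

Dockerfile.coldsurf-io (sketch):
FROM node:18-alpine
ARG GITHUB_TOKEN
ARG PORT=4001
WORKDIR /repo

# Copy the whole workspace so the app can resolve local packages/*
COPY . .

# GITHUB_TOKEN could be used here to authenticate installs of private GitHub packages
RUN yarn install

# Build and run just this app; "coldsurf-io" is assumed to be the workspace name
RUN yarn workspace coldsurf-io build

ENV PORT=$PORT
EXPOSE $PORT
CMD ["yarn", "workspace", "coldsurf-io", "start"]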
After building the Docker images, I pushed them to ECR and pulled them onto EC2. Using GitHub Actions, I set up a manually triggered deployment.
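The workflow itself isn't shown above, but a manually triggered deploy with workflow_dispatch looks roughly like this (the secret names, file paths, and region are assumptions):

.github/workflows/deploy.yml (sketch):
name: deploy
on:
  workflow_dispatch: # manual trigger from the Actions tab

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ap-northeast-2
      - uses: aws-actions/amazon-ecr-login@v2
      - name: Build and push images
        env:
          GITHUB_TOKEN: ${{ secrets.GH_TOKEN }}
          ECR_REMOTE_HOST: ${{ secrets.ECR_REMOTE_HOST }}
        run: |
          docker compose -f docker/docker-compose.yml build
          docker compose -f docker/docker-compose.yml push

On the EC2 side, deployment then typically comes down to SSHing in, pulling the new images, and running docker compose up -d.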

Terraform Introduction

For smaller infrastructure tasks, I tried out Terraform. One such task was granting the GitHub Actions runner's IP SSH access to EC2 on port 22. I automated this security group rule with Terraform:
provider "aws" { region = "ap-northeast-2" } data "http" "myip" { url = "<http://ipv4.icanhazip.com>" } resource "aws_security_group" "coldsurf-terraform-sg" { name = "coldsurf-terraform-sg" description = "Security group for coldsurf and terraform" ingress { from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = ["${chomp(data.http.myip.body)}/32"] } ... }
This configuration allows only the current GitHub Actions runner’s IP access to port 22 during CI runs.
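One way to use this during CI (not shown above) is to wrap the SSH deploy step with terraform apply/destroy so the port is only open while the job runs:

$ terraform -chdir=terraform init
$ terraform -chdir=terraform apply -auto-approve
  (SSH to EC2 and run the deployment here)
$ terraform -chdir=terraform destroy -auto-approve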

NGINX Configuration on EC2

Deploying multiple services via Docker Compose required a root NGINX configuration on the EC2 server. I set up NGINX to direct traffic based on domain as follows:
server {
    if ($host = api.billets.coldsurf.io) {
        return 301 https://$host$request_uri;
    }

    server_name api.billets.coldsurf.io api.wa-museum.coldsurf.io wamuseum.coldsurf.io blog.coldsurf.io coldsurf.io;
    ...
}
Each service is then routed to its container through its own server block:
server {
    server_name wamuseum.coldsurf.io;

    location / {
        proxy_pass http://127.0.0.1:4002;
        ...
    }
}
If 403 Forbidden errors occur, it may be due to filesystem permission issues. Adding www-data to the appropriate group can help resolve this:
$ sudo usermod -aG ubuntu www-data
$ sudo chmod g+rw /path/to/file_or_directory

Managing SSL with Let’s Encrypt

With the original Load Balancer + ACM setup gone, I moved SSL certificate management to Let's Encrypt on NGINX.
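Certbot's NGINX plugin handles both issuing certificates and updating the server blocks. A typical flow on Ubuntu looks like this (using one of the domains above as an example):

$ sudo apt install certbot python3-certbot-nginx
$ sudo certbot --nginx -d blog.coldsurf.io
$ sudo certbot renew --dry-run   # verify that automatic renewal works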

In Closing

I’m considering Fargate for the server infrastructure because, as a side project, the service doesn’t need continuous uptime, and Fargate’s usage-based billing is a better fit for variable traffic.

EC2 + ECR vs. Fargate

  1. EC2 + ECR:
      • Cost-effective if resources are needed 24/7.
      • Auto Scaling is possible but less flexible.
      • Pros: Cost-saving with reserved or spot instances.
      • Cons: Higher management overhead.
  2. Fargate:
      • Serverless: Start/stop based on requests; ideal for irregular traffic.
      • Billing: Charges per second of CPU/memory usage.
      • Auto Scaling is more flexible.
      • Cons: Higher costs if resources are needed constantly.
Summary:
  • 24/7 resource needs: EC2 + ECR is more cost-effective.
  • Intermittent usage with traffic spikes: Fargate is ideal for optimizing resource costs.
For anyone interested, you can check out the monorepo at https://github.com/coldsurfers/surfers-root 🙂