Dual Local & AWS Dev Environment Setup Guide
Developing and testing applications in multiple environments makes for a more robust and reliable deployment process. This guide walks you through setting up a dual development/test environment so you can switch between local and AWS-deployed setups, enabling thorough pre-staging validation and helping you catch potential issues early.
Understanding the Context
Currently, the BATbern project operates in a LOCAL deployment mode. This means Docker containers run locally and connect to shared AWS infrastructure components such as RDS, Cognito, and S3. While this setup is convenient for rapid development and iteration, it lacks the fidelity of a production-like environment. To bridge this gap, we're transitioning to a dual setup that supports both local and AWS-deployed environments in parallel.
Why a Dual Environment?
Having both local and AWS development environments offers several key advantages:
- Realistic Testing: An AWS environment closely mirrors the production setup, providing a more accurate testing ground.
- Early Issue Detection: Identifying and resolving issues in a pre-staging environment saves time and resources.
- Parallel Development: Developers can work locally while the AWS environment is used for integration and validation.
- Reduced Risk: Validating changes in a dedicated AWS dev environment minimizes the risk of introducing bugs into production.
Objective: Deploying a Full AWS DEV Environment
The primary goal is to deploy a complete AWS DEV environment alongside the existing local Docker Compose setup. This includes ECS, API Gateway, Frontend, and all necessary microservices. By achieving this, we can perform comprehensive pre-staging validation testing before promoting changes to higher environments.
Current State Analysis
Let's assess our current setup:
- Local Dev: Docker Compose containers are connected to shared AWS infrastructure.
- AWS DEV: Infrastructure components (RDS, Cognito, S3, EventBridge) are deployed but lack application stacks.
- Missing Components: ECS, API Gateway, Frontend, and microservice stacks are yet to be deployed in the AWS DEV environment.
Proposed Solution: Time-Based Isolation Approach
To manage shared resources effectively, we'll implement a time-based isolation approach. This strategy allows us to use either the local or the cloud environment at a given time, ensuring no conflicts arise from simultaneous access to shared resources.
Key Implementation Tasks
Let’s break down the implementation into manageable tasks:
1. Updating CDK Configuration
The first step involves modifying the CDK configuration file (infrastructure/lib/config/dev-config.ts).
- Modify `infrastructure/lib/config/dev-config.ts`:
  - Change `deploymentMode` from `LOCAL` to `CLOUD`. This crucial step enables the deployment of ECS, API Gateway, and Frontend stacks within our AWS environment. By switching the deployment mode, we signal to our infrastructure-as-code that we intend to provision resources in the cloud rather than relying solely on local Docker containers.

This configuration update is the cornerstone of the transition. It sets the stage for deploying the application's core components to AWS, and the CDK (Cloud Development Kit) uses this value to orchestrate the deployment, ensuring that all the necessary resources are provisioned correctly and consistently. Think of it as the master switch that tells the system where the app should live: locally in Docker, or up in the AWS cloud.
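The exact shape of `dev-config.ts` isn't shown in this guide, so here is a minimal sketch of what the change might look like. Only `deploymentMode` and the `LOCAL`/`CLOUD` values come from the project description above; the other field names are illustrative assumptions.

```typescript
// Sketch of infrastructure/lib/config/dev-config.ts.
// Field names besides deploymentMode are hypothetical.
export type DeploymentMode = 'LOCAL' | 'CLOUD';

export interface DevConfig {
  envName: string; // hypothetical field
  deploymentMode: DeploymentMode;
}

export const devConfig: DevConfig = {
  envName: 'development',
  // Was 'LOCAL': containers run locally against shared AWS infrastructure.
  // 'CLOUD' tells the CDK app to also synthesize the ECS, API Gateway,
  // and Frontend stacks.
  deploymentMode: 'CLOUD',
};
```

A union type like `DeploymentMode` keeps the switch honest: the compiler rejects any value other than the two supported modes.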
2. Deploying Full AWS Development Stacks
Next, we'll deploy the core application stacks to AWS. This includes:
- Deploy `BATbern-development-Cluster` (ECS Fargate): This sets up an ECS (Elastic Container Service) cluster using the Fargate launch type, which runs containers without managing the underlying EC2 instances. The cluster forms the backbone for running our microservices in the cloud, giving us a dedicated, scalable compute environment while Fargate keeps the operational overhead low so we can focus on application development rather than infrastructure management.
- Deploy `BATbern-development-ApiGateway` (AWS API Gateway): The API Gateway is the managed, scalable entry point for client applications, handling routing, authentication, and request transformations. Deploying it establishes a robust and secure way to expose our microservices, and enables rate limiting, caching, and other features that protect performance and security. It's the front door to the app: every external request comes in through it.
- Deploy `BATbern-development-Frontend` (S3 + CloudFront): The frontend deployment uses an S3 bucket to store static web assets and CloudFront to serve them with low latency and high availability. S3 provides scalable, cost-effective storage, while CloudFront delivers content quickly worldwide and adds SSL/TLS encryption and DDoS protection, keeping the frontend both fast and secure.
- Deploy `BATbern-development-EventManagement` (microservice)
- Deploy `BATbern-development-CompanyUserManagement` (microservice)
- Deploy remaining microservice stacks:
  - Each microservice runs as a separate container within the ECS cluster, so it can be developed, tested, scaled, and deployed independently. This keeps the architecture modular and maintainable, and even allows different services to use different technologies. Think of them as the small engines that power the app: each has a specific job, and together they make everything work.
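To make the deployment order concrete, here is a small helper listing the stacks the CDK app would synthesize in `CLOUD` mode. The stack names come from the list above; the gating function itself is a sketch, not code from the project.

```typescript
// Stack names taken from the deployment list above; the gating logic is a
// sketch of how a CDK app entry point might decide what to synthesize.
const CLOUD_ONLY_STACKS = [
  'BATbern-development-Cluster',              // ECS Fargate
  'BATbern-development-ApiGateway',           // AWS API Gateway
  'BATbern-development-Frontend',             // S3 + CloudFront
  'BATbern-development-EventManagement',      // microservice
  'BATbern-development-CompanyUserManagement' // microservice
];

function stacksToDeploy(mode: 'LOCAL' | 'CLOUD'): string[] {
  // In LOCAL mode the app runs under docker-compose, so no application
  // stacks are deployed; shared infrastructure (RDS, Cognito, S3,
  // EventBridge) already exists in both modes and is not listed here.
  return mode === 'CLOUD' ? [...CLOUD_ONLY_STACKS] : [];
}
```

Each name in the list maps directly to a CLI invocation such as `cdk deploy BATbern-development-Cluster` once the configuration switch from the previous step is in place.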
3. DNS Configuration (Optional)
For easier access and a more professional setup, we can configure a subdomain in Route 53:
- Set up `dev.batbern.ch` subdomain in Route 53:
  - Create a DNS record in Route 53 that points to our CloudFront distribution. This lets users reach the AWS dev environment through a human-readable URL, which is easier to share and gives the environment a more polished look when demonstrating the application to stakeholders. It's like giving the app its own street address.
- Point to the CloudFront distribution
- Or document the CloudFront URL for direct access:
  - If a custom domain isn't immediately feasible, document the CloudFront URL instead. The URL is generated automatically when the distribution is created and gives developers and testers direct access to the frontend without a custom domain.
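Since the DNS step is optional, whatever publishes the environment's address needs to handle both cases. A small illustrative helper (the function and its parameters are assumptions; only the domain names come from this guide):

```typescript
// Pick the URL to publish for the AWS dev frontend: prefer the Route 53
// subdomain when it exists, otherwise fall back to the auto-generated
// CloudFront domain.
function frontendUrl(cloudFrontDomain: string, customDomain?: string): string {
  return customDomain
    ? `https://${customDomain}`
    : `https://${cloudFrontDomain}`;
}
```

With the subdomain configured this yields `https://dev.batbern.ch`; without it, the raw CloudFront URL (for example `https://d1234.cloudfront.net`, a placeholder here) is documented instead.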
4. Documentation
Comprehensive documentation is crucial for maintaining and operating the dual environment:
- Create workflow guide for switching between local and AWS dev:
- A clear and concise workflow guide is essential for enabling developers to switch seamlessly between the local and AWS dev environments. This guide should outline the steps required to start and stop each environment, as well as any configuration changes needed. A well-documented workflow reduces confusion and ensures that everyone on the team can effectively use both environments. It's like having a roadmap for our development journey, making sure we don't get lost along the way.
- Document shared resource implications (RDS, Cognito, S3):
  - Documenting the implications of shared resources is critical for preventing conflicts and ensuring data consistency. This documentation should clearly state which resources are shared between environments, the risks of concurrent access, and guidelines for managing those resources effectively, so nobody steps on anyone else's toes.
- Add best practices to avoid conflicts:
  - Establish and document best practices for managing data changes, coordinating deployments, and handling shared resources. Following them minimizes the risk of data corruption and keeps both environments running smoothly; think of them as the rules of the road for the team.
- Update `docs/architecture/deployment-workflow.md`:
  - Update the deployment workflow documentation to reflect the new dual environment setup, with detailed instructions for deploying to both local and AWS environments and any environment-specific considerations. Keeping the documentation current ensures the whole team has a clear picture of the deployment process.
5. Testing & Validation
Thorough testing is vital to ensure the new AWS dev environment functions correctly:
- Deploy and verify AWS dev environment:
- This involves deploying all the necessary components to the AWS dev environment and verifying that they are functioning as expected. This includes checking the status of ECS tasks, API Gateway endpoints, and frontend assets. Verifying the deployment ensures that the environment is ready for testing and validation. It's like checking the engine before we hit the road, making sure everything is purring like a kitten.
- Test registration wizard on both environments (separately):
- Testing the registration wizard in both the local and AWS dev environments is crucial for ensuring that user registration works correctly in both setups. This testing should cover all aspects of the registration process, including form validation, data persistence, and email verification. By testing in both environments, we can identify and resolve any environment-specific issues. This is like making sure the welcome mat is out in both our houses, so new users can get in without any fuss.
- Validate shared resources work correctly:
  - Test access to the RDS database, Cognito user pool, S3 buckets, and EventBridge, and verify that both environments can access and modify shared resources without issues. This step is essential for data consistency and conflict prevention.
- Create runbook for environment switching:
- A runbook for environment switching provides step-by-step instructions on how to switch between the local and AWS dev environments. This runbook should include detailed steps on how to start and stop each environment, as well as any configuration changes that may be required. Having a runbook ensures that anyone on the team can switch environments quickly and easily. It's like having a cheat sheet for switching between modes, making sure we don't miss any steps.
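Parts of the verification above can be scripted so the runbook stays short. The sketch below only builds the smoke-test checklist for whichever environment is active; the `/api/health` and `/register` paths are hypothetical placeholders, not endpoints documented in this guide.

```typescript
// Build the list of smoke checks to run against the active environment.
// Base URLs come from the "Expected Outcome" section; the concrete
// endpoint paths are illustrative assumptions.
interface SmokeCheck {
  name: string;
  url: string;
}

function buildSmokeChecks(baseUrl: string): SmokeCheck[] {
  return [
    { name: 'frontend loads', url: `${baseUrl}/` },
    { name: 'API gateway health', url: `${baseUrl}/api/health` }, // hypothetical path
    { name: 'registration wizard', url: `${baseUrl}/register` },  // hypothetical path
  ];
}
```

The same checklist runs against `http://localhost:3000` for the local setup and against the CloudFront or `dev.batbern.ch` URL for AWS dev, which keeps the two test passes symmetric.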
Shared Resources: Managing Conflicts
Both environments will share several key resources, requiring a strategic approach to conflict management:
- RDS PostgreSQL: Single instance - ensure only one environment runs at a time
- Cognito User Pool: Same users across environments
- S3 Buckets: Same storage for uploads
- EventBridge: Same event bus
Strategy: Time-Based Isolation
The chosen strategy is time-based isolation: use the local environment during active development and the AWS dev environment for pre-staging validation. Running one environment at a time minimizes the risk of conflicts from simultaneous access to shared resources.
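Time-based isolation can be enforced rather than merely agreed upon. Below is a toy sketch of a lock that lets only one environment claim the shared resources at a time; in practice the lock state would live somewhere shared (for example a small S3 object or an SSM parameter, both assumptions, not part of the project today), but the in-memory version shows the idea.

```typescript
// Toy mutual-exclusion lock for time-based isolation: only one environment
// may use the shared RDS/Cognito/S3/EventBridge resources at a time.
type Env = 'local' | 'aws-dev';

class EnvironmentLock {
  private holder: Env | null = null;

  // Returns true if the environment now holds the lock. Re-acquiring
  // from the same environment is allowed; any other holder blocks it.
  acquire(env: Env): boolean {
    if (this.holder !== null && this.holder !== env) return false;
    this.holder = env;
    return true;
  }

  release(env: Env): void {
    // Only the current holder may release.
    if (this.holder === env) this.holder = null;
  }
}
```

A developer's workflow script could call `acquire('local')` before `docker-compose up` and `acquire('aws-dev')` before running pre-staging tests, refusing to start if the other environment still holds the lock.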
Expected Outcome: A Robust Dual Environment
By implementing this solution, we anticipate the following outcomes:
- Local: `http://localhost:3000` (via docker-compose)
- AWS DEV: `https://dev.batbern.ch` or CloudFront URL (via ECS)
- Cost: ~$200-250/month (ECS Fargate, ALB, NAT Gateway)
- Benefit: Validate production-like deployment before promoting to staging
These outcomes reflect a significant improvement in our development and testing workflow. Validating changes in a production-like environment before staging reduces the risk of introducing bugs and smooths the deployment process, while the cost estimate keeps operational expenses transparent for budgeting.
Files to Modify: Key Configuration Points
To implement these changes, we'll need to modify the following files:
- `infrastructure/lib/config/dev-config.ts`
- `docs/architecture/deployment-workflow.md` (new or update existing)
- Optional: `infrastructure/lib/stacks/dns-stack.ts`
Conclusion: Embracing a Dual Environment for Enhanced Reliability
Enabling a dual development/test environment is a significant step towards more reliable, robust deployments. Combining the agility of local development with the fidelity of an AWS-deployed environment creates a powerful platform for innovation and validation: the time-based isolation approach keeps shared resources safe, and the implementation tasks above provide a clear roadmap. A dual environment means a smoother ride for everyone.