• Test data → a clean database on every test run. Tests stay green even when migrations are broken.
• Selenium → a fake user driving mouse and keyboard, over and over again. We assume this is realistic.
• Stamina → the runtime is always fresh for tests. What about subtle memory leaks?
• Scaling → tests run against a single-process web server. Reality: a multi-threaded environment.
Ensure access to a test AWS account.
• Separate Test Env: use a dedicated AWS environment.
• Data Isolation: keep test data separate from production.
• Service Isolation: use separate accounts or tagging to avoid conflicts.
• Cleanup: ensure resources are cleaned up post-testing.
• CI Integration: run tests automatically in CI/CD pipelines.
• IAM Permissions: ensure test users/roles have the necessary permissions.
• Logging & Monitoring: enable CloudWatch logs and metrics for debugging test failures.
• Secrets Management: store and manage test credentials securely (e.g., AWS Secrets Manager, Parameter Store).
• Cost Management: track and limit test resource usage to avoid unexpected costs.
• Test Parallelization: ensure tests can run concurrently without conflicts.
• State Management: handle persistent vs. ephemeral resources carefully.
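The isolation, parallelization, and cleanup points above can be sketched in plain Python. This is an illustrative pattern, not an AWS API: the `run_prefix` naming scheme and the `isolated_resources` context manager are assumptions for the sketch; a real suite would call the AWS SDK inside the `finally` block to delete each resource.

```python
import uuid
from contextlib import contextmanager

def run_prefix() -> str:
    # Hypothetical helper: every test run gets a unique name prefix so
    # concurrently running pipelines never collide on resource names,
    # and teardown can find everything created by this run.
    return f"test-{uuid.uuid4().hex[:8]}"

@contextmanager
def isolated_resources(created: list):
    # Track every resource name the test creates; guarantee cleanup
    # even if the test body raises.
    prefix = run_prefix()
    try:
        yield prefix, created
    finally:
        # Cleanup: in a real suite this would invoke the AWS SDK to
        # delete each named resource; here we just drain the list.
        created.clear()

# Usage sketch:
made = []
with isolated_resources(made) as (prefix, resources):
    resources.append(f"{prefix}-queue")   # e.g. an SQS queue name
    resources.append(f"{prefix}-table")   # e.g. a DynamoDB table name
```

The per-run prefix also doubles as a cost-tracking tag: billing and cleanup scripts can filter on it to find anything a CI run left behind.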
cloud apps
• Ships as a container image; easy to install and start up
• Support for 100+ services (and growing):
  ◦ compute (Lambda, ECS, EKS)
  ◦ various databases (DynamoDB, RDS)
  ◦ messaging (SQS, Kinesis, MSK)
  ◦ some sophisticated/exotic APIs (Athena, Glue)
• CI integrations & advanced collaboration features
• Branching out into other areas: Chaos Engineering, IAM Security Testing, Cloud Ephemeral Environments, 3rd Party Extensions, etc.
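A container-based emulator like the one described is typically exercised by pointing the AWS SDK at a local endpoint instead of the real cloud. A minimal sketch, assuming the emulator listens on localhost port 4566 and accepts dummy credentials (both are assumptions; check your emulator's documentation):

```python
# Sketch: build the kwargs a test suite would pass to an AWS SDK
# client (e.g. boto3.client(**local_client_kwargs("sqs"))) so calls
# hit the local container instead of real AWS.
LOCAL_ENDPOINT = "http://localhost:4566"  # assumed emulator port

def local_client_kwargs(service: str) -> dict:
    return {
        "service_name": service,
        "endpoint_url": LOCAL_ENDPOINT,
        "region_name": "us-east-1",
        # Dummy credentials: a local emulator typically does not
        # validate them, but the SDK requires them to be set.
        "aws_access_key_id": "test",
        "aws_secret_access_key": "test",
    }
```

Because only `endpoint_url` differs from production configuration, the same test code can run against the local container in CI and, with the override removed, against a dedicated test AWS account.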