In 20 minutes at AWS Day Prague, Petr Svoboda, CEO of CodeNOW, built two microservices using Amazon Q and CodeNOW.
Many enterprises hesitate to adopt microservices, often perceiving them as overly complex, costly, or time-intensive. While it’s true that managing microservices comes with its own set of challenges — such as testing, deployment, and ensuring smooth transitions between on-premises and cloud environments — these hurdles highlight the importance of adopting infrastructure abstraction to simplify the process and enable their full potential.
Tasks like authentication, debugging, and disaster recovery are no longer solved once, as with monoliths, but must be addressed repeatedly for each microservice. Without automation, these repetitive operational demands can overwhelm developers, consuming valuable coding time that could be spent on implementing revenue-generating business logic.
What does developing a new solution look like? Picture an omnichannel application with three different front-ends: one for the public, one for internal users, and a monolithic mobile application, each with its own security setup and its own technology stack. Behind them sit three different back-ends-for-front-ends, because each channel will most likely communicate over different protocols and with different security implementations.
Nonetheless, the business logic is still the same. It's omnichannel, so the same data and same processes on every channel. That's how business works.
Alongside these components are the cross-cutting concerns, such as authentication, debugging, and disaster recovery, that have to be applied to every single one of them.
In the limited time available at AWS Day Prague, Petr showcased a three-tier application: a front-end, a back-end-for-front-end, and a back-end component.
To avoid unnecessary meetings, we adopt a "share-nothing" approach. This means each of us maintains a personal, cloud-based development environment where we control the schema and can work independently, whether at home or in a café in Thailand. Our setup leverages open-source stateful components: a back-end-for-front-end exposes a REST API and produces Kafka messages, which the back-end processes into a PostgreSQL database. This cloud-based approach ensures flexibility and independence.
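As a rough illustration of that shape, here is a minimal Python sketch of the BFF side, not the demo's actual code: the framework (Flask), Kafka client (kafka-python), topic, and endpoint names are all assumptions, and the real components were scaffolded from golden templates.

```python
# Minimal sketch of the BFF pattern described above, NOT the demo's code:
# accept a REST request, publish it to Kafka, and let the back-end persist
# it to PostgreSQL. Library and name choices are illustrative assumptions.
import json
import os

from flask import Flask, request, jsonify
from kafka import KafkaProducer

app = Flask(__name__)

# 12-factor style: the broker address comes from the environment, not the code.
producer = KafkaProducer(
    bootstrap_servers=os.environ.get("KAFKA_BOOTSTRAP_SERVERS", "localhost:9092"),
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

@app.route("/submit", methods=["POST"])
def submit():
    payload = request.get_json(silent=True)
    if not payload:
        # Mirrors the contract described later: 400 for bad form data.
        return jsonify({"error": "invalid form data"}), 400
    producer.send("form-submissions", value=payload)
    producer.flush()
    return jsonify({"status": "accepted"}), 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

The back-end counterpart would consume the same topic and write the records into the PostgreSQL database.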
In higher environments, production systems typically cannot rely on unsupported open-source deployments. To address this, it's wise to replace them with scalable, managed services such as Amazon RDS for PostgreSQL and managed Kafka (Amazon MSK), whose disaster recovery procedures are handled by centralized teams.
For the demo, Petr used pre-recorded videos to illustrate creating a new application and environment, leveraging free components in Git repositories.
When underperformance or bugs show up, tools like end-to-end tracing are essential for troubleshooting communication issues between microservices. Once everything checks out, the application is promoted to staging for further observation and refinement.
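The talk doesn't prescribe a particular tracing stack; the sketch below uses OpenTelemetry purely as an illustration of what end-to-end tracing looks like from inside a service.

```python
# Hypothetical sketch: end-to-end tracing with OpenTelemetry (the talk does not
# name a specific tracing stack). Each service creates spans around its work so
# a request can be followed across the front-end, BFF, and back-end.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Export spans to the console here; in a real cluster this would point at a
# collector wired into the platform's observability tooling.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("bff")

def handle_request(payload: dict) -> None:
    # The span and its trace ID travel with the request, so a slow or failing
    # hop between microservices shows up as one segment of a single trace.
    with tracer.start_as_current_span("bff.handle_request"):
        with tracer.start_as_current_span("kafka.publish"):
            pass  # publish the message here

handle_request({"example": True})
```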
Petr set up a new development environment as a Kubernetes namespace on a cluster he had access to. Each component of the application starts from a golden template scaffolded into a Git repository and configured accordingly (e.g., Maven group IDs for Java). Following 12-factor principles, connections to external services are tracked and injected as the appropriate connection strings at deployment time, so they can differ between development and staging.
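In code, that 12-factor approach amounts to reading connection settings from the environment rather than hard-coding them; the variable names below are illustrative, not the ones used in the demo.

```python
# Sketch of the 12-factor approach described above (names are illustrative):
# the code only reads connection settings from the environment, and the
# deployment injects different values for development vs. staging.
import os

DATABASE_URL = os.environ["DATABASE_URL"]                         # personal Postgres in dev, managed RDS later
KAFKA_BOOTSTRAP_SERVERS = os.environ["KAFKA_BOOTSTRAP_SERVERS"]   # dev-scoped Kafka vs. managed cluster
BFF_BASE_URL = os.environ["BFF_BASE_URL"]                         # deployment-specific endpoint for the front-end

print(f"Connecting to {DATABASE_URL} and {KAFKA_BOOTSTRAP_SERVERS}")
```

In a Kubernetes namespace these values typically arrive through the deployment configuration, so the same image can run unchanged in development and staging.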
Petr set up a managed Kafka service restricted to the development environment, alongside a personal PostgreSQL instance, ensuring independence from corporate-wide upgrades. This setup formed the foundation for an application composed of two components with CI pipelines and a Python-based frontend. The frontend connected to the back-end-for-front-end (BFF) via deployment-specific endpoints, enabling independent development while maintaining clarity in deployment and connection management.
Building on this foundation, the process began with the creation of an application, an environment, and two stateful components, using CI pipelines scaffolded from golden templates for Java and Python. Coding effort then focused on the frontend component. Despite limited proficiency in frontend work or Python, Petr leveraged Amazon Q to generate the necessary code: updating the application file to render a form and connect to the BFF endpoint, which is injected externally. The backend service was designed to return HTTP 200 on success and HTTP 400 for form-data errors. While the frontend wasn't production-ready, it effectively supported debugging of the BFF and RBL components.
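The generated frontend itself isn't reproduced in the talk, so the following Flask sketch only mirrors the behaviour described: render a form, post it to the externally injected BFF endpoint, and surface the 200/400 outcomes. The framework choice, routes, and field names are assumptions.

```python
# Illustrative only: a tiny Flask front-end matching the behaviour described
# above (render a form, send it to the BFF, report 200/400). The real demo
# code was generated with Amazon Q and is not reproduced here.
import os

import requests
from flask import Flask, request

app = Flask(__name__)

# The BFF endpoint is injected externally, per deployment (12-factor).
BFF_URL = os.environ.get("BFF_URL", "http://localhost:8080/submit")

FORM = """
<form method="post">
  <input name="name" placeholder="Name">
  <input name="email" placeholder="Email">
  <button type="submit">Send</button>
</form>
"""

@app.route("/", methods=["GET", "POST"])
def index():
    if request.method == "POST":
        resp = requests.post(BFF_URL, json=dict(request.form), timeout=5)
        if resp.status_code == 200:
            return FORM + "<p>Submitted successfully.</p>"
        if resp.status_code == 400:
            return FORM + "<p>The BFF rejected the form data.</p>"
        return FORM + f"<p>Unexpected response: {resp.status_code}</p>"
    return FORM

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```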
The overall workflow centered on a Git repository: changes pushed there trigger CI pipelines either manually or automatically, accommodating different branching styles such as Git flow. CodeNOW further streamlined the process by orchestrating integrations with enterprise tools like GitHub Actions, managing components while enforcing limited permissions for safety and security.
Next steps include creating a release by satisfying dependencies (e.g., Kafka, database connections) and synchronizing the state to the target environment using Argo. This process can be automated via APIs or tools like Terraform, with options for GitOps or manual configurations for demonstration purposes.
Centralized platform teams play a critical role here, but they must acknowledge the challenges that come with it. A new release configuration for the three components was created, validated by Argo CD, and deployed. Load testing followed, using Amazon Q to generate a K6 script targeting the BFF REST endpoint, with results monitored via pre-integrated open-source observability tools like Grafana. Logs and live tails provided insight into component performance.
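The K6 script itself isn't shown in the recap; as a rough stand-in for that kind of load test, the Python sketch below fires concurrent requests at the BFF endpoint (the URL and payload are assumptions), which is enough to make latency and error rates visible in Grafana and the log tails.

```python
# Not the K6 script from the demo: a rough Python stand-in that generates
# comparable load against the BFF REST endpoint so the dashboards and log
# tails have something to show. URL and payload are assumptions.
import os
import time
from concurrent.futures import ThreadPoolExecutor

import requests

BFF_URL = os.environ.get("BFF_URL", "http://localhost:8080/submit")

def call_once(i):
    start = time.perf_counter()
    resp = requests.post(
        BFF_URL,
        json={"name": f"user-{i}", "email": f"user-{i}@example.com"},
        timeout=10,
    )
    return resp.status_code, time.perf_counter() - start

# 500 requests across 20 worker threads.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(call_once, range(500)))

errors = sum(1 for status, _ in results if status >= 400)
avg_latency = sum(latency for _, latency in results) / len(results)
print(f"requests={len(results)} errors={errors} avg_latency={avg_latency * 1000:.1f} ms")
```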
Petr's talk at AWS Day Prague was a treasure trove of practical insights for modern software development.
Watch the demo video here.