Have you heard about the Twelve-Factor App methodology?
The Twelve-Factor App methodology is an influential approach to designing scalable application architecture. Its goal is to help developers build apps whose architecture ensures speed, reliability, agility, and portability, and that ultimately results in a robust application.
Here are the 12 factors developers should think about when building cloud-native apps.
1. Codebase
Cloud-native applications must always consist of a single codebase that is tracked in a version-control system. This single codebase supports collaboration between development teams and enables proper versioning of applications. This codebase could be a Git repository (hosted on GitHub, GitHub Enterprise, GitLab, etc.).
2. Dependencies
Most applications require the use of external dependencies. A cloud-native application can never rely on the implicit existence of system-wide packages. This factor therefore encourages the explicit declaration and isolation of application dependencies.
The first step to achieving this factor is to identify, declare, and isolate any external dependencies within your application. Most contemporary programming languages have tools or facilities for managing these dependencies. In Java, two of the most popular tools for dependency management are Maven and Gradle.
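In Python, for example, one common way to satisfy this factor is to declare every dependency in a requirements file and install it into an isolated virtual environment (the package names and versions below are purely illustrative):

```
# requirements.txt — every external dependency declared explicitly,
# pinned to a version, instead of relying on system-wide packages
requests==2.31.0
sqlalchemy==2.0.25
```

Installing these into a dedicated virtual environment (created with `python -m venv`) isolates them from whatever happens to be installed on the host.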
3. Config
Don’t store config as constants in code. Instead, design the app to read its config from the environment.
This could include:
- URLs and other information about backing services, such as web services, and SMTP servers
- Information necessary to locate and connect to databases
- Credentials to third-party services such as Amazon AWS or APIs like Google Maps, Twitter, and Facebook
- Information that might normally be bundled in properties files or in XML or YAML configuration files
It is important that configuration and credentials are separated from the application code. Credentials are highly sensitive pieces of information and should never be shipped with application code, as doing so risks exposing your application’s backing services, internal URLs, and the resources and services your application relies on. This approach simplifies application deployment to multiple environments, reduces the risk of leaking credentials and passwords, and enables more effective release management.
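As a minimal sketch in Python (the variable names `DATABASE_URL` and `SMTP_HOST` are illustrative conventions, not something the methodology prescribes):

```python
import os

def load_config(env=os.environ):
    # Config comes from the environment, never from constants in code.
    # The defaults here are for local development only.
    return {
        "database_url": env.get("DATABASE_URL", "postgres://localhost:5432/dev"),
        "smtp_host": env.get("SMTP_HOST", "localhost"),
    }
```

The same build can then run unchanged in any environment; only the injected environment variables differ between deploys.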
4. Backing Services
Treat back-end services as attached resources to be accessed with a URL or other locator stored in config. A resource binding for a database might include a username, a password, and a URL that allows your application to consume that resource. An application should declare its need for a given backing service but allow the cloud environment to perform the actual resource binding. The binding of an application to its backing services should be done via external configuration. Embracing backing services as bound resources enables cloud-native applications to have greater flexibility and resilience, enabling loose-coupling between services and deployment.
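One common convention (assumed here, not mandated by the factor itself) is to pack the whole resource binding into a single URL-style locator supplied by the environment; a Python sketch of unpacking it:

```python
from urllib.parse import urlparse

def bind_resource(url):
    # Everything the app needs to consume the backing service arrives
    # in one locator from config; the app just declares and unpacks it.
    parts = urlparse(url)
    return {
        "scheme": parts.scheme,
        "host": parts.hostname,
        "port": parts.port,
        "username": parts.username,
        "password": parts.password,
        "database": parts.path.lstrip("/"),
    }
```

Swapping a local database for a managed one then means changing the locator in config, with no code change.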
5. Build, Release, Run
Strictly separate build and run stages.
A codebase is transformed into a (non-development) deploy through three stages:
- The build stage is a transform which converts a code repo into an executable bundle known as a build. Using a version of the code at a commit specified by the deployment process, the build stage fetches and vendors dependencies and compiles binaries and assets.
- The release stage takes the build produced by the build stage and combines it with the deploy’s current config. The resulting release contains both the build and the config and is ready for immediate execution in the execution environment.
- The run stage (also known as “runtime”) runs the app in the execution environment, by launching some set of the app’s processes against a selected release.
The twelve-factor app uses strict separation between the build, release, and run stages.
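The three stages can be sketched as plain functions (a toy model to show the data flow; real platforms track builds and releases with their own tooling):

```python
import itertools

_release_counter = itertools.count(1)

def build(commit):
    # Build stage: a specific commit becomes an immutable artifact.
    return {"commit": commit, "artifact": f"app-{commit}.tar.gz"}

def release(build_output, config):
    # Release stage: the build is paired with the deploy's current config
    # and given a unique, append-only release id.
    return {"id": f"v{next(_release_counter)}", **build_output, "config": dict(config)}

def run(rel):
    # Run stage: processes are launched against a selected release;
    # nothing about the release can change at runtime.
    return f"running {rel['artifact']} as release {rel['id']}"
```

Because each stage only consumes the previous stage’s output, a release can be rolled back by re-running the run stage against an older release id.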
6. Processes
Execute the app as one or more stateless processes. Data that must be persistent should be stored in a stateful backing service. State should not be maintained within your application. This is a useful factor as it means that if one instance of your application goes down, you don’t lose the current state. It also simplifies workload balancing as your application doesn’t have an affinity to any particular instance of a service.
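A minimal Python sketch of the idea, using an in-memory dict as a stand-in for a real shared backing service such as Redis:

```python
# Stand-in for a shared backing service (e.g. Redis); in a real deploy this
# lives outside the process, so no state is trapped in any one instance.
SESSION_STORE = {}

def handle_request(session_id, counter_store=SESSION_STORE):
    # Any process instance can serve any request, because the session
    # state is read from and written to the external store each time.
    count = counter_store.get(session_id, 0) + 1
    counter_store[session_id] = count
    return count
```

If this process dies mid-session, a fresh instance picks up exactly where it left off, because nothing it needed lived in process memory.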
7. Port binding
Use port binding to export services.
The port-binding factor states that cloud-native applications should export services using port binding. Your cloud provider should manage port assignment for you, because it is likely also managing routing, scaling, high availability, and fault tolerance, all of which require it to manage certain aspects of the network, including mapping host names to ports.
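A self-contained Python sketch using only the standard library: the app exports HTTP by binding to whatever port the environment supplies (reading a `PORT` variable is a common platform convention, assumed here, not part of the factor’s definition):

```python
import os
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep request logging quiet in this sketch

def serve(port=None):
    # The platform injects PORT; the default is for local development only.
    port = port or int(os.environ.get("PORT", "8080"))
    server = HTTPServer(("127.0.0.1", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

The app is fully self-contained: it does not rely on a webserver being injected into its runtime, it simply binds to a port and listens.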
8. Concurrency
Scale out apps horizontally, not vertically.
The concurrency factor stresses that microservices should be able to be scaled up or down, elastically, depending on their workload. Previously, when many applications were designed as monoliths and were run locally, this scaling was achieved through vertical scaling (i.e., adding CPUs, RAM, and other resources, virtual or physical). However, now that our applications are more fine-grained and running in the cloud, a more modern approach, one ideal for the kind of elastic scalability that the cloud supports, is to scale out, or horizontally. Rather than making a single big process even larger, you create multiple processes, and distribute the load of your application among those processes.
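A sketch of scaling out in Python: the same worker code runs in several identical processes and the load is distributed among them (`ProcessPoolExecutor` stands in for the platform’s process formation):

```python
from concurrent.futures import ProcessPoolExecutor

def handle(job):
    # Stand-in for one unit of app work (e.g. rendering a response).
    return job * job

def run_formation(jobs, workers=2):
    # Scale out: rather than one big process, several small identical
    # processes share the workload.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(handle, jobs))
```

Adding capacity then means raising the process count, which the platform can do elastically, instead of provisioning a bigger machine.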
9. Disposability
Use fast startups and graceful shutdowns to maximize robustness. A cloud-native application’s processes must be disposable, which means they can be started or stopped rapidly. An application cannot scale, deploy, release, or recover rapidly if it cannot start rapidly and shut down gracefully. This is especially important for cloud-native applications: if an application takes minutes to reach a steady state, then in today’s world of high traffic, hundreds or thousands of requests could be denied while it starts.
10. Dev/prod parity
Facilitate continuous deployment by ensuring that development, staging, and production environments are as similar as possible.
Historically, there have been substantial gaps between development (a developer making live edits to a local deploy of the app) and production (a running deploy of the app accessed by end users). These gaps manifest in three areas:
- The time gap: A developer may work on code that takes days, weeks, or even months to go into production.
- The personnel gap: Developers write code, ops engineers deploy it.
- The tools gap: Developers may be using a stack like Nginx, SQLite, and OS X, while the production deploy uses Apache, MySQL, and Linux.
The twelve-factor app is designed for continuous deployment by keeping all three gaps small:
- Make the time gap small: a developer may write code and have it deployed hours or even just minutes later.
- Make the personnel gap small: developers who wrote code are closely involved in deploying it and watching its behavior in production.
- Make the tools gap small: keep development and production as similar as possible.
11. Logs
Treat logs as event streams. The app should not concern itself with routing or storing its output stream. Logs provide visibility into the behavior of a running app. In server-based environments they are commonly written to a file on disk (a “logfile”); but this is only an output format.
Logs are the stream of aggregated, time-ordered events collected from the output streams of all running processes and backing services. Logs in their raw form are typically a text format with one event per line (though backtraces from exceptions may span multiple lines). Logs have no fixed beginning or end, but flow continuously as long as the app is operating.
Each running process writes its event stream, unbuffered, to `stdout`. During local development, the developer views this stream in the foreground of their terminal to observe the app’s behavior.
In staging and production deploys, each process’s stream is typically captured by the execution environment and routed to a log indexing and analysis system. These systems allow for great power and flexibility for introspecting an app’s behavior over time, including:
- Finding specific events in the past.
- Large-scale graphing of trends (such as requests per minute).
- Active alerting according to user-defined heuristics (such as an alert when the quantity of errors per minute exceeds a certain threshold).
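A minimal sketch of a twelve-factor-style logger in Python: each event is one line on `stdout`, and nothing in the app decides where the stream ends up (the JSON field names here are illustrative, not a standard):

```python
import json
import sys
import time

def log_event(level, message, **fields):
    # One event per line, written unbuffered to stdout; the execution
    # environment (not the app) routes and stores the stream.
    event = {"ts": time.time(), "level": level, "msg": message, **fields}
    sys.stdout.write(json.dumps(event) + "\n")
    sys.stdout.flush()
```

Because the app only ever writes to `stdout`, the same binary works whether the stream goes to a terminal, a file, or a log indexing system.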
12. Admin processes
Run admin tasks as one-off processes from a machine in the production environment that’s running the latest production code.
The process formation is the array of processes that are used to do the app’s regular business (such as handling web requests) as it runs. Separately, developers will often wish to do one-off administrative or maintenance tasks for the app, such as:
- Running database migrations (e.g. `manage.py migrate` in Django, `rake db:migrate` in Rails).
- Running a console (also known as a REPL shell) to run arbitrary code or inspect the app’s models against the live database. Most languages provide a REPL by running the interpreter without any arguments (e.g. `perl`), or in some cases have a separate command (e.g. `rails console` for Rails).
- Running one-time scripts committed into the app’s repo (e.g. `php scripts/fix_bad_records.php`).
One-off admin processes should be run in an identical environment as the regular long-running processes of the app. They run against a release, using the same codebase and config as any process run against that release. Admin code must ship with application code to avoid synchronization issues.
Twelve-factor strongly favors languages which provide a REPL shell out of the box, and which make it easy to run one-off scripts. In a local deploy, developers invoke one-off admin processes by a direct shell command inside the app’s checkout directory. In a production deploy, developers can use ssh or other remote command execution mechanism provided by that deploy’s execution environment to run such a process.
Thanks for reading.
Further reading:
- The Twelve-Factor App (12factor.net)
- What is 12 factor app (twelve-factor app)? (WhatIs.com)