Friday, December 26, 2014

Reactive Microservice Part 4

Best practices

Shared Test Environment

Developers can run CI jobs in a shared test environment that closely mirrors the prod environment. Its stability is treated with the same respect as trunk, which stays open and stable, so developers get a clear signal from testing their local changes against the shared test environment.

Canary deployment

Canary services can be rolled out little by little, directing a growing share of traffic toward them. If there is any problem, we can roll the service back. Netflix Asgard is a deployment-management tool that manages red/black pushes.
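The gradual traffic shift can be pictured as weighted routing between the stable and canary deployments. A minimal sketch of the idea (this is an illustration of the pattern, not how Asgard itself works; all names are made up):

```python
import random

def make_canary_router(canary_weight):
    """Return a router sending `canary_weight` (0.0-1.0) of traffic to
    the canary and the rest to stable. Names are illustrative."""
    def route(request_id):
        return "canary" if random.random() < canary_weight else "stable"
    return route

# Start with 1% of traffic on the canary; ramp up by raising the weight,
# or roll back instantly by setting it to 0.
route = make_canary_router(0.01)
```

Rolling back is then just dropping the weight to zero, which is why the canary pattern makes bad pushes cheap to undo.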

Monitor Everything

Service dependency visualization should be able to answer questions like: how many dependencies does my service have? What's the call volume on my service? Are any dependency services running hot? What are the top N slowest "business transactions"? Distributed tracing can be used to build a graph of all the service dependencies. Some APM (application performance monitoring) tools like Dynatrace can also trace a request through multiple services. Dashboard metrics (a quick scan of service health: median, 99%ile, 99.9%ile) are also essential, e.g. Netflix Hystrix/Turbine; 500px uses Datadog, and Google has Cloud Trace and Cloud Debug (part of Google Cloud Monitoring). Other metrics for measuring incident response are MTTD (mean time to detection), MTTR (mean time to resolution), and the method of detection.
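The dashboard percentiles mentioned above (median, 99%ile, 99.9%ile) can be computed from raw latency samples with a simple nearest-rank calculation. A minimal sketch, with made-up sample data:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: p is in [0, 100]."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100.0 * len(ordered)))
    return ordered[rank - 1]

# Illustrative request latencies in milliseconds; note the slow outliers
# that only the tail percentiles reveal.
latencies_ms = [12, 15, 11, 220, 14, 13, 16, 18, 12, 900]
summary = {
    "median": percentile(latencies_ms, 50),
    "p99":    percentile(latencies_ms, 99),
    "p999":   percentile(latencies_ms, 99.9),
}
```

This is why dashboards show tail percentiles and not just averages: the median here looks healthy while p99/p99.9 expose the slow requests.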

12 factor application

The twelve-factor app is a methodology for building software-as-a-service apps that:
  • Use declarative formats for setup automation, to minimize time and cost for new developers joining the project;
  • Have a clean contract with the underlying operating system, offering maximum portability between execution environments;
  • Are suitable for deployment on modern cloud platforms, obviating the need for servers and systems administration;
  • Minimize divergence between development and production, enabling continuous deployment for maximum agility;
  • And can scale up without significant changes to tooling, architecture, or development practices.
Below are the 12 factors:
  1. Codebase – one codebase tracked in revision control, many deploys
  2. Dependencies – Explicitly declare and isolate dependencies
  3. Config – Store config in the environment
  4. Backing Services – Treat backing services as attached resources
  5. Build, release, run – strictly separate build and run stages
  6. Processes – execute the app as one or more stateless processes
  7. Port binding – export services via port binding
  8. Concurrency – scale out via the process model
  9. Disposability – maximize robustness with fast startup and graceful shutdown
  10. Dev/prod parity – keep development, staging and production as similar as possible
  11. Logs – treat logs as event streams
  12. Admin processes – run admin/management tasks as one-off processes
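Factor 3 (store config in the environment) is the one that most directly affects day-to-day code. A minimal sketch of environment-driven config, where all variable names and defaults are illustrative:

```python
import os

def load_config(env=os.environ):
    """Read config from environment variables (twelve-factor, factor 3).

    The same build can then be deployed to dev, staging, and prod with
    no code changes, only different environments (this also supports
    factor 10, dev/prod parity). Names and defaults are made up.
    """
    return {
        "database_url": env.get("DATABASE_URL", "postgres://localhost/dev"),
        "port": int(env.get("PORT", "8080")),
        "log_level": env.get("LOG_LEVEL", "info"),
    }
```

Passing `env` as a parameter also keeps the function easy to test without touching the real process environment.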

Automation

Automate everything, from building and testing to deployment and rollback. Developers should be self-service, with no need to ask the production team or build team to enable them to develop.

Mock services

Each service should have a mock service that consumer services can use for testing. Ideally it is provided by the team that develops the service, but in reality it is often developed by the consumer service's team.
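A mock service can be as simple as an in-process stub that returns canned responses and records what it was asked, so the consumer can be tested without the real service. A minimal sketch; the service name, method, and data are all hypothetical:

```python
class MockRecommendationService:
    """Hypothetical stand-in for a downstream recommendation service.

    Returns canned responses and records calls so consumer-side tests
    can assert on how the dependency was used.
    """

    def __init__(self, canned=None):
        self.canned = canned or {}
        self.calls = []  # record of user_ids the consumer asked about

    def recommend(self, user_id):
        self.calls.append(user_id)
        return self.canned.get(user_id, [])
```

When the providing team owns the mock, it is more likely to stay faithful to the real service's contract, which is why that ownership is the best practice.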

Service Chassis (@Kixeye)

A service chassis is a framework for developing microservices that has all the core integrations included:

  • Configuration integration
  • Registration and discovery
  • Monitoring and metrics
  • Load-balancing for downstream services
  • Failure management for downstream services
  • Dev: service template in Maven
  • Deployment: build pipeline through Puppet -> Packer -> AMI
  • NetflixOSS: Asgard, Hystrix, Ribbon + websockets (Janus), Eureka
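The "failure management for downstream services" item is typically a circuit breaker, which is what Hystrix provides on the JVM. A minimal sketch of the pattern (thresholds, names, and the single-threaded design are illustrative simplifications, not Hystrix's actual implementation):

```python
import time

class CircuitBreaker:
    """Minimal circuit-breaker sketch for calls to a downstream service.

    Opens after `max_failures` consecutive failures; while open, calls
    fail fast with the fallback instead of waiting on a sick dependency;
    after `reset_after` seconds, one trial call is let through.
    """

    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock          # injectable for testing
        self.failures = 0
        self.opened_at = None       # None means the circuit is closed

    def call(self, fn, fallback):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                return fallback()   # open: fail fast, shed load
            self.opened_at = None   # half-open: allow one trial call
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()
            return fallback()
        self.failures = 0           # success closes the circuit again
        return result
```

Baking this into the chassis, rather than each service, is the point of the pattern: every downstream call gets the same failure management for free.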