Tuesday, August 9, 2016

Cloud Backbone: Introducing Nomad, an Orchestration Tool


Hello, this is Kin.



Today I'd like to introduce Nomad, one of the orchestration tools out there. This is a write-up I originally put together while preparing a presentation on orchestration tools, and hmm, it's not bad.

Nomad is an ambitious(?) project from HashiCorp, the company famous for Vagrant and Consul.

Hmm, because it is a fairly late entrant compared to the other solutions, it has some architecturally refined advantages over the existing products.

As I write this, the current version is 0.4, so it is still far from mature, but it is an open-source project with a very ambitious roadmap.

Oh, and Nomad is an implementation based on Google's Borg and Omega papers.

Now let's take a gentle look around.


Nomad

Nomad is an orchestration tool that abstracts away machines and the locations of applications: users declare what they want to run, and Nomad handles where and how to run it.

Background: Multi-Datacenter and Multi-Region Aware
Nomad models infrastructure as a hierarchy of the structures below:

  • Datacenter
    • A datacenter contains nodes that are all located on the same local area network
    • e.g. us-west, us-east
  • Region
    • A region may contain multiple datacenters
    • Scheduling operates at the region level
    • A region is a single consensus group, meaning its servers work together and elect a single leader
    • Regions are fully independent from each other (they do not share jobs, clients, or state)
    • Each region is expected to have either three or five servers
    • e.g. us, korea
  • Multi-region
    • A single cluster combining several regions
    • Scheduling does not operate at the multi-region level
    • Regions do not share jobs, clients, or state
    • Queries and job submissions can be forwarded between regions
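To make the region model a bit more concrete, here is a minimal Go sketch (my own illustration, not from the Nomad docs) that asks a local agent which regions it knows about through the HTTP API's /v1/regions endpoint, and then issues a query with the region query parameter so it gets forwarded to another region. The agent address and the region name "korea" are assumptions, and the exact endpoints available depend on the Nomad version you run.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
)

// Assumed address of a local Nomad agent (default HTTP port).
const nomadAddr = "http://127.0.0.1:4646"

func get(path string) ([]byte, error) {
	resp, err := http.Get(nomadAddr + path)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return ioutil.ReadAll(resp.Body)
}

func main() {
	// List the regions this agent is aware of.
	body, err := get("/v1/regions")
	if err != nil {
		panic(err)
	}
	var regions []string
	if err := json.Unmarshal(body, &regions); err != nil {
		panic(err)
	}
	fmt.Println("known regions:", regions)

	// Requests can be forwarded to another region with the ?region=
	// query parameter, e.g. listing the jobs of the "korea" region
	// (region name assumed) from an agent that lives in "us".
	body, err = get("/v1/jobs?region=korea")
	if err != nil {
		panic(err)
	}
	fmt.Println("jobs in korea:", string(body))
}
```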
Gossip protocol (used to manage membership; based on Serf)
  • LAN Gossip (Datacenter) - refers to the LAN gossip pool, which contains nodes that are all located on the same local area network or datacenter.
  • WAN Gossip (Region, Multi-region) - refers to the WAN gossip pool, which contains only servers. These servers are primarily located in different datacenters and typically communicate over the internet or a wide area network.
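A quick way to peek at the gossip pools is the agent's members endpoint. The sketch below (assuming a server agent on 127.0.0.1:4646; the output on a client agent differs) simply dumps /v1/agent/members, which on a server shows the Serf members it gossips with.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Ask a server agent for its gossip membership (address assumed).
	resp, err := http.Get("http://127.0.0.1:4646/v1/agent/members")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	// On a server this lists the Serf members, i.e. the servers it
	// gossips with over the WAN pool.
	fmt.Println(string(body))
}
```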
Subsystems
  • Nomad servers
    • Servers manage all jobs and clients
    • Servers in a region participate in making scheduling decisions in parallel
    • Run evaluations and create task allocations for scheduling
    • Replicate cluster state between each other
    • Perform leader election
    • Leader & followers
      • Leader - the leader is responsible for processing all queries and transactions
      • Followers - participate in making scheduling decisions and forward queries to the leader
  • Nomad clients
    • Register themselves with the servers
    • Send heartbeats for liveness
    • Wait for new allocations
    • Update the status of their allocations
    • Provide the servers with their available resources, attributes, and installed drivers
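Both subsystems can be observed from the outside with a few read-only HTTP API calls: /v1/status/leader and /v1/status/peers show the elected leader and the server peers of a region, while /v1/nodes lists the clients that have registered. A rough sketch, assuming a local agent on the default port:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

// Assumed agent address; point this at any server or client agent.
const nomadAddr = "http://127.0.0.1:4646"

func dump(path string) {
	resp, err := http.Get(nomadAddr + path)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Printf("%s -> %s\n", path, body)
}

func main() {
	dump("/v1/status/leader") // the elected leader among the servers
	dump("/v1/status/peers")  // all server peers (leader + followers)
	dump("/v1/nodes")         // registered client nodes and their status
	// A single node's resources, attributes, and installed drivers can be
	// read from /v1/node/<node-id>.
}
```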
Job 
Each job file contains a single job, and a job may have multiple task groups.
  • Task
    • A task is the smallest unit of work in Nomad; tasks are executed by drivers
  • Task Group
    • A task group is a set of tasks that must be run together on the same client node
  • Drivers
    • The means of executing tasks (Docker, Qemu, Java, and static binaries)
  • Job type (= scheduler type)
    • Service
      • For services that should never go down
      • Based on the best-fit scoring algorithm from Google Borg
      • Relatively long scheduling time
    • Batch
      • For short-lived jobs that are less sensitive to short-term performance fluctuations
      • Based on Berkeley's Sparrow scheduler
    • System
      • For jobs that should be run on all clients that meet the job's constraints
      • Invoked when clients join the cluster or transition into the ready state
      • Jobs are re-evaluated and their tasks placed on newly available nodes if the constraints are met
  • Constraint
    • Constraints can be specified at the job, task group, or task level
    • Constraint types
      • Hardware filters
        • Architecture type
        • Number of CPU cores
        • AWS client id & instance type
      • Software filters
        • Kernel name & version
        • Installed drivers
      • Name & id filters
        • Datacenter name
        • Node id or name
        • A metadata key
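To tie the job pieces together, the sketch below models the job → task group → task hierarchy (plus a constraint) as plain Go structs and prints it as JSON. The struct and field names are purely illustrative and do not claim to match Nomad's HCL or HTTP API schema exactly; a real job would be written as an HCL job file and submitted with `nomad run`, and the exact constraint attribute syntax (e.g. ${attr.kernel.name}) depends on the Nomad version.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative types only: they mirror the job -> task group -> task
// hierarchy described above, not Nomad's exact schema.
type Constraint struct {
	Attribute string // e.g. kernel name, CPU arch, datacenter, node id/name, a metadata key
	Value     string
}

type Task struct {
	Name   string
	Driver string            // docker, qemu, java, or a static binary driver
	Config map[string]string // driver-specific configuration
}

type TaskGroup struct {
	Name  string
	Count int
	Tasks []Task // all tasks of a group are placed on the same client node
}

type Job struct {
	ID          string
	Type        string // service, batch, or system
	Region      string
	Datacenters []string
	Constraints []Constraint // constraints can also be set per group or per task
	TaskGroups  []TaskGroup
}

func main() {
	job := Job{
		ID:          "web",
		Type:        "service",
		Region:      "us",
		Datacenters: []string{"us-west"},
		Constraints: []Constraint{{Attribute: "${attr.kernel.name}", Value: "linux"}}, // attribute name assumed
		TaskGroups: []TaskGroup{{
			Name:  "frontend",
			Count: 2,
			Tasks: []Task{{
				Name:   "nginx",
				Driver: "docker",
				Config: map[string]string{"image": "nginx:1.11"},
			}},
		}},
	}

	// Print the hierarchy as JSON just to visualize its shape.
	out, err := json.MarshalIndent(job, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```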


Nomad pros/cons
  • Pros
    • Multi-datacenter modeling (a mechanism for modeling and managing multiple datacenters together; Kubernetes currently covers this by integrating Ubernetes)
    • Supports not only container technology but also standalone binaries and various virtual machines
  • Cons
    • Version 0.4 (still a long way to go)
    • It advertises itself as compact and simple, but in practice it is light because features are missing (volume management is only planned)
    • Docker tasks are unconditionally based on host networking (overlay networks cannot be used)
    • No logical grouping concept like the service concept of Docker Compose or Kubernetes (when asked, the answer is always "planned")
    • Docker features cannot be fully used: the roadmap is so general that Docker-specific options cannot be set in the job file, e.g. Docker's logging features, i.e. plugins such as logstash, cannot be used
