Unlike Eureka, which embeds the registry inside the application, Consul runs as standalone software, which makes it less intrusive and easier to deploy.
The picture above shows a deployment spanning multiple data centers. Each data center runs at least three Consul servers: one leader and two followers.
The agent is the long-running daemon on every member of a Consul cluster, started with the consul agent command. An agent runs in either client or server mode. Since every node must run an agent, it is simpler to refer to nodes as clients or servers, though there are other agent instances as well. Every agent can serve the DNS and HTTP interfaces, and is responsible for running health checks and keeping services in sync.
A client forwards all RPC requests to a server agent. Clients are relatively stateless; the only background activity a client performs is participating in the LAN gossip pool, which has minimal resource overhead and consumes only a small amount of network bandwidth.
A server is an agent with extended responsibilities: maintaining cluster state, responding to RPC queries, exchanging WAN gossip with other data centers, and forwarding queries to the leader or to remote data centers.
Although the definition of a data center seems obvious, there are subtle details to consider. For example, in EC2, should multiple availability zones (EC2 and AZ are AWS concepts; see the AWS documentation if they are unfamiliar) be treated as a single data center? Consul defines a data center as a private, low-latency, high-bandwidth network environment, excluding communication over the public Internet. By that definition, multiple availability zones within a single EC2 region are considered part of one data center.
In the Consul documentation, "consistency" means agreement on the elected leader and on the ordering of transactions. Because these transactions are applied to a finite-state machine, this definition of consistency implies the consistency of a replicated state machine.
Consul is built on Serf, which provides a complete gossip protocol and is used in many places. Serf provides membership management, failure detection, and event broadcasting. Nodes in the gossip pool communicate with each other over UDP.
LAN gossip refers to the LAN gossip pool, which contains the nodes in the same local network or data center. Clients gossip with servers over the LAN, and every node in the data center belongs to this pool.
WAN gossip refers to the WAN gossip pool, which contains only servers, spanning different data centers; these servers communicate over the wide-area network.
Start in development mode: consul agent -dev. If you need the web interface, just add -ui. By default, the cluster's LAN gossip service listens on port 8301, the WAN gossip service on port 8302, the web/HTTP service on 8500, the DNS service on 8600, and the gRPC service on 8502.
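These defaults can also be spelled out, or overridden, in a configuration file using Consul's `ports` stanza. A hedged fragment follows, written to a temporary path purely for illustration (the field names come from Consul's documented options; the values are the defaults listed above):

```shell
# Sketch: write a ports config fragment and confirm it is valid JSON.
# On a real host this would live under /etc/consul.d/.
cat > /tmp/ports.json <<'EOF'
{
  "ports": {
    "serf_lan": 8301,
    "serf_wan": 8302,
    "http": 8500,
    "dns": 8600,
    "grpc": 8502
  }
}
EOF
python3 -m json.tool /tmp/ports.json > /dev/null && echo "ports config is valid JSON"
```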
By default, the agent starts in the server role. After startup, you can view node information with consul members, or request http://localhost:8500/v1/catalog/nodes through the HTTP interface to get the node information back as JSON.
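Querying the catalog endpoint requires a running agent (e.g. curl http://localhost:8500/v1/catalog/nodes), so the sketch below instead parses an invented sample of the response shape. The Node and Address field names match Consul's catalog API; the node names and addresses here are made up:

```shell
# Extract node names from a sample /v1/catalog/nodes response
# (the JSON body stands in for what curl would return from a live agent).
python3 -c 'import json, sys; print([n["Node"] for n in json.load(sys.stdin)])' <<'EOF'
[
  {"Node": "server1", "Address": "10.0.0.1"},
  {"Node": "server2", "Address": "10.0.0.2"}
]
EOF
```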
You can use the dig command to inspect Consul's DNS service, for example: dig @127.0.0.1 -p 8600 nodename.node.consul.
Generally speaking, Consul is configured with command-line flags at startup, but this is cumbersome. Instead, we first create a configuration file in the /etc/consul.d directory and load it at every start.
You can view Consul's configuration options at https://www.consul.io/docs/agent/options.html. Some of them are as follows:
Traditionally, configuration files are placed in the /etc/consul.d/ directory, so we create the directories on each machine. On server1, which starts in bootstrap mode, we create /etc/consul.d/bootstrap and /etc/consul.d/server. On server2 and server3, we create /etc/consul.d/server, and on the agent machine, we create /etc/consul.d/agent.
The configuration file config.json in the bootstrap directory on server1 is as follows:
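A plausible sketch of such a bootstrap config, written to a temporary path for illustration. The field names come from Consul's documented options; the datacenter name, data_dir, and encrypt placeholder are assumptions to be adapted, and the encrypt value must be replaced with the output of consul keygen:

```shell
mkdir -p /tmp/consul.d/bootstrap   # use /etc/consul.d/bootstrap on the real host
cat > /tmp/consul.d/bootstrap/config.json <<'EOF'
{
  "bootstrap": true,
  "server": true,
  "datacenter": "dc1",
  "data_dir": "/var/consul",
  "encrypt": "REPLACE_WITH_consul_keygen_OUTPUT",
  "log_level": "INFO",
  "enable_syslog": true
}
EOF
python3 -m json.tool /tmp/consul.d/bootstrap/config.json > /dev/null && echo "bootstrap config is valid JSON"
```

Note that "bootstrap": true is what lets this one server elect itself leader without a quorum, matching the startup sequence described below.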
The configuration file config.json in the server directories of the other two servers is as follows:
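A hedged sketch of the ordinary server config, again written to a temporary path. The field names are from Consul's options; the datacenter, data_dir, and the start_join addresses (here a made-up IP for server1) are placeholders, and encrypt must be the same key on all servers:

```shell
mkdir -p /tmp/consul.d/server   # use /etc/consul.d/server on the real host
cat > /tmp/consul.d/server/config.json <<'EOF'
{
  "bootstrap": false,
  "server": true,
  "datacenter": "dc1",
  "data_dir": "/var/consul",
  "encrypt": "REPLACE_WITH_consul_keygen_OUTPUT",
  "log_level": "INFO",
  "enable_syslog": true,
  "start_join": ["10.0.0.1"]
}
EOF
python3 -m json.tool /tmp/consul.d/server/config.json > /dev/null && echo "server config is valid JSON"
```

The only differences from the bootstrap file are "bootstrap": false and the start_join list pointing at an existing member.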
The config.json in the agent directory on the agent machine is as follows:
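A hedged sketch of the client-agent config. The decisive field is "server": false; the datacenter, data_dir, and the start_join addresses (made-up IPs for the three servers) are placeholders:

```shell
mkdir -p /tmp/consul.d/agent   # use /etc/consul.d/agent on the real host
cat > /tmp/consul.d/agent/config.json <<'EOF'
{
  "server": false,
  "datacenter": "dc1",
  "data_dir": "/var/consul",
  "encrypt": "REPLACE_WITH_consul_keygen_OUTPUT",
  "log_level": "INFO",
  "enable_syslog": true,
  "start_join": ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
}
EOF
python3 -m json.tool /tmp/consul.d/agent/config.json > /dev/null && echo "agent config is valid JSON"
```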
Now start Consul. On server1, run consul agent -config-dir /etc/consul.d/bootstrap, and then on server2 and server3 run consul agent -config-dir /etc/consul.d/server. The three Consul servers now form a cluster. At this point, the agent on server1 is running in bootstrap state, so it can elect itself leader without needing server2 and server3. We then terminate the Consul process on server1 and run consul agent -config-dir /etc/consul.d/server, letting server1 rejoin the cluster as an ordinary server. Finally, start the client agent with consul agent -config-dir /etc/consul.d/agent.
**Note: the encrypt key can be generated with the consul keygen command, and all servers must use the same value. If startup fails due to a configuration error and keeps failing after you fix it, try deleting $data_dir/serf/local.keyring and restarting.**
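consul keygen simply emits a base64-encoded random key (32 bytes in recent Consul versions; older releases used 16 bytes). If consul is not installed on the machine where you prepare the config files, an equivalent stand-in, assuming the 32-byte key size, is:

```shell
# Generate a gossip encryption key the same way consul keygen does
# (base64 of 32 random bytes; adjust to 16 for older Consul versions).
openssl rand -base64 32
```

Paste the resulting string into the encrypt field of every server's config.json.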
Create /etc/consul.d/
Then start the service with consul agent, specifying the configuration file. The services to register can be declared directly in the configuration file, or the node can join the cluster with the consul join command after startup. Log in to the web management interface again and you will find the newly created service.
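Declaring a service in the configuration file can be sketched as follows. The service/check structure follows Consul's service definition format; the service name "web", the port, and the health-check URL are invented examples:

```shell
mkdir -p /tmp/consul.d/agent   # use /etc/consul.d/agent on the real host
cat > /tmp/consul.d/agent/web.json <<'EOF'
{
  "service": {
    "name": "web",
    "tags": ["primary"],
    "port": 80,
    "check": {
      "http": "http://localhost:80/",
      "interval": "10s"
    }
  }
}
EOF
python3 -m json.tool /tmp/consul.d/agent/web.json > /dev/null && echo "service definition is valid JSON"
```

Once the agent is restarted (or reloaded) with this file in its config directory, the "web" service should appear in the web UI and in DNS as web.service.consul.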
Q: What is bootstrap mode?
A: A server started in this mode will elect itself leader. One server is designated in advance to start in bootstrap mode, which makes it easier to bring up a cluster.