The Road to Technical Transformation, Part 4: Resource Isolation via Application Splitting

Domain splitting solves the problem of the main request path becoming unavailable under traffic spikes; application splitting is meant to eliminate the avalanche effect triggered by a single failure.

The Avalanche Effect of a Monolithic Application

The team's service today is a single monolithic application. Whether the consumer is an internal application or the external APP, every request is routed to the same backend service, so all consumers share the same backend machine resources, including process resources. The problem with this architecture is obvious: a fault in that single application spreads to every service consumer. For example, due to a design flaw, in certain extreme cases the client APP would retry requests against the server; mishandled on the backend, those retries escalated into a traffic attack that brought down all backend services. Because the processes were not isolated, once the backend went down every business it supported was effectively paralyzed: an avalanche effect.
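The retry storm described above is usually tamed on the client side with exponential backoff plus jitter, so that failing clients spread their retries out instead of hammering the backend in synchronized waves. A minimal sketch in Python (the function name and parameters are illustrative, not from the original system):

```python
import random
import time


def retry_with_backoff(call, max_retries=5, base_delay=0.1, cap=5.0):
    """Invoke `call`, retrying on failure with exponential backoff
    plus full jitter so retries from many clients do not arrive in
    synchronized bursts."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error
            # exponential backoff capped at `cap`, randomized (full jitter)
            delay = random.uniform(0, min(cap, base_delay * (2 ** attempt)))
            time.sleep(delay)
```

With a policy like this, even a misbehaving backend sees a decaying, randomized trickle of retries rather than the amplifying wave that triggered the avalanche.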

Split Application Deployment

All services depend on the same machine resources largely because the original codebase is one large monolith with tightly coupled modules. Given the complexity of the business, fully splitting all of the business code is hard to achieve in the short term, but the businesses can be deployed separately: route different domains or URLs, divided by business function, to different backend APPs, achieving the split at the deployment level.

The nginx configuration before the change:

upstream rest_frontends {
    # ip_hash;
    keepalive 1200;
    server ip1:19600 max_fails=1 fail_timeout=10s;
    server ip2:19600 max_fails=1 fail_timeout=10s;
    server ip3:19600 max_fails=1 fail_timeout=10s;
    server ip4:19600 max_fails=1 fail_timeout=10s;
    server ip5:19600 max_fails=1 fail_timeout=10s;
    server ip6:19600 max_fails=1 fail_timeout=10s;
    server ip7:19600 max_fails=1 fail_timeout=10s;
    server ip8:19600 max_fails=1 fail_timeout=10s;
    server ip9:19600 max_fails=1 fail_timeout=10s;
}

server {
    listen 80;
    server_name api.host.com;
    client_max_body_size 50M;
    access_log /data/log/nginx/rest.log api_access;
    error_log /data/log/nginx/rest_error.log;

    location /volvo {
        include /etc/nginx/martin_allow.conf;
        deny all;
        proxy_pass_header User-Agent;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Scheme $scheme;
        proxy_next_upstream error timeout invalid_header http_502;
        proxy_pass http://rest_frontends;
    }

    location /lotus {
        proxy_pass_header User-Agent;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Scheme $scheme;
        proxy_pass http://rest_frontends;
    }
}

The nginx configuration after the change:

upstream rest_frontends_volvo {
    # ip_hash;
    keepalive 1200;
    server ip1:19600 max_fails=1 fail_timeout=10s;
    server ip2:19600 max_fails=1 fail_timeout=10s;
    server ip3:19600 max_fails=1 fail_timeout=10s;
    server ip4:19600 max_fails=1 fail_timeout=10s;
}

upstream rest_frontends_lotus {
    keepalive 1200;
    server ip5:19600 max_fails=1 fail_timeout=10s;
    server ip6:19600 max_fails=1 fail_timeout=10s;
    server ip7:19600 max_fails=1 fail_timeout=10s;
    server ip8:19600 max_fails=1 fail_timeout=10s;
    server ip9:19600 max_fails=1 fail_timeout=10s;
}

server {
    listen 80;
    server_name api.host.com;
    client_max_body_size 50M;
    access_log /data/log/nginx/rest.log api_access;
    error_log /data/log/nginx/rest_error.log;

    location /volvo {
        include /etc/nginx/martin_allow.conf;
        deny all;
        proxy_pass_header User-Agent;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Scheme $scheme;
        proxy_next_upstream error timeout invalid_header http_502;
        proxy_pass http://rest_frontends_volvo;
    }

    location /lotus {
        proxy_pass_header User-Agent;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Scheme $scheme;
        proxy_pass http://rest_frontends_lotus;
    }
}

After the split, each business function's backend is effectively isolated from the others: even if one business brings down its backend service, the failure does not spread to the other businesses.
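The dispatch the split configuration performs can be sketched in a few lines of Python: a map from URL path prefix to an isolated upstream pool, with longest-prefix matching like nginx prefix locations. The pool names and `ipN:port` placeholders mirror the config above; this is an illustration of the routing idea, not nginx's actual implementation.

```python
# Path-prefix -> isolated upstream pool, mirroring the nginx
# `location` blocks (ipN:19600 are placeholder hosts).
UPSTREAMS = {
    "/volvo": ["ip1:19600", "ip2:19600", "ip3:19600", "ip4:19600"],
    "/lotus": ["ip5:19600", "ip6:19600", "ip7:19600",
               "ip8:19600", "ip9:19600"],
}


def pick_pool(path):
    """Return the backend pool for a request path; the longest
    matching prefix wins, as with nginx prefix locations.
    Returns None when no location matches."""
    matches = [prefix for prefix in UPSTREAMS if path.startswith(prefix)]
    if not matches:
        return None
    return UPSTREAMS[max(matches, key=len)]
```

Because the two pools share no servers, saturating every host in `rest_frontends_volvo` leaves the `/lotus` pool untouched, which is exactly the isolation property the split is after.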
