#1199 closed defect (invalid)
nginx sends traffic to all or some of the upstreams
Reported by: | | Owned by: |
---|---|---|---
Priority: | blocker | Milestone: |
Component: | other | Version: | 1.8.x
Keywords: | multiple upstream | Cc: | ops@…
uname -a: | Linux nginx-baas91 2.6.32-504.el6.x86_64 #1 SMP Wed Oct 15 04:27:16 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | |
nginx -V: | nginx version: nginx/1.8.1, built by gcc 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC), built with OpenSSL 1.0.1e-fips 11 Feb 2013, TLS SNI support enabled, configure arguments: --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module | |
Description
Nginx is sending traffic to more than one upstream server, which causes the same request to be processed by multiple backend systems and results in the end user being charged multiple times.
Nginx configuration:
events {
worker_connections 1024;
}
worker_rlimit_nofile 10240;
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
# log_format combinedt '$remote_addr - $remote_user [$time_local] '
# '"$request" $status $body_bytes_sent '
# '"$http_referer" "$http_user_agent" "$request_time" '
# '"$http_x_forwarded_for" "$http_x_operamini_phone_ua" '
# '"$http_x_device_user_agent" $upstream_addr';
log_format combinedt '$remote_addr $host $server_name $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" "$request_time" '
'"$http_x_forwarded_for" "$http_x_operamini_phone_ua" '
- '"$http_x_device_user_agent"' '"$http_x_up_calling_line_id"' '"$http_productId" to: $upstream_addr "HE
- device_user_agent:" $http_x_device_user_agent "HE : : up_calling_line_id:" $http_x_up_calling_line_id "HE :: msisdn:" $http_msisdn "HE :: du_msisdn:" $http_x_du_msisdn "HE :: x_msisdn:" $http_x_msisdn "HE :: nokia_msisdn:" $http_x_nokia_msisdn "HE :: radius_1:" $http_x_radius_1 "HE :: radius_2:" $http_x_radius_2 "ARGS::" $args ';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
tcp_nopush on;
tcp_nodelay off;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
proxy_connect_timeout 25;
proxy_read_timeout 60;
proxy_send_timeout 25;
proxy_buffer_size 16k;
proxy_buffers 4 64k;
proxy_busy_buffers_size 128k;
proxy_temp_file_write_size 128k;
proxy_temp_path /dev/shm/vtemp_dir;
proxy_cache_path /dev/shm/vcache_dir levels=1:2
keys_zone=cache_one:10m
inactive=3d max_size=3g;
upstream vuconnect {
server 192.168.254.81:8080;
server 192.168.254.82:8080;
server 192.168.254.83:8080;
}
server {
listen 7777;
server_name 192.168.254.91;
access_log logs/access.log combinedt;
location / {
proxy_pass http://vuconnect;
proxy_read_timeout 250s;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto http;
}
}
}
Log received:
eslog.2017-01-28.10.bz2:192.168.254.94 192.168.254.9 192.168.254.91 - [28/Jan/2017:10:46:03 +0000] "GET /vuconnect/billing?msisdn=21629xxxxxx&transactionID=null&billingCode=1341343074&parentbillingCode=1341343074&activityType=renewal&callbackURL=http://xxxxxx.com&authToken=null&activityResult=Success&requestTypeId=2&chargingMode=backend&itemId=26817&itemTypeId=4&languageId=1&subscriptionStatusId=4 HTTP/1.1" 200 490 "-" "Jakarta Commons-HttpClient/3.1" "86.906" "-" "-" "-""-""-" to: 192.168.254.81:8080, 192.168.254.82:8080 "HE :: device_user_agent:" - "HE : : up_calling_line_id:" - "HE :: msisdn:" - "HE :: du_msisdn:" - "HE :: x_msisdn:" - "HE :: nokia_msisdn:" - "HE :: radius_1:" - "HE :: radius_2:" - "ARGS::" msisdn=216296XXXXX&transactionID=null&billingCode=1341343074&parentbillingCode=1341343074&activityType=renewal&callbackURL=http://xxxxxxx.com&authToken=null&activityResult=Success&requestTypeId=2&chargingMode=backend&itemId=26817&itemTypeId=4&languageId=1&subscriptionStatusId=4
Change History (4)
comment:1 by , 8 years ago
comment:2 by , 8 years ago
Posting the log once more for clarification.
The request is sent to 2 upstreams:
to: 192.168.254.81:8080, 192.168.254.82:8080 (the request was sent to 2 upstream servers).
Is this expected behavior? If it is, how can one make sure that a request sent to one upstream is never passed to the next upstream? This is required so that our billing application never processes the same request twice.
Any help will be appreciated.
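As a minimal sketch of how each attempt could be made visible in the access log, the standard ngx_http_upstream_module variables $upstream_status and $upstream_response_time can be logged alongside $upstream_addr; like $upstream_addr, they record one comma-separated value per attempt. The format name below is only illustrative:
log_format upstream_debug '$remote_addr [$time_local] "$request" $status "$request_time" '
                          'to: $upstream_addr status: $upstream_status time: $upstream_response_time';
When a request is passed on to a second server, each of these variables shows two comma-separated entries, so the status and timing of the abandoned first attempt can be seen directly in the log.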
comment:3 by , 8 years ago
Resolution: | → invalid
---|---
Status: | new → closed
Please check the documentation of the proxy_next_upstream directive.
If you have any questions about configuration or functionality, please use the mailing lists.
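For reference, a minimal sketch of how retries to the next upstream can be switched off for this location (by default proxy_next_upstream is "error timeout", so a request that fails or times out on one server is retried on the next one):
location / {
    proxy_pass http://vuconnect;
    # never pass a request on to another upstream server,
    # even after an error or timeout
    proxy_next_upstream off;
}
The trade-off is that transparent fail-over is lost: if the selected server fails, the client receives the error instead of being served by another backend.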
comment:4 by , 8 years ago
sensitive: | 1 → 0
---|---
And that's expected, because you did not specify any balancing method in the upstream block, so the round-robin method is used by default. See the upstream documentation: http://nginx.org/en/docs/http/ngx_http_upstream_module.html
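For completeness, a sketch of the upstream block with an explicitly chosen balancing method, using ip_hash purely as one example from the linked documentation (round-robin remains in effect when no method is given):
upstream vuconnect {
    # pin each client address to one backend instead of rotating round-robin
    ip_hash;
    server 192.168.254.81:8080;
    server 192.168.254.82:8080;
    server 192.168.254.83:8080;
}
Note that the balancing method only decides which server is tried first; whether a request is then passed on to another server after an error or timeout is controlled by proxy_next_upstream, as pointed out in comment 3.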