Question:

Kubernetes nginx ingress file upload returns 502

赫连晋
2023-03-14

I am trying to upload files from a client through an nginx ingress. After receiving a 413 response, I set the following annotations on the ingress:

Annotations:   nginx.ingress.kubernetes.io/body-size: 1024m
               nginx.ingress.kubernetes.io/client-body-buffer-size: 50m
               nginx.ingress.kubernetes.io/client-max-body-size: 50m
               nginx.ingress.kubernetes.io/proxy-body-size: 1024m
               nginx.ingress.kubernetes.io/proxy-buffer-size: 32k
               nginx.ingress.kubernetes.io/proxy-buffers-number: 8

The client is an Angular application. It sends the file as a base64 string in the request body. I have tried uploading images of only a few KB, so I am definitely not hitting these limits. I am new to Kubernetes. Do I need to restart the ingress for these annotations to take effect?

I have also tried creating a ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-ingress-configuration
  namespace: development
  labels:
    app.kubernetes.io/name: [name of ingress]
    app.kubernetes.io/part-of: [name of ingress]
data:
  proxy-connect-timeout: "50"
  proxy-read-timeout: "120"
  proxy-send-timeout: "120"
  body-size: "1024m"
  client-body-buffer-size: "50m"
  client-max-body-size: "50m"
  proxy-body-size: "1024m"
  proxy-buffers: "8 32k"
  proxy-buffer-size: "32k"
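One thing worth verifying here: the ingress-nginx controller only reads the ConfigMap named in its `--configmap` startup argument, so a ConfigMap created under a different name or namespace is silently ignored. A sketch of how to check (the namespace and deployment names below are assumptions; adjust to your install):

```shell
# Namespace/deployment names are assumptions; adjust to your install.
kubectl -n ingress-nginx get deploy ingress-nginx-controller \
  -o jsonpath='{.spec.template.spec.containers[0].args}'
# Look for an argument like --configmap=ingress-nginx/ingress-nginx-controller;
# global settings such as proxy-body-size must live in THAT ConfigMap to take effect.
```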

Still getting a 502.

I don't know how to access nginx.conf through kubectl; from the docs it seems that if I update this ConfigMap, the settings in nginx should change anyway.
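For the kubectl question: the rendered nginx.conf lives inside the controller pod and can be read with `kubectl exec`. A sketch (the pod name and namespace are placeholders):

```shell
# Pod name and namespace are placeholders; find yours with `kubectl get pods -A`.
kubectl -n ingress-nginx exec ingress-nginx-controller-xxxxx -- \
  cat /etc/nginx/nginx.conf | grep -E 'client_max_body_size|proxy_buffer'
```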

Any help appreciated.

Update

nginx.conf


# Configuration checksum: 1961171210939107273

# setup custom paths that do not require root access
pid /tmp/nginx.pid;

daemon off;

worker_processes 2;

worker_rlimit_nofile 523264;

worker_shutdown_timeout 240s ;

events {
    multi_accept        on;
    worker_connections  16384;
    use                 epoll;
}

http {
    client_max_body_size 100M;

    lua_package_path "/etc/nginx/lua/?.lua;;";
    
    lua_shared_dict balancer_ewma 10M;
    lua_shared_dict balancer_ewma_last_touched_at 10M;
    lua_shared_dict balancer_ewma_locks 1M;
    lua_shared_dict certificate_data 20M;
    lua_shared_dict certificate_servers 5M;
    lua_shared_dict configuration_data 20M;
    
    init_by_lua_block {
        collectgarbage("collect")
        
        -- init modules
        local ok, res
        
        ok, res = pcall(require, "lua_ingress")
        if not ok then
        error("require failed: " .. tostring(res))
        else
        lua_ingress = res
        lua_ingress.set_config({
            use_forwarded_headers = false,
            use_proxy_protocol = false,
            is_ssl_passthrough_enabled = false,
            http_redirect_code = 308,
        listen_ports = { ssl_proxy = "442", https = "443" },
            
            hsts = true,
            hsts_max_age = 15724800,
            hsts_include_subdomains = true,
            hsts_preload = false,
        })
        end
        
        ok, res = pcall(require, "configuration")
        if not ok then
        error("require failed: " .. tostring(res))
        else
        configuration = res
        end
        
        ok, res = pcall(require, "balancer")
        if not ok then
        error("require failed: " .. tostring(res))
        else
        balancer = res
        end
        
        ok, res = pcall(require, "monitor")
        if not ok then
        error("require failed: " .. tostring(res))
        else
        monitor = res
        end
        
        ok, res = pcall(require, "certificate")
        if not ok then
        error("require failed: " .. tostring(res))
        else
        certificate = res
        end
        
        ok, res = pcall(require, "plugins")
        if not ok then
        error("require failed: " .. tostring(res))
        else
        plugins = res
        end
        -- load all plugins that'll be used here
    plugins.init({})
    }
    
    init_worker_by_lua_block {
        lua_ingress.init_worker()
        balancer.init_worker()
        
        monitor.init_worker()
        
        plugins.run()
    }
    
    geoip_country       /etc/nginx/geoip/GeoIP.dat;
    geoip_city          /etc/nginx/geoip/GeoLiteCity.dat;
    geoip_org           /etc/nginx/geoip/GeoIPASNum.dat;
    geoip_proxy_recursive on;
    
    aio                 threads;
    aio_write           on;
    
    tcp_nopush          on;
    tcp_nodelay         on;
    
    log_subrequest      on;
    
    reset_timedout_connection on;
    
    keepalive_timeout  75s;
    keepalive_requests 100;
    
    client_body_temp_path           /tmp/client-body;
    fastcgi_temp_path               /tmp/fastcgi-temp;
    proxy_temp_path                 /tmp/proxy-temp;
    ajp_temp_path                   /tmp/ajp-temp;
    
    client_header_buffer_size       1M;
    client_header_timeout           60s;
    large_client_header_buffers     4 5M;
    client_body_buffer_size         1M;
    client_body_timeout             60s;
    
    http2_max_field_size            1M;
    http2_max_header_size           5M;
    http2_max_requests              1000;
    http2_max_concurrent_streams    128;
    
    types_hash_max_size             2048;
    server_names_hash_max_size      1024;
    server_names_hash_bucket_size   64;
    map_hash_bucket_size            64;
    
    proxy_headers_hash_max_size     512;
    proxy_headers_hash_bucket_size  64;
    
    variables_hash_bucket_size      256;
    variables_hash_max_size         2048;
    
    underscores_in_headers          off;
    ignore_invalid_headers          on;
    
    limit_req_status                503;
    limit_conn_status               503;
    
    include /etc/nginx/mime.types;
    default_type text/html;
    
    gzip on;
    gzip_comp_level 5;
    gzip_http_version 1.1;
    gzip_min_length 256;
    gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/javascript text/plain text/x-component;
    gzip_proxied any;
    gzip_vary on;
    
    # Custom headers for response
    
    server_tokens on;
    
    # disable warnings
    uninitialized_variable_warn off;
    
    # Additional available variables:
    # $namespace
    # $ingress_name
    # $service_name
    # $service_port
    log_format upstreaminfo '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id';
    
    map $request_uri $loggable {
        
        default 1;
    }
    
    access_log /var/log/nginx/access.log upstreaminfo  if=$loggable;
    
    error_log  /var/log/nginx/error.log notice;
    
    resolver 10.245.0.10 valid=30s;
    
    # See https://www.nginx.com/blog/websocket-nginx
    map $http_upgrade $connection_upgrade {
        default          upgrade;
        
        # See http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
        ''               '';
        
    }
    
    # Reverse proxies can detect if a client provides a X-Request-ID header, and pass it on to the backend server.
    # If no such header is provided, it can provide a random value.
    map $http_x_request_id $req_id {
        default   $http_x_request_id;
        
        ""        $request_id;
        
    }
    
    # Create a variable that contains the literal $ character.
    # This works because the geo module will not resolve variables.
    geo $literal_dollar {
        default "$";
    }
    
    server_name_in_redirect off;
    port_in_redirect        off;
    
    ssl_protocols TLSv1.2;
    
    ssl_early_data off;
    
    # turn on session caching to drastically improve performance
    
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_session_timeout 10m;
    
    # allow configuring ssl session tickets
    ssl_session_tickets on;
    
    # slightly reduce the time-to-first-byte
    ssl_buffer_size 4k;
    
    # allow configuring custom ssl ciphers
    ssl_ciphers '';
    ssl_prefer_server_ciphers on;
    
    ssl_ecdh_curve auto;
    
    # PEM sha: ---
    ssl_certificate     /etc/ingress-controller/ssl/default-fake-certificate.pem;
    ssl_certificate_key /etc/ingress-controller/ssl/default-fake-certificate.pem;
    
    proxy_ssl_session_reuse on;
    
    upstream upstream_balancer {
        ### Attention!!!
        #
        # We no longer create "upstream" section for every backend.
        # Backends are handled dynamically using Lua. If you would like to debug
        # and see what backends ingress-nginx has in its memory you can
        # install our kubectl plugin https://kubernetes.github.io/ingress-nginx/kubectl-plugin.
        # Once you have the plugin you can use "kubectl ingress-nginx backends" command to
        # inspect current backends.
        #
        ###
        
        server 0.0.0.1; # placeholder
        
        balancer_by_lua_block {
            balancer.balance()
        }
        
        keepalive 32;
        
        keepalive_timeout  60s;
        keepalive_requests 100;
        
    }
    
    # Cache for internal auth checks
    proxy_cache_path /tmp/nginx-cache-auth levels=1:2 keys_zone=auth_cache:10m max_size=128m inactive=30m use_temp_path=off;
    
    # Global filters
    
    ## start server _
    server {
        server_name _ ;
        
        listen 80 default_server reuseport backlog=511 ;
        listen [::]:80 default_server reuseport backlog=511 ;
        listen 443 default_server reuseport backlog=511 ssl http2 ;
        listen [::]:443 default_server reuseport backlog=511 ssl http2 ;
        
        set $proxy_upstream_name "-";
        
        ssl_certificate_by_lua_block {
            certificate.call()
        }
        
        location / {
            
            set $namespace      "";
            set $ingress_name   "";
            set $service_name   "";
            set $service_port   "";
            set $location_path  "/";
            
            rewrite_by_lua_block {
                lua_ingress.rewrite({
                    force_ssl_redirect = false,
                    ssl_redirect = false,
                    force_no_ssl_redirect = false,
                    use_port_in_redirects = false,
                })
                balancer.rewrite()
                plugins.run()
            }
            
            # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
            # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
            # other authentication method such as basic auth or external auth useless - all requests will be allowed.
            #access_by_lua_block {
            #}
            
            header_filter_by_lua_block {
                lua_ingress.header()
                plugins.run()
            }
            
            body_filter_by_lua_block {
            }
            
            log_by_lua_block {
                balancer.log()
                
                monitor.call()
                
                plugins.run()
            }
            
            access_log off;
            
            port_in_redirect off;
            
            set $balancer_ewma_score -1;
            set $proxy_upstream_name "upstream-default-backend";
            set $proxy_host          $proxy_upstream_name;
            set $pass_access_scheme  $scheme;
            
            set $pass_server_port    $server_port;
            
            set $best_http_host      $http_host;
            set $pass_port           $pass_server_port;
            
            set $proxy_alternative_upstream_name "";
            
            client_max_body_size                    1m;
            
            proxy_set_header Host                   $best_http_host;
            
            # Pass the extracted client certificate to the backend
            
            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            
            proxy_set_header                        Connection        $connection_upgrade;
            
            proxy_set_header X-Request-ID           $req_id;
            proxy_set_header X-Real-IP              $remote_addr;
            
            proxy_set_header X-Forwarded-For        $remote_addr;
            
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            
            proxy_set_header X-Scheme               $pass_access_scheme;
            
            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
            
            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";
            
            # Custom headers to proxied server
            
            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;
            
            proxy_buffering                         off;
            proxy_buffer_size                       5M;
            proxy_buffers                           4 5M;
            
            proxy_max_temp_file_size                1024M;
            
            proxy_request_buffering                 on;
            proxy_http_version                      1.1;
            
            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;
            
            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout;
            proxy_next_upstream_timeout             0;
            proxy_next_upstream_tries               3;
            
            proxy_pass http://upstream_balancer;
            
            proxy_redirect                          off;
            
        }
        
        # health checks in cloud providers require the use of port 80
        location /healthz {
            
            access_log off;
            return 200;
        }
        
        # this is required to avoid error if nginx is being monitored
        # with an external software (like sysdig)
        location /nginx_status {
            
            allow 127.0.0.1;
            
            allow ::1;
            
            deny all;
            
            access_log off;
            stub_status on;
        }
        
    }
    ## end server _
    
    ## start server dev-api
    server {
        server_name dev-api ;
        
        listen 80  ;
        listen [::]:80  ;
        listen 443  ssl http2 ;
        listen [::]:443  ssl http2 ;
        
        set $proxy_upstream_name "-";
        
        ssl_certificate_by_lua_block {
            certificate.call()
        }
        
        location / {
            
            set $namespace      "development";
            set $ingress_name   "app-ingress";
            set $service_name   "app-api-svc";
            set $service_port   "80";
            set $location_path  "/";
            
            rewrite_by_lua_block {
                lua_ingress.rewrite({
                    force_ssl_redirect = false,
                    ssl_redirect = true,
                    force_no_ssl_redirect = false,
                    use_port_in_redirects = false,
                })
                balancer.rewrite()
                plugins.run()
            }
            
            # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
            # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
            # other authentication method such as basic auth or external auth useless - all requests will be allowed.
            #access_by_lua_block {
            #}
            
            header_filter_by_lua_block {
                lua_ingress.header()
                plugins.run()
            }
            
            body_filter_by_lua_block {
            }
            
            log_by_lua_block {
                balancer.log()
                
                monitor.call()
                
                plugins.run()
            }
            
            port_in_redirect off;
            
            set $balancer_ewma_score -1;
            set $proxy_upstream_name "development-app-api-svc-80";
            set $proxy_host          $proxy_upstream_name;
            set $pass_access_scheme  $scheme;
            
            set $pass_server_port    $server_port;
            
            set $best_http_host      $http_host;
            set $pass_port           $pass_server_port;
            
            set $proxy_alternative_upstream_name "";
            
            client_max_body_size                    1024M;
            
            client_body_buffer_size                 50M;
            
            proxy_set_header Host                   $best_http_host;
            
            # Pass the extracted client certificate to the backend
            
            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            
            proxy_set_header                        Connection        $connection_upgrade;
            
            proxy_set_header X-Request-ID           $req_id;
            proxy_set_header X-Real-IP              $remote_addr;
            
            proxy_set_header X-Forwarded-For        $remote_addr;
            
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            
            proxy_set_header X-Scheme               $pass_access_scheme;
            
            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
            
            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";
            
            # Custom headers to proxied server
            
            proxy_connect_timeout                   50s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;
            
            proxy_buffering                         off;
            proxy_buffer_size                       5M;
            proxy_buffers                           8 5M;
            
            proxy_max_temp_file_size                1024M;
            
            proxy_request_buffering                 on;
            proxy_http_version                      1.1;
            
            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;
            
            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout;
            proxy_next_upstream_timeout             0;
            proxy_next_upstream_tries               3;
            
            proxy_pass http://upstream_balancer;
            
            proxy_redirect                          off;
            
        }
        
    }
    ## end server dev-api
    
.......

Update 2

Logs from the `kubectl logs -n nginx ingress-controller-XXX` command:

127.0.0.1 - - [16/Jul/2020:10:11:14 +0000] "POST [ingress/service endpoint] HTTP/2.0" 502 4 "https://[client-host-name]/[client-path]" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36 Edg/83.0.478.58" 9351 0.659 [service-name-80] [] 10.244.1.72:8014 0.652 502 7b7bdf8a9319e88c80ba3444372daf2d
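Read against the `log_format upstreaminfo` definition in the nginx.conf above, the tail of this line maps roughly as follows (field alignment is a best-effort reading of a partially garbled log line):

```nginx
# $status          -> 502               (what the client received)
# $upstream_addr   -> 10.244.1.72:8014  (the backend pod was reached)
# $upstream_status -> 502               (the value just before the request id:
#                                        the backend itself answered 502,
#                                        pointing at the pod, not the ingress)
```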

2 Answers

喻嘉泽
2023-03-14

You need to make sure the file size is set on the ingress controller; nginx will pick up the setting. Try this. For more information on the annotations, see https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: service-api-tls-ingress
  namespace: production
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: 8m
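Note that the `extensions/v1beta1` Ingress API was removed in Kubernetes v1.22; on current clusters the equivalent manifest looks roughly like the sketch below (same names as the answer above; the backend service name and path are assumptions):

```yaml
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: service-api-tls-ingress
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 8m
spec:
  ingressClassName: nginx   # replaces the kubernetes.io/ingress.class annotation
  rules:
    - http:
        paths:
          - path: /          # assumed path
            pathType: Prefix
            backend:
              service:
                name: service-api   # assumed backend service name
                port:
                  number: 80
```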
阮疏珂
2023-03-14

My initial problem was with nginx, but after I changed the limits it was forwarding the request to the service; I just wasn't checking the right logs. So @mWatney was right to double-check directly against the service/pod.

For anyone else, the issue I was seeing was related to running a .NET Core 3.1 application in a Linux Alpine container. In the application I was using a version of System.Drawing.Common, which causes a common exception when running under Linux, shown below:

System.TypeInitializationException: The type initializer for 'Gdip' threw an exception. --->

The solution was to add this to the Dockerfile:

RUN apk add libgdiplus-dev fontconfig ttf-dejavu --update-cache --repository http://dl-3.alpinelinux.org/alpine/edge/testing/
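In context, a minimal Dockerfile sketch (the aspnet 3.1 Alpine base image, paths, and app name are assumptions; only the `apk add` line comes from the fix described here):

```dockerfile
# Base image, paths, and MyApp.dll are assumptions for illustration.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-alpine
RUN apk add libgdiplus-dev fontconfig ttf-dejavu --update-cache \
    --repository http://dl-3.alpinelinux.org/alpine/edge/testing/
WORKDIR /app
COPY ./publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```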

This allows System.Drawing.Common to be used under Linux by adding the ability to load the shared library libgdiplus.

Credit here: https://github.com/dotnet/dotnet-docker/issues/618#issuecomment-467619498

The more permanent solution is to remove the dependency on System.Drawing.Common from the application entirely. Thanks @mWatney for the help, you got me on the right track.
