php curl localhost is slow when making concurrent requests

I ran into an interesting problem and I am not sure of the root cause. I have one server with two virtual hosts, A and B, on ports 80 and 81 respectively. On A I wrote a simple PHP script that looks like this:

<?php 

echo "from A server\n"; 

and another simple PHP script on B:

<?php 

echo "B server:\n"; 

// create curl resource 
$ch = curl_init(); 

// set url 
curl_setopt($ch, CURLOPT_URL, "localhost:81/a.php"); 

//return the transfer as a string 
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); 

// $output contains the output string 
$output = curl_exec($ch); 

// close curl resource to free up system resources 
curl_close($ch); 

echo $output; 

When I make concurrent requests with ab, I get results like the following:

ab -n 10 -c 5 http://192.168.10.173/b.php 
This is ApacheBench, Version 2.3 <$Revision: 1706008 $> 
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ 
Licensed to The Apache Software Foundation, http://www.apache.org/ 

Benchmarking 192.168.10.173 (be patient).....done 


Server Software:  nginx/1.10.0 
Server Hostname:  192.168.10.173 
Server Port:   80 

Document Path:   /b.php 
Document Length:  26 bytes 

Concurrency Level:  5 
Time taken for tests: 2.680 seconds 
Complete requests:  10 
Failed requests:  0 
Total transferred:  1720 bytes 
HTML transferred:  260 bytes 
Requests per second: 3.73 [#/sec] (mean) 
Time per request:  1340.197 [ms] (mean) 
Time per request:  268.039 [ms] (mean, across all concurrent requests) 
Transfer rate:   0.63 [Kbytes/sec] received 

Connection Times (ms) 
       min mean[+/-sd] median max 
Connect:  0 0 0.1  0  1 
Processing:  2 1339 1408.8 2676 2676 
Waiting:  2 1339 1408.6 2676 2676 
Total:   3 1340 1408.8 2676 2677 

Percentage of the requests served within a certain time (ms) 
    50% 2676 
    66% 2676 
    75% 2676 
    80% 2676 
    90% 2677 
    95% 2677 
    98% 2677 
    99% 2677 
100% 2677 (longest request) 

But making 1000 requests at concurrency level 1 is very fast:

$ ab -n 1000 -c 1 http://192.168.10.173/b.php 
This is ApacheBench, Version 2.3 <$Revision: 1706008 $> 
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ 
Licensed to The Apache Software Foundation, http://www.apache.org/ 

Benchmarking 192.168.10.173 (be patient) 
Completed 100 requests 
Completed 200 requests 
Completed 300 requests 
Completed 400 requests 
Completed 500 requests 
Completed 600 requests 
Completed 700 requests 
Completed 800 requests 
Completed 900 requests 
Completed 1000 requests 
Finished 1000 requests 


Server Software:  nginx/1.10.0 
Server Hostname:  192.168.10.173 
Server Port:   80 

Document Path:   /b.php 
Document Length:  26 bytes 

Concurrency Level:  1 
Time taken for tests: 1.659 seconds 
Complete requests:  1000 
Failed requests:  0 
Total transferred:  172000 bytes 
HTML transferred:  26000 bytes 
Requests per second: 602.86 [#/sec] (mean) 
Time per request:  1.659 [ms] (mean) 
Time per request:  1.659 [ms] (mean, across all concurrent requests) 
Transfer rate:   101.26 [Kbytes/sec] received 

Connection Times (ms) 
       min mean[+/-sd] median max 
Connect:  0 0 0.1  0  1 
Processing:  1 1 10.3  1  201 
Waiting:  1 1 10.3  1  201 
Total:   1 2 10.3  1  201 

Percentage of the requests served within a certain time (ms) 
    50%  1 
    66%  1 
    75%  1 
    80%  1 
    90%  1 
    95%  1 
    98%  1 
    99%  2 
100% 201 (longest request) 

Can anyone explain why this happens? I really want to know the root cause. Is this a curl problem? It does not look like a network bottleneck or an open-files problem, since the concurrency is only 5. By the way, I also tried the same thing with guzzlehttp, but the result is the same. I run ab on my laptop and the server is in the same local network. Besides, it has nothing to do with network bandwidth, because the request between host A and B is made against localhost.


I modified the code so we can test it more flexibly:

<?php 

require 'vendor/autoload.php'; 

use GuzzleHttp\Client; 

$opt = 1; 
$url = 'http://localhost:81/a.php'; 

switch ($opt) { 
    case 1: 
     // create curl resource 
     $ch = curl_init(); 

     // set url 
     curl_setopt($ch, CURLOPT_URL, $url); 

     //return the transfer as a string 
     curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); 

     // $output contains the output string 
     $output = curl_exec($ch); 

     curl_close($ch); 

     echo $output; 
     break; 
    case 2: 
     $client = new Client(); 
     $response = $client->request('GET', $url); 
     echo $response->getBody(); 
     break; 
    case 3: 
     echo file_get_contents($url); 
     break; 
    default: 
     echo "no opt"; 
} 

echo "app server:\n"; 

I tried file_get_contents as well, but there is no noticeable difference after switching to it. When the concurrency is 1, all of the methods work well; they all start to degrade as soon as the concurrency goes up.
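For reference, the same kind of concurrency can also be generated from the PHP CLI without ab. This is only a minimal sketch (the URL and the request count of 5 are assumptions for illustration, not part of the original test) that fires the requests in parallel with curl_multi:

<?php

// Minimal sketch: issue several requests to a.php in parallel with curl_multi.
$url = 'http://localhost:81/a.php';
$n   = 5; // illustrative concurrency level

$mh      = curl_multi_init();
$handles = [];

for ($i = 0; $i < $n; $i++) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

// Drive all transfers until every handle has finished.
do {
    $status = curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh);
    }
} while ($running && $status === CURLM_OK);

foreach ($handles as $ch) {
    echo curl_multi_getcontent($ch);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}

curl_multi_close($mh);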


I think I found something related to this problem, so I just posted another question, concurrent curl could not resolve host. That may be the root cause, but I have not got any answer for it yet.


After all this time I thought it must be related to name resolution. Here is a PHP script that works fine at concurrency level 500:

<?php 

require 'vendor/autoload.php'; 

use GuzzleHttp\Client; 

$opt = 1; 
$url = 'http://localhost:81/a.php'; 

switch ($opt) { 
    case 1: 
     // create curl resource 
     $ch = curl_init(); 

     // set url 
     curl_setopt($ch, CURLOPT_URL, $url); 

     //return the transfer as a string 
     curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); 

     curl_setopt($ch, CURLOPT_PROXY, 'localhost'); 

     // $output contains the output string 
     $output = curl_exec($ch); 

     curl_close($ch); 

     echo $output; 
     break; 
    case 2: 
     $client = new Client(); 
     $response = $client->request('GET', $url, ['proxy' => 'localhost']); 
     echo $response->getBody(); 
     break; 
    case 3: 
     echo file_get_contents($url); 
     break; 
    default: 
     echo "no opt"; 
} 

echo "app server:\n"; 

What really matters is curl_setopt($ch, CURLOPT_PROXY, 'localhost'); and $response = $client->request('GET', $url, ['proxy' => 'localhost']);. They tell curl to use localhost as a proxy.

Here are the ab test results:

ab -n 1000 -c 500 http://192.168.10.173/b.php 
This is ApacheBench, Version 2.3 <$Revision: 1528965 $> 
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ 
Licensed to The Apache Software Foundation, http://www.apache.org/ 

Benchmarking 192.168.10.173 (be patient) 
Completed 100 requests 
Completed 200 requests 
Completed 300 requests 
Completed 400 requests 
Completed 500 requests 
Completed 600 requests 
Completed 700 requests 
Completed 800 requests 
Completed 900 requests 
Completed 1000 requests 
Finished 1000 requests 


Server Software:  nginx/1.10.0 
Server Hostname:  192.168.10.173 
Server Port:   80 

Document Path:   /b.php 
Document Length:  182 bytes 

Concurrency Level:  500 
Time taken for tests: 0.251 seconds 
Complete requests:  1000 
Failed requests:  184 
    (Connect: 0, Receive: 0, Length: 184, Exceptions: 0) 
Non-2xx responses:  816 
Total transferred:  308960 bytes 
HTML transferred:  150720 bytes 
Requests per second: 3985.59 [#/sec] (mean) 
Time per request:  125.452 [ms] (mean) 
Time per request:  0.251 [ms] (mean, across all concurrent requests) 
Transfer rate:   1202.53 [Kbytes/sec] received 

Connection Times (ms) 
       min mean[+/-sd] median max 
Connect:  0 6 4.9  5  14 
Processing:  9 38 42.8  22  212 
Waiting:  8 38 42.9  22  212 
Total:   11 44 44.4  31  214 

Percentage of the requests served within a certain time (ms) 
    50%  31 
    66%  37 
    75%  37 
    80%  38 
    90% 122 
    95% 135 
    98% 207 
    99% 211 
100% 214 (longest request) 

But the question remains: when not using localhost as a proxy, why does name resolution fail at concurrency level 5?
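If the resolver really were the culprit, another way to take it out of the picture, instead of going through a proxy, would be to pin the hostname with CURLOPT_RESOLVE or simply to put 127.0.0.1 in the URL. A minimal sketch of that idea (this is an assumed workaround for illustration, not something I benchmarked above):

<?php

// Sketch: pre-map localhost:81 to 127.0.0.1 so curl never has to consult the resolver.
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://localhost:81/a.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_RESOLVE, ['localhost:81:127.0.0.1']); // format: host:port:address

$output = curl_exec($ch);
curl_close($ch);

echo $output;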


The virtual host setup is very simple and clean, and almost everything is in its default configuration. I do not use iptables on this server, and I have not configured anything special.

server { 
    listen 81 default_server; 
    listen [::]:81 default_server; 

    root /var/www/html; 

    index index.html index.htm index.nginx-debian.html; 

    server_name _; 

    location / { 
     try_files $uri $uri/ =404; 
    } 

    location ~ \.php$ { 
     include snippets/fastcgi-php.conf; 
     fastcgi_pass unix:/run/php/php7.0-fpm.sock; 
    } 
} 

Found something interesting! If you run another ab test right away, within about 3 seconds of the first one, the second ab test is very fast.

Without using localhost as a proxy:

ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 2.8 seconds to finish. 
ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 0.008 seconds only. 

Using localhost as a proxy:

ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 0.006 seconds. 
ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 0.006 seconds. 

I think this still means the problem is name resolution. But why?
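One way to check the name-resolution theory directly is to ask curl how long the lookup itself took. A small sketch (again only an illustration, not one of the benchmarks above) that prints the name-lookup time next to the total request time:

<?php

// Sketch: compare the DNS lookup time with the total request time.
$ch = curl_init('http://localhost:81/a.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);

printf(
    "namelookup: %.4fs, total: %.4fs\n",
    curl_getinfo($ch, CURLINFO_NAMELOOKUP_TIME),
    curl_getinfo($ch, CURLINFO_TOTAL_TIME)
);

curl_close($ch);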


Hypothesis: nginx is not listening on localhost:81

I tried adding listen 127.0.0.1:81; to the nginx config, and it made no difference.

I found that I made some mistakes when using curl with the proxy; that workaround does not actually work! Will update with more details later.


Solved. It has nothing to do with the proxy or anything else. The root cause is pm.start_servers in php-fpm's www.conf.


If you use a different concurrency level (like 2), or url fopen() instead (e.g. file_get_contents("http://localhost:81/a.php");), does that fundamentally change things? – BJBlack


@BJBlack At concurrency = 2, 3, 4, 5 the times are 0.018 s, 0.87 s, 1.75 s, and 2.42 s respectively. I suspect this is related to some low-level Linux behavior. – cwhsu


@cwhsu In that case you might want to squeeze in linux as a tag – Goose

Answer


OK, after so many days of trying to solve this problem, I finally found the cause. It is not name resolution. I can't believe it took this many days to track down the root cause, which is the number of pm.start_servers in php-fpm's www.conf. Initially I set pm.start_servers to 3, which is why the ab test against localhost always got worse beyond concurrency level 3. php-cli, on the other hand, does not have the problem of a limited number of PHP processes, so the php-cli tests always performed well. After increasing pm.start_servers to 5, the ab test results are as fast as php-cli. If this is the reason your php-fpm is slow, you should also consider changing pm.min_spare_servers, pm.max_spare_servers, pm.max_children, and any related settings.
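For reference, these settings live in the php-fpm pool configuration, on a setup like this usually /etc/php/7.0/fpm/pool.d/www.conf. The numbers below are only illustrative, not universal recommendations:

; pool settings that bound how many PHP-FPM workers can serve requests at once
pm = dynamic
pm.max_children = 20       ; hard upper limit of worker processes
pm.start_servers = 5       ; workers created at startup
pm.min_spare_servers = 5   ; keep at least this many idle workers
pm.max_spare_servers = 10  ; kill idle workers above this count

php-fpm has to be restarted (for example with sudo service php7.0-fpm restart) for pool changes to take effect.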