2012-02-10 88 views
2

Wait in C++0x multithreading

I was playing with the new C++ standard. I wrote a test to observe the behavior of the scheduling algorithms and to see what happens to the threads. Given context-switch overhead, I expected the real wait time of a given thread to be a bit longer than the value specified with std::this_thread::sleep_for(). But surprisingly, it is sometimes even less than the sleep time! I can't figure out why this happens, or what I'm doing wrong...

#include <iostream> 
#include <thread> 
#include <random> 
#include <vector> 
#include <functional> 
#include <math.h> 
#include <unistd.h> 
#include <sys/time.h> 

void heavy_job()
{
    // here we're doing some kind of time-consuming job..
    int j = 0;
    while(j < 1000)
    {
        int* a = new int[100];
        for(int i = 0; i < 100; ++i)
            a[i] = i;
        delete[] a;
        for(double x = 0; x < 10000; x += 0.1)
            sqrt(x);
        ++j;
    }
    std::cout << "heavy job finished" << std::endl;
}

void light_job(const std::vector<int>& wait)
{
    struct timeval start, end;
    long utime, seconds, useconds;
    std::cout << std::showpos;
    for(std::vector<int>::const_iterator i = wait.begin();
        i != wait.end(); ++i)
    {
        gettimeofday(&start, NULL);
        std::this_thread::sleep_for(std::chrono::microseconds(*i));
        gettimeofday(&end, NULL);
        seconds = end.tv_sec - start.tv_sec;
        useconds = end.tv_usec - start.tv_usec;
        utime = ((seconds) * 1000 + useconds/1000.0);
        double delay = *i - utime*1000;
        std::cout << "delay: " << delay/1000.0 << std::endl;
    }
}

int main()
{
    std::vector<int> wait_times;
    std::uniform_int_distribution<unsigned int> unif;
    std::random_device rd;
    std::mt19937 engine(rd());
    std::function<unsigned int()> rnd = std::bind(unif, engine);
    for(int i = 0; i < 1000; ++i)
        wait_times.push_back(rnd()%100000+1); // random sleep time between 1 and 100000 µs
    std::thread heavy(heavy_job);
    std::thread light(light_job, wait_times);
    light.join();
    heavy.join();
    return 0;
}

Output on my Intel Core i5 machine:

..... 
delay: +0.713 
delay: +0.509 
delay: -0.008 // ! 
delay: -0.043 // !! 
delay: +0.409 
delay: +0.202 
delay: +0.077 
delay: -0.027 // ? 
delay: +0.108 
delay: +0.71 
delay: +0.498 
delay: +0.239 
delay: +0.838 
delay: -0.017 // also ! 
delay: +0.157 
+3

Do you think your timing code might be wrong? – 2012-02-10 16:10:40

Answers

3

Your timing code is causing integer truncation.

utime = ((seconds) * 1000 + useconds/1000.0); 
double delay = *i - utime*1000; 

Assume your wait time was 888888 microseconds and you slept for exactly that long. seconds will be 0 and useconds will be 888888. After dividing by 1000.0 you get 888.888. Then you add 0*1000, still yielding 888.888. That result is assigned to a long, leaving you with 888, and an apparent delay of 888.888 - 888 = 0.888 milliseconds.

You should update utime to actually store microseconds, so that you don't get truncation, and also because, as the name implies, its unit should be microseconds, just like useconds. For example:

long utime = seconds * 1000000 + useconds; 

You also have the delay calculation backwards. Ignoring the effects of truncation, it should be:

double delay = utime*1000 - *i; 
std::cout << "delay: " << delay/1000.0 << std::endl; 

The way you have it, the positive delays you are printing are actually the result of truncation, while the negative ones represent the actual delays.

+6

Hmm... and instead of mastering the error-prone 'timeval', use 'std::chrono' clocks to measure elapsed time! They're easy, and when you subtract them you get 'chrono::duration's, which are nearly impossible to get wrong. If you're converting time units by hand, you're doing it wrong. http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2661.htm#Clocks – 2012-02-10 16:45:07

+0

+1:I lol'd ... – 2012-02-10 16:45:32

+0

Oops... thanks! – 2012-02-10 17:29:16