While trying to measure the RTT (round-trip time) between a UDP client and server, I ran into a very counter-intuitive result. With a packet size of 20 bytes the RTT is 4.0 ms, but when I increase the packet size to 15000 bytes the RTT drops to 2.8 ms. Why does this happen? Shouldn't the RTT increase as the packet size increases?
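For what it's worth, my back-of-the-envelope expectation (assuming a 100 Mbit/s link, which is only my guess about the network, not something I measured): an extra ~15000 bytes of payload should add roughly 15000 × 8 / 100,000,000 ≈ 1.2 ms of serialization delay in each direction, so I would expect the larger packets to make the RTT noticeably larger, not smaller.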
Below is the code for the UDP server. I run it with: java RTTServer 8080
import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class RTTServer {
    final static int BUFSIZE = 1024;

    public static void main(String[] args) {
        byte[] bufferReceive = new byte[BUFSIZE];
        DatagramPacket receivePacket = new DatagramPacket(bufferReceive, BUFSIZE);
        for (;;)
            // A new socket is opened (and closed) on the given port for every request.
            try (DatagramSocket aSocket = new DatagramSocket(Integer.parseInt(args[0]))) {
                aSocket.receive(receivePacket);
                // Echo the received datagram back to the sender.
                DatagramPacket sendPacket = new DatagramPacket(
                        receivePacket.getData(), receivePacket.getLength(),
                        receivePacket.getAddress(), receivePacket.getPort());
                aSocket.send(sendPacket);
            } catch (Exception e) {
                System.out.println("Socket: " + e.getMessage());
            }
    }
}
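For reference, this is how I understand the echo loop could be written with the socket bound once before the loop instead of re-opened for every datagram. I have not re-run the measurement with this variant, and the class name below is just one I made up for the sketch:

import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class RTTServerSingleSocket {
    final static int BUFSIZE = 1024;

    public static void main(String[] args) throws Exception {
        // Bind the UDP socket once and reuse it for every request.
        try (DatagramSocket aSocket = new DatagramSocket(Integer.parseInt(args[0]))) {
            byte[] buffer = new byte[BUFSIZE];
            DatagramPacket receivePacket = new DatagramPacket(buffer, BUFSIZE);
            for (;;) {
                aSocket.receive(receivePacket);
                // Echo the datagram back to whoever sent it.
                DatagramPacket sendPacket = new DatagramPacket(
                        receivePacket.getData(), receivePacket.getLength(),
                        receivePacket.getAddress(), receivePacket.getPort());
                aSocket.send(sendPacket);
            }
        }
    }
}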
Below is the code for the UDP client. I run it with: java RTTClient 192.168.1.20 8080 15000
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.UnknownHostException;

public class RTTClient {
    final static int BUFSIZE = 1024;
    final static int COUNT = 1000;

    public static void main(String[] args) throws UnknownHostException {
        InetAddress aHost = InetAddress.getByName(args[0]);
        // The payload is the bytes of the third command-line argument, reused for every request.
        byte[] dataArray = args[2].getBytes();
        byte[] bufferReceive = new byte[BUFSIZE];
        DatagramPacket requestPacket = new DatagramPacket(
                dataArray, dataArray.length, aHost, Integer.parseInt(args[1]));
        DatagramPacket responsePacket = new DatagramPacket(bufferReceive, BUFSIZE);

        long rtts = 0;
        for (int i = 0; i < COUNT; i++) {
            // A fresh socket is created for every request/response pair.
            try (DatagramSocket aSocket = new DatagramSocket()) {
                long start = System.currentTimeMillis();
                aSocket.send(requestPacket);
                aSocket.receive(responsePacket);
                System.out.println(i);
                rtts += System.currentTimeMillis() - start;
            } catch (Exception e) {
                System.out.println("Socket: " + e.getMessage());
            }
        }
        System.out.println("RTT = " + (double) rtts / (double) COUNT);
    }
}
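One thing I am not sure about is whether the datagrams actually have the size I pass on the command line: args[2].getBytes() gives the bytes of the string "15000" (5 bytes), not a 15000-byte buffer. Below is a small sketch I could use to check what length would really be sent; the class and variable names are my own and are not part of the programs above. I would run it as: java PayloadSizeCheck 192.168.1.20 8080 15000

import java.net.DatagramPacket;
import java.net.InetAddress;

public class PayloadSizeCheck {
    public static void main(String[] args) throws Exception {
        InetAddress aHost = InetAddress.getByName(args[0]);
        int port = Integer.parseInt(args[1]);

        // What the client above sends: the bytes of the string "15000" itself.
        byte[] fromString = args[2].getBytes();

        // What I meant to send: a buffer whose length is the number on the command line.
        byte[] sized = new byte[Integer.parseInt(args[2])];

        DatagramPacket current = new DatagramPacket(fromString, fromString.length, aHost, port);
        DatagramPacket intended = new DatagramPacket(sized, sized.length, aHost, port);

        System.out.println("datagram length with getBytes():  " + current.getLength() + " bytes");
        System.out.println("datagram length with new byte[n]: " + intended.getLength() + " bytes");
    }
}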