
Incoming buffer size cannot be set on the Tyrus client

I am trying to transfer a file larger than 4 MB over a WebSocket. I am using org.glassfish.tyrus:tyrus-server:1.13.1 and org.glassfish.tyrus:tyrus-container-grizzly-server:1.13.1 as dependencies.

By default the incoming buffer size is about 4 MB (see: 8.4. Incoming buffer size). The documentation clearly states what needs to be done to increase it, but I still cannot change the incoming buffer size. Here is the essence of what I am trying to do:

CountDownLatch messageLatch = new CountDownLatch(1); 

final ClientEndpointConfig cec = ClientEndpointConfig.Builder.create().build(); 
ClientManager client = ClientManager.createClient(); 

client.getProperties().put(ClientProperties.INCOMING_BUFFER_SIZE, new Integer(17_000_000)); 
Integer tyrusIncomingBufferSize = Utils.getProperty(client.getProperties(), ClientProperties.INCOMING_BUFFER_SIZE, Integer.class); 
System.out.println("tyrusIncomingBufferSize: " + tyrusIncomingBufferSize); // 17000000 

client.connectToServer(new Endpoint() {
    @Override
    public void onOpen(Session session, EndpointConfig config) {
        try {
            session.addMessageHandler(new MessageHandler.Whole<ByteBuffer>() {

                @Override
                public void onMessage(ByteBuffer message) {
                    System.out.println("Received message: " + message);
                    messageLatch.countDown();
                }
            });

            File pic = new File(TEST_PIC); // the size is more than 4M
            FileInputStream fileReader = new FileInputStream(pic);
            final long sizeOfScreenshotFile = pic.length();
            System.out.println(sizeOfScreenshotFile); // 4734639
            byte[] screenshotData = new byte[(int) sizeOfScreenshotFile];
            fileReader.read(screenshotData);
            fileReader.close();

            ByteBuffer bb = ByteBuffer.wrap(screenshotData);
            session.getBasicRemote().sendBinary(bb);

        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}, cec, new URI(URI));
messageLatch.await(100, TimeUnit.SECONDS); 

But I still get the same error:

17000000 
4734639 
V 30, 2017 1:39:58 PM org.glassfish.tyrus.core.TyrusEndpointWrapper onError 
WARNING: Unexpected error, closing connection. 
java.lang.IllegalArgumentException: Buffer overflow. 
    at org.glassfish.tyrus.core.Utils.appendBuffers(Utils.java:346) 
    at org.glassfish.tyrus.core.TyrusWebSocketEngine$TyrusReadHandler.handle(TyrusWebSocketEngine.java:523) 
    at org.glassfish.tyrus.container.grizzly.server.GrizzlyServerFilter$ProcessTask.execute(GrizzlyServerFilter.java:379) 
    at org.glassfish.tyrus.container.grizzly.client.TaskProcessor.processTask(TaskProcessor.java:114) 
    at org.glassfish.tyrus.container.grizzly.client.TaskProcessor.processTask(TaskProcessor.java:91) 
    at org.glassfish.tyrus.container.grizzly.server.GrizzlyServerFilter.handleRead(GrizzlyServerFilter.java:215) 
    at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119) 
    at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:284) 
    at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:201) 
    at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:133) 
    at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112) 
    at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77) 
    at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:526) 
    at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112) 
    at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117) 
    at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:56) 
    at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:137) 
    at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:591) 
    at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:571) 
    at java.lang.Thread.run(Thread.java:745) 

java.io.IOException: An established connection was aborted by the software in your host machine 
    at sun.nio.ch.SocketDispatcher.write0(Native Method) 
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:51) 
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) 
    at sun.nio.ch.IOUtil.write(IOUtil.java:51) 
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471) 
    at org.glassfish.grizzly.nio.transport.TCPNIOUtils.flushByteBuffer(TCPNIOUtils.java:149) 
    at org.glassfish.grizzly.nio.transport.TCPNIOUtils.writeSimpleBuffer(TCPNIOUtils.java:133) 
    at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.write0(TCPNIOAsyncQueueWriter.java:126) 
    at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.write0(TCPNIOAsyncQueueWriter.java:106) 
    at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.processAsync(AbstractNIOAsyncQueueWriter.java:344) 
    at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:108) 
    at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77) 
    at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:526) 
    at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112) 
    at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117) 
    at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.executeIoEvent(WorkerThreadIOStrategy.java:103) 
    at org.glassfish.grizzly.strategies.AbstractIOStrategy.executeIoEvent(AbstractIOStrategy.java:89) 
    at org.glassfish.grizzly.nio.SelectorRunner.iterateKeyEvents(SelectorRunner.java:415) 
    at org.glassfish.grizzly.nio.SelectorRunner.iterateKeys(SelectorRunner.java:384) 
    at org.glassfish.grizzly.nio.SelectorRunner.doSelect(SelectorRunner.java:348) 
    at org.glassfish.grizzly.nio.SelectorRunner.run(SelectorRunner.java:279) 
    at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:591) 
    at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:571) 
    at java.lang.Thread.run(Thread.java:745) 
-9.8634088E7 
V 30, 2017 1:41:38 PM org.glassfish.grizzly.http.server.NetworkListener shutdownNow 
INFO: Stopped listener bound to [0.0.0.0:8025] 
V 30, 2017 1:41:38 PM org.glassfish.tyrus.server.Server stop 
INFO: Websocket Server stopped. 

I tried debugging the Tyrus project and saw that the incomingBufferSize variable was actually still at its default value.

Does anyone have any idea how I can fix this?

Answers


You are setting the property on the client, but the exception is clearly being thrown on the server.

How are you starting the server? It almost looks like you are running standalone Grizzly; if you are, you can try starting the server with the TyrusWebSocketEngine#INCOMING_BUFFER_SIZE property set to 17_000_000, or whatever value you want.

(This can be done for a server created with the Server(Map, Class ...) constructor or one of the other constructors; see the Server class javadoc for more details.)
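
For reference, a minimal sketch of that approach, assuming a standalone Grizzly server started from a main method; MyServerEndpoint is a placeholder for your own @ServerEndpoint-annotated class, and the host, port, and buffer size simply mirror the values used elsewhere in this question:

import java.util.HashMap;
import java.util.Map;

import org.glassfish.tyrus.core.TyrusWebSocketEngine;
import org.glassfish.tyrus.server.Server;

public class LargeMessageServer {

    public static void main(String[] args) throws Exception {
        // TyrusWebSocketEngine.INCOMING_BUFFER_SIZE is the constant behind
        // the "org.glassfish.tyrus.incomingBufferSize" property key.
        Map<String, Object> properties = new HashMap<String, Object>();
        properties.put(TyrusWebSocketEngine.INCOMING_BUFFER_SIZE, 17_000_000);

        // MyServerEndpoint stands in for your own endpoint class.
        Server server = new Server("localhost", 8025, null, properties, MyServerEndpoint.class);
        try {
            server.start();
            System.out.println("Press Enter to stop the server.");
            System.in.read();
        } finally {
            server.stop();
        }
    }
}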


I have solved the problem, thanks for your reply. I passed a map with the new incoming buffer size into the server constructor, and that fixed my issue. Thank you very much! – dim


Solution (thanks to @Pavel Bucek):

On the server side:

Map<String, Object> properties = new HashMap<String, Object>();
properties.put("org.glassfish.tyrus.incomingBufferSize", 17000000);
Server server = new org.glassfish.tyrus.server.Server("localhost", 8025, null,
        properties, TestServer.class);