2017-05-31
I am using the concurrentAppend method from the Core class of the Azure Data Lake Store SDK to write data to Azure Data Lake. Below are the code and the exception I get. The exception occurs only occasionally, not on every call. Could anyone guide me?





    public void invoke(String value) {
        BitfinexSingletonClass obj = null;
        try {
            obj = BitfinexSingletonClass.getInstance();
        } catch (IOException e1) {
            slf4jLogger.info(e1.getMessage());
        }
        ADLStoreClient client = obj.getADLStoreClient();
        byte[] myBuffer = (value + "\n").getBytes();

        RequestOptions opts = new RequestOptions();
        opts.retryPolicy = new ExponentialBackoffPolicy();
        OperationResponse resp = new OperationResponse();

        slf4jLogger.info("" + value);
        slf4jLogger.info("Writing BITFINEX_DSHBTC_ORDER data to Azure Data Lake...");
        Core.concurrentAppend(BITFINEX_DSHBTC_ORDER, myBuffer, 0, myBuffer.length, true, client, opts, resp);
        slf4jLogger.info("BITFINEX_DSHBTC_ORDER data successfully written to Azure Data Lake");
        if (!resp.successful) {
            try {
                throw client.getExceptionFromResponse(resp, "BITFINEX_DSHBTC_ORDER data is not written to ADL");
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
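Two small hazards in the snippet above are worth noting independently of the network error: if `BitfinexSingletonClass.getInstance()` throws, `obj` stays `null` and the very next line raises a `NullPointerException`, and `getBytes()` with no argument uses the platform default charset, which varies across hosts. A minimal, SDK-free sketch of the safer pattern (class and method names here are illustrative, not from the question's codebase):

```java
import java.nio.charset.StandardCharsets;

public class SafeEncode {
    // Convert a record to bytes with an explicit charset instead of the
    // platform default, and fail fast on a missing value rather than
    // deferring a NullPointerException to a later line.
    static byte[] toLineBytes(String value) {
        if (value == null) {
            throw new IllegalArgumentException("value must not be null");
        }
        return (value + "\n").getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] buf = toLineBytes("tick");
        System.out.println(buf.length); // "tick\n" is 5 bytes in UTF-8
    }
}
```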

com.microsoft.azure.datalake.store.ADLException: Error in operation CONCURRENTAPPEND: java.net.SocketTimeoutException: Read timed out
Last encountered exception thrown after 5 tries [java.net.UnknownHostException, java.net.UnknownHostException, java.net.UnknownHostException, java.net.SocketTimeoutException, java.net.SocketTimeoutException]
    at com.microsoft.azure.datalake.store.ADLStoreClient.getExceptionFromResponse(ADLStoreClient.java:1124)
    at co.biz.yobit.sink.YobitLtcbtcTickerADLSink.invoke(YobitLtcbtcTickerADLSink.java:41)
    at org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:38)
    at org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:185)
    at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:63)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:261)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:665)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
    at java.net.SocketInputStream.read(SocketInputStream.java:171)
    at java.net.SocketInputStream.read(SocketInputStream.java:141)
    at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
    at sun.security.ssl.InputRecord.read(InputRecord.java:503)
    at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973)
    at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:930)
    at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)
    at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1569)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
    at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
    at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:338)
    at com.microsoft.azure.datalake.store.HttpTransport.makeSingleCall(HttpTransport.java:292)
    at com.microsoft.azure.datalake.store.HttpTransport.makeCall(HttpTransport.java:91)
    at com.microsoft.azure.datalake.store.Core.concurrentAppend(Core.java:210)

I get this exception when my Flink job writes data to Azure Data Lake. Could anyone guide me on this?

Maybe you can use a debugger and find out which host is unknown. To me it looks like a misconfiguration or a network address/port problem. – twalthr

The above error is usually the result of unreliable network conditions between the host running the code and the Azure Data Lake Store. Where are you running the code? Is it on an Azure VM or outside Azure? –

@Amit Kulkarni: I am running this code on a Flink standalone cluster on an on-premises VM at my company. – Dhinesh

Answer

The above error is usually the result of unreliable network conditions between the host running the code and the Azure Data Lake Store. As confirmed in the comments, the host is running in a different region, connected over a WAN. These errors are therefore to be expected, and you should retry when you see them.

The recommendation is to run the Flink cluster on VMs in the same region as the Azure Data Lake Store account. In that configuration you should not see these network errors.
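Since these timeouts are transient, the usual remedy is to retry the failed append with exponential backoff. The SDK's `ExponentialBackoffPolicy` already retries inside a single `concurrentAppend` call; an outer, application-level retry loop is an additional safeguard. A self-contained sketch of the pattern (the delays, attempt counts, and `Action` interface are illustrative assumptions, not SDK defaults):

```java
public class BackoffRetry {
    // A retryable unit of work, e.g. one concurrentAppend call
    // that reports success via resp.successful.
    interface Action { boolean attempt(); }

    // Retry up to maxAttempts times, doubling the delay after each failure.
    static boolean retryWithBackoff(Action action, int maxAttempts, long baseDelayMs) {
        long delay = baseDelayMs;
        for (int i = 1; i <= maxAttempts; i++) {
            if (action.attempt()) {
                return true;                        // append succeeded
            }
            if (i < maxAttempts) {
                try {
                    Thread.sleep(delay);            // back off before retrying
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;                   // give up if interrupted
                }
                delay *= 2;                         // base, 2x, 4x, ...
            }
        }
        return false;                               // exhausted all attempts
    }

    public static void main(String[] args) {
        // Simulate a flaky append that succeeds on the third try.
        int[] calls = {0};
        boolean ok = retryWithBackoff(() -> ++calls[0] >= 3, 5, 1);
        System.out.println(ok + " after " + calls[0] + " attempts");
    }
}
```

In the question's sink, the body of the retried action would be the `Core.concurrentAppend(...)` call followed by a check of `resp.successful`.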
