
We are setting up wso2am in cluster mode. Is there any documentation on building a wso2am-analytics cluster? I tried using wso2das, following https://docs.wso2.com/display/DAS310/Working+with+Product+Specific+Analytics+Profiles. How do I cluster wso2am-analytics-2.0.0?

But I got the following error:

TID: [-1234] [] [2016-12-09 15:00:00,101] ERROR {org.wso2.carbon.analytics.spark.core.AnalyticsTask} - Error while executing the scheduled task for the script: APIM_LATENCY_BREAKDOWN_STATS {org.wso2.carbon.analytics.spark.core.AnalyticsTask} 
org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException: Exception in executing query CREATE TEMPORARY TABLE APIMGT_PERHOUR_EXECUTION_TIME USING CarbonAnalytics OPTIONS(tableName "ORG_WSO2_APIMGT_STATISTICS_PERHOUREXECUTIONTIMES", schema " year INT -i, month INT -i, day INT -i, hour INT -i, context STRING, api_version STRING, api STRING, tenantDomain STRING, apiPublisher STRING, apiResponseTime DOUBLE, securityLatency DOUBLE, throttlingLatency DOUBLE, requestMediationLatency DOUBLE, responseMediationLatency DOUBLE, backendLatency DOUBLE, otherLatency DOUBLE, firstEventTime LONG, _timestamp LONG -i", primaryKeys "year, month, day, hour, context, api_version, tenantDomain, apiPublisher", incrementalProcessing "APIMGT_PERHOUR_EXECUTION_TIME, DAY", mergeSchema "false") 
     at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:764) 
     at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:721) 
     at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201) 
     at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151) 
     at org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:60) 
     at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67) 
     at org.quartz.core.JobRunShell.run(JobRunShell.java:213) 
     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
     at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
     at java.lang.Thread.run(Thread.java:745) 
Caused by: java.lang.RuntimeException: Unknown options : incrementalprocessing 
     at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider.checkParameters(AnalyticsRelationProvider.java:123) 
     at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider.setParameters(AnalyticsRelationProvider.java:113) 
     at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider.createRelation(AnalyticsRelationProvider.java:75) 
     at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider.createRelation(AnalyticsRelationProvider.java:45) 
     at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158) 
     at org.apache.spark.sql.execution.datasources.CreateTempTableUsing.run(ddl.scala:92) 
     at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58) 
     at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56) 
     at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70) 
     at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132) 
     at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130) 
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150) 
     at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130) 
     at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55) 
     at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55) 
     at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145) 
     at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130) 
     at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52) 
     at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817) 
     at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:760) 
     ... 11 more 


Any suggestions would be appreciated!

Answers


Just to be clear about what you are asking: are you using DAS, or the wso2am analytics distribution released with wso2am-2.0.0?

If you are using wso2am-2.0.0 together with wso2das-3.1.0, there is a problem in the Spark analytics scripts shipped with DAS.

It is caused by the use of incremental processing: the option incrementalProcessing should be changed to incrementalParams.

You can see this has already been fixed by WSO2 here, but the fix has not been released yet.

You can update the script from the DAS carbon console under Main > Batch Analytics > Scripts.
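
For example, the failing query from the log above should then read as follows (schema and primary keys unchanged from the stack trace, truncated here for brevity; if I read the fix correctly, only the option name changes):

     CREATE TEMPORARY TABLE APIMGT_PERHOUR_EXECUTION_TIME USING CarbonAnalytics
       OPTIONS(tableName "ORG_WSO2_APIMGT_STATISTICS_PERHOUREXECUTIONTIMES",
               schema "year INT -i, month INT -i, day INT -i, hour INT -i, ..., _timestamp LONG -i",
               primaryKeys "year, month, day, hour, context, api_version, tenantDomain, apiPublisher",
               incrementalParams "APIMGT_PERHOUR_EXECUTION_TIME, DAY",
               mergeSchema "false")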


Thanks a lot, I am using wso2am-analytics. Another question: is there any documentation on purging analytics data? We need to purge the analytics data but keep the aggregated data in the stat DB, per this document: https://docs.wso2.com/display/AM191/Publishing+API+Runtime+Statistics+Using+WSO2+DAS#PublishingAPIRuntimeStatisticsUsingWSO2DAS-PurgingData(optional) – Angus


What is your distribution and/or clustering pattern? Are you running wso2am-analytics as a single node for wso2am-2.0.0, or as a cluster? I am not sure about purging, sorry, but if I remember correctly the data is kept for two weeks by default and then purged automatically. –


wso2am-analytics is running in HA mode per this document: https://docs.wso2.com/display/CLUSTER44x/Minimum+High+Availability+Deployment+-+DAS+3.0.1 After I ran a load test, the 'WSO2_ANALYTICS_EVENT_STORE_DB' database grew by about 500 MB. So is there any configuration that purges the data, to keep the disk from filling up? Thanks a lot!! – Angus


Thanks for your response! We need to keep the aggregated data in the stat DB, so which tables should be purged? I found a link: http://www.rukspot.com/Publishing_APIM_1100_Runtime_Statistics_to_DAS.html It mentions purging just the following tables:

ORG_WSO2_APIMGT_STATISTICS_DESTINATION

ORG_WSO2_APIMGT_STATISTICS_FAULT

ORG_WSO2_APIMGT_STATISTICS_REQUEST

ORG_WSO2_APIMGT_STATISTICS_RESPONSE

ORG_WSO2_APIMGT_STATISTICS_WORKFLOW

ORG_WSO2_APIMGT_STATISTICS_THROTTLE
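
If it helps, scheduled purging of those raw event tables can be enabled in <DAS_HOME>/repository/conf/analytics/analytics-config.xml. A minimal sketch, assuming the DAS 3.1.0 config layout (verify the element names against the analytics-config.xml in your own pack):

     <analytics-data-purging>
       <!-- enable the scheduled purge task -->
       <purging-enable>true</purging-enable>
       <!-- run daily at midnight -->
       <cron-expression>0 0 0 * * ?</cron-expression>
       <!-- purge only the raw statistics tables listed above -->
       <purge-include-tables>
         <table>ORG_WSO2_APIMGT_STATISTICS_DESTINATION</table>
         <table>ORG_WSO2_APIMGT_STATISTICS_FAULT</table>
         <table>ORG_WSO2_APIMGT_STATISTICS_REQUEST</table>
         <table>ORG_WSO2_APIMGT_STATISTICS_RESPONSE</table>
         <table>ORG_WSO2_APIMGT_STATISTICS_WORKFLOW</table>
         <table>ORG_WSO2_APIMGT_STATISTICS_THROTTLE</table>
       </purge-include-tables>
       <!-- keep two weeks of raw events -->
       <data-retention-days>14</data-retention-days>
     </analytics-data-purging>

The summarized tables that the stat DB reads are left out of purge-include-tables, so the aggregated data should survive the purge.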