2016-08-11

I am using API Manager 1.10.0 and DAS 3.0.1. Does WSO2 DAS not support Postgres?

I am trying to set up Postgres for DAS. There is no postgresql.sql script, so I used oracle.sql instead.

But I get the following exception:

[2016-08-11 15:06:25,079] ERROR {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} - Error in executing task: Don't know how to save StructField(max_request_time,DecimalType(30,0),true) to JDBC 
java.lang.RuntimeException: Don't know how to save StructField(max_request_time,DecimalType(30,0),true) to JDBC 
     at org.apache.spark.sql.jdbc.carbon.JDBCRelation.insert(JDBCRelation.scala:194) 
     at org.apache.spark.sql.sources.InsertIntoDataSource.run(commands.scala:53) 
     at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57) 
     at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57) 
     at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68) 
     at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88) 
     at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88) 
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) 
     at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87) 
     at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:950) 
     at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:950) 
     at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:144) 
     at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:128) 
     at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51) 
     at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:755) 
     at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:731) 
     at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:709) 
     at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201) 
     at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151) 
     at org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:59) 
     at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67) 
     at org.quartz.core.JobRunShell.run(JobRunShell.java:213) 
     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
     at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
     at java.lang.Thread.run(Thread.java:745) 
Caused by: java.lang.IllegalArgumentException: Don't know how to save StructField(max_request_time,DecimalType(30,0),true) to JDBC 
     at org.apache.spark.sql.jdbc.carbon.package$JDBCWriteDetails$$anonfun$schemaString$1$$anonfun$2.apply(carbon.scala:55) 
     at org.apache.spark.sql.jdbc.carbon.package$JDBCWriteDetails$$anonfun$schemaString$1$$anonfun$2.apply(carbon.scala:42) 
     at scala.Option.getOrElse(Option.scala:120) 
     at org.apache.spark.sql.jdbc.carbon.package$JDBCWriteDetails$$anonfun$schemaString$1.apply(carbon.scala:41) 
     at org.apache.spark.sql.jdbc.carbon.package$JDBCWriteDetails$$anonfun$schemaString$1.apply(carbon.scala:38) 
     at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33) 
     at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108) 
     at org.apache.spark.sql.jdbc.carbon.package$JDBCWriteDetails$.schemaString(carbon.scala:38) 
     at org.apache.spark.sql.jdbc.carbon.JDBCRelation.insert(JDBCRelation.scala:180) 
     ... 26 more 

The script that creates the API_REQUEST_SUMMARY table is:

CREATE TABLE API_REQUEST_SUMMARY ( 
api character varying(100) 
, api_version character varying(100) 
, version character varying(100) 
, apiPublisher character varying(100) 
, consumerKey character varying(100) 
, userId character varying(100) 
, context character varying(100) 
, max_request_time decimal(30) 
, total_request_count integer 
, hostName character varying(100) 
, year SMALLINT 
, month SMALLINT 
, day SMALLINT 
, time character varying(30) 
, PRIMARY KEY(api,api_version,apiPublisher,consumerKey,userId,context,hostName,time) 
); 

How can I make this work with Postgres?

Answer


I had to define the column max_request_time as bigint. The Carbon JDBC writer does not know how to map DecimalType(30,0) to a Postgres column type, which is what the "Don't know how to save StructField(...) to JDBC" error is reporting; bigint is a type it can map.
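A minimal sketch of the fix, assuming the table already exists in Postgres (otherwise, change decimal(30) to bigint in the CREATE TABLE script before running it):

```sql
-- Change max_request_time from decimal(30) to bigint so the
-- Spark/Carbon JDBC writer can map the field when inserting.
ALTER TABLE API_REQUEST_SUMMARY
    ALTER COLUMN max_request_time TYPE bigint;
```

Note that bigint holds values up to about 9.2 * 10^18, which is narrower than decimal(30); this is fine here because the column stores an epoch timestamp in milliseconds.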