
Spark/Scala loading Oracle tables into Hive

I am loading several Oracle tables into Hive, and it seems to be working, but two of the tables are failing with: IllegalArgumentException: requirement failed: Decimal precision 136 exceeds max precision 38. I checked the Oracle tables, and there is no column with decimal precision 136 in the source.

Here is the Spark/Scala code, run in spark-shell:

    val df_oracle = spark.read 
            .format("jdbc") 
            .option("url", "jdbc:oracle:thin:@hostname:port:SID") 
            .option("user", userName) 
            .option("password", passWord) 
            .option("driver", "oracle.jdbc.driver.OracleDriver") 
            .option("dbtable", inputTable) 
            .load() 

    df_oracle.repartition(10).write.format("orc").mode("overwrite").saveAsTable(outputTable) 
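
A quick way to narrow down which column triggers this, before the write, is to inspect the schema Spark inferred over JDBC. In Spark 2.x an Oracle NUMBER declared without precision/scale is typically mapped to decimal(38,10), and rows whose actual values need more digits than that fail at read time with exactly this error. A minimal diagnostic sketch, assuming the same df_oracle as above:

    // Print the schema Spark derived from the Oracle column metadata. 
    df_oracle.printSchema() 

    // List only the decimal columns with their declared precision and scale; 
    // the failing column is often an unbounded NUMBER mapped to decimal(38,10). 
    import org.apache.spark.sql.types.DecimalType 
    df_oracle.schema.fields.foreach { f => 
      f.dataType match { 
        case d: DecimalType => println(s"${f.name}: decimal(${d.precision},${d.scale})") 
        case _ => 
      } 
    } 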

Here is the full error message:

java.lang.IllegalArgumentException: requirement failed: Decimal precision 136 exceeds max precision 38 
    at scala.Predef$.require(Predef.scala:224) 
    at org.apache.spark.sql.types.Decimal.set(Decimal.scala:113) 
    at org.apache.spark.sql.types.Decimal$.apply(Decimal.scala:434) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3$$anonfun$9.apply(JdbcUtils.scala:337) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3$$anonfun$9.apply(JdbcUtils.scala:337) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$nullSafeConvert(JdbcUtils.scala:438) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3.apply(JdbcUtils.scala:337) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3.apply(JdbcUtils.scala:335) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:286) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:268) 
    at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73) 
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39) 
    at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) 
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:147) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53) 
    at org.apache.spark.scheduler.Task.run(Task.scala:99) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 

17/10/28 07:56:58 ERROR TaskSetManager: Task 0 in stage 36.0 failed 4 times; aborting job 
17/10/28 07:56:58 ERROR FileFormatWriter: Aborting job null. 
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 36.0 failed 4 times, most recent failure: Lost task 0.3 in stage 36.0 (TID 201, alphd1dx009.dlx.idc.ge.com, executor 1): java.lang.IllegalArgumentException: requirement failed: Decimal precision 136 exceeds max precision 38 
    at scala.Predef$.require(Predef.scala:224) 
    at org.apache.spark.sql.types.Decimal.set(Decimal.scala:113) 
    at org.apache.spark.sql.types.Decimal$.apply(Decimal.scala:434) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3$$anonfun$9.apply(JdbcUtils.scala:337) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3$$anonfun$9.apply(JdbcUtils.scala:337) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$nullSafeConvert(JdbcUtils.scala:438) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3.apply(JdbcUtils.scala:337) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3.apply(JdbcUtils.scala:335) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:286) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:268) 
    at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73) 
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39) 
    at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) 
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:147) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53) 
    at org.apache.spark.scheduler.Task.run(Task.scala:99) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 

Driver stacktrace: 
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422) 
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) 
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) 
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802) 
    at scala.Option.foreach(Option.scala:257) 
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594) 
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) 
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1928) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1941) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1961) 
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply$mcV$sp(FileFormatWriter.scala:127) 
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:121) 
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:121) 
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57) 
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:121) 
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:101) 
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58) 
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56) 
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) 
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135) 
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116) 
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92) 
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92) 
    at org.apache.spark.sql.execution.datasources.DataSource.writeInFileFormat(DataSource.scala:484) 
    at org.apache.spark.sql.execution.datasources.DataSource.writeAndRead(DataSource.scala:500) 
    at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:263) 
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58) 
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56) 
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) 
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135) 
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116) 
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92) 
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92) 
    at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:404) 
    at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:358) 
    at $line34.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$anonfun$1.apply(<console>:41) 
    at $line34.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$anonfun$1.apply(<console>:28) 
    at scala.collection.immutable.List.foreach(List.scala:381) 
    at $line34.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:28) 
    at $line34.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:54) 
    at $line34.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:56) 
    at $line34.$read$$iw$$iw$$iw$$iw$$iw.<init>(<console>:58) 
    at $line34.$read$$iw$$iw$$iw$$iw.<init>(<console>:60) 
    at $line34.$read$$iw$$iw$$iw.<init>(<console>:62) 
    at $line34.$read$$iw$$iw.<init>(<console>:64) 
    at $line34.$read$$iw.<init>(<console>:66) 
    at $line34.$read.<init>(<console>:68) 
    at $line34.$read$.<init>(<console>:72) 
    at $line34.$read$.<clinit>(<console>) 
    at $line34.$eval$.$print$lzycompute(<console>:7) 
    at $line34.$eval$.$print(<console>:6) 
    at $line34.$eval.$print(<console>) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786) 
    at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047) 
    at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638) 
    at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637) 
    at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31) 
    at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19) 
    at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:637) 
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:569) 
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825) 
    at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681) 
    at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395) 
    at scala.tools.nsc.interpreter.ILoop.loop(ILoop.scala:415) 
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:923) 
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909) 
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909) 
    at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97) 
    at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909) 
    at org.apache.spark.repl.Main$.doMain(Main.scala:69) 
    at org.apache.spark.repl.Main$.main(Main.scala:52) 
    at org.apache.spark.repl.Main.main(Main.scala) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:751) 
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187) 
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212) 
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126) 
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 
Caused by: java.lang.IllegalArgumentException: requirement failed: Decimal precision 136 exceeds max precision 38 
    at scala.Predef$.require(Predef.scala:224) 
    at org.apache.spark.sql.types.Decimal.set(Decimal.scala:113) 
    at org.apache.spark.sql.types.Decimal$.apply(Decimal.scala:434) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3$$anonfun$9.apply(JdbcUtils.scala:337) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3$$anonfun$9.apply(JdbcUtils.scala:337) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$nullSafeConvert(JdbcUtils.scala:438) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3.apply(JdbcUtils.scala:337) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3.apply(JdbcUtils.scala:335) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:286) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:268) 
    at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73) 
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39) 
    at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) 
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:147) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53) 
    at org.apache.spark.scheduler.Task.run(Task.scala:99) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 

Please let me know what I am missing.

Thanks.

Spark version 2.1.1.2.6.2.0-205


Which version of Spark? – N3WOS


Can you post the output of the 'desc table_name;' command executed in 'sqlplus' (for the "problematic" tables)? – MaxU


'Spark version 2.1.1.2.6.2.0-205' – user5319411

Answer


Workaround 1:

Create a view in the Oracle database:

create or replace view schema_name.v_table_name 
as 
select 
    cast(number_col1_name as number(20, 6)) as number_col1_name, /* problematic column */ 
    col2, 
    col3, 
    ... 
from table_name; 

and use this view instead of table_name.
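
Reading the view from Spark then works exactly as before; a minimal sketch, assuming the view created above:

    val df_oracle = spark.read 
            .format("jdbc") 
            .option("url", "jdbc:oracle:thin:@hostname:port:SID") 
            .option("user", userName) 
            .option("password", passWord) 
            .option("driver", "oracle.jdbc.driver.OracleDriver") 
            // point dbtable at the view instead of the original table 
            .option("dbtable", "schema_name.v_table_name") 
            .load() 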

Workaround 2: do the same on the fly, on the Spark side, using a query instead of the table name:
val query = """ 
(
select 
    cast(number_col1_name as number(20, 6)) as number_col1_name, 
    col2, 
    col3, 
    ... 
from table_name 
) as v_table_name 
""" 

val df_oracle = spark.read 
        .format("jdbc") 
        .option("url", "jdbc:oracle:thin:@hostname:port:SID") 
        .option("user",userName) 
        .option("password",passWord) 
        .option("driver", "oracle.jdbc.driver.OracleDriver") 
        .option("dbtable", query) 
        .load() 
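
As a side note beyond the original answer: Spark 2.3 and later also accept a customSchema option on JDBC reads, which overrides the inferred type without any SQL changes. It cannot exceed Spark's 38-digit limit, so it only helps when the actual values fit under a different precision/scale split; a sketch reusing the connection options above, with the same hypothetical column name:

    // Spark 2.3+ only (would not work on the asker's Spark 2.1.1): 
    val df_custom = spark.read 
            .format("jdbc") 
            .option("url", "jdbc:oracle:thin:@hostname:port:SID") 
            .option("user", userName) 
            .option("password", passWord) 
            .option("driver", "oracle.jdbc.driver.OracleDriver") 
            .option("dbtable", inputTable) 
            // override the mapped type for the problematic column only 
            .option("customSchema", "number_col1_name DECIMAL(38, 6)") 
            .load() 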

Thanks MaxU, I prefer Workaround 2. It needs some development work, but I can apply it to other tables with similar problems. Thank you! – user5319411


I tested it, and it looks like the workaround does not work in my case. It is a data problem: the Number column in Oracle has some huge values like '99900000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000', so I may have to decode the rows you mentioned, rather than casting. – user5319411
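
For values that genuinely exceed 38 significant digits, casting to any DECIMAL cannot work; one option (not from this thread) is to have Oracle return the column as text, so Spark maps it to a string and the 38-digit decimal limit never applies. A sketch in the style of Workaround 2, with the same hypothetical column name; note that Oracle's to_char may switch to scientific notation for extreme magnitudes unless a format model is supplied:

    // Read the oversized NUMBER column as text instead of a decimal. 
    val queryAsText = """ 
    ( 
    select 
        to_char(number_col1_name) as number_col1_name, 
        col2, 
        col3 
    from table_name 
    ) as v_table_name 
    """ 

    val df_text = spark.read 
            .format("jdbc") 
            .option("url", "jdbc:oracle:thin:@hostname:port:SID") 
            .option("user", userName) 
            .option("password", passWord) 
            .option("driver", "oracle.jdbc.driver.OracleDriver") 
            .option("dbtable", queryAsText) 
            .load()   // the column arrives as StringType; parse downstream as needed 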