2016-03-19

I'm trying to do some analysis of tweets from a tweet stream using the Twitter streaming API, but I can't get Spark to work with it.

As a first step I want to print the status messages from the stream and go from there.

My code looks like this:

public static void main(String[] args) { 
    SparkConf conf = new SparkConf().setAppName("TwitterStreamPrinter").setMaster("local"); 
    JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(2)); 

    Configuration twitterConf = new ConfigurationBuilder() 
        .setOAuthConsumerKey(consumerKey) 
        .setOAuthConsumerSecret(consumerSecret) 
        .setOAuthAccessToken(accessToken) 
        .setOAuthAccessTokenSecret(accessTokenSecret).build(); 
    OAuth2Authorization auth = new OAuth2Authorization(twitterConf); 
    JavaReceiverInputDStream<Status> twitterStream = TwitterUtils.createStream(ssc, auth); 

    JavaDStream<String> statuses = twitterStream.map(new Function<Status, String>() { 
        public String call(Status status) throws Exception { 
            return status.getText(); 
        } 
    }); 
    statuses.print(); 
}

It doesn't print anything other than the Spark logs. I initially thought it was the authorization, so I tried passing the authorization in several different ways, but maybe it's not the authorization.

I've looked at every example I could find on the web (although there aren't many), and this code looks like the standard way to fetch Twitter statuses, so why doesn't it print anything? I also tried System.out.println, but that didn't work either.

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 
16/03/19 12:02:23 INFO SparkContext: Running Spark version 1.6.1 
16/03/19 12:02:24 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
16/03/19 12:02:24 INFO SecurityManager: Changing view acls to: abcd 
16/03/19 12:02:24 INFO SecurityManager: Changing modify acls to: abcd 
16/03/19 12:02:24 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(abcd); users with modify permissions: Set(abcd) 
16/03/19 12:02:24 INFO Utils: Successfully started service 'sparkDriver' on port 50995. 
16/03/19 12:02:24 INFO Slf4jLogger: Slf4jLogger started 
16/03/19 12:02:25 INFO Remoting: Starting remoting 
16/03/19 12:02:25 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:51003] 
16/03/19 12:02:25 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 51003. 
16/03/19 12:02:25 INFO SparkEnv: Registering MapOutputTracker 
16/03/19 12:02:25 INFO SparkEnv: Registering BlockManagerMaster 
16/03/19 12:02:25 INFO DiskBlockManager: Created local directory at /private/var/folders/3b/wzflbsn146qgwdglbm_6ms3m0000hl/T/blockmgr-e3de07a6-0c62-47cf-9940-da18382c9241 
16/03/19 12:02:25 INFO MemoryStore: MemoryStore started with capacity 2.4 GB 
16/03/19 12:02:25 INFO SparkEnv: Registering OutputCommitCoordinator 
16/03/19 12:02:25 INFO Utils: Successfully started service 'SparkUI' on port 4040. 
16/03/19 12:02:25 INFO SparkUI: Started SparkUI at http://10.0.0.12:4040 
16/03/19 12:02:25 INFO Executor: Starting executor ID driver on host localhost 
16/03/19 12:02:25 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 51016. 
16/03/19 12:02:25 INFO NettyBlockTransferService: Server created on 51016 
16/03/19 12:02:25 INFO BlockManagerMaster: Trying to register BlockManager 
16/03/19 12:02:25 INFO BlockManagerMasterEndpoint: Registering block manager localhost:51016 with 2.4 GB RAM, BlockManagerId(driver, localhost, 51016) 
16/03/19 12:02:25 INFO BlockManagerMaster: Registered BlockManager 
16/03/19 12:02:25 WARN StreamingContext: spark.master should be set as local[n], n > 1 in local mode if you have receivers to get data, otherwise Spark jobs will not get resources to process the received data. 
16/03/19 12:02:26 INFO SparkContext: Invoking stop() from shutdown hook 
16/03/19 12:02:26 INFO SparkUI: Stopped Spark web UI at http://10.0.0.12:4040 
16/03/19 12:02:26 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped! 
16/03/19 12:02:26 INFO MemoryStore: MemoryStore cleared 
16/03/19 12:02:26 INFO BlockManager: BlockManager stopped 
16/03/19 12:02:26 INFO BlockManagerMaster: BlockManagerMaster stopped 
16/03/19 12:02:26 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped! 
16/03/19 12:02:26 INFO SparkContext: Successfully stopped SparkContext 
16/03/19 12:02:26 INFO ShutdownHookManager: Shutdown hook called 
16/03/19 12:02:26 INFO ShutdownHookManager: Deleting directory /private/var/folders/3b/..... 

Answer


It's all right there in your log:

16/03/19 12:02:25 WARN StreamingContext: spark.master should be set as local[n], n > 1 in local mode if you have receivers to get data, otherwise Spark jobs will not get resources to process the received data.

So the answer is to set the master to local[*].

Besides that, did you forget to start it?

jssc.start(); // Start the computation

jssc.awaitTermination();
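Putting both fixes together, a minimal sketch of the corrected driver might look like the following. It assumes the twitter4j and spark-streaming-twitter dependencies are on the classpath and that the four OAuth key strings (consumerKey etc.) are defined elsewhere, as in the question. Note also that Twitter's streaming endpoints historically required user-context OAuth 1.0a, so twitter4j's OAuthAuthorization (rather than the app-only OAuth2Authorization) may be what is needed here.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.twitter.TwitterUtils;

import twitter4j.Status;
import twitter4j.auth.OAuthAuthorization;
import twitter4j.conf.Configuration;
import twitter4j.conf.ConfigurationBuilder;

public class TwitterStreamPrinter {
    public static void main(String[] args) throws InterruptedException {
        // local[2]: at least one core for the receiver and one for processing,
        // otherwise the receiver occupies the only core and nothing gets processed.
        SparkConf conf = new SparkConf()
                .setAppName("TwitterStreamPrinter")
                .setMaster("local[2]");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(2));

        Configuration twitterConf = new ConfigurationBuilder()
                .setOAuthConsumerKey(consumerKey)             // assumed defined elsewhere
                .setOAuthConsumerSecret(consumerSecret)       // assumed defined elsewhere
                .setOAuthAccessToken(accessToken)             // assumed defined elsewhere
                .setOAuthAccessTokenSecret(accessTokenSecret) // assumed defined elsewhere
                .build();
        // Streaming requires user-context auth; OAuth2Authorization is app-only.
        OAuthAuthorization auth = new OAuthAuthorization(twitterConf);

        JavaReceiverInputDStream<Status> twitterStream =
                TwitterUtils.createStream(jssc, auth);
        JavaDStream<String> statuses = twitterStream.map(Status::getText);
        statuses.print();

        jssc.start();            // nothing runs until the context is started
        jssc.awaitTermination(); // block the driver until the stream is stopped
    }
}
```

Without start(), the DStream graph is only defined, never executed, which is why the driver sets up and then immediately invokes the shutdown hook, exactly as in the posted log.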

Changing to local[*] removed the WARN message, but it still doesn't print anything. – user2418202

Can you update the code and the logs then? How many cores do you have? –

Exactly the same log, just without the WARN message. As far as I know, the output shouldn't depend on the number of cores. – user2418202