Pig - trying to avoid CROSS
I will refer to my previous question. Basically, I have these two datasets. Using the venue names, I want to output how many times each venue is mentioned in the tweet messages. The answer I got works for a small dataset, but imagine I have 10,000 venues and 20,000 tweets: CROSS would give me a relation with 200M records, which is quite a lot. The simple datasets were presented in the previous question, and the Pig script I am using now follows the suggestion in that answer. I am looking for ideas on how to do this counting without the CROSS product. Thanks!
REGISTER piggybank.jar;
venues = LOAD 'venues_mid' USING org.apache.hcatalog.pig.HCatLoader();
tweets = LOAD 'tweets_mid' USING org.apache.hcatalog.pig.HCatLoader();
tweetsReduced = FOREACH tweets GENERATE text;
venuesReduced = FOREACH venues GENERATE name;
/* Create the Cartesian product of venues and tweets */
crossed = CROSS venuesReduced, tweetsReduced;
/* For each record, create a regex like '.*name.*' */
regexes = FOREACH crossed GENERATE *, CONCAT('.*', CONCAT(venuesReduced::name, '.*')) AS regex;
/* Keep tweet-venue pairs where the tweet contains the venue name */
venueMentions = FILTER regexes BY text MATCHES regex;
venueCounts = FOREACH (GROUP venueMentions BY venuesReduced::name) GENERATE group, COUNT($1) as counter;
venueCountsOrdered = ORDER venueCounts BY counter;
STORE venueCountsOrdered INTO 'Pig_output/venueCountsOrdered_mid.csv'
USING org.apache.pig.piggybank.storage.CSVExcelStorage(',', 'NO_MULTILINE', 'WINDOWS');
tweets.csv
created_at,text,location
Sat Nov 03 13:31:07 +0000 2012, Sugar rush dfsudfhsu, Glasgow
Sat Nov 03 13:31:07 +0000 2012, Sugar rush ;dfsosjfd HAHAHHAHA, London
Sat Apr 25 04:08:47 +0000 2009, at Sugar rush dfjiushfudshf, Glasgow
Thu Feb 07 21:32:21 +0000 2013, Shell gggg, Glasgow
Tue Oct 30 17:34:41 +0000 2012, Shell dsiodshfdsf, Edinburgh
Sun Mar 03 14:37:14 +0000 2013, Shell wowowoo, Glasgow
Mon Jun 18 07:57:23 +0000 2012, Shell dsfdsfds, Glasgow
Tue Jun 25 16:52:33 +0000 2013, Shell dsfdsfdsfdsf, Glasgow
venues.csv
city,name
Glasgow, Sugar rush
Glasgow, ABC
Glasgow, University of Glasgow
Edinburgh, Shell
London, Big Ben
Yes, but that way I would only count an occurrence if the venue is mentioned in a tweet posted in the same city, which should not be a requirement for this. –
Edited the answer - added "Another attempt:" –
Ok, looks like this will be my first UDF :) Your solution sounds reasonable. However, if anyone comes up with a solution without a UDF, feel free to share. –
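Following up on the UDF idea from the comments, here is a minimal sketch of the matching logic such a UDF could implement, written in Python purely for illustration (a real Pig UDF would typically be Java and would load the venue list from a side file, e.g. via the distributed cache). The venue names and tweets below are the sample data from the question; since the venue list is small enough to hold in memory, each tweet is scanned once against all names, avoiding the cross product entirely:

```python
from collections import Counter

# Sample data from the question (names only; city/date columns dropped).
venues = ["Sugar rush", "ABC", "University of Glasgow", "Shell", "Big Ben"]

tweets = [
    "Sugar rush dfsudfhsu",
    "Sugar rush ;dfsosjfd HAHAHHAHA",
    "at Sugar rush dfjiushfudshf",
    "Shell gggg",
    "Shell dsiodshfdsf",
]

def venue_mentions(text, venues):
    """Return the venue names mentioned in one tweet.

    Substring containment mirrors the '.*name.*' regex used in the
    Pig script, without building a tweets-x-venues relation first.
    """
    return [name for name in venues if name in text]

# One pass over the tweets; cost is O(tweets * venues) comparisons,
# but only matching pairs are ever materialized.
counts = Counter()
for text in tweets:
    counts.update(venue_mentions(text, venues))

print(dict(counts))  # → {'Sugar rush': 3, 'Shell': 2}
```

In Pig terms, the UDF would take a tweet's `text` and return a bag of matched venue names; a `FOREACH ... GENERATE FLATTEN(...)` over the tweets followed by a `GROUP`/`COUNT` would then produce the same `venueCounts` relation as the CROSS-based script.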