java hadoop: FileReader VS InputStreamReader

I want to use my Java class on Hadoop HDFS, so now I have to rewrite my functions. The problem is that if I use an InputStreamReader, my application reads the wrong values.
Here is my code (it works as it stands; I want to switch to the commented-out part):
public static GeoTimeDataCenter[] readCentersArrayFromFile(int iteration) {
    Properties pro = new Properties();
    try {
        pro.load(GeoTimeDataHelper.class.getResourceAsStream("/config.properties"));
    } catch (Exception e) {
        e.printStackTrace();
    }
    int k = Integer.parseInt(pro.getProperty("k"));
    GeoTimeDataCenter[] Centers = new GeoTimeDataCenter[k];
    BufferedReader br;
    try {
        //Path pt = new Path(pro.getProperty("seed.file") + (iteration - 1));
        //FileSystem fs = FileSystem.get(new Configuration());
        //br = new BufferedReader(new InputStreamReader(fs.open(pt)));
        br = new BufferedReader(new FileReader(pro.getProperty("seed.file") + (iteration - 1)));
        for (int i = 0; i < Centers.length; i++) {
            String[] temp = null;
            try {
                temp = br.readLine().toString().split("\t");
                Centers[i] = new GeoTimeDataCenter(Integer.parseInt(temp[0]), new LatLong(Double.parseDouble(temp[1]), Double.parseDouble(temp[2])), Long.parseLong(temp[3]));
            } catch (Exception e) {
                temp = Seeding.randomSingleSeed().split("\t");
                Centers[i] = new GeoTimeDataCenter(i, new LatLong(Double.parseDouble(temp[0]), Double.parseDouble(temp[1])), DateToLong(temp[2]));
            }
        }
        br.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return Centers;
}
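For context, this is roughly how I would isolate the HDFS read, with an explicit UTF-8 charset on the InputStreamReader so the decoding does not depend on the platform default. The path below is only a placeholder; in my real code it comes from pro.getProperty("seed.file") + (iteration - 1):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSeedReaderSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder path; the real one is built from config.properties
        Path pt = new Path("/tmp/seed.file.0");
        FileSystem fs = FileSystem.get(new Configuration());
        // Explicit charset instead of the platform default used by FileReader
        try (BufferedReader br = new BufferedReader(
                new InputStreamReader(fs.open(pt), StandardCharsets.UTF_8))) {
            String line;
            while ((line = br.readLine()) != null) {
                // In my real code the fields are: id \t latitude \t longitude \t timestamp
                String[] fields = line.split("\t");
                System.out.println(Arrays.toString(fields));
            }
        }
    }
}

If the FileReader version and this sketch print different tokens for the same file, that would point to an encoding or line-ending difference rather than to the FileSystem API itself.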
Maybe someone knows what the problem is?

Best regards
What values do you expect? What values do you get? – Virmundi
It should read a seed file, for example with k = 4: four lines, each containing two double values and one long value separated by tabs. –