2016-09-24
2

I want to calculate the Euclidean distances between the rows of a data frame with 30,000 observations. A simple approach is the dist function (e.g. dist(data)). However, because my data frame is large, this takes too much time. I am looking for a faster way to compute the Euclidean distances.

Some of the rows contain missing values. I do not need the distances between pairs of rows that both contain missing values, and I also do not need the distances between pairs of complete rows. So I tried to exclude the combinations that I do not need. Unfortunately, my solution takes even more time:

# Some example data 
data <- data.frame(
    x1 = c(1, 22, NA, NA, 15, 7, 10, 8, NA, 5), 
    x2 = c(11, 2, 7, 15, 1, 17, 11, 18, 5, 5), 
    x3 = c(21, 5, 6, NA, 10, 22, 12, 2, 12, 3), 
    x4 = c(13, NA, NA, 20, 12, 5, 1, 8, 7, 14) 
) 


# Measure speed of dist() function 
start_time_dist <- Sys.time() 

# Calculate euclidean distance with dist() function for complete dataset 
dist_results <- dist(data) 

end_time_dist <- Sys.time() 
time_taken_dist <- end_time_dist - start_time_dist 


# Measure speed of my own loop 
start_time_own <- Sys.time() 

# Calculate euclidean distance with my own loop only for specific cases 

# # # 
# The following code should be faster! 
# # # 

data_cc <- data[complete.cases(data), ] 
data_miss <- data[!complete.cases(data), ] 

distance_list <- list() 

for(i in 1:nrow(data_miss)) { 

    distances <- numeric() 
    for(j in 1:nrow(data_cc)) { 
        distances <- c(distances, dist(rbind(data_miss[i, ], data_cc[j, ]), method = "euclidean")) 
    } 

    distance_list[[i]] <- distances 
} 

end_time_own <- Sys.time() 
time_taken_own <- end_time_own - start_time_own 


# Compare speed of both calculations 
time_taken_dist # 0.002001047 secs 
time_taken_own # 0.01562881 secs 

Is there a faster way to calculate the Euclidean distances that I need? Thank you very much!

+2

dist is implemented in C, so of course it is faster than an R for loop. You should implement your loop in Rcpp. – Roland
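Following up on Roland's comment, a minimal Rcpp sketch (my own illustration, not code from the thread; the function name cross_dist is made up) could replace the double loop. It mimics stats::dist()'s NA handling by skipping NA coordinates and rescaling the sum of squares by the proportion of columns actually used:

```r
library(Rcpp)

cppFunction('
NumericMatrix cross_dist(NumericMatrix a, NumericMatrix b) {
    // Euclidean distance between every row of a and every row of b,
    // skipping NA coordinates and rescaling like stats::dist()
    int n = a.nrow(), m = b.nrow(), p = a.ncol();
    NumericMatrix out(n, m);
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < m; ++j) {
            double s = 0.0;
            int used = 0;
            for (int k = 0; k < p; ++k) {
                double d = a(i, k) - b(j, k);
                if (!ISNAN(d)) { s += d * d; ++used; }
            }
            out(i, j) = used > 0 ? std::sqrt(s * p / used) : NA_REAL;
        }
    }
    return out;
}')
```

cross_dist(as.matrix(data_miss), as.matrix(data_cc)) would then return the whole missing-vs-complete distance matrix in one call instead of nrow(data_miss) * nrow(data_cc) calls to dist().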

+0

Thanks for the hint! I will try to figure out how that works. – JSP

Answer

3

I suggest you use parallel computing. Put all your code in one function and execute it in parallel.

By default R does all its computation in a single thread. You have to add parallel threads manually. Starting a cluster in R takes time, but if you have a big data frame, the performance of the main work will improve by a factor of (your_processors_number - 1).

These links may also help: How-to go parallel in R – basics + tips and A gentle introduction to parallel computing in R.

A good option is to split your work into smaller packets and compute each of them separately in its own thread. Create the threads only once, because that is time-consuming in R.

library(parallel) 
library(foreach) 
library(doParallel) 
# I am not sure that all required libraries are listed here; 
# try ??your_function to find out which library you need 

# Determine how many cores your computer has; 
# one core must always stay free for the system 
no_cores <- detectCores() - 1 

start.t.total <- Sys.time() 
print(start.t.total) 

# Start the parallel cluster (do this only once; it is time-consuming) 
cl <- makeCluster(no_cores, outfile = "mycalculation_debug.txt") 
registerDoParallel(cl) 

# Results from all threads are combined into out.df (a data frame) 
out.df <- foreach(p = 1:no_cores 
        ,.combine = rbind # data from different threads end up in one table 
        ,.packages = c()  # all packages that your function uses must be listed here 
        ,.inorder = TRUE) %dopar% # don't forget this directive 
        { 
         tryCatch({ 
          # 
          # call your function here and do what you want in parallel; 
          # the last expression is the result returned by this thread 
          # 
          print(Sys.time() - start.t.total) 
          packet_result # placeholder: replace with your function's result 
         }, error = function(e) paste0("The packet '", p, 
                  "' caused the error: '", e, "'")) 
        } 
stopCluster(cl) 
gc() # force R to free memory from the killed worker processes 
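Filled in for the distance problem from the question, the template above could look like this (a sketch; splitting the incomplete rows into one packet per core is my own choice, not part of the answer):

```r
library(parallel)
library(foreach)
library(doParallel)

# Example data from the question
data <- data.frame(
    x1 = c(1, 22, NA, NA, 15, 7, 10, 8, NA, 5),
    x2 = c(11, 2, 7, 15, 1, 17, 11, 18, 5, 5),
    x3 = c(21, 5, 6, NA, 10, 22, 12, 2, 12, 3),
    x4 = c(13, NA, NA, 20, 12, 5, 1, 8, 7, 14)
)
data_cc   <- data[complete.cases(data), ]
data_miss <- data[!complete.cases(data), ]

no_cores <- max(1, detectCores() - 1)
cl <- makeCluster(no_cores)
registerDoParallel(cl)

# Split the row indices of the incomplete rows into one packet per core
packets <- split(seq_len(nrow(data_miss)),
                 cut(seq_len(nrow(data_miss)), no_cores, labels = FALSE))

# Each thread computes its packet's distances to all complete rows
distance_list <- foreach(idx = packets, .combine = c) %dopar% {
    lapply(idx, function(i) {
        sapply(seq_len(nrow(data_cc)), function(j) {
            dist(rbind(data_miss[i, ], data_cc[j, ]), method = "euclidean")
        })
    })
}

stopCluster(cl)
```

The result has the same shape as the question's distance_list: one numeric vector of distances to all complete rows for each incomplete row.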
+0

Thank you very much for your answer, it helps me a lot! I did not even know this was possible in R and will try to implement your solution! – JSP

+0

I think the 'amap' package might help here; if you don't want to create your own function, check this [answer](http://stackoverflow.com/a/25767588/6327771) –
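For completeness, the amap route mentioned above needs the least code: its Dist() function is a multithreaded counterpart of stats::dist(). A minimal sketch, assuming `data` is the question's data frame (the nbproc value is an arbitrary choice, and I have not checked how Dist() treats the rows with NAs):

```r
library(amap)

# Dist() works like stats::dist() but spreads the row pairs
# over several threads via nbproc
dist_results <- Dist(data, method = "euclidean", nbproc = 3)
```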