
I'm building an emotion recognition program and have managed to produce two different algorithms/feature sets to feed into sklearn's SVM. I obtain dense optical flow data, compress it into a matrix, and feed that into the SVM function, and I do the same with data from facial landmark tracking. How can I combine the two features/classifiers into one unified, better classifier?

Right now I have two different programs that do the same thing, recognizing emotion from facial movement, but each produces a different accuracy.

My goal now is to combine the dense optical flow and facial landmark classifications, uniting them into a single classifier that uses both kinds of information and achieves higher classification accuracy.

Basically, I am trying to rebuild a single classifier from the two matrices below, one containing the dense optical flow data and one containing the facial landmark tracking data. First, the dense optical flow matrix:

>>> main.shape 
(646, 403680) 
>>> main 
array([[ -1.18353125e-03, -2.41295085e-04, -1.88367767e-03, ..., 
     -5.19892928e-05, 8.53588153e-06, -3.90818786e-05], 
     [ 6.32877424e-02, -7.24349543e-02, 8.19472596e-02, ..., 
     -4.71765925e-05, 5.41217596e-05, -3.12083102e-05], 
     [ -1.66368652e-02, 2.50510368e-02, -6.03965335e-02, ..., 
     -9.85100851e-05, -7.69595645e-05, -7.09727174e-05], 
     ..., 
     [ -3.44874617e-03, 5.31123485e-03, -8.47499538e-03, ..., 
     -2.77953018e-06, -2.96417579e-06, -1.51305017e-06], 
     [ 3.24894954e-03, 5.05338283e-03, 3.91049543e-03, ..., 
     -3.23493354e-04, 1.30995919e-04, -3.06804082e-04], 
     [ 7.82454386e-03, 1.69946514e-02, 8.11014231e-03, ..., 
     -1.02751539e-03, 7.68289610e-05, -7.82517891e-04]], dtype=float32) 

Matrix structure of the dense optical flow data matrix: http://cgit.nutn.edu.tw:8080/cgit/PaperDL/LZJ_120826151743.PDF

Confusion Matrix for Dense Optical Flow: 
[[27 22 0 0] 
[ 0 57 1 0] 
[ 0 12 60 0] 
[ 0 9 3 68]] 
Accuracy: 80-90% range 

Confusion Matrix for Facial Landmarks: 
[[27 10 5 2] 
[ 7 44 5 3] 
[ 6 14 33 1] 
[ 1 13 1 60]] 
Accuracy: 60-72% range 

Matrix structure information for the facial landmark tracking data:

>>> main.shape 
(646, 17, 68, 2) 
>>> main 
array([[[[ 0.  , 0.  ], 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ], 
     ..., 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ]], 

     [[ -2.23606798, -1.10714872], 
     [ -2.23606798, -1.10714872], 
     [ 3.  , 1.  ], 
     ..., 
     [ 1.41421356, 0.78539816], 
     [ 1.41421356, 0.78539816], 
     [ 1.  , 0.  ]], 

     [[ 2.82842712, -0.78539816], 
     [ 2.23606798, -1.10714872], 
     [ 2.23606798, -1.10714872], 
     ..., 
     [ -1.  , -0.  ], 
     [ -1.  , -0.  ], 
     [ -1.  , -0.  ]], 

     ..., 
     [[ 2.  , 1.  ], 
     [ -2.23606798, 1.10714872], 
     [ -3.16227766, 1.24904577], 
     ..., 
     [ -1.  , -0.  ], 
     [ -1.41421356, 0.78539816], 
     [ -1.  , -0.  ]], 

     [[ -1.41421356, -0.78539816], 
     [ 1.  , 1.  ], 
     [ -1.41421356, -0.78539816], 
     ..., 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ]], 

     [[ 3.  , 1.  ], 
     [ 4.  , 1.  ], 
     [ 4.  , 1.  ], 
     ..., 
     [ 1.41421356, -0.78539816], 
     [ 1.  , 0.  ], 
     [ 1.  , 0.  ]]], 


     [[[ 0.  , 0.  ], 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ], 
     ..., 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ]], 

     [[ 1.  , 1.  ], 
     [ -1.41421356, -0.78539816], 
     [ -1.  , -0.  ], 
     ..., 
     [ 2.  , 0.  ], 
     [ 1.  , 0.  ], 
     [ -1.  , -0.  ]], 

     [[ 0.  , 0.  ], 
     [ 1.  , 1.  ], 
     [ 0.  , 0.  ], 
     ..., 
     [ -4.  , -0.  ], 
     [ -3.  , -0.  ], 
     [ -2.  , -0.  ]], 

     ..., 
     [[ -2.23606798, -1.10714872], 
     [ -2.23606798, -1.10714872], 
     [ 2.  , 1.  ], 
     ..., 
     [ 0.  , 0.  ], 
     [ 1.41421356, 0.78539816], 
     [ 1.41421356, 0.78539816]], 

     [[ 0.  , 0.  ], 
     [ 0.  , 0.  ], 
     [ -1.  , -0.  ], 
     ..., 
     [ 1.  , 1.  ], 
     [ 0.  , 0.  ], 
     [ -1.41421356, 0.78539816]], 

     [[ 1.  , 1.  ], 
     [ 1.  , 1.  ], 
     [ 1.  , 1.  ], 
     ..., 
     [ 1.  , 1.  ], 
     [ 0.  , 0.  ], 
     [ 1.  , 0.  ]]], 


     [[[ 0.  , 0.  ], 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ], 
     ..., 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ]], 

     [[ 3.16227766, 1.24904577], 
     [ 2.23606798, 1.10714872], 
     [ 2.23606798, 1.10714872], 
     ..., 
     [ -1.41421356, -0.78539816], 
     [ -1.  , -0.  ], 
     [ -1.41421356, 0.78539816]], 

     [[ -1.41421356, 0.78539816], 
     [ 0.  , 0.  ], 
     [ 1.41421356, 0.78539816], 
     ..., 
     [ -1.41421356, 0.78539816], 
     [ -1.  , -0.  ], 
     [ 0.  , 0.  ]], 

     ..., 
     [[ 1.  , 1.  ], 
     [ 1.  , 1.  ], 
     [ 0.  , 0.  ], 
     ..., 
     [ 1.  , 1.  ], 
     [ 1.  , 1.  ], 
     [ -1.41421356, 0.78539816]], 

     [[ 1.  , 1.  ], 
     [ 2.  , 1.  ], 
     [ 2.23606798, 1.10714872], 
     ..., 
     [ 1.  , 1.  ], 
     [ 1.  , 1.  ], 
     [ -1.41421356, -0.78539816]], 

     [[ 1.  , 1.  ], 
     [ 1.  , 1.  ], 
     [ 1.  , 1.  ], 
     ..., 
     [ -2.  , -0.  ], 
     [ -2.  , -0.  ], 
     [ -1.  , -0.  ]]], 


     ..., 
     [[[ 0.  , 0.  ], 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ], 
     ..., 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ]], 

     [[ 1.41421356, 0.78539816], 
     [ 1.41421356, 0.78539816], 
     [ 1.41421356, 0.78539816], 
     ..., 
     [ 1.  , 1.  ], 
     [ 0.  , 0.  ], 
     [ 1.  , 1.  ]], 

     [[ 5.  , 1.  ], 
     [ -4.12310563, 1.32581766], 
     [ -4.12310563, 1.32581766], 
     ..., 
     [ 1.  , 1.  ], 
     [ 0.  , 0.  ], 
     [ 1.  , 1.  ]], 

     ..., 
     [[ 3.16227766, 1.24904577], 
     [ 2.  , 1.  ], 
     [ 2.  , 1.  ], 
     ..., 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ]], 

     [[ -3.16227766, 1.24904577], 
     [ 2.  , 1.  ], 
     [ -2.23606798, 1.10714872], 
     ..., 
     [ 0.  , 0.  ], 
     [ 1.  , 0.  ], 
     [ 1.  , 0.  ]], 

     [[ 1.  , 1.  ], 
     [ 1.  , 1.  ], 
     [ 1.41421356, 0.78539816], 
     ..., 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ]]], 


     [[[ 0.  , 0.  ], 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ], 
     ..., 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ]], 

     [[ -2.23606798, 0.46364761], 
     [ -1.41421356, 0.78539816], 
     [ -2.23606798, 0.46364761], 
     ..., 
     [ 1.  , 0.  ], 
     [ 1.  , 0.  ], 
     [ 1.  , 1.  ]], 

     [[ -2.23606798, -0.46364761], 
     [ -1.41421356, -0.78539816], 
     [ 2.  , 1.  ], 
     ..., 
     [ 0.  , 0.  ], 
     [ 1.  , 0.  ], 
     [ 1.  , 0.  ]], 

     ..., 
     [[ 1.  , 0.  ], 
     [ 1.  , 1.  ], 
     [ -2.23606798, -1.10714872], 
     ..., 
     [ 19.02629759, 1.51821327], 
     [ 19.  , 1.  ], 
     [-19.10497317, -1.46591939]], 

     [[ 3.60555128, 0.98279372], 
     [ 3.60555128, 0.5880026 ], 
     [ 5.  , 0.64350111], 
     ..., 
     [ 7.28010989, -1.29249667], 
     [ 7.61577311, -1.16590454], 
     [ 8.06225775, -1.05165021]], 

     [[ -7.28010989, 1.29249667], 
     [ -5.  , 0.92729522], 
     [ -5.83095189, 0.5404195 ], 
     ..., 
     [ 20.09975124, 1.47112767], 
     [ 21.02379604, 1.52321322], 
     [-20.22374842, -1.42190638]]], 


     [[[ 0.  , 0.  ], 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ], 
     ..., 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ], 
     [ 0.  , 0.  ]], 

     [[ -1.41421356, 0.78539816], 
     [ -2.23606798, 1.10714872], 
     [ 2.  , 1.  ], 
     ..., 
     [ 1.  , 1.  ], 
     [ 1.  , 0.  ], 
     [ 2.23606798, -0.46364761]], 

     [[ 1.  , 0.  ], 
     [ 1.41421356, 0.78539816], 
     [ 1.  , 1.  ], 
     ..., 
     [ 0.  , 0.  ], 
     [ 1.  , 1.  ], 
     [ 0.  , 0.  ]], 

     ..., 
     [[ -1.41421356, -0.78539816], 
     [ 0.  , 0.  ], 
     [ 1.  , 1.  ], 
     ..., 
     [ 1.  , 0.  ], 
     [ 1.  , 0.  ], 
     [ 1.  , 0.  ]], 

     [[ 1.  , 1.  ], 
     [ -1.  , -0.  ], 
     [ 1.  , 1.  ], 
     ..., 
     [ -1.  , -0.  ], 
     [ 0.  , 0.  ], 
     [ -1.  , -0.  ]], 

     [[ 0.  , 0.  ], 
     [ 1.41421356, -0.78539816], 
     [ -1.  , -0.  ], 
     ..., 
     [ 1.  , 0.  ], 
     [ 0.  , 0.  ], 
     [ 1.  , 0.  ]]]]) 
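
Note that SVC expects a 2-D (n_samples, n_features) matrix, so this 4-D landmark array has to be flattened before training; that is what the reshape in the landmark classification code further below does. A minimal sketch (the name X_landmarks is just illustrative):

# main has shape (646, 17, 68, 2); SVC needs (n_samples, n_features),
# so collapse everything after the sample axis into one feature vector
X_landmarks = main.reshape(len(main), -1)   # -> (646, 17 * 68 * 2) == (646, 2312)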

My classification code for dense optical flow:

from time import time

from sklearn.svm import SVC
# train_test_split moved from sklearn.cross_validation to
# sklearn.model_selection in newer versions of scikit-learn
from sklearn.model_selection import train_test_split

features_train, features_test, labels_train, labels_test = train_test_split(main, target, test_size=0.4)

# Determine amount of time to train
t0 = time()
model = SVC(probability=True)
#model = SVC(kernel='poly')
#model = GaussianNB()

model.fit(features_train, labels_train)

print('training time:', round(time() - t0, 3), 's')

# Determine amount of time to predict
t1 = time()
pred = model.predict(features_test)
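
The accuracy figures and confusion matrices shown earlier can be computed from pred and labels_test with sklearn.metrics; a minimal sketch, reusing t1 from the snippet above:

from sklearn.metrics import accuracy_score, confusion_matrix

print('prediction time:', round(time() - t1, 3), 's')
print('accuracy:', accuracy_score(labels_test, pred))
print(confusion_matrix(labels_test, pred))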

My classification code for facial landmark tracking:

from time import time

from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# flatten the (646, 17, 68, 2) landmark array to (646, 2312) for the SVM
features_train, features_test, labels_train, labels_test = train_test_split(main.reshape(len(main), -1), target, test_size=0.4)

# Determine amount of time to train
t0 = time()
#model = SVC()
model = SVC(kernel='linear')

#model = GaussianNB()

model.fit(features_train, labels_train)

# Determine amount of time to predict
t1 = time()
pred = model.predict(features_test)

In sklearn (or machine learning in general), how do I combine these two feature sets in order to create a single, better classifier that takes both kinds of information into account when training and predicting?

Answer


You could build separate classifiers, as Daniel suggested. However, you might also consider concatenating your two data sets:

import numpy as np

# main_dense_optical is already 2-D: (n_samples, n_features)
main_face_landmark = main_face_landmark.reshape(len(main_face_landmark), -1)
main = np.concatenate([main_dense_optical, main_face_landmark], axis=1)
# Code for train/test, training, evaluating here
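
A fuller, runnable sketch of this concatenation (early fusion) idea, assuming main_dense_optical is the (646, 403680) optical flow matrix, main_face_landmark is the (646, 17, 68, 2) landmark array, and target holds the emotion labels; the StandardScaler step is not part of the answer above, but is often worthwhile when concatenated features live on very different scales:

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# flatten the landmark data and stack both feature sets side by side
X_flow = main_dense_optical                                        # (646, 403680)
X_marks = main_face_landmark.reshape(len(main_face_landmark), -1)  # (646, 2312)
X = np.concatenate([X_flow, X_marks], axis=1)                      # (646, 405992)

X_train, X_test, y_train, y_test = train_test_split(X, target, test_size=0.4)

# standardize every column so neither feature set dominates the SVM
# purely because its values are on a larger scale
scaler = StandardScaler().fit(X_train)
model = SVC(kernel='linear')
model.fit(scaler.transform(X_train), y_train)
pred = model.predict(scaler.transform(X_test))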

I've considered this and would normally do it, but I'm worried since some of the features are different from each other, – user3377126
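
If the worry is that the two feature sets are too different to mix into one matrix, the separate-classifier route mentioned in the answer sidesteps it: train one SVM per feature set and fuse their predicted probabilities (soft voting). A minimal sketch, assuming target, main_dense_optical, and main_face_landmark are NumPy arrays over the same 646 samples:

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# split indices once so both feature sets use the same train/test rows
idx_train, idx_test = train_test_split(np.arange(len(target)), test_size=0.4)

X_flow = main_dense_optical
X_marks = main_face_landmark.reshape(len(main_face_landmark), -1)

clf_flow = SVC(probability=True).fit(X_flow[idx_train], target[idx_train])
clf_marks = SVC(kernel='linear', probability=True).fit(X_marks[idx_train], target[idx_train])

# both classifiers are trained on the same labels, so their classes_ align;
# average the per-class probabilities and pick the most likely class
proba = (clf_flow.predict_proba(X_flow[idx_test]) +
         clf_marks.predict_proba(X_marks[idx_test])) / 2.0
pred = clf_flow.classes_[np.argmax(proba, axis=1)]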
