2015-04-03 100 views

iOS: taking multiple screenshots

I have an NSURL that contains a video, and I want to capture a frame of that video ten times per second. My code can capture an image from my player, but I can't get it to capture 10 frames per second. I am trying something like the following, but it returns the same initial frame of the video, the correct number of times. Here is what I have:

AVAsset *asset = [AVAsset assetWithURL:videoUrl]; 
CMTime vidLength = asset.duration; 
float seconds = CMTimeGetSeconds(vidLength); 
int frameCount = 0; 
for (float i = 0; i < seconds;) { 
    AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:asset]; 
    CMTime time = CMTimeMake(i, 10); 
    CGImageRef imageRef = [imageGenerator copyCGImageAtTime:time actualTime:NULL error:NULL]; 
    UIImage *thumbnail = [UIImage imageWithCGImage:imageRef]; 
    CGImageRelease(imageRef); 

    NSString *filename = [NSString stringWithFormat:@"Documents/frame_%d.png", frameCount]; 
    NSString *pngPath = [NSHomeDirectory() stringByAppendingPathComponent:filename]; 
    [UIImagePNGRepresentation(thumbnail) writeToFile:pngPath atomically:YES]; 

    frameCount++; 
    i = i + 0.1; 
} 

But instead of getting the frame of my video at the current time, I only get the initial frame.

How can I get a frame of the video 10 times per second?

Thanks for your help :)

Answers


With CMTimeMake(A, B) you store a rational number, an exact fraction of A/B seconds, and the first parameter of this function takes an integer (int64_t) value. For a 20-second video, you would capture a frame only in the last iterations of the loop (at time ((int)19.9)/10 = 1.9 seconds). Use the CMTimeMakeWithSeconds(i, NSEC_PER_SEC) function to fix this timing problem.


You get the initial frame because you are trying to create the CMTime with a float value:

CMTime time = CMTimeMake(i, 10); 

Since the CMTimeMake function takes an int64_t value as its first parameter, your float value is truncated to an integer, and you get incorrect results.

Let's change your code a bit:

1) First, you need to calculate the total number of frames you need to get from the video. You wrote that you need 10 frames per second, so the code will be:

int requiredFramesCount = seconds * 10; 

2) Next, you need to find the value by which your CMTime value will be incremented at each step:

int64_t step = vidLength.value/requiredFramesCount; 

3) Finally, you need to set requestedTimeToleranceBefore and requestedTimeToleranceAfter to kCMTimeZero to get a frame at the exact time:

imageGenerator.requestedTimeToleranceAfter = kCMTimeZero; 
imageGenerator.requestedTimeToleranceBefore = kCMTimeZero; 

Here is how your code will look:

CMTime vidLength = asset.duration; 
float seconds = CMTimeGetSeconds(vidLength); 

// 10 frames per second across the whole duration.
int requiredFramesCount = seconds * 10; 
// How far to advance the CMTime value for each frame.
int64_t step = vidLength.value / requiredFramesCount; 

// Create the generator once and request frames at exact times.
AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:asset]; 
imageGenerator.requestedTimeToleranceAfter = kCMTimeZero; 
imageGenerator.requestedTimeToleranceBefore = kCMTimeZero; 

int64_t value = 0;   // int64_t, matching CMTimeMake's first parameter

for (int i = 0; i < requiredFramesCount; i++) { 
    CMTime time = CMTimeMake(value, vidLength.timescale); 

    CGImageRef imageRef = [imageGenerator copyCGImageAtTime:time actualTime:NULL error:NULL]; 
    if (imageRef == NULL) {   // skip frames the generator cannot decode
        value += step; 
        continue; 
    } 
    UIImage *thumbnail = [UIImage imageWithCGImage:imageRef]; 
    CGImageRelease(imageRef); 

    NSString *filename = [NSString stringWithFormat:@"Documents/frame_%d.png", i]; 
    NSString *pngPath = [NSHomeDirectory() stringByAppendingPathComponent:filename]; 
    [UIImagePNGRepresentation(thumbnail) writeToFile:pngPath atomically:YES]; 

    value += step; 
} 

The same problem solved with Swift: http://stackoverflow.com/questions/32286320/grab-frames-from-video-using-swift/32297251 – arpo 2015-08-30 14:13:39