
Is there a way to get a motion blur effect on a UIImage? I've tried GPUImage, Filtrr, and iOS Core Image, but all of these only do regular blurs, not motion blur.

I've also tried UIImage-DSP, but its motion blur is barely visible. I need something much stronger.


Take a look at http://stackoverflow.com/questions/7475610/how-to-do-a-motion-blur-effect-on-an-uiimageview-in-monotouch – howanghk


I've tried UIImage-DSP, and its motion blur effect is barely visible. I need something much stronger. – YogevSitton

Answer


As I mentioned in my comment on the repository, I just added motion and zoom blurs to GPUImage, in the form of the GPUImageMotionBlurFilter and GPUImageZoomBlurFilter classes. Here's an example of the zoom blur:

(Image: GPUImage zoom blur example)
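
For reference, a minimal usage sketch in Swift, assuming GPUImage's imageByFilteringImage: convenience method and the blurSize, blurAngle, and blurCenter properties these classes expose (the parameter values here are placeholders):

import UIKit
import GPUImage

// Apply the directional motion blur to a UIImage.
func motionBlurred(_ input: UIImage) -> UIImage? {
    let filter = GPUImageMotionBlurFilter()
    filter.blurSize = 2.5       // blur extent, in texel multiples (placeholder value)
    filter.blurAngle = 45.0     // direction of motion, in degrees (placeholder value)
    return filter.image(byFilteringImage: input)
}

// Apply the zoom blur, radiating out from a normalized center point.
func zoomBlurred(_ input: UIImage) -> UIImage? {
    let filter = GPUImageZoomBlurFilter()
    filter.blurSize = 1.5                          // placeholder value
    filter.blurCenter = CGPoint(x: 0.5, y: 0.5)    // center of the image
    return filter.image(byFilteringImage: input)
}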

For the motion blur, I do a 9-hit Gaussian blur in a single direction, implemented with the following vertex and fragment shaders.

Vertex:

attribute vec4 position; 
attribute vec4 inputTextureCoordinate; 

uniform highp vec2 directionalTexelStep; 

varying vec2 textureCoordinate; 
varying vec2 oneStepBackTextureCoordinate; 
varying vec2 twoStepsBackTextureCoordinate; 
varying vec2 threeStepsBackTextureCoordinate; 
varying vec2 fourStepsBackTextureCoordinate; 
varying vec2 oneStepForwardTextureCoordinate; 
varying vec2 twoStepsForwardTextureCoordinate; 
varying vec2 threeStepsForwardTextureCoordinate; 
varying vec2 fourStepsForwardTextureCoordinate; 

void main() 
{ 
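    // Compute all nine sampling coordinates here; the GPU interpolates these
    // varyings, so the fragment shader's texture reads are non-dependent.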
    gl_Position = position; 

    textureCoordinate = inputTextureCoordinate.xy; 
    oneStepBackTextureCoordinate = inputTextureCoordinate.xy - directionalTexelStep; 
    twoStepsBackTextureCoordinate = inputTextureCoordinate.xy - 2.0 * directionalTexelStep; 
    threeStepsBackTextureCoordinate = inputTextureCoordinate.xy - 3.0 * directionalTexelStep; 
    fourStepsBackTextureCoordinate = inputTextureCoordinate.xy - 4.0 * directionalTexelStep; 
    oneStepForwardTextureCoordinate = inputTextureCoordinate.xy + directionalTexelStep; 
    twoStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 2.0 * directionalTexelStep; 
    threeStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 3.0 * directionalTexelStep; 
    fourStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 4.0 * directionalTexelStep; 
} 

Fragment:

precision highp float; 

uniform sampler2D inputImageTexture; 

varying vec2 textureCoordinate; 
varying vec2 oneStepBackTextureCoordinate; 
varying vec2 twoStepsBackTextureCoordinate; 
varying vec2 threeStepsBackTextureCoordinate; 
varying vec2 fourStepsBackTextureCoordinate; 
varying vec2 oneStepForwardTextureCoordinate; 
varying vec2 twoStepsForwardTextureCoordinate; 
varying vec2 threeStepsForwardTextureCoordinate; 
varying vec2 fourStepsForwardTextureCoordinate; 

void main() 
{ 
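    // 9-hit Gaussian along the blur direction; the weights sum to 1.0
    // (0.18 + 2.0 * (0.15 + 0.12 + 0.09 + 0.05)), preserving brightness.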
    lowp vec4 fragmentColor = texture2D(inputImageTexture, textureCoordinate) * 0.18; 
    fragmentColor += texture2D(inputImageTexture, oneStepBackTextureCoordinate) * 0.15; 
    fragmentColor += texture2D(inputImageTexture, twoStepsBackTextureCoordinate) * 0.12; 
    fragmentColor += texture2D(inputImageTexture, threeStepsBackTextureCoordinate) * 0.09; 
    fragmentColor += texture2D(inputImageTexture, fourStepsBackTextureCoordinate) * 0.05; 
    fragmentColor += texture2D(inputImageTexture, oneStepForwardTextureCoordinate) * 0.15; 
    fragmentColor += texture2D(inputImageTexture, twoStepsForwardTextureCoordinate) * 0.12; 
    fragmentColor += texture2D(inputImageTexture, threeStepsForwardTextureCoordinate) * 0.09; 
    fragmentColor += texture2D(inputImageTexture, fourStepsForwardTextureCoordinate) * 0.05; 

    gl_FragColor = fragmentColor; 
} 

As an optimization, I calculate the step size between texture samples outside of the fragment shader, using the angle, the blur size, and the image dimensions. That step is then passed into the vertex shader, so that I can calculate the texture sampling positions there and have them interpolated into the fragment shader. This avoids dependent texture reads on iOS devices, which are slow.
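
A sketch of that CPU-side step calculation in Swift (an illustration of the idea, not the exact expression used in GPUImage):

import CoreGraphics
import Foundation

// Convert the blur angle and size into a per-sample step in normalized
// texture coordinates; this is the value the shaders call directionalTexelStep.
func directionalTexelStep(angleInDegrees: CGFloat,
                          blurSize: CGFloat,
                          imageSize: CGSize) -> CGPoint {
    let radians = angleInDegrees * .pi / 180.0
    return CGPoint(x: blurSize * cos(radians) / imageSize.width,
                   y: blurSize * sin(radians) / imageSize.height)
}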

The zoom blur is much slower, because I still do these calculations in the fragment shader. No doubt I could optimize this, but I haven't tried yet. The zoom blur performs a 9-hit Gaussian blur where the direction and the per-sample offset distance vary as a function of each pixel's position relative to the center of the blur.

It uses the following fragment shader (with a standard passthrough vertex shader):

varying highp vec2 textureCoordinate; 

uniform sampler2D inputImageTexture; 

uniform highp vec2 blurCenter; 
uniform highp float blurSize; 

void main() 
{ 
    // TODO: Do a more intelligent scaling based on resolution here 
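    // The offset scales with each pixel's distance from blurCenter, so points
    // farther from the center are smeared more strongly.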
    highp vec2 samplingOffset = 1.0/100.0 * (blurCenter - textureCoordinate) * blurSize; 

    lowp vec4 fragmentColor = texture2D(inputImageTexture, textureCoordinate) * 0.18; 
    fragmentColor += texture2D(inputImageTexture, textureCoordinate + samplingOffset) * 0.15; 
    fragmentColor += texture2D(inputImageTexture, textureCoordinate + (2.0 * samplingOffset)) * 0.12; 
    fragmentColor += texture2D(inputImageTexture, textureCoordinate + (3.0 * samplingOffset)) * 0.09; 
    fragmentColor += texture2D(inputImageTexture, textureCoordinate + (4.0 * samplingOffset)) * 0.05; 
    fragmentColor += texture2D(inputImageTexture, textureCoordinate - samplingOffset) * 0.15; 
    fragmentColor += texture2D(inputImageTexture, textureCoordinate - (2.0 * samplingOffset)) * 0.12; 
    fragmentColor += texture2D(inputImageTexture, textureCoordinate - (3.0 * samplingOffset)) * 0.09; 
    fragmentColor += texture2D(inputImageTexture, textureCoordinate - (4.0 * samplingOffset)) * 0.05; 

    gl_FragColor = fragmentColor; 
} 

Note that both of these blurs are hardcoded at 9 samples for performance reasons. This means that at larger blur sizes, you'll start to see artifacts from the limited number of samples. For larger blurs, you'll need to run these filters multiple times or extend them to support more Gaussian samples. However, more samples will lead to slower rendering times, because texture sampling bandwidth is limited on iOS devices.
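
To illustrate the multi-pass option for larger blurs, a sketch in Swift (again assuming the imageByFilteringImage: convenience method; feeding one 9-sample pass into another roughly widens the effective kernel):

import UIKit
import GPUImage

// Widen the effective blur by chaining two 9-sample passes.
func doubleMotionBlur(_ input: UIImage, angle: CGFloat) -> UIImage? {
    let firstPass = GPUImageMotionBlurFilter()
    firstPass.blurSize = 2.0
    firstPass.blurAngle = angle
    guard let intermediate = firstPass.image(byFilteringImage: input) else { return nil }

    let secondPass = GPUImageMotionBlurFilter()
    secondPass.blurSize = 2.0
    secondPass.blurAngle = angle
    return secondPass.image(byFilteringImage: intermediate)
}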


Wow, nice one BradLarson! – howanghk