2014-02-20

iOS: Feathering a UIImage with a glow/shadow effect

I want to find a way to apply a feather effect, something like a glow/shadow, around a UIImage (not a UIImageView) in iOS, but I can't find any complete solution. I have an idea that it could be done with masking, but I'm quite new to CoreGraphics.

Any help would be appreciated.

Thanks.


Show what you have tried so far and what problems you ran into. – rmaddy


I don't know what to do, that's why I'm asking the question. Thanks. – iphonic


@bradlarson Can you help? – iphonic

Answer


OK, so: I was looking for the same thing, but unfortunately had no luck, so I decided to write my own feathering code.

Add this code as a UIImage category, then call [image featherImageWithDepth:4] (the 4 is just an example). Try to keep the depth as low as possible.

//============================================================================== 


- (UIImage *)featherImageWithDepth:(int)featherDepth {

    // First get the image into a data buffer
    CGImageRef imageRef = [self CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);

    // Now rawData contains the image data in the RGBA8888 pixel format.
    NSUInteger byteIndex = 0;
    NSUInteger rawDataCount = width * height;

    for (NSUInteger i = 0; i < rawDataCount; ++i, byteIndex += bytesPerPixel) {

        NSInteger alphaIndex = byteIndex + 3;

        if (rawData[alphaIndex] > 100) {

            for (int row = 1; row <= featherDepth; row++) {
                if (testBorderLayer((long)alphaIndex,
                                    rawData,
                                    (long)rawDataCount,
                                    (long)width,
                                    (long)height,
                                    row)) {

                    int destinationAlpha = 255 / (featherDepth + 1) * (row + 1);
                    double alphaDiv = (double)destinationAlpha / (double)rawData[alphaIndex];

                    // Scale the alpha and the (premultiplied) RGB components together
                    rawData[alphaIndex]   = destinationAlpha;
                    rawData[alphaIndex-1] = (double)rawData[alphaIndex-1] * alphaDiv;
                    rawData[alphaIndex-2] = (double)rawData[alphaIndex-2] * alphaDiv;
                    rawData[alphaIndex-3] = (double)rawData[alphaIndex-3] * alphaDiv;

//                    // Debug visualization: paint each feather layer a distinct color.
//                    switch (row) {
//                        case 1:
//                            rawData[alphaIndex-1] = 255;
//                            rawData[alphaIndex-2] = 0;
//                            rawData[alphaIndex-3] = 0;
//                            break;
//                        case 2:
//                            rawData[alphaIndex-1] = 0;
//                            rawData[alphaIndex-2] = 255;
//                            rawData[alphaIndex-3] = 0;
//                            break;
//                        case 3:
//                            rawData[alphaIndex-1] = 0;
//                            rawData[alphaIndex-2] = 0;
//                            rawData[alphaIndex-3] = 255;
//                            break;
//                        case 4:
//                            rawData[alphaIndex-1] = 127;
//                            rawData[alphaIndex-2] = 127;
//                            rawData[alphaIndex-3] = 0;
//                            break;
//                        case 5:
//                            rawData[alphaIndex-1] = 127;
//                            rawData[alphaIndex-2] = 0;
//                            rawData[alphaIndex-3] = 127;
//                            break;
//                        case 6:
//                            rawData[alphaIndex-1] = 0;
//                            rawData[alphaIndex-2] = 127;
//                            rawData[alphaIndex-3] = 127;
//                            break;
//                        default:
//                            break;
//                    }

                    break;
                }
            }
        }
    }

    CGImageRef newCGImage = CGBitmapContextCreateImage(context);

    UIImage *result = [UIImage imageWithCGImage:newCGImage scale:[self scale] orientation:UIImageOrientationUp];

    CGImageRelease(newCGImage);
    CGContextRelease(context);
    free(rawData);

    return result;
}


//============================================================================== 


bool testBorderLayer(long byteIndex,
                     unsigned char *imageData,
                     long dataSize,
                     long pWidth,
                     long pHeight,
                     int border) {

    int width = border * 2 + 1;
    int height = width - 2;

    // Run through the ring of pixels at distance `border`:
    // |-|
    // | |
    // |-|

    // top, bottom - horizontal
    for (int i = 1; i < width - 1; i++) {

        long topIndex = byteIndex + 4 * (-border * pWidth - border + i);
        long botIndex = byteIndex + 4 * (border * pWidth - border + i);

        long destColl = byteIndex / 4 % pWidth - border + i;

        if (destColl > 1 && destColl < pWidth) {
            if (testPoint(topIndex, imageData, dataSize) ||
                testPoint(botIndex, imageData, dataSize)) {
                return true;
            }
        }
    }

    // left, right - vertical
    if (byteIndex / 4 % pWidth < pWidth - border - 1) {
        for (int k = 0; k < height; k++) {
            long rightIndex = byteIndex + 4 * (border - border * pWidth + pWidth * k);

            if (testPoint(rightIndex, imageData, dataSize)) {
                return true;
            }
        }
    }

    if (byteIndex / 4 % pWidth > border) {
        for (int k = 0; k < height; k++) {
            long leftIndex = byteIndex + 4 * (-border - border * pWidth + pWidth * k);

            if (testPoint(leftIndex, imageData, dataSize)) {
                return true;
            }
        }
    }

    return false;
}


//============================================================================== 


bool testPoint(long pointIndex, unsigned char *imageData, long dataSize) {
    // In bounds, and nearly transparent (alpha < 30)?
    if (pointIndex >= 0 && pointIndex < dataSize * 4 - 1 &&
        imageData[pointIndex] < 30) {
        return true;
    }
    return false;
}

//============================================================================== 

Sorry for the sparse comments ;)


Thanks for your support. Could you explain how your code works? – iphonic


The code finds the border where transparent (alpha < 30) and opaque pixels meet. Imagine this row of pixels: p1(alpha=0) - p2(alpha=0) - p3(alpha=255) - p4(alpha=255) - p5(alpha=255) - p6(alpha=255). The result will be: p1(alpha=0) - p2(alpha=0) - p3(alpha=255/3) - p4(alpha=255/3*2) - p5(alpha=255) - … – zurakach


That's what the code does: it finds this border by iterating over all the pixels. If the current pixel is opaque and a neighboring pixel is transparent, the current pixel is a border pixel and its alpha is set to 255/depth. If the neighboring pixels are all opaque, but at least one of their own neighbors is transparent, then the current pixel is the second pixel from the border and its alpha is set to 255/depth * 2, and so on. – zurakach