2012-08-07 105 views
3

So I'm using iOS 4.2 to add zooming and panning to my application. I've implemented one instance each of UIPinchGestureRecognizer and UIPanGestureRecognizer. As far as I can tell, only one of them recognizes a gesture at a time. In particular, the latter only works while one finger is down, and the former only once a second finger is present. That's okay, but it has side effects that I think make for a poorer user experience. Is there a single gesture recognizer that handles pinch and pan together?

When you put two fingers down and then move one of them, the image scales (zooms) as it should, but the pixels that were under your fingers are no longer under them. The image scales about the center of the image rather than about the midpoint between the two fingers. And that midpoint is itself moving. I want the movement of that midpoint to drive the panning of the image as a whole.

Do nearly all iOS apps have this same behavior, where the image zooms in or out around the image's center instead of the pixels under the fingers tracking the fingers?

It seems to me that creating a custom gesture recognizer is the right design approach to this problem, and it also seems likely that someone has already created such a recognizer that's free to download and use commercially. Is there such a UIGestureRecognizer?

+0

I noticed that Safari on the iPad does have the pixels track the fingers, which makes me wonder whether this is proprietary Apple behavior that I shouldn't imitate? – user574771 2012-08-08 06:58:37

+0

I noticed that Google Maps has the same behavior, so I doubt this behavior is patented... – user574771 2012-08-20 21:20:15

Answers

2

Since none of the existing recognizers gave me a better solution, the custom gesture recognizer I created achieved the desired effect. Below is the key code snippet. It lets the custom recognizer report where the view should be repositioned and what its new scale should be, using the centroid of the two fingers as the center of both the pan and the zoom, so that the pixels under the fingers stay under the fingers the whole time. The exception is when the fingers rotate, which is not supported, and I can't do anything to stop users from making that gesture. This gesture recognizer pans and zooms simultaneously with two fingers. I still need to add support for one-finger panning, for when one of the two fingers is lifted.

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event 
{ 
    // We can only process if we have two fingers down... 
    if (FirstFinger == nil || SecondFinger == nil) 
     return; 

    // We do not attempt to determine if the first finger, second finger, or 
    // both fingers are the reason for this method call. For this reason, we 
    // do not know if either is stale or updated, and thus we cannot rely 
    // upon the UITouch's previousLocationInView method. Therefore, we need to 
    // cache the latest UITouch's locationInView information each pass. 

    // Break down the previous finger coordinates... 
    float A0x = PreviousFirstFinger.x; 
    float A0y = PreviousFirstFinger.y; 
    float A1x = PreviousSecondFinger.x; 
    float A1y = PreviousSecondFinger.y; 
    // Update our cache with the current fingers for next pass through here... 
    PreviousFirstFinger = [FirstFinger locationInView:nil]; 
    PreviousSecondFinger = [SecondFinger locationInView:nil]; 
    // Break down the current finger coordinates... 
    float B0x = PreviousFirstFinger.x; 
    float B0y = PreviousFirstFinger.y; 
    float B1x = PreviousSecondFinger.x; 
    float B1y = PreviousSecondFinger.y; 


    // Calculate the zoom resulting from the two fingers moving toward or away from each other... 
    float OldScale = Scale; 
    Scale *= sqrt((B0x-B1x)*(B0x-B1x) + (B0y-B1y)*(B0y-B1y))/sqrt((A0x-A1x)*(A0x-A1x) + (A0y-A1y)*(A0y-A1y)); 

    // Calculate the old and new centroids so that we can compare the centroid's movement... 
    CGPoint OldCentroid = { (A0x + A1x)/2, (A0y + A1y)/2 }; 
    CGPoint NewCentroid = { (B0x + B1x)/2, (B0y + B1y)/2 };  

    // Calculate the pan values to apply to the view so that the combination of zoom and pan 
    // appear to apply to the centroid rather than the center of the view... 
    Center.x = NewCentroid.x + (Scale/OldScale)*(self.view.center.x - OldCentroid.x); 
    Center.y = NewCentroid.y + (Scale/OldScale)*(self.view.center.y - OldCentroid.y); 
} 

The view controller handles the event by assigning the new scale and center to the view in question. I've noticed that other gesture recognizers tend to let the controller do some of the math, but I tried to do all of the math inside the recognizer.

-(void)handlePixelTrack:(PixelTrackGestureRecognizer*)sender 
{ 
    sender.view.center= sender.Center; 
    sender.view.transform = CGAffineTransformMakeScale(sender.Scale, sender.Scale); 
} 
1

The simpler solution is to place your view inside a scroll view. Then you get pinch and pan for free. Otherwise, you can set yourself as the delegate of both the pan and pinch gestures and return YES from shouldRecognizeSimultaneously. As for zooming centered on the user's fingers, I never got that fully right, but it involves manipulating your view's layer's anchorPoint before changing its transform (I think).

+0

A scroll view adds problems like bouncing and bars. And I don't think combining the recognizers solves the problem of tracking the point between the fingers... – user574771 2012-08-08 01:34:38

+0

You can easily disable the bouncing and the bars... they're just properties. And when you get `locationInView` from the pinch gesture, it returns the midpoint. – borrrden 2012-08-08 01:36:15

+0

I tried disabling bouncing, but it didn't seem to fully disable the stretching past the scroll view's bounds. I suppose I could try again. But even though the midpoint is computed for us, do the pixels track the fingers? I don't think they do. – user574771 2012-08-08 02:49:53

7

Sorry, I'm in a hurry, but here's the code I used for one of my demo apps. It pinch-zooms and pans at the same time without using a scroll view.

Don't forget to conform to the UIGestureRecognizerDelegate protocol.

If you can't get pinch and pan at the same time, it's probably because you're missing this method:

-(BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer 
{ 
    return YES; 
} 

Here's the full source code:

#import "ViewController.h" 
#import <QuartzCore/QuartzCore.h> 

@interface ViewController() 

@end 

@implementation ViewController 

- (void)viewDidLoad 
{ 
    [super viewDidLoad]; 
    // Do any additional setup after loading the view, typically from a nib. 

    isEditing = false; 

    photoView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 460)]; 
    [photoView setImage:[UIImage imageNamed:@"photo.png"]]; 
    photoView.hidden = YES; 

    maskView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 460)]; 
    [maskView setImage:[UIImage imageNamed:@"maskguide.png"]]; 
    maskView.hidden = YES; 

    displayImage = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 460)]; 

    UIPanGestureRecognizer *panGesture = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)]; 
    UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinch:)]; 

    [panGesture setDelegate:self]; 
    [pinchGesture setDelegate:self]; 

    [photoView addGestureRecognizer:panGesture]; 
    [photoView addGestureRecognizer:pinchGesture]; 
    [photoView setUserInteractionEnabled:YES]; 

    [panGesture release]; 
    [pinchGesture release]; 

    btnEdit = [[UIButton alloc] initWithFrame:CGRectMake(60, 400, 200, 50)]; 
    [btnEdit setBackgroundColor:[UIColor blackColor]]; 
    [btnEdit setTitle:@"Start Editing" forState:UIControlStateNormal]; 
    [btnEdit addTarget:self action:@selector(toggleEditing) forControlEvents:UIControlEventTouchUpInside]; 

    [[self view] addSubview:displayImage]; 
    [[self view] addSubview:photoView]; 
    [[self view] addSubview:maskView]; 
    [[self view] addSubview:btnEdit]; 

    [self updateMaskedImage]; 
} 

- (void)viewDidUnload 
{ 
    [super viewDidUnload]; 
    // Release any retained subviews of the main view. 
} 

- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation 
{ 
    return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown); 
} 

-(void)dealloc 
{ 
    [btnEdit release]; 

    [super dealloc]; 
} 

#pragma mark - 
#pragma mark Update Masked Image Method 
#pragma mark - 

-(void)updateMaskedImage 
{ 
    maskView.hidden = YES; 

    UIImage *finalImage = 
    [self maskImage:[self captureView:self.view] 
      withMask:[UIImage imageNamed:@"mask.png"]]; 


    maskView.hidden = NO; 

    //UIImage *finalImage = [self maskImage:photoView.image withMask:[UIImage imageNamed:@"mask.png"]]; 

    [displayImage setImage:finalImage]; 
} 

- (UIImage*) maskImage:(UIImage *)image withMask:(UIImage *)maskImage { 

    CGImageRef maskRef = maskImage.CGImage; 

    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef), 
             CGImageGetHeight(maskRef), 
             CGImageGetBitsPerComponent(maskRef), 
             CGImageGetBitsPerPixel(maskRef), 
             CGImageGetBytesPerRow(maskRef), 
             CGImageGetDataProvider(maskRef), NULL, false); 

    CGImageRef masked = CGImageCreateWithMask([image CGImage], mask); 
    return [UIImage imageWithCGImage:masked]; 

} 

#pragma mark - 
#pragma mark Touches Began 
#pragma mark - 

// adjusts the editing flag to make dragging and drop work 
-(void)toggleEditing 
{ 
    if(!isEditing) 
    { 
     isEditing = true; 

     NSLog(@"editing..."); 

     [btnEdit setTitle:@"Stop Editing" forState:UIControlStateNormal]; 

     displayImage.hidden = YES; 
     photoView.hidden = NO; 
     maskView.hidden = NO; 
    } 
    else 
    { 
     isEditing = false; 

     [self updateMaskedImage]; 

     NSLog(@"stopped editting"); 

     [btnEdit setTitle:@"Start Editing" forState:UIControlStateNormal]; 

     displayImage.hidden = NO; 
     photoView.hidden = YES; 
     maskView.hidden = YES; 
    } 
} 

/* 
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event 
{ 
    if(isEditing) 
    { 
     UITouch *finger = [touches anyObject]; 
     CGPoint currentPosition = [finger locationInView:self.view]; 

     //[maskView setCenter:currentPosition]; 
     //[photoView setCenter:currentPosition]; 
     if([touches count] == 1) 
     { 
      [photoView setCenter:currentPosition]; 
     } 
     else if([touches count] == 2) 
     { 

     } 
    } 
} 
*/ 

-(void)handlePan:(UIPanGestureRecognizer *)recognizer 
{  
    CGPoint translation = [recognizer translationInView:self.view]; 
    recognizer.view.center = CGPointMake(recognizer.view.center.x + translation.x, 
             recognizer.view.center.y + translation.y); 
    [recognizer setTranslation:CGPointMake(0, 0) inView:self.view]; 
} 

-(void)handlePinch:(UIPinchGestureRecognizer *)recognizer 
{  
    recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, recognizer.scale); 
    recognizer.scale = 1; 
} 

-(BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer 
{ 
    return YES; 
} 

#pragma mark - 
#pragma mark Capture Screen Function 
#pragma mark - 

- (UIImage*)captureView:(UIView *)yourView 
{ 
    UIGraphicsBeginImageContextWithOptions(yourView.bounds.size, yourView.opaque, 0.0); 
    CGContextRef context = UIGraphicsGetCurrentContext(); 
    [yourView.layer renderInContext:context]; 
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext(); 
    UIGraphicsEndImageContext(); 
    return image; 
} 

#pragma mark - 

@end 
+0

I took a few elements from your code and they work as you said they would. But the behavior isn't what I want. I want the pixels to track the fingers like iPad Safari does. However, if someone shows me there's a legal problem with imitating Safari's behavior, I'll mark this as the best answer so far. – user574771 2012-08-08 19:51:23