3 Feb 2009

Record Your Core Animation Animation

by Matt Long

Every once in a while I find a way to combine multiple technologies that, while the result isn’t terribly useful, is very interesting nonetheless. In this post I will be taking a look at combining Core Animation and QuickTime. As you may or may not be aware, you can draw into a graphics context while your Core Animation animation is running and add each image created to a QTMovie object from QTKit. This enables you to create a QuickTime movie of your Core Animation animation. Here’s how.

The basic process flow goes like this. Clicking the ‘Capture’ button on the user interface calls an IBAction called -saveAnimation:, which prompts the user to select an output file for the movie. Once the user has selected the file, the animations are created and added to the layer. Next we create a timer that, at a specified interval, calls a method that grabs the current frame and adds it to the QTMovie object using -addImage:forDuration:withAttributes:. We set our AppDelegate to also be the delegate for the animation group so that when the animation completes we get notified and can then write our QTMovie object’s data to disk.

Use An Interesting Animation

First you are going to need an animation that is worth recording. Any old animation will do, of course, but we’ll keep it interesting by adding multiple animations to a single layer. I have created four different keyframe animations that we will add to an animation group. The key paths are “backgroundColor”, “borderWidth”, “position”, and “bounds”. Check the sample code to see how all of these animations are constructed; a sketch of one follows below.
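As a point of reference, here is a minimal sketch of what one of these keyframe animations might look like. The values are illustrative only; the actual numbers in the sample project may differ.

- (CAKeyframeAnimation*)positionAnimation;
{
  // Sketch only: these points are made up for illustration.
  CAKeyframeAnimation *animation = [CAKeyframeAnimation animationWithKeyPath:@"position"];
  [animation setValues:[NSArray arrayWithObjects:
                          [NSValue valueWithPoint:NSMakePoint(100.0, 100.0)],
                          [NSValue valueWithPoint:NSMakePoint(300.0, 250.0)],
                          [NSValue valueWithPoint:NSMakePoint(100.0, 100.0)], nil]];
  [animation setDuration:5.0];
  return animation;
}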

We set the duration for all of the animations to five seconds. We also need to make sure we set the duration for the group itself; otherwise it will override the five-second duration we set for the animations themselves and run for the default 0.25 seconds. The code below shows how the animations are added to the layer.

- (void)loadAnimations;
{
  CAAnimationGroup *group = [CAAnimationGroup animation];
  
  [group setAnimations:[NSArray arrayWithObjects:[self backgroundColorAnimation],
                          [self borderWidthAnimation],
                          [self positionAnimation],
                          [self boundsAnimation], nil]];
  
  [group setValue:@"mainGroup" forKey:@"name"];
  [group setDuration:5.0];
  [group setAutoreverses:YES];
  [group setDelegate:self];
  
  [layer addAnimation:group forKey:@"group"];  
}

Notice that we have used KVC here to set a name for the animation group. We will use this as a tag later to make sure the animation that triggers our -animationDidStop:finished: delegate method is the correct one. More on that later.

Get Our Movie Ready

Prior to loading the animations, we prompt the user to select a file to write the movie file to. After loading the animations, we start our timer.

- (IBAction)saveAnimation:(id)sender;
{
  NSSavePanel *savePanel = [NSSavePanel savePanel];
  [savePanel setExtensionHidden:YES];
  [savePanel setCanSelectHiddenExtension:NO];
  [savePanel setTreatsFilePackagesAsDirectories:NO];
  
  // If the user cancels, bail out rather than animating
  // and adding frames to a nil movie.
  if( [savePanel runModal] != NSOKButton )
    return;
  
  NSError *error = nil;
  movie = [[QTMovie alloc] initToWritableFile:[savePanel filename] error:&error];
  if( movie == nil )
  {
    NSLog(@"Could not create the movie file: %@", error);
    return;
  }
  
  [self loadAnimations];
  
  // Grab a frame every tenth of a second.
  timer = [NSTimer scheduledTimerWithTimeInterval:1.0/10.0
                                           target:self
                                         selector:@selector(updateTime:)
                                         userInfo:nil
                                          repeats:YES];
}

Our -updateTime: method will get called every 1/10th of a second and will grab the current frame and add it to the QTMovie object. Here is the -updateTime: code.

- (void)updateTime:(NSTimer*)theTimer;
{
  NSBitmapImageRep *image = [self getCurrentFrame];
  
  // Each frame lasts 1/10th of a second, matching the timer's interval.
  QTTime time = QTMakeTime(1, 10);
  NSDictionary *attrs = [NSDictionary dictionaryWithObject:@"png " forKey:QTAddImageCodecType];
  NSImage *img = [[NSImage alloc] initWithData:[image TIFFRepresentation]];
  [movie addImage:img forDuration:time withAttributes:attrs];
  
  [img release];
  [image release];
}

Obtaining the Current Frame

The code to obtain the current frame is somewhat lengthy, but the concepts are pretty simple. We need to create a graphics context that we can draw into and then render the presentationLayer of the contentView’s root layer into it. If you’re not familiar with it, the presentationLayer provides the current state of the animated properties while the animation is “in flight”.

Core Animation doesn’t provide any callbacks for when a frame is ready to be displayed, which is why we are using a timer. This means that we may be capturing more frames than we need to, so getting the right frame rate takes a bit of trial and error, which I have to confess I wasn’t able to nail down completely. I’m still working on it and will update this post when I get that part figured out. Meanwhile, here is the code for -getCurrentFrame.

- (NSBitmapImageRep*)getCurrentFrame;
{
  CGContextRef    context = NULL;
  CGColorSpaceRef colorSpace;
  int bitmapBytesPerRow;
  
  int pixelsHigh = (int)[[[window contentView] layer] bounds].size.height;
  int pixelsWide = (int)[[[window contentView] layer] bounds].size.width;

  bitmapBytesPerRow = (pixelsWide * 4);
  
  colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
  
  context = CGBitmapContextCreate (NULL,
                                   pixelsWide,
                                   pixelsHigh,
                                   8,
                                   bitmapBytesPerRow,
                                   colorSpace,
                                   kCGImageAlphaPremultipliedLast);
  CGColorSpaceRelease( colorSpace );
  
  if( context == NULL )
  {
    NSLog(@"Failed to create context.");
    return nil;
  }
  
  // Render the in-flight state of the animation into the bitmap context.
  [[[[window contentView] layer] presentationLayer] renderInContext:context];
  
  CGImageRef img = CGBitmapContextCreateImage(context);
  NSBitmapImageRep *bitmap = [[NSBitmapImageRep alloc] initWithCGImage:img];
  CGImageRelease(img);
  CGContextRelease(context); // release the context so it isn't leaked on every frame
  
  return bitmap;
}

Notice that we are calling -renderInContext: on the presentationLayer of the window’s contentView’s root layer. If we were to render only our animated layer, we wouldn’t see it move across the window, since a layer renders only its own bounds.
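To make the distinction concrete, here are the two choices side by side; the second, commented out, is the one we are avoiding.

// Renders the entire scene, so the layer's movement across the window is captured:
[[[[window contentView] layer] presentationLayer] renderInContext:context];

// Would render only the animated layer itself, within its own bounds:
// [[layer presentationLayer] renderInContext:context];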

Finishing Up

Finally we need to write the movie data out to disk. The QTMovie object provides a single call to do so, but we need a way to know when the animation has finished so we can make this call. When we created our animation group, we set its delegate to our AppDelegate, which causes the delegate method -animationDidStop:finished: to get called. Remember that the delegates set on the individual animations get ignored when you are using an animation group. We implement the -animationDidStop:finished: delegate method as shown below.

- (void)animationDidStop:(CAAnimation *)theAnimation finished:(BOOL)flag;
{
  NSLog(@"Animation stopped: %@", theAnimation);
  
  // Messaging a nil name simply returns NO, so one check suffices.
  id name = [theAnimation valueForKey:@"name"];
  if( [name isEqualToString:@"mainGroup"] )
  {
    [timer invalidate];
    [movie updateMovieFile];
  }
}

The first thing we do is check the name tag we set when creating the animation group. This really isn’t necessary in this example, since only one animation uses it, but it shows you how to differentiate between animations if you were to use multiple animations or groups and wanted to know when each of them finished; see the sketch below.
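For instance, if you had a hypothetical second group named “introGroup” (made up here purely for illustration), the delegate method could branch on the name like this:

- (void)animationDidStop:(CAAnimation *)theAnimation finished:(BOOL)flag;
{
  id name = [theAnimation valueForKey:@"name"];
  if( [name isEqualToString:@"mainGroup"] )
  {
    // Finish the movie as shown above.
  }
  else if( [name isEqualToString:@"introGroup"] ) // hypothetical second group
  {
    // Respond to the other animation finishing.
  }
}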

The call to -updateMovieFile writes the data to disk, and we now have a QuickTime movie that plays our animation. Open the resulting file in QuickTime Player or just invoke Quick Look to see the result.
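As an aside, if you would rather end up with a single self-contained file than update the movie reference in place, QTKit can also flatten the movie on export. A minimal sketch, with an illustrative output path:

// Flatten into a self-contained movie file; the path here is illustrative.
NSDictionary *attributes = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                                       forKey:QTMovieFlatten];
[movie writeToFile:@"/tmp/animation.mov" withAttributes:attributes];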

Conclusion

Maybe you can think of a use for this kind of thing. I haven’t yet, other than for writing a blog post, of course. Shoot me your thoughts in the comments section. Until next time.

CA Animation Capture Demo Project

Comments

mozketo says:

I second the idea of using it for blog postings. Having just blogged about a CAConstraint grid layout, I’d love to add a small QT movie to accompany the post.

Another use would be to record your CA in HD resolutions and pipe it to your Apple TV, or for use as a screensaver (if there’s a simple “play this QT as a screensaver” screensaver) with minimal CPU/GPU usage.

Eric Wing says:

Try Core Video instead of using NSTimer. Core Video callbacks happen on a high-priority thread that is supposed to be synchronized with your display’s refresh rate. Though this assumes you actually have a display connected.

Matt Long says:

@Eric Wing

Can you be a bit more specific? Which callbacks? I can do a screen grab session using OpenGL, but the point of the post was that you can record your Core Animation without having to set up OpenGL or some other mechanism. While what you’re saying sounds right, it doesn’t sound easier.

Thanks.

-Matt

Eric Wing says:

Just use Core Video. You don’t have to use OpenGL with Core Video. All you do is create a displaylink for a display, set a callback, and start the displaylink.

You can try it very easily for yourself and see. Create a new Cocoa application in Xcode. Create a class to act as your application delegate and set it up in IB.

In AppDelegate.h:

#import <Cocoa/Cocoa.h>
#import <CoreVideo/CoreVideo.h>

@interface AppDelegate : NSObject
{
  CVDisplayLinkRef displayLink;
}

@end

In AppDelegate.m

#import "AppDelegate.h"

CVReturn MyDisplayLinkCallback( CVDisplayLinkRef displayLink,
                                const CVTimeStamp *inNow,
                                const CVTimeStamp *inOutputTime,
                                CVOptionFlags flagsIn,
                                CVOptionFlags *flagsOut,
                                void *displayLinkContext )
{
  fprintf(stderr, "In MyDisplayLinkCallback\n");
  // draw/get your Core Animation frame here
  return kCVReturnSuccess;
}

@implementation AppDelegate

- (void)applicationDidFinishLaunching:(NSNotification*)the_notification
{
  NSLog(@"applicationDidFinishLaunching");
  CVDisplayLinkCreateWithActiveCGDisplays(&displayLink);
  CVDisplayLinkSetOutputCallback(displayLink, &MyDisplayLinkCallback, self);
  CVDisplayLinkStart(displayLink);
}

@end

The function MyDisplayLinkCallback will be called back in sync with the refresh rate of your display. This is the perfect time to draw.

Keep in mind that Core Video callbacks operate on a secondary thread, so you may need to lock. But since you would be drawing into a new context that you create, you might not need to lock anything.

Lukasz says:

This example is good, but when the layer has a mask set (with -setMask:), the recorded movie doesn’t include the mask. So this example is not good for layers with masks. Does anybody have an idea how to fix this? Thanks for any answers.

Lukasz says:

OK, never mind what I wrote earlier.
The size of the movie is very big. Is there any way to compress the movie while recording?

Lukasz says:

Hello,

Is there any way to record movies on iPhone OS, as there is on Mac OS X?

Matt Long says:

@Lukasz

Not in the same way. This sample code uses QTKit on Mac OS X, and QTKit is not available on the phone currently. You could probably grab the individual images with -renderInContext: on the layer and insert them into a movie using ffmpeg. Of course, I’ve not worked with ffmpeg, so this is speculation; however, I know people have successfully implemented ffmpeg code on the phone.

Best Regards.

Lukasz says:

Thanks for response,

I have no problem making images from the presentation layer. The only thing I have to do is join these images to make a movie.

Matt Long says:

@Lukasz

Unfortunately, joining the images together into a movie is the hard part. Unless you want to roll your own encoder, you will need to use a library like ffmpeg.

-Matt

Lukasz says:

Can you give an example of how to join images into a movie using ffmpeg?