Handling Canon DSLR cameras from an Objective-C Mac OS X app


For one of our clients we had to use professional cameras to take pictures of various objects inside a warehouse, so we built a Mac OS app that does exactly that. The app is connected to 3 DSLR cameras and takes high-quality pictures of the shipment boxes and the items stored in them; we then apply image processing to these pictures to detect items, barcodes and documents. In what follows we will talk about handling Canon DSLRs from a Mac OS application.

The cameras we had to use are Canon Rebel T3s (branded as the Canon 1100D in Europe), because several cameras of this model were already in the warehouse.

To get access to the Canon SDK (EDSDK), you need to apply on the Canon website (http://www.usa.canon.com) and explain how you want to use it. Within a day or two you will receive an email with the details of your Canon developer account, and then you can download the SDK.

Initially, we receive a list from our client with the IDs of all three cameras, and we have to configure the cameras in the order specified in that list. To identify each connected camera, the EDSDK provides two functions:

EdsError EDSAPI EdsGetCameraList(EdsCameraListRef *outCameraListRef); // Get camera list objects
EdsError EDSAPI EdsGetChildCount(EdsBaseRef inRef, EdsUInt32 *outCount); // Get number of objects in camera list
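
A sketch of how these calls fit together, using EdsGetChildAtIndex and EdsGetDeviceInfo from the same SDK to list every connected camera (the header import path may differ depending on how the SDK is embedded in your project):

```objc
#import <EDSDK/EDSDK.h> // adjust to match your project's SDK setup

// Enumerate connected cameras and log their descriptions, so each one
// can be matched against the client's ordered ID list.
- (void)enumerateCameras {
    EdsCameraListRef cameraList = NULL;
    EdsUInt32 count = 0;

    if (EdsGetCameraList(&cameraList) == EDS_ERR_OK &&
        EdsGetChildCount(cameraList, &count) == EDS_ERR_OK) {
        for (EdsUInt32 i = 0; i < count; i++) {
            EdsCameraRef camera = NULL;
            if (EdsGetChildAtIndex(cameraList, i, &camera) == EDS_ERR_OK) {
                EdsDeviceInfo info;
                if (EdsGetDeviceInfo(camera, &info) == EDS_ERR_OK) {
                    NSLog(@"Camera %u: %s (port %s)",
                          i, info.szDeviceDescription, info.szPortName);
                }
                EdsRelease(camera);
            }
        }
    }
    if (cameraList) EdsRelease(cameraList);
}
```

Remember that EDSDK objects are reference-counted: every ref you obtain should eventually be released with EdsRelease.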

Going through this list, we managed to identify each connected camera by ID and arrange the cameras in the preferred order. For convenience, we created a delegate protocol with multiple methods on our CameraManager (the manager that handles the cameras); the two important methods for configuring cameras are:

- (void)cameraWasConnected:(CameraModel *)camera;
- (void)cameraWasDisconnected:(CameraModel *)camera;

which notify the delegate object when a camera has been connected or disconnected.
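
For illustration, the protocol declaration might look like this (everything except the two methods above is an assumption about our CameraManager's shape):

```objc
@class CameraModel;

// Hypothetical declaration; only the two delegate methods come from
// the actual project, the rest is illustrative.
@protocol CameraManagerDelegate <NSObject>
- (void)cameraWasConnected:(CameraModel *)camera;
- (void)cameraWasDisconnected:(CameraModel *)camera;
@end

@interface CameraManager : NSObject
@property (nonatomic, weak) id<CameraManagerDelegate> delegate;
@end
```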

1. Live Preview

It's all about commands! If we want to send an action to the cameras, we have to create a command (CameraCommand).

For live preview there are 2 main commands we have to remember:

- StartEvfCommand
- EndEvfCommand

Each command is created with the camera model on which we perform the operation.


StartEvfCommand *startCommand = [[StartEvfCommand alloc] initWithCameraModel:self.cameraModel];
[startCommand execute];

Optionally, we can set a completion block for each command:

[startCommand setCompletionBlock:^{
    // poll the camera for live-view frames at the desired frame rate
    self.evfTimer = [NSTimer scheduledTimerWithTimeInterval:1.0 / fps target:self selector:@selector(updateEvf) userInfo:nil repeats:YES];
}];

EndEvfCommand *endCommand = [[EndEvfCommand alloc] initWithCameraModel:self.cameraModel];
[endCommand setCompletionBlock:^{
    [self.evfTimer invalidate]; // stop polling once live view ends
    self.evfTimer = nil;
}];
[endCommand execute];
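
Each timer tick then has to pull a fresh frame from the camera. A sketch of updateEvf, assuming a DownloadEvfCommand built in the same style as the other commands (Canon's sample code ships a similar download command):

```objc
- (void)updateEvf {
    // downloads the current live-view frame and hands it to the preview
    DownloadEvfCommand *evfCommand =
        [[DownloadEvfCommand alloc] initWithCameraModel:self.cameraModel];
    [evfCommand execute];
}
```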

2. Taking Picture

Now that we've learned how to configure the cameras and start a live preview, let's take a photo to see how it goes.

To take a picture we use another command, TakePictureCommand.
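
Issuing it mirrors the EVF commands above (assuming TakePictureCommand follows the same initializer convention):

```objc
TakePictureCommand *takeCommand =
    [[TakePictureCommand alloc] initWithCameraModel:self.cameraModel];
[takeCommand execute];
```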

When a picture is taken, that picture is temporarily stored in a buffer in the Canon camera.

If we want to save that picture or display it in an image view, we first have to register a callback for the event fired when a new object is created:

EdsError EDSAPI EdsSetObjectEventHandler(EdsCameraRef inCameraRef,
                                         EdsObjectEvent inEvent,
                                         EdsObjectEventHandler inObjectEventHandler,
                                         EdsVoid *inContext);
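
As a sketch, registration and the handler could look like this (the handler body is illustrative; EDSCALLBACK and the kEdsObjectEvent_* constants come from the EDSDK headers):

```objc
// Called by the SDK when the camera creates a new object, e.g. a picture.
static EdsError EDSCALLBACK handleObjectEvent(EdsObjectEvent inEvent,
                                              EdsBaseRef inRef,
                                              EdsVoid *inContext) {
    if (inEvent == kEdsObjectEvent_DirItemRequestTransfer ||
        inEvent == kEdsObjectEvent_DirItemCreated) {
        // inRef is the EdsDirectoryItemRef of the new picture;
        // hand it over to a DownloadCommand to fetch the image data.
    } else if (inRef) {
        EdsRelease(inRef); // release objects we don't use
    }
    return EDS_ERR_OK;
}

// Register the handler for all object events on a camera:
EdsSetObjectEventHandler(cameraRef, kEdsObjectEvent_All,
                         handleObjectEvent, NULL);
```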

Now, when a picture is taken, the callback is invoked with an EdsDirectoryItemRef, which can be passed to a DownloadCommand to fetch our picture.

Here is a sample code that downloads the image and saves it to a specific path.

// itemInfo is the EdsDirectoryItemRef received in the object event callback
DownloadCommand *downloadCommand =
    [[DownloadCommand alloc] initWithCameraModel:self.cameraModel directoryItem:itemInfo];
[downloadCommand execute];

3. Zooming

Zooming is the second important piece of functionality we had to implement in this project. But before we talk about zoom, let’s first talk about EvfPictureView.

EvfPictureView is basically a subclass of NSImageView in which the actual camera preview is displayed. When we start the live preview with StartEvfCommand (see point 1), the app receives the live feed from the cameras; to display it in our preview, we just have to set the cameraModel property of our EvfPictureView instance.

Now we have a functional live preview. Zooming the live preview is easier than you might think.

EvfPictureView is a subclass of NSImageView, which in turn inherits from NSView. The easiest way is to use an NSView method, - (void)scaleUnitSquareToSize:(NSSize)newUnitSize, to scale the preview to your preferred size.
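
A minimal sketch, assuming a hypothetical zoomBy: helper added to EvfPictureView:

```objc
// Hypothetical helper on EvfPictureView (an NSView subclass).
- (void)zoomBy:(CGFloat)factor {
    // scaleUnitSquareToSize: scales the view's internal coordinate system,
    // so the drawn preview grows or shrinks by the given factor.
    [self scaleUnitSquareToSize:NSMakeSize(factor, factor)];
    [self setNeedsDisplay:YES];
}
```

For example, [evfView zoomBy:2.0] doubles the preview size, and factors below 1 zoom back out; note that the scaling is cumulative across calls.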

4. Multiple-camera problems

One of the main problems we encountered occurs when we display the live previews of three different cameras at the same time on the same screen. When we do this, the Canon SDK sometimes crashes with a bad access (DownloadEvfImage(__EdsObject*, __EdsObject*) - EXC_BAD_ACCESS (SIGSEGV)).

After some research and investigation, we concluded that the problem is caused by live-preview frames being deallocated while the SDK is still using them. Unfortunately, the implementation of the method in which the crash occurs (DownloadEvfImage()) is private, so we can’t inspect or modify it. We found a temporary workaround: keep strong references to the live-preview frames and release them after a delay, once we are sure those frames have already been displayed. This is only a temporary solution, because the "hack" costs extra memory. We are still looking for a better fix, so if you have any suggestions for solving this issue, feel free to leave us a message.
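
The workaround can be sketched like this (retainedFrames, displayEvfFrame: and the 2-second delay are illustrative; pick a delay long enough that the frame has certainly left the screen):

```objc
// Keep each live-view frame alive until we are sure it has been displayed.
- (void)displayEvfFrame:(NSImage *)frame {
    [self.retainedFrames addObject:frame]; // strong ref: the SDK may still touch it
    self.evfPictureView.image = frame;

    __weak typeof(self) weakSelf = self;
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2 * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        [weakSelf.retainedFrames removeObject:frame]; // now safe to release
    });
}
```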


In conclusion, we would have liked more access to the Canon SDK internals to see exactly how it works, to improve performance and memory usage and, not least, to improve support for multiple cameras.

If you need any help working with the Canon SDK, please drop us a line. What kind of apps would you create with the Canon SDK?