A CIDetector object uses image processing to look for features (for example, faces) in a picture. You might also want to use the CIFaceFeature class, which can find eye and mouth positions in faces detected with CIDetector.

This class can maintain many state variables that can impact performance, so for best performance, reuse CIDetector instances instead of creating new ones.
detectorOfType:context:options:
Creates and returns a configured detector.

type
A string indicating the kind of detector you are interested in. See "Detector Types."
context
A Core Image context that the detector can use when analyzing an image.
options
A dictionary containing details on how you want the detector to be configured. See "Detector Configuration Keys."

A configured detector.
A CIDetector object can potentially create and hold a significant amount of resources. Where possible, reuse the same CIDetector instance. Also, your application performs better if the CIContext used to initialize the detector is the same context used to process the CIImage objects the detector analyzes.

Declared in CIDetector.h.
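Putting this advice together, here is a minimal sketch of creating a reusable face detector; the nil context options and the accuracy choice are illustrative, not requirements:

```objc
#import <CoreImage/CoreImage.h>

// Create the context once; reuse it both for the detector and for
// any later processing of the CIImage objects it examines.
CIContext *context = [CIContext contextWithOptions:nil];

// Creating a detector is expensive, so keep this instance around
// rather than creating a new one for every image.
NSDictionary *options = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh };
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:context
                                          options:options];
```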
featuresInImage:
Searches for features in an image.

image
The image you want to examine.

An array of CIFeature objects. Each object represents a feature detected in the image.

Declared in CIDetector.h.
featuresInImage:options:
Searches for features in an image, based on the specified image orientation.

image
The image you want to examine.
options
A dictionary that specifies face detection options. See "Feature Detection Keys" for allowed keys and their possible values.

An array of CIFeature objects. Each object represents a feature detected in the image.

The options dictionary should contain a value for the key CIDetectorImageOrientation, and may contain other values specifying optional face-recognition features.

Declared in CIDetector.h.
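A sketch of one possible call, assuming a face detector already created with detectorOfType:context:options: and an image of your own (`imageURL` here is a placeholder):

```objc
#import <UIKit/UIKit.h>
#import <CoreImage/CoreImage.h>

CIImage *image = [CIImage imageWithContentsOfURL:imageURL]; // placeholder URL

// Pass the image's display orientation (1 = origin at top left, the default).
NSArray *features = [detector featuresInImage:image
                                      options:@{ CIDetectorImageOrientation : @1 }];
for (CIFaceFeature *face in features) {
    NSLog(@"Face bounds: %@", NSStringFromCGRect(face.bounds));
    if (face.hasLeftEyePosition) {
        NSLog(@"Left eye at %@", NSStringFromCGPoint(face.leftEyePosition));
    }
}
```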
Strings used to specify the type of detector you are interested in.

NSString* const CIDetectorTypeFace;
CIDetectorTypeFace
A detector that searches for faces in a photograph.
Available in iOS 5.0 and later.
Declared in CIDetector.h.
Keys used in the options dictionary to configure a detector.

NSString* const CIDetectorAccuracy;
NSString* const CIDetectorTracking;
NSString* const CIDetectorMinFeatureSize;
CIDetectorAccuracy
A key used to specify the desired accuracy for the detector.
The value associated with the key should be one of the values found in "Detector Accuracy Options."
Available in iOS 5.0 and later.
Declared in CIDetector.h.
CIDetectorTracking
A key used to enable or disable face tracking for the detector. Use this option when you want to track faces across frames in a video.
Available in iOS 6.0 and later.
Declared in CIDetector.h.
CIDetectorMinFeatureSize
A key used to specify the minimum size that the detector will recognize as a feature.
The value for this key is an NSNumber object ranging from 0.0 through 1.0 that represents a fraction of the minor dimension of the image.
Available in iOS 6.0 and later.
Declared in CIDetector.h.
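These configuration keys might be combined as follows for video; the accuracy and minimum-size values here are illustrative choices, not requirements:

```objc
// For video, trade accuracy for speed, enable tracking across
// frames, and ignore faces smaller than 10% of the image's
// minor dimension.
NSDictionary *options = @{
    CIDetectorAccuracy       : CIDetectorAccuracyLow,
    CIDetectorTracking       : @YES,
    CIDetectorMinFeatureSize : @0.1
};
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:context // a reused CIContext
                                          options:options];
```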
Value options used to specify the desired accuracy of the detector.
NSString* const CIDetectorAccuracyLow;
NSString* const CIDetectorAccuracyHigh;
CIDetectorAccuracyLow
Indicates that the detector should choose techniques that are lower in accuracy, but can be processed more quickly.
Available in iOS 5.0 and later.
Declared in CIDetector.h.
CIDetectorAccuracyHigh
Indicates that the detector should choose techniques that are higher in accuracy, even if it requires more processing time.
Available in iOS 5.0 and later.
Declared in CIDetector.h.
Keys used in the options dictionary for featuresInImage:options:.

NSString* const CIDetectorImageOrientation;
NSString* const CIDetectorEyeBlink;
NSString* const CIDetectorSmile;
CIDetectorImageOrientation
An option for the display orientation of the image whose features you want to detect.
The value of this key is an NSNumber object whose value is an integer between 1 and 8. The TIFF and EXIF specifications define these values to indicate where the pixel coordinate origin (0,0) of the image should appear when it is displayed. The default value is 1, indicating that the origin is in the top-left corner of the image. For further details, see kCGImagePropertyOrientation.
Core Image detects faces only when their orientation matches that of the image. Provide a value for this key if you want to detect faces in a different orientation.
Available in iOS 5.0 and later.
Declared in CIDetector.h.
CIDetectorEyeBlink
An option for whether Core Image will perform additional processing to recognize closed eyes in detected faces.
Available in iOS 7.0 and later.
Declared in CIDetector.h.
CIDetectorSmile
An option for whether Core Image will perform additional processing to recognize smiles in detected faces.
Available in iOS 7.0 and later.
Declared in CIDetector.h.
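A sketch of requesting this extra analysis and reading the results; `detector` and `image` are assumed to exist already, and the CIFaceFeature properties shown (hasSmile, leftEyeClosed, rightEyeClosed) are meaningful only when the corresponding keys are enabled:

```objc
NSDictionary *options = @{
    CIDetectorImageOrientation : @1,
    CIDetectorSmile            : @YES,   // enables hasSmile
    CIDetectorEyeBlink         : @YES    // enables leftEyeClosed/rightEyeClosed
};
NSArray *features = [detector featuresInImage:image options:options];
for (CIFaceFeature *face in features) {
    if (face.hasSmile) {
        NSLog(@"Smiling face at %@", NSStringFromCGRect(face.bounds));
    }
    if (face.leftEyeClosed || face.rightEyeClosed) {
        NSLog(@"Blink detected");
    }
}
```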