I’m trying to decide between using CGColorSpaceCreateDeviceRGB() and CGColorSpaceCreateWithName(kCGColorSpaceDisplayP3) when calling CGBitmapContextCreate. Since all iOS devices now support the Display P3 color space, I was thinking that using CGColorSpaceCreateWithName(kCGColorSpaceDisplayP3) would be a better choice, as it covers a wider color gamut. I assumed this wouldn’t cause any issues, because Display P3 includes all the colors in the sRGB space.
However, when I use Display P3, my images (JPEGs without an embedded P3 profile) appear less saturated. Can someone explain why this is happening, and how I can fix it?
Here’s what I’m doing:
- Load a JPEG image inside a UIImage.
- Create a CGContextRef using CGBitmapContextCreate with CGColorSpaceCreateWithName(kCGColorSpaceDisplayP3).
- Draw the UIImage.cgImage onto the CGContextRef using CGContextDrawImage.
- Convert the CGContextRef to an OpenGL texture and render it on top of my app’s surface.
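For reference, the context-creation and draw steps above look roughly like this in Swift (a minimal sketch using the modern CGContext initializer rather than the C-level CGBitmapContextCreate call; the function name and image dimensions here are my own, not from any framework):

```swift
import CoreGraphics
import Foundation

// Sketch: create a bitmap context backed by the Display P3 color space,
// mirroring CGBitmapContextCreate + CGColorSpaceCreateWithName(kCGColorSpaceDisplayP3).
func makeDisplayP3Context(width: Int, height: Int) -> CGContext? {
    guard let p3 = CGColorSpace(name: CGColorSpace.displayP3) else { return nil }
    return CGContext(
        data: nil,                      // let Core Graphics allocate the backing store
        width: width,
        height: height,
        bitsPerComponent: 8,
        bytesPerRow: 0,                 // 0 = let Core Graphics choose the stride
        space: p3,
        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
    )
}

// Drawing the UIImage's cgImage into the context corresponds to CGContextDrawImage:
//   context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
```

Note that when a source CGImage is tagged with a different color space than the destination context, context.draw performs a color match between the two spaces rather than copying raw component values.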
Why are the colors less saturated when using CGColorSpaceCreateWithName(kCGColorSpaceDisplayP3)? Shouldn’t the wider gamut display everything the same as, or better than, sRGB?