Using PHPickerViewController Images in a Memory-Efficient Way
September 26, 2020
PHPickerViewController is (in my opinion) one of the more exciting parts of iOS 14. We developers now have a fully-fledged photo picker that we can just use, rather than having to spend a bunch of our time building our own (much like SFSafariViewController freed developers from having to write their own in-app web browsers). Similar to SFSafariViewController, it also has terrific privacy benefits: previously, in order to show the pictures to choose from in our custom UIs, we had to request access to all the user's photos, which is not something users or developers really wanted to contend with. PHPickerViewController works differently in that iOS throws up the picker in a separate process, and the host app only sees the pictures that the user gave the app access to, and not a single one more. Much nicer!
(Note we did/still do have UIImagePickerController, but many of us didn't use it due to missing functionality, like selecting multiple photos, which PHPickerViewController handles brilliantly.)
Apollo uses this API in iOS 14 to power its image uploader, so you can upload images directly into your comments or posts.
How to Use
The API is really nice and simple to integrate, too. The only hitch I ran into is that the callback when the user selects photos provides you with essentially a bunch of objects that wrap NSItemProvider objects, which seemed a little intimidating at first glance versus something "simpler" like a bunch of UIImage objects (but there's good reason they don't do the latter).
Presenting the picker in the first place is easy:
var configuration = PHPickerConfiguration()
configuration.selectionLimit = 10
configuration.filter = .images
configuration.preferredAssetRepresentationMode = .current // Don't bother modifying how they're represented since we're just turning them into Data anyway
let picker = PHPickerViewController(configuration: configuration)
picker.delegate = self
present(picker, animated: true, completion: nil)
But acting on the user’s selections is where you can have some trouble:
func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
    // What do I do here?! 👉🥺👈
}
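One thing worth doing right away, regardless of how you process the results: PHPickerViewController doesn't dismiss itself, so your delegate is responsible for that. A minimal sketch (the view controller name is just a placeholder, not from this article):

```swift
import PhotosUI
import UIKit

// `MyViewController` is a placeholder name for your own presenting controller
extension MyViewController: PHPickerViewControllerDelegate {
    func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
        // The picker stays on screen until we dismiss it ourselves
        picker.dismiss(animated: true, completion: nil)

        // …then hand `results` off for processing
    }
}
```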
In reality though, it’s not too hard.
What Not to Do
My first swing at bat was… not great. If the user selected a bunch of photos to upload and the images were decently sized (say, straight off a modern iPhone camera) the memory footprint of the app could temporarily swell to multiple gigabytes. Yeah, with a g. Caused some crashing and user confusion, understandably, and was quite silly of me.
At first my naive solution was something along the lines of (simplified):
var images: [UIImage] = []

for result in results {
    result.itemProvider.loadObject(ofClass: UIImage.self) { (object, error) in
        guard let image = object as? UIImage else { return }

        let targetSize = CGSize(width: 2_000, height: 2_000)
        let resizedImage = UIGraphicsImageRenderer(size: targetSize).image { (context) in
            image.draw(in: CGRect(origin: .zero, size: targetSize))
        }

        images.append(resizedImage)
    }
}
Long story short, decoding potentially large images into full-fledged UIImage objects, and especially then going and re-drawing them to resize them, is a very memory-expensive operation, multiplied with each image. (A 12-megapixel photo decodes to roughly 48 MB of bitmap at four bytes per pixel, before you even draw it again.) Bad. Don't do this. I know better. You know better.
(If you’re curious for more information, Jordan Morgan has a great overview with his try! Swift NYC talk on The Life of an Image and there’s also an excellent WWDC session from 2018 called Image and Graphics Best Practices that goes even more in depth.)
What You Should Do
It’s a tiny bit longer because we have to dip down into Core Graphics, but don’t fret, it’s really not that bad. I’ll break it down.
let dispatchQueue = DispatchQueue(label: "com.christianselig.Apollo.AlbumImageQueue")
var selectedImageDatas = [Data?](repeating: nil, count: results.count) // Awkwardly named, sure
var totalConversionsCompleted = 0

for (index, result) in results.enumerated() {
    result.itemProvider.loadFileRepresentation(forTypeIdentifier: UTType.image.identifier) { (url, error) in
        guard let url = url else {
            dispatchQueue.sync { totalConversionsCompleted += 1 }
            return
        }

        let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary

        guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions) else {
            dispatchQueue.sync { totalConversionsCompleted += 1 }
            return
        }

        let downsampleOptions = [
            kCGImageSourceCreateThumbnailFromImageAlways: true,
            kCGImageSourceCreateThumbnailWithTransform: true,
            kCGImageSourceThumbnailMaxPixelSize: 2_000,
        ] as CFDictionary

        guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, downsampleOptions) else {
            dispatchQueue.sync { totalConversionsCompleted += 1 }
            return
        }

        let data = NSMutableData()

        guard let imageDestination = CGImageDestinationCreateWithData(data, kUTTypeJPEG, 1, nil) else {
            dispatchQueue.sync { totalConversionsCompleted += 1 }
            return
        }

        // Don't compress PNGs, they're too pretty
        let isPNG: Bool = {
            guard let utType = cgImage.utType else { return false }
            return (utType as String) == UTType.png.identifier
        }()

        let destinationProperties = [
            kCGImageDestinationLossyCompressionQuality: isPNG ? 1.0 : 0.75
        ] as CFDictionary

        CGImageDestinationAddImage(imageDestination, cgImage, destinationProperties)
        CGImageDestinationFinalize(imageDestination)

        dispatchQueue.sync {
            selectedImageDatas[index] = data as Data
            totalConversionsCompleted += 1
        }
    }
}
Break it Down Now
There’s a bit to unpack here, but I’ll try to hit everything.
The core concept is that we're no longer loading the full UIImage and/or drawing it into a context each time (which can be monstrously large, and is why PHPickerViewController doesn't just hand us UIImage objects). In my case I'm just uploading the Data and getting a resulting URL back, so I never need the image itself, but if you do, creating a UIImage from the smaller CGImage will still be much better all the same.
Okay! So we start off with a queue, and the data to be collected. loadFileRepresentation fires its completion handlers on an async queue, and the docs don't mention whether it executes them serially (in practice it does, but that could change), so we create a queue to ensure we're not writing to the array of Data across multiple threads. Also note that the array itself is set up in a way that maintains the order of the images; otherwise the order the user selected the photos in and the order they're processed in may not line up 1:1. Lastly, we keep a separate counter to know when we're done.
let dispatchQueue = DispatchQueue(label: "com.christianselig.Apollo.AlbumImageQueue")
var selectedImageDatas = [Data?](repeating: nil, count: results.count) // Awkwardly named, sure
var totalConversionsCompleted = 0
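If it helps to see the pattern in isolation, here's a stand-alone Foundation sketch of the same idea, with string uppercasing standing in for the image conversion (the names mirror the snippet above, but nothing here is Apollo's actual code):

```swift
import Foundation

let inputs = ["first", "second", "third", "fourth"]
let dispatchQueue = DispatchQueue(label: "example.sync") // serializes all writes
var results = [String?](repeating: nil, count: inputs.count)
var totalCompleted = 0

let group = DispatchGroup()
for (index, input) in inputs.enumerated() {
    // Work completes concurrently, in no guaranteed order…
    DispatchQueue.global().async(group: group) {
        let converted = input.uppercased() // stand-in for the CGImage work
        // …but each result lands in its original slot, and the counter
        // tells us when every conversion has finished (success or not)
        dispatchQueue.sync {
            results[index] = converted
            totalCompleted += 1
        }
    }
}
group.wait()

print(results.compactMap { $0 }) // order matches `inputs`, not completion order
```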
Moving on to the main loop: instead of asking NSItemProvider to serve us up a potentially enormous UIImage, we approach more cautiously by requesting a URL to a copy of the image in the tmp directory. More freedom. (Note the file only sticks around for the lifetime of the completion handler, so do your work with it in there.)
result.itemProvider.loadFileRepresentation(forTypeIdentifier: UTType.image.identifier) { (url, error) in
We then go on to create a CGImage, but with certain requirements around the image size so as to not create something larger than we need. These Core Graphics functions can seem a little intimidating, but between their names and the corresponding docs, they paint a clear picture of what they're doing.
let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary

guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions) else {
    dispatchQueue.sync { totalConversionsCompleted += 1 }
    return
}

let downsampleOptions = [
    kCGImageSourceCreateThumbnailFromImageAlways: true,
    kCGImageSourceCreateThumbnailWithTransform: true,
    kCGImageSourceThumbnailMaxPixelSize: 2_000,
] as CFDictionary

guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, downsampleOptions) else {
    dispatchQueue.sync { totalConversionsCompleted += 1 }
    return
}
Lastly, we convert this into Data with a bit of compression (only if it's not a PNG though; PNGs are typically screenshots and whatnot, and I personally don't want to hurt the quality of those).
let data = NSMutableData()

guard let imageDestination = CGImageDestinationCreateWithData(data, kUTTypeJPEG, 1, nil) else {
    dispatchQueue.sync { totalConversionsCompleted += 1 }
    return
}

// Don't compress PNGs, they're too pretty
let isPNG: Bool = {
    guard let utType = cgImage.utType else { return false }
    return (utType as String) == UTType.png.identifier
}()

let destinationProperties = [
    kCGImageDestinationLossyCompressionQuality: isPNG ? 1.0 : 0.75
] as CFDictionary

CGImageDestinationAddImage(imageDestination, cgImage, destinationProperties)
CGImageDestinationFinalize(imageDestination)
Now we have much smaller compressed Data objects kicking around, rather than our previously large UIImage objects, and we can POST those to an API endpoint for upload or whatever you'd like! Thanks to everyone on Twitter who gave me pointers here as well. In the end this went from spiking in excess of 2 GB of memory to a small blip of 30 MB for a few seconds.
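As for POSTing that Data somewhere, it might look something like the following sketch; the endpoint URL, content type, and completion handling are all hypothetical placeholders, not Apollo's actual uploader:

```swift
import Foundation

// Illustrative only: the endpoint and response handling are placeholders
func upload(_ imageData: Data, completion: @escaping (Error?) -> Void) {
    guard let endpoint = URL(string: "https://example.com/upload") else { return }

    var request = URLRequest(url: endpoint)
    request.httpMethod = "POST"
    request.setValue("image/jpeg", forHTTPHeaderField: "Content-Type")

    // uploadTask(with:from:) streams the body Data for us
    URLSession.shared.uploadTask(with: request, from: imageData) { _, _, error in
        completion(error)
    }.resume()
}
```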
Adopt this API! It’s great!