• First on New Blog

    January 28, 2023

Converted the blog over to a new Hugo theme! Hopefully everything here sorta works. Test post, will remove.

    A very cute small goat

This goat’s name is apparently Gubgub, and he’s very cute.

  • Theming Apps on iOS is Hard

    February 7, 2022

Theming apps (the ability to change up the color scheme for an app from say, a white background with blue links to a light green background with green links) is a pretty common feature across a lot of apps. It’s one of the core features of the new “Twitter Blue” subscription, Tweetbot and Twitterific have had it for a while, my app Apollo has it (and a significant subset of users use it), and it’s basic table stakes in text editors. When you use the heck out of an app, it’s pretty nice to be able to tweak it in a way that suits you more.

    Three examples of different dark themes in the Apollo app

    For the longest time, by default, an app had one color scheme. Dark mode didn’t exist at the iOS level, so it was up to apps to have two sets of colors to swap between individually. With iOS 12 Apple made that a lot nicer, and made switching between a light color scheme and a dark color scheme really easy.

    The Current Light/Dark Mode System

The current system is great for switching between light mode and dark mode. Each “color” basically has two colors: a light mode version and a dark mode version, and instead of calling it “whiteColor”, the color might be called “backgroundColor”, and have a lightish color for light mode and a darker color for dark mode. You set that on whatever you’re theming, and bam, iOS handles the rest, automatically switching when the iOS system theme changes. Heck, Apple even defines a bunch of built-in ones, like “label” and “secondaryLabel”, so you likely don’t even have to define your own colors.

    The code defining, say, a custom blue accent/tint color for your app looks basically like:

if lightMode {
    // A rich blue
    return UIColor(hexcode: "007aff")
} else {
    // A little brighter blue to show up on dark backgrounds
    return UIColor(hexcode: "4BA1FF")
}

    (For a thorough explanation of this system, NSHipster has a great article.)
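In real code, this kind of two-mode color is expressed with UIColor’s trait-based initializer. Here’s a minimal sketch using the standard UIColor(dynamicProvider:) API (the hexcode: initializer above is a custom helper, so this sketch uses plain RGB components instead):

```swift
import UIKit

// A sketch of the modern dynamic color API (iOS 13+). iOS calls this closure
// whenever it needs to resolve the color for the current light/dark appearance.
let accentColor = UIColor { traitCollection in
    if traitCollection.userInterfaceStyle == .dark {
        // A little brighter blue to show up on dark backgrounds (4BA1FF)
        return UIColor(red: 0x4B / 255.0, green: 0xA1 / 255.0, blue: 0xFF / 255.0, alpha: 1.0)
    } else {
        // A rich blue (007aff)
        return UIColor(red: 0x00 / 255.0, green: 0x7A / 255.0, blue: 0xFF / 255.0, alpha: 1.0)
    }
}
```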

    The Problem

This quickly falls apart when you introduce theming. Maybe blue is a safe bet as your app’s “button color” for 95% of users, but a subset are going to want to make that more personal. Maybe a mint color? A pink! If we’ve learned anything through the craze of apps like Widgetsmith, people love to make things their own.

    But wait, how do we do this when the system is built around only having two options: one for light mode, and one for dark? We might want to have a “Mint” theme, with a delightful green tint instead.

    Perhaps something like this?

if lightMode {
    if mintSelected {
        // Minty!
        return UIColor(hexcode: "26C472")
    } else {
        // A rich blue
        return UIColor(hexcode: "007aff")
    }
} else {
    if mintSelected {
        // Dark mode minty!
        return UIColor(hexcode: "84FFBF")
    } else {
        // A little brighter blue to show up on dark backgrounds
        return UIColor(hexcode: "4BA1FF")
    }
}

Beautiful! This actually works super well: if we start up our app, iOS will see that mint is selected and choose the mint colors instead.

    However, there’s a serious catch. If the app started up in normal (AKA non-minty) mode, and the user selects the mint theme at some point, iOS kinda looks the other way and ignores the change, sticking with blue instead. The conversation kinda goes like:

    Me: Hey iOS! The theme is minty now, blue is so last season. Can you update those buttons to mint-colored?

    iOS: Well, I asked earlier and you said blue. No take backs. The paint is dry, no updates allowed.

    Me: But if the user changes the device theme to dark mode, you’ll happily update the colors! Could you just do that same thing now for me?

    iOS: Hard pass.

    Me: But the header file for hasDifferentColorAppearanceComparedToTraitCollection even says changes in certain traits could affect the dynamic colors, could you just wrap what those changes call into a general function?

    iOS: I said no take backs! But let’s together hope one of the awesome folks who works on me adds that in my next major version!

    So what do you do? Have the user force-quit the app and relaunch every time they want to change the theme? That’s not very Apple-y. Reinitialize the app’s view hierarchy? That can mess with lots of things like active keyboards.

    Let’s Go Back in Time

Remember how I said that in the pre-iOS 12 days, when iOS didn’t even have a dark mode, developers had to get a bit more inventive? Apollo’s theming system was actually written way back then, so I’m pretty familiar with it! Basically how it works is you don’t talk to iOS like above, instead you talk to each view on screen directly. Cut out the middleman!

    Leveraging something like NSNotificationCenter (or a more type-safe version via NSHashTable with weak object references) you’d basically go to each view you wanted to color, and say “Hey, you’re blue now, but why don’t you give me your phone number so if anything changes I’ll let you know?” and you’d register that view. Then when the user asked to go to dark mode, you’d quickly phone up all the views in the app and say “Change! Now! Green!” and they would all do that.

    The beauty is that when you “phone them up”, you can tell them any color under the sun! You have full control!

    Here’s a quick example of what this might look like, somewhat based on how I do it in Apollo:

protocol Themeable: AnyObject {
    func applyTheme(theme: Theme)
}

enum ColorScheme {
    case `default`, pumpkin
}

struct Theme {
    let isLightModeActive: Bool
    let colorScheme: ColorScheme

    var backgroundColor: UIColor {
        switch colorScheme {
        case .default:
            return isLightModeActive ? UIColor(hexcode: "ffffff") : UIColor(hexcode: "000000")
        case .pumpkin:
            return isLightModeActive ? UIColor(hexcode: "ff6700") : UIColor(hexcode: "733105")
        }
    }

    // Add more colors for things like tintColor, textColor, separators, inactive states, etc.
}

class ThemeManager: NSObject {
    static let shared = ThemeManager()

    var currentTheme: Theme = // initialize value from UserDefaults or something similar

    private var listeners = NSHashTable<AnyObject>.weakObjects()

    // This would be called by an external event, such as iOS changing or the user selecting a new theme
    func themeChangeDidOccur(toTheme newTheme: Theme) {
        currentTheme = newTheme
        refreshListeners()
    }

    func makeThemeable(_ object: Themeable) {
        listeners.add(object)
        object.applyTheme(theme: currentTheme)
    }

    private func refreshListeners() {
        listeners.allObjects
            .compactMap { $0 as? Themeable }
            .forEach { $0.applyTheme(theme: currentTheme) }
    }
}

// Do this in every view controller/view:
class IceCreamViewController: UIViewController, Themeable {
    let leftBarButtonItem = UIBarButtonItem(title: "Accounts")

    override func viewDidLoad() {
        super.viewDidLoad()
        ThemeManager.shared.makeThemeable(self)
    }

    func applyTheme(theme: Theme) {
        // e.g.:
        leftBarButtonItem.tintColor = theme.tintColor
    }
}

So this works but has a lot of downsides. For one, it’s a lot harder. Rather than just setting view.textColor = appTextColor in a single call and having it automatically switch between the light and dark mode colors you defined as needed, you have to set the color, register the view, have a separate theming function, and then go back and talk to that view whenever anything changes. A lot more arduous in comparison.

There are other aspects to consider as well. Because iOS is smart, when an app goes into the background, iOS quickly takes a screenshot of the app to show in the app switcher, but it also quickly toggles the app to the opposite theme (so dark mode if the system is in light mode) and takes a screenshot of that as well, so if the system theme changes iOS can instantly update the screenshot in the app switcher.

The result of this is that iOS rapidly asks your app to change its theme twice in a row (to the opposite theme, and then back to normal). If you don’t do this quickly, you’re in trouble. Indeed, it’s one of my top crashers as of iOS 15, and I assume it’s because I use this old method of talking to every single view to update, whereas iOS uses a more efficient method under the hood.

    You also hit speed bumps you don’t really think of when you start out. For instance, say parts of your app support Markdown rendering where links embedded in a block of text reflect a specific theme’s tint color. When the theme changes, with this system you get that notification, and what do you do? Recompute the NSAttributedString each time you get a theme change? Perhaps only do it the first time, cache the result, and then on theme change iterate over that specific attribute and update only those attributes to the new color. You know what’s a lot nicer than all that rigamarole each time? Just setting the dynamic color in your Markdown renderer/attributed string once, and having iOS handle all the color changes like in the newer solution.

    So as you may have guessed I’ve been meaning to update my old system to this newer one. (Wonder why I was writing this blog post?)

    (For a thorough writeup on this kind of system, the SoundCloud Developer Blog has a great article, and Joe Fabisevich also has a really cool variation based on Combine.)


SwiftUI is new and really exciting, and something I’m looking forward to using more in my app. The tricky thing with this antiquated solution is that it doesn’t work too well with SwiftUI: subscribing everything to NotificationCenter calls and callbacks isn’t exactly very SwiftUI-esque, ruins a lot of the elegance of creating views in SwiftUI, and at best adds a lot of boilerplate.

So if the old system isn’t great, what about the newer, post-iOS 12 dynamic color one? While SwiftUI has its own Color object, which (I believe) unlike UIColor lacks support for custom dynamic colors, you can initialize a Color object with a UIColor and SwiftUI will dynamically update when light/dark mode changes occur, just like UIKit! Which makes the “newer” solution a lot nicer, as it works well in both “worlds”.
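For instance, here’s a sketch of that bridging (UIColor(dynamicProvider:) and Color(UIColor) are standard APIs; the view itself is just illustrative):

```swift
import SwiftUI
import UIKit

// A dynamic UIKit color: iOS resolves the closure per light/dark appearance
let themedBackground = UIColor { traitCollection in
    traitCollection.userInterfaceStyle == .dark ? UIColor.black : UIColor.white
}

struct ThemedView: View {
    var body: some View {
        Text("Hello!")
            .padding()
            // Wrapping the UIColor keeps the dynamic behavior in SwiftUI,
            // so this background follows light/dark mode automatically
            .background(Color(themedBackground))
    }
}
```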

    What Would be the Perfect Solution from Apple?

    The perfect solution would be Apple simply having a method like UIApplication.shared.refreshUserInterfaceStyle() that performs the same thing that occurs when iOS switches from light mode to dark mode. In that situation, there’s a code path/method on iOS that says “Hey app, update all your colors, things have changed”, and simply making it so app developers could call that on their own app would make everything perfect. Theme changes would redraw as requested, no having to force-quit or talk to each and every view manually, and it would work nicely with SwiftUI! (Apple folks: FB9887856)


    In the absence of that method (fingers crossed for iOS 16!), can we make our own method that accomplishes effectively the same thing? An app color refresh? Well, there’s a couple ways!

• Martin Rechsteiner mentioned a clever way on Twitter, wherein you change the app’s displayed color gamut. Since the color profile of the entire app is changing, iOS will indeed update all the colors. The downside is, well, you’re changing the app’s color gamut from, say, P3 to sRGB, which can presumably have some effect on how colors look. It shouldn’t be super obvious, since from what I can tell UIImageViews and whatnot have their embedded color profiles handled separately from the app, so pictures should still display correctly. But it’s still suboptimal. You could always immediately switch back to the previous color gamut after, but that has the problems of solution 2.
• If you’re in light mode, set overrideUserInterfaceStyle to dark mode on the app’s UIWindow, and then change it back (or vice-versa). The downside here is that if you do it in the same pass of the runloop, colors will update but traitCollectionDidChange won’t fire in the relevant view controllers, which may be important for things like CALayer updates. You can dispatch the second call to the next loop with good ol’ DispatchQueue.main.async { ... }, but then traitCollectionDidChange will be called twice, and unless you do a bit more work the screen will have a quick flash as it jumps between light and dark mode very quickly.

Of the two, I think I prefer the second solution slightly. Even though it calls the method twice, and flashes a bit, you can negate the flash by putting a view overtop the main window (say, a snapshot from immediately before that pleasantly fades to the new theme), and traitCollectionDidChange being called twice likely isn’t much of a concern.
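Sketched out, that second approach could look something like this (the function name is mine, and it assumes you have a reference to the app’s window):

```swift
import UIKit

// Force iOS to re-resolve every dynamic color by briefly flipping the window's
// appearance override, then restoring it on the next runloop pass
func forceColorRefresh(in window: UIWindow) {
    let isCurrentlyDark = window.traitCollection.userInterfaceStyle == .dark
    // Flip to the opposite style so every dynamic color re-resolves...
    window.overrideUserInterfaceStyle = isCurrentlyDark ? .light : .dark
    // ...then restore on the next runloop pass. This second change is what
    // causes traitCollectionDidChange to fire twice and the potential flash.
    DispatchQueue.main.async {
        window.overrideUserInterfaceStyle = .unspecified
    }
}
```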

    Put Those Two Together? PB & J Sandwich?

    Another solution would be to take parts of both systems that work and put them together: use dynamic colors for 97% of the heavy lifting, but when a color has to change immediately in response to a user changing themes, then you use the “notify all the views in the app manually” method. This would likely be fine when going into the background and snapshotting, because that would use dynamic colors, and the “notifying all the views” would only occur when the app is in the foreground with the user manually changing the theme.

    Still, I don’t really like that we have to have a separate system maintained where we have to keep track of every view in the app that might need a color change, for the 3% of the time the user might change the theme. That’s a lot of boilerplate and excess code for something that could simply be handled by a refresh method on UIApplication. (And yes, you could say “if it’s that rare, just have them force quit the app or something else gross”, but you want the user to be able to quickly preview different themes without a ton of friction in between.)

    So all in all, I think I’m going to go with the overrideUserInterfaceStyle kinda hack, and hope iOS 16 sees a proper, built-in way to refresh the app’s colors. But if you have a better solution I’m all ears, hit me up on Twitter!

  • Table of Contents Selector View

    April 25, 2021

    I wrote a new little view for a future version of Apollo that makes some changes to the default iOS version (that seems to be a weird trend in my recent programming, despite me loving built-in components). Here’s some details about it! It’s also available as a library on GitHub if you’re interested!

    Are you familiar with UITableView’s sectionIndexTitles API? The little alphabet on the side of some tables for quickly jumping to sections? Here’s a tutorial if you’re unfamiliar.
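(For reference, the built-in version amounts to implementing a single data source method:)

```swift
import UIKit

// The built-in UITableView section index: return one title per section,
// and tapping/dragging the index on the right jumps to the matching section
class ContactsViewController: UITableViewController {
    override func sectionIndexTitles(for tableView: UITableView) -> [String]? {
        return ["A", "B", "C", "#"]
    }
}
```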

    This is a view very similar to that (very little in the way of originality here, folks) but offers a few nice changes I was looking for, so I thought I’d open source it in case anyone else wanted it too.


    The UITableView API is great, and you should try to stick with built-in components when you can avoid adding in unnecessary dependencies. That being said, here are the advantages this brought me:

    • πŸ‡ Symbols support! SF Symbols are so pretty, and sometimes a section in your table doesn’t map nicely to a letter. Maybe you have some quick actions that you could represent with a lightning bolt or bunny!
    • 🌠 Optional overlay support. I really liked on my old iPod nano how when you scrolled really quickly an a big overlay jumped up with the current alphabetical section you were in so you could quickly see where you were. Well, added!
    • πŸ– Delayed gesture activation to reduce gesture conflict. For my app, an issue I had was that I had an optional swipe gesture that could occur from the right side of the screen. Whenever a user activated that gesture, it would also activate the section index titles and jump everywhere. This view requires the user long-press it to begin interacting. No conflicts!
    • πŸ› Not tied to sections. If you have a less straight forward data structure for your table, where maybe you want to be able to jump to multiple specific items within a section, this doesn’t require every index to be a section. Just respond to the delegate and you can do whatever you want.
    • πŸ“ Not tied to tables. Heck, you don’t even have to use this with tables at all. If you want to overlay it in the middle of a UIImageView and each index screams a different Celine Dion song, go for it.
    • πŸ‚ Let’s be honest, a slightly better name. The Apple engineers created a beautiful API but I can never remember what it’s called to Google. sectionIndexTitles doesn’t roll off the tongue.
    • 🌝 Haha moon emoji

    How to Install

    No package managers here. Just drag and drop TableOfContentsSelector.swift into your Xcode project. You own this code now. You have to raise it as your own.

    How to Use

    Create your view.

    let tableOfContentsSelector = TableOfContentsSelector()

    (Optional: set a font. Supports increasing and decreasing font for accessibility purposes)

    tableOfContentsSelector.font = UIFont.systemFont(ofSize: 12.0, weight: .semibold) // Default

The table of contents needs to know the height it’s working with in order to lay itself out properly, so let it know what it should be:

    tableOfContentsSelector.frame.size.height = view.bounds.height

Set up your items. The items in the model are represented by the TableOfContentsItem enum, which supports either a letter case (.letter("A")) or a symbol case (.symbol(name: "symbol-sloth", isCustom: true)), which can also be a custom SF Symbol that you created yourself and imported into your project. As a helper, there’s a variable called TableOfContentsSelector.alphanumericItems that supplies A–Z plus # just as the UITableView API does.

let tableOfContentsItems: [TableOfContentsItem] = [
    .symbol(name: "star", isCustom: false),
    .symbol(name: "house", isCustom: false),
    .symbol(name: "symbol-sloth", isCustom: true)
] + TableOfContentsSelector.alphanumericItems

At this point, add it as a subview and position it how you see fit. You can use sizeThatFits to get the proper width as well.
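For example, something like this (the layout specifics are just one hypothetical way to do it):

```swift
import UIKit

class ExampleViewController: UIViewController {
    let tableOfContentsSelector = TableOfContentsSelector()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Ask the view for its natural width with sizeThatFits, then pin it
        // to the trailing edge of the screen
        let fittedSize = tableOfContentsSelector.sizeThatFits(view.bounds.size)
        tableOfContentsSelector.frame = CGRect(x: view.bounds.width - fittedSize.width,
                                               y: 0.0,
                                               width: fittedSize.width,
                                               height: view.bounds.height)
        view.addSubview(tableOfContentsSelector)
    }
}
```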

    Lastly, implement the delegate methods so you can find out what’s going on.

func viewToShowOverlayIn() -> UIView? {
    return self.view
}

func selectedItem(_ item: TableOfContentsItem) {
    // You probably want to do something with the selection! :D
}

func beganSelection() {}
func endedSelection() {}

    That’s it! If you’re curious, internally it’s just a single UILabel with a big ol’ attributed string. Hope you enjoy!

  • More Efficient/Faster Average Color of Image

    April 2, 2021

Skip to the ‘Juicy Code 🧃’ section if you just want the code and don’t care about the preamble of why you might want this!

Finding the average color of an image is a nice trick to have in your toolbelt for spicing up views. For instance on iOS, it’s used by Apple to make their pretty home screen widgets, where they put the average color of the image behind the text so the text is more readable. Here’s Apple’s News widget, and my Apollo widget, for instance:

    News and Apollo widgets on home screen

    Core Image Approach Pitfalls

    There’s lots of articles out there on how to do this on iOS, but all of the code I’ve encountered accomplishes it with Core Image. Something like the following makes it really easy:

func coreImageAverageColor() -> UIColor? {
    // Shrink down a bit first
    let aspectRatio = self.size.width / self.size.height
    let resizeSize = CGSize(width: 40.0, height: 40.0 / aspectRatio)
    let renderer = UIGraphicsImageRenderer(size: resizeSize)
    let baseImage = self
    let resizedImage = renderer.image { (context) in
        baseImage.draw(in: CGRect(origin: .zero, size: resizeSize))
    }

    // Core Image land!
    guard let inputImage = CIImage(image: resizedImage) else { return nil }
    let extentVector = CIVector(x: inputImage.extent.origin.x, y: inputImage.extent.origin.y, z: inputImage.extent.size.width, w: inputImage.extent.size.height)

    guard let filter = CIFilter(name: "CIAreaAverage", parameters: [kCIInputImageKey: inputImage, kCIInputExtentKey: extentVector]) else { return nil }
    guard let outputImage = filter.outputImage else { return nil }

    var bitmap = [UInt8](repeating: 0, count: 4)
    let context = CIContext(options: [.workingColorSpace: kCFNull as Any])
    context.render(outputImage, toBitmap: &bitmap, rowBytes: 4, bounds: CGRect(x: 0, y: 0, width: 1, height: 1), format: .RGBA8, colorSpace: nil)

    return UIColor(red: CGFloat(bitmap[0]) / 255, green: CGFloat(bitmap[1]) / 255, blue: CGFloat(bitmap[2]) / 255, alpha: CGFloat(bitmap[3]) / 255)
}

    Core Image is a great framework capable of some insanely powerful things, but in my experience isn’t optimal for something as simple as finding the average color of an image because it takes up quite a bit more memory and time, things that you don’t have a lot of when creating widgets. That or I don’t know enough about Core Image (it’s a substantial framework!) to figure out how to optimize the above code (which is entirely possible, but hey the other solution is easier to understand, I think).

You have around 30 MB of headroom with widgets, and from my tests the normal Core Image filter way was taking about 5 MB of memory just for the calculation. That’s about 17% of the total memory you get for the entire widget for a single operation, which could really hurt you if you’re up close to the limit. And you don’t want to break that 30 MB limit if you can avoid it; from what I can see, iOS (understandably) penalizes you for it, and repeated offenses mean your widget doesn’t get updated as often.

    I’m no Core Image expert, but I’m guessing since it’s this super powerful GPU-based framework the memory consumption seems inconsequential when you’re doing crazy realtime image filters or something. But who knows, I’m just going off measurements.

    You can see in Xcode’s memory debugger very clearly when Core Image kicks in for instance, causing a little spike, and almost more concerning is that it doesn’t seem to normalize back down any time soon.

Memory use before, spike

    (That might not be the most egregious example. It can be worse.)

    Just Iterating Over Pixels Approach

An easy approach would just be to iterate over every pixel in the image, add up all their colors, then average them. The downside is there could be a lot of pixels (think of a 4K image), but thankfully we can just resize the image down a bunch first (fast), and the “gist” of the color information will be preserved while leaving us far fewer pixels to deal with.

One other catch is that just ‘iterating over the pixels’ isn’t as easy as it sounds when the image you’re dealing with could be in a variety of different formats (CMYK, RGBA, ARGB, BBQ, etc.). I came across a great answer on StackOverflow that linked to an Apple Technical Q&A that recommended just drawing out the image anew in a standard format you can always trust, so that solves that.

Lastly, there’s some debate over which algorithm is best for averaging out all the colors in an image. Here’s a very interesting blog post that talks about how a sum of squares approach could be considered better. Through a bunch of tests, I see how it could be when approximating a bunch of color blocks of a larger image, but the ‘simpler’ way of just summing seems to have better color results, and more closely mimics Core Image’s results. The code below includes both options, and I’ll include a comparison table so you can choose for yourself.
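To make the difference concrete, here’s a toy single-channel comparison of the two strategies (just the math, not the image code; the values are made up to exaggerate the difference):

```swift
import Foundation

// Toy single-channel example; real pixel values are in 0...255
let channelValues: [Double] = [10, 200, 30]

// 'Simple': plain arithmetic mean
let simpleAverage = channelValues.reduce(0, +) / Double(channelValues.count)

// 'Sum of squares': average the squared values, then take the square root.
// Bright values get weighted more heavily this way.
let squaredSum = channelValues.map { $0 * $0 }.reduce(0, +)
let squareRootAverage = (squaredSum / Double(channelValues.count)).squareRoot()

print(simpleAverage)     // 80.0
print(squareRootAverage) // ~116.9, pulled toward the bright 200
```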

The Juicy Code 🧃

Here’s the code I landed on; feel free to change it as you see fit. I like to keep lots of comments in so that if I come back to it later I can understand what’s going on, especially when it’s dealing with bitmasking and color profile bit structures and whatnot, which I don’t use often in my day-to-day and which require a bit of rejogging of the Computer Sciencey part of my brain. It’s really pretty simple once you read it over.

extension UIImage {
    /// There are two main ways to get the color from an image, just a simple "sum up an average" or by squaring their sums. Each has their advantages, but the 'simple' option *seems* better for average color of entire image and closely mirrors CoreImage. Details: https://sighack.com/post/averaging-rgb-colors-the-right-way
    enum AverageColorAlgorithm {
        case simple
        case squareRoot
    }

    func findAverageColor(algorithm: AverageColorAlgorithm = .simple) -> UIColor? {
        guard let cgImage = cgImage else { return nil }

        // First, resize the image. We do this for two reasons, 1) less pixels to deal with means faster calculation and a resized image still has the "gist" of the colors, and 2) the image we're dealing with may come in any of a variety of color formats (CMYK, ARGB, RGBA, etc.) which complicates things, and redrawing it normalizes that into a base color format we can deal with.
        // 40x40 is a good size to resize to still preserve quite a bit of detail but not have too many pixels to deal with. Aspect ratio is irrelevant for just finding average color.
        let size = CGSize(width: 40, height: 40)

        let width = Int(size.width)
        let height = Int(size.height)
        let totalPixels = width * height

        let colorSpace = CGColorSpaceCreateDeviceRGB()

        // ARGB format
        let bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue

        // 8 bits for each color channel, we're doing ARGB so 32 bits (4 bytes) total, and thus if the image is n pixels wide, and has 4 bytes per pixel, the total bytes per row is 4n. That gives us 2^8 = 256 color variations for each RGB channel or 256 * 256 * 256 = ~16.7M color options in total. That seems like a lot, but lots of HDR movies are in 10 bit, which is (2^10)^3 = 1 billion color options!
        guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: 8, bytesPerRow: width * 4, space: colorSpace, bitmapInfo: bitmapInfo) else { return nil }

        // Draw our resized image
        context.draw(cgImage, in: CGRect(origin: .zero, size: size))

        guard let pixelBuffer = context.data else { return nil }

        // Bind the pixel buffer's memory location to a pointer we can use/access
        let pointer = pixelBuffer.bindMemory(to: UInt32.self, capacity: width * height)

        // Keep track of total colors (note: we don't care about alpha and will always assume alpha of 1, AKA opaque)
        var totalRed = 0
        var totalBlue = 0
        var totalGreen = 0

        // Column of pixels in image
        for x in 0 ..< width {
            // Row of pixels in image
            for y in 0 ..< height {
                // To get the pixel location just think of the image as a grid of pixels, but stored as one long row rather than columns and rows, so for instance to map the pixel from the grid in the 15th row and 3 columns in to our "long row", we'd offset ourselves 15 times the width in pixels of the image, and then offset by the amount of columns
                let pixel = pointer[(y * width) + x]

                let r = red(for: pixel)
                let g = green(for: pixel)
                let b = blue(for: pixel)

                switch algorithm {
                case .simple:
                    totalRed += Int(r)
                    totalBlue += Int(b)
                    totalGreen += Int(g)
                case .squareRoot:
                    totalRed += Int(pow(CGFloat(r), CGFloat(2)))
                    totalGreen += Int(pow(CGFloat(g), CGFloat(2)))
                    totalBlue += Int(pow(CGFloat(b), CGFloat(2)))
                }
            }
        }

        let averageRed: CGFloat
        let averageGreen: CGFloat
        let averageBlue: CGFloat

        switch algorithm {
        case .simple:
            averageRed = CGFloat(totalRed) / CGFloat(totalPixels)
            averageGreen = CGFloat(totalGreen) / CGFloat(totalPixels)
            averageBlue = CGFloat(totalBlue) / CGFloat(totalPixels)
        case .squareRoot:
            averageRed = sqrt(CGFloat(totalRed) / CGFloat(totalPixels))
            averageGreen = sqrt(CGFloat(totalGreen) / CGFloat(totalPixels))
            averageBlue = sqrt(CGFloat(totalBlue) / CGFloat(totalPixels))
        }

        // Convert from [0 ... 255] format to the [0 ... 1.0] format UIColor wants
        return UIColor(red: averageRed / 255.0, green: averageGreen / 255.0, blue: averageBlue / 255.0, alpha: 1.0)
    }

    private func red(for pixelData: UInt32) -> UInt8 {
        // For a quick primer on bit shifting and what we're doing here, in our ARGB color format image each pixel's colors are stored as a 32 bit integer, with 8 bits per color channel (A, R, G, and B).
        // So a pure red color would look like this in bits in our format, all red, no blue, no green, and 'who cares' alpha:
        // 11111111 11111111 00000000 00000000
        //  ^alpha   ^red     ^blue    ^green
        // We want to grab only the red channel in this case, we don't care about alpha, blue, or green. So we want to shift the red bits all the way to the right in order to have them in the right position (we're storing colors as 8 bits, so we need the right most 8 bits to be the red). Red is 16 points from the right, so we shift it by 16 (for the other colors, we shift less, as shown below).
        // Just shifting would give us:
        // 00000000 00000000 11111111 11111111
        //  ^alpha   ^red     ^blue    ^green
        // The alpha got pulled over which we don't want or care about, so we need to get rid of it. We can do that with the bitwise AND operator (&) which compares bits and only keeps a 1 if both bits being compared are 1s. So we're basically using it as a gate to only let the bits we want through. 255 (below) is the value we're using as in binary it's 11111111 (or in 32 bit, it's 00000000 00000000 00000000 11111111) and the result of the bitwise operation is then:
        // 00000000 00000000 11111111 11111111
        // 00000000 00000000 00000000 11111111
        // -----------------------------------
        // 00000000 00000000 00000000 11111111
        // So as you can see, it only keeps the last 8 bits and 0s out the rest, which is what we want! Woohoo! (It isn't too exciting in this scenario, but if it wasn't pure red and was instead a red of value "11010010" for instance, it would also mirror that down)
        return UInt8((pixelData >> 16) & 255)
    }

    private func green(for pixelData: UInt32) -> UInt8 {
        return UInt8((pixelData >> 8) & 255)
    }

    private func blue(for pixelData: UInt32) -> UInt8 {
        return UInt8((pixelData >> 0) & 255)
    }
}

    The Results

Memory use after, no spike

As you can see, we don’t see any memory spike whatsoever from the call. Yay! If anything, it kinda dips a bit. Did we find the secret to infinite memory?

    In terms of speed, it’s also about 4x faster. The Core Image approach takes about 0.41 seconds on a variety of test images, whereas the ‘Just Iterating Over Pixels’ approach (I need a catchier name) only takes 0.09 seconds.

    These tests were done on an iPhone 6s, which I like as a test device because it’s the oldest iPhone that still supports iOS 13/14.

    Comparison of Colors

    Lastly, here’s a quick comparison chart showing the differences between the ‘simple’ summing algorithm, the ‘sum of squares’ algorithm, and the Core Image filter. As you can see, especially for the second flowery image, the ‘simple/sum’ approach seems to have the most desirable results and closely mirrors Core Image.

    Comparison of average colors from Simple, Squared, and Core Image

    Okay, that’s all I got! Have fun with colors!

  • Trials and Tribulations of Making an Interruptable Custom View Controller Transition on iOS

    February 19, 2021

    I think it’s safe to say that while the iOS custom view controller transition API is a very powerful one, with that power comes a great deal of complexity. It can be tricky, and I’m having one of those days where it’s getting the better of me and I just cannot get it to do what I want, even though what I want seems pretty straightforward: interruptible/cancellable custom view controller transitions.

    What I Want

    I built a little library called ChidoriMenu that effectively just reimplements iOS 14’s Pull Down Menus as a custom view controller for added flexibility.

    As it always goes, 99% of it went smoothly as could be, but then I was playing around in the Simulator with Apple’s version, and noticed with Apple’s you could tap outside the menu while it was being presented to cancel the presentation and it would smoothly retract. With mine, you have to wait for the animation to finish before dismissing. 0.4 seconds can be a long time. I NEED IT. The fluidity/cancellability of iOS’ animations is one of the most fun parts of the operating system, and a big reason the iPhone X’s swipe up to go home feels so nice.

    Here is Apple’s with Toggle Slow Animations enabled to better illustrate how you can interrupt/cancel it.

    How I Implemented My Menu

    Mine’s pretty simple. Just a custom view controller presentation that is non-interactive, using an animation controller and a UIPresentationController subclass. You just tap to summon the menu, and tap away to close it, not really anything interactive, and virtually every tutorial on the web about interactive view controller transitions has “the interaction” being driven by something like a UIPanGestureRecognizer, so it didn’t seem really needed in this case. So it’s just an animation controller that animates it on and off screen.
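
    For context, the standard UIKit wiring for that kind of setup looks roughly like the following. (This is a sketch: the type names ChidoriAnimationController and ChidoriPresentationController are illustrative stand-ins, not necessarily the library’s exact internals.)

```swift
import UIKit

// Sketch of a transitioning delegate vending the custom animation
// controller and UIPresentationController subclass described above
extension ChidoriMenu: UIViewControllerTransitioningDelegate {
    func animationController(forPresented presented: UIViewController, presenting: UIViewController, source: UIViewController) -> UIViewControllerAnimatedTransitioning? {
        return ChidoriAnimationController(type: .presentation)
    }

    func animationController(forDismissed dismissed: UIViewController) -> UIViewControllerAnimatedTransitioning? {
        return ChidoriAnimationController(type: .dismissal)
    }

    func presentationController(forPresented presented: UIViewController, presenting: UIViewController?, source: UIViewController) -> UIPresentationController? {
        return ChidoriPresentationController(presentedViewController: presented, presenting: presenting)
    }
}
```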

    Catch #1

    Well, how do I make this interruptible? Say I manually set the animation duration to 10 seconds, and then programmatically dismiss it 2 seconds after it starts as a test.

    let tappedPoint = tapGestureRecognizer.location(in: view)
    let chidoriMenu = ChidoriMenu(menu: existingMenu, summonPoint: tappedPoint)
    present(chidoriMenu, animated: true, completion: nil)
    DispatchQueue.main.asyncAfter(deadline: .now() + .seconds(2)) {
        chidoriMenu.dismiss(animated: true, completion: nil)
    }

    No dice. It queues up the dismissal and it occurs at the 10 second mark, right after the animation concludes. Not exactly interrupting anything.

    Okay, let’s see. Bruce Nilo and Michael Turner of the UIKit team did a great talk at WWDC 2016 about view controller transitions and making them interruptible.

    The animation is powered by UIViewPropertyAnimator, and they mention that in iOS 10 they added a method called interruptibleAnimator(using:), wherein you return your animator as a means for the transition to be interruptible. They even state the following at the 25:40 point:

    If you do not implement the interaction controller, meaning you only implement a custom animation controller, then you need to implement animateTransition. And you would do so very simply, like this method. You take the interruptible animator that you would return and you would basically tell it to start.

    Which sounds great, as mine is just a normal, non-interactive animation controller. Let’s do that!

    var animatorForCurrentSession: UIViewPropertyAnimator?

    func interruptibleAnimator(using transitionContext: UIViewControllerContextTransitioning) -> UIViewImplicitlyAnimating {
        // Required to use the same animator for the life of the transition, so don't create it multiple times
        if let animatorForCurrentSession = animatorForCurrentSession {
            return animatorForCurrentSession
        }

        let propertyAnimator = UIViewPropertyAnimator(duration: transitionDuration(using: transitionContext), dampingRatio: 0.75)
        propertyAnimator.isInterruptible = true
        propertyAnimator.isUserInteractionEnabled = true

        // ... animation set up goes here ...

        // Animate! 🪄
        propertyAnimator.addAnimations {
            chidoriMenu.view.transform = finalTransform
            chidoriMenu.view.alpha = finalAlpha
        }

        propertyAnimator.addCompletion { (position) in
            guard position == .end else { return }
            self.animatorForCurrentSession = nil
        }

        self.animatorForCurrentSession = propertyAnimator
        return propertyAnimator
    }

    func animateTransition(using transitionContext: UIViewControllerContextTransitioning) {
        let interruptableAnimator = interruptibleAnimator(using: transitionContext)

        if type == .presentation {
            if let chidoriMenu: ChidoriMenu = transitionContext.viewController(forKey: UITransitionContextViewControllerKey.to) as? ChidoriMenu {
                transitionContext.containerView.addSubview(chidoriMenu.view)
            }
        }

        interruptableAnimator.startAnimation()
    }

    However, it still doesn’t interrupt it at the 2 second point, still opting to wait until the 10 second mark when the animation completes. It calls the method, but it’s still not interruptible. I tried intercepting the dismiss call and manually calling .isReversed = true on the property animator, but it still waits 10 seconds before the completion handler is called.

    After that above quote, they then state “However, we kind of advise that you use an interaction controller if you’re going to make it interruptible.” so I’m going to keep that in mind.

    Catch #2

    Even if the above did work, it has to be powered by a user tapping outside the menu to close it. This is accomplished in my UIPresentationController subclass by adding a tap gesture recognizer to a background view, which then calls dismiss upon being tapped.

    override func presentationWillBegin() {
        darkOverlayView.backgroundColor = UIColor(white: 0.0, alpha: 0.2)
        presentingViewController.view.tintAdjustmentMode = .dimmed
        tapGestureRecognizer.addTarget(self, action: #selector(tappedDarkOverlayView(tapGestureRecognizer:)))
    }

    @objc private func tappedDarkOverlayView(tapGestureRecognizer: UITapGestureRecognizer) {
        presentedViewController.dismiss(animated: true, completion: nil)
    }

    Problem is, no taps register until the animation completes either. And it’s not an issue with the UITapGestureRecognizer: adding a simple UIButton results in the same behavior, where it only becomes tappable once the animation ends.

    (Note: when switching to an interactive transition below, UIPresentationController becomes freed up and accepts these touches.)

    All Signs Point to Interactive

    Between the advice of the UIKit engineers in the WWDC video, and the fact that it doesn’t seem interactive during the presentation, let’s just bite the bullet and make it an interactive transition. Plus, the WWDC 2013 video on Custom Transitions Using View Controllers states (paraphrasing) that “interactive transitions don’t need to be powered by gestures only, anything iterable works”.

    My issue here is: what is iterating? It’s just a “fire and forget” animation from the tap of a button. Essentially, the API works by incrementing a “progress” value throughout the animation so the custom transition is aware of where you’re at in the transition. For instance, if you’re swiping back to dismiss, it would be a measurement from 0.0 to 1.0 of how close to the left side of the screen you are. There are many examples online, Apple’s included, showing how to implement interactive view controller transitions powered by a UIPanGestureRecognizer, but I’m really having trouble wrapping my head around what is iterating or driving the progress updates here.

    The only thing I could really think of was CADisplayLink (which is basically just an NSTimer synchronized with the refresh rate of the screen, typically firing 60 times per second) that just tracks how long it’s been since the animation started. If it’s a 10 second animation and 5 seconds have passed, you’re 50% done! Here’s an implementation, after I changed my animation controller to be a subclass of UIPercentDrivenInteractiveTransition rather than NSObject:

    var displayLink: CADisplayLink?
    var transitionContext: UIViewControllerContextTransitioning?
    var presentationAnimationTimeStart: CFTimeInterval?

    override func startInteractiveTransition(_ transitionContext: UIViewControllerContextTransitioning) {
        // ...
        self.transitionContext = transitionContext
        self.presentationAnimationTimeStart = CACurrentMediaTime()

        let displayLink = CADisplayLink(target: self, selector: #selector(displayLinkUpdate(displayLink:)))
        self.displayLink = displayLink
        displayLink.add(to: .current, forMode: .common)
    }

    @objc private func displayLinkUpdate(displayLink: CADisplayLink) {
        guard let presentationAnimationTimeStart = presentationAnimationTimeStart else { return }
        let timeSinceAnimationBegan = displayLink.timestamp - presentationAnimationTimeStart
        let progress = CGFloat(timeSinceAnimationBegan / transitionDuration(using: transitionContext))
        self.update(progress) // <-- secret sauce
    }
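
    One detail the snippet above doesn’t handle: the display link keeps firing even after the transition duration has elapsed, so you’d also want to invalidate it and explicitly finish the percent-driven transition once progress hits 100%. A rough sketch, assuming the same properties as above:

```swift
@objc private func displayLinkUpdate(displayLink: CADisplayLink) {
    guard let start = presentationAnimationTimeStart,
          let transitionContext = transitionContext else { return }

    let progress = CGFloat((displayLink.timestamp - start) / transitionDuration(using: transitionContext))

    if progress >= 1.0 {
        // Animation's done: stop ticking and tell UIKit the transition finished
        displayLink.invalidate()
        self.displayLink = nil
        finish()
    } else {
        update(progress)
    }
}
```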

    Again, this seems kinda counterintuitive to me: in our case time powers the animation, and we’re trying to shoehorn it into an interactive progress API by measuring time itself. But hey, if it works, it works.

    But alas, it doesn’t.

    Catch #3

    The issue now is that, once the animation starts, it no longer obeys our custom timing curve. Mimicking Apple’s, we want our view controller to present with a subtle little bounce, rather than a boring, linear animation. But using CADisplayLink to power it results in the animation being shown with a linear curve, despite the interruptible property animator we returned looking like this: UIViewPropertyAnimator(duration: transitionDuration(using: transitionContext), dampingRatio: 0.75). See that damping? That’s springy! I even tried really spelling it out to the UIPercentDrivenInteractiveTransition with self.timingCurve = propertyAnimator.timingParameters. Still no luck.

    But wait, that’s really weird. I use interactive view controller transitions in Apollo to power the custom navigation controller animations, and I distinctly remember it annoyingly following the animation curve during the interactive transition. I specifically had to program around this, because when you’re actually interactive, say following a user’s finger, you need it to be linear so that it follows the finger predictably.

    Okay, so I check out Apollo’s code. Ah ha, I wrote it a few years back, so it uses the older school UIView.animate… rather than UIViewPropertyAnimator. Surely that can’t be it.

    … It was it.

    UIView.animate(withDuration: transitionDuration(using: transitionContext), delay: 0.0, usingSpringWithDamping: 0.75, initialSpringVelocity: 0, options: [.allowUserInteraction, .beginFromCurrentState]) {
        chidoriMenu.view.transform = finalTransform
        chidoriMenu.view.alpha = finalAlpha
    } completion: { (didComplete) in
        if (isPresenting && transitionContext.transitionWasCancelled) || (!isPresenting && !transitionContext.transitionWasCancelled) {
            presentingViewController.view.tintAdjustmentMode = .automatic
        }
    }

    It works if I use the old school UIView.animate APIs in startInteractiveTransition and remove the interruptibleAnimator method, and CADisplayLink perfectly follows the animation curve. Okay, what gives? Implementing interruptibleAnimator was supposed to bridge this gap; there’s even a question on StackOverflow about it, but I suppose that question doesn’t say anything about animation curves. So, a bug maybe?

    End Result

    So I guess that kinda works? But this all feels so hacky. I don’t like CADisplayLink much here: it seems to have a few jitters when dismissing compared to the first solution (only on device, not in the Simulator), and it would be nice to know how to use it with the newer UIViewPropertyAnimator APIs. I get a general “fragile” feeling with my code here that I don’t really want to ship, so I reverted back to the initial, non-interactive solution. (An additional minor thing that might not even be possible: Apple’s also lets you summon another menu while the existing one is dismissing, which my code doesn’t do and I didn’t even realize was possible.) And worst of all, you ask? CADisplayLink means “Toggle Slow Animations” in the Simulator doesn’t work for the animation anymore!

    (Maybe I just need to rebuild Apollo in SwiftUI.)

    Here are some gists showing the two final “solutions”:

    A Call for Help

    If you know your way around the custom view controller transition APIs and have any insight, you’d be my favorite person on the planet. Making animations more interruptible would be a fun skill to learn, I’m just at wit’s end with trying to implement it. I’ve linked the gists in the previous paragraph, and ChidoriMenu in its entirety with the non-interactive implementation is also on GitHub.

    I’m curious if there’s a way to implement it without requiring an interactive transition, but if not, it’d be neat to know if it actually does require CADisplayLink, and if it does, it’d be neat to know what I’m still doing wrong in the above code, haha.

    DMs are open on my Twitter, feel free to reach out (alternatively my email is me@ my domain name).